Science topic

Operations Research - Science topic

A group of techniques developed to apply scientific methods and tools to solve the problems of DECISION MAKING in complex organizations and systems. Operations research searches for optimal solutions in situations of conflicting GOALS and makes use of mathematical models from which solutions for actual problems may be derived. (From Psychiatric Dictionary, 6th ed)
Questions related to Operations Research
  • asked a question related to Operations Research
Question
2 answers
I am exploring benchmarks for the Dynamic Vehicle Routing Problem (DVRP) and want to identify widely recognized datasets or problem sets used in recent research.
I am aware of the 21 problems proposed by Kilby et al. (1998), which were based on classical VRP benchmarks. However, I am looking for additional or more recent examples.
Could anyone recommend other DVRP benchmarks, or point me to resources where these datasets are published? For example, are there repositories, articles, or research papers that focus specifically on DVRP benchmarks?
Any insights or suggestions for expanding my dataset collection would be greatly appreciated!
Relevant answer
Answer
Steftcho P. Dokov Thank you for your answer! However, I’m specifically looking for benchmarks on the Dynamic Vehicle Routing Problem (DVRP), which addresses dynamic changes during operations. The benchmarks in the papers you shared are for the static version of VRP with Time Windows. I truly appreciate you taking the time to respond. Thank you again!
  • asked a question related to Operations Research
Question
1 answer
What is the difference between academic research and operational research?
Relevant answer
Answer
1. Academic Research
Purpose:
  • Knowledge Creation: Academic research aims to generate new knowledge, theories, or insights in a particular field. It seeks to advance understanding and contribute to the body of knowledge.
  • Theoretical Focus: Often focused on theoretical frameworks and models, academic research explores fundamental questions and underlying principles.
Scope:
  • Broad and Deep: This type of research can cover a wide range of topics within a discipline or delve deeply into a specific aspect of a field. It is not always concerned with immediate practical applications.
  • Innovation: Often explores new ideas or technologies without immediate practical constraints or considerations.
Methodology:
  • Systematic and Rigorous: Follows a structured methodology, including hypothesis formulation, literature review, experimentation, and analysis. The methods are designed to ensure validity and reliability.
  • Peer Review: Results are typically published in academic journals and subjected to peer review to validate findings and methodologies.
Outcomes:
  • Publications: Results are disseminated through academic papers, journals, and conferences.
  • Contributions to Theory: Aims to advance theoretical understanding and knowledge in a particular field.
Examples:
  • Basic Science: Research into fundamental principles of physics or mathematics.
  • Social Sciences: Theoretical studies on human behaviour or societal structures.
2. Operational Research
Purpose:
  • Practical Problem Solving: Operational research (OR) focuses on solving practical problems and improving decision-making processes in organizations and industries.
  • Application-Oriented: Aims to apply analytical methods to real-world problems to enhance efficiency, effectiveness, and performance.
Scope:
  • Specific and Applied: Deals with particular problems within organizations or industries, such as logistics, supply chain management, resource allocation, and optimization.
  • Immediate Impact: Emphasizes practical outcomes and solutions that can be implemented to achieve tangible improvements.
Methodology:
  • Applied Analytical Methods: Utilizes mathematical modelling, optimization techniques, simulations, and statistical analysis to address operational problems.
  • Decision Support: Provides tools and models that help decision-makers make informed choices.
Outcomes:
  • Recommendations and Solutions: Results in actionable recommendations and solutions that can be implemented in practice.
  • Reports and Tools: Often presented as reports, decision support tools, or implemented systems.
Examples:
  • Supply Chain Optimization: Designing efficient supply chain networks to minimize costs and improve service levels.
  • Workforce Scheduling: Developing employee schedules to optimize labour costs and meet operational needs.
Key Differences
  1. Objective: Academic Research: Aims to advance knowledge and theory. Operational Research: Aims to solve practical problems and improve operations.
  2. Focus: Academic Research: More theoretical and exploratory. Operational Research: More practical and applied.
  3. Methodology: Academic Research: Structured and organized, focusing on rigour and theoretical contribution. Operational Research: Practical and solution-oriented, using analytical methods to address specific issues.
  4. Outcomes: Academic Research: Publications and theoretical advancements. Operational Research: Practical solutions and decision support tools.
In summary, while academic research seeks to expand theoretical knowledge and understanding, operational research focuses on applying analytical techniques to solve practical problems and improve organizational performance. Both types of research play crucial roles in advancing knowledge and enhancing real-world applications.
  • asked a question related to Operations Research
Question
2 answers
Is there a universally accepted set of benchmarks for the Dynamic Vehicle Routing Problem (DVRP) like there are ones for the Traveling Salesman Problem and the Vehicle Routing Problem?
I'm interested in Capacitated DVRP and DVRP with Time Windows.
I know Kilby et al. (1998) proposed 21 problems based on VRP benchmarks, but I'm looking for more examples.
Relevant answer
Answer
Supposedly, there is a website by Pankratz and Krypczyk with an updated list of publicly available instance sets for dynamic vehicle routing problems,
cited as "Pankratz, G. and Krypczyk, V. (2009). Benchmark data sets for dynamic vehicle routing problems"
but the link provided in papers doesn't seem to work (http://www.fernuni-hagen.de/WINF/inhalte/benchmark_data.htm)
  • asked a question related to Operations Research
Question
1 answer
I have a bi-level max-min problem. The lower level is a single-level linear minimization program and is always feasible, so through strong duality we can obtain a single-level maximization.
I can't understand the dual problem in the main paper (Section 4.2), so I derived the dual of the lower level on my own, and it differs from the one in the paper.
I have uploaded the primal problem and my dual problem. Please let me know if my dual problem is wrong, or explain the dual problem given in Section 4.2 of the main paper.
Relevant answer
Answer
Just consult "Conjugate Duality and Optimization" by R.T. Rockafellar for an extremely clear treatment of duality. The important message of this booklet: "the" dual problem does not exist. It all depends on the embedding of your primal problem in a family of perturbed problems, and the kind of perturbation is up to you.
  • asked a question related to Operations Research
Question
2 answers
I want to know whether linear programming models and network analysis can be used in irrigation modeling.
Relevant answer
Answer
However, this one (the paper attached to this answer) is certainly a really new and advanced paper considering water allocation issues.
  • asked a question related to Operations Research
Question
2 answers
Using heuristic algorithms, I'm studying solution methods for various flow-shop scheduling problems. Each method generates an optimal job sequence. With this job sequence, we get the total elapsed time or makespan. However, for an n-jobs and m-machine flow-shop scheduling problem, manually determining the makespan and creating the Gantt chart is quite tedious.
Could you suggest any software or tools to generate the Gantt chart and to determine the makespan (total elapsed time) of an optimal job sequence for a flow-shop scheduling problem?
Looking forward to your valuable suggestions. Thank you!
Relevant answer
Answer
Tedious it is, yes. Use Plotly:
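For instance, here is a minimal sketch (the processing times and job sequence are made up, and Plotly must be installed) that computes the makespan with the standard flow-shop completion-time recursion and draws the Gantt chart:

# Makespan and Gantt chart for a given permutation flow-shop sequence.
# Made-up data: p[job] = processing times on machines 1..3, in order.
import plotly.graph_objects as go

p = {"J1": [5, 4, 3], "J2": [3, 6, 2], "J3": [4, 2, 5]}
sequence = ["J2", "J1", "J3"]   # e.g. the sequence returned by your heuristic
n_machines = 3

# Standard recursion: C[job][m] = max(C[prev job][m], C[job][m-1]) + p[job][m]
completion, prev = {}, None
for job in sequence:
    completion[job] = []
    for m in range(n_machines):
        ready_machine = completion[prev][m] if prev else 0
        ready_job = completion[job][m - 1] if m > 0 else 0
        completion[job].append(max(ready_machine, ready_job) + p[job][m])
    prev = job
makespan = completion[sequence[-1]][-1]
print("Makespan:", makespan)

# One horizontal bar per operation; start = completion - duration.
colors = {"J1": "#636efa", "J2": "#ef553b", "J3": "#00cc96"}
fig = go.Figure()
for job in sequence:
    for m in range(n_machines):
        fig.add_trace(go.Bar(x=[p[job][m]], base=[completion[job][m] - p[job][m]],
                             y=[f"Machine {m + 1}"], orientation="h", name=job,
                             marker_color=colors[job], showlegend=(m == 0)))
fig.update_layout(barmode="overlay", xaxis_title="Time",
                  title=f"Flow-shop Gantt chart (makespan = {makespan})")
fig.show()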
  • asked a question related to Operations Research
Question
1 answer
I'm working on a routing and scheduling problem in the home care services context. I treat a break as a dummy patient, so routing and scheduling are also implemented for the break node (with some conditions). For some reason, I need to determine whether a visit occurs before or after a break. I defined two binary variables: "Zikt", equal to 1 if patient i is visited by doctor k on shift t before the break, and "Z'ikt" for after the break. I added two new constraints as follows:
S(b,k,t)-S(i,k,t) =l= M* Z(i,k,t)
Z(i,k,t) + Z'(i,k,t) =l= sum((j), X(i,j,k,t))
P.s: S(b,k,t) is the starting time of break by doctor k on shift t
S(i,k,t) is the starting time of visiting patient i by doctor k on shift t
X(i,j,k,t) is if doctor k on shift t goes from node i to node j (binary variable)
In the first constraint, if the left-hand side becomes positive, Z(i,k,t) is forced to 1.
The second one guarantees that Z and Z' can take the value 1 only if patient i is actually visited by doctor k on shift t.
After adding these two constraints to my model, the resulting values of my S variables become wrong.
I think it would be better to rewrite the first constraint so that it is somehow related to X(i,j,k,t), but I cannot figure out how to do it.
I'd appreciate it if anyone could help me.
Thanks
Relevant answer
Answer
Let's break down the problem and try to reframe the constraints to ensure they reflect the requirements accurately.
Firstly, your goal is to determine whether a visit to a patient \( i \) by doctor \( k \) on shift \( t \) occurs before or after a break.
Your variables:
- \( Z_{ikt} \): Binary, 1 if patient \( i \) is visited by doctor \( k \) on shift \( t \) before the break.
- \( Z'_{ikt} \): Binary, 1 if patient \( i \) is visited by doctor \( k \) on shift \( t \) after the break.
- \( S_{bkt} \): Starting time of break by doctor \( k \) on shift \( t \).
- \( S_{ikt} \): Starting time of visiting patient \( i \) by doctor \( k \) on shift \( t \).
- \( X_{ijkt} \): Binary, 1 if doctor \( k \) on shift \( t \) goes from node \( i \) to node \( j \).
Let's restate the constraints:
1. If \( S_{bkt} \) is greater than \( S_{ikt} \), then \( Z_{ikt} \) should be 1. This implies that the visit to patient \( i \) is before the break. You've used a large \( M \) to handle this condition. This is a big-M constraint and is often used in linear programming to model conditional constraints.
2. A visit can either be before the break or after the break, hence the sum of \( Z_{ikt} \) and \( Z'_{ikt} \) is at most 1, and can be 1 only if patient \( i \) is visited by doctor \( k \) on shift \( t \). You've used the \( X \) variable to enforce this, which seems correct.
For the first constraint: the way you've modeled it assumes that if \( S_{bkt} - S_{ikt} \) is positive (i.e., \( S_{bkt} > S_{ikt} \)), \( Z_{ikt} \) will be 1. However, this doesn't enforce the converse (i.e., if \( S_{bkt} \leq S_{ikt} \), \( Z_{ikt} \) should be 0).
You can write this as a pair of inequalities that covers both scenarios:
\[
S_{bkt} - S_{ikt} \leq M\,Z_{ikt}
\]
\[
S_{ikt} - S_{bkt} \leq M(1 - Z_{ikt})
\]
The first inequality (your original constraint) forces \( Z_{ikt} = 1 \) whenever \( S_{bkt} > S_{ikt} \), i.e., whenever the visit starts before the break. The second forces \( Z_{ikt} = 0 \) whenever the visit starts after the break, which supplies the missing converse. (When \( S_{bkt} = S_{ikt} \), either value is feasible; a small tolerance can break the tie if needed.)
The second constraint you provided seems to be correct. It ties the \( Z \) variables to the \( X \) variable, ensuring that \( Z \) and \( Z' \) can only be non-zero if there's a visit.
Lastly, ensure that the value of \( M \) is sufficiently large to avoid infeasibilities but not too large to cause numerical instability in the solver.
Try implementing these changes and see if they correct the issues with the \( S \) variables.
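If it helps, here is a minimal sketch of the corrected pair of constraints for a single (i, k, t) combination, written in Python/PuLP purely as an illustration (your model is in GAMS, so this is only analogous; the big-M value, the fixed times, and the stand-in for sum(j, X(i,j,k,t)) are all placeholders):

import pulp

M = 1000  # big-M: anything larger than the longest possible shift horizon

model = pulp.LpProblem("break_ordering", pulp.LpMinimize)
S_i = pulp.LpVariable("S_ikt", lowBound=0)    # start time of visit i
S_b = pulp.LpVariable("S_bkt", lowBound=0)    # start time of the break
Z   = pulp.LpVariable("Z_ikt", cat="Binary")  # 1 if visit i is before the break
Zp  = pulp.LpVariable("Zp_ikt", cat="Binary") # 1 if visit i is after the break
visited = 1  # placeholder for sum(j, X(i,j,k,t)) in the full model

model += 0 * S_i                        # dummy objective: feasibility only
model += S_b - S_i <= M * Z             # break starts after visit => Z = 1
model += S_i - S_b <= M * (1 - Z)       # visit starts after break => Z = 0
model += Z + Zp <= visited              # Z/Z' allowed only if the visit happens

model += S_i == 3                       # toy data to exercise the logic
model += S_b == 8
model.solve(pulp.PULP_CBC_CMD(msg=False))
print("Z =", Z.value())  # expected 1: the visit (t=3) precedes the break (t=8)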
  • asked a question related to Operations Research
Question
1 answer
I want to solve a Mixed Integer problem with several constraints. Gurobi and Google OR-Tools solve such problems. Though I am not sure, I think they use exact methods such as branch and bound or cutting planes. OptaPlanner, on the other hand, is a metaheuristic solver. Does it give equally good results in terms of optimality?
Relevant answer
Answer
Gurobi, Google OR-Tools, and OptaPlanner are all powerful optimization solvers, but they differ in terms of the techniques they employ and the types of problems they are best suited for. Let's discuss their approaches and the trade-offs between optimality and efficiency.
  1. Gurobi and Google OR-Tools: Gurobi is a commercial solver and Google OR-Tools is an open-source toolkit; both employ state-of-the-art optimization algorithms, including exact methods like branch and bound, cutting planes, and advanced heuristics. They excel at solving a wide range of optimization problems, including mixed integer programming (MIP) problems with linear and nonlinear objectives and constraints. Gurobi and OR-Tools are known for providing high-quality solutions and are often used in industry and academia when optimality is crucial. These solvers can exploit problem structure, use parallel computing, and incorporate various algorithmic enhancements to improve performance. They strive for optimality, but the runtime required for finding the provably optimal solution may increase exponentially as the problem size grows.
  2. OptaPlanner: OptaPlanner is an open-source constraint satisfaction solver and a metaheuristic optimization framework. It utilizes various metaheuristic algorithms such as simulated annealing, tabu search, and local search to iteratively explore the solution space and improve the quality of solutions. OptaPlanner is particularly well-suited for constraint satisfaction problems (CSPs), where the goal is to find feasible solutions that satisfy a set of constraints. While metaheuristics like OptaPlanner do not guarantee optimal solutions, they can often find high-quality solutions within a reasonable amount of time. OptaPlanner is known for its scalability and efficiency, making it suitable for large-scale, real-world problems.
When comparing the optimality of solutions, exact methods like Gurobi and OR-Tools generally provide provably optimal solutions if given sufficient runtime. On the other hand, metaheuristics like OptaPlanner may not always guarantee optimality but can often provide good solutions within practical time constraints.
The choice between Gurobi/Google OR-Tools and OptaPlanner depends on your specific problem characteristics, time constraints, and the importance of achieving optimality. If optimality is critical and the problem size is manageable, Gurobi or OR-Tools may be preferable. However, if scalability, efficiency, and a good-quality solution within a reasonable time frame are more important, OptaPlanner's metaheuristic approach can be a viable choice.
It's worth noting that different solvers may perform differently on specific problem instances, so it's beneficial to experiment and compare their results on your particular problem to determine the most suitable solver for your needs.
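As a concrete illustration of the "provably optimal" side, a toy MIP in OR-Tools returns a status flag that certifies optimality, which a metaheuristic run cannot do (the knapsack-style data below are made up):

# Tiny MIP solved exactly with OR-Tools' linear solver wrapper (pywraplp).
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("SCIP")     # SCIP ships with OR-Tools
value, weight = [10, 7, 4, 3], [6, 4, 3, 2]
x = [solver.IntVar(0, 1, f"x{i}") for i in range(4)]

solver.Add(sum(w * xi for w, xi in zip(weight, x)) <= 9)
solver.Maximize(sum(v * xi for v, xi in zip(value, x)))

status = solver.Solve()
if status == pywraplp.Solver.OPTIMAL:   # a proven optimum, not just a good point
    print("optimal value:", solver.Objective().Value())
    print("selection:", [int(xi.solution_value()) for xi in x])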
  • asked a question related to Operations Research
Question
4 answers
I am attempting to build a computational optimization algorithm to reassign licenses within a given band with the objective of maximizing continuity and minimizing reassignments. The model’s main constraint is that licenses can’t overlap.
I’ve been struggling with identifying a good algorithm candidate, given the problem is not a simple one.
The problem is as follows:
The problem has two objectives:
  • Maximize the continuity of assignments or minimize gaps between assignments
  • Minimize reassignments of current licenses (optional objective)
The objectives are measured by two functions:
  1. The continuity index (CI): a quantitative measurement of the continuity of a spectrum subset, ranging from 0 for perfect continuity to 1 for maximally imperfect continuity. The continuity index works by penalizing two features: the number of gaps in a defined subset and the gap sizes.
  2. Work (Wtotal) to measure reassignments, calculated as Wtotal = w1 + w2 + ... + wn, where n is the number of licenses in a given band and wn = license n's bandwidth * distance moved in MHz
Constraints:
Although the model will eventually include multiple constraints, initially I'd like to only consider one. That being, licenses must not overlap.
Attached is a visualization of an example problem
Relevant answer
Answer
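One way to get started while looking for an algorithm is to make the two objective functions executable so candidate reassignments can be scored. A minimal Python sketch follows; note that the exact CI normalization below is my assumption (the question only states that CI penalizes gap count and gap sizes on a [0, 1] scale), while Wtotal follows the stated definition wn = bandwidth * distance moved in MHz:

def continuity_index(licenses, band_width, alpha=0.5):
    """licenses: list of non-overlapping (start_mhz, end_mhz) spans.
    Assumed CI: alpha * (gap count / max gaps) + (1 - alpha) * (gap size / band)."""
    spans = sorted(licenses)
    if len(spans) < 2:
        return 0.0
    gaps = [s2 - e1 for (_, e1), (s2, _) in zip(spans, spans[1:]) if s2 > e1]
    if not gaps:
        return 0.0                                # perfect continuity
    count_pen = len(gaps) / (len(spans) - 1)      # assumed normalization
    size_pen = sum(gaps) / band_width
    return min(1.0, alpha * count_pen + (1 - alpha) * size_pen)

def total_work(old, new):
    """old/new: dicts license -> (start, end); w = bandwidth * |shift| in MHz."""
    return sum((e - s) * abs(new[k][0] - s) for k, (s, e) in old.items())

old = {"A": (0, 10), "B": (25, 30)}
new = {"A": (0, 10), "B": (10, 15)}               # B moved down to close the gap
print("CI before:", continuity_index(list(old.values()), 30))
print("CI after :", continuity_index(list(new.values()), 30))
print("Wtotal   :", total_work(old, new))         # 5 MHz wide * 15 MHz moved = 75

With both objectives computable, the non-overlap constraint plus these two objectives can be fed to a weighted-sum MILP or a multi-objective metaheuristic (e.g. NSGA-II) to search over reassignments.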
  • asked a question related to Operations Research
Question
3 answers
A collection of solved examples in the Pyomo environment (a Python package)
The solved problems are mainly related to supply chain management and power systems.
Feel free to follow / branch / contribute
Relevant answer
Answer
udemy.com/course/optimization-in-python/?couponCode=36C6F6B228A087695AD9
  • asked a question related to Operations Research
Question
4 answers
Hello! I am going to conduct a study on the application of the Vehicle Routing Problem in the real world. However, I am struggling with how to construct my networks. I would like to ask how to define the edges/arcs/links between the vertices.
For example, what should the edge between cityA and cityB represent? Most of the literature uses travel times based on road distances from cityA to cityB. However, there are a lot of paths from cityA to cityB. The usual way to address this is to use the shortest path from cityA to cityB. Are there any alternatives to address this issue? What should the edge between two vertices represent?
Relevant answer
Answer
You can use the real distance or travel time of the best path that links each pair of cities. Usually, travel times are calculated based on the real distance and an assumed average vehicle velocity.
You always need to calculate the cost matrix between all the cities (nodes) as instance data. To make this possible, I recommend OSRM or Valhalla, or a proprietary solution like ArcGIS Desktop Network Analyst.
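For example, here is a sketch of querying OSRM's public table service for the travel-time/distance matrix (the demo server below is rate-limited and meant for testing only, so run your own OSRM instance for real studies; the coordinates are made up):

# Build a cost matrix with OSRM's /table service. pip install requests.
import requests

coords = [(13.388860, 52.517037), (13.397634, 52.529407), (13.428555, 52.523219)]
coord_str = ";".join(f"{lon},{lat}" for lon, lat in coords)  # OSRM wants lon,lat

url = f"http://router.project-osrm.org/table/v1/driving/{coord_str}"
# the "distance" annotation requires a reasonably recent OSRM version
resp = requests.get(url, params={"annotations": "duration,distance"}, timeout=10)
data = resp.json()

durations = data["durations"]   # seconds; durations[i][j] = travel time i -> j
distances = data["distances"]   # meters along the road network
for row in durations:
    print([round(v) for v in row])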
  • asked a question related to Operations Research
Question
5 answers
Could any expert try to examine the new interesting methodology for multi-objective optimization?
A brand new conception of preferable probability and its evaluation has been created. The book, entitled "Probability-based Multi-objective Optimization for Material Selection", was published by Springer; it opens a new way for multi-objective orthogonal experimental design, uniform experimental design, response surface design, robust design, etc.
It is a rational approach without personal or other subjective coefficients, and is available at https://link.springer.com/book/9789811933509,
DOI: 10.1007/978-981-19-3351-6.
Best regards.
Yours
M. Zheng
Relevant answer
Answer
  1. Evolutionary Algorithms (EA): Evolutionary algorithms (EA) are a family of optimization algorithms that are inspired by the principles of natural evolution. These algorithms are widely used in multi-objective optimization because they can handle multiple objectives and constraints and can find a set of Pareto-optimal solutions that trade-off between the objectives.
  2. Particle Swarm Optimization (PSO): Particle Swarm Optimization (PSO) is a population-based optimization algorithm that is inspired by the social behavior of birds and fish. PSO has been applied to multi-objective optimization problems, and it has been shown to be effective in finding Pareto-optimal solutions.
  3. Multi-objective Artificial Bee Colony (MOABC): MOABC is a multi-objective optimization algorithm inspired by the foraging behavior of honeybees. MOABC has been applied to various multi-objective optimization problems and has been found to be efficient in finding the Pareto-optimal solutions.
  4. Decomposition-based Multi-objective Optimization Algorithms (MOEA/D): Decomposition-based multi-objective optimization algorithms (MOEA/D) decompose the multi-objective problem into a set of scalar subproblems, then solve them by using a scalar optimization algorithm. MOEA/D has been found to be effective in solving multi-objective problems with high dimensionality and/or large numbers of objectives.
  5. Deep reinforcement learning (DRL): DRL is a category of machine learning algorithms that allow an agent to learn by interacting with the environment and using rewards as feedback. This approach has been used to optimize the decision-making process in multi-objective problems.
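All of these families target a Pareto-optimal set, so as a minimal illustration of the shared underlying concept, here is a sketch (with made-up candidate points, both objectives minimized) of filtering the non-dominated solutions:

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

candidates = [(1, 9), (2, 7), (3, 8), (4, 4), (6, 3), (7, 5), (9, 1)]
print(pareto_front(candidates))   # [(1, 9), (2, 7), (4, 4), (6, 3), (9, 1)]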
  • asked a question related to Operations Research
Question
9 answers
LINGO is a commercial application for mathematical optimization modeling (not open source). What is your experience with LINGO? What are the less common tools in LINGO for solving optimization models?
Relevant answer
Answer
I have solved hundreds of problems using Solver, an Excel add-in, since 1993.
It is excellent and free, and can handle scenarios with up to 200 alternatives and 100 criteria, i.e., 200 x 100 matrices.
  • asked a question related to Operations Research
Question
2 answers
In my data envelopment analysis (DEA) model, I have 3 outputs, namely (i) No. of Sponsored Projects, (ii) No. of Consultancies and (iii) Total Revenue generated from Sponsored Projects and Consultancies. My DMUs are different universities. It is clear that the third output is the overlapping factor, as the revenue is generated by the sponsored projects and consultancies. If I choose only the third output, the problem is that I cannot take into account the social benefits from a university, because it is possible that a university undertakes many projects and consultancies (i.e. indicators of social benefits) while generating less revenue. Again, if I take the first two outputs, then I miss the part of revenue generation, because all the projects and consultancies may not generate the same revenue. Should I consider the first two outputs, or the third output, or all three outputs in my model?
Relevant answer
Answer
It is generally recommended to include all three outputs in your DEA model, as each of them represents a distinct aspect of the performance of the DMUs (universities).
What I understand is that the first output, No. of Sponsored Projects, reflects the ability of the university to attract funding for research and development. The second output, No. of Consultancies, reflects the university's ability to provide services to external organizations. The third output, Total Revenue generated from Sponsored Projects and Consultancies, reflects the university's overall financial performance.
If you include all three outputs in your model, you can capture a university's social benefits and financial performance. However, you should be aware that the third output may be correlated with the first two outputs, as revenue is generated from sponsored projects and consultancies. This could potentially affect the results of your DEA model.
One way to account for this is to use a model that allows for overlapping outputs, such as the common weights model or the free disposal hull model. These models allow for the possibility that some inputs and outputs may be used in the production of multiple outputs, and can provide a more accurate assessment of the efficiency of the DMUs.
Ultimately, the decision of which outputs to include in your DEA model will depend on the specific research question you are trying to answer and the data that is available to you. It may be helpful to consider the strengths and limitations of each model and the implications of including or excluding certain outputs in your analysis.
Good luck.
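To make the mechanics concrete, here is a minimal sketch of a standard input-oriented CCR DEA model in multiplier form, solved once per DMU with scipy; the data are made up (one input, your three outputs), and a real study would move to one of the overlap-aware variants mentioned above:

import numpy as np
from scipy.optimize import linprog

X = np.array([[100], [80], [120], [90]], float)        # inputs (e.g. budget)
Y = np.array([[20, 15, 5], [18, 10, 6],                # outputs: projects,
              [25, 20, 4], [15, 12, 7]], float)        # consultancies, revenue
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

for o in range(n):
    # variables: [u (s output weights), v (m input weights)]; maximize u.Y[o]
    c = np.concatenate([-Y[o], np.zeros(m)])           # linprog minimizes
    A_ub = np.hstack([Y, -X])                          # u.Y_j - v.X_j <= 0, all j
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)  # v.X_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(1e-6, None)] * (s + m), method="highs")
    print(f"university {o}: efficiency = {-res.fun:.3f}")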
  • asked a question related to Operations Research
Question
3 answers
I'm struggling to understand the method followed in the following analysis. Can someone please explain how the author got the values of Δ_1 and K_1 that justify the analysis?
I have tried to isolate "Δ" and "K" by setting Equation (B8) equal to zero, but I have failed to obtain similar conditions.
P.S.: I'm new to mathematical modelling, so I really need to understand what's going on here. Thanks
Relevant answer
Answer
The RHS is a fraction, whose numerator and denominator are quadratic expressions in Δ. Therefore the fraction takes positive values when numerator and denominator are of the same sign...
  • asked a question related to Operations Research
Question
20 answers
I am planning to use DEA analysis for my ongoing research. I have SPSS with me; therefore, I would like to figure out how it can be done with SPSS.
Your suggestions would be highly appreciated.
Relevant answer
Answer
No, you can't. SPSS has no built-in DEA procedure; you would need dedicated DEA software or a general optimization/LP tool.
  • asked a question related to Operations Research
Question
2 answers
I need help coming up with a possible topic proposal related to any existing problem, whether in the community, hospitals, or banks, which can be solved using multiple Operations Research techniques. Thanks.
Relevant answer
Answer
This web page might provide some inspiration:
  • asked a question related to Operations Research
Question
20 answers
Could you suggest some contemporary research topics in Operations Research (OR)?
In your opinion, which research topics of OR could be impactful in the next decade?
Thanks in advance.
Relevant answer
Answer
My scientific opinion on this question is that we must hybridize some problems together and create new tools (mathematical or artificial) to solve them.
Big data and large-scale problems will be the focus of several researchers in the next few years.
Try to hybridize big data problems with the solving tools of operations research.
  • asked a question related to Operations Research
Question
2 answers
I have just recently started a new "weekend project" in addition to my master's studies and I am looking for a data-set. I would like to use some Operations Research to design an optimal gym schedule that conforms to a specific set of constraints.
The idea is to create a daily gym schedule that conforms to a set of constraints (e.g. time, target muscles etc) as well as a set of physiological constraints. The physiological constraints are things such as do not exercise muscle-x and muscle-y together or do not do abdominal exercises for two consecutive days etc.
However the problem I face is data, specifically a data-set (or data-sets).
Are there any open-source datasets which list an exercise, as well as all the muscles targeted? Preferably one that lists as much of the physiological data as possible. E.g. which stabilizers are activated, which secondary muscle is also activated, is it an extension or flexion. I am also looking for datasets which could help me with some of the physiological constraints, such as muscle recovery times, which muscles not to exercise together etc?
My goal is to algorithmically capture an OR model which I can provide with input data such as muscle group target and time and the model must output a schedule of exercises which targets all the muscles in that muscle group, is not physiologically harmful and is within the time constraint.
Relevant answer
Answer
I don't have a specific data set or study in mind, but the US Army should have some data sets from recent studies. They recently transitioned to a new physical fitness plan developed around the physiological aspects of job performance in different areas, instead of a generalized physical fitness plan for all soldiers. Also, the data should be fairly varied, with categories ranging from 18 to 40 years old, various heights, body fat contents, sexes, ethnicities, and races. Not to mention most soldiers are in good physical condition, healthy, with proper nutrition and hydration.
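On the modeling side of the question (whatever dataset you end up with), the core of the schedule generator can be sketched as a small covering MILP; all exercise names, durations, muscle sets, and the incompatibility pair below are hypothetical placeholders:

import pulp

exercises = {                      # name: (minutes, muscles hit) -- made up
    "squat":      (15, {"quads", "glutes"}),
    "leg_press":  (10, {"quads"}),
    "hip_thrust": (10, {"glutes"}),
    "plank":      (5,  {"abs"}),
    "crunch":     (5,  {"abs"}),
}
target = {"quads", "glutes", "abs"}
time_budget = 30
incompatible = [("plank", "crunch")]    # e.g. a physiological "don't pair" rule

m = pulp.LpProblem("gym_schedule", pulp.LpMinimize)
pick = {e: pulp.LpVariable(e, cat="Binary") for e in exercises}

m += pulp.lpSum(exercises[e][0] * pick[e] for e in exercises)  # minimize time
for muscle in target:                   # every target muscle must be covered
    m += pulp.lpSum(pick[e] for e in exercises
                    if muscle in exercises[e][1]) >= 1
m += pulp.lpSum(exercises[e][0] * pick[e] for e in exercises) <= time_budget
for a, b in incompatible:               # physiological exclusion constraints
    m += pick[a] + pick[b] <= 1

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([e for e in exercises if pick[e].value() == 1])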
  • asked a question related to Operations Research
Question
13 answers
Journal of Industrial & Management Optimization (JIMO) is an open access journal. You pay a substantial amount to publish a paper. When you go to the website of its publisher, American Institute of Mathematical Sciences (AIMS Press), it seems that it is not really based in the United States. I am not sure if it is a legitimate professional organization or if it is a predatory publisher. They have a large number of open access journals. On the other hand, their handling of papers is terrible: extremely slow and low-tech, which is not typical for predatory journals. It may take 13 months to get an editorial rejection, for instance. Furthermore, they don't have an online submission system with user profiles on it, you just submit the paper on a website, and they give you a URL to check your paper's status, which makes your submission open to anyone who has the URL. It has an impact factor of 1.3, which makes me puzzled. Any comments on this organization and the journal will be appreciated.
Relevant answer
Answer
Norbert Tihanyi one little warning, if you look whether a particular journal is mentioned in the Beall’s list you should not only check the journal title in the stand-alone journal list (https://beallslist.net/standalone-journals/) but also the publisher behind it (if any). In this case the publisher is not mentioned in the Beall’s list (https://beallslist.net/). Anis Hamza I suppose you mean ISSN number, this journal with ISSN 1547-5816 and/or E-ISSN:1553-166X is mentioned in Scopus (https://www.scopus.com/sources.uri?zone=TopNavBar&origin=searchbasic) and Clarivate’s Master journal list (https://mjl.clarivate.com/home).
Back to your question, it is somewhat diffuse. There are signs that you are dealing with a questionable organization:
-Contact info renders in Google a nice residence but does not seem to correspond to an office and I quote “The American Institute of Mathematical Sciences is an international organization for the advancement and dissemination of mathematical and applied sciences.” https://www.aimsciences.org/common_news/column/aboutaims
-Both websites https://www.aimsciences.org/ and http://www.aimspress.com/ function more or less okay, but not flawlessly
-The journal "Journal of Industrial & Management Optimization (JIMO)" is somewhat vague about the APC. It positions itself as hybrid (with an APC of 1800 USD), but all papers I checked can be read as open access (although not all have a CC etc. license). It mentions something like open access for free when an agreement is signed with your institution, but how much this costs is unclear
-No problem by itself, but the majority of authors are from China, which makes you wonder about the "American" Institute…
-Editing is, well… sober
On the other hand, it looks like, and I quote, "AIMS is a science organization with two independent operations: AIMS Press (www.aimspress.com) and the American Institute of Mathematical Sciences (AIMS) (www.aimsciences.org )." AIMS Press is focused on Open Access journals, while the journals published by AIMS (www.aimsciences.org) are/used to be subscription-based journals. Pretty much like Springer has their BioMed Central (BMC) journal portfolio and Bentham has their Bentham Open division.
Facts are:
-AIMS ( www.aimsciences.org ), more than 20 of their journals are indexed in SCIE and indexed in Scopus as well (under the publisher’s name: American Institute of Mathematical Sciences)
-AIMS Press (www.aimspress.com ), four journals are indexed in SCIE and thus have an impact factor and 14 journals are indexed in Clarivate’s ESCI. 7 journals are indexed in Scopus.
-AIMS Press, 20 of their journals are a member of DOAJ
-Journal of Industrial & Management Optimization (JIMO) https://www.aimsciences.org/journal/1547-5816 is indexed in Clarivate’s SCIE (impact factor 1.801, see enclosed file for latest JCR Report) and Scopus indexed CiteScore 1.8 https://www.scopus.com/sourceid/12900154727.
-For the papers I checked the time between received and accepted varies between 6 and 9 months and an additional 3-4 months before publication (it is well… not fast but not unusual)
So, overall, I think that the publisher has quite some credibility and it might be worthwhile to consider.
Best regards.
  • asked a question related to Operations Research
Question
5 answers
Hello, there is a dataset with several KPIs, each varying between 0 and 1. What is the best analytical approach to split the data and define a line in two-dimensional space (or a plane in multi-dimensional space) based on the data's behavior and practical assumptions/considerations (there are recommended ranges for each KPI, etc.)?
For instance in the attached screenshot, I want to flag the individuals/observations in Ae area for more investigation. I want to be able to apply the proposed approach in multi-dimensional space with several KPIs as well. Any thoughts would be appreciated.
Relevant answer
Answer
If you want to 'flag' individuals by some data-driven approach, possibly making a cluster tree would be helpful, and you can visually check whether the cluster tree flags the observations you were anticipating. This can help avoid Simpson's paradox and the like.
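A minimal sketch of that idea with scipy's hierarchical clustering, where the KPI data and the flagging threshold are placeholders for your recommended ranges:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
normal  = rng.uniform(0.5, 0.9, size=(40, 2))   # made-up KPI observations
suspect = rng.uniform(0.0, 0.3, size=(10, 2))   # e.g. the "Ae" corner
kpis = np.vstack([normal, suspect])

tree = linkage(kpis, method="ward")             # the cluster tree
labels = fcluster(tree, t=3, criterion="maxclust")

for c in np.unique(labels):
    centroid = kpis[labels == c].mean(axis=0)
    flagged = bool(np.all(centroid < 0.4))      # assumed practical threshold
    print(f"cluster {c}: centroid = {centroid.round(2)}, flag = {flagged}")

The same code runs unchanged with more KPI columns, which covers the multi-dimensional case; only the per-KPI thresholds need domain input.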
  • asked a question related to Operations Research
Question
22 answers
Operations research techniques are used widely in the scientific literature to support decision-making problems in healthcare. However, such methods are rarely applied in practice. What are the obstacles? What could be the solution?
Relevant answer
Answer
Indeed, operations research (OR) and management science (MS) methods are not consistently used in practice for healthcare management decision-making. A report published by the National Academy of Engineering and the Institute of Medicine (Reid et al, 2005) states in an unusually blunt way, “In fact, relatively few health care professionals or administrators are equipped to think analytically about health care delivery as a system or to appreciate the relevance of engineering tools. Even fewer are equipped to work with engineers to apply these tools.”
Thus, it is often difficult for many administrators to appreciate the role of MS and OR methodology in the healthcare delivery process. A wide gap exists between the OR and MS publications that urge the use of this methodology in healthcare settings but provide few or no practical examples, and the publications with examples that are too specialized and complex for digesting by a typical hospital administrator. This gap is probably one of the reasons why too many administrators still have a vague idea of the practical value of healthcare OR and MS methodology. Many of them simply do not see ‘what’s in it for me’.
On the other hand, OR and MS professionals/engineers do not always have enough knowledge of healthcare or the role of physicians in making not only clinical but also management decisions. Healthcare has a culture of rigid division of labor. This functional division does not effectively support the methodology that crosses the functional areas, especially if it assumes significant change in traditional relationships.
Nonetheless, to address the challenge of transforming the system of care delivery in practice, some leading healthcare organizations have adopted this area as a strategic priority. For example, the Mayo Clinic, one of the largest integrated medical centers in the USA, has defined the Science of Healthcare Delivery as one of its four strategic directions. The others are Quality, Individualized Medicine, and Integration (Fowler et al, 2011). The Mayo Clinic has also created the Center for the Science of Healthcare Delivery, a new initiative that will focus on creating improved approaches to how healthcare is delivered (Mayo Clinic, 2011).
The bottom line: physicians and healthcare administrators are not supposed to have knowledge of the OR/MS methods. They are too busy with other problems. Rather, they are supposed to understand why traditional management approaches and educated guesses are usually inaccurate, short-lived or unsustainable; which quantitative technique is more appropriate for addressing a particular managerial problem; what can be expected from a particular technique and what are its strengths and limitations. For example, is queuing analytic theory (QAT) or discrete event simulation (DES) the appropriate methodology for addressing a particular problem? What are the caveats in linear optimization for staffing and scheduling? What technique is the most appropriate for making a particular forecast type and why? What is the best approach to fair cost (savings) allocation? And so on… Collaboration and trust between the healthcare/physician leaders and OR/MS professionals is the key to progress in this area.
  • asked a question related to Operations Research
Question
77 answers
Some journals give reviewers 60 days, others give 40 days, 30 days, or 20 days to review a paper. MDPI journals give only 10 days, but it can be extended if the reviewer needs more time. In my opinion, 10 days might be too short, but 60 days is excessive. Allowing 60 days for a peer review is adding to the response time unnecessarily, and disadvantaging the authors. I can thoroughly review a paper in a day (if I dedicate myself to it), or two at most. A reviewer should only accept a review request if they are not too busy to do it in the next 10 to 20 days. I have encountered situations in which a reviewer agrees to the review, but does not submit the review at the end of 60 days, wasting those valuable 60 days from the author. What do you think the allowed time for reviewers should be?
Relevant answer
Answer
15 days is enough.
  • asked a question related to Operations Research
Question
3 answers
Dear all,
I want to start learning discrete choice-based optimization so that I can use it later for my research works. I want to know about free courses, books, study materials available on this topic. Any suggestions will be appreciated.
Thanks,
Soumen Atta
Relevant answer
Answer
You should begin by studying discrete optimization methods in general. After that, you could study models and methods for choosing among options. I am the author of the Selection of Proposals and the Integration of Variables methods, devoted to option selection, which you can find in my ResearchGate profile, including applications.
  • asked a question related to Operations Research
Question
4 answers
Can anyone guide me on the pain areas in cloud computing where Operations Research techniques can be applied? Please guide me on this.
Regards,
JP
Relevant answer
Answer
A more recent survey (but also already quite some years old) is the following:
  • asked a question related to Operations Research
Question
3 answers
Hello everyone,
I am currently developing a small simulation model of an assembly worker pool in which I would like to consider randomly occurring absenteeism due to illness. Meaning: if one of, let's say, 17 workers is ill, he becomes unavailable and the Pool capacity is dynamically set to 16. After recovery, the worker becomes available again and the Pool capacity is increased by 1.
The model shall be based on historical data, which are available in the following structure (see excerpt @attachment).
The Pool Capacity is set every 8 hours with each shift change with the following logic (triggered by schedule):
- Create a list of all workers that belong to the next shift group
- For each worker which is available (workers have a parameter called “available”):
o determine randomly if the worker gets ill (using randomFalse(average chance to become ill, e.g. 2.5%, see above))
o If the worker becomes ill, assign a value from a custom distribution (based on observations of the number of absent days per illness) for the number of unavailable days, and create a dynamic event which will set the availability of this worker back to true after this number of days
The Pool capacity is then set to the number of remaining available workers in the list
The model looks like this (see model overview @attachment).
At first glance, the model works as intended. However, after 50 replications I aggregated the simulation data and compared the simulation results with my real data (table @attachment). I found that the model indicates that ~41% of the shifts are staffed with 17 workers (= highest rel. frequency), while the real data show that 44% of the shifts are staffed with 16 workers (= highest rel. frequency of the real data).
Something in the model concept does not seem to fit, otherwise the relative frequencies would match in a better way, right?
Can anybody tell me whether my current approach makes sense, or am I overlooking something crucial? Is there a better approach to model this kind of problem?
Relevant answer
Answer
I agree with Christopher C Kelly , there may be non-random factors that contribute to absenteeism. There may also be auto-correlation between employees' absenteeism.
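One cheap way to test such hypotheses is to strip the availability logic down to a plain day-level Monte Carlo outside the simulation tool and compare its output against the historical frequencies; in the sketch below the 2.5% daily illness chance and the absence-duration distribution are placeholders for the empirical values, and the shift-group structure is simplified to a single pool:

import random
from collections import Counter

random.seed(42)
N_WORKERS, DAYS = 17, 10_000
P_ILL = 0.025                              # daily illness chance (placeholder)
DURATION_DAYS = [1, 2, 3, 5, 10]           # assumed absence-length support
DURATION_W    = [0.4, 0.25, 0.2, 0.1, 0.05]

back_on_day = [0] * N_WORKERS              # day each worker is available again
capacity_freq = Counter()

for day in range(DAYS):
    for w in range(N_WORKERS):
        if back_on_day[w] <= day and random.random() < P_ILL:
            # worker falls ill today; unavailable for a sampled number of days
            back_on_day[w] = day + random.choices(DURATION_DAYS, DURATION_W)[0]
    available = sum(1 for w in range(N_WORKERS) if back_on_day[w] <= day)
    capacity_freq[available] += 1

for cap in sorted(capacity_freq, reverse=True):
    print(f"{cap:2d} workers: {capacity_freq[cap] / DAYS:6.1%} of days")

If even this toy version cannot reproduce a mode of 16 available workers, the illness probability or the duration distribution (rather than the model structure) is the first place to look.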
  • asked a question related to Operations Research
Question
9 answers
Is there any method to find the optimal solution without first finding an initial basic feasible solution when solving transportation problems?
Relevant answer
Answer
If you are looking for the optimal solution, then you can use TORA or the Excel Solver. But if you want to use an algorithm like VAM, RAM, etc., the answer is 'NO'. You have to calculate an IBFS and then test whether it is optimal or not.
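For completeness: "optimal without an IBFS" is exactly what happens when the transportation problem is handed to a general LP solver, which is essentially what tools like TORA or Excel Solver do. A minimal sketch with scipy on a made-up 2x3 instance:

import numpy as np
from scipy.optimize import linprog

cost = np.array([[4, 6, 8],
                 [5, 3, 7]])
supply = [60, 40]
demand = [30, 40, 30]                     # balanced: both totals are 100
m, n = cost.shape

A_eq, b_eq = [], []
for i in range(m):                        # each source ships its supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                        # each sink receives its demand
    col = np.zeros(m * n); col[j::n] = 1
    A_eq.append(col); b_eq.append(demand[j])

res = linprog(cost.flatten(), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (m * n), method="highs")
print("optimal cost:", res.fun)
print(res.x.reshape(m, n))                # optimal shipment plan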
  • asked a question related to Operations Research
Question
12 answers
Any recommendation of a scientific journal for submitting a paper on operations research applying linear programming and vehicle routing (VRP) using the B&B algorithm?
Relevant answer
Answer
You can ask your thesis advisor about what journal they think would be best to submit your work. It is hard to suggest a journal for you without seeing the actual paper.
  • asked a question related to Operations Research
Question
5 answers
Sensitivity analysis can be used to check the variation of the optimum solution when changing the coefficients of the objective function or constant values in constraints. Are there any other things to investigate using this approach?
Relevant answer
Answer
Sensitivity analysis is useful to determine the robustness of the optimal solution. If the optimal solution changes significantly, when one of the problem parameters is changed only slightly, then the optimal solution is said to be sensitive to changes in that parameter, otherwise, it is robust.
Sensitivity analysis also gives insights into the problem under study. You can use it to validate your hypotheses about the problem or you can derive conclusions about the relationship of the optimal objective function value to the various parameters of the problem. This helps ground a problem from practice on a more reliable and intuitive basis, and demonstrates its applicability in practice.
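As a small illustration of both uses, the classic sensitivity quantities (shadow prices on constraints, reduced costs on variables) can be read directly off a solved LP; the production model below is made up, and it assumes PuLP's bundled CBC solver returns duals via the .pi and .dj attributes:

import pulp

m = pulp.LpProblem("production", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0)
y = pulp.LpVariable("y", lowBound=0)

m += 3 * x + 5 * y                        # profit objective
m += x + 2 * y <= 14, "material"
m += 3 * x - y <= 0, "ratio"
m.solve(pulp.PULP_CBC_CMD(msg=False))

print("objective:", pulp.value(m.objective))
for name, con in m.constraints.items():
    # shadow price: objective change per unit increase of this RHS
    print(f"{name}: shadow price = {con.pi}")
for v in m.variables():
    print(f"{v.name}: reduced cost = {v.dj}")

A large shadow price signals a constraint whose slight relaxation would change the optimum a lot, i.e. exactly the kind of parameter sensitivity (or lack of robustness) described above.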
  • asked a question related to Operations Research
Question
11 answers
Dear Sir/Madam,
I would like to see if anyone is interested in collaborating on some research papers. I work in the fields of SMART GRID, SMART BUILDINGS, OPTIMIZATION, and ENERGY MANAGEMENT.
If you are interested, could you send me a private text, please?
Thanks,
have a nice day!
Relevant answer
Answer
Many thanks for giving us the open invitation. I am interested in collaborating with you in the said area.
Looking forward to hearing from you soon
  • asked a question related to Operations Research
Question
11 answers
Types like: Job Shop, Batch Production, Mass Production, Manual Line Production. And why?
Relevant answer
Answer
Mass production and manual assembly line.
  • asked a question related to Operations Research
Question
8 answers
We are looking for the datasets that can be used for evaluating fairness models in machine learning on real data. Could you recommend a labeled dataset in which the labeling reveals some unfair decision process. E.g., unfair decisions in hiring, courts, healthcare etc.
Relevant answer
Answer
I would recommend trying a disruptive approach, which follows the human process against unfair decisions:
-identify worst cases you could think of, with bias, discrimination, unfair decisions observed and documented.
-identify steps which led to such decisions
You can perform machine learning, deep learning, reinforcement learning to get to a bad systematically unfair "automated decision agent".
You can define a similarity measure between the "automated decision agent" you are trying to build, and these negative reference scenarios embedded in the unfair "automated decision agents" you have gathered.
As the learning system progresses towards the target "automated decision agent" you can iterate similarity measure computations with the bad unfair references, and when you get too close, an alarm is raised.
There are many ways to address reducing this risk at the next learning batch: introduce a repulsive gradient along the negative reference(s), build a Lagrangian driving you away from it, etc...
Does it help you build a robust algorithm?
  • asked a question related to Operations Research
Question
3 answers
Dear Malek Masmoudi,
Could you please provide me with a pdf copy of the paper by Ichraf Jridi entitled:
Modelling and simulation in dialysis centre of Hedi Chaker Hospital
March 2020 In book: Operations research and simulation in healthcare Publisher: SPRINGER.
Looking forward to hearing from you ASAP.
Sincerely, yours.
Professor Mohamed Ben Hmida
Relevant answer
Answer
Dear Richard,
Thank you for your assistance.
I asked her first, but she said she did not have a copy.
So I asked Malek Masmoudi, who is the editor of the book, and I am still waiting for a reply.
Could you help me get a pdf copy of this paper?
Sincerely yours.
Pr Mohamed Ben Hmida
  • asked a question related to Operations Research
Question
33 answers
Assume we found an approximate solution A(D),
where A is a metaheuristic algorithm and D is the concrete data of your problem.
How close is the approximate solution A(D) to an optimal solution OPT(D)?
Relevant answer
  • asked a question related to Operations Research
Question
3 answers
When comparing two optimization methods on a function, should we use a two sample t-test or a paired t-test? I would say we should use the latter since paired t-test is used for correlated observations and in our case we can consider the unit of observation to be the function and the two methods as two treatments. Am I right?
Thank you in advance
Relevant answer
Answer
A novel statistical approach for comparing meta-heuristic stochastic optimization algorithms according to the distribution of the solutions in the search space has been introduced, known as extended Deep Statistical Comparison. It extends the recently proposed Deep Statistical Comparison approach, which compares meta-heuristic stochastic optimization algorithms according to solution values. Its main contribution is that the algorithms are compared not only according to the obtained solution values, but also according to the distribution of the obtained solutions in the search space. The information it provides can additionally help to identify the exploitation and exploration powers of the compared algorithms, which is important when dealing with a multimodal search space containing many local optima with similar values. The benchmark results show that the proposed approach gives promising results and can be used for a statistical comparison of meta-heuristic stochastic optimization algorithms according to solution values and their distribution in the search space. For more information, you can refer to the following paper.
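Coming back to the t-test itself, the pairing argument from the question can be made concrete with scipy: the benchmark functions act as subjects and the two optimizers as treatments, so ttest_rel (paired) is the natural choice over ttest_ind; the per-function results below are made up, and in practice a non-parametric alternative such as the Wilcoxon signed-rank test is also common:

from scipy.stats import ttest_rel, ttest_ind

# best objective values reached on the SAME 8 benchmark functions
method_a = [0.12, 1.05, 3.40, 0.88, 2.10, 0.30, 4.20, 1.75]
method_b = [0.10, 0.98, 3.55, 0.80, 1.95, 0.28, 4.05, 1.60]

t_p, p_p = ttest_rel(method_a, method_b)   # paired: uses per-function differences
t_i, p_i = ttest_ind(method_a, method_b)   # two-sample: ignores the pairing
print(f"paired:      t = {t_p:.3f}, p = {p_p:.3f}")
print(f"independent: t = {t_i:.3f}, p = {p_i:.3f}")
# The paired test removes between-function variance, which usually dominates.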
  • asked a question related to Operations Research
Question
17 answers
I understand that there is no specific rule to define this multi-author order. However, perhaps it is possible to find some common criteria.
Relevant answer
Answer
That's the clincher, and yes, it would not be an issue as the team would essentially all be equal. In fact, in articles with 2-4 authors who are of equal standing you tend to find that they take turns being first in every publication.
  • asked a question related to Operations Research
Question
5 answers
In AHP, I have come across random consistency index (RI) values as given by Saaty (1987).
Also, Prof. Taha, in his book Operations Research: An Introduction, has given a formula for calculating RI.
Which RI should be considered and why?
Relevant answer
Answer
Alessandro is correct; however, I caution you on the use of AHP due to known issues with the methodology. Unfortunately, AHP does not follow the rules of axiomatic decision theory, and thus can result in inconsistent preferences and orderings when options are added or removed. I would recommend the use of multi-objective decision analysis, using either utility or value functions depending on your preference.
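For reference, here is a short sketch of how the RI enters the consistency check, CI = (lambda_max - n) / (n - 1) and CR = CI / RI(n). The RI table below is Saaty's commonly tabulated one, and Taha's closed-form RI = 1.98 (n - 2) / n gives similar values, which is why either source is usually acceptable as long as it is used consistently:

import numpy as np

SAATY_RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
            6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    n = A.shape[0]
    lam_max = max(np.linalg.eigvals(A).real)   # principal eigenvalue
    ci = (lam_max - n) / (n - 1)
    return ci / SAATY_RI[n]

A = np.array([[1.0, 3.0, 5.0],                 # made-up pairwise comparisons
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
cr = consistency_ratio(A)
print(f"CR = {cr:.3f} ({'acceptable' if cr < 0.1 else 'revise judgments'})")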
  • asked a question related to Operations Research
Question
6 answers
Dear all
I am working on an inventory model in a closed-loop supply chain system to optimize the cost of the system. There are lots of models to optimize the cost of the system, but I am looking to incorporate the concept of Analytics to handle real-time inventory.
Looking forward to hearing from you.
with regards
Sumit Maheshwari
Relevant answer
Answer
This is a challenging problem, especially for manufacturing companies. Needless to say, this problem is undergoing lots of research, and practically there are no viable examples of companies that have achieved full success; the best cases have hit a 90% mark (though the metrics of such proclaimed achievements are highly debatable). As inventory management is driven by close-to-real-time demand and supply data (analytics) plugged into AI and machine learning tools, the potential to reach close to 99% efficiency in CLSCM might become a reality. But how do we identify and embed external disruptions like COVID-19 into this model? And to what extent will these external disruptions impact CLSCM-based inventory dynamics?
  • asked a question related to Operations Research
Question
6 answers
Cycle counting
i) is a process by which inventory records are verified once a year
ii) provides a measure of inventory accuracy
iii) provides a measure of inventory turnover
iv) assumes that all inventory records must be verified with the same frequency.
Relevant answer
Answer
Dear,
Cycle counting of inventory is used in industry for several purposes:
it tracks how many times inventory turns over in the organization, what the consumption ratio is, and what the value impact on the organisation is. Through this we can:
1. reduce inventory carrying cost,
2. reduce inventory testing cost,
3. and avoid losses in product manufacturing or supply.
  • asked a question related to Operations Research
Question
15 answers
The sampling allocation problem is an important problem in survey statistics. Recently, many authors have formulated it as a nonlinear optimization problem and solved it. However, Neyman Allocation also comes under the optimal allocation techniques. Why?
Thanks in advance!
Relevant answer
Answer
Thanks, Professor Michael Patriksson, for the encouraging and supportive words. I have some discussion points; please share your contact details. My email address: irfii.st@amu.ac.in.
  • asked a question related to Operations Research
Question
2 answers
(Proposal) Oil Refinery Production: What is the company's goal?
----------------
[Purpose: get Engineers & Scientists thinking outside their box ... think -large- problems. What's possible today vs. needs for tomorrow?]
Question: Are you interested in increasing your sales income by several orders of magnitude? Are you willing to think outside the box? If so, please read on. This is a large proposal, the size of NASA's Apollo Space program back in the early 1960s.
A new level of computers and software will be required for this oil production proposal. Today's computers are algebraic, i.e. bare-bones designs that run like a 'Model T' car: they 'run' along at a '30 mph' clip. We need fast super computers, like the Wright Brothers' 'airplane', that can run at a '3,000 mph' clip. These super computers need 'Automatic Differentiation' based technologies, i.e. smart thinking abilities. NASA realized this when starting the Apollo space program; it spent tons to get it and put us on the moon.
---
Oil production depends on many factors; e.g. Supply, Demand, present inventory, etc. An oil company may have many refineries with many distillation units. How can a company simulate extracting products 'a', 'b', and 'c' from its crude oil? Assume the company wants product 'a' on the west coast, 'b' in the middle of US, and 'c' on the east coast. Assume the company has refineries 'x' on west coast, 'y' in middle US, and 'z' on east coast. How does one model such a company's oil production so as to produce/refine the 'right' amounts of each product at each refinery site in order to meet the company's goal of maximizing profits?
Partial Differential Equations (PDEs) will be used to model the crude oil distillation for each distillation unit at each site; i.e. many PDEs must be solved at once! Are there computers large enough to handle such problems today? Are there plans for some super computer that will be able to handle many (1,000s) PDEs at once?
With maintenance of distillation units being continual (e.g. fix one, stop another), this will be a constant problem when trying to simulate the next day's crude oil work load. For example, assume a company has 600 distillation units overall. That means a computer program would be required to solve 600 PDEs ASAP, i.e. roughly 10 hours of PDE solving. My past experience with modeling in the FortranCalculus™ language/compiler taught me that a model requiring time Tmod to execute would require around 2×Tmod to reach the optimal solution. That would then get us into the 20-hour range for 600 PDEs. Too long! We need faster computers and solvers to get into reasonable solution times. Ideas how this could be done today? For more, visit http://fortrancalculus.info/apps/fc-compiler.html ... it solves algebraic equations through ordinary differential equations.
Many people thought that the Wright Brother's idea of an 'airplane' would never fly. But, what if it did? What if Oil sales income doubled or more? Would crude oil prices increase? (Everyone is going to want more for their piece of the pie, right?) How would this effect your company?
John D Rockefeller was quoted saying, "If you want to succeed you should strike out on new paths, rather than travel the worn paths of accepted success."
Any future John D Rockefellers reading this proposal? Are you interested in increasing your company profits by several orders of magnitude? Does your company have a company goal or objective that all employees know about and follow? If so, continue with this proposal by reading my article "Company Goal: Increase Productivity?" (a dozen pages). Go to the web page eBook on Engineering Design Optimization using Calculus level Methods, A Casebook Approach and click on the 'download' link; it's free!
Relevant answer
Answer
Dear Phil,
The complexity of the refining business grows every day with new markets, new feedstocks and new regulations. Additionally, refineries are facing declining profit margins. I think that, to sustain their profitability, refineries must leverage process simulation technology and capabilities to achieve best-in-class operational excellence. Advanced process simulation could be very helpful in improving declining profit margins. Here are a couple of areas where it can be used effectively to reduce the market burden.
1. Heat exchanger maintenance and monitoring: thorough simulation of heat exchanger operations within the broader process simulation model. The heat exchanger design tool must also simulate all major heat exchanger types used in the refining industry. Furthermore, the solution should allow process engineers to easily develop and integrate their heat exchangers' simulation as part of the refinery flowsheet without leaving their familiar process simulation environment.
2. Column operations troubleshooting: an integrated process simulator that accurately simulates the thermal and hydraulic behavior of the column unit provides enough information to support column operations. With the correct process simulation software, users can accurately simulate the thermo-hydraulic functioning of columns based on their construction and operating conditions. As a result, they can better understand the columns' behavior and avoid operational mishaps. Simulating the operation of the column in the broader setting of the overall process enables users to identify root causes of problems and determine the optimal point of operation for the overall process unit.
3. Integrated refining and gas plant analysis: refineries need a solution that meticulously simulates the entire gas plant, including acid gas treatment units, sulfur recovery, tail gas units and flare systems, together with the mainstream refining process units such as distillation units and reactor units. Advanced simulation technology gives refiners enough confidence to push the levels of sour crudes closer to the limit the refinery can process while meeting regulations. Feed flexibility, capacity creep and operating-expenditure optimization, enabled by integrated refining and gas plant process simulation, can save refiners millions each year in operating margins while ensuring maximum reliability and plant uptime. In addition, rigorous simulation of the gas plant operation offers refineries visibility and the ability to better document their emission levels. This capability is valuable for boosting their profit margins.
4. Planning model updates for refineries: the ideal option is to give refinery process engineers the ability to maintain the planning models with the help of advanced process simulation software offering a streamlined workflow to update the planning models, enabling frequent updates when the models fall out of sync with the operating range of the refinery.
5. Refinery-wide process analysis: with an advanced integrated solution for process simulation and refinery planning, refineries can develop a refinery-wide process model out of their refinery-wide planning model in a relatively short period of time. The accuracy of the simulation model can be enhanced by selectively incorporating rigorous models of reactor units into the refinery-wide flowsheet.
Ashish
  • asked a question related to Operations Research
Question
20 answers
I want to use GAMS to solve an MINLP problem, but I would like to use metaheuristic algorithms such as particle swarm optimization or a genetic algorithm instead of the built-in GAMS solvers like CPLEX or BARON.
Can I do this?
Relevant answer
Answer
Reza Lotfi Except that you will most probably not find an optimum.
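If you do leave the exact GAMS solvers for a metaheuristic, the usual pattern is to fold the constraints into the objective with a penalty and accept a heuristic (not provably optimal) solution, as noted above. Below is a minimal particle swarm sketch in plain Python/numpy, not tied to GAMS; all names are illustrative, and integer variables would additionally need rounding or a transfer function.

import numpy as np

def penalized(x, f, constraints, rho=1e3):
    # f: objective to minimize; constraints: list of functions g requiring g(x) <= 0
    viol = sum(max(0.0, g(x))**2 for g in constraints)
    return f(x) + rho * viol

def pso(f, constraints, lb, ub, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(lb)
    rng = np.random.default_rng(0)
    x = rng.uniform(lb, ub, (n_particles, dim))       # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()                                  # personal bests
    pbest_val = np.array([penalized(p, f, constraints) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
        x = np.clip(x + v, lb, ub)
        vals = np.array([penalized(p, f, constraints) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# toy example: min (x0-1)^2 + x1^2  s.t.  x0 + x1 <= 1
sol = pso(lambda x: (x[0]-1)**2 + x[1]**2,
          [lambda x: x[0] + x[1] - 1],
          lb=np.array([-5.0, -5.0]), ub=np.array([5.0, 5.0]))
print(sol)

As the answer above warns, nothing in such a loop certifies optimality; for an MINLP of modest size, a global solver within GAMS is usually the safer choice.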
  • asked a question related to Operations Research
Question
3 answers
In two-stage stochastic optimization, why do the formulations I see have one group of equations for the 1st stage and another for the 2nd stage, with the two groups solved simultaneously? I thought we would first solve the 1st-stage equations, then take the results and substitute them into the 2nd-stage equations (a new problem) - or is there something I overlooked?
Also, if they are solved simultaneously, why don't we combine the equations of the 1st and 2nd stages?
My case study is a power system with renewable energy uncertainty: when I make day-ahead decisions, the power dispatch of each generator is computed (1st-stage decisions); then, after the realization of the uncertain events (renewable output), redispatch is done in the 2nd stage using reserves, and possibly some load shedding.
My question is: how are the decisions of the two stages made simultaneously, as I see in some papers? Why don't we optimize the 1st stage, run our optimization, then take the results, apply them in the 2nd-stage problem, and run it again? And if the optimization of the two stages is made simultaneously, why are the equations (constraints) of the two stages not combined, given that I see separate 1st-stage and 2nd-stage constraints?
Relevant answer
Answer
I agree that both decisions should be considered together, and I know of several large power producers that do just that. I wrote the original scheduling software for several of TVA's plants, and more that was used at the Load Control Center. I have performed many historical simulations of this sort and recently published a book on the subject (https://www.amazon.com/dp/B07YJ1JFLS ). The book will be free on the Tuesday after Christmas. The software (including several plant models) is free here: http://dudleybenton.altervista.org/software/index.html More of the power plant models can be seen here: http://dudleybenton.altervista.org/projects/Power%20Plants/PowerPlants.html Appendix A of the book has a list of links to publications you can download free, with a description of each. Appendices B, C, and D explain how to get weather data for anywhere on Earth and build it into the model. I have provided spreadsheet models for several plants that go out and grab the latest weather predictions, estimate the capacity of multiple units, factor in the ramp rates and demand, then calculate the advantage of bringing units up or down, switching from simple cycle to combined cycle, and turning duct firing on or off, including how this might impact their emissions quotas. Of course, these last are proprietary, but I'd be glad to get you started building your own.
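For reference, what the papers show is the deterministic equivalent ("extensive form") of the two-stage problem, in which the first-stage decision x and one copy of the second-stage decision y_s per scenario s are optimized together:

min  c'x + Σ_s p_s q_s' y_s
s.t. A x = b                                      (1st-stage constraints)
     T_s x + W y_s = h_s   for every scenario s   (2nd-stage constraints)
     x ≥ 0, y_s ≥ 0

So the two constraint groups really are combined into a single model, linked through x. Solving the 1st stage on its own and only then the 2nd stage would ignore the expected recourse cost when choosing x, which is exactly why the stages are solved simultaneously; this is the standard form in textbooks on stochastic programming.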
  • asked a question related to Operations Research
Question
4 answers
I am a student at Ghulam Ishaq Khan Institute, Pakistan, and I am conducting research on evaluating the barriers to the adoption of Industry 4.0 in the construction industry of developing countries, with Pakistan as a case in point. You are requested to fill in the attached questionnaire. It will take 15-20 minutes of your precious time. Your response will be highly appreciated. Once the survey has been filled in, kindly reply to this discussion with the updated response file.
Thanks
Relevant answer
Answer
  • asked a question related to Operations Research
Question
2 answers
Dear Pierre Le Bot, thank you for introducing these resources. Could I ask you to answer these 2 questions: 1. What is the practical implication of CICA? Please mention some CICAs for a scenario in which a worker's hand is cut off due to a conveyor belt sticking. 2. After multiplying the results of the 3 parameters (no-reconfiguration probability, SF, CICA) and obtaining a probability number, how is the obtained probability interpreted? Regards
Relevant answer
Answer
Hello Amid,
I am very sorry to see only today that you asked me a question two years ago!
My answer:
1. What is the practical implication of CICA? Please mention some CICAs for a scenario in which a worker's hand is cut off due to a conveyor belt sticking.
MERMOS is built at the level of the failure of the working team, so I do not know how to consider that event without more information. Our assumption is that failure happens because of a rational teamwork behavior that is no longer adequate. The analyst describes with CICAs the behaviors of the team that are at the center of the failure story being quantified. Here the CICAs could be: 1. the team (or the worker, if there is no team) wants to fix the conveyor; 2. the team wants to save time by not switching off the conveyor's power source. Those two elements are enough to explain the story.
2. After multiplying the results of the 3 parameters (no-reconfiguration probability, SF, CICA) and obtaining a probability number, how is the obtained probability interpreted?
The three elements (situation, CICAs, non-reconfiguration) are the elements required for the failing scenario to exist (the situation generates the CICAs, and the CICAs last too long without reconfiguration). They have to be conditional: the situation-features probability is the conditional probability that that particular situation occurs given the general context of the analysis; the CICAs occur with a given probability given that situation; and the non-reconfiguration depends on both. The analyst has to try to build a plausible scenario by setting the probability of the CICAs to 1. That means he has to describe the situation features precisely (the more precise they are, the lower the probability). The non-reconfiguration takes into account the recovery induced by the MMI and the organization (redundant roles, verification by procedures, ...).
If these explanations are useful but not enough, do not hesitate to ask me again.
  • asked a question related to Operations Research
Question
4 answers
Hello,
I submitted a short communication to "Operations Research Letters" on the 8th of July 2019. After some days waiting for an editor to be assigned, it reached the "with the editor" stage. It has been in that stage for 38 days now (as of today).
Looking online, I only found one mention of the review-process time, on Scimago, obtained from voluntary contributions by authors only (i.e., no official data), and it read 18 months. By looking at previous issues of ORL, I noticed that the time elapsed between a letter being "Received" and being "Accepted" ranges from 4 to 8 months.
I would like to ask if anyone has ever published in ORL, and how much time it took to move from the "with editor" state to the "invited reviewers" stage.
Thank you in advance
EDIT: wrote "reviewer" instead of "editor"
Relevant answer
Answer
In my case, the first review took ten weeks, the second review (with the acceptance) two weeks, then ten days to appear online and two months to be published afterward. Very fast, by my criteria.
  • asked a question related to Operations Research
Question
8 answers
In my research work, I want to construct a mathematical programming model for a supply chain network problem. I have assumed the production cost to be linear in nature. Is this assumption correct, or should I change it? Please suggest, with a valid justification.
Relevant answer
Answer
Dear Sir, it depends on many factors of the environment, such as demand, product nature and product life cycle; the cost may be linear, quadratic, exponential, stock-dependent, etc.
  • asked a question related to Operations Research
Question
9 answers
My problem consists of:
1. More than a thousand constraints and variables.
2. It is purely 0-1 programming, i.e., all variables are binary.
3. Kindly note that I am not a good programmer.
Please provide me with some links to books or videos discussing the application of GA in MATLAB for solving 0-1 programming with a large number of variables and constraints.
I have gone through many YouTube videos, but they use examples with only two or three variables and without integer restrictions.
Relevant answer
Answer
Simple:
1) Open the Optimization Toolbox.
2) Select the GA solver (and enter your objective function and constraint file details).
3) Set the lower bound to 0 and the upper bound to 1 (one pair of bounds per variable).
4) Under integer variables, put the indices of all the variables whose values must be either 0 or 1.
5) Run.
You will get the result.
You may require basic knowledge of how to use the Optimization Toolbox.
Hope that you can do it now.
Sincerely,
Alam
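For readers without the MATLAB Optimization Toolbox, here is a minimal self-contained binary GA in plain Python/numpy (an illustrative sketch, not MATLAB's ga), handling constraints of the form Ax ≤ b with a penalty - the usual trick for large 0-1 problems:

import numpy as np

rng = np.random.default_rng(1)

def fitness(pop, c, A, b, rho=1e4):
    # maximize c @ x subject to A @ x <= b; infeasibility is penalized
    obj = pop @ c
    viol = np.clip(pop @ A.T - b, 0, None).sum(axis=1)
    return obj - rho * viol

def binary_ga(c, A, b, pop_size=100, gens=300, p_mut=0.01):
    n = len(c)
    pop = rng.integers(0, 2, (pop_size, n))
    for _ in range(gens):
        fit = fitness(pop, c, A, b)
        # binary tournament selection
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
        # one-point crossover between consecutive parents
        children = parents.copy()
        for k, cp in enumerate(rng.integers(1, n, pop_size // 2)):
            children[2*k, cp:], children[2*k+1, cp:] = parents[2*k+1, cp:], parents[2*k, cp:]
        # bit-flip mutation
        flips = rng.random(children.shape) < p_mut
        pop = np.where(flips, 1 - children, children)
    fit = fitness(pop, c, A, b)
    return pop[fit.argmax()], fit.max()

# toy 0-1 knapsack: four item values, one weight constraint with capacity 8
x, f = binary_ga(c=np.array([6.0, 5.0, 4.0, 3.0]),
                 A=np.array([[5.0, 4.0, 3.0, 2.0]]),
                 b=np.array([8.0]))
print(x, f)

With more than a thousand binary variables, expect to tune the population size, penalty weight and mutation rate; and, as noted elsewhere in this thread, a MIP solver with a proper formulation will usually beat a GA on pure 0-1 problems.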
  • asked a question related to Operations Research
Question
5 answers
I want to assign n different customers to m different stores (with n > m) and at the same time do vehicle routing between the stores and the customers. A customer can be assigned to only one store, but a store can serve many customers; the maximum number of customers it can serve is p. I need to find the minimum number of vehicles required to do this.
Relevant answer
Answer
The web is full of models and methods on that topic. If you create a search string on Google with the most important terms that you would like the articles or web pages to contain, you will probably find exactly what you want.
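To complement the search, the assignment half of the question is easy to write down. A minimal sketch (one possible formulation, not necessarily the best) with binary x_ij = 1 if customer i is assigned to store j and some assignment cost c_ij:

min  Σ_i Σ_j c_ij x_ij
s.t. Σ_j x_ij = 1   for every customer i   (one store per customer)
     Σ_i x_ij ≤ p   for every store j      (store capacity)
     x_ij ∈ {0,1}

The routing between each store and its assigned customers is then a capacitated VRP per store; a quick lower bound on the number of vehicles a store needs is ⌈(total demand assigned to it) / (vehicle capacity)⌉, and minimizing the fleet size is usually modeled by giving each vehicle a fixed cost.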
  • asked a question related to Operations Research
Question
9 answers
What is your opinion about the use of qualitative methods (e.g., case studies, action research) in the Operations Management field?
Relevant answer
Answer
Guilherme - my opinion is the same as it is for any discipline. Qualitative insight is just as important as quantitative. To me, regardless of topic, it is a limited worldview if we only view our disciplines according to numerical outcomes, i.e. 'how often does something happen', and do not complement that with narrative insight, i.e. 'what is the experience of what happened'. Better still, for me, is that we don't 'divorce' qualitative from quantitative if we can avoid it - especially by adopting mixed-methods approaches. We also need to be clear about paradigm positions. For instance, you classify action research as qualitative here. I don't. Action research, to me, is mixed methods (the 3rd paradigm). Action research can contain as many quantitative methods as qualitative ones, if not more. The same goes for Delphi - which can be more quantitative than qualitative - yet is still often classified as qualitative.
The two attached resources may assist. One is an article that links action research with project management - a common approach in operations management. The other is a mixed-methods chapter containing action research. Note that it is not a qualitative-research chapter.
  • asked a question related to Operations Research
Question
1 answer
In most robust optimization models, uncertain parameters are assumed to be independent. For example, Bertsimas and Sim or Ben-Tal and Nemirovski argued that it is too conservative to assume that all of the uncertain parameters in a problem simultaneously take their worst values, and for this reason they introduced their famous uncertainty sets. However, if there is some correlation between the uncertain parameters, most of them taking their worst values will not be so unexpected. Furthermore, if all parameters are perfectly correlated, we would expect that if one of them takes its worst value, all the others do the same. Therefore I think the approaches of Bertsimas and Sim or Ben-Tal and Nemirovski are suitable only under the assumption of independence of the parameters. Is that true? Can anyone advise me about this issue?
Relevant answer
Answer
Dear, did you figure out something about your question?
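For later readers: the budgeted uncertainty set of Bertsimas and Sim for a constraint row a is

U = { a : a_j = ā_j + â_j z_j,  |z_j| ≤ 1,  Σ_j |z_j| ≤ Γ }

where ā_j is the nominal value, â_j the maximal deviation, and the budget Γ limits how many parameters may deviate at once. As the question suspects, the deviations z_j are treated as free to vary independently, so the construction is most meaningful under (approximate) independence. With correlated parameters, one usually models the correlation explicitly, e.g. with an ellipsoidal set U = { ā + Δu : ‖u‖ ≤ 1 } in the spirit of Ben-Tal and Nemirovski, where the matrix Δ encodes the correlation structure.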
  • asked a question related to Operations Research
Question
6 answers
For the application of Industry 4.0, and hence making the machine self-aware, what optimization techniques could be used for a machining process? (Preferably, please explain a mathematical model or a case study.)
Relevant answer
Answer
Sometimes, in order to identify the important parameters, it is necessary to use a methodology such as design of experiments. Although it is time-consuming, it is worthwhile in order to obtain better results.
  • asked a question related to Operations Research
Question
4 answers
I am an undergraduate student in Production and Industrial Engineering, looking for a research proposal topic for applying to a doctoral program. Also, it would be great if you could suggest some reading. Any suggestions?
Thank you for your time.
Relevant answer
There are a number of research works in the area of FMS scheduling. I would suggest that you take a broader view of the scheduling challenges under different types and levels of automation. It would be instructive to look at the practical utility of the FMS concept in today's manufacturing environments. Please think through the features of an FMS and their impact on scheduling rules and their preferences.
  • asked a question related to Operations Research
Question
3 answers
For a multi-objective problem with uncertainty in demand, consider the scenario tree (attached herewith) for a finite planning horizon consisting of three time periods. It's a two objective minimization problem in which the augmented e-constraint method is utilized to obtain Pareto optimal solutions (POS).
In time period T1, only the mean demand is considered. Then, in T2, demand follows a certain growth rate in each scenario, with an expected probability of growth for each scenario. A similar pattern is outlined for T3.
The deterministic counterpart envisaged for the problem is a set of time periods with a specific pattern of growth rates for mean demand - say 15% in T1, 10% in T2 and 10% in T3.
I want to draw a comparison of the POS obtained from the stochastic and deterministic analyses. What is the best way to proceed in order to give the decision maker a whole picture of the POS, with the scenario and time period considered in both types of analyses?
Do I obtain POS sets for all the 13 scenarios from T1 to T3, or just the 9 scenarios in T3? That would mean 13 or 9 Pareto fronts for the stochastic analysis alone - in other words, a Pareto front with POS for each time period and scenario! How do I compare whatever I obtain from the stochastic analysis with the deterministic one?
Once again, the aim is to carry out the stochastic analysis and draw a comparison of the POS obtained from the stochastic and deterministic analyses for the time periods and scenarios considered.
Comments on the aforementioned approach and recommendations for alternatives are appreciated.
Relevant answer
Answer
The problem seems to be small. Under such conditions, classical approaches are appropriate.
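Since the question uses the (augmented) ε-constraint method, here is a minimal sketch of the plain ε-constraint loop for a toy bi-objective LP (Python/scipy; all data illustrative). One objective is minimized while the other is bounded by ε, and ε is swept over the range read off the payoff table:

import numpy as np
from scipy.optimize import linprog

# toy bi-objective LP: min f1 = c1 @ x and min f2 = c2 @ x, s.t. x1 + x2 >= 1, x >= 0
c1 = np.array([1.0, 2.0])
c2 = np.array([3.0, 1.0])
A = np.array([[-1.0, -1.0]])   # -x1 - x2 <= -1  encodes  x1 + x2 >= 1
b = np.array([-1.0])

# payoff table: range of f2 between its own minimum and its value at the f1-optimum
f2_min = linprog(c2, A_ub=A, b_ub=b).fun
f2_max = c2 @ linprog(c1, A_ub=A, b_ub=b).x

pareto = []
for eps in np.linspace(f2_max, f2_min, 11):
    # minimize f1 with the extra epsilon-constraint  c2 @ x <= eps
    res = linprog(c1, A_ub=np.vstack([A, c2]), b_ub=np.append(b, eps))
    if res.success:
        pareto.append((res.fun, float(c2 @ res.x)))
print(pareto)

The augmented variant additionally rewards slack in the ε-constraints so that only strictly Pareto-optimal points are returned. Applied to the question, running such a loop once per scenario model does produce one front per scenario; a common way to keep the comparison with the deterministic front manageable is to build a single front for the expected-value (stochastic) objective instead.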
  • asked a question related to Operations Research
Question
4 answers
I have a project in operations research. In my case, I have several different vehicles, but one source and one target. The vehicles have costs and must be assigned to certain areas - e.g., vehicle 1 must carry product type A, vehicle 2 must carry product type B, etc. But all products are stored in the same place. I cannot identify the problem type for this case.
Relevant answer
Dear Barış Karakum, in the first OR courses we usually teach the students that the important thing is to solve problems, not to apply models.
Thus, unless it is for theoretical purposes, which model you apply is not important. The important thing is that the model you construct solves the problem situation. Whether or not it belongs to a certain type is irrelevant.
In any case, even if it has only one source and only one destination, if it has several types of vehicles, especially if they have different characteristics, it can be considered a transportation problem. Vehicles are transformed into sources or destinations, as appropriate. Here on ResearchGate there are several articles of ours that discuss the problem of multiple transports, which may be useful to you.
We hope many successes in your research. Best regards,
José Hernández.
  • asked a question related to Operations Research
Question
26 answers
Is Shannon entropy a good technique for weighting in multi-criteria decision-making?
As you know, we use Shannon entropy to weight criteria in multi-criteria decision-making.
I think it is not a good technique for weighting in the real world because:
It uses only the decision-matrix data.
If we add some new alternatives, the weights change.
If we change the period of time, the weights change.
For example, we have 3 criteria: price, speed, safety.
Over several periods of time, the weights of the criteria vary.
For example, if our period of time is one month:
this month price may get 0.7 (speed = 0.2, safety = 0.1);
next month price may get 0.1 (speed = 0.6, safety = 0.3).
It is against reality! What is your opinion?
Relevant answer
Answer
Once I was working with several variables and I wanted to weight them. In such cases, people usually say that we had better administer a questionnaire and then define the weights of the variables through AHP, ANP or other related methods. That is quite common, but what about the bias of those who fill in the questionnaire? Therefore, I looked for other methods to weight variables based on the data themselves, and I came across entropy. In fact, I weighted the variables with each of these methods and then compared the results. The entropy results were much closer to what is going on in the real world.
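For concreteness, the standard entropy-weighting computation on a positive m × n decision matrix is short enough to sketch (Python/numpy; illustrative data). Re-running it after adding an alternative (a row) also demonstrates the questioner's point that the weights change with the data:

import numpy as np

def entropy_weights(X):
    # X: m x n decision matrix with strictly positive entries
    m, n = X.shape
    P = X / X.sum(axis=0)                          # normalize each criterion column
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy per criterion, in [0, 1]
    d = 1.0 - E                                    # degree of diversification
    return d / d.sum()                             # entropy weights

X = np.array([[250.0, 16.0, 12.0],
              [200.0, 16.0,  8.0],
              [300.0, 32.0, 16.0]])
print(entropy_weights(X))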
  • asked a question related to Operations Research
Question
6 answers
Hi All,
I have modeled an MILP using two different formulations: one formulation uses three indices, while the other uses five. Comparing the solution speed of the two formulations with the same solver (Gurobi, CPLEX), it turns out that the five-index formulation is solved faster. I am not sure why this happens - has anyone had this experience, or are there any studies related to this problem available? Please let me know.
Thanks,
Bhawesh
Relevant answer
Answer
Of the same problem? Yes, oh yes. In large-scale integer or mixed-integer optimization, such cases are legion. And indeed, quite often a "richer" formulation (which typically means MANY more integer variables) means that the solver has more alternatives. Quite often the richer formulation will have a better (smaller) duality gap, too.
  • asked a question related to Operations Research
Question
7 answers
Can numbers (the Look then Leap Rule OR the Gittins Index) be used to help a person decide when to stop looking for the most suitable career path and LEAP into it instead or is the career situation too complicated for that?
^^^^^^^^^^^^^^^^^
Details:
Mathematical answers to the question of optimal stopping in general (when you should stop looking and leap)?
Gittins index, Feynman's restaurant problem (not discussed in detail)
Look-then-Leap Rule (secretary problem, fiancée problem): (√n, n/e, 37%)
How do we apply this rule to career choice?
1- Potential ways of application:
A- n is Time .
Like what Michael Trick did: https://goo.gl/9hSJT1 . Michael Trick is a CMU Operations Research professor who applied this to decide the best time for his marriage proposal, though he seems to think that this was a failed approach.
In our case, should we do it by age? 20-70 = 50 years, so 38 years old is where you stop looking, for example? Or should we multiply 37% by 80,000 hours to get a total of 29,600 hours of career "looking"?
B - n is the number of available options, like in the secretary problem.
If we have 100 viable job options, do we just look into the first 37? If we have 10, just the first 4? What if we are still at a stage of our lives where we have thousands of career paths?
2- Why the situation is more complicated in the career choice situation:
A- You can want a career and pursue it and then fail at it.
B - You can mix career paths. If you take option C, it can help you later on with option G. For example, if I work as an Internet Research Specialist, that will help me later on if I decide to become a writer, so there is overlap between the options and a more dynamic relationship. Also, the option you choose in selection #1 will influence the likelihood of choosing other options in selection #2 (for example, if in 2018 I choose to work at an NGO, that will influence my options if I want to make a career transition in 2023, since it will limit my possibility of entering the corporate world in 2023).
C- You need to be making money so "looking" that does not generate money is seriously costly.
D- The choice is neither strictly sequential nor strictly simultaneous.
E - Looking and leaping alternate over a lifetime, unlike the example where you keep looking and then leap once.
Is there a practical way to measure how the probability of switching back and forth between our career options affects the optimal exploration percentage?
F- There is something between looking and leaping, which is testing the waters. Let me explain. "Looking" here doesn't just mean "thinking" or "self-reflection" without action. It could also mean trying out a field to see if you're suited for it. So we can divide looking into "experimentation looking" and "thinking looking". And what separates looking from leaping is commitment and being settled. There's a trial period.
How does this affect our job/career options example since we can theoretically "look" at all 100 viable job positions without having to formally reject the position? Or does this rule apply to scenarios where looking entails commitment?
G- * You can return to a career that you rejected in the past. Once you leap, you can look again.
"But if you have the option to go back, say by apologizing to the first applicant and begging them to come work with you, and you have a 50% chance of your apology being accepted, then the optimal explore percentage rises all the way to 61%." https://80000hours.org/podcast/episodes/brian-christian-algorithms-to-live-by/
3 - A real-life example:
Here are some of my major potential career paths:
1- Behavioural Change Communications Company 2- Soft-Skills Training Company, 3- Consulting Company, 4-Blogger 5- Internet Research Specialist 6- Academic 7- Writer (Malcolm Gladwell Style; Popularization of psychology) 8- NGOs
As you can see, the options here overlap to a great degree. So with these options, should I just say "OK, the square root of 8 is about 3", pick 3 of those, try them for a year each, and then stick with whatever comes next and is better?!
Relevant answer
Answer
Hey Kenneth Carling , I got this number from page 29 of their book (Always Be Stopping, Chapter 1). They quote research results from Seale & Rapoport (1997), who found that on average their subjects leapt at 31% when given the secretary problem - they say that most people leapt too soon. They also say that there are more studies ("about a dozen") with the same result, which makes it more credible in my view.
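For what it's worth, the 37% figure is easy to reproduce by simulation. A small Monte Carlo sketch of the secretary problem (illustrative Python):

import numpy as np

rng = np.random.default_rng(0)

def success_rate(n, cutoff, trials=20000):
    # look at the first `cutoff` candidates, then leap at the first one better than all seen
    wins = 0
    for _ in range(trials):
        ranks = rng.permutation(n)                     # rank 0 is the single best candidate
        best_seen = ranks[:cutoff].min() if cutoff else n
        chosen = next((r for r in ranks[cutoff:] if r < best_seen), ranks[-1])
        wins += (chosen == 0)
    return wins / trials

for cutoff in (10, 25, 37, 50, 75):
    print(cutoff, round(success_rate(100, cutoff), 3))  # peaks near 37 = 100/e, at about 0.37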
  • asked a question related to Operations Research
Question
28 answers
There are lots of Optimization method /Evolutionary algorithms (EAs) in literature. Some of them is more effective (for solving linear/nonlinear problem) compared to other. But we don’t know which will fit our model. As a result we checked for everything as we can do. But cant get the desire result. Some of those methods are 1. Genetic algorithms (GA) ; Haupt and Haupt (2004) 2. Pattern search (Mathlab) 3. Particle swarm optimization (PSO), Binary Particle Swarm Optimization (BPSO); Eberhart and Kennedy (1995) 4. Bee optimization; Karaboga and Bosturk (2007) Pham et al (2006) 5. Cuckoo algorithm; Yang and Deb (2009, 2010) 6. Differential evolution (DE) ; Storn and Price (1995, 1997) 7. Firefly optimization; Yang (2010) 8. Bacterial foraging optimization; Kim, Abraham and Cho (2007) 9. Ant colony optimization (ACO) ; I Dorigo and Stutzle (2004) 10. Fish optimization; Huang and Zhou (2008) 11.Raindrop optimization ; Shah-Hosseini (2009) 12.Simulated annealing ; Kirkpatrick, Gelatt and Vecchi (1983) 13.Biogeography-based optimization (BBO), 14. Chemical reaction optimization (CRO) 15. A group search optimizer (GSO), 16. Imperialist algorithm 17. Swine flow Optimization Algorithm. 18. Teaching Learning Based Optimization (TLBO) 19. Bayesian Optimization Algorithms (BOA) 20. Population-based incremental learning (PBIL) 21. Evolution strategy with covariance matrix adaptation (CMA-ES) 22. Charged system search Optimization Algorithm 23. Continuous scatter search (CSS) Optimization Algorithm 24. Tabu search Continuous Optimization 25. Evolutionary programming 26. League championship algorithm 27. Harmony search Optimization algorithm 28. Gravitational search algorithm Optimization 29. Evolution strategies Optimization 30. Firework algorithm, Ying Tan, 2010 31. Big-bang big-crunch Optimization algorithm, OK Erol, 2006 32. Artificial bee colony optimization (ABC), Karaboga,2005 33. Backtracking Search Optimization algorithm (BSA) 34. Differential Search Algorithm (DSA) (A modernized particle swarm optimization algorithm) 35. Hybrid Particle Swarm Optimization and Gravitational Search Algorithm (PSOGSA) 36. Multi-objective bat algorithm(MOBA) Binary Bat Algorithm (BBA) 37. Flower Pollination Algorithm 38. The Wind Driven Optimization (WDO) algorithm 39. Grey Wolf Optimizer (GWO) 40. Generative Algorithms 41. Hybrid Differential Evolution Algorithm With Adaptive Crossover Mechanism 42.Lloyd's Algorithm 43.One Rank Cuckoo Search (ORCS) algorithm: An improved cuckoo search optimization algorithm 44. Huffman Algorithm 45. Active-Set Algorithm (ASA) 46. Random Search Algorithm 47. Alternating Conditional Expectation algorithm (ACE) 48. Normalized Normal Constraint (NNC) algorithm 49. Artificial immune system optimization; Cutello and Nicosia (2002) 50. fmincon .
Besides these, there are many other recently invented optimization algorithms, generally called hybrid optimization techniques because they combine two methods. If we share our experiences, it will be helpful for all of us who are in the field of optimization. I may be missing some methods; researchers are requested to add those algorithms and the way they are used - for example, many models need initial values, weights, velocities, different ways of writing the objective function, etc. I am facing some problems; that is why I created this format, which will definitely help me as well as all other researchers in this field. Expecting resourceful and cordial cooperation.
Relevant answer
Answer
Dear Mashud,
I have some experience in improving optimization algorithms, such as the quantum invasive weed optimization algorithm and the world cup optimization algorithm.
From my experience, two points are important in selecting a good optimization algorithm:
1) Check whether your problem can be solved by the classic methods; if so, do not go to the metaheuristics.
2) If your problem is NP-hard and cannot be solved by the classic methods:
after a lot of testing, I found that there are no major differences among the evolutionary algorithms. Of course, in some cases one algorithm may perform better or be faster, but this prominence is not so bold.
3) Two aspects that you should pay attention to (in evolutionary algorithms) are exploration and exploitation.
Exploration is for the parts of the search space about which you have no information yet, and exploitation is for when you have an approximate solution and want to find a better one near it.
These two mechanisms comprise the structure of all optimization algorithms. For example, in the genetic algorithm, mutation is an exploration operator and crossover is an exploitation operator.
So, based on your requirements, select an algorithm that is strong in the aspect you care about.
Good luck,
  • asked a question related to Operations Research
Question
4 answers
There are N tasks and M workers.
  1. For every tuple task-worker the efficiency is known;
  2. For every task one worker must be assigned;
  3. For every worker at least one task must be assigned;
  4. For every worker multiple tasks can be assigned;
  5. Tasks must be grouped (e.g. by location), and for every group the number of workers is fixed. Every worker must be in exactly one group.
Can you suggest an algorithm or approach for optimal (or suboptimal) assignment (maximal efficiency)?
As far as my knowledge goes:
  1. Without 4. and 5. this problem can be stated as the “Assignment Problem”, for which there are algorithms with polynomial complexity;
  2. Without 4. this problem can be addressed as “Generalized Assignment Problem” which is NP-hard;
  3. Without 4. and if M = 1 this problem can be addressed as “0-1 Knapsack Problem”.
I can’t see how to use any of the mentioned to address my problem.
Relevant answer
Answer
Some clarifications are necessary.
1. What is the purpose of the efficiency? It is not mentioned later.
2. In point 2, is it exactly one worker, at least one worker, or something else?
3. What parameter do you want to minimize (maximize)?
The problem could be modeled using linear programming. You can find the LP model for assignment and try to modify it. For some small instances, try to state the model by hand and type it into an LP solver like CPLEX. In a few try-error iterations you should be able to formulate the problem. With a correct formulation of the LP problem, you should get optimal values for instances that are not too large.
In general, the formulation should be proved to be equivalent to the problem, but that is another issue.
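As a sketch of what such a formulation could look like (ignoring the grouping condition 5. at first), with e_ij the known efficiency of assigning task i to worker j and binary x_ij:

max  Σ_i Σ_j e_ij x_ij
s.t. Σ_j x_ij = 1   for every task i     (condition 2.)
     Σ_i x_ij ≥ 1   for every worker j   (condition 3.)
     x_ij ∈ {0,1}   (condition 4. is automatic, since nothing bounds Σ_i x_ij from above)

For condition 5., one possible extension is a binary y_jg = 1 if worker j is placed in group g, with Σ_g y_jg = 1 for every worker, Σ_j y_jg equal to the fixed size of group g, and x_ij ≤ y_j,g(i), where g(i) is the group of task i. This is only one way to write it, and it should be checked against the intended meaning of the groups.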
  • asked a question related to Operations Research
Question
9 answers
I am looking for recent research questions in Reinforcement Learning (RL) within Artificial Intelligence (AI). I also want to know where it is applicable. I know it is applied in games, robotics and operations research; I would like to know more about it. Are there any other areas where it is applied?
Relevant answer
Answer
  • Reinforcement learning in robotics: how can a robot be like a human? Emotion, reaction and intelligence should be as human-like as possible.
  • Safety in reinforcement learning: how to avoid unwanted behaviour and reward hacking in RL.
  • Competitive and cooperative multi-agent reinforcement learning.
  • Data efficiency: react and take a decision from just a single training example.
  • asked a question related to Operations Research
Question
3 answers
There is a need to automate several industrial tasks, each of which may require a number of humans and robots to perform it; some can be done only by robots. Say there is a task X. My output looks like: task X can be done if around 4 robots are assigned to it, or if 1 human and 1 robot are assigned to it. My input will describe the task, based on which an algorithm will compute the desired output.
So basically could you share some research work where resource requirement for industrial tasks are modeled mathematically or even empirically? Or could you point to some existing algorithms in the domain of industrial engineering or otherwise where researchers have tackled the problem of identifying how much resources need to be thrown on a task to finish it successfully?
Relevant answer
Answer
I am following the answers.
Regards
  • asked a question related to Operations Research
Question
7 answers
I have started programming the binary bat algorithm to solve the knapsack problem. I have a misunderstanding of the position concept in binary space:
Vnew = Vold + (Current - Best) * f;
S = 1 / (1 + Math.exp(-Vnew));
X(t+1) = 1 if S > Rnd, else 0
The velocity-updating equation uses both the position from the previous iteration (Current) and the global best position (Best). In the continuous version of BA, the position is a real number, but in the binary version the position of a bat is represented by a binary number; in the knapsack problem it indicates whether the item is selected or not. In the binary version, a transfer function is used to transform the velocity from a real number into a binary position. I am confused about whether the position in BBA is binary or real. If binary, then (Current - Best) can only be 1 - 0, 0 - 1, 1 - 1, etc.; and if real, how do we get the continuous representation when there is no continuous equation to update the position (in the original BA, the position update is X(t+1) = X(t) + Vnew)?
Relevant answer
Answer
Unless you are doing just an "exercise", I discourage you from trying "new" metaheuristics for knapsack. Besides it being a widely studied problem, there are very good knapsack-specific algorithms. Check David Pisinger's webpage for codes and test-instance generators.
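To answer the position question directly: in the standard binary scheme the velocity stays real-valued while the position is binary, so (Current - Best) is a difference of bits (always -1, 0 or +1) and no continuous position is kept at all - the bit is re-sampled through the transfer function instead of moved by X + V. A minimal sketch of one bat's update (assuming the common sigmoid-transfer variant; names illustrative):

import math, random

def update_bat(x, v, x_best, f):
    # x, x_best: binary lists (positions); v: real-valued velocity list; f: frequency
    for j in range(len(x)):
        v[j] = v[j] + (x[j] - x_best[j]) * f      # bit difference: -1, 0 or +1
        s = 1.0 / (1.0 + math.exp(-v[j]))         # sigmoid transfer to [0, 1]
        x[j] = 1 if random.random() < s else 0    # re-sample the bit; no x + v step
    return x, v

Note that some BBA variants use a V-shaped transfer function and flip the current bit instead of re-sampling it; the velocity, however, is real-valued in either case.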
  • asked a question related to Operations Research
Question
7 answers
In the CRS model, the input- and output-oriented objective function values are reciprocals of each other. But why not in VRS?
Relevant answer
Answer
It is not sufficient to attribute it just to the difference in orientation between the CCR and BCC models, since "maximizing the outputs subject to the given inputs" and "minimizing the inputs subject to the given outputs" under the CRS assumption provide reciprocal efficiency scores. Thus, we first have to consider the additional free-in-sign variable in the multiplier BCC model and the additional constraint on the lambdas (Σλ = 1) in its envelopment counterpart. This changes the shape of the frontier (the BCC efficient frontier) and allows, besides CRS, for increasing and decreasing returns to scale. The above, in conjunction with the orientation that guides the projections, produces non-reciprocal efficiency scores between the input and output orientations under the VRS assumption.
For a schematic representation of the above, check the attached image.
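For reference, the input-oriented BCC envelopment model for the evaluated DMU (x₀, y₀) is

min  θ
s.t. Σ_k λ_k x_k ≤ θ x₀
     Σ_k λ_k y_k ≥ y₀
     Σ_k λ_k = 1,  λ_k ≥ 0

Dropping the convexity constraint Σλ = 1 gives the CCR (CRS) model, whose optimal θ is exactly the reciprocal of the output-oriented optimal φ. With Σλ = 1 present, the input- and output-oriented projections land on facets of the piecewise-linear VRS frontier with generally different returns to scale, so θ ≠ 1/φ in general.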
  • asked a question related to Operations Research
Question
4 answers
In optimization problems we often obtain a local optimum, but is it global? Are there any metaheuristic algorithms that obtain a global solution? If there are, what is the name of such an algorithm and, if possible, how can we get that solution?
Relevant answer
Answer
I do not wish to let you down, but the basic answer is "very seldom", and another one is "you will not know whether you have stumbled upon an optimal solution, because there is no natural termination criterion based on the concept of optimality". (In contrast, a branch-and-bound, or branch-and-cut, methodology is based on local AND global bounds on the optimal value generated throughout the procedure; in most cases a correct procedure will fix some variables to their optimal values before termination, and it will be able to discard a very large portion of the search space as infeasible or inferior, in which case we do know for sure that an optimum has been reached.)
If the structure of the problem makes it amenable to special methods, such as Benders decomposition - when you have a mix of integer and continuous variables - you also have a fail-proof method.
  • asked a question related to Operations Research
Question
13 answers
Dear Friends and colleagues
I have an optimization model in which I have a nonlinear term of the following form:
x(t) * a(k)
where x and a are variables; a is a binary variable, and the sets over which the two variables are defined are not the same. Would you please suggest a method that I can use to handle this term and transform my model into a mixed-integer linear program?
Thank you for your suggestions.
Relevant answer
Answer
Olivér Ősz is right.
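For readers finding this thread later: the standard linearization (presumably what was suggested above) introduces a new continuous variable z(t,k) to replace each product x(t) * a(k), assuming known bounds L ≤ x(t) ≤ U:

L * a(k) ≤ z(t,k) ≤ U * a(k)
x(t) - U * (1 - a(k)) ≤ z(t,k) ≤ x(t) - L * (1 - a(k))

When a(k) = 0 these constraints force z(t,k) = 0, and when a(k) = 1 they force z(t,k) = x(t), so the model becomes a MILP at the cost of one extra continuous variable and four constraints per (t,k) pair.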
  • asked a question related to Operations Research
Question
13 answers
Is it possible to manage the supply chain in a more effective way?
Relevant answer
Answer
Dear Abu Hashan Md Mashud ,
Kindly clarify your question.
  • asked a question related to Operations Research
Question
3 answers
For an optimization problem of fuel delivery from a depot to petrol stations, the solution approach is to use a tabu-search neighborhood for solving the model (the objective is to minimize the delivery cost). How can this be done in LINGO or GAMS?
  • asked a question related to Operations Research
Question
11 answers
In multi-objective optimization problems we often say that the objective functions are conflicting in nature. In what sense are the objective functions said to be conflicting with each other? Also, how can it be shown numerically that the objective functions in our multi-objective problem are conflicting in nature?
Relevant answer
Answer
If you are, for example, designing an automobile, you might have many objectives. E.g., you want the automobile to be big (spacious) inside, but also to have smaller outer dimensions so that it is easier to park. Obviously, inside size and outside size are strongly dependent, in the sense that you cannot considerably increase the inside space without increasing the outer size. So there you have conflicting objectives.
So if improvement in one objective leads to worsening of another, then these two objectives are conflicting.
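As for checking conflict numerically, a common device is the payoff table: optimize each objective individually and evaluate the remaining objectives at each individual optimum; if minimizing one objective worsens another relative to that objective's own optimum, the two conflict on the feasible set. A minimal sketch (toy LP data, Python/scipy; all values illustrative):

import numpy as np
from scipy.optimize import linprog

# toy feasible set: x1 + x2 >= 1, x >= 0; objectives f1 = x1 + 2 x2, f2 = 3 x1 + x2
c = [np.array([1.0, 2.0]), np.array([3.0, 1.0])]
A, b = np.array([[-1.0, -1.0]]), np.array([-1.0])

payoff = np.empty((2, 2))
for i, ci in enumerate(c):
    x_opt = linprog(ci, A_ub=A, b_ub=b).x      # minimize objective i alone
    payoff[i] = [cj @ x_opt for cj in c]       # evaluate both objectives there
print(payoff)  # payoff[i][j]: value of f_j at the optimum of f_i;
               # off-diagonal entries above the column's diagonal signal conflict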