Science topic

Operations Research - Science topic

A group of techniques developed to apply scientific methods and tools to solve the problems of DECISION MAKING in complex organizations and systems. Operations research searches for optimal solutions in situations of conflicting GOALS and makes use of mathematical models from which solutions for actual problems may be derived. (From Psychiatric Dictionary, 6th ed)
Questions related to Operations Research
  • asked a question related to Operations Research
Question
20 answers
I am planning to use DEA (Data Envelopment Analysis) for my ongoing research. I have SPSS; therefore, I would like to figure out how DEA can be done with SPSS.
Your suggestions would be highly appreciated.
Relevant answer
Answer
No, you can't.
  • asked a question related to Operations Research
Question
2 answers
I need help coming up with a possible topic proposal related to an existing problem, e.g. in a community, hospital, or bank, that can be solved using multiple Operations Research techniques. Thanks.
Relevant answer
Answer
This web page might provide some inspiration:
  • asked a question related to Operations Research
Question
17 answers
Could you suggest some contemporary research topics in Operations Research (OR)?
In your opinion, which research topics of OR could be impactful in the next decade?
Thanks in advance.
Relevant answer
Answer
My scientific opinion on that question is that we must hybridize some problems together and create new tools (mathematical or AI-based) to solve them.
Big-data and large-scale problems will be the focus of many researchers over the next few years.
Try to combine big-data problems with the solution tools of operations research.
  • asked a question related to Operations Research
Question
2 answers
I have just recently started a new "weekend project" in addition to my master's studies and I am looking for a data-set. I would like to use some Operations Research to design an optimal gym schedule that conforms to a specific set of constraints.
The idea is to create a daily gym schedule that conforms to a set of constraints (e.g. time, target muscles etc) as well as a set of physiological constraints. The physiological constraints are things such as do not exercise muscle-x and muscle-y together or do not do abdominal exercises for two consecutive days etc.
However, the problem I face is data: specifically, a dataset (or datasets).
Are there any open-source datasets which list an exercise, as well as all the muscles targeted? Preferably one that lists as much of the physiological data as possible. E.g. which stabilizers are activated, which secondary muscle is also activated, is it an extension or flexion. I am also looking for datasets which could help me with some of the physiological constraints, such as muscle recovery times, which muscles not to exercise together etc?
My goal is to algorithmically capture an OR model which I can provide with input data such as muscle group target and time and the model must output a schedule of exercises which targets all the muscles in that muscle group, is not physiologically harmful and is within the time constraint.
Relevant answer
Answer
I don't have a specific data set or study in mind, but the US Army should have some data sets from recent studies. They recently transitioned to a new physical fitness plan developed around the physiological aspects of job performance in different areas, instead of a generalized physical fitness plan for all soldiers. Also, the data should be fairly varied, with ages ranging from 18 to 40 years, and various heights, body-fat percentages, sexes, ethnicities, and races. Not to mention that most soldiers are in good physical condition and healthy, with proper nutrition and hydration.
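The scheduling idea in the question can be sketched as a small covering model. The exercise catalogue, durations, and the forbidden-pair constraint below are all hypothetical placeholders, and a brute-force search stands in for a proper integer-programming solver:

```python
from itertools import combinations

# Hypothetical exercise catalogue: name -> (muscles targeted, minutes).
EXERCISES = {
    "bench_press": ({"chest", "triceps"}, 10),
    "squat":       ({"quads", "glutes"}, 12),
    "plank":       ({"abs"}, 5),
    "curl":        ({"biceps"}, 6),
    "deadlift":    ({"back", "glutes"}, 12),
}

# Hypothetical physiological constraint: muscle sets that must not share a session.
FORBIDDEN_PAIRS = [({"chest", "triceps"}, {"biceps"})]

def feasible(chosen):
    """Check the physiological do-not-combine constraints."""
    muscles = set().union(*(EXERCISES[e][0] for e in chosen))
    for a, b in FORBIDDEN_PAIRS:
        if muscles & a and muscles & b:
            return False
    return True

def best_schedule(target_muscles, time_budget):
    """Smallest-time subset of exercises covering the target muscles
    within the time budget, found by exhaustive search."""
    best, best_time = None, None
    names = list(EXERCISES)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            minutes = sum(EXERCISES[e][1] for e in combo)
            covered = set().union(*(EXERCISES[e][0] for e in combo))
            if minutes <= time_budget and target_muscles <= covered and feasible(combo):
                if best_time is None or minutes < best_time:
                    best, best_time = combo, minutes
    return best

print(best_schedule({"chest", "abs"}, 30))
```

For realistic catalogue sizes the exhaustive search would be replaced by an integer program (set covering with side constraints), but the data shape the model needs is the same as asked for: exercise, muscles targeted, duration, and incompatibility pairs.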
  • asked a question related to Operations Research
Question
6 answers
Journal of Industrial & Management Optimization (JIMO) is an open access journal; you pay a substantial amount to publish a paper. When you go to the website of its publisher, the American Institute of Mathematical Sciences (AIMS Press), it seems that it is not really based in the United States. I am not sure whether it is a legitimate professional organization or a predatory publisher. They have a large number of open access journals. On the other hand, their handling of papers is terrible: extremely slow and low-tech, which is not typical of predatory journals. It may take 13 months to get an editorial rejection, for instance. Furthermore, they don't have an online submission system with user profiles; you just submit the paper on a website, and they give you a URL to check your paper's status, which makes your submission open to anyone who has the URL. It has an impact factor of 1.3, which puzzles me. Any comments on this organization and the journal would be appreciated.
Relevant answer
Answer
Norbert Tihanyi one little warning, if you look whether a particular journal is mentioned in the Beall’s list you should not only check the journal title in the stand-alone journal list (https://beallslist.net/standalone-journals/) but also the publisher behind it (if any). In this case the publisher is not mentioned in the Beall’s list (https://beallslist.net/). Anis Hamza I suppose you mean ISSN number, this journal with ISSN 1547-5816 and/or E-ISSN:1553-166X is mentioned in Scopus (https://www.scopus.com/sources.uri?zone=TopNavBar&origin=searchbasic) and Clarivate’s Master journal list (https://mjl.clarivate.com/home).
Back to your question, it is somewhat diffuse. There are signs that you are dealing with a questionable organization:
-Contact info renders in Google a nice residence but does not seem to correspond to an office and I quote “The American Institute of Mathematical Sciences is an international organization for the advancement and dissemination of mathematical and applied sciences.” https://www.aimsciences.org/common_news/column/aboutaims
-Both websites https://www.aimsciences.org/ and http://www.aimspress.com/ function more or less okay, but not flawlessly
-The journal “Journal of Industrial & Management Optimization (JIMO)“ is somewhat vague about the APC. It positions itself as hybrid (with an APC of 1800 USD), but all papers I checked can be read as open access (although not all have a CC etc. license). It mentions something like open access for free when an agreement is signed with your institution but how much this cost is unclear
-No problem by itself, but the majority of authors are from China, which makes you wonder about "American Institute"…
-Editing is well…sober
On the other hand, it appears that, and I quote, “AIMS is a science organization with two independent operations: AIMS Press (www.aimspress.com) and the American Institute of Mathematical Sciences (AIMS) (www.aimsciences.org).” AIMS Press is focused on Open Access journals, while the journals published by AIMS (www.aimsciences.org) are/used to be subscription-based journals. Pretty much like Springer has its BioMed Central (BMC) journal portfolio and Bentham has its Bentham Open division.
Facts are:
-AIMS ( www.aimsciences.org ), more than 20 of their journals are indexed in SCIE and indexed in Scopus as well (under the publisher’s name: American Institute of Mathematical Sciences)
-AIMS Press (www.aimspress.com ), four journals are indexed in SCIE and thus have an impact factor and 14 journals are indexed in Clarivate’s ESCI. 7 journals are indexed in Scopus.
-AIMS Press, 20 of their journals are a member of DOAJ
-Journal of Industrial & Management Optimization (JIMO) https://www.aimsciences.org/journal/1547-5816 is indexed in Clarivate’s SCIE (impact factor 1.801, see enclosed file for latest JCR Report) and Scopus indexed CiteScore 1.8 https://www.scopus.com/sourceid/12900154727.
-For the papers I checked the time between received and accepted varies between 6 and 9 months and an additional 3-4 months before publication (it is well… not fast but not unusual)
So, overall, I think that the publisher has quite some credibility and it might be worthwhile to consider.
Best regards.
  • asked a question related to Operations Research
Question
5 answers
Hello. There is a dataset with several KPIs which vary between 0 and 1. What is the best analytical approach to split the data and define a line in two-dimensional space (or a plane in multi-dimensional space) based on data behavior and practical assumptions/considerations (there are recommended ranges for each KPI, etc.)?
For instance in the attached screenshot, I want to flag the individuals/observations in Ae area for more investigation. I want to be able to apply the proposed approach in multi-dimensional space with several KPIs as well. Any thoughts would be appreciated.
Relevant answer
Answer
If you want to 'flag' individuals by some data-driven approach, building a cluster tree might be helpful, and you can visually check whether the cluster tree flags the observations you were anticipating. This can also help avoid Simpson's paradox and the like.
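One way to try the cluster-tree suggestion without specialised libraries is single-linkage agglomerative clustering; the KPI vectors below are invented for illustration (two dense groups), and in practice you would cut the tree at a level that isolates the region to flag:

```python
import math

def single_linkage(points, n_clusters):
    """Agglomerative clustering with single linkage (pure-Python sketch).
    Returns lists of point indices, merged until n_clusters remain."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: distance between closest members.
                d = min(math.dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

# Hypothetical KPI vectors in (0, 1): two groups, one straggler.
kpis = [(0.1, 0.2), (0.15, 0.25), (0.8, 0.9), (0.85, 0.85), (0.5, 0.95)]
print(single_linkage(kpis, 2))
```

The same code works unchanged in higher dimensions, since `math.dist` accepts vectors of any length; for more than a few hundred points a library implementation (e.g. hierarchical clustering in SciPy) would be the practical choice.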
  • asked a question related to Operations Research
Question
22 answers
Operations research techniques are used widely in the scientific literature to support decision-making problems in healthcare. However, such methods are rarely applied in practice. What are the obstacles? What could be the solution?
Relevant answer
Answer
Indeed, operations research (OR) and management science (MS) methods are not consistently used in practice for healthcare management decision-making. A report published by National Academy of Engineering and Institute of Medicine (Reid et al, 2005) states in an unusually blunt way, “In fact, relatively few health care professionals or administrators are equipped to think analytically about health care delivery as a system or to appreciate the relevance of engineering tools. Even fewer are equipped to work with engineers to apply these tools.”
Thus, it is often difficult for many administrators to appreciate the role of MS and OR methodology in the healthcare delivery process. A wide gap exists between the OR and MS publications that urge the use of this methodology in healthcare settings but provide few or no practical examples, and the publications with examples that are too specialized and complex for digesting by a typical hospital administrator. This gap is probably one of the reasons why too many administrators still have a vague idea of the practical value of healthcare OR and MS methodology. Many of them simply do not see ‘what’s in it for me’.
On the other hand, OR and MS professionals/engineers do not always have enough knowledge of healthcare or the role of physicians in making not only clinical but also management decisions. Healthcare has a culture of rigid division of labor. This functional division does not effectively support the methodology that crosses the functional areas, especially if it assumes significant change in traditional relationships.
Nonetheless, to address the challenge of transforming the system of care delivery in practice, some leading healthcare organizations have adopted this area as a strategic priority. For example, the Mayo Clinic, one of the largest integrated medical centers in the USA, has defined the Science of Healthcare Delivery as one of its four strategic directions. The others are Quality, Individualized Medicine, and Integration (Fowler et al, 2011). The Mayo Clinic has also created the Center for the Science of Healthcare Delivery, a new initiative that will focus on creating improved approaches to how healthcare is delivered (Mayo Clinic, 2011).
The bottom line: physicians and healthcare administrators are not supposed to have knowledge of the OR/MS methods; they are too busy with other problems. Rather, they are supposed to understand why traditional management approaches and educated guesses are usually inaccurate, short-lived or unsustainable; which quantitative technique is more appropriate for addressing a particular managerial problem; and what can be expected from a particular technique, along with its strengths and limitations. For example, is queuing analytic theory (QAT) or discrete event simulation (DES) the appropriate methodology for addressing a particular problem? What are the caveats in linear optimization for staffing and scheduling? What technique is the most appropriate for making a particular type of forecast, and why? What is the best approach to fair cost (savings) allocation? And so on… Collaboration and trust between healthcare/physician leaders and OR/MS professionals is the key to progress in this area.
  • asked a question related to Operations Research
Question
66 answers
Some journals give reviewers 60 days, others give 40 days, 30 days, or 20 days to review a paper. MDPI journals give only 10 days, but it can be extended if the reviewer needs more time. In my opinion, 10 days might be too short, but 60 days is excessive. Allowing 60 days for a peer review is adding to the response time unnecessarily, and disadvantaging the authors. I can thoroughly review a paper in a day (if I dedicate myself to it), or two at most. A reviewer should only accept a review request if they are not too busy to do it in the next 10 to 20 days. I have encountered situations in which a reviewer agrees to the review, but does not submit the review at the end of 60 days, wasting those valuable 60 days from the author. What do you think the allowed time for reviewers should be?
Relevant answer
Answer
15 days is enough.
  • asked a question related to Operations Research
Question
3 answers
Dear all,
I want to start learning discrete choice-based optimization so that I can use it later for my research works. I want to know about free courses, books, study materials available on this topic. Any suggestions will be appreciated.
Thanks,
Soumen Atta
Relevant answer
Answer
You should begin by studying discrete optimization methods in general. After that, you could study models and methods for choosing among options. I am the author of the Selection of Proposals and the Integration of Variables methods, devoted to option selection, which you can find in my ResearchGate profile, including applications.
  • asked a question related to Operations Research
Question
4 answers
Can anyone guide me on the pain points in cloud computing where Operations Research techniques can be applied?
Regards,
JP
Relevant answer
Answer
A more recent survey (though also already quite a few years old) is the following:
  • asked a question related to Operations Research
Question
3 answers
Hello everyone,
I am currently developing a small simulation model of an assembly worker pool in which I would like to consider randomly occurring absenteeism due to illness. Meaning: if one of, let's say, 17 workers is ill, he becomes unavailable and the pool capacity is dynamically set to 16. After recovery, the worker becomes available again and the pool capacity is increased by 1.
The model shall be based on historical data, which is available in the following structure (see excerpt @attachment).
The pool capacity is set every 8 hours at each shift change with the following logic (triggered by a schedule):
- Create a list of all workers that belong to the next shift group.
- For each worker that is available (workers have a parameter called "available"):
  - Determine randomly whether the worker gets ill (using randomFalse(average chance to become ill, e.g. 2.5%, see above)).
  - If the worker becomes ill, assign a value from a custom distribution (based on observations of the number of absent days per illness) for the number of unavailable days, and create a dynamic event which sets this worker's availability back to true after that number of days.
- The pool capacity is set to the value that equals the number of remaining available workers in the list.
The model looks like this (see model overview @attachment).
At first glance, the model works as intended. However, after 50 replications I aggregated the simulation data and compared the simulation results with my real data (table @attachment). I found that the model indicates that ~41% of the shifts are staffed with 17 workers (= highest rel. frequency), while the real data show that 44% of the shifts are staffed with 16 workers (= highest rel. frequency of the real data).
Something in the model concept does not seem to fit, otherwise the relative frequencies would match better, right?
Can anybody tell me whether my current approach makes sense, or am I overlooking something crucial? Is there a better approach to modelling this kind of problem?
Relevant answer
Answer
I agree with Christopher C Kelly , there may be non-random factors that contribute to absenteeism. There may also be auto-correlation between employees' absenteeism.
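The shift-change logic described in the question can be prototyped outside the simulation tool to sanity-check the resulting capacity statistics. The illness probability and the absence-length distribution below are hypothetical stand-ins for the historical data, and "3 shifts per day" is an assumed conversion:

```python
import random

random.seed(42)

SHIFT_SIZE = 17          # workers in the pool
P_ILL = 0.025            # assumed chance of falling ill at a shift change
ABSENCE_DAYS = [1, 2, 3, 5, 10]           # hypothetical empirical support
ABSENCE_WEIGHTS = [0.4, 0.3, 0.15, 0.1, 0.05]

def simulate(n_shifts):
    """Return the pool capacity observed at each shift change."""
    remaining = [0] * SHIFT_SIZE   # shifts left until each worker returns
    capacities = []
    for _ in range(n_shifts):
        for w in range(SHIFT_SIZE):
            if remaining[w] > 0:
                remaining[w] -= 1          # still recovering
            elif random.random() < P_ILL:
                days = random.choices(ABSENCE_DAYS, ABSENCE_WEIGHTS)[0]
                remaining[w] = days * 3    # assumed: 3 shifts per day
        capacities.append(sum(1 for r in remaining if r == 0))
    return capacities

caps = simulate(10_000)
print(min(caps), max(caps))
```

Comparing the relative frequency of each capacity value from such a stripped-down replica against the real data is a quick way to test whether the mismatch comes from the model logic itself (e.g. independence between workers) or from the tool-specific implementation.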
  • asked a question related to Operations Research
Question
9 answers
Is there any method to find the optimal solution without first finding an initial basic feasible solution when solving transportation problems?
Relevant answer
Answer
If you are looking only for the optimal solution, then you can use 'TORA' or the Excel Solver. But if you want to use algorithms like VAM, RAM, etc., the answer is 'no': you have to calculate an IBFS and then test whether or not it is optimal.
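For reference, VAM (Vogel's Approximation Method) for the initial basic feasible solution can be sketched in a few dozen lines. This is a toy implementation that assumes a balanced problem (total supply equals total demand) and breaks penalty ties arbitrarily:

```python
def vam(supply, demand, cost):
    """Vogel's Approximation Method: initial allocation for a balanced
    transportation problem. Returns the allocation matrix."""
    supply, demand = supply[:], demand[:]          # don't mutate the inputs
    m, n = len(supply), len(demand)
    alloc = [[0] * n for _ in range(m)]
    rows, cols = set(range(m)), set(range(n))
    while rows and cols:
        def penalty(line_costs):
            # Difference between the two smallest open costs in the line.
            s = sorted(line_costs)
            return s[1] - s[0] if len(s) > 1 else s[0]
        row_pen = {i: penalty([cost[i][j] for j in cols]) for i in rows}
        col_pen = {j: penalty([cost[i][j] for i in rows]) for j in cols}
        i_best = max(row_pen, key=row_pen.get)
        j_best = max(col_pen, key=col_pen.get)
        if row_pen[i_best] >= col_pen[j_best]:
            i = i_best                             # allocate in this row,
            j = min(cols, key=lambda jj: cost[i][jj])   # at its cheapest cell
        else:
            j = j_best
            i = min(rows, key=lambda ii: cost[ii][j])
        q = min(supply[i], demand[j])
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            rows.discard(i)
        if demand[j] == 0:
            cols.discard(j)
    return alloc

# Classic textbook-style instance (3 sources, 4 destinations).
s, d = [7, 9, 18], [5, 8, 7, 14]
c = [[19, 30, 50, 10], [70, 30, 40, 60], [40, 8, 70, 20]]
a = vam(s, d, c)
print(a)
```

The resulting IBFS would then be tested for optimality (e.g. with the MODI/stepping-stone method), exactly as the answer describes.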
  • asked a question related to Operations Research
Question
12 answers
Any recommendations for a scientific journal to which to submit a paper on operations research applying linear programming and vehicle routing (VRP) using the B&B algorithm?
Relevant answer
Answer
You can ask your thesis advisor about what journal they think would be best to submit your work. It is hard to suggest a journal for you without seeing the actual paper.
  • asked a question related to Operations Research
Question
5 answers
Sensitivity analysis can be used to check the variation of the optimum solution when changing the coefficients of the objective function or the constant terms in the constraints. Is there anything else that can be investigated using this approach?
Relevant answer
Answer
Sensitivity analysis is useful to determine the robustness of the optimal solution. If the optimal solution changes significantly, when one of the problem parameters is changed only slightly, then the optimal solution is said to be sensitive to changes in that parameter, otherwise, it is robust.
Sensitivity analysis also gives insights into the problem under study. You can use it to validate your hypotheses about the problem or you can derive conclusions about the relationship of the optimal objective function value to the various parameters of the problem. This helps ground a problem from practice on a more reliable and intuitive basis, and demonstrates its applicability in practice.
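A minimal way to see this robustness idea concretely is to enumerate the vertices of a toy LP and watch when the optimal vertex changes as an objective coefficient grows. The feasible region (x + y <= 4, x <= 3, x, y >= 0) and its hand-listed vertices below are a made-up illustration:

```python
# Vertices of the feasible region {x + y <= 4, x <= 3, x >= 0, y >= 0},
# listed by hand for this toy example.
VERTICES = [(0, 0), (3, 0), (3, 1), (0, 4)]

def optimum(c):
    """Best vertex for the objective c[0]*x + c[1]*y
    (an LP optimum always lies at a vertex of the feasible polytope)."""
    return max(VERTICES, key=lambda v: c[0] * v[0] + c[1] * v[1])

print(optimum((3, 2)))   # baseline objective: optimum stays at (3, 1)
print(optimum((3, 4)))   # perturbed coefficient: the optimal vertex jumps to (0, 4)
```

Small perturbations of the y-coefficient leave the optimal vertex unchanged (the solution is robust), until the coefficient crosses a threshold and the optimum jumps to a different vertex; those thresholds are exactly what the ranging output of an LP solver reports.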
  • asked a question related to Operations Research
Question
11 answers
Dear Sir/Madam,
I would like to see if anyone is interested in collaborating on some research papers. I work in the fields of SMART GRID, SMART BUILDINGS, OPTIMIZATION, and ENERGY MANAGEMENT.
If you are interested, could you send me a private text, please?
Thanks,
have a nice day!
Relevant answer
Answer
Many thanks for the open invitation. I am interested in collaborating with you in the said area.
Looking forward to hearing from you soon.
  • asked a question related to Operations Research
Question
7 answers
LINGO is a commercial application for mathematical modeling. What has your experience with LINGO been? What are the uncommon tools in LINGO for solving optimization models?
Relevant answer
Answer
Yes, it is a good system, provided that your problem actually has an optimal solution.
  • asked a question related to Operations Research
Question
11 answers
Types like Job Shop, Batch Production, Mass Production, and Manual Line Production. And why?
Relevant answer
Answer
mass production, manual assembly line.
  • asked a question related to Operations Research
Question
8 answers
We are looking for the datasets that can be used for evaluating fairness models in machine learning on real data. Could you recommend a labeled dataset in which the labeling reveals some unfair decision process. E.g., unfair decisions in hiring, courts, healthcare etc.
Relevant answer
Answer
I would recommend trying a disruptive approach, which follows the human process against unfair decisions:
-identify worst cases you could think of, with bias, discrimination, unfair decisions observed and documented.
-identify steps which led to such decisions
You can apply machine learning, deep learning, or reinforcement learning to arrive at a systematically unfair "automated decision agent".
You can define a similarity measure between the "automated decision agent" you are trying to build, and these negative reference scenarios embedded in the unfair "automated decision agents" you have gathered.
As the learning system progresses towards the target "automated decision agent" you can iterate similarity measure computations with the bad unfair references, and when you get too close, an alarm is raised.
There are many ways to reduce this risk in the next learning batch: introduce a repulsive gradient along the negative reference(s), build a Lagrangian term driving you away from it, etc.
Does it help you build a robust algorithm?
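Independently of the agent-similarity idea above, a common first check when evaluating fairness on a labeled dataset is the demographic-parity (disparate-impact) ratio between groups; the hiring decisions and group labels below are fabricated for illustration:

```python
def demographic_parity_ratio(labels, groups, positive=1):
    """Ratio of positive-outcome rates between the least- and most-favoured
    groups. A value near 1 suggests parity; the 'four-fifths rule' commonly
    flags ratios below 0.8 as evidence of disparate impact."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rates[g] = sum(labels[i] == positive for i in idx) / len(idx)
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Fabricated hiring decisions: 1 = hired, for two groups "a" and "b".
labels = [1, 0, 1, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_ratio(labels, groups))
```

Metrics like this are what labeled real-world datasets (hiring, credit, recidivism) are typically used to compute when benchmarking fairness interventions.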
  • asked a question related to Operations Research
Question
3 answers
Dear Malek Masmoudi,
Could you please provide me a pdf copy of the paper by Ichraf Jridi entitled:
Modelling and simulation in dialysis centre of Hedi Chaker Hospital
March 2020 In book: Operations research and simulation in healthcare Publisher: SPRINGER.
Looking forward to hearing from you ASAP.
Sincerely, yours.
Professor Mohamed Ben Hmida
Relevant answer
Answer
Dear Richard,
Thank you for your assistance.
I asked her first, but she said she did not have a copy.
So I asked Malek Masmoudi, who is the editor of the book, and I am still waiting for a reply.
Could you help me get a pdf copy of this paper?
Sincerely yours.
Pr Mohamed Ben Hmida
  • asked a question related to Operations Research
Question
33 answers
Assume we found an approximate solution A(D),
where A is a metaheuristic algorithm and D is the concrete data of your problem.
How close is the approximate solution A(D) to an optimal solution OPT(D)?
Relevant answer
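One practical way to estimate how close A(D) is to OPT(D) is to benchmark the heuristic against exhaustive search on instances small enough to solve exactly, and report the empirical gap. Here is a sketch with a greedy knapsack heuristic standing in for A, on made-up data:

```python
from itertools import combinations

def opt_knapsack(items, cap):
    """Exact optimum OPT(D) by exhaustive search (tiny instances only)."""
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            w = sum(i[0] for i in combo)
            v = sum(i[1] for i in combo)
            if w <= cap:
                best = max(best, v)
    return best

def greedy_knapsack(items, cap):
    """Heuristic A(D): take items by value/weight ratio while they fit."""
    total_v = total_w = 0
    for w, v in sorted(items, key=lambda i: i[1] / i[0], reverse=True):
        if total_w + w <= cap:
            total_w += w
            total_v += v
    return total_v

items = [(4, 45), (3, 30), (5, 45), (2, 10)]   # (weight, value), made up
cap = 9
a, opt = greedy_knapsack(items, cap), opt_knapsack(items, cap)
print(a, opt, (opt - a) / opt)   # empirical gap of A(D) from OPT(D)
```

For larger instances where OPT(D) cannot be computed, the same gap can be reported against a bound instead (e.g. an LP relaxation or a best-known solution), which is the usual practice in metaheuristics papers.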
  • asked a question related to Operations Research
Question
3 answers
When comparing two optimization methods on a function, should we use a two-sample t-test or a paired t-test? I would say we should use the latter, since the paired t-test is used for correlated observations, and in our case we can consider the unit of observation to be the function and the two methods as two treatments. Am I right?
Thank you in advance
Relevant answer
Answer
A novel statistical approach for comparing meta-heuristic stochastic optimization algorithms according to the distribution of the solutions in the search space has been introduced, known as extended Deep Statistical Comparison. This approach is an extension of the recently proposed Deep Statistical Comparison approach used for comparing meta-heuristic stochastic optimization algorithms according to the solution values. Its main contribution is that the algorithms are compared not only according to the obtained solution values, but also according to the distribution of the obtained solutions in the search space. The information it provides can additionally help to identify the exploitation and exploration powers of the compared algorithms. This is important when dealing with a multimodal search space, where there are many local optima with similar values. The benchmark results show that the proposed approach gives promising results and can be used for a statistical comparison of meta-heuristic stochastic optimization algorithms according to solution values and their distribution in the search space. For more information, you can refer to the following paper.
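The paired-test choice argued in the question can be carried out directly on the per-function differences. A minimal sketch, with invented benchmark values and no external statistics library:

```python
import math
import statistics as st

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for matched samples,
    e.g. two optimizers run on the same benchmark functions."""
    d = [a - b for a, b in zip(x, y)]       # per-function differences
    n = len(d)
    t = st.mean(d) / (st.stdev(d) / math.sqrt(n))
    return t, n - 1

# Invented best objective values of two methods on five functions.
method_a = [0.12, 0.30, 1.05, 0.47, 0.88]
method_b = [0.15, 0.36, 1.20, 0.50, 0.95]
t, dof = paired_t(method_a, method_b)
print(round(t, 3), dof)
```

A two-sample test on the same data would ignore that both methods saw the same functions, discarding exactly the correlation that makes the paired design more powerful; note also that t-tests assume roughly normal differences, which is why rank-based alternatives (e.g. the Wilcoxon signed-rank test) are often preferred for stochastic optimizer comparisons.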
  • asked a question related to Operations Research
Question
17 answers
I understand that there is no specific rule to define this multi-author order. However, perhaps it is possible to find some common criteria.
Relevant answer
Answer
That's the clincher, and yes, it would not be an issue as the team would essentially all be equal. In fact, in articles with 2-4 authors who are of equal standing you tend to find that they take turns being first in every publication.
  • asked a question related to Operations Research
Question
5 answers
In AHP, I have come across random consistency index (RI) values as given by Saaty (1987).
Also, Prof. Taha, in his book Operations Research: An Introduction, has given a formula for calculating RI.
Which RI should be considered and why?
Relevant answer
Answer
I agree, @Nicholas Shallcross
As the AHP is a method of choices, whose preferences reflect the subjectivity of the participant, it is complex to establish definitive parameters. However, software that helps me during collection is Expert Choice. Using this support, it is possible to identify, at the time of collection, whether the participant's opinions point to contradictory trends. Thus, it is possible to phrase the questions so as not to induce the participant to introduce inconsistency into their perceptions. In this way, the researcher is able to monitor the Consistency Ratio (CR) and assess the incidence of outliers.
Thank you for your observation!
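For completeness, the consistency ratio can also be computed directly, whichever RI table or formula is used. This sketch uses the standard column-normalisation priority vector and the row-wise lambda-max approximation, with a hypothetical 3x3 judgement matrix and Saaty's RI value for n = 3:

```python
def consistency_ratio(M, ri):
    """CR = CI / RI, with CI = (lambda_max - n) / (n - 1);
    lambda_max estimated by the standard approximation."""
    n = len(M)
    col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    # Normalise columns and average across rows -> priority vector w.
    w = [sum(M[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]
    # lambda_max ~ average of (M w)_i / w_i.
    mw = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(mw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return ci / ri

# Hypothetical pairwise comparison matrix; Saaty's RI for n = 3 is 0.58.
M = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
print(round(consistency_ratio(M, 0.58), 4))
```

Swapping in Taha's formula-based RI instead of Saaty's tabulated value only changes the `ri` argument, which makes it easy to check how sensitive the accept/reject decision (CR < 0.1) is to that choice.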
  • asked a question related to Operations Research
Question
6 answers
Dear all
I am working on an inventory model in a closed-loop supply chain system to optimize the cost of the system. There are lots of models to optimize the cost of the system, but I am looking to incorporate the concepts of analytics to handle real-time inventory.
Looking forward to hearing from you.
with regards
Sumit Maheshwari
Relevant answer
Answer
This is a challenging problem, especially for manufacturing companies. Needless to say, this problem is undergoing lots of research, and practically there are no viable examples of companies that have achieved success; the best cases have hit a 90% mark (though the metrics of such proclaimed achievements are highly debatable). As inventory management becomes driven by close-to-real-time demand and supply data (analytics) plugged into AI and machine learning tools, the potential to reach close to 99% efficiency in CLSCM might become a reality. But how do we identify and embed external disruptions like COVID-19 into this model? And to what extent will these external disruptions impact CLSCM-based inventory dynamics?
  • asked a question related to Operations Research
Question
6 answers
Cycle counting
i) is a process by which inventory records are verified once a year
ii) provides a measure of inventory accuracy
iii) provides a measure of inventory turnover
iv) assumes that all inventory records must be verified with the same frequency.
Relevant answer
Answer
Dear,
Cycle counting of inventory is used in industry for several purposes: to track how many times the inventory turns over in the organization, what the consumption ratio is, and what the value impact on the organization is. Through this we can reduce:
1. inventory carrying cost,
2. inventory testing cost,
3. product manufacturing or supply losses.
  • asked a question related to Operations Research
Question
15 answers
The sampling allocation problem is an important problem in survey statistics. Recently, many authors have formulated it as a nonlinear optimization problem and solved it. However, Neyman Allocation also comes under the optimal allocation techniques. Why?
Thanks in advance!
Relevant answer
Answer
Thanks Professor Michael Patriksson for encouraging and supportive words. I have some discussion points, please share your contact details. My email address: irfii.st@amu.ac.in.
  • asked a question related to Operations Research
Question
2 answers
(Proposal) Oil Refinery Production: What is the company's goal?
----------------
[Purpose: get Engineers & Scientists thinking outside their box ... think -large- problems. What's possible today vs. needs for tomorrow?]
Question: Are you interested in increasing your sales income by several orders of magnitude? Are you willing to think outside the box? If so, please read on. This is a large proposal, the size of NASA's Apollo Space program back in the early 1960s.
A new level of computers and software will be required for this oil production proposal. Today's computers are algebraic, i.e. bare-bones designs that run like a 'Model T' car: they 'run' along at a '30 mph' clip. We need fast supercomputers, like the Wright Brothers' 'airplane', that can run at a '3,000 mph' clip. These supercomputers need 'Automatic Differentiation' based technologies, i.e. smart thinking abilities. NASA realized this when starting the Apollo space program; it spent tons to get it and put us on the moon.
---
Oil production depends on many factors; e.g. Supply, Demand, present inventory, etc. An oil company may have many refineries with many distillation units. How can a company simulate extracting products 'a', 'b', and 'c' from its crude oil? Assume the company wants product 'a' on the west coast, 'b' in the middle of US, and 'c' on the east coast. Assume the company has refineries 'x' on west coast, 'y' in middle US, and 'z' on east coast. How does one model such a company's oil production so as to produce/refine the 'right' amounts of each product at each refinery site in order to meet the company's goal of maximizing profits?
Partial Differential Equations (PDEs) will be used to model the crude oil distillation for each distillation unit at each site; i.e. many PDEs must be solved at once! Are there computers large enough to handle such problems today? Are there plans for some super computer that will be able to handle many (1,000s) PDEs at once?
With maintenance of distillation units being continual, e.g. fix one, stop another, this will be a constant problem when trying to simulate the next day's crude oil work load. For example, assume a company has 600 distillation units overall. That means a computer program would be required to solve 600 PDEs ASAP, i.e. around 10 hours of PDE solving. My past experience with modeling in the FortranCalculus™ language/compiler taught me that a model requiring 'Tmod' time to execute would require around 2×'Tmod' time to find an optimal solution. That would then get us into the 20-hour range for 600 PDEs. Too long! We need faster computers and solvers to get into reasonable solution times. Ideas how this could be done today? For more, visit http://fortrancalculus.info/apps/fc-compiler.html ... Solves Algebraic Equations through Ordinary Differential Equations.
Many people thought that the Wright Brothers' idea of an 'airplane' would never fly. But what if it did? What if oil sales income doubled or more? Would crude oil prices increase? (Everyone is going to want more for their piece of the pie, right?) How would this affect your company?
John D Rockefeller was quoted saying, "If you want to succeed you should strike out on new paths, rather than travel the worn paths of accepted success."
Any future John D Rockefeller's reading this proposal? Are you interested in increasing your company profits by several orders of magnitude? Does your company have a company goal or objective that all employees know about and follow? If so, continue reading on this proposal by reading my article "Company Goal: Increase Productivity?" (a dozen pages). Go to web page eBook on Engineering Design Optimization using Calculus level Methods, A Casebook Approach and click on the 'download' link, its free!
Relevant answer
Answer
Dear Phil,
The complexity of the refining business grows every day with new markets, new feedstocks and new regulations. Additionally, refineries are facing declining profit margins. I think that, to sustain their profitability, refineries must leverage process simulation technology and capabilities to achieve best-in-class operational excellence. Advanced process simulation could help improve those profit margins. A couple of areas where it can be used effectively to reduce the market burden:
1. Heat exchanger maintenance and monitoring: thorough simulation of heat exchanger operations within the broader process simulation model. The heat exchanger design tool must also simulate all major heat exchanger types used in the refining industry. Furthermore, the solution should allow process engineers to easily develop and integrate their heat exchangers’ simulation as part of the refinery flowsheet without leaving their familiar process simulation environment.
2. Column operations troubleshooting: an integrated process simulator that accurately simulates the thermal and hydraulic behavior of the column unit to provide enough information to support column operations. With the correct process simulation software, users can accurately simulate the thermo-hydraulic functioning of columns based on their construction and operating conditions. As a result, they can better understand the columns’ behavior and avoid operational mishaps. Simulating the operation of the column in the broader setting of the overall process enables users to identify root causes of problems and determine the optimal point of operation for the overall process unit.
3. Integrated refining and gas plant analysis: refineries need a solution that meticulously simulates the entire gas plant, including acid gas treatment units, sulfur recovery, tail gas units and flare systems, together with the mainstream refining process units, such as distillation units and reactor units. Advanced simulation technology would give refiners enough confidence to push the levels of sour crudes closer to the limit the refinery can process while meeting regulations. Feed flexibility, capacity creep and operating expenditure optimization, enabled by integrated refining and gas plant process simulation, can save refiners millions each year in operating margins while ensuring maximum reliability and plant uptime. In addition, the rigorous simulation of the gas plant operation offers refineries visibility and the ability to better document their emission levels. This capability is valuable for boosting their profit margins.
4. Planning model update for refineries: the ideal option is to give refinery process engineers the ability to maintain the planning models with the help of advanced process simulation software that offers a streamlined workflow to update the planning models, enabling frequent updates when the models fall out of sync with the operating range of the refinery.
5. Refinery-wide process analysis: with an advanced integrated solution for process simulation and refinery planning, refineries can develop a refinery-wide process model out of their refinery-wide planning model in a relatively short period of time. The accuracy of the simulation model can be enhanced by selectively incorporating rigorous models of reactor units into the refinery-wide flowsheet.
Ashish
  • asked a question related to Operations Research
Question
3 answers
Hello,
If I have 5 hours to wait for a GAMS result on Monday and 3 hours on Tuesday, and I know that the model will be solved after an 8-hour run, is there a way to interrupt GAMS on Monday after 5 hours and run it again on Tuesday from where it stopped on Monday? That way I would get the result on Tuesday, still within 8 hours in total.
Thanks
Relevant answer
Answer
There is a section in the user manual related to save and restart. You can check the following link:
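As a concrete illustration of the save/restart mechanism from the manual, GAMS can write a work file at the end of a run and restart from it later (the file and checkpoint names below are made up). Note that this checkpoints program state between runs; interrupting a single solver call mid-solve is a solver-dependent matter and is not covered by this mechanism.

```shell
# Monday: run the first part and save a work file (checkpoint)
gams part1.gms save=mychk

# Tuesday: restart from the saved work file and continue
gams part2.gms restart=mychk
```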
  • asked a question related to Operations Research
Question
6 answers
I have 4 variables in an integer program. If I define all of them as integer, the solution time increases.
If I define 3 of them as positive (continuous) variables and one as integer, the model solves faster and finds the same optimal solution. I need the values of all variables to be integer.
All parameters and RHS values are integer. I think total unimodularity is the effect that makes the positive variables come out integer, but I am not sure about that.
What do you think about this approach? Is it logical to define 3 of them as positive variables and 1 as integer to save time?
Relevant answer
Answer
Soner Demirel , I tell you once again: specify the problem explicitly and you will surely receive some helpful advice.
After all, what is your real problem? The very long calculation time when all the variables are set to be integer? Then solve the relaxed problem (without the integrality constraints), and if you are lucky you will obtain an integer optimal solution anyway. Even if not, rounding the real-valued solution may work.
As to your example with baking times.
You have a relationship between the three variables (if I understand correctly)
x1-x2=x3
x1, x2, x3 >=0 and integer
In this case x3 is obviously integer, as the difference of two integer numbers, so the integrality constraint on x3 is redundant. In fact, if you replace x3>=0 with x1-x2>=0 (or x1>=x2), then the variable x3 is entirely redundant, since it can be replaced everywhere in the model with x1-x2.
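The total unimodularity mentioned in the question can be sanity-checked by brute force for tiny constraint matrices: a matrix is TU if every square submatrix has determinant -1, 0 or 1, and then the LP relaxation has integer extreme points. A minimal sketch (exponential time, toy use only; the example matrix is a hypothetical node-arc incidence matrix, a classic TU case):

```python
from itertools import combinations

def det(M):
    # Laplace expansion along the first row (fine for tiny matrices).
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, a in enumerate(M[0]):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * a * det(minor)
    return total

def is_totally_unimodular(A):
    """Brute-force TU check: every square submatrix must have
    determinant in {-1, 0, 1}. Only suitable for small matrices."""
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[r][c] for c in cols] for r in rows]
                if abs(det(sub)) > 1:
                    return False
    return True

# Node-arc incidence matrices (network-flow constraints) are TU,
# which is why the LP relaxations of such models give integer optima.
A = [[1, -1, 0],
     [0, 1, -1]]
print(is_totally_unimodular(A))  # True
```

If a check like this fails, the "relax and hope" approach from the answer above is no longer guaranteed to return integer solutions.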
  • asked a question related to Operations Research
Question
20 answers
I want to use GAMS to optimize an MINLP problem, but I hope to use metaheuristic algorithms like particle swarm optimization or a genetic algorithm instead of the given GAMS solvers like CPLEX and BARON.
Can I do this?
Relevant answer
Answer
Reza Lotfi Except that you will most probably not find an optimum.
  • asked a question related to Operations Research
Question
3 answers
In two-stage stochastic optimization, why does the optimization problem have equations for the 1st stage and equations for the 2nd stage, yet both groups are solved simultaneously? I would have thought we first solve the 1st-stage equations, then substitute the results into the 2nd-stage equations (a new problem). Is there something I have overlooked?
Also, if the two stages are solved simultaneously, why don't we combine the equations of the 1st and 2nd stages?
My case study is a power system with renewable-energy uncertainty. When I make day-ahead decisions, the power dispatch of each generator is computed (1st-stage decisions); then, after the realization of the uncertain events (renewable output), redispatch is done in the 2nd stage using reserves, and possibly some load shedding.
My question is how the decisions of the 2 stages are made simultaneously, as I see in some papers. Why don't we optimize the 1st stage first, take the results, apply them in the 2nd-stage problem, and run it again? And if the optimization of the 2 stages is done simultaneously, why are the constraints of the 2 stages not combined? I still see separate 1st-stage constraints and 2nd-stage constraints.
Relevant answer
Answer
I agree that both decisions should be considered together. I know of several large power producers that do just that. I wrote the original scheduling software for several of TVA's plants and more that was used at the Load Control Center. I have performed many historical simulations of this sort and recently published a book on the subject (https://www.amazon.com/dp/B07YJ1JFLS ) The book will be free on the Tuesday after Christmas. The software (including several plant models) is free here http://dudleybenton.altervista.org/software/index.html More of the power plant models can be seen here http://dudleybenton.altervista.org/projects/Power%20Plants/PowerPlants.html Appendix A of the book has a whole list of links to publications you can download free with a description of each. Appendices B, C, and D explain how to get weather data for anywhere on Earth and build that into the model. I have provided spreadsheet models for several plants that go out and grab the latest weather predictions, estimate the capacity for multiple units, factor in the ramp rates and demand, then calculate the advantage of bringing units up or down, switching from simple cycle to combined cycle, and turning duct firing on or off, including how this might impact their emissions quotas. Of course, these last are proprietary, but I'd be glad to get you started building your own.
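To illustrate the "solved simultaneously" point from the question: in the extensive form (deterministic equivalent), the first-stage constraints and all scenario-wise second-stage constraints sit in one model, and the solver chooses the here-and-now decision and all recourse decisions together. A toy sketch with made-up numbers, where the recourse subproblems are simple enough to solve in closed form:

```python
# Tiny two-stage example (extensive form / deterministic equivalent).
# First stage: buy capacity x at unit cost c, before demand is known.
# Second stage: per scenario s, cover the shortfall with recourse y_s
# at unit cost q > c. Both stages appear in ONE model; the solver
# picks x and every y_s together, which is why the constraints of
# both stages are written (and solved) simultaneously.
c, q = 1.0, 3.0
scenarios = [(0.3, 50.0), (0.5, 80.0), (0.2, 120.0)]  # (prob, demand)

def total_cost(x):
    # For a fixed x the recourse subproblems separate:
    # the optimal recourse is y_s = max(d_s - x, 0).
    return c * x + sum(p * q * max(d - x, 0.0) for p, d in scenarios)

# Brute-force the first-stage decision over a grid.
best_x = min(range(0, 131), key=total_cost)
print(best_x, total_cost(best_x))  # 80, ~104.0
```

Solving the first stage alone against, say, the mean demand and then fixing it would generally give a different (worse) x, which is why the two stages are optimized jointly.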
  • asked a question related to Operations Research
Question
4 answers
I am a student at Ghulam Ishaq Khan Institute, Pakistan and I am conducting research on evaluating the barriers to adoption of industry 4.0 in the construction industry of developing countries with Pakistan as a case in point. You are requested to fill the questionnaire attached. It will take 15-20 minutes of your precious time. Your response will be highly acknowledged. The questionnaire is attached. Once the survey has been filled, kindly reply to this discussion with the updated response file.
Thanks
Relevant answer
Answer
  • asked a question related to Operations Research
Question
2 answers
Dear Pierre Le Bot, thank you for introducing the resources. May I ask two questions? 1. What is the practical implication of CICA? Please mention some CICAs for a scenario in which a worker's hand is cut off due to the conveyor belt sticking. 2. After multiplying the results of the 3 parameters (no-reconfiguration probability, SF, CICA) and obtaining a probability number, how is the obtained probability number interpreted? Regards
Relevant answer
Answer
Hello Amid,
very sorry to see only today that you asked me a question two years ago !
My answer :
1. What's the practical implication of CICA? and please mention some CICA for worker's hand cut off senario due to conveyor belt stiking in
MERMOS is built at the level of the failure of the working team, so I do not know how to consider that event without more information. Our assumption is that failure happens because a rational teamwork behavior is no longer adequate. The analyst describes with CICAs the behaviors of the team that are at the center of the failure story being quantified. Here the CICAs could be: 1. the team (or the worker, if there is no team) wants to fix the conveyor; 2. the team wants to save time by avoiding switching off the conveyor's power source. Those two elements are enough to explain the story.
2. After multiplying the results of 3 parameters (No reconfiguration probability -sf -cica) and obtaining a probability number , how the obtained probability number is interpreted?
The three elements (situation, CICAs, non-reconfiguration) are the elements required for the failing scenario to exist (the situation generating the CICAs, the CICAs lasting too long without reconfiguration). They have to be conditional: the situation-features probability is the conditional probability that that particular situation occurs given the general context of the analysis; the CICAs occur with a given probability given that situation; and the non-reconfiguration depends on both. The analyst has to try to build a plausible scenario by setting the probability of the CICAs to 1. That means he has to describe the situation features precisely (the more precise they are, the lower the probability). The non-reconfiguration takes into account the recovery induced by the MMI and the organisation (redundant roles, verification by procedures ...).
If these explanations are still useful but not enough do not hesitate to ask me again.
  • asked a question related to Operations Research
Question
4 answers
Hello,
I submitted a short communication to Operations Research Letters on the 8th of July 2019. After some days waiting for an editor to be assigned, it reached the "with the editor" stage. It is still in that stage, and has been for 38 days (as of today).
Looking online, I only found one mention of the review-process time, on SCImago, obtained from voluntary contributions by authors (i.e., no official data), and it read 18 months. Looking at previous issues of ORL, I noticed that the time elapsed between a letter being "Received" and being "Accepted" ranges from 4 to 8 months.
I would like to ask if anyone has ever published in ORL, and how much time it took to move from the "with editor" state to the "invited reviewers" stage.
Thank you in advance
EDIT: wrote "reviewer" instead of "editor"
Relevant answer
Answer
In my case, the first review took ten weeks, the second took two weeks and came with the acceptance, then ten days to appear online, and two months to be published afterward. Very fast, in my view.
  • asked a question related to Operations Research
Question
8 answers
In my research work, I want to construct a mathematical programming model for a supply chain network problem. I have assumed the production cost to be linear in nature. Is this assumption correct, or should I change it? Please advise, with a valid justification.
Relevant answer
Answer
Dear Sir, it depends on many factors of the environment, such as demand, product nature and product life cycle; the cost may be linear, quadratic, exponential, stock-dependent, etc.
  • asked a question related to Operations Research
Question
9 answers
My problem has the following characteristics:
1. More than a thousand constraints and variables;
2. It is purely 0-1 programming, i.e. all variables are binary;
3. Kindly note that I am not a good programmer.
Please provide me some links of books or videos discussing application of GA in Matlab for solving 0-1 programming with large number of variables and constraints.
I have gone through many YouTube videos, but they take examples with only two or three variables and without integer restrictions.
Relevant answer
Answer
Simple:
1) Open optimization toolbox
2) Select GA solver (and enter your objective function and constraint file detail)
3) Set the lower bound to 0 and the upper bound to 1 (these bounds must be given for every variable)
4) Under integer variables, give the indices of all the variables whose values must be either 0 or 1
5) Run
You can get the result.
You may require basic knowledge of how to use optimization toolbox.
Hope that you can do it now.
Sincerely,
Alam
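The toolbox steps above hide what a GA actually does with binary variables. For intuition, here is a hand-rolled binary GA sketch in Python on a toy knapsack-style 0-1 problem (all data made up; the MATLAB toolbox packages the same ingredients): constraints via a penalty, elitist selection, one-point crossover, and bit-flip mutation.

```python
import random

random.seed(0)

# Toy 0-1 problem: maximize total value of selected items subject to
# a capacity constraint, handled with a simple penalty term.
values  = [10, 6, 8, 4, 7, 3]
weights = [ 4, 3, 5, 2, 3, 1]
CAP = 10
N = len(values)

def fitness(x):
    v = sum(vi for vi, xi in zip(values, x) if xi)
    w = sum(wi for wi, xi in zip(weights, x) if xi)
    return v - 100 * max(0, w - CAP)   # heavy penalty for infeasibility

def evolve(pop_size=30, gens=60, pm=0.1):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [xi ^ (random.random() < pm) for xi in child]  # bit flips
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

For more than a thousand binary variables a penalty this naive usually struggles; repair operators or the solver's own constraint handling (as in the MATLAB `ga` steps above) tend to work better.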
  • asked a question related to Operations Research
Question
5 answers
I want to assign n different customers to m different stores (with n > m), and at the same time I want to do vehicle routing between the stores and the customers. A customer can be assigned to only one store, but a store can serve many customers; the maximum number of customers it can serve is p. I need to find the minimum number of vehicles required to do this.
Relevant answer
Answer
The web is full of models and methods on this topic. If you create a Google search string with the most important terms you would like the articles or web pages to contain, you will probably find exactly what you want.
  • asked a question related to Operations Research
Question
9 answers
What is your opinion about the use of qualitative methods in research (e.g. case studies, action research) in the Operations Management field?
Relevant answer
Answer
Guilherme - my opinion is the same as it is for any discipline. Qualitative insight is just as important as quantitative. To me, regardless of topic, it is a limited worldview if we only view our disciplines according to numerical outcomes, i.e. 'how often does something happen', and don't complement that with narrative insight, i.e. 'what is the experience of what happened'. Better still, for me, is that we don't 'divorce' qualitative from quantitative if we can avoid it - especially by adopting mixed-methods approaches. We also need to be clear about paradigm positions. For instance, you classify action research as qualitative here. I don't. Action research, to me, is mixed methods (the 3rd paradigm). Action research can contain as many quantitative methods as qualitative ones, if not more. The same goes for Delphi - which can be more quantitative than qualitative - yet is still often classified as qualitative.
The two attached resources may assist. One - an article that links action research with project management - a common approach in operations management. The other is a mixed methods chapter - containing action research. Note that it is not a qualitative research chapter.
  • asked a question related to Operations Research
Question
1 answer
In most robust optimization models, uncertain parameters are assumed to be independent. For example, Bertsimas and Sim or Ben-Tal and Nemirovski argued that it is too conservative to assume that all of the uncertain parameters in a problem simultaneously take their worst values, and for this reason they introduced their famous uncertainty sets. However, if there is some correlation between the uncertain parameters, most of them taking their worst values will not be so unexpected. Furthermore, if all parameters are perfectly correlated, we would expect that if one of them takes its worst value, all the others do the same. Therefore I think the approaches of Bertsimas and Sim or Ben-Tal and Nemirovski are suitable only under the assumption of independence of the parameters. Is that true? Can anyone advise me on this issue?
Relevant answer
Answer
Dear, did you figure out something about your question?
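To see what the Bertsimas-Sim budget set does, here is a toy computation of a constraint's worst-case value when at most Gamma coefficients deviate (integer Gamma only; the original formulation also allows a fractional budget, and all numbers here are made up). Under full correlation one would effectively need Gamma = n, which is exactly the conservatism the budget is designed to avoid:

```python
# Worst-case value of sum(a_i * x_i) under a Bertsimas-Sim budget set:
# each coefficient a_i may deviate by up to d_i, but at most Gamma of
# them deviate simultaneously. If the parameters were correlated, the
# "at most Gamma" cap would no longer reflect reality, which is the
# concern raised in the question above.
def worst_case(a, d, x, gamma):
    nominal = sum(ai * xi for ai, xi in zip(a, x))
    # Deviations hurt most when applied to the largest d_i * |x_i|.
    impacts = sorted((di * abs(xi) for di, xi in zip(d, x)), reverse=True)
    return nominal + sum(impacts[:gamma])

a = [2.0, 3.0, 1.0]   # nominal coefficients
d = [0.5, 1.0, 0.2]   # maximum deviations
x = [1.0, 1.0, 1.0]
print(worst_case(a, d, x, 1))  # budget of one deviation: 6.0 + 1.0 = 7.0
print(worst_case(a, d, x, 3))  # fully correlated worst case: ~7.7
```

The gap between the two printed values is the conservatism saved by a small budget; with strongly correlated parameters that saving may be unjustified.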
  • asked a question related to Operations Research
Question
6 answers
For the application of Industry 4.0 and hence making the machine self aware, what optimization techniques could be used for a machining process? (Preferably please explain a mathematical model regarding the same or a case study)
Relevant answer
Answer
Sometimes, in order to know which parameters are important, it is necessary to use a methodology such as design of experiments. Although it is time-consuming, it yields better results.
  • asked a question related to Operations Research
Question
4 answers
I am an undergraduate student in Production and Industrial Engineering, looking for a research proposal for applying to a doctoral program. It would also be great if you could suggest some reading. Any suggestions?
Thank you for your time.
Relevant answer
There are a number of research works in the area of FMS scheduling. I would suggest that you take a broader view of scheduling challenges under different types and levels of automation. It would be instructive to look at the practical utility of the FMS concept in today's manufacturing environments. Please think through the features of an FMS and their impact on scheduling rules and their preferences.
  • asked a question related to Operations Research
Question
3 answers
For a multi-objective problem with uncertainty in demand, consider the scenario tree (attached herewith) for a finite planning horizon consisting of three time periods. It's a two objective minimization problem in which the augmented e-constraint method is utilized to obtain Pareto optimal solutions (POS).
In the time period T1, only the mean demand is considered. Then in T2, demand follows a certain growth rate for a scenario with expected probability of growth for each scenario. Similar trend is outlined for T3.
The deterministic counterpart envisaged for the problem is a set of time periods with specific pattern of growth rate for mean demand - say 15% in T1, 10% in T2 and 10% in T3.
I want to draw out a comparison of the POS obtained from the stochastic and deterministic analysis. What is the best way to proceed in order to give the decision maker a whole picture of the POS with the scenario and time period considered in both type of analyses?
Do I obtain POS sets for all the 13 scenarios from T1 to T3, or just the 9 scenarios in T3? It'd mean 13 or 9 Pareto fronts for the stochastic analysis alone. In other words - a Pareto front with POS for each time period and scenario! How do I compare whatever I obtained from the stochastic analysis with the deterministic one?
Once again, the aim is to analyze the stochastic analysis and draw out a comparison of the POS obtained from the stochastic and deterministic analysis for the time periods and scenarios considered.
Comments on the aforementioned approach and recommendations for alternatives are appreciated.
Relevant answer
Answer
The problem seems to be small. Under such conditions, classical approaches are appropriate.
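Whichever set of fronts is reported (13 or 9), a first step in comparing the stochastic and deterministic runs is to pass each run's solutions through a non-dominated filter. A minimal sketch for a bi-objective minimization problem (the points are made up):

```python
# Pareto filter for a bi-objective minimization problem: keep only the
# points not dominated by any other point. A helper like this lets the
# fronts from the stochastic and deterministic analyses be overlaid
# per scenario / time period for the decision maker.
def pareto_front(points):
    front = []
    for p in points:
        dominated = any(q != p and q[0] <= p[0] and q[1] <= p[1]
                        for q in points)
        if not dominated:
            front.append(p)
    return sorted(set(front))

pts = [(1, 9), (2, 7), (3, 8), (4, 4), (6, 3), (7, 5), (8, 2)]
print(pareto_front(pts))  # [(1, 9), (2, 7), (4, 4), (6, 3), (8, 2)]
```

One common comparison is then to merge the stochastic and deterministic fronts and re-filter: the fraction of surviving points from each run indicates which analysis produced the stronger trade-off curve.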
  • asked a question related to Operations Research
Question
4 answers
I have a project in operations research. In my case I have several different vehicles, but one source and one target. The vehicles have costs and must be assigned to certain areas; for example, vehicle 1 must carry product type A, vehicle 2 must carry product type B, etc. But all products are stored in the same place. I cannot identify the problem type for this case.
Relevant answer
Dear Barış Karakum, in the first OR courses, we usually teach the students, that the important thing is to solve the problems, not apply models.
Thus, unless it is for theoretical purposes, the model that is applying is not important. The important thing is that the model you constructed solves the problem situation. Whether or not it belongs to a certain type is irrelevant.
In any case, even if it has only one source and only one destination, if it has several types of vehicles, especially if they have different characteristics, it can be considered as a transportation problem. Vehicles are transformed into sources or destinations, as appropriate. Here in Research Gate there are several articles of ours that discuss the problem of multiple transports, which may be useful to you.
We hope many successes in your research. Best regards,
José Hernández.
  • asked a question related to Operations Research
Question
25 answers
Is Shannon entropy a good technique for weighting in Multi-Criteria Decision-Making?
As you know, we use Shannon entropy for weighting criteria in multi-criteria decision-making.
I think it is not a good technique for weighting in the real world because:
It just uses the decision matrix data.
If we add some new alternatives, the weights change.
If we change the period of time, the weights change.
For example, we have 3 criteria: price, speed, safety.
Over several periods of time, the weights of the criteria vary.
For example, if our period of time is one month:
This month price may get 0.7 (speed = 0.2, safety = 0.1)
Next month price may get 0.1 (speed = 0.6, safety = 0.3)
This is against reality! What is your opinion?
Relevant answer
Answer
Once I was working on several variables and I wanted to weight them. In such situations, people usually say that we had better provide a questionnaire and then define the weights through AHP, ANP or other related methods. That is quite common, but what about the bias of those who fill in the questionnaire? Therefore, I looked for other methods to weight variables based on reality, and I came across entropy. In fact, I weighted the variables with each of these methods and then compared the results. The entropy results were much closer to what goes on in the real world.
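For reference, the entropy weighting being discussed takes only a few lines. A minimal sketch with made-up data for the price/speed/safety example (benefit/cost normalization is omitted for brevity); it also makes the question's objection concrete, since the weights depend only on the matrix:

```python
import math

# Shannon-entropy criteria weighting: weights come only from the
# decision matrix, so adding alternatives or changing the observation
# period changes the weights, as the question points out.
def entropy_weights(matrix):
    m = len(matrix)                     # alternatives
    n = len(matrix[0])                  # criteria
    k = 1.0 / math.log(m)
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [v / s for v in col]        # column-normalized shares
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divergence.append(1.0 - e)      # low entropy -> high divergence
    total = sum(divergence)
    return [d / total for d in divergence]

# 3 alternatives x 3 criteria (price, speed, safety): toy data
M = [[200, 5, 7],
     [250, 9, 6],
     [300, 7, 8]]
w = entropy_weights(M)
print([round(x, 3) for x in w])
```

With these numbers the "speed" column is the most dispersed and gets the largest weight; change one alternative and all three weights move, which is exactly the sensitivity debated above.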
  • asked a question related to Operations Research
Question
6 answers
Hi All,
I have modeled an MILP using two different formulations: one formulation uses three indices, while the other uses five. Comparing the solution speed of the two formulations with the same solvers (Gurobi, CPLEX), it turns out that the formulation with five indices is solved faster. I am not sure why this is happening. Has anyone had this experience, or are there any studies related to this available? Please let me know.
Thanks,
Bhawesh
Relevant answer
Answer
Of the same problem? Yes, oh yes. In large-scale integer or mixed-integer optimization, this is very common. And indeed, quite often a "richer" formulation (which typically means MANY more integer variables) gives the solver more alternatives. Quite often the richer formulation will have a smaller duality gap, too.
  • asked a question related to Operations Research
Question
7 answers
Can numbers (the Look then Leap Rule OR the Gittins Index) be used to help a person decide when to stop looking for the most suitable career path and LEAP into it instead or is the career situation too complicated for that?
^^^^^^^^^^^^^^^^^
Details:
Mathematical answers to the question of optimal stopping in general (When you should stop looking and leap)?
Gittins Index , Feynman's restaurant problem (not discussed in details)
Look then Leap Rule (secretary problem, fiancé problem): (√n , n/e , 37%)
How do apply this rule to career choice?
1- Potential ways of application:
A- n is Time .
Like what Michael Trick did https://goo.gl/9hSJT1 . Michael Trick, a CMU Operations Research professor, applied this to decide the best time for his marriage proposal, though he seems to think that this was a failed approach.
In our case, should we do it by age: 20-70 = 50 years, so 38 years old is where you stop looking, for example? Or should we multiply 37% by 80,000 hours to get a total of 29,600 hours of career "looking"?
B- n is the number of available options. Like the secretary problem.
If we have 100 viable job options, do we just look into the first 37? If we have 10, just the first 4? And what if we are still at a stage of our lives where we have thousands of career paths?
2- Why the situation is more complicated in the career choice situation:
A- You can want a career and pursue it and then fail at it.
B- You can mix career paths. If you take option c, it can help you later on with option G. for example, if I went as an IRS, the irs will help me later on if I decide to become a writer so there's overlap between the options and a more dynamic relationship. Also the option you choose in selection #1 will influence the likelihood of choosing other options in Selection 2 (For example, if in 2018 I choose to work at an NGO, that will influence my options if I want to do a career transition in 2023 since that will limit my possibility of entering the corporate world in 2023).
C- You need to be making money so "looking" that does not generate money is seriously costly.
D- The choice is neither strictly sequential nor strictly simultaneous.
E- Looking and leaping alternates over a lifetime not like the example where you keep looking then leap once.
Is there a practical way to measure how the probability of switching back and forth between our career options affects the optimal exploration percentage?
F- There is something between looking and leaping, which is testing the waters. Let me explain. "Looking" here doesn't just mean "thinking" or "self-reflection" without action. It could also mean trying out a field to see if you're suited for it. So we can divide looking into "experimentation looking" and "thinking looking". And what separates looking from leaping is commitment and being settled. There's a trial period.
How does this affect our job/career options example since we can theoretically "look" at all 100 viable job positions without having to formally reject the position? Or does this rule apply to scenarios where looking entails commitment?
G- * You can return to a career that you rejected in the past. Once you leap, you can look again.
"But if you have the option to go back, say by apologizing to the first applicant and begging them to come work with you, and you have a 50% chance of your apology being accepted, then the optimal explore percentage rises all the way to 61%." https://80000hours.org/podcast/episodes/brian-christian-algorithms-to-live-by/
*3- A Real-life Example:
Here are some of my major potential career paths:
1- Behavioural Change Communications Company 2- Soft-Skills Training Company, 3- Consulting Company, 4-Blogger 5- Internet Research Specialist 6- Academic 7- Writer (Malcolm Gladwell Style; Popularization of psychology) 8- NGOs
As you can see the options here overlap to a great degree. So with these options, should I just say "ok the root of 8 is about 3" so pick 3 of those and try them for a year each and then stick with whatever comes next and is better?!!
Relevant answer
Answer
Hey Kenneth Carling , I got this number from page 29 in their book (Always Be Stopping, Chapter 1). They quote research results from Seale & Rapoport (1997) who found that on average their subjects leapt at 31% when given the secretary problem - they say that most people leapt too soon. They also say that there are more studies ("about a dozen") with the same result, which makes it more credible in my view.
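The 37% rule debated in this thread is easy to check by simulation. A minimal Monte-Carlo sketch of the classical secretary problem, without any of the career-specific complications listed in the question:

```python
import math
import random

random.seed(1)

# Monte-Carlo check of the 37% rule: observe the first ~n/e candidates
# without committing ("look"), then "leap" at the first candidate who
# beats everyone seen so far.
def secretary_success_rate(n, trials=20000):
    cutoff = round(n / math.e)            # look phase, about 37% of n
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))            # rank n-1 is the best candidate
        random.shuffle(ranks)
        best_seen = max(ranks[:cutoff], default=-1)
        chosen = next((r for r in ranks[cutoff:] if r > best_seen),
                      ranks[-1])          # otherwise forced to take the last
        wins += chosen == n - 1
    return wins / trials

rate = secretary_success_rate(100)
print(rate)  # close to 1/e, i.e. about 0.37
```

Simulating the 31% cutoff observed by Seale & Rapoport instead gives a visibly lower success rate, which is the "leaping too soon" finding quoted above.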
  • asked a question related to Operations Research
Question
28 answers
There are lots of optimization methods / evolutionary algorithms (EAs) in the literature. Some of them are more effective (for solving linear/nonlinear problems) than others, but we don't know in advance which will fit our model. As a result, we check everything we can and still may not get the desired result. Some of these methods are:
1. Genetic algorithms (GA); Haupt and Haupt (2004)
2. Pattern search (Matlab)
3. Particle swarm optimization (PSO), binary particle swarm optimization (BPSO); Eberhart and Kennedy (1995)
4. Bee optimization; Karaboga and Basturk (2007), Pham et al. (2006)
5. Cuckoo algorithm; Yang and Deb (2009, 2010)
6. Differential evolution (DE); Storn and Price (1995, 1997)
7. Firefly optimization; Yang (2010)
8. Bacterial foraging optimization; Kim, Abraham and Cho (2007)
9. Ant colony optimization (ACO); Dorigo and Stutzle (2004)
10. Fish optimization; Huang and Zhou (2008)
11. Raindrop optimization; Shah-Hosseini (2009)
12. Simulated annealing; Kirkpatrick, Gelatt and Vecchi (1983)
13. Biogeography-based optimization (BBO)
14. Chemical reaction optimization (CRO)
15. Group search optimizer (GSO)
16. Imperialist algorithm
17. Swine flow optimization algorithm
18. Teaching-learning-based optimization (TLBO)
19. Bayesian optimization algorithms (BOA)
20. Population-based incremental learning (PBIL)
21. Evolution strategy with covariance matrix adaptation (CMA-ES)
22. Charged system search optimization algorithm
23. Continuous scatter search (CSS) optimization algorithm
24. Tabu search continuous optimization
25. Evolutionary programming
26. League championship algorithm
27. Harmony search optimization algorithm
28. Gravitational search algorithm
29. Evolution strategies
30. Firework algorithm; Ying Tan (2010)
31. Big-bang big-crunch optimization algorithm; O. K. Erol (2006)
32. Artificial bee colony optimization (ABC); Karaboga (2005)
33. Backtracking search optimization algorithm (BSA)
34. Differential search algorithm (DSA) (a modernized particle swarm optimization algorithm)
35. Hybrid particle swarm optimization and gravitational search algorithm (PSOGSA)
36. Multi-objective bat algorithm (MOBA), binary bat algorithm (BBA)
37. Flower pollination algorithm
38. Wind driven optimization (WDO) algorithm
39. Grey wolf optimizer (GWO)
40. Generative algorithms
41. Hybrid differential evolution algorithm with adaptive crossover mechanism
42. Lloyd's algorithm
43. One rank cuckoo search (ORCS) algorithm: an improved cuckoo search optimization algorithm
44. Huffman algorithm
45. Active-set algorithm (ASA)
46. Random search algorithm
47. Alternating conditional expectation (ACE) algorithm
48. Normalized normal constraint (NNC) algorithm
49. Artificial immune system optimization; Cutello and Nicosia (2002)
50. fmincon
Besides these, there are many other recently invented optimization algorithms, generally called hybrid optimization techniques because they combine two methods. If we share our experiences, it will be helpful for all of us in the field of optimization. I may be missing some methods; researchers are requested to add those algorithms and notes on their use (many models need an initial value, weights, velocity, different ways of writing the objective function, etc.). I am facing some problems, which is why I created this format; it will definitely help me as well as other researchers in this field. Expecting resourceful and cordial cooperation.
Relevant answer
Answer
Dear Mashud,
I have some experience in improving optimization algorithms, like the quantum invasive weed optimization algorithm ( ) and the world cup optimization algorithm ( ).
From my experience, a few points are important when selecting a good optimization algorithm:
1) Check whether your problem can be solved by classic methods; if so, do not go to the metaheuristics.
2) If your problem is NP-hard and cannot be solved by classic methods:
after a lot of testing, I found that there are no major differences among the evolutionary algorithms. Of course, in some cases one algorithm may perform better or run faster, but the advantage is not that pronounced.
3) Two aspects you should pay attention to (in evolutionary algorithms) are exploration and exploitation.
Exploration covers the parts of the search space you have no information about yet, while exploitation takes an approximate solution you already have and refines it into a better one.
These two mechanisms make up the structure of all optimization algorithms. For example, in the genetic algorithm, mutation is an exploration operator and crossover is an exploitation operator.
So based on your requirements, select an algorithm that is strong in the aspect you care about.
good luck,
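As a deliberately minimal illustration of that last point, here is a sketch of a genetic algorithm in which `mutate` plays the exploration role and `crossover` the exploitation role. The OneMax fitness, operator choices, and all parameter values are illustrative, not a reference implementation:

```python
import random

def crossover(parent_a, parent_b):
    # Exploitation: recombine two known-good solutions (one-point crossover).
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(individual, rate=0.1):
    # Exploration: random bit flips probe unvisited regions of the search space.
    return [1 - bit if random.random() < rate else bit for bit in individual]

def evolve(fitness, pop, generations=50):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]          # truncation selection (elitist)
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(len(pop) - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# Example: maximize the number of ones in a 20-bit string (OneMax).
random.seed(0)
best = evolve(fitness=sum,
              pop=[[random.randint(0, 1) for _ in range(20)] for _ in range(30)])
```

Because the parents survive unchanged each generation, the best fitness never decreases; turning the mutation rate up shifts the balance toward exploration, turning it down toward exploitation.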
  • asked a question related to Operations Research
Question
4 answers
There are N tasks and M workers.
  1. For every task-worker pair, the efficiency is known;
  2. For every task one worker must be assigned;
  3. For every worker at least one task must be assigned;
  4. For every worker multiple tasks can be assigned;
  5. Tasks must be grouped (e.g. by location), and for every group the number of workers is fixed. Every worker must be in exactly one group.
Can you suggest an algorithm or approach for optimal (or suboptimal) assignment (maximal efficiency)?
As far as I know:
  1. Without 4. and 5., this problem can be stated as the "Assignment Problem", for which polynomial-time algorithms exist;
  2. Without 4., this problem can be addressed as the "Generalized Assignment Problem", which is NP-hard;
  3. Without 4., and if M = 1, this problem can be addressed as a "0-1 Knapsack Problem".
I can't see how to use any of these to address my problem.
Relevant answer
Answer
Some clarifications are necessary.
1. What is the purpose of efficiency? It is not mentioned later.
2. In point 2, is it exactly one worker, at least one worker, or something else?
3. What quantity do you want to minimize (maximize)?
The problem could be modeled using linear programming. Find the LP model for the assignment problem and try to modify it. For some small instances, state the model by hand and type it into an LP solver like CPLEX; in a few trial-and-error iterations you should be able to formulate the problem. With a correct formulation of the LP problem, you should get optimal values for instances that are not too large.
In general, the formulation should be proved equivalent to the problem, but that is another issue.
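To complement the LP route: for very small instances you can sanity-check a formulation by brute force. The sketch below (the tiny efficiency matrix is made up for illustration) enumerates all one-task-per-worker assignments; realistic instances call for the Hungarian algorithm or an LP/MIP solver instead:

```python
from itertools import permutations

def best_assignment(efficiency):
    """Exhaustively assign worker perm[i] to task i (one task per worker)
    for a square efficiency matrix; returns (best total, assignment).
    Only viable for tiny instances -- the Hungarian algorithm or an LP
    solver handles realistic sizes in polynomial time."""
    n = len(efficiency)
    best_value, best_perm = float("-inf"), None
    for perm in permutations(range(n)):        # perm[i] = worker for task i
        value = sum(efficiency[i][perm[i]] for i in range(n))
        if value > best_value:
            best_value, best_perm = value, perm
    return best_value, best_perm

# 3 tasks x 3 workers, efficiency[task][worker]
value, assignment = best_assignment([[9, 2, 7],
                                     [6, 4, 3],
                                     [5, 8, 1]])
```

Comparing such brute-force optima against the LP solver's answers on small instances is a quick way to catch formulation mistakes.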
  • asked a question related to Operations Research
Question
9 answers
I am looking for recent research questions in Reinforcement Learning (RL) within Artificial Intelligence (AI). I would also like to know where it is applicable. I know it is applied in games, robotics, and Operations Research; are there other areas where it is applied too?
Relevant answer
Answer
  • Reinforcement Learning in Robotics: how can a robot become more human-like? Emotion, reaction, and intelligence comparable to a human's.
  • Safety in Reinforcement Learning: how to avoid unwanted behaviour and reward hacking in RL.
  • Competitive and Cooperative Multi-agent Reinforcement Learning.
  • Data Efficiency: react and make decisions from just a single training example.
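For readers new to RL, the core mechanics behind these research areas can be seen in a minimal tabular Q-learning sketch. The toy chain environment and all hyperparameters below are illustrative choices, not a production agent:

```python
import random

def q_learning_chain(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a toy 1-D chain: start in state 0; action 1
    moves right, action 0 moves left (clamped at 0). Reaching the last
    state pays reward 1 and ends the episode."""
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection (exploration vs exploitation)
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s_next = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            # one-step temporal-difference update
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

random.seed(1)
Q = q_learning_chain()
# greedy policy for the non-terminal states (1 = move right)
greedy = [0 if Q[s][0] > Q[s][1] else 1 for s in range(4)]
```

After training, the greedy policy moves right everywhere, since the discounted value of heading toward the reward dominates backtracking.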
  • asked a question related to Operations Research
Question
3 answers
There is a need to automate several industrial tasks, which may require a number of humans and robots to perform them; some can be done only by robots. Say there is a task X. My output looks like: task X can be done if about 4 robots are assigned to it, or 1 human and 1 robot. My input will describe the task, based on which an algorithm will compute the desired output.
So basically, could you share some research where the resource requirements of industrial tasks are modeled mathematically or even empirically? Or could you point to existing algorithms, in industrial engineering or elsewhere, that tackle the problem of identifying how many resources need to be allocated to a task to finish it successfully?
Relevant answer
Answer
I will follow the answers.
Regards
  • asked a question related to Operations Research
Question
7 answers
I have started programming the binary bat algorithm to solve the knapsack problem, but I misunderstand the position concept in binary space:
Vnew = Vold + (Current - Best) * f;
S = 1 / (1 + Math.exp(-Vnew));
X(t+1) = { 1 if S > Rnd, 0 if Rnd >= S }
The velocity update uses both the position from the previous iteration (Current) and the global best position (Best). In the continuous version of BA the position is a real number, but in the binary version the position of a bat is represented by a binary number; in the knapsack problem it means whether the item is selected or not. In the binary version, a transfer function transforms the real-valued velocity into a binary position. I am confused whether the position in BBA is binary or real. If binary, then (Current - Best) can only be 1 - 0, 0 - 1, 1 - 1, etc.; if real, how do I get the continuous representation when there is no continuous position-update equation (in the original BA the position update is X(t+1) = X(t) + Vnew)?
Relevant answer
Answer
Unless you are doing just an "exercise", I discourage you from trying "new" metaheuristics for knapsack. Besides being a widely studied problem, there are very good knapsack-specific algorithms. Check David Pisinger's webpage for codes and test-instance generators.
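On the representation question itself: in the usual binary BA/PSO scheme, the velocity stays real-valued while the position is binary, and the sigmoid transfer function turns the velocity into a probability of setting the bit to 1, so (Current - Best) being a difference of bits is expected. A hedged one-dimensional sketch (parameter names are illustrative):

```python
import math
import random

def update_bit(v_old, current_bit, best_bit, f, rng=random.random):
    """One-dimension update in a typical binary bat/PSO scheme: the
    velocity is real-valued, the position is binary, and the sigmoid
    S(v) gives the probability that the new bit is 1. (Current - Best)
    is a difference of bits, i.e. one of -1, 0, 1 -- that is expected,
    not a bug."""
    v_new = v_old + (current_bit - best_bit) * f   # real-valued velocity
    s = 1.0 / (1.0 + math.exp(-v_new))             # transfer to (0, 1)
    x_new = 1 if s > rng() else 0                  # stochastic binary position
    return v_new, x_new

# Deterministic demo: fix the random draw at 0.5.
v, x = update_bit(v_old=0.0, current_bit=1, best_bit=0, f=2.0, rng=lambda: 0.5)
```

So the answer to the question is: the velocity is real, the position is binary, and no continuous position is ever needed, because the transfer function bypasses the original X(t+1) = X(t) + V update.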
  • asked a question related to Operations Research
Question
7 answers
In the CRS model, the input- and output-oriented objective function values are reciprocals of each other. Why is this not the case in VRS?
Relevant answer
Answer
It is not sufficient to attribute it just to the difference in orientation between the CCR and BCC models, since "maximizing the outputs subject to the given inputs" and "minimizing the inputs subject to the given outputs" under the CRS assumption provide reciprocal efficiency scores. Thus, we first have to consider the additional free-in-sign variable in the multiplier BCC model and the additional constraint on the lambdas (Σλ=1) in its envelopment counterpart. This changes the shape of the frontier (the BCC efficient frontier) and allows, besides CRS, for increasing and decreasing returns to scale. The above, in conjunction with the orientation that guides the projections, yields non-reciprocal efficiency scores between input and output orientation under the VRS assumption.
For a schematic representation of the above, check the attached image.
  • asked a question related to Operations Research
Question
4 answers
In optimization problems we often obtain a local optimum, but is it global? Are there meta-heuristic algorithms that can obtain a global solution? If so, what are they called and, if possible, how can we get that solution?
Relevant answer
Answer
I wish not to let you down, but the basic answer is "very seldom", and another is "you will not know whether you have stumbled upon an optimal solution, because there is no natural termination criterion based on the concept of optimality". (In contrast, a branch-and-bound or branch-and-cut methodology is based on local AND global bounds on the optimal value generated throughout the procedure; in most cases a correct procedure will fix some variables to their optimal values before termination and will be able to discard a very large portion of the search space as infeasible or inferior, in which case we do know for sure that an optimum has been reached.)
If the structure of the problem makes it amenable to special methods, such as Benders decomposition (when you have a mix of integer and continuous variables), you also have a fail-proof method.
  • asked a question related to Operations Research
Question
13 answers
Dear friends and colleagues,
I have an optimization model that contains a nonlinear term of the form
x(t) * a(k)
where x and a are variables: a is a binary variable, and the sets over which the two variables are defined are not the same. Could you please suggest a method to handle this term and transform my model into a mixed-integer linear program?
Thank you for your suggestions.
Relevant answer
Answer
Olivér Ősz is right.
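For completeness, the standard linearization of a product z = x·a of a binary variable a and a continuous variable x with a known upper bound U (indices omitted; U is assumed to be available) introduces a new variable z and the constraints:

```latex
z \le U a, \qquad z \le x, \qquad z \ge x - U(1 - a), \qquad z \ge 0
```

When a = 0 the constraints force z = 0, and when a = 1 they force z = x, so z can replace the bilinear term everywhere in the model.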
  • asked a question related to Operations Research
Question
13 answers
Is it possible to manage the supply chain in a more effective way?
Relevant answer
Answer
Dear Abu Hashan Md Mashud ,
Kindly clarify your question.
  • asked a question related to Operations Research
Question
3 answers
For an optimization problem of fuel delivery from a depot to petrol stations, the solution approach is to use a tabu neighborhood search to solve the model (the objective is to minimize the delivery cost). How can this be done in Lingo or GAMS?
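While Lingo/GAMS are built for exact algebraic models, tabu search is usually coded in a host language that calls the model for cost evaluation. A generic, hedged sketch of the loop on a toy integer problem (a real fuel-delivery application would encode routes/loads as the solution and delivery cost as f):

```python
def tabu_search(f, start, neighbors, iters=100, tenure=5):
    """Generic tabu search: move to the best non-tabu neighbor each
    iteration (even if it is worse), keeping recently visited points
    tabu so the search can escape local minima. Illustrative only."""
    current, best = start, start
    tabu = [start]
    for _ in range(iters):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=f)
        if f(current) < f(best):
            best = current
        tabu.append(current)
        tabu = tabu[-tenure:]                  # fixed-length tabu list
    return best

# Toy: minimize a 1-D quadratic over the integers, neighborhood = +/-1.
best = tabu_search(f=lambda x: (x - 3) ** 2, start=20,
                   neighbors=lambda x: [x - 1, x + 1])
```

The tabu list is what distinguishes this from plain local search: after reaching the minimum, the search is pushed away from it but the incumbent `best` is retained.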
  • asked a question related to Operations Research
Question
11 answers
In multi-objective optimization problems we often say that the objective functions are conflicting in nature. In what sense are the objective functions said to conflict with each other? Also, how could it be shown numerically that the objective functions in our multi-objective problem are conflicting?
Relevant answer
Answer
If you are, for example, designing an automobile, you might have many objectives. E.g., you want the automobile to be big (spacious) inside, but also to have smaller outside dimensions so it is easier to park. Obviously, inside size and outside size are closely dependent: you cannot considerably increase the inside space without increasing the outer size. So there you have conflicting objectives.
In short, if improvement in one objective leads to worsening of the other, then the two objectives are in conflict.
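One simple numerical check of that definition: sample feasible solutions and verify that no sampled point dominates another, i.e. every strict improvement in one objective worsens the other. A sketch on a textbook bi-objective problem (sampling gives evidence of conflict, not a proof):

```python
def is_conflicting(solutions, f1, f2):
    """Return True if, among the sampled solutions, no point dominates
    another (both objectives minimized) -- i.e. any strict improvement
    in f1 comes with a strict worsening of f2."""
    points = [(f1(s), f2(s)) for s in solutions]
    for a1, a2 in points:
        for b1, b2 in points:
            # a dominates b: at least as good in both, strictly better in one
            if (a1 <= b1 and a2 <= b2) and (a1 < b1 or a2 < b2):
                return False
    return True

# Schaffer-style example: f1 = x^2 and f2 = (x-2)^2 conflict on [0, 2].
xs = [i / 10 for i in range(21)]               # samples 0.0 .. 2.0
conflict = is_conflicting(xs, lambda x: x * x, lambda x: (x - 2) ** 2)
```

On [0, 2], moving x toward 0 improves f1 and worsens f2 (and vice versa), so every sampled pair is mutually non-dominated and the check returns True; with two identical objectives it returns False.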
  • asked a question related to Operations Research
Question
18 answers
Please suggest recent topics or research in operations research (management science).
Thanks in advance.
Relevant answer
Answer
Supply Chain Design and Modeling using Global Optimization Techniques....
  • asked a question related to Operations Research
Question
7 answers
We all know that data is very important in decision-making processes, and it is also obvious that raw data cannot, by itself, lead to a precise decision.
Operations Research (OR) can help transform data into useful information from which accurate decisions can be made. Decision Support Systems are good examples in which this transformation and decision-making process occurs.
Health care problems have attracted considerable attention, and researchers are using OR tools to solve them.
Now a critical question: what are the main trends in applying operations research tools to healthcare problems?
Relevant answer
Answer
Thank you, Behzad, for sharing such a significant question. The answers to this question are important for me too; I will follow this.
  • asked a question related to Operations Research
Question
13 answers
In the primal form of the DEA CRS model, we maximize the efficiency of the reference firm. How is it, then, that in the dual form we are minimizing it? What is the physical interpretation of the dual version? Please see the attached file.
Relevant answer
Answer
There are two things to keep in mind when dealing with DEA models: input-oriented and output-oriented objective functions. In the input orientation you minimize the ratio of inputs to outputs; in other words, you seek the minimum input combination that produces a specific level of output. In the output orientation you maximize the output combination attainable at a specific level of inputs. Your case of theta minimization refers to the first case mentioned above.
  • asked a question related to Operations Research
Question
4 answers
I am looking for examples of the combination of ABM, MO optimization, and game theory, preferably the ones that have been used for practical purposes.
Relevant answer
Answer
Dear Ali,
The answer provided by Dr. Lafifi is recommendable. Please follow.
  • asked a question related to Operations Research
Question
21 answers
Which of the following is not a component of inventory carrying cost?
i) capital cost
ii) transportation cost
iii) insurance cost
iv) obsolescence cost.
Relevant answer
Answer
transportation cost
  • asked a question related to Operations Research
Question
5 answers
Which of the following advantages does the implementation of JIT offer?
(i) work-in-process increases
(ii) rework reduction
(iii) decreased profit margins
(iv) increase in variability to better respond to variable demand.
Relevant answer
Answer
Which advantage does JIT offer?
Does JIT refer to Just-In-Time? If yes, I agree with the points above except "decreased profit margins" -- why would JIT decrease the profit margin? Is it because JIT can reduce production cost, so the product can be sold at a lower price as a competitive strategy?
Another advantage is reduced (or minimal) warehouse/storage cost for finished goods.
  • asked a question related to Operations Research
Question
5 answers
Hello all. Suppose a country wants to lay down railway tracks (assuming there are no tracks already). If the goal is to maximize connectivity and increase flow, are there any known mathematical models for such a problem?
Relevant answer
Answer
Actually, there is no single predetermined model. I think the best way to model and optimize this problem is to use transportation software designed to simulate it.
  • asked a question related to Operations Research
Question
14 answers
I am trying to optimize a function that is non-linear in its parameters, three in number, using Genetic Algorithms (GA). Thus, I have a function of time that is non-linear in three parameters, fitted to time-series data. I am using the ga() function of the GA package in R for this purpose. However, as I see it, the initial values that I set for the parameters heavily influence the parameters computed by the ga() function. I also read the following article:
Scrucca, L. (2013). GA: a package for genetic algorithms in R. Journal of Statistical Software, 53(4), 1-37.
In section 4.4 Curve Fitting, if I use the following initial values (min, max): a(1000, 10,000), b(0,10), and c(0.5, 10) instead of the ones used, that is a(3000, 4000), b(0, 1) and c(2, 4) I get completely different results from the ones obtained by the paper. I get a=2772, b=.0235, c=4.07 as against a= 3534.741, b=0.01575454, c= 2.800797 in the paper.
My understanding is that global optimization techniques such as GA should find the global optimum irrespective of initial values, although it might take more or fewer iterations depending on the initial parameter values. Why is this not happening in the case of my function, and also in the example I cited?
Thanking you all in advance.
Relevant answer
Answer
No metaheuristic guarantees a globally optimal solution unless you run a near-infinite number of iterations. Also, because GAs are randomized, the solution obtained may differ from run to run. If you can use another method to obtain a good approximation to the optimal solution, a GA can then improve it.
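The start-interval sensitivity described in the question is typical of any stochastic search on a multimodal landscape. A common remedy is multistart: run the search from several starting points and keep the best result. A deterministic toy illustration using plain hill climbing (not the GA package itself):

```python
def hill_climb(f, x, lo, hi):
    """Greedy descent on the integers: move to the better neighbor until
    stuck. Gets trapped in whichever basin the start point lies in."""
    while True:
        nbrs = [n for n in (x - 1, x + 1) if lo <= n <= hi]
        best_n = min(nbrs, key=f)
        if f(best_n) >= f(x):
            return x
        x = best_n

# Two-basin function on [0, 40]: local minimum at x=5 (f=10),
# global minimum at x=30 (f=0).
f = lambda x: min((x - 5) ** 2 + 10, (x - 30) ** 2)

trapped = hill_climb(f, 0, 0, 40)          # starts in the wrong basin
multistart = min((hill_climb(f, s, 0, 40) for s in range(0, 41, 10)), key=f)
```

A single run from x=0 stops at the local minimum 5, while restarting from a grid of points finds the global minimum 30; the same logic applies to choosing several GA initial ranges (or seeds) and keeping the best fit.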
  • asked a question related to Operations Research
Question
27 answers
What is the difference between a research monograph and a book or article?
Thanks
Relevant answer
Answer
This might also be useful to you: a slide, "How to Write a Research Monograph: Basics" (https://www.slideshare.net/NasreddineELGUEZAR/how-to-write-a-research-monograph-basics).
  • asked a question related to Operations Research
Question
3 answers
Over a single link, SRPT is known to be the optimal policy for minimizing mean completion times. However, is that also true in a network setting with many sources and destinations? Is it possible that Fair Sharing (based on max-min fairness) performs better in a network scenario (considering mean completion times)?
Relevant answer
Answer
While experimenting with traffic scheduling on multicast trees, we found that in some cases, the inter-dependency across multicast flows can result in fair sharing offering better mean times (and tail) than SRPT and FCFS.
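For intuition on the single-link case: when all jobs are present at time 0, SRPT reduces to shortest-processing-time-first, whose mean completion time is provably minimal. A quick numeric check (job sizes are illustrative):

```python
def mean_completion(sizes):
    """Mean completion time when jobs run to completion in the given
    order on a single link, all arriving at time 0."""
    t, total = 0, 0
    for s in sizes:
        t += s              # this job finishes at cumulative time t
        total += t
    return total / len(sizes)

jobs = [10, 2, 1]
fcfs = mean_completion(jobs)            # serve in arrival order
srpt = mean_completion(sorted(jobs))    # SRPT == SPT when all arrive together
```

Here FCFS yields completions 10, 12, 13 (mean 35/3) versus SRPT's 1, 3, 13 (mean 17/3); the network question above is exactly about whether flow inter-dependencies can overturn this single-link advantage.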
  • asked a question related to Operations Research
Question
4 answers
I have a 30*40 matrix. Let's say the entries of the matrix are denoted "P", and the row and column of each "P" are denoted "X" and "Y" respectively. I have a model whose output should give us P, X and Y. How can I define constraints (for solving with simplex) that connect each P with its exact X and Y? I want to say, for example:
if X=1 and Y=1 then P= 0.1
if X=1 and Y=2 then P= 0.5
if X=1 and Y=3 then P= 0.8, and so on.
I don't want the model to return a P that does not match its location in the matrix. How can I achieve this?
Everything is known except P(k), X and Y.
Relevant answer
Answer
Your constraint relating X and Y with P is inherently non-convex (unless P has some special property), so it looks like binary variables are indeed necessary.
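Concretely, one such binary formulation (with hypothetical auxiliary binaries b_xy, one per cell of the 30x40 matrix, exactly one of which is selected) ties P, X and Y to the chosen cell:

```latex
\sum_{x=1}^{30}\sum_{y=1}^{40} b_{xy} = 1, \qquad
X = \sum_{x,y} x\, b_{xy}, \qquad
Y = \sum_{x,y} y\, b_{xy}, \qquad
P = \sum_{x,y} P_{xy}\, b_{xy}, \qquad
b_{xy} \in \{0,1\}
```

Since exactly one b_xy equals 1, P automatically takes the value stored at row X, column Y, so the model can never return a mismatched triple.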
  • asked a question related to Operations Research
Question
4 answers
Does anyone know of publicly available large/huge data envelopment analysis (DEA) instances?
Relevant answer
Answer
I have uploaded DEAwithCrossEfficient.pdf, which is the first chapter of a book on DEA with the CCR model, inverted DEA models, and cross-efficiency, published via Amazon.
It explains how to build LINGO sets models and DEA.
  • asked a question related to Operations Research
Question
4 answers
The project life cycle has four phases: (i) initiation, (ii) planning/design (work breakdown structure), (iii) execution/control, and (iv) closing. In which of these four phases does the project purchasing process take place?
For example, if the ABC railway company issues a tender to construct a bridge, then:
1. In which phase will the project agreement (purchase) be sanctioned with the DEF construction company?
2. In which phase are project identification and selection made?
3. In which phase are the statement of work and the project appraisal (for the proposed project) done?
Relevant answer
Answer
What you mention is actually project procurement management, which can be divided into four process groups: planning, executing, monitoring, and closing. I suggest you refer to the PMBOK Guide, since it is widely used as a standard reference of industry best practices for project management.
  • asked a question related to Operations Research
Question
23 answers
I am working on a small project to apply Operations Research knowledge to daily life. Do you have fun or brilliant ideas? Please share!
Thank you
Relevant answer
Answer
3-D bin packing: when packing groceries or your suitcase, building your bookshelf or arranging your books in it, storing hobby supplies, or stacking the leftovers in the basement. :-)
People who are on a diet can optimize it so that it matches the nutritional values and maximizes its tastefulness - or minimizes its cost. (The diet problem is a classic LP one.)
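To make the diet example concrete, here is a miniature version with made-up foods and nutrient numbers; brute force over small integer servings stands in for a real LP solver:

```python
from itertools import product

# cost per serving, (protein g, calories) per serving -- illustrative numbers
foods = {"rice":  (0.5, (3, 200)),
         "beans": (1.0, (8, 150)),
         "eggs":  (1.5, (12, 90))}

def cheapest_diet(min_protein=30, min_calories=700, max_servings=6):
    """Enumerate integer serving combinations and keep the cheapest one
    meeting both nutrient floors. A real diet problem would use LP
    (possibly with continuous servings)."""
    names = list(foods)
    best = None
    for servings in product(range(max_servings + 1), repeat=len(names)):
        cost = sum(s * foods[n][0] for s, n in zip(servings, names))
        protein = sum(s * foods[n][1][0] for s, n in zip(servings, names))
        cals = sum(s * foods[n][1][1] for s, n in zip(servings, names))
        if protein >= min_protein and cals >= min_calories:
            if best is None or cost < best[0]:
                best = (cost, dict(zip(names, servings)))
    return best

cost, plan = cheapest_diet()
```

With these numbers the cheapest feasible plan is 2 servings of rice and 3 of beans at a cost of 4.0, hitting the protein floor exactly.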
  • asked a question related to Operations Research
Question
32 answers
In transportation problem, which method gives the best result: North-West, Row Minima, Column Minima, Least Cost or Vogel’s Approximation (VAM)?
Relevant answer
Answer
To add to @Sanghita Basu's answer: in case of degeneracy, the small amount is to be added to a cell that has the smallest cost and does not form a loop with the other allocated cells. As for Vogel's Approximation Method, it gives the best initial basic feasible solution (which can still be degenerate); the other methods are equal or inferior to Vogel's. Moreover, degeneracy can occur with any method: whenever an individual demand quantity equals an individual supply quantity, degeneracy will take place. BTW, whatever one gets as the initial BFS, the MODI method will lead to optimality.
  • asked a question related to Operations Research
Question
5 answers
In the branch-and-bound algorithm, if the linear relaxation of the problem yields fractional values for more than one decision variable, which variable should be chosen for the next branching step?
If one of the fractional variables is chosen arbitrarily, is the optimal solution still guaranteed?
Relevant answer
Answer
Always branching on the variable with the largest fractional value is one possibility, but by no means the only one. The choice of branching variable, especially early on, strongly influences the size of the resulting tree.
Unfortunately, there is no way to know the optimal choice a priori.
The answer of Daniel Molina Pérez pointed out two approaches incorporated in many solvers.
The backward-looking one is called pseudo-cost branching in CPLEX. It records the changes in the LP bound caused by branching on the variables and chooses one that gave the greatest change. Of course, this information is not available in the beginning.
The forward-looking one is called strong branching; it performs a limited number of simplex steps on some chosen candidate branching variables to see which one gives the greatest improvement of the bound.
There is a third way offered by most solvers: the user (who knows the model much better than the solver) identifies 'important' variables by assigning them a higher branching priority.
I really encourage you to do a literature search. Have a look at Wolsey's book, too.
Besides the choice of the branching variable, the implementation of clever heuristics is very important, since early knowledge of good primal bounds allows earlier inferiority cutoffs and reduces the size of the B&B tree considerably.
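The bound-based pruning described above can be seen end-to-end in a small sketch: depth-first branch and bound for 0-1 knapsack, where the greedy LP relaxation has at most one fractional variable and that variable is the branching candidate (illustrative code, not a tuned solver):

```python
def knapsack_bb(values, weights, capacity):
    """Depth-first branch and bound for 0-1 knapsack. The LP relaxation
    is solved greedily by value/weight ratio and has at most one
    fractional variable -- that variable is branched on."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)

    def lp_bound(fixed):
        # Greedy fractional fill of the remaining capacity; returns
        # (upper bound, index of the fractional item or None).
        cap = capacity - sum(weights[i] for i in fixed if fixed[i])
        if cap < 0:
            return float("-inf"), None          # fixed choices infeasible
        bound = sum(values[i] for i in fixed if fixed[i])
        for i in order:
            if i in fixed:
                continue
            if weights[i] <= cap:
                cap -= weights[i]
                bound += values[i]
            else:
                return bound + values[i] * cap / weights[i], i
        return bound, None

    best = 0

    def search(fixed):
        nonlocal best
        bound, frac = lp_bound(fixed)
        if bound <= best:
            return                              # prune: cannot beat incumbent
        if frac is None:
            best = max(best, bound)             # relaxation is integral: leaf
            return
        search({**fixed, frac: 1})              # branch: take the item
        search({**fixed, frac: 0})              # branch: leave the item

    search({})
    return best
```

On the classic instance with values (60, 100, 120), weights (10, 20, 30) and capacity 50 it returns 220; note how a good incumbent found early lets the `bound <= best` test discard whole subtrees, which is exactly why good primal heuristics shrink the tree.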