Science topic

# Operations Research - Science topic

A group of techniques developed to apply scientific methods and tools to solve the problems of DECISION MAKING in complex organizations and systems. Operations research searches for optimal solutions in situations of conflicting GOALS and makes use of mathematical models from which solutions for actual problems may be derived. (From Psychiatric Dictionary, 6th ed)

Questions related to Operations Research

I am planning to use DEA analysis for my ongoing research. I have SPSS with me; therefore, I would like to figure out how it can be done with SPSS.

Your suggestions would be highly appreciated.

I need help coming up with a possible topic proposal related to any existing problem in the community, hospitals, or banks that can be solved using multiple Operations Research techniques. Thanks.

Could you suggest some contemporary research topics in Operations Research (OR)?

In your opinion, which research topics of OR could be impactful in the next decade?

Thanks in advance.

I have just recently started a new "weekend project" in addition to my master's studies and I am looking for a data-set. I would like to use some Operations Research to design an optimal gym schedule that conforms to a specific set of constraints.

The idea is to create a daily gym schedule that conforms to a set of constraints (e.g. time, target muscles etc) as well as a set of physiological constraints. The physiological constraints are things such as do not exercise muscle-x and muscle-y together or do not do abdominal exercises for two consecutive days etc.

However the problem I face is data, specifically a data-set (or data-sets).

Are there any open-source datasets which list an exercise, as well as all the muscles targeted? Preferably one that lists as much of the physiological data as possible. E.g. which stabilizers are activated, which secondary muscle is also activated, is it an extension or flexion. I am also looking for datasets which could help me with some of the physiological constraints, such as muscle recovery times, which muscles not to exercise together etc?

My goal is to algorithmically capture an OR model which I can provide with input data such as a target muscle group and a time budget, and which outputs a schedule of exercises that targets all the muscles in that group, is not physiologically harmful, and is within the time constraint.
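Once such a dataset exists, the scheduling side can be prototyped quickly. Below is a minimal sketch in plain Python, with made-up exercises, durations, muscle tags, and one incompatibility pair standing in for the physiological rules (all data here is hypothetical):

```python
# Hypothetical sketch: pick exercises that cover a target muscle group
# within a time budget, respecting pairwise incompatibility constraints.
from itertools import combinations

# (name, primary muscles, minutes) -- made-up entries
EXERCISES = [
    ("bench_press", {"chest", "triceps"}, 15),
    ("squat",       {"quads", "glutes"},  20),
    ("deadlift",    {"back", "glutes"},   20),
    ("plank",       {"abs"},              10),
    ("pull_up",     {"back", "biceps"},   10),
]

# Pairs that must not appear in the same session (illustrative constraint)
INCOMPATIBLE = {frozenset({"deadlift", "squat"})}

def best_schedule(target_muscles, time_budget):
    """Exhaustively search subsets; return the one covering the most
    target muscles within the time budget, without incompatible pairs."""
    best, best_cover = [], set()
    for r in range(1, len(EXERCISES) + 1):
        for combo in combinations(EXERCISES, r):
            names = {e[0] for e in combo}
            if any(pair <= names for pair in INCOMPATIBLE):
                continue
            if sum(e[2] for e in combo) > time_budget:
                continue
            cover = set().union(*(e[1] for e in combo)) & target_muscles
            if len(cover) > len(best_cover):
                best, best_cover = [e[0] for e in combo], cover
    return best, best_cover
```

An exhaustive search like this only works for a handful of exercises; with a realistic catalogue the same model would be handed to an ILP or CP solver, with the incompatibilities expressed as pairwise constraints.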

Journal of Industrial & Management Optimization (JIMO) is an open access journal. You pay a substantial amount to publish a paper. When you go to the website of its publisher, American Institute of Mathematical Sciences (AIMS Press), it seems that it is not really based in the United States. I am not sure if it is a legitimate professional organization or if it is a predatory publisher. They have a large number of open access journals. On the other hand, their handling of papers is terrible: extremely slow and low-tech, which is not typical for predatory journals. It may take 13 months to get an editorial rejection, for instance. Furthermore, they don't have an online submission system with user profiles on it, you just submit the paper on a website, and they give you a URL to check your paper's status, which makes your submission open to anyone who has the URL. It has an impact factor of 1.3, which makes me puzzled. Any comments on this organization and the journal will be appreciated.

Hello, there is a dataset with several KPIs, each varying between (0,1). What is the best analytical approach to split the data and define a line in two-dimensional space (or a plane in multi-dimensional space) based on the data behavior and practical assumptions/considerations (there are some recommended ranges for each KPI, etc.)?

For instance, in the attached screenshot, I want to flag the individuals/observations in the A_e area for more investigation. I want to be able to apply the proposed approach in multi-dimensional space with several KPIs as well. Any thoughts would be appreciated.

Operations research techniques are used widely in the scientific literature to support decision-making problems in healthcare. However, such methods are rarely applied in practice. What are the obstacles? What could be the solution?

Some journals give reviewers 60 days to review a paper; others give 40, 30, or 20 days. MDPI journals give only 10 days, though this can be extended if the reviewer needs more time. In my opinion, 10 days might be too short, but 60 days is excessive. Allowing 60 days for a peer review adds to the response time unnecessarily and disadvantages the authors. I can thoroughly review a paper in a day (if I dedicate myself to it), or two at most. A reviewer should only accept a review request if they are not too busy to do it in the next 10 to 20 days. I have encountered situations in which a reviewer agrees to review but does not submit the review at the end of the 60 days, wasting those valuable 60 days of the author's time. What do you think the allowed time for reviewers should be?

Dear all,

I want to start learning discrete choice-based optimization so that I can use it later in my research work. I would like to know about free courses, books, and study materials available on this topic. Any suggestions will be appreciated.

Thanks,

Soumen Atta

Can anyone guide me on the pain areas in cloud computing where Operations Research techniques can be applied? Any guidance on this is appreciated.

Regards,

JP

Hello everyone,

I am currently developing a small simulation model of an assembly worker pool in which I would like to consider randomly occurring absenteeism due to illness. Meaning: if one of, let's say, 17 workers is ill, he becomes unavailable and the pool capacity is dynamically set to 16. After recovery, the worker becomes available again and the pool capacity is increased by 1.

The model shall be based on historical data, which is available in the following structure (see excerpt in the attachment).

The pool capacity is set every 8 hours with each shift change, using the following logic (triggered by a schedule):

- Create a list of all workers that belong to the next shift group

- For each worker who is available (workers have a parameter called "available"):

  - Determine randomly whether the worker gets ill (using randomFalse(average chance to become ill, e.g. 2.5%, see above))

  - If the worker becomes ill, draw a value from a custom distribution (based on observations of the number of absent days per illness) for the number of unavailable days, and create a dynamic event which sets the availability of this worker back to true after this number of days

The pool capacity is then set to the number of remaining available workers in the list.

The model looks like this (see model overview in the attachment).

At first glance, the model works as intended. However, after 50 replications I aggregated the simulation data and compared the simulation results with my real data (table in the attachment). I found that the model indicates that ~41% of the shifts are staffed with 17 workers (the highest relative frequency), while the real data show that 44% of the shifts are staffed with 16 workers (the highest relative frequency in the real data).

Something in the model concept does not seem to fit; otherwise, the relative frequencies would match better, right?

Can anybody tell me whether my current approach makes sense, or am I overlooking something crucial? Is there a better approach to model this kind of problem?
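One way to sanity-check the concept is a stripped-down Monte Carlo version of the same logic. The sketch below uses assumed parameters (illness probability, a made-up duration distribution) rather than the original model's data:

```python
# Minimal sketch (assumed parameters, not the original AnyLogic model):
# each day every available worker falls ill with probability p_ill; an
# illness lasts a random number of days, during which the worker is
# unavailable. The pool capacity each day is the number of available workers.
import random

def simulate(n_workers=17, n_days=10_000, p_ill=0.025, seed=42):
    rng = random.Random(seed)
    days_left = [0] * n_workers          # remaining sick days per worker
    capacities = []
    for _ in range(n_days):
        for w in range(n_workers):
            if days_left[w] > 0:
                days_left[w] -= 1        # still recovering
            elif rng.random() < p_ill:   # healthy worker falls ill
                days_left[w] = rng.choice([1, 2, 3, 5, 10])  # assumed durations
        capacities.append(sum(1 for d in days_left if d == 0))
    return capacities
```

A detail worth checking in your model: because one illness blocks several consecutive shifts, the long-run fraction of absent workers is roughly p_ill times the mean illness duration, not p_ill itself. If the per-shift illness probability was estimated directly from the fraction of absent shifts in the historical data, the simulation will overestimate fully staffed shifts, which matches the mismatch you describe.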

Is there any method to find the optimal solution to a transportation problem without first finding an initial basic feasible solution?
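Yes: the transportation problem is just a linear program, so a general LP solver finds the optimum directly, with no north-west-corner / MODI stepping-stone phase. A sketch with illustrative data (requires SciPy):

```python
# Sketch: solve a balanced transportation problem directly as an LP,
# skipping the textbook initial-basic-feasible-solution phase.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[8, 6, 10],
                 [9, 12, 13]])          # cost[i][j]: source i -> sink j
supply = [20, 30]
demand = [10, 25, 15]                   # balanced: sum(supply) == sum(demand)

m, n = cost.shape
A_eq, b_eq = [], []
for i in range(m):                      # each source ships all its supply
    row = np.zeros(m * n)
    row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                      # each sink receives its demand
    row = np.zeros(m * n)
    row[j::n] = 1
    A_eq.append(row); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
print(res.fun)  # minimal total cost
```

The classical hand methods matter mainly for exams, very large instances with special structure, or when the basis itself is of interest; any LP (or min-cost-flow) solver answers the question directly.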

Any recommendations for a scientific journal to submit a paper on operations research applying linear programming and vehicle routing (VRP) using the B&B algorithm?

**Sensitivity analysis** can be used to check the variation of the *optimum solution* when changing the coefficients of the *objective function* or the constant values in the *constraints*. Is there anything else that can be investigated using this approach?
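Beyond ranging the objective coefficients and right-hand sides, most LP solvers also report dual values (shadow prices) and reduced costs, which identify the binding constraints and the marginal worth of each resource. A small illustrative sketch using SciPy's HiGHS interface (the model itself is made up):

```python
# Sketch: shadow prices (dual values) from an LP via scipy's HiGHS
# interface. Small illustrative product-mix LP:
#   max 3x + 5y  s.t.  x <= 4,  2y <= 12,  3x + 2y <= 18,  x, y >= 0
from scipy.optimize import linprog

res = linprog(c=[-3, -5],                       # linprog minimizes
              A_ub=[[1, 0], [0, 2], [3, 2]],
              b_ub=[4, 12, 18],
              bounds=(0, None), method="highs")

profit = -res.fun                               # optimal objective value
shadow_prices = [-m for m in res.ineqlin.marginals]
# shadow_prices[k] is the objective gain per unit increase of b_ub[k]
# (valid within the RHS range where the optimal basis stays unchanged)
```

A zero shadow price flags a non-binding constraint; positive ones show which resources are worth acquiring, which is a second standard use of the same post-optimality machinery.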

Dear Sir/Madam,

I would like to see if anyone is interested in collaborating on some research papers. I work in the fields of SMART GRID, SMART BUILDINGS, OPTIMIZATION, and ENERGY MANAGEMENT.

If you are interested, could you send me a private message, please?

Thanks,

have a nice day!

*LINGO* is an open source application for mathematical modeling. What has your experience with *LINGO* been? What are the uncommon tools in *LINGO* for solving optimization models?

Types like: Job Shop, Batch Production, Mass Production, Manual Line Production. And why?

We are looking for datasets that can be used for evaluating fairness models in machine learning on real data. Could you recommend a labeled dataset in which the labeling reveals some unfair decision process, e.g., unfair decisions in hiring, courts, healthcare, etc.?

Dear Malek Masmoudi,

Could you please provide me a PDF copy of the paper by Ichraf Jridi entitled:

Modelling and simulation in dialysis centre of Hedi Chaker Hospital

March 2020. In book: Operations Research and Simulation in Healthcare. Publisher: Springer.

Looking forward to hearing from you ASAP.

Sincerely yours,

Professor Mohamed Ben Hmida

Assume we have found an approximate solution A(D), where A is a metaheuristic algorithm and D is the concrete data of your problem.

How close is the approximate solution A(D) to an optimal solution OPT(D)?
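One common way to make the question precise (minimization case shown) is via the approximation ratio or relative gap:

```latex
% Standard notions for quantifying how close A(D) is to OPT(D):
\[
  \text{approximation ratio:}\quad r(D) = \frac{A(D)}{\mathrm{OPT}(D)} \;\ge\; 1,
\]
\[
  \text{relative gap:}\quad g(D) = \frac{A(D) - \mathrm{OPT}(D)}{\mathrm{OPT}(D)} = r(D) - 1 .
\]
% When OPT(D) is unknown, a lower bound LB(D) (e.g. from an LP relaxation)
% certifies the bound  A(D)/LB(D) >= r(D).
```

For metaheuristics there is usually no a-priori guarantee on r(D); in practice one reports the gap against the best known bound for the instance.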

When comparing two optimization methods on a set of functions, should we use a two-sample t-test or a paired t-test? I would say the latter, since the paired t-test is used for correlated observations, and in our case we can consider the unit of observation to be the function and the two methods as two treatments. Am I right?

Thank you in advance
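Yes, the pairing argument is right. A synthetic sketch makes the difference visible: when both methods run on the same functions, the per-function baseline cancels in the paired test but inflates the variance of the independent test (the data below is made up for illustration):

```python
# Sketch supporting the paired-test argument: both methods are run on the
# same benchmark functions, so observations are paired by function.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_functions = 20
difficulty = rng.normal(100, 10, n_functions)            # per-function baseline
method_a = difficulty + rng.normal(0, 0.1, n_functions)
method_b = difficulty - 0.5 + rng.normal(0, 0.1, n_functions)  # slightly better

t_ind, p_ind = stats.ttest_ind(method_a, method_b)       # ignores the pairing
t_rel, p_rel = stats.ttest_rel(method_a, method_b)       # respects the pairing
# The paired test isolates the per-function difference (~0.5) from the much
# larger between-function variance, so it is far more powerful here.
```

With more than two methods, or when normality of the paired differences is doubtful, the analogous nonparametric choice is the Wilcoxon signed-rank test (and Friedman's test for several methods), which is the common recommendation in the EA-benchmarking literature.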

I understand that there is no specific rule to define this multi-author order. However, perhaps it is possible to find some common criteria.

In AHP, I have come across the random consistency index (RI) values given by Saaty (1987).

Also, Prof. Taha, in his book Operations Research: An Introduction, has given a formula for calculating RI.

Which RI should be considered and why?
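For reference, the consistency-ratio computation itself is the same regardless of which RI source you adopt; the sketch below uses the commonly cited Saaty table (valid for n >= 3):

```python
# Sketch: AHP consistency ratio (CR) with Saaty's tabulated RI values
# (the commonly cited table; other tables/formulas differ slightly).
import numpy as np

SAATY_RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32,
            8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    n = A.shape[0]                                # requires n >= 3
    lam_max = max(np.linalg.eigvals(A).real)      # principal eigenvalue
    ci = (lam_max - n) / (n - 1)                  # consistency index
    return ci / SAATY_RI[n]

# A perfectly consistent matrix (a_ij = w_i / w_j) has CR == 0
w = np.array([0.5, 0.3, 0.2])
A = np.outer(w, 1.0 / w)
```

Whichever RI source is used, the key is to be consistent: compare CRs (and apply the usual 0.10 threshold) only across matrices evaluated with the same RI table or formula, since RI is a simulation-based constant and published tables differ slightly.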

Dear all

I am working on an inventory model for a closed-loop supply chain system to optimize the cost of the system. There are many models for optimizing the system cost, but I am looking to incorporate concepts from analytics to handle real-time inventory.

Looking forward to hearing from you.

with regards

Sumit Maheshwari

Cycle counting

i) is a process by which inventory records are verified once a year

ii) provides a measure of inventory accuracy

iii) provides a measure of inventory turnover

iv) assumes that all inventory records must be verified with the same frequency.

The sampling allocation problem is an important problem in survey statistics. Recently, many authors have formulated it as a nonlinear optimization problem and solved it. However, Neyman Allocation also comes under the optimal allocation techniques. Why?

Thanks in advance!
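Part of the answer is that Neyman allocation is itself the closed-form solution of one such nonlinear program; a sketch of the usual derivation (finite-population correction omitted for brevity):

```latex
% Neyman allocation as the solution of an optimization problem:
% minimize the variance of the stratified mean for fixed total sample size n.
\[
  \min_{n_1,\dots,n_H}\; V(\bar{y}_{st}) \;=\; \sum_{h=1}^{H} \frac{W_h^2 S_h^2}{n_h}
  \qquad \text{s.t.} \qquad \sum_{h=1}^{H} n_h = n ,
\]
\[
  \text{with solution (by Lagrange multipliers):}\qquad
  n_h \;=\; n \,\frac{W_h S_h}{\sum_{k=1}^{H} W_k S_k},
\]
% where W_h = N_h / N is the stratum weight and S_h the stratum standard
% deviation.
```

So Neyman allocation "comes under" the optimal allocation techniques because it is the special case with a single linear cost constraint; the recent nonlinear-programming formulations generalize the same objective to multiple characteristics, box constraints, or nonlinear costs.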

(**Proposal**) Oil Refinery Production: What is the company's goal?

[**Purpose**: get Engineers & Scientists thinking outside their box ... think -large- problems. What's possible today vs. needs for tomorrow?]

**Question**: Are you interested in increasing your sales income by several orders of magnitude? Are you willing to think outside the box? If so, please read on. This is a large proposal, the size of NASA's Apollo Space program back in the early 1960s.

A new level of computers and software will be required for this oil production proposal. Today's computers are algebraic, i.e. bare-bones designs that run like a Model T car, at a '30 mph' clip. We need fast supercomputers, like the Wright Brothers' airplane, that can run at a '3,000 mph' clip. These supercomputers need '**Automatic Differentiation**'-based technologies, i.e. smart thinking abilities. NASA realized this when starting the Apollo space program; it spent tons to get it and put us on the moon.


Oil production depends on many factors; e.g. Supply, Demand, present inventory, etc. An oil company may have many refineries with many distillation units. How can a company simulate extracting products 'a', 'b', and 'c' from its crude oil? Assume the company wants product 'a' on the west coast, 'b' in the middle of US, and 'c' on the east coast. Assume the company has refineries 'x' on west coast, 'y' in middle US, and 'z' on east coast. How does one model such a company's oil production so as to produce/refine the 'right' amounts of each product at each refinery site in order to meet the company's goal of maximizing profits?

*Partial Differential Equations* (PDEs) will be used to model the crude oil distillation for each distillation unit at each site; i.e. many PDEs must be solved at once! Are there computers large enough to handle such problems today? Are there plans for a supercomputer that will be able to handle many (thousands of) PDEs at once?

With maintenance of distillation units being continual (e.g. fix one, stop another), this will be a constant problem when trying to simulate the next day's crude oil workload. For example, assume a company has 600 distillation units overall. That means a computer program would be required to solve 600 PDEs ASAP; i.e. 10 hours of PDEs. From my past experience with modeling in the **FortranCalculus**™ language/compiler, I was taught that a model requiring 'Tmod' time to execute would require around 2×'Tmod' time for the optimal solution. That would get us into the 20-hour range for 600 PDEs. Too long! We need faster computers and solvers to reach reasonable solution times. Any ideas how this could be done today? For more, visit http://fortrancalculus.info/apps/fc-compiler.html (solves algebraic equations through ordinary differential equations).

Many people thought that the Wright Brothers' idea of an 'airplane' would never fly. But what if it did? What if oil sales income doubled or more? Would crude oil prices increase? (Everyone is going to want more for their piece of the pie, right?) How would this affect your company?

John D Rockefeller was quoted saying, "If you want to succeed you should strike out on new paths, rather than travel the worn paths of accepted success."

*Any future John D. Rockefellers reading this proposal?* Are you interested in increasing your company profits by several orders of magnitude? Does your company have a goal or objective that all employees know about and follow? If so, continue with this proposal by reading my article "Company Goal: Increase Productivity?" (a dozen pages). Go to the web page "eBook on Engineering Design Optimization using Calculus Level Methods, A Casebook Approach" and click on the 'download' link; it's free!

Hello,

If I have 5 hours to wait for a GAMS result on Monday and 3 hours on Tuesday, and if I know that the model will be solved after 8 hours in a single run, is there a way to interrupt GAMS on Monday after 5 hours and run it again on Tuesday from where it stopped on Monday? That way I would get the result on Tuesday, but still within 8 hours of total computation.

Thanks

I have 4 variables in an integer program. If I define all of them as integer, the solution time increases.

If I define 3 of them as positive (continuous) variables and one as integer, the model is solved faster and finds the same optimal solution. I need the values of all variables to be integer.

All parameters and right-hand-side values are integer. I think total unimodularity is what makes the positive variables come out integer, but I am not sure about that.

What do you think about this approach? Is it valid to define 3 of them as positive variables and 1 as integer to save time?
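That intuition can be checked on a textbook case: the assignment polytope has a totally unimodular constraint matrix with integer right-hand sides, so every vertex of its LP relaxation is integral and no integrality constraints are needed. A sketch with illustrative costs (requires SciPy):

```python
# Sketch: the LP relaxation of an assignment problem (totally unimodular
# constraint matrix, integer RHS) returns an integral optimum even though
# no integrality is imposed.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])
n = 3
A_eq, b_eq = [], []
for i in range(n):                 # each row assigned exactly once
    row = np.zeros(n * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(1)
for j in range(n):                 # each column assigned exactly once
    row = np.zeros(n * n); row[j::n] = 1
    A_eq.append(row); b_eq.append(1)

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq,
              bounds=(0, 1), method="highs")
# res.x comes out 0/1 without any integer declarations
```

For your own model, though, this shortcut is only safe if the constraint submatrix of the relaxed variables is actually TU (or the polytope is otherwise known to be integral); getting an integer answer on one instance is not a proof, and on another instance the relaxation could silently return fractional values.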

I want to use GAMS to optimize an MINLP problem, but I would like to use metaheuristic algorithms like particle swarm optimization or a genetic algorithm instead of the given GAMS solvers like CPLEX or BARON.

Can I do this?

In two-stage stochastic optimization, why do I find that the optimization problem has equations for the 1st stage and equations for the 2nd stage, but those two groups are solved simultaneously? I thought that we first solve the 1st-stage equations, then take the results and substitute them into the 2nd-stage equations (a new problem). Is there something I have overlooked?

Also, if they are solved simultaneously, why don't we combine the equations of the 1st and 2nd stages?

My case study is a power system with renewable energy uncertainty. When I make day-ahead decisions for the power dispatch, the dispatch of each generator is computed (1st-stage decisions); then, after the realization of the uncertain events (renewable energy), redispatch is done in the 2nd stage using reserves, and possibly some load shedding.

My question is: how are the decisions of the two stages made simultaneously, as I see in some papers? Why don't we optimize the 1st stage first, take the results, apply them to the 2nd-stage problem, and run it again? And if the optimization of the two stages is done simultaneously, why are the constraints of the two stages not combined? I still see separate 1st-stage and 2nd-stage constraints.
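The "simultaneous" formulation is the deterministic equivalent: one copy of the stage-1 variables, plus one copy of the stage-2 variables and constraints per scenario, all stacked into a single problem (so the constraints *are* combined; they are just written in two labeled groups). A tiny capacity/recourse sketch with assumed numbers (requires SciPy):

```python
# Sketch of a deterministic equivalent: choose capacity x now (stage 1);
# after demand is revealed, cover any shortfall y_s at a penalty (stage 2).
# Numbers are illustrative.
from scipy.optimize import linprog

demands = [10, 20]; probs = [0.5, 0.5]
c_build, c_penalty = 1.0, 3.0

# variables: [x, y_1, y_2]; minimize  c_build*x + sum_s p_s*c_penalty*y_s
c = [c_build] + [p * c_penalty for p in probs]
# recourse constraints  x + y_s >= d_s   (written as  -x - y_s <= -d_s)
A_ub = [[-1, -1, 0], [-1, 0, -1]]
b_ub = [-d for d in demands]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
x_here_and_now = res.x[0]
```

Here the expected marginal shortfall penalty (0.5*3 = 1.5) exceeds the build cost 1.0, so the optimum builds x = 20 with zero recourse. Solving stage 1 first, without anticipating the scenarios, has no way to find that trade-off: the stage-1 cost and the expected stage-2 cost must be weighed in one problem. The stages stay notationally separate only to express non-anticipativity (one x shared by all scenarios, one y_s per scenario); decomposition methods such as Benders / L-shaped do iterate between the stages, but they converge to this same joint optimum.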

I am a student at Ghulam Ishaq Khan Institute, Pakistan, and I am conducting research on evaluating the barriers to adoption of Industry 4.0 in the construction industry of developing countries, with Pakistan as a case in point. You are requested to fill in the attached questionnaire. It will take 15-20 minutes of your precious time. Your response will be highly appreciated. Once the survey has been completed, kindly reply to this discussion with the updated response file.

Thanks

Dear Pierre Le Bot, thank you for introducing these resources. Could I please ask you two questions? 1. What is the practical implication of CICA? Please mention some CICAs for a worker's hand cut-off scenario due to conveyor belt sticking. 2. After multiplying the results of the 3 parameters (no-reconfiguration probability, SF, CICA) and obtaining a probability number, how is the obtained probability interpreted? Regards

Hello,

I submitted a short communication to Operations Research Letters on the 8th of July 2019. After some days waiting for an editor to be assigned, it reached the "with editor" stage. It has now been in that stage for 38 days (as of today).

Looking online, I only found one mention of the review process time, on Scimag, obtained from voluntary contributions by authors (i.e., no official data), and it read 18 months. By looking at previous issues of ORL, I noticed that the time elapsed between a letter being "Received" and being "Accepted" ranges from 4 to 8 months.

I would like to ask if anyone has ever published in ORL, and how much time it took to move from the "with editor" state to the "invited reviewers" stage.

Thank you in advance

EDIT: wrote "reviewer" instead of "editor"

In my research work, I want to construct a mathematical programming model for a supply chain network problem. I have assumed the production cost to be linear. Is this assumption correct, or should I change it? Please suggest, with a valid justification.

My problem consists of:

1. More than a thousand constraints and variables.

2. It is purely 0-1 programming, i.e. all variables are binary.

3. Kindly note that I am not a good programmer.

Please provide me some links to books or videos discussing the application of GA in MATLAB for solving 0-1 programming with a large number of variables and constraints.

I have gone through many YouTube videos, but they use examples with only two or three variables, without integer restrictions.
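As a complement to the MATLAB material, here is a minimal binary GA in plain Python on a toy knapsack, with a repair step for infeasible bit strings. The data and GA parameters are illustrative, and the structure maps directly onto MATLAB's `ga` with a binary encoding:

```python
# Minimal sketch of a GA for 0-1 problems (a small knapsack here).
# Problem data and GA parameters are made up for illustration.
import random

values  = [10, 5, 15, 7, 6, 18, 3]
weights = [ 2, 3,  5, 7, 1,  4, 1]
CAPACITY = 15
N = len(values)
rng = random.Random(1)

def repair(bits):                       # drop items until feasible
    bits = bits[:]
    while sum(w for w, b in zip(weights, bits) if b) > CAPACITY:
        ones = [i for i, b in enumerate(bits) if b]
        bits[rng.choice(ones)] = 0
    return bits

def fitness(bits):
    return sum(v for v, b in zip(values, bits) if b)

pop = [repair([rng.randint(0, 1) for _ in range(N)]) for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                    # elitism: keep the best found
    children = []
    while len(children) < 30:
        p1, p2 = rng.sample(elite, 2)
        cut = rng.randrange(1, N)
        child = p1[:cut] + p2[cut:]     # one-point crossover
        i = rng.randrange(N)
        child[i] = 1 - child[i]         # bit-flip mutation
        children.append(repair(child))
    pop = elite + children

best = max(pop, key=fitness)
```

With thousands of binary variables, the two design choices that matter most are exactly the ones shown here in miniature: how infeasible bit strings are handled (repair vs. penalty in the fitness) and keeping elitism so feasible improvements are never lost.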

I want to assign n different customers to m different stores (such that n > m) and at the same time I want to do vehicle routing between the stores and the customers. A customer can be assigned to only one store, but a store can serve many customers; the maximum number of customers it can serve is p. I need to find the minimum number of vehicles required to do this.

What is your opinion about the use of qualitative methods (e.g. case studies, action research) in the Operations Management field?

In most robust optimization models, uncertain parameters are assumed to be independent. For example, Bertsimas and Sim or Ben-Tal and Nemirovski discussed that it is too conservative to assume that all of the uncertain parameters in a problem simultaneously take their worst values, and for this reason they introduced their famous uncertainty sets. However, if there is some correlation between the uncertain parameters, most of them taking their worst values will not be so unexpected. Furthermore, if all parameters are completely correlated, we would expect that if one of them takes its worst value, all the others do the same. Therefore, I think Bertsimas and Sim's or Ben-Tal and Nemirovski's approaches are suitable only under the assumption of independence of the parameters. Is this true? Can anyone advise me on this issue?
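For reference, the Bertsimas-Sim construction makes the "how many deviate at once" question explicit through a budget parameter (sketch for one constraint row):

```latex
% Bertsimas-Sim budgeted uncertainty set for a constraint row  a^T x <= b:
\[
  \mathcal{U}_\Gamma \;=\; \Big\{ a \;:\; a_j = \bar{a}_j + \zeta_j \hat{a}_j,\;
  |\zeta_j| \le 1,\; \textstyle\sum_{j} |\zeta_j| \le \Gamma \Big\}.
\]
% Gamma = 0 recovers the nominal problem; Gamma = n recovers Soyster's
% full worst case.
```

Correlation is not modeled inside this set, which is exactly the concern raised above: with strong positive correlation, a small Γ can understate the risk of joint worst-case deviations, while under (near-)independence simultaneous deviations are unlikely and the budget is a reasonable restriction. The usual remedies are either to choose Γ conservatively, or to use an ellipsoidal/polyhedral set shaped by the covariance structure, or distributionally robust formulations that take the correlation information as input.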

For the application of Industry 4.0, and hence making the machine self-aware, what optimization techniques could be used for a machining process? (Preferably, please explain a mathematical model or a case study regarding this.)

I am an undergraduate student in Production and Industrial Engineering, looking for a research proposal topic to apply to a doctoral program. It would also be great if you could suggest some reading. Any suggestions?

Thank you for your time.

For a multi-objective problem with uncertainty in demand, consider the scenario tree (attached herewith) for a finite planning horizon consisting of three time periods. It is a two-objective minimization problem in which the augmented ε-constraint method is utilized to obtain Pareto optimal solutions (POS).

In time period T1, only the mean demand is considered. Then in T2, demand follows a certain growth rate for each scenario, with an expected probability of growth for each scenario. A similar trend is outlined for T3.

The deterministic counterpart envisaged for the problem is a set of time periods with specific pattern of growth rate for mean demand - say 15% in T1, 10% in T2 and 10% in T3.

I want to draw out a comparison of the POS obtained from the stochastic and deterministic analysis. What is the best way to proceed in order to give the decision maker a whole picture of the POS with the scenario and time period considered in both type of analyses?

Do I obtain POS sets for all 13 scenarios from T1 to T3, or just the 9 scenarios in T3? That would mean 13 or 9 Pareto fronts for the stochastic analysis alone. In other words: a Pareto front with POS for each time period and scenario! How do I compare whatever I obtain from the stochastic analysis with the deterministic one?

Once again, the aim is to analyze the stochastic analysis and draw out a comparison of the POS obtained from the stochastic and deterministic analysis for the time periods and scenarios considered.

*Comments on the aforementioned approach and recommendations for alternatives are appreciated.*

I have a project about operations research. In my case I have several different vehicles but one source and one target. Vehicles have costs, and they must be assigned to certain areas; e.g. vehicle 1 must carry product type a, vehicle 2 must carry product type b, etc. But all products are stored in the same place. I cannot identify the problem type for this case.

Is Shannon entropy a good technique for weighting in multi-criteria decision-making?

As you know, we use Shannon entropy for weighting criteria in multi-criteria decision-making.

I think it is not a good technique for weighting in the real world because:

It only uses the decision matrix data.

If we add new alternatives, the weights change.

If we change the period of time, the weights change.

For example, we have 3 criteria: price, speed, safety.

The weights of the criteria vary across periods of time.

For example, if our period of time is one month:

This month price may get 0.7 (speed = 0.2, safety = 0.1).

Next month price may get 0.1 (speed = 0.6, safety = 0.3).

This is against reality! What is your opinion?
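For concreteness, here is the standard entropy-weighting computation on a made-up decision matrix. The constant middle criterion receiving zero weight, and every weight depending only on the matrix itself, illustrate exactly the objection raised above:

```python
# Sketch of Shannon-entropy weighting from a decision matrix
# (rows = alternatives, columns = criteria). Data is illustrative.
import numpy as np

def entropy_weights(X):
    m = X.shape[0]
    P = X / X.sum(axis=0)                        # column-normalize
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    E = -plogp.sum(axis=0) / np.log(m)           # entropy per criterion
    d = 1.0 - E                                  # divergence from uniformity
    return d / d.sum()

X = np.array([[7.0, 0.3, 5.0],
              [3.0, 0.3, 9.0],
              [5.0, 0.3, 1.0]])                  # middle criterion is constant
w = entropy_weights(X)
```

The method weights criteria by how much they discriminate among the *current* alternatives, not by their intrinsic importance; that is why adding an alternative or changing the time window shifts the weights. A common compromise is to combine entropy weights with subjective weights (e.g. from AHP) rather than using either alone.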

Hi All,

I have modeled an MILP using two different formulations; one formulation uses three indices, while the other uses five. Comparing the solution speed of the two formulations using the same solvers (Gurobi, CPLEX), it turns out that the formulation with five indices is solved faster. I am not sure why this is happening. Has anyone had this experience, or are there any studies related to this problem? Please let me know.

Thanks,

Bhawesh

Can **numbers** (the **Look then Leap Rule or the Gittins Index**) be used to help a person decide when to **stop looking** for the most suitable career path and LEAP into it instead, or is the career situation **too complicated** for that?

Details:

Mathematical answers to the question of optimal stopping in general (when should you stop looking and leap)?

**Gittins Index, Feynman's restaurant problem** (not discussed in detail)

**Look then Leap Rule (secretary problem, fiancé problem):** (√n, n/e, 37%)

How do we apply this rule to career choice?

1- Potential ways of application:

A- n is **Time**. Like what Michael Trick did: https://goo.gl/9hSJT1 . Michael Trick, a CMU Operations Research professor, applied this to decide the best time for his marriage proposal, though he seems to think that this was a failed approach.

In our case, should we do it by age: 20-70 = 50 years, so 38 years old is where you stop looking, for example? Or should we multiply 37% by 80,000 hours to get a total of 29,600 hours of career "looking"?

B- n is the number of **available options**, like in the secretary problem. If we have 100 viable job options, do we just look into the first 37? If we have 10, do we just look into the first 4? What if we are still at a stage of our lives where we have thousands of career paths?

2- Why the situation is more complicated in the career choice situation:

A- You can want a career and pursue it and then fail at it.

B- You can mix career paths. If you take option C, it can help you later on with option G. For example, if I went into the IRS, that would help me later on if I decide to become a writer, so there is overlap between the options and a more dynamic relationship. Also, the option you choose in selection #1 will influence the likelihood of choosing other options in selection #2 (for example, if in 2018 I choose to work at an NGO, that will influence my options if I want to make a career transition in 2023, since it will limit my possibility of entering the corporate world in 2023).

C- You need to be making money so "looking" that does not generate money is seriously costly.

D- The choice is neither strictly sequential nor strictly simultaneous.

E- Looking and leaping alternate over a lifetime, unlike the example where you keep looking and then leap once.

Is there a practical way to measure how the probability of switching back and forth between our career options affects the optimal exploration percentage?

F- There is something between looking and leaping, which is testing the waters. Let me explain. "Looking" here doesn't just mean "thinking" or "self-reflection" without action. It could also mean trying out a field to see if you're suited for it. So we can divide looking into "experimentation looking" and "thinking looking". And what separates looking from leaping is commitment and being settled. There's a trial period.

How does this affect our job/career options example, since we can theoretically "look" at all 100 viable job positions without having to formally reject any position? Or does this rule only apply to scenarios where looking entails commitment?

G- *You can return to a career that you rejected in the past. Once you leap, you can look again.* "But if you have the option to go back, say by apologizing to the first applicant and begging them to come work with you, and you have a 50% chance of your apology being accepted, then the optimal explore percentage rises all the way to 61%." https://80000hours.org/podcast/episodes/brian-christian-algorithms-to-live-by/

3- A Real-life Example:

Here are some of my major potential career paths:

1- Behavioural Change Communications Company, 2- Soft-Skills Training Company, 3- Consulting Company, 4- Blogger, 5- Internet Research Specialist, 6- Academic, 7- Writer (Malcolm Gladwell style; popularization of psychology), 8- NGOs

As you can see, the options here overlap to a great degree. So with these options, should I just say "OK, the square root of 8 is about 3", pick 3 of those, try them for a year each, and then stick with whatever comes next and is better?!
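The 37% figure itself is easy to reproduce by simulation, which at least pins down what the classical rule assumes (sequential options, no returns, only relative ranks observed). A pure-Python sketch of the look-then-leap strategy on random permutations:

```python
# Monte Carlo sketch of the 37% rule: reject the first k candidates, then
# take the first one better than everything seen so far (rank 0 = best).
import random

def success_rate(n=100, k=37, trials=20_000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))
        rng.shuffle(ranks)
        best_seen = min(ranks[:k]) if k else n
        # first later candidate beating the benchmark; forced to take the
        # last candidate if none does
        chosen = next((r for r in ranks[k:] if r < best_seen), ranks[-1])
        wins += (chosen == 0)
    return wins / trials
```

With n = 8 options, n/e is about 3, matching the "try about 3 of them first" arithmetic at the end of the question; the complications listed above (returns allowed, overlapping options, costly looking) are exactly the ways real careers break the model's assumptions, and, as the 80,000 Hours quote notes, relaxing them changes the optimal explore fraction.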

There are lots of optimization methods / evolutionary algorithms (EAs) in the literature. Some are more effective (for solving linear/nonlinear problems) than others, but we do not know in advance which will fit our model. As a result, we check everything we can, and still may not get the desired result. Some of these methods are:

1. Genetic algorithms (GA); Haupt and Haupt (2004)
2. Pattern search (MATLAB)
3. Particle swarm optimization (PSO), Binary Particle Swarm Optimization (BPSO); Eberhart and Kennedy (1995)
4. Bee optimization; Karaboga and Bosturk (2007), Pham et al. (2006)
5. Cuckoo algorithm; Yang and Deb (2009, 2010)
6. Differential evolution (DE); Storn and Price (1995, 1997)
7. Firefly optimization; Yang (2010)
8. Bacterial foraging optimization; Kim, Abraham and Cho (2007)
9. Ant colony optimization (ACO); Dorigo and Stutzle (2004)
10. Fish optimization; Huang and Zhou (2008)
11. Raindrop optimization; Shah-Hosseini (2009)
12. Simulated annealing; Kirkpatrick, Gelatt and Vecchi (1983)
13. Biogeography-based optimization (BBO)
14. Chemical reaction optimization (CRO)
15. Group search optimizer (GSO)
16. Imperialist algorithm
17. Swine flu optimization algorithm
18. Teaching Learning Based Optimization (TLBO)
19. Bayesian Optimization Algorithms (BOA)
20. Population-based incremental learning (PBIL)
21. Evolution strategy with covariance matrix adaptation (CMA-ES)
22. Charged system search optimization algorithm
23. Continuous scatter search (CSS) optimization algorithm
24. Tabu search continuous optimization
25. Evolutionary programming
26. League championship algorithm
27. Harmony search optimization algorithm
28. Gravitational search algorithm
29. Evolution strategies
30. Firework algorithm; Ying Tan (2010)
31. Big-bang big-crunch optimization algorithm; O. K. Erol (2006)
32. Artificial bee colony optimization (ABC); Karaboga (2005)
33. Backtracking Search Optimization algorithm (BSA)
34. Differential Search Algorithm (DSA) (a modernized particle swarm optimization algorithm)
35. Hybrid Particle Swarm Optimization and Gravitational Search Algorithm (PSOGSA)
36. Multi-objective bat algorithm (MOBA), Binary Bat Algorithm (BBA)
37. Flower Pollination Algorithm
38. The Wind Driven Optimization (WDO) algorithm
39. Grey Wolf Optimizer (GWO)
40. Generative algorithms
41. Hybrid differential evolution algorithm with adaptive crossover mechanism
42. Lloyd's algorithm
43. One Rank Cuckoo Search (ORCS) algorithm: an improved cuckoo search optimization algorithm
44. Huffman algorithm
45. Active-Set Algorithm (ASA)
46. Random search algorithm
47. Alternating Conditional Expectation algorithm (ACE)
48. Normalized Normal Constraint (NNC) algorithm
49. Artificial immune system optimization; Cutello and Nicosia (2002)
50. fmincon

Besides these, there are many other recently invented optimization algorithms, generally called hybrid optimization techniques because they combine two methods. If we share our experiences, it will be helpful for all of us who are in the field of optimization. I may have missed some methods; researchers are requested to add those algorithms and notes on their use (e.g. many models need initial values, weights, velocities, different ways of writing the objective function, etc.). I am facing some problems, which is why I created this format; it will definitely help me as well as other researchers in this field. Expecting resourceful and cordial cooperation.

There are N tasks and M workers.

1. For every task-worker pair, the efficiency is known;
2. Every task must be assigned to exactly one worker;
3. Every worker must be assigned at least one task;
4. A worker may be assigned multiple tasks;
5. Tasks must be grouped (e.g. by location), the number of workers for every group is fixed, and every worker must belong to exactly one group.

Can you suggest an algorithm or approach for optimal (or suboptimal) assignment (maximal efficiency)?

To the best of my knowledge:

- Without constraints 4 and 5, this problem can be stated as the "Assignment Problem", for which polynomial-time algorithms exist;
- Without constraint 4, it can be addressed as the "Generalized Assignment Problem", which is NP-hard;
- Without constraint 4, and if M = 1, it can be addressed as the "0-1 Knapsack Problem".

I can't see how to use any of the mentioned formulations to address my problem.
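For small instances, the base problem without constraints 4 and 5 can even be checked by brute force; a minimal Python sketch (illustrative only: in practice the Hungarian algorithm solves this case in polynomial time, and the full problem with grouping would typically be modeled as a MILP):

```python
from itertools import permutations

def best_assignment(eff):
    """Brute-force optimal one-to-one assignment for small instances.
    eff[i][j] is the efficiency of worker j on task i; returns the best
    total efficiency and the assignment (task i -> worker perm[i]).
    Assumes N tasks == M workers; scales as O(M!)."""
    n = len(eff)
    best, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(eff[i][perm[i]] for i in range(n))
        if total > best:
            best, best_perm = total, perm
    return best, best_perm
```

This only covers the plain assignment case; the grouping constraint would add binary worker-to-group variables on top of the task-to-worker ones.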

I am looking for recent research questions in Reinforcement Learning (RL) within Artificial Intelligence (AI). I also want to know where it is applicable. I know it is applied in games, robotics, and Operations Research, and I would like to learn more. Are there other areas where it is applied?

There is a need to automate several industrial tasks, each of which may require a number of humans and robots to perform; some can be done only by robots. Say there is a task X. My output looks like: task X can be done if about 4 robots are assigned to it, or 1 human and 1 robot. My input describes the task, based on which an algorithm computes the desired output.

So, could you share some research where resource requirements for industrial tasks are modeled mathematically or even empirically? Or could you point to existing algorithms, in industrial engineering or elsewhere, that tackle the problem of identifying how many resources need to be devoted to a task to finish it successfully?

I have started programming the binary bat algorithm (BBA) to solve the knapsack problem. I have a misunderstanding of the position concept in binary space:

Vnew = Vold + (Current - Best) * f

S = 1 / (1 + exp(-Vnew))

X(t+1) = 1 if S > Rnd, else 0

The velocity update equation uses both the position from the previous iteration (Current) and the global best position (Best). In the continuous version of BA, the position is a real number, but in the binary version the position of a bat is represented by a binary number; in the knapsack problem it indicates whether an item is selected or not. In the binary version, a transfer function is used to transform the real-valued velocity into a binary position. I am confused whether the position in BBA is binary or real. If binary, then (Current - Best) can only be 1 - 0, 0 - 1, 1 - 1, etc.; if real, then how do we get the continuous representation when there is no continuous position-update equation (in the original BA, the position update is X(t+1) = X(t) + Vnew)?
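The update rules above can be sketched in Python (a minimal illustration, assuming positions are kept binary while velocities stay real-valued, which is how the sigmoid transfer function is normally used):

```python
import math
import random

def bba_update(x_cur, x_best, v_old, f):
    """One binary-bat update step as written above. Positions are
    binary (0/1), so (x_cur - x_best) can only be -1, 0 or +1;
    only the velocity is real-valued."""
    v_new = v_old + (x_cur - x_best) * f        # real-valued velocity
    s = 1.0 / (1.0 + math.exp(-v_new))          # sigmoid transfer function
    x_new = 1 if random.random() < s else 0     # position stays binary
    return x_new, v_new
```

In this reading the position never becomes a real number: the velocity carries the continuous information, and the sigmoid maps it to the probability of the bit being 1.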

In the CRS model, the input- and output-oriented objective function values are reciprocals of each other. Why is this not the case in the VRS model?

In optimization problems we often find a local optimum, but is it global? Are there meta-heuristic algorithms for obtaining a global solution? If so, what are they called, and how can such a solution be obtained?
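Meta-heuristics such as simulated annealing, GA, and PSO are designed to escape local optima, though they generally guarantee the global optimum only asymptotically, not in finite time. A minimal simulated-annealing sketch (assuming a one-dimensional objective; the step size and cooling schedule are illustrative choices):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
    """Minimal simulated annealing for 1-D minimization. Worse moves
    are accepted with probability exp(-delta/T), letting the search
    escape local minima; no finite-time global guarantee."""
    random.seed(0)                        # reproducible for illustration
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        # accept improvements always; worse moves with Boltzmann probability
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                      # geometric cooling schedule
    return best_x, best_f
```

The acceptance probability shrinks as the temperature cools, so the search explores broadly early and refines locally later.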

Dear Friends and colleagues

I have an optimization model that contains a nonlinear term of the following form:

x(t)* a(k)

where x and a are variables, a is a binary variable, and the sets over which the two variables are defined are not the same. Could you please suggest a method to handle this term and transform my model into a mixed-integer linear program?

Thank you for your suggestions.
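The standard trick for a continuous-times-binary product (a sketch, assuming x(t) is nonnegative and bounded above by a constant M) replaces the product with an auxiliary variable z = x(t) * a(k) and the linear constraints:

```latex
z \le M\,a(k), \qquad
z \le x(t), \qquad
z \ge x(t) - M\bigl(1 - a(k)\bigr), \qquad
z \ge 0
```

When a(k) = 1 these force z = x(t); when a(k) = 0 they force z = 0, so the product is linearized exactly.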

Is it possible to manage a supply chain in a more effective way?

For an optimization problem of fuel delivery from a depot to petrol stations, the solution approach is to use a Tabu-search neighborhood for solving the model (the objective is to minimize the delivery cost). How can this be done in LINGO or GAMS?

In multi-objective optimization problems we often say that the objective functions are conflicting in nature. In what sense are the objective functions said to conflict with each other? Also, how can it be shown numerically that the objectives in a given multi-objective problem are conflicting?
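One common numerical heuristic (a sketch, not a formal proof of conflict) is to sample feasible decision vectors and correlate the two objective values; with both objectives minimized, a clearly negative correlation suggests that improving one tends to worsen the other:

```python
import random

def conflict_estimate(f1, f2, sampler, n=1000):
    """Sample decision vectors and compute the Pearson correlation
    between the two objective values; a strongly negative value
    (both objectives minimized) hints at conflicting objectives."""
    random.seed(0)                       # reproducible for illustration
    xs = [sampler() for _ in range(n)]
    a = [f1(x) for x in xs]
    b = [f2(x) for x in xs]
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = sum((u - ma) ** 2 for u in a) ** 0.5
    sb = sum((v - mb) ** 2 for v in b) ** 0.5
    return cov / (sa * sb)
```

More rigorously, objectives conflict when no single feasible point optimizes all of them simultaneously, i.e. the individual optima differ; the correlation check is only a quick screening device.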

Please suggest recent topics or research in operations research (management science).

Thanks in advance.

We all know that data is very important in decision-making processes, and it is also obvious that raw data alone cannot lead to a precise decision.

Operations Research (OR) can help transform data into useful information from which accurate decisions can be made. Decision Support Systems are good examples in which this transformation and decision-making process occurs.

Healthcare problems have attracted considerable attention, and researchers are using OR tools to solve them.

**Now a critical question is:** What are the main trends of using operations research tools in healthcare problems?

In the primal form of the DEA CRS model, we maximize the efficiency of the reference firm.

**But how is it possible that in the dual form, too, we are minimizing the efficiency of the reference firm?** What is the physical interpretation of the dual version? Please see the attached file.

I am looking for examples of the combination of ABM, multi-objective optimization, and game theory, preferably ones that have been used for practical purposes.

Which of the following is not a component of inventory carrying cost?

i) capital cost

ii) transportation cost

iii) insurance cost

iv) obsolescence cost.

The implementation of JIT offers several advantages, including

(i) work-in-process increases

(ii) rework reduction

(iii) decreased profit margins

(iv) increase in variability to better respond to variable demand.

Hello all. Suppose a country wants to lay down railway tracks (assuming there are no tracks already). If the goal is to maximize connectivity and flow, are there any known mathematical models for such a problem?

I am trying to optimize a function that is non-linear in its parameters, three in number, using **Genetic Algorithms** (GA). Thus, I have a function of time that is non-linear in three parameters, fitted to time-series data. I am using the **ga() function** of the **GA package in R** for this purpose. However, the **initial values** that I set for the parameters **heavily influence** the parameters computed by ga(). I also read the following article: Scrucca, L. (2013). GA: a package for genetic algorithms in R. *Journal of Statistical Software*, *53*(4), 1-37.

In section **4.4 Curve Fitting**, if I use the initial values (min, max) **a(1000, 10000), b(0, 10), and c(0.5, 10)** instead of the ones used in the paper, that is a(3000, 4000), b(0, 1) and c(2, 4), I get **completely different results** from the paper's: **a = 2772, b = 0.0235, c = 4.07** as against a = 3534.741, b = 0.01575454, c = 2.800797 in the paper.

My understanding is that global optimization techniques such as GA should be able to **find the global optimum irrespective of initial values**, although it might take more or fewer iterations depending on the initial parameter values. Why is this not happening for my function, and also for the example I cited? Thanking you all in advance.

What is the difference between a research monograph and a book or article?

Thanks

Over a single link, SRPT is known to be the optimal policy for minimizing mean completion times. However, is that also true for a **network** case with many sources and destinations? Is it possible that Fair Sharing (based on Max-Min Fairness) performs better in a network scenario (considering mean completion times)?

I have a 30×40 matrix. Let's say the components of the matrix are denoted "P", and the row and column of each "P" are denoted "X" and "Y" respectively. I have a model whose output should give us P, X and Y. How can I define constraints (for solving with the simplex method) which connect P with its exact X and Y? I want to say, for example:

if X=1 and Y=1 then P= 0.1

if X=1 and Y=2 then P= 0.5

if X=1 and Y=3 then P= 0.8 and so on.

I don't want the model to return a P that does not match its location in the matrix. How can I achieve this?

Everything is known, except for P(k), X and Y
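One standard formulation for this kind of coupling (a sketch, assuming exactly one cell of the matrix is to be selected) introduces a binary selection variable u_{xy} per cell; note that this makes the model a mixed-integer program rather than a plain LP:

```latex
\sum_{x=1}^{30} \sum_{y=1}^{40} u_{xy} = 1, \qquad
X = \sum_{x,y} x\, u_{xy}, \qquad
Y = \sum_{x,y} y\, u_{xy}, \qquad
P = \sum_{x,y} P_{xy}\, u_{xy}, \qquad
u_{xy} \in \{0,1\}
```

Because exactly one u_{xy} equals 1, the triple (X, Y, P) is forced to come from the same cell of the matrix.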

Does anyone know of publicly available large/huge data envelopment analysis (DEA) instances?

The project life cycle has four phases, namely (i) initiation phase, (ii) planning/design phase (work breakdown structure), (iii) execution/control phase, and (iv) closing phase. Among these four phases, when does the project purchasing process take place?

**For example, if ABC railway company gives a tender to construct a bridge, then:**

**1. In which phase will the project agreement (purchase) be sanctioned with DEF construction company?**

**2. In which phase is project identification and selection made?**

**3. In which phase are the statement of work and project appraisal (for the proposed project) done?**

I am working on a small project to apply Operations Research knowledge to daily life. Do you have fun or brilliant ideas? Please share!

Thank you

In the transportation problem, which method gives the best initial solution: North-West Corner, Row Minima, Column Minima, Least Cost, or Vogel's Approximation Method (VAM)?
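For reference, VAM is generally reported to give the initial solution closest to optimal, at the cost of more computation, while the North-West Corner rule ignores costs entirely. The latter is simple enough to sketch (a minimal version for a balanced problem, ignoring degenerate ties):

```python
def northwest_corner(supply, demand):
    """North-West Corner rule: build an initial basic feasible
    solution for a balanced transportation problem by filling the
    allocation table from the top-left, moving right/down as each
    supply or demand is exhausted."""
    supply, demand = list(supply), list(demand)
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])   # allocate as much as possible
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1                      # this source is exhausted
        else:
            j += 1                      # this destination is satisfied
    return alloc
```

Because costs never enter the rule, the result is feasible but usually far from optimal, which is why cost-aware rules like Least Cost or VAM tend to start the simplex iterations closer to the optimum.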

In the Branch and Bound algorithm, if the linear relaxation of the problem yields fractional values for more than one decision variable, which decision variable should be branched on in the next step?

If one of the fractional variables is chosen arbitrarily, is the optimal solution still guaranteed in the end?
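Branch and Bound remains exact whichever fractional variable is branched on; the choice affects only the size of the search tree. A common heuristic is to branch on the "most fractional" variable, sketched below (variables assumed nonnegative; this rule is one of several standard options, alongside pseudo-cost and strong branching):

```python
def pick_branching_variable(x_relax, eps=1e-6):
    """'Most fractional' branching rule: among variables with
    fractional LP-relaxation values, pick the one whose fractional
    part is closest to 0.5. Any fractional variable preserves
    correctness; the choice only influences tree size."""
    best_i, best_score = None, -1.0
    for i, v in enumerate(x_relax):
        frac = v - int(v)                  # fractional part (v >= 0 assumed)
        if eps < frac < 1 - eps:           # skip (near-)integral values
            score = 0.5 - abs(frac - 0.5)  # closeness to 0.5
            if score > best_score:
                best_i, best_score = i, score
    return best_i                          # None if already integral
```

Returning None signals that the relaxation solution is already integral, i.e. the node needs no further branching.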