# Optimization (Mathematical Programming) - Science topic

Questions related to Optimization (Mathematical Programming)
Question
In robust optimization, random variables are modeled as uncertain parameters belonging to a convex uncertainty set and the decision-maker protects the system against the worst case within that set.
In the context of nonlinear multi-stage max-min robust optimization problems:
Which robustness models work best: strict robustness, cardinality-constrained robustness, adjustable robustness, light robustness, regret robustness, or recoverable robustness?
How to solve max-min robust optimization problems without linearization/approximations efficiently? Algorithms?
How to approach nested robust optimization problems?
For example, the problem can be security-constrained AC optimal power flow.
To tractably reformulate robust nonlinear constraints, you can use the Fenchel duality scheme proposed by Ben-Tal, den Hertog, and Vial in
"Deriving Robust Counterparts of Nonlinear Uncertain Inequalities"
Also, you can use Affine Decision Rules to deal with the multi-stage decision making structure. Check for example: "Optimality of Affine Policies in Multistage Robust Optimization" by Bertsimas, Iancu and Parrilo.
Question
I am solving a bi-objective integer programming problem using the scalarization function (F1 + epsilon·F2). All my results appear to be correct, but CPLEX warns that it may give approximate rather than exact non-dominated solutions with this objective function. As I said, I am very sure my results are right because I have already checked them. Do I need to prove that CPLEX gives the right result in my algorithm, even though it sometimes makes mistakes on large instances?
Did you code the epsilon-constraint method using OPL? May I ask how you coded it? I tried but could not get the right results.
Thanks a lot.
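For what it's worth, the mechanics of the epsilon-constraint loop are independent of OPL. Below is a minimal sketch in Python, with SciPy's `linprog` standing in for CPLEX; the toy bi-objective model (max f1 = x, max f2 = y, subject to x + y <= 5, integer bounds 0..5) is an illustrative assumption, chosen so that its LP relaxation has integral vertices and a plain LP solver suffices:

```python
import numpy as np
from scipy.optimize import linprog

def solve_subproblem(eps):
    """Maximize f1 = x with f2 = y constrained to y >= eps (epsilon constraint),
    plus a tiny weight on f2 to break ties (the 'F1 + epsilon*F2' idea)."""
    res = linprog(
        c=[-1.0, -0.001],               # linprog minimizes, so negate
        A_ub=[[1, 1], [0, -1]],         # x + y <= 5  and  -y <= -eps
        b_ub=[5, -eps],
        bounds=[(0, 5), (0, 5)],
    )
    x, y = np.round(res.x).astype(int)  # LP vertex happens to be integral here
    return int(x), int(y)

# Sweep eps over the attainable range of f2 to trace the non-dominated set.
front = [solve_subproblem(eps) for eps in range(6)]
print(front)  # [(5, 0), (4, 1), (3, 2), (2, 3), (1, 4), (0, 5)]
```

In a real OPL/CPLEX setup the same sweep would be scripted around the model, changing only the right-hand side of the epsilon constraint between solves.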
Question
Hi all, I'm using CPLEX to solve the VRPTW (vehicle routing problem with time windows) and observe a huge difference in computing time even when I change the problem size by just 1. By "problem size", I mean the number of nodes in the problem.
For example, a 20-node instance took only 20 seconds to solve, while a 19-node instance took more than 1 hour. I understand the VRPTW is NP-hard, so such a phenomenon is to be expected.
Still, the gap seems too big. I wonder if there is any technique to make computing time more consistent with problem size?
Ondřej Benedikt Michael Patriksson Alexandre Frias Faria Adam N. Letchford Thanks all for your valuable insights. I have played more with my instances and found that it is more related to the structure of my model formulation.
My model needs to determine both the routes of the vehicles and the time windows of the customers, given a set of scenarios each containing parameters such as travel times. So it is rather harder than the classic VRPTW, where the time windows are known.
It is hard for some specific problem sizes. If I simply ignore these "bad instances", then the solving time indeed increases as the problem size grows.
Question
I am using MATLAB's  'fmincon' to solve some nonlinear constrained optimisation problem. But it is very slow. What are the possible ways to speed up the simulation?
What is the best alternative to 'fmincon' to speed up the optimisation process so that I can use it with MATLAB?
The MATLAB documentation page for fmincon mentions a parameter that can speed it up:
UseParallel
"When true, fmincon estimates gradients in parallel. Disable by setting to the default, false. trust-region-reflective requires a gradient in the objective, so UseParallel does not apply. See Parallel Computing. "
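Beyond parallel gradient estimation, supplying an analytic gradient avoids finite differencing entirely, which is often the dominant cost. A hedged analogue in Python with SciPy's `minimize` (the toy objective, constraint, and starting point are illustrative assumptions, not from the question):

```python
import numpy as np
from scipy.optimize import minimize

# Toy constrained problem: min (x0-2)^2 + (x1-1)^2  s.t.  x0 + x1 <= 2.
def f(x):
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

def grad(x):
    # Analytic gradient: passing jac= skips finite-difference estimation,
    # the same kind of saving SpecifyObjectiveGradient gives fmincon.
    return np.array([2 * (x[0] - 2), 2 * (x[1] - 1)])

cons = [{"type": "ineq", "fun": lambda x: 2 - x[0] - x[1]}]
res = minimize(f, x0=[0.0, 0.0], jac=grad, method="SLSQP", constraints=cons)
print(res.x)  # ≈ [1.5, 0.5], the projection of (2, 1) onto x0 + x1 <= 2
```

The same idea applies in MATLAB: supply the gradient in the objective function and set the corresponding option, instead of letting fmincon estimate it numerically.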
Question
Are there any (commercial) optimization solvers that make use of dynamic programming techniques when useful?
Yes. OpEMCSS, which comes with my Wiley textbook "Simulation-Based Engineering of Complex Systems", has a Classifier block that can do Reinforcement Learning (RL). RL does the same thing as dynamic programming without storing all the paths: it learns the rules to create a plan (a sequence of rules) that solves a problem. I covered both of these techniques in my CSUF EE585 class, Mathematical Optimization. How to set up a program to do RL is explained in my textbook.
Question
I made it but it gives different values from the literature.
Please refer to the following link for the population-based method Black-Hole Optimization Algorithm:
Question
Bat-inspired algorithm is a metaheuristic optimization algorithm developed by Xin-She Yang in 2010. This bat algorithm is based on the echolocation behaviour of microbats with varying pulse rates of emission and loudness.
The idealization of the echolocation of microbats can be summarized as follows: Each virtual bat flies randomly with a velocity vi at position (solution) xi with a varying frequency or wavelength and loudness Ai. As it searches and finds its prey, it changes frequency, loudness and pulse emission rate r. Search is intensified by a local random walk. Selection of the best continues until certain stop criteria are met. This essentially uses a frequency-tuning technique to control the dynamic behaviour of a swarm of bats, and the balance between exploration and exploitation can be controlled by tuning algorithm-dependent parameters in bat algorithm. (Wikipedia)
What are the applications of bat algorithm? Any good optimization papers using bat algorithm? Your views are welcome! - Sundar
The bat algorithm (BA) is a bio-inspired algorithm developed by Xin-She Yang in 2010, and it has been found to be very efficient.
Question
Hello everyone,
We have the following integer programming problem with two integer decision variables, namely x and y:
Min F(f(x), g(y))
subject to the constraints
x <= xb,
y <= yb,
x, y non-negative integers.
Here, the objective function F is a function of f(x) and g(y). Both the functions f and g can be computed in linear time. Moreover, the function F can be calculated in linear time. Here, xb and yb are the upper bounds of the decision variables x and y, respectively.
How do we solve this kind of problem efficiently? We are not looking for any metaheuristic approaches.
I appreciate any help you can provide. Particularly, it would be helpful for us if you can provide any materials related to this type of problem.
Regards,
Soumen Atta
The method for solving this problem depends on the properties of the functions F, f, and g (convexity, concavity, and other properties).
If no properties are known, the only method is to look through all values of the variables.
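When nothing is known about F, f, and g, the exhaustive search is straightforward and cheap to set up, since f and g each need to be evaluated only xb+1 and yb+1 times. A sketch (the functions and bounds below are illustrative placeholders):

```python
# Brute-force enumeration for min F(f(x), g(y)) over integers
# 0 <= x <= xb, 0 <= y <= yb.
def solve(F, f, g, xb, yb):
    fx = [f(x) for x in range(xb + 1)]   # precompute f once: O(xb)
    gy = [g(y) for y in range(yb + 1)]   # precompute g once: O(yb)
    # Scan all (x, y) pairs: O(xb * yb) evaluations of F.
    best = min((F(fv, gv), x, y)
               for x, fv in enumerate(fx)
               for y, gv in enumerate(gy))
    return best  # (optimal value, x*, y*)

# Example with a separable F; the generic scan still finds the optimum.
value, x_star, y_star = solve(F=lambda a, b: a + b,
                              f=lambda x: (x - 3) ** 2,
                              g=lambda y: abs(y - 7),
                              xb=10, yb=10)
print(value, x_star, y_star)  # 0 3 7
```

If F turns out to be monotone in each argument, the scan collapses to two independent one-dimensional searches over f and g, which is worth checking before enumerating the full grid.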
Question
Hi All,
Are there any opinions and experiences of the LocalSolver solver?
Comparing for example accuracy, speed, etc. to other solvers, etc.
Interesting to hear about them ...
/Lars
Dear Lars
I agree 100% with you that a solver must identify infeasibility, but my question is: how many solvers or MCDM methods do you know that have that capacity?
Only one: Linear Programming
The procedure is very simple: it compares the criteria's independent threshold values and checks whether a solution satisfies them. If even one criterion is not satisfied, the project is infeasible.
Nowadays, problems are 'solved' assuming the problem is feasible, without taking into account that this may not be the case.
I have read hundreds of comments and papers from our colleagues. How many of them posed this problem?
Nobody.
I wrote on RG about this problem almost a year ago (you can see it in my profile under number 304) and again in May 2020 under number 318. Both have had some moderate readership, but nobody came forward to acknowledge and discuss it. You are the only person who addresses the issue.
Regarding LocalSolver, I know what it is, but I have no experience with it.
Question
Assume, we found an approximate solution A(D),
where A is a metaheuristic algorithm, D is concrete data of your problem.
How close is the approximate solution A(D) to an optimal solution OPT(D)?
Question
I am preparing a comparison between a couple of metaheuristics, but I would like to hear some points of view on how to measure an algorithm's efficiency. I have thought of using some standard test functions and comparing the convergence time and the value of the evaluated objective function. However, any comments are welcome, and appreciated.
The 7th section, namely "Results, Data Analysis, and Comparison", of the following state-of-the-art research paper has a sufficient answer to this question:
Question
Hi,
I have heard, but cannot find any documents about it, that some solvers are better at utilizing the "SOS1" variable type, whereas other solvers just convert the problem into one with binaries and a few more constraints. Is that true or not?
If true - which are the solvers that are more efficient by using SOS1 instead of binaries when applicable?
/Lars
Question
I need to implement the epsilon-constraint method to solve multi-objective optimization problems, but I don't know how to choose each epsilon interval, nor when to terminate the algorithm, that is, the stopping criterion.
Hi my dear friend
Good day!
I think the below book will help you a lot to provide relevant codes.
Messac, A. (2015). Optimization in practice with MATLAB®: for engineering students and professionals. Cambridge University Press.
best regards.
Saeed Rezaeian-Marjani
Question
I have rewritten an MPC problem to a QP formulation.
The QP solvers quadprog and Gurobi, which use the interior-point algorithm, give me the same objective function value and the same optimized values x. GPAD, a first-order solver, gives me the same optimized values x, but an objective function value that is a factor of 10 bigger than that of quadprog and Gurobi. Does anyone know a possible explanation?
Good!
Question
I need help with Lingo programming. I have a mixed-integer program to solve using sets in Lingo. One of the constraints is:
S(i,k,w)>=m(i,j)*X(i,t)  ,   for i=1,...,I ; j=1,...,J; t=1,..,T;
k=t+K(j,w)-1,..,t+K(j,w)+S(j)-2 ; w=1,..,W.
Here m(i,j), K(j,w) and S(j) are parameters.
The problem is that I do not know how to enter the index k using sets in Lingo.
Any help would be highly appreciated.
Hi Imran and Farah,
Before asking questions, you should read the Lingo textbook, which is available free from the LINDO home page.
Question
I'm currently working on an optimization problem (please see attached file).
Any tips to linearize the objective function? Please note that a, b, and c are real constants.
Thanks!
This also provides a lot of information on how to convert a convex problem to conic form, e.g., second-order cone form. This should be your best option, as mentioned by Adam Letchford. Mosek is of course one among several optimizers for SOCPs.
Question
Dear Pierre Le Bot Thank you for introducing those resources. Could I ask you two questions? 1. What is the practical implication of CICA? Please mention some CICAs for a worker's hand cut-off scenario due to conveyor belt sticking. 2. After multiplying the results of the 3 parameters (no-reconfiguration probability, SF, CICA) and obtaining a probability number, how is that probability interpreted? Regards
Hello Amid,
very sorry to see only today that you asked me a question two years ago!
1. What is the practical implication of CICA? Please mention some CICAs for a worker's hand cut-off scenario due to conveyor belt sticking.
MERMOS is built at the level of the failure of the working team, so I do not know how to consider that event without more information. Our assumption is that failure happens because of a rational teamwork behaviour that is no longer adequate. The analyst describes with CICAs the behaviours of the team that are at the center of the failure story being quantified. Here the CICAs could be: 1. the team (or the worker, if there is no team) wants to fix the conveyor; 2. the team wants to save time by avoiding switching off the conveyor's power source. These two elements are enough to explain the story.
2. After multiplying the results of the 3 parameters (no-reconfiguration probability, SF, CICA) and obtaining a probability number, how is that probability interpreted?
The three elements (situation, CICAs, non-reconfiguration) are the elements required for the failing scenario to exist (the situation generates the CICAs; the CICAs last too long without reconfiguration). They have to be conditional: the situation-features probability is the conditional probability that that particular situation occurs given the general context of the analysis; the CICAs occur with a given probability given that situation; and the non-reconfiguration depends on both. The analyst has to try to build a plausible scenario by driving the probability of the CICAs toward 1; that means describing the situation features precisely (the more precise they are, the lower the probability). The non-reconfiguration takes into account the recovery induced by the MMI and the organisation (redundant roles, verification by procedures, ...).
If these explanations are still useful but not enough do not hesitate to ask me again.
Question
I want to calculate the distribution of the distance from a point at distance d from the center of a circle to random points uniformly distributed in the same circle, so that the distribution function depends on the distance d from the center. How can I calculate it?
...I read a geometrical probability book, but in that book only the case of two random points in the circle is investigated.
@ Peter Breuer,
Do you have a reference to a geometry book that one can cite for the pdf of 2r/R^2??
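For what it's worth, the 2r/R^2 density is a one-line computation rather than something needing a dedicated book, assuming a single point uniform in a disc of radius R: the CDF of its distance r from the center is the area ratio,

```latex
F(t) = P(r \le t) = \frac{\pi t^2}{\pi R^2} = \frac{t^2}{R^2},
\qquad 0 \le t \le R,
\qquad f(t) = F'(t) = \frac{2t}{R^2}.
```

The original question (distance from an off-center point at distance d) is harder, since the relevant region is an intersection of discs rather than a concentric disc.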
Question
If there are at least 4-5 factors to consider, there will be too many samples. I have read about 2^k factorial designs, and some researchers use them to screen out the important factors.
I do not believe researchers can blindly screen out the most important factors that influence processes; it makes no sense. A good foundational knowledge of what to do, when to do it, why, and how is strongly recommended. You can do a good or decent screening with the wealth of information you have acquired about the nature of the research you embark on.
Question
Consider the following elementary maximization problem:
\begin{align}
f = \operatorname*{arg\,max}_{y_{l,c},\, p_{l,c}} \sum_{l=1}^{L}\sum_{c=1}^{C} y_{l,c}\log_2\left(1+\frac{p_{l,c}}{I_{l,c}}\right)
\end{align}
s.t.,
\begin{align}
\sum_{l=1}^{L}y_{l,c}\leq 1
\end{align}
\begin{align}
\sum_{l=1}^{L}\sum_{c=1}^{C} \log_2\left(1+\frac{I_{l,c}}{p_{l,c}}\right) \geq R_c
\end{align}
where, $l=1,2, \ldots L$, and $c=1, 2, \ldots C$
My questions are as follows:
1. Can I solve this problem as non-linear optimization?
2. I want to use generalized reduced gradient (GRG) method. Is it the correct approach? Can I transform this problem to minimization objective function?
3. Can any other optimization method be followed? Some suggestion.
Easy methods employ a momentum factor, which is a stochastic mechanism to avoid getting trapped in local minima.
Question
What is the best method to solve Multiobjective Optimization , weight, bounded, goal....etc ?
There are really more than 50-60 algorithms. It seems everyone is trying to change a few things in an existing algorithm and come up with a new one, and of course everyone promotes their own. I am checking papers that compare the performance of these algorithms, yet each one chooses a different set of algorithms to compare; at least a few algorithms appear repeatedly in these comparison papers. I just need multiobjective optimization for an FEM updating problem that I am facing. But what I have seen is that the multiobjective optimization research community needs to sit down and do more surveys about which algorithms are most suitable for which kinds of problems. In a few weeks, I can tell you which algorithms I will choose and why; I am working on it too.
Question
I'm using the Optimization Toolbox in MATLAB to solve a multi-objective optimization problem with linear and nonlinear constraints. After running the optimization, I got a Pareto front (see the file attached in this message); increasing the population size gives me the same result.
What do you think?
My graph also came out the same way as Achour Hadjar's. Do we have to add constraints in the genetic algorithm (gamultiobj in MATLAB)? If the nonlinear constraints are ignored, the Pareto front is plotted correctly, but on adding constraints it shows only one point. What might be the problem?
Question
I have seen many scholars use the CPLEX solver in GAMS, while others solve the problem with ILOG CPLEX software directly. In this case, should they obtain the same results?
In principle, when modeling a problem with different modeling languages (such as GAMS, AMPL), you can expect that a global solver, such as CPlex, will return the same objective function value, no matter what modeling language you used. But...
The solution process within CPLEX differs when you present the variables and constraints in a different ordering. You have no control over that ordering (i.e., the interface between the modeling language and the solver), and even if you had such control, there is no general strategy to find a good ordering (for example, an ordering that leads to minimum runtime).
What does that mean for you? In case you have an easy problem with a unique optimum, you would not feel much difference from using different modeling languages or different MIP solvers.
In case you have multiple optima, you might get different solutions from different modeling languages (although, of course, they have the same objective function value).
In case your problem is too difficult to find a global optimal solution in finite time, and you terminate the solution process prematurely, then you will easily end up with very different solutions and very different objective function values, depending on the modeling language you have used.
Question
Hi,
I am working with a VERY large scale LP -- so large that simplex method takes forever to run. I've developed an efficient numerical algorithm to exploit the problem structure to significantly reduce the running time.
The problem is, my application requires basic solutions. For now, I am using crossover: take the optimal primal and dual solutions (numerical) from my algorithm, give them to a simplex solver, and use good old simplex to solve the problem to get basic solutions. This works well, but it's still a bit too slow, as most of the running time is spent in the crossover phase.
So my question is, is there any other algorithm in the literature that can produce basic solution given a numerical solution with high accuracy?
Thanks,
Alex
Nice links from Nicolas Dupin. The articles are interesting and straight to the point. Am sure Alex will find them useful.
Question
Hello Everyone,
I have been asked by a reviewer to apply our metaheuristic to an industrial or real-world problem (in addition to the CEC benchmarks).
We are working in a continuous space of variables with unconstrained optimization; hence, we need benchmarks from industrial or real-world problems.
IEEE CEC 2011 Real World Problems
Question
When I was reading an article about supply chain design, three constraints contained an undetermined constant "M". I think this "M" should be adapted during the optimization process, but I don't know how to find the most suitable value to make these constraints tight.
First Constraint: inventory position <= reorder point + operator "M"*(1-binary decision variable).
Explanation: if the inventory position <= reorder point, a procurement order needs to be issued, so the binary decision variable = 1;
if the inventory position > reorder point, no procurement order is needed, so the binary decision variable = 0.
Second Constraint: logistics volume <= M*binary decision variable.
Explanation: if the binary decision variable = 1, the logistics flow is allowed.
So the constant "M" is what is uncertain. How can I choose this value, and how should I interpret this constant?
I attached this article, these equations are Eq. (20), Eq. (26), and Eq. (28).
In your case, M must be greater than the largest difference between the reorder point and the current inventory position. Say, M = (the largest applicable inventory position - the lowest/smallest applicable reorder point).
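The rule above can be sanity-checked numerically. A small sketch in Python (the inventory bounds are illustrative assumptions, not numbers from the attached article):

```python
# Big-M logic for: inventory <= reorder_point + M * (1 - order).
# M must exceed the largest possible (inventory - reorder_point) so that
# the constraint becomes non-binding whenever the order indicator is 0.
inv_max, reorder_min = 500, 50
M = inv_max - reorder_min          # tightest valid big-M per the rule above

def constraint_ok(inventory, reorder_point, order):
    return inventory <= reorder_point + M * (1 - order)

# order = 1 (procurement issued) when inventory is at/below the reorder point:
assert constraint_ok(inventory=40, reorder_point=50, order=1)
# order = 0 is allowed for any valid state with inventory above the point:
assert constraint_ok(inventory=500, reorder_point=50, order=0)
# but an invalid state beyond reorder_point + M would be cut off:
assert not constraint_ok(inventory=501, reorder_point=50, order=0)
print("big-M constraint behaves as intended")
```

Choosing M as tight as possible matters in practice: an oversized M weakens the LP relaxation and slows down the branch-and-bound search.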
Question
I have formulated an optimization problem for a building, where the cost concerns energy consumption and the constraints relate to hardware limits and the building model. To solve this formulation, I need to know whether the problem is convex or non-convex, in order to select an appropriate tool to solve it.
Question
Can anyone suggest me references about converting a constrained optimal control problem to an optimization problem?
I have seen problems being solved by this method, but I haven't been able to find an algorithm or set of instructions for the conversion process. I want to solve the problem using MATLAB fmincon function.
Question
What exactly is the difficulty imposed by nonlinear constraints, relative to linear ones, for a multi-objective optimization genetic algorithm?
I think the main difficulty comes from the fact that nonlinear constraints are inherently much more difficult to satisfy than linear ones.
Evolutionary algorithms can usually deal reasonably well with linear constraints, but when it comes to nonlinear ones, the usually employed strategy of augmenting the objective function(s) with a penalty term can perform quite poorly depending on the cases.
What can happen is that you try to solve a problem with a different objective function, and the addition of the penalty term can drive the evolutionary algorithm "astray". In addition, if you employ a penalty approach, you need to balance between solutions that have a good feasibility but poor objective values and solutions with poor feasibility but good objective values.
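The penalty approach described above can be sketched in a few lines (the objective, constraint, and penalty weight below are illustrative assumptions):

```python
# Penalty approach for a nonlinear constraint in an evolutionary algorithm:
# augment the objective with a term proportional to the constraint violation.
def objective(x):
    return (x[0] - 2) ** 2 + (x[1] - 2) ** 2

def violation(x):
    # Nonlinear constraint x0^2 + x1^2 <= 2; a positive value means infeasible.
    return max(0.0, x[0] ** 2 + x[1] ** 2 - 2.0)

def penalized(x, weight=100.0):
    # The weight trades off feasibility against objective quality,
    # exactly the balancing act described in the text above.
    return objective(x) + weight * violation(x) ** 2

feasible, infeasible = [1.0, 1.0], [2.0, 2.0]
# Feasible points are untouched; infeasible ones are degraded, so selection
# inside the GA steers the population toward the feasible region.
print(penalized(feasible), penalized(infeasible))
```

If the weight is too small, infeasible solutions survive; too large, and the search effectively optimizes feasibility alone and can stall, which is the "astray" behavior mentioned above.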
Question
Hello, I am using the MATLAB interface to read and write GAMS files. I am doing multi-objective optimization: the first objective function I solved as a MIP, while the second requires an MINLP. Due to license limitations, I can't optimize the MINLP. Can someone tell me if it is possible to test both of my files (.gms and .m) online?
You can submit and run your GAMS file through the NEOS server.  There is no limit on model size or CPU time.  See https://neos-server.org/neos/.  The list of MINLP solvers is provided at https://neos-server.org/neos/solvers/index.html#minco and it includes the global solvers BARON, Couenne, LindoGlobal and Scip under GAMS.
Question
Hi, I'm developing a PSO algorithm over a set of integers, but I can't apply the velocity update
v(t)=v(t-1)+ (pBest -x(t)) + (gBest -x(t))
x(t)=v(t)+x(t-1)
because each element of the set is a number between 0 and 499. If you add two elements of different vectors (particles), the result can exceed the upper limit of 500.
So I might have to define custom addition and subtraction operators. Can you help me with your experience in PSO?
We discovered that rounding, etc., was not the best approach for PSO on discrete sets of integers. There is an adapted PSO addressing discrete variables, and we were able to combine it with the other types of PSO. There is a "Discrete Particle Swarm Optimization" algorithm; below is a reference that might help.
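For comparison with the dedicated discrete variants mentioned above, the naive round-and-clamp update, which at least handles the 0-499 overflow from the question even if, as noted, it is usually not the best-performing approach, looks like this (coefficients are illustrative assumptions):

```python
import random

# One discrete-PSO position update on integer components in [0, 499]:
# apply the usual velocity update, then round and clamp back into range.
LOW, HIGH = 0, 499

def update(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        vel = (w * vi
               + c1 * random.random() * (pi - xi)   # cognitive pull
               + c2 * random.random() * (gi - xi))  # social pull
        pos = min(HIGH, max(LOW, round(xi + vel)))  # clamp to integer range
        new_v.append(vel)
        new_x.append(pos)
    return new_x, new_v

random.seed(0)
x, v = [480, 10, 250], [0.0, 0.0, 0.0]
x2, _ = update(x, v, pbest=[499, 0, 260], gbest=[499, 5, 240])
print(x2)  # every component stays inside [0, 499]
```

The clamping answers the overflow concern directly; the referenced discrete PSO replaces the arithmetic update with set-based operators instead, which tends to work better.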
Question
I am looking for MATLAB code for ant colony optimization or simulated annealing that can handle mixed-integer variables.
Thanks.
You will find what you want, or check this link
Question
Suppose I am optimizing the ZDT1 two-objective test function. I want to stop the algorithm when there is no significant improvement in the Pareto front. How can I achieve this?
Assuming you can write or change the code of the program you are using:
First test and improve your algorithm and tune its parameters on a small example where the optimal Pareto front can be found deterministically, for example by an enumeration model (probably exponential) if the domain is discrete. For larger problems, you can end the run in a way similar to what is normally done for a conventional single-objective optimization algorithm, for example when there is no improvement (pointwise or on average) of the objective function values, with the necessary adaptations given that all objective functions must now be considered simultaneously.
Question
The desired random vector is a series of number between [Xmin, Xmax] with N elements. For instance, [1 3 2 5 9 4] can be considered a random vector while N=6, Xmin=1 & Xmax=10.
Using MATLAB, "randi([Xmin Xmax],1,N)" can produce this vector, but the vector may randomly include repeated elements. For example:
[1 3 2 2 9 4], [1 3 2 3 9 4] , [1 9 2 5 9 4] or [4 3 2 5 9 4]
So, how can I generate a random vector without repeated elements?
Note that I know I could fix the vector using if/for loop syntax, but I have to avoid loops in my M-file!
Ronald's suggestion gave me a basic vision for producing the vector. Also, the attached function is more appropriate than using "if" and "for".
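For reference, the underlying idea is sampling without replacement, which needs no explicit loop. A Python counterpart (in MATLAB one would typically build this from randperm):

```python
import random

def unique_random_vector(xmin, xmax, n):
    """n distinct random integers in [xmin, xmax], loop-free."""
    if n > xmax - xmin + 1:
        raise ValueError("range too small for n distinct values")
    # random.sample draws without replacement, so no duplicates can occur.
    return random.sample(range(xmin, xmax + 1), n)

v = unique_random_vector(1, 10, 6)
print(v)  # e.g. a permutation-like draw such as [4, 9, 1, 7, 3, 10]
```

The guard clause matters: with N larger than the range size, a duplicate-free vector is impossible, which is also why rejection-based fixes of randi output can loop forever.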
Question
Does MATLAB R2014a support nonlinear constraints in multi-objective optimization using the Optimization Toolbox?
Thank you
Yes. In fact, there are some tutorials on YouTube. If R2014a includes the Optimization Toolbox, you can easily optimize your functions with its help.
Question
How to write multiple nonlinear constraints?
Yes; you just need to add to the fitness function a penalty term for violation of the constraints.
Best wishes,
Question
I am trying to develop a long-haul intermodal transportation route covering 4 countries for a research paper. Before proceeding with a mixed-integer linear programming (MILP) model, I need to calculate the travel distances between cities and the respective travel times. I prefer to work in R.
Unfortunately, the osrm package is not working on my R version.
Can anyone please suggest any alternative?
As Alan suggests, you need to install it from GitHub:
> require(devtools)
> devtools::install_github("rCarto/osrm")
> library(osrm)
Data (c) OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright
If you plan to use the OSRM public API, read the OSRM API Usage Policy:
> data("com")
> distCom <- osrmTable(loc = com[1:50, c("name","lon","lat")])
> distCom$duration[1:5,1:5]
Bethune Annezin Denderleeuw Haaltert Locon
Bethune 0.0 6.2 108.0 104.4 8.0
Annezin 5.5 0.0 110.0 106.3 7.2
Denderleeuw 111.2 114.2 0.0 12.7 107.1
Haaltert 107.7 110.8 12.8 0.0 103.7
Locon 7.3 7.0 102.8 99.2 0.0
Question
Given a minimization problem, I want to compute lower and upper bounds for the optimal solution to lie in. I have a scalar-valued function f(x1, x2, ..., xn). Suppose I use GA, PSO, etc. to minimize it; the algorithm outputs some answer it thinks is good. I want to compute the best possible lower and upper bounds, but mostly lower bounds, to sanity-check the generated solution. What methods are available for computing such bounds on the optimum of a minimization problem? Consider both cases, f differentiable and not; the second matters more to me. Please suggest some seminal papers that do a good job at this.
Dear Kishore Shaurya,
your description of the problem under consideration makes clear that your function f is neither differentiable nor convex (as long as it is not constant). So the idea of a bundle method is not applicable, and I have no idea how to determine a valid lower bound for it.
A domain decomposition approach like that of BARON for global optimization requires the ability to solve local minimizations, but without further knowledge about the function f I see no way of doing that.
With such kind of problem you are really just left with search methods without any guarantee for the solution quality.
If n is small you could just try to evaluate f on a grid over the unit cube to get an idea how f behaves.
Even the search algorithm DIRECT
needs Lipschitz continuity, which is not fulfilled by an integer-valued function f with jumps.
Best regards
Ralf
Question
Solving large multi-stage stochastic problems may become intractable. There are several approaches that permit solving such problems, among which the non-anticipativity principle, decomposition techniques, Lagrangian relaxation, and lately optimal condition decomposition are the best known. In this regard, what are the advantages and disadvantages of scenario reduction techniques, compared to the mentioned techniques, for reducing computational burden and complexity while providing a good approximation to the original stochastic optimization problem? In advance, I'd appreciate your supportive message.
Dear Carvalho, thanks much for the hints...
Question
I am using the multi-objective GA toolbox in MATLAB to optimize three objective functions. I can plot the Pareto front for two objectives at a time, but I am unable to plot the Pareto front of all three objectives together.
Hi,
you can use some (free) graphics tools, such as Gnuplot or R (library scatterplot3d); both are easy to use.
On the other hand, if you are interested in attainment surfaces, you can use the following tool in C together with Gnuplot, which is easy to apply.
There are other graphical representations as well:
scatterplots, bar charts, star coordinates.
More information is in the Deb book, chapter 8, "Salient Issues of Multi-Objective Evolutionary Algorithms".
Regards,
Question
My question follows from the explanation in the attached document. It is about image denoising using regularization with constraints. I used the Lagrange multipliers formulation, but, on obtaining the numerical solution, I can't get the regularization parameter (Lagrange multiplier) that should be theoretically expected.
Any suggestion would be much appreciated.
Miguel Tavares
X(u) is a norm functional (the total variation norm). dL/du = 0 means one must find the image u that minimizes X(u) + \lambda ||u-g||^2,
where ||u-g|| is the Frobenius norm.
The system is solved numerically, so, as Aparna pointed out, it is not exact. But I get behavior of \lambda* close to -dX*/dc except for a scaling factor, which should not happen (at least if the theoretical derivation is correct).
Thanks
Question
Hi all, I would like some advice on efficiently resolving the QP problems that arise in Bundle methods at each iteration. As you may know, between iterations new affine constraints get added and the QP objective is also modified. It would be great if you can point me to an efficient implementation or suggest what best can be done with existing commercial software packages for optimization. Currently, I use CPLEX to solve each QP problem, however my guess is not much information is being re-used between iterations. Any suggestions to improve the performance?
Hi, you can refer for example to this complete page:
Question
I am still confused about how to determine the initial population of fireflies in the firefly algorithm in order to start the iteration.
Question
If anyone knows about the NSGA-II evolutionary algorithm, please let me know why crowding distance is not used in the initial generation of the offspring.
Hello, the reason is that crowding distance is a secondary criterion, used only when there is a surplus of solutions of the same non-domination rank: fronts are accepted starting from the first/best, and when the last front considered contains more solutions than are required in the next generation, crowding distance decides which of them survive. More solutions tend to belong to the same non-domination rank as the optimization proceeds.
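For concreteness, the secondary criterion itself can be sketched as follows: a standard crowding-distance computation for a single front (the objective vectors below are illustrative):

```python
def crowding_distance(front):
    """Crowding distance for one front of objective vectors.
    Boundary points per objective get infinite distance, so the
    extremes of the front are always preserved."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front[i][obj])
        lo, hi = front[order[0]][obj], front[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue  # all values equal: no spread along this objective
        for k in range(1, n - 1):
            # normalized gap between each point's two neighbors
            dist[order[k]] += (front[order[k + 1]][obj]
                               - front[order[k - 1]][obj]) / (hi - lo)
    return dist

front = [(0.0, 4.0), (1.0, 2.5), (2.0, 2.0), (4.0, 0.0)]
d = crowding_distance(front)
print(d)  # extremes infinite; interior points get finite crowding values
```

In NSGA-II this is computed per front after non-dominated sorting, and only consulted to truncate the last partially accepted front, which is exactly why it plays no role in the very first offspring generation.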
Question
Global optimisation techniques
Dear Soumitra K Mallick
Thanks for your answer. My purpose is to specify a predefined, controlled step for the variables at each GA iteration. This should decrease the computation time, which is what I am after.
Kind regards.
Moussa KAFAL
Question
Let's say you have obtained a Pareto front through a certain MOEA and you want to check every solution in that front against certain criteria. Would it be sound (correct) to use VIKOR in such a comparison if you consider each solution as a possible alternative?
Hello Morteza,
Thank you for providing me with the sources.
Best regards
Redouane
Question
Suppose we have two algorithms (A and B) to solve a multi-objective problem. Each algorithm provides a set of solutions. Which statistical test is appropriate to compare these algorithms? Is the Wilcoxon test appropriate?
Dear Hamid,
To check the statistical significance of pairwise differences among solutions, the Wilcoxon test is fine, and the Chi-square test would also work. To perform these tests, the null hypothesis is that there is no significant difference between the two solutions at a given significance level (e.g., 5%). The p value and z value are used to assess the significance of differences between the solutions. When the p value is less than the significance level (0.05) and the z value exceeds the critical values of z (−1.96 and +1.96), the null hypothesis is rejected, meaning that the performance of the algorithms is significantly different.
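A minimal sketch of the paired Wilcoxon test with SciPy (the per-instance results below are made-up numbers purely for illustration):

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired best objective values of algorithms A and B on the same 10
# problem instances; these numbers are illustrative assumptions.
res_a = np.array([10.2, 9.8, 11.1, 10.5, 9.9, 10.7, 11.3, 10.1, 9.6, 10.9])
res_b = np.array([11.0, 10.4, 11.9, 11.2, 10.3, 11.5, 12.0, 10.8, 10.2, 11.6])

stat, p = wilcoxon(res_a, res_b)   # two-sided signed-rank test
significant = p < 0.05             # reject H0 at the 5% level?
```

Pairing matters: each instance must be solved by both algorithms so the test compares matched differences, not two unrelated samples.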
Question
I am trying to maximize an efficiency function which is fractional in nature, through power allocation. It is already known that the function has unimodal behavior with respect to the allocated power. In this case, should I go for Dinkelbach's method of first converting it into a convex problem and then optimizing, or can I go for other search methods (both direct and indirect) to locate the peak value? Most papers (almost all) have tackled these types of problems with Dinkelbach's method. Could somebody please suggest a solution?
Thanks for the answer. Is it possible to reduce the computational complexity by adopting other methods in specialized cases, such as unimodality and so on?
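For reference, the conventional Dinkelbach route can be sketched as below; the rate/power model, the constants, and the bounds are my own illustrative assumptions, not from the question:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative energy-efficiency example: maximize R(p)/P(p) with
# R(p) = log2(1 + g*p) (rate) and P(p) = p + p_c (consumed power).
g, p_c, p_max = 4.0, 0.5, 10.0
R = lambda p: np.log2(1.0 + g * p)
P = lambda p: p + p_c

def dinkelbach(tol=1e-8, max_iter=100):
    lam = 0.0  # current estimate of the optimal ratio
    for _ in range(max_iter):
        # Inner parametric problem: maximize R(p) - lam * P(p) over [0, p_max].
        res = minimize_scalar(lambda p: -(R(p) - lam * P(p)),
                              bounds=(0.0, p_max), method="bounded")
        p_star = res.x
        F = R(p_star) - lam * P(p_star)
        if F < tol:          # F -> 0 at the optimal ratio
            return p_star, lam
        lam = R(p_star) / P(p_star)
    return p_star, lam

p_opt, eff = dinkelbach()
```

Under unimodality, a direct scalar search (e.g., golden-section over p) on the ratio itself is also a valid cheaper alternative; Dinkelbach's appeal is its guaranteed superlinear convergence when the inner problem is easy.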
Question
I know that if we have a leader-follower game, and the follower's problem has inequality and equality constraints, it is considered an MPEC problem. What if the follower has an unconstrained problem; would it also be classified as MPEC? What if it has equality constraints only? Wouldn't these be special cases of the problem with inequality constraints?
My understanding is that:
MPEC: problems with equality and inequality constraints
MPCC: problems with inequality constraints only (complementarity)
EPEC: problems with multiple leaders and equality and inequality constraints
Please correct me if I'm wrong. If so, please provide a source or chart that explains the differences with a complete list.
Amirsaman and Michael,
Thank you so much for your inputs.
Question
In my current work I employ the GA optimization method in MATLAB to search for the best solution. However, I find that the GA easily converges to a local optimum rather than the global one, especially when this local optimum is very close to the global optimum. Could you help explain how this happens, and how I can improve the GA to avoid converging to the local optimum? Thank you!
Hi,
one of the ways to escape a local optimum is to perturb each new GA generation by adding some random chromosomes at the end of it. Note that the randomness you add must be well chosen and small compared to the rest of the genetic operators; otherwise, you will have a random search instead of a guided GA search.
Another solution is to reconsider the mutation rate (which induces some randomness into your population). Be careful with the chosen rate, though; otherwise your GA will be too perturbed to find a good solution. You can also consider allowing some bad solutions to pass to the next generation. The same remark applies here: the number of bad individuals you allow to survive must be small, otherwise your GA will not converge to the hoped-for results.
Good Luck
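The random-immigrants idea above can be sketched as follows; the population representation as (fitness, chromosome) pairs and the replace-the-worst policy are my own assumptions:

```python
import random

def inject_random_immigrants(population, n_immigrants, gene_low, gene_high):
    """Replace the worst few individuals with fresh random chromosomes.

    population: list of (fitness, chromosome) pairs, lower fitness = better.
    Keeping n_immigrants small preserves the guided search, per the
    advice above; too many immigrants degenerate into random search.
    """
    population.sort(key=lambda ind: ind[0])              # best first
    n_genes = len(population[0][1])
    for i in range(1, n_immigrants + 1):
        chrom = [random.uniform(gene_low, gene_high) for _ in range(n_genes)]
        population[-i] = (float("inf"), chrom)           # to be re-evaluated
    return population

pop = [(3.0, [0.0, 0.0]), (1.0, [1.0, 1.0]), (2.0, [2.0, 2.0])]
pop = inject_random_immigrants(pop, 1, -5.0, 5.0)
```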
Question
Instead of converging to the max value, the binary matrix is very random, and the next random particle position (0,1) from the sigmoid function causes the objective to be very low. Is this normal for BPSO? How does the solution converge in BPSO?
No, I am using a sigmoid function that generates the binary matrix for the next iteration, using the simplest binary code available in the literature, similar to what is attached.
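For reference, a minimal sketch of the usual sigmoid-based position update in binary PSO (the clamping value v_max = 4 and the NumPy generator are my choices):

```python
import numpy as np

def bpso_position_update(velocity, rng, v_max=4.0):
    """Sigmoid transfer: map real-valued velocities to bit probabilities.

    Clamping |v| <= v_max keeps the probabilities away from 0 and 1,
    preserving exploration. Note that velocities near zero give
    sigmoid(0) = 0.5, i.e. pure coin flips -- random-looking positions
    like those described usually mean the velocity update is not
    accumulating the personal/global-best attraction terms.
    """
    v = np.clip(velocity, -v_max, v_max)
    prob = 1.0 / (1.0 + np.exp(-v))                 # transfer function
    return (rng.random(v.shape) < prob).astype(int)

rng = np.random.default_rng(0)
bits = bpso_position_update(np.zeros(8), rng)       # coin flips at v = 0
```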
Question
Hi dear all. Full description of my problem is in the attached pdf file.
Hi, could you please give us the formulation of your mathematical programming problem? Normally, the total number of unknowns, variables and multipliers is equal to the number of equations (constraints plus necessary conditions).
Question
I want to compare SI techniques such as ACO, PSO, FA, TLBO, SA, DE, ABC, HS, ICA, IWO, CA, IFA, CS, SSO. For comparison I am using benchmark functions such as sphere, Rosenbrock and Ackley. Further, I am keeping the following fixed: the number of iterations and the max and min values.
While doing this, even if I specify the range [-10, 10], I get an optimized value of 784. Is this correct?
You can evaluate those optimizers according to several benchmark problems provided by IEEE Congress on Evolutionary Computation (CEC). There are some details that I think researchers in the field of optimization should be aware of them. Here, I list them again (I also have mentioned them for other researchers) because it can be very useful for you and other researchers.
1. If you want to be a great researcher (such as Dr. Michael Patriksson), I recommend that you be innovative, deep, and curious and provide new discussions for researchers. Those simple benchmarks are not enough to compare different optimizers and substantiate the differences between their performances; for this reason, it is better to also include some of the following benchmark problems.
2. You should choose suitable benchmark problems based on the class of your optimizer, your preferences, and your objective function. For example, do you need single-objective or multi-objective, constrained or unconstrained problems? Artificial benchmark problems or real-world optimization problems?
3. After step 1, you can download the proper benchmark suite based on the IEEE CEC standard benchmark problems. You can download the MATLAB code of those benchmarks from the site of Prof. Ponnuthurai Nagaratnam Suganthan:          http://www.ntu.edu.sg/home/epnsugan
4. Please do not forget that you should employ non-parametric statistical hypothesis tests (such as the Wilcoxon signed-rank test) to decide whether the differences between optimization algorithms are significant. Otherwise, the differences between the results of those optimizers are not statistically validated, and the best ISI journals (Q1, Q2) may ask you to revise your paper, or reject it, because of insufficient statistical validation.
5. You can utilize these benchmark problems instead of those old and simple benchmark problems (maybe it is not really valuable to only use simple problems), these problems have their own lower and upper bounds and the MATLAB codes in IEEE CEC site have embedded them (you can just follow the structure of those codes):
CEC'05: Evolutionary real-parameter single-objective optimization
CEC'06: Evolutionary constrained real-parameter single-objective optimization
CEC'07: Real-parameter MOEAs
CEC'08: Large-scale single-objective global optimization with bound constraints
CEC'09: Dynamic optimization
CEC'09: Real-parameter MOEAs
CEC'10: Large-scale single-objective global optimization with bound constraints
CEC'10: Evolutionary constrained real-parameter single-objective optimization
CEC'10: Niching (introduces novel scalable test problems)
CEC'11: Real-world numerical optimization problems
CEC'13: Real-parameter single-objective optimization
CEC'14: Real-parameter single-objective optimization (incorporates expensive function optimization)
CEC'14: Dynamic MOEAs
CEC'15: Real-parameter single-objective optimization (incorporates 3 scenarios)
CEC'16: Real-parameter single-objective optimization (incorporates 4 scenarios)
CEC'17: Real-parameter single-objective optimization (incorporates 3 scenarios)
6. You can also use realistic or applied problems in engineering and science; some of them are listed here:
• Optimal reactive power dispatch benchmark problems: IEEE 30-bus, 57-bus and 118-bus test systems
• Truss design problems: 10-bar plane truss, 25-space truss, 72-bar truss, 120-bar truss dome, 200-bar plane truss, 26-story truss tower
• Non-truss design problems: Welded beam, Reinforced concrete beam, Compression Spring, Pressure vessel, Speed reducer, Stepped cantilever beam, Frame optimization
Question
I am currently doing research on global optimization and test my algorithm on benchmark functions, both unimodal and multimodal (e.g., the Sphere, Rosenbrock and Schaffer functions).
I think it would be best if I could apply my algorithm to real-life applications rather than benchmark functions. So, could somebody recommend a few recent, active global optimization problems (preferably continuous ones)?
I think that the following points about real-world problems can help all interested researchers:
1. You should first choose the proper benchmark problems based on the nature of your algorithm and the target case. For example, single objective optimization or Multi-objective? Constrained optimization or unconstrained optimization? numerical problems or engineering problems?
2. You can find the proper problems based on the IEEE standard benchmark Suites. The best MATLAB, JAVA, ... codes and papers about several benchmark sets for optimization (often meta-heuristic and evolutionary algorithms) can be obtained from the following link, which is the home page of dear Prof. Ponnuthurai Nagaratnam Suganthan:          http://www.ntu.edu.sg/home/epnsugan
3. For real-world applications, you can use the following benchmark set:
CEC11:  Real-world Numerical Optimization Problems
• You can follow the structure of the published codes in that site to test your optimization algorithm.
• Please do not forget that you should employ non-parametric statistical hypothesis tests (such as Wilcoxon signed-rank test ) to decide about the significant differences between different optimization algorithms.
In addition to these benchmark problems, several engineering benchmark problems are also used by researchers. They use these benchmarks to demonstrate the efficiency of their algorithms in some engineering applications. I have summarized some of the most-used engineering benchmark problems:
• Optimal reactive power dispatch benchmark problems: IEEE 30-bus, 57-bus and 118-bus test systems
• Truss design problems: 10-bar plane truss, 25-space truss, 72-bar truss, 120-bar truss dome, 200-bar plane truss, 26-story truss tower
• Non-truss design problems: Welded beam, Reinforced concrete beam, Compression Spring, Pressure vessel, Speed reducer, Stepped cantilever beam, Frame optimization
Question
Hello,
I am currently trying to create a compliant mechanism by topology optimization in Abaqus 6.14-1, but I am already stuck at the standard examples from literature.
Take the compliant gripper for example. I understand the theory of maximizing the geometrical advantage (GA = Fout/Fin, or GA = uout/uin), while constraining the volume and input displacement.
My problem is implementing the geometrical advantage into Abaqus. How can I add the geometrical advantage to the design responses? Under "single-term" I can (obviously) not find the GA.
Under "combined-term" I could (in theory) combine the two single-term displacements or forces, but a division or multiplication is not possible; just "subtract", "absolute difference" and "weighted combination".
Any help how to implement the GA would be really appreciated! :-)
Best regards
Rene Moitroux
Our library has this book, I will try to get it this afternoon. I read about the MatLAB codes, but tried to solve the problem solely with Abaqus to reduce complexity. As this does not work, I guess MatLAB should be the easiest solution then, I will check it out. Thank you again for your input!
Best regards
Rene Moitroux
Question
Hi to all ,
What is the interpretation of the pareto front graph when using a two-objective genetic algorithm (gamultiobj) in matlab . and how to choose one best individual from final points
Best regards
There are many methods for choosing the "best compromise" solution in a Pareto front. Two of the classical methods are the Laplace's criterion and the Wald's criterion.
Assume you have the objectives J1 and J2, and that we have to minimize both of them. We solve the optimization problem and obtain a Pareto front composed of n Pareto-optimal solutions Si, each with two objective values: J1i and J2i (i = 1...n).
The Laplace's criterion consists in the minimization of the objectives mean values for each solution, i.e. finding the solution i that solves the problem:
min(E(J1i,J2i)), where E stands for "expected value". For each solution you keep the mean value, and among them you find the minimum one.
The Wald's criterion consists in choosing the "least worse" case i.e. finding the solution that solves the problem: min(max(J1i,J2i)). For each solution you keep the maximum objective value and you choose among them the minimum one.
Little tip: think about two J1 and J2 objectives that belong to the set [0,10] and we want to minimize both of them (0 means excellent, 10 means awful)
The Laplace's criterion tells us that these two solutions are at the same level:
S1: {J1 = 10 , J2 = 0}
S2: {J1 = 5 , J2 = 5}
which means that even if the first objective is very bad with respect to the second in the solution S1, we consider that both S1 and S2 are a good compromise between the two stakeholders.
The Wald's criterion is quite more "pessimistic": we choose to make the best of the worse.
The two solutions:
S1: {J1 = 8 , J2 = 0}
S2: {J1 = 8 , J2 = 7}
are at the same level, because in both the solutions we are making the best we can to avoid J1 being more (worse) than 8.
These two methods both have shortcomings, so be careful to use them in a "rational" way.
I hope this will help you; I suggest you these keywords for a simple web research:
best compromise solution, decision theory, choosing among Pareto alternatives.
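The two criteria can be sketched numerically, reusing the example objective values from this answer:

```python
import numpy as np

# Pareto front of 4 illustrative solutions (rows), 2 objectives to minimize.
# Rows 0-1 are the Laplace tie example, rows 2-3 the Wald tie example.
front = np.array([[10.0, 0.0],
                  [5.0, 5.0],
                  [8.0, 0.0],
                  [8.0, 7.0]])

laplace_scores = front.mean(axis=1)       # expected value per solution
wald_scores = front.max(axis=1)           # worst objective per solution

best_laplace = np.argmin(laplace_scores)  # ties broken by first index
best_wald = np.argmin(wald_scores)
```

As claimed, rows 0 and 1 tie under Laplace (mean 5 each), and rows 2 and 3 tie under Wald (worst objective 8 each).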
Question
In recent years, several random-search-oriented methods have been developed for continuous multi-extremal optimization. These include Simulated Annealing and genetic algorithms (implemented in the MATLAB Global Optimization Toolbox), the Ant Colony method, the Cross-Entropy method (see, e.g., https://www.researchgate.net/publication/225551595_The_Cross-Entropy_Method_for_Continuous_Multi-Extremal_Optimization ), etc. But many researchers still often use traditional gradient-based methods, such as Newton-Raphson. The main drawback of these methods is that, by their nature, they do not cope well with optimization problems that have non-convex objective functions and/or many local optima: results depend essentially on the selection of the initial point. So please explain to me: why don't researchers use random-search-oriented methods?
Ales:  I remember my first algorithms course at my university's computer science department - Hill climbing and A* were the most popular then, at least teaching-wise. I wonder if that still is the case. We then learnt heuristic branch-and-bound and rollout like techniques to code a chess program. (In LISP!) That was quite nice. Gosh - more than 30 years has passed since then!
Question
Dear all,
I am looking for algorithms that solve a sequencing problem (we need to minimize Cmax) with the following conditions:
1.       m similar machines, where each one has a different capacity Ci
2.       n jobs, where each one has a different processing time Tj and a different priority Pj
Any contribution or suggestion is more than welcome.
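One classical starting point is the Longest-Processing-Time (LPT) greedy heuristic, adapted here to machines whose differing capacities Ci are treated as speeds. This is only a hedged sketch: the priorities Pj are left out and would enter as a tie-breaker or as constraints.

```python
def lpt_uniform_machines(proc_times, speeds):
    """LPT heuristic for minimizing Cmax on uniform machines.

    Jobs are taken in decreasing processing time; each job goes to the
    machine where it would finish earliest, given that machine's speed.
    Returns (Cmax, list of job indices per machine).
    """
    loads = [0.0] * len(speeds)
    schedule = [[] for _ in speeds]
    for j in sorted(range(len(proc_times)), key=lambda j: -proc_times[j]):
        finish = [loads[i] + proc_times[j] / speeds[i] for i in range(len(speeds))]
        i = min(range(len(speeds)), key=lambda i: finish[i])
        loads[i] = finish[i]
        schedule[i].append(j)
    return max(loads), schedule

# Two identical machines, four jobs: LPT reaches the optimum Cmax = 5.
cmax, schedule = lpt_uniform_machines([4.0, 3.0, 2.0, 1.0], [1.0, 1.0])
```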
Thank you very much, I will read them and let you know.
Question
Let's say the initial population in NSGA-II is i.i.d. and uniformly distributed. Has anyone done research on what we can say about the distribution after k iterations of NSGA-II? The individuals are surely no longer i.i.d., but are there asymptotic results for large population sizes?
Dear Ernst,
You can utilize the following codes if you use MATLAB as follows:
before merging the population at the end of algorithm, please type "Stop time;"
and save the initial population in another Pop, e.g. Pop2.
Yours,
Niknamfar
Question
I am working on this project currently and I am not able to find any flowchart/algorithm for the Dial-a-Ride problem in order to code it in MATLAB. Please share any theses/papers/literature reviews if you have any.
Why don't you try to evolve the sequence of operators? If you are interested, contact me.
Question
I am using GAMS to solve a MILP problem which includes binary variables. However, there is a problem in the solution. Surprisingly, I have seen that one of the binary variables has a value of "-1" in the solution, and another one "2". That is neither acceptable nor logical. I do not know what happened. GAMS gives me the message: primal infeasible, "resolved with fixed discrete variables".
The code is attached.
Your results of course depend on the solver you used. There is nothing wrong with GAMS as far as I see - have a look at the attached files, where I solved your model with BARON as the MINLP solver framework, giving it an options file to have a smaller relative gap than the default 0.01.
You'll see in the lst file that the binary variables take binary values in the solution.
So what solver did you use? Maybe the solver is buggy...these are done by humans, too. To err is human ;-)
Question
Hi,
I am working on optimization approaches for an intersection, and I have been getting some questionable results in the simulator: high vehicle throughputs at the intersection but with a large average vehicle delay. For example:
In one scenario I get a throughput of 4544 with an average delay of 31.22 seconds and an average queue length per phase of 41 meters.
In another scenario I get a throughput of 4551 with an average delay of 31.26 seconds and an average queue length per phase of 39.68 meters.
Compared with the reference study of the same intersection, we see different results, e.g. in one scenario a throughput of 4343 with an average delay of 30.7 seconds and an average queue length per phase of 21.1 meters.
I am starting to question my results. Doesn't a higher throughput mean a smaller average delay and shorter queues on the lanes?
There is a mathematical theorem linking some of these quantities: Little's law for queuing systems, L = λW: the average queue length equals the average arrival rate times the average waiting time in the system.
It means you should not diverge too much from that without clearly violating at least one hypothesis. Usual deviations are:
• bad definition/calculation of quantities (throughput...)
• using the wrong quantities (what is the delay?)
• your system is not stationary (i.e. the simulation is not long enough or you don't cut the first part that is usually very non-stationary)
• you do not compare the same quantities.
Then, I can discuss more about interpreting results, because cooperative intersection is a topic I have highly investigated. But before going to theory (optimization criteria, mathematical tools, hypotheses...), check the simple issues I have mentioned.
Best regards,
Arnaud
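Little's law can be applied directly to the reported numbers as a sanity check (assuming throughput is in vehicles per hour and delay in seconds; note the reported queue lengths are in meters per phase, so units must be reconciled before comparing):

```python
# Little's law: L = lambda * W. Numbers from the first scenario above.
arrival_rate = 4544 / 3600.0       # vehicles per second, from 4544 veh/h
mean_wait = 31.22                  # average delay in seconds
L = arrival_rate * mean_wait       # implied average number of vehicles
                                   # simultaneously in the system (~39.4)
```

Converting that vehicle count to meters (times an average vehicle spacing, divided over phases) is what would let you compare it with the 41 m per-phase queue figure.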
Question
My problem is like this: three objective functions \sum_{i=1,6;j=1,3} C_{i,j} x_i^{-2} and six constraints like \sum_{i=1,6;j=1,6} D_{i,j} x_i^{-2} = a_{i,j}. In both cases j gives the number of equations. How can I maximize the three objective functions simultaneously?
There are several possibilities. The simplest one would be to minimize the normalized Tchebyshev distance from the ideal solution over the feasible space using any numerical nonlinear programming method. Did you understand me?
Best wishes,
Question
I have used the epsilon-constraint method as well as the weighted-sum method to find Pareto points for a bi-objective model; in this case I found the same results, the same number of Pareto points.
Can we claim that our bi-objective model is not non-convex if the results from the epsilon-constraint method and the weighted-sum method are the same?
Then, I think that, in general, just looking at the efficient solutions is not sufficient.  Suppose you have just two continuous variables, x and y, and that your feasible region is defined by the following two quadratic constraints:
x^2  + y^2 <= 1, (x+1)^2 + y^2 >= 1.
This feasible region is non-convex, but if your objectives are max x and max y, then the weighted sum and epsilon-constraint methods will give the same efficient frontier, namely, the region:
(x,y): 0 <= x <= 1, 0 <= y <= 1, x^2 + y^2 = 1.
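This example can be checked numerically by sampling the quarter circle and testing feasibility in both quadratic constraints plus pairwise non-dominance (the sampling density and tolerances below are my choices):

```python
import numpy as np

# Sample the claimed efficient frontier x^2 + y^2 = 1 with x, y >= 0.
theta = np.linspace(0.0, np.pi / 2, 50)
pts = np.column_stack([np.cos(theta), np.sin(theta)])

inside_disc = (pts ** 2).sum(axis=1) <= 1.0 + 1e-9
outside_hole = ((pts[:, 0] + 1.0) ** 2 + pts[:, 1] ** 2) >= 1.0 - 1e-9
feasible = inside_disc & outside_hole

def dominates(a, b):
    """For (max x, max y): a dominates b if >= in both, > in one."""
    return np.all(a >= b) and np.any(a > b)

nondominated = all(not dominates(p, q)
                   for p in pts for q in pts if not np.array_equal(p, q))
```

Every sampled point is feasible for both constraints, and no point dominates another, consistent with the whole arc being the efficient frontier.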
Question
I'm solving an optimization problem with two heterogeneous objective functions. The first one is a mixed-integer linear function and the second one is linear-fractional. How to combine these two functions and obtain a single objective function?
Hi, if you have two different objective functions to minimize at the same time, you fall into the framework of multi-objective optimization. You can, for example, have a look at the following survey on this topic:
Question
Would you say the upper bound obtained in Dantzig-Wolfe decomposition is always better than the upper bound from the Lagrangian relaxation method (for minimization), and conversely for the lower bound?
Dear professor Patriksson
In this paper:  ' A new cross decomposition method for stochastic mixed-integer linear programming Emmanuel Ogbe, Xiang Li' said that in mixed-integer linear programming:
" The DWD restricted master problem DWRMP provides a rigorous upper bound for Problem (P) , while the restricted Lagrangian dual RLD does not. Actually, according to Van Roy (1983) , RLD is a dual of DWRMP. On the other hand, DWPP is similar to LS and either one can provide a cut to BRMP (according to the discussion in the previous section). Therefore, using DWD instead of Lagrangian decomposition in the cross decomposition framework is likely to achieve better convergence rate."
Would you say the upper bound obtained in Dantzig-Wolfe decomposition is always better than the upper bound from the Lagrangian relaxation method, and conversely for the lower bound?
Best Regards
Question
Solving a bi-level model is very hard. Generally, duality, decomposition and evolutionary algorithms are used for solving it.
It is absolutely possible; you can check these works on your question:
1- Metaheuristics for Bi-level Optimization
2- Sinha, A., Malo, P., & Deb, K. (2017). Evolutionary Bilevel Optimization: An Introduction and Recent Advances. In Recent Advances in Evolutionary Multi-objective Optimization (pp. 71-103). Springer International Publishing.
1- Genetic algorithm based approach to bi-level linear programming
2- A hybrid of genetic algorithm and particle swarm optimization for solving bi-level linear programming problem–A case study on supply chain model
3- I also recommend this work:
A parameterised complexity analysis of bi-level optimisation with evolutionary algorithms, MIT, 2016
Corus, D., Lehre, P. K., Neumann, F., & Pourhassan, M. (2016). A parameterised complexity analysis of bi-level optimisation with evolutionary algorithms. Evolutionary computation, 24(1), 183-203.
Question
I have a convex problem that is solved correctly for the first 50 iterations, but after that the solution becomes inaccurate. How can I resolve the problem?
A remark might be added to this: if you have a convex problem, as you state, WHY do you use heuristics? If you know that the problem is convex, you probably have an expression for the objective function and the constraints (if any), so you should be able to solve it accurately by convex optimization methods, for example by utilising (sub-)gradients.
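As a toy illustration of the subgradient route (the objective f(x) = |x − 3| and the box [0, 10] are my own example, not from the question):

```python
# Projected subgradient method for a convex nonsmooth function,
# here f(x) = |x - 3| constrained to x in [0, 10].
def subgradient_descent(x0=0.0, iters=2000):
    x = x0
    for k in range(1, iters + 1):
        # A subgradient of |x - 3|: the sign of (x - 3), 0 at the kink.
        g = 1.0 if x > 3.0 else (-1.0 if x < 3.0 else 0.0)
        x -= g / k                           # diminishing step size 1/k
        x = min(max(x, 0.0), 10.0)           # projection onto [0, 10]
    return x

x_star = subgradient_descent()               # approaches the minimizer 3
```

With the standard diminishing step rule, convergence to the minimizer is guaranteed for convex problems, with no tuning of heuristic parameters.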
Question
For both Bayesian and Frequentist expected loss, is the parameter an index of the data to which to make decisions on, or a state of nature?
Are there examples where a loss function is mapped using a vector of real observations to show what the parameter looks like?
Trying to understand your question.  Since the possibilities/models are infinite in number, parametrization cannot truly represent a state of nature but only help model a certain phenomenon (or part of it) and the loss function (in supervised methods) tests the goodness of fit of that model. The model of nature that one puts together can perform well or poorly.  The parameters of the model need to be viewed accordingly.
The goal of all statistical methods (Bayesian or Frequentist) is to find a model that can adequately represent a phenomenon (represented by a set of available data -- real or simulated) and use that to make logical predictions.
My sense here is that you're asking if that data can be used for bump hunting.  All unsupervised methods try to answer that very question and there are a vast number of techniques available for that, some better at certain things than others.  In these methods the term loss function is at a loss since the error functions used need to be seen more as frames of judgment for the fit.
But even for supervised methods, one can use non-parametric methods that use the available data in different ways to establish the probability distribution, which might be the other answer that you're looking for.
Question
I would like to write a multi-objective function for optimizing several variables. However, I would like only certain objective functions to affect certain optimization variables. For example, for a multi-objective function F = [f1 f2 f3] and optimization variables x = [x1 x2 x3], I would like x1 to be affected by f1 but not f2 or f3, etc. Optimization of all the variables should be performed together, and I cannot assume that, for example, x2 and x3 are constant while I optimize for x1. Is there an optimization algorithm that can handle this problem?
Thanks,
I am answering this question with respect to linear optimization.
A multi-objective formulation takes more than one objective function. Let us say we want to maximize contribution (to profit) and maximize volume (in number of units) for a multi-product company. It is then necessary to set goals, so that we know how much weight we want to give each goal. So in a multi-objective framework it is necessary to assign the right weights to the goals.
The slopes of some of the variables, i.e. the coefficients, will be different.
Let us say the objective function is = w1 f1(X) + w2 f2(X)
= w1 (C11 X1 + C12 X2 + ... + C1n Xn) + w2 (C21 X1 + ... + C2n Xn)
Given this, the only requirement is how to assign w1 and w2, not the C vectors.
Here we must understand that:
1) Should we solve for one goal or more than one goal? The optimal solution for profit maximization will not be the same as the solution for utilization maximization.
2) The optimal solution for profit maximization will be different from that of cost minimization.
3) The coefficients are generally provided as parameters.
In my opinion, finding the weights (w1 and w2) is more important than finding the C coefficients.
I hope this helps.
Goutam Dutta
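The weighted-sum idea above can be sketched with an LP solver; all numbers here are illustrative assumptions, and the two single-goal solves demonstrate point 1 (different goals give different optima):

```python
import numpy as np
from scipy.optimize import linprog

# Weighted-sum scalarization of two linear objectives:
# maximize w1*(c1 @ x) + w2*(c2 @ x)  subject to  A x <= b, x >= 0.
c1 = np.array([3.0, 1.0])      # "contribution" per unit of each product
c2 = np.array([1.0, 3.0])      # "volume" weight per unit
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([4.0, 4.0, 6.0])

def solve_weighted(w1, w2):
    c = -(w1 * c1 + w2 * c2)   # linprog minimizes, so negate
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
    return res.x

x_profit = solve_weighted(1.0, 0.0)   # contribution-only optimum
x_volume = solve_weighted(0.0, 1.0)   # volume-only optimum
```

Here the two single-goal optima land on different vertices, so any intermediate (w1, w2) trades one goal against the other, which is exactly why choosing the weights matters.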
Question
What is the use of xmax in Quantum-integrated Particle Swarm Optimization (QPSO)? How can I define xmax in QPSO? What is its significance in QPSO?
Sorry, I don't get what you actually mean by 'I am not sure about the coding publicly shared by Mr. Zhifei Li under the TSP_QPSO title (whether it is Quantum-PSO or not).' I am not talking about his code, am I?
Question
In multi-objective optimization, when I change the value of one constant in the objective function, I get a Pareto front for some constants, while for another set of constants I do not get any Pareto front (the front collapses to a point). Is this possible?
Greeting Maruti,
It is definitely possible. This happened to me on a real-world problem, where for some inputs the solution was a single point on the Pareto front. This happens when the objectives aren't independent.
Best Regards,
Miha
Question
I have a symbolic integral with symbolic parameters (x(1), x(2), x(3), t). I am trying to fit this symbolic integral to experimental data and obtain those parameters (x(1), x(2), x(3)). I tried to use fminsearch or fmincon; however, I could not use them and I faced different errors, for example: "is not a valid MATLAB expression, has non-scalar coefficients, or cannot be evaluated: Error while trying to evaluate FITTYPE function." I have attached my code. Would you please help me to solve this error?
Thanks
clear all;
clc;
%%
beta=0.002;
alfa=0.004;
nu=0.49;
del=0.010;
t0=1.35;
syms eta G1 G2 A s t taw
x=sym('x',[1 3]);
p1=(eta)/(G1+G2);
q1=(2*G1*eta)/(G1+G2);
q0=(2*G1*G2)/(G1+G2);
B1=(2*G1*(1+nu))/(3*(1-2*nu));
B2=(2*G2*(1+nu))/(3*(1-2*nu));
B3=(2*eta*(1+nu))/(3*(1-2*nu));
q2=3*B1*B2/(B1+B2);
q3=B3/(B1+B2);
q4=3*B1*B3/(B1+B2);
Pc1=1+p1*A;
Qc1=q0+q1*A;
Pc2=1;
Qc2=q2;
f1=Pc1*Qc2*Pc1+2*Pc1*Pc2*Qc1;
c1 = coeffs(f1, A);
c1=simplify(c1);
f2=2*Pc1*Qc1*Qc2+Qc1*Pc2*Qc1;
c2=coeffs(f2,A);
c2=simplify(c2);
GG2=ilaplace((4*beta/(3*t0*sqrt(alfa)))*del*(c2(1,3)*s^2+c2(1,2)*s+c2(1,1))/((c1(1,3)*s^3+c1(1,2)*s^2+c1(1,1)*s)), t);
GGs2=subs(GG2, t, t-taw);
GGss2=subs(GGs2, {G1,G2,eta}, {x(1),x(2),x(3)});
assume(x(1) > 0)
assume(x(2) > 0)
assume(x(3) > 0)
assume(x(1),'real')
assume(x(2),'real')
assume(x(3),'real')
force2=int(GGss2*diff(taw^1.5,taw),taw,0,t0,'IgnoreAnalyticConstraints',true);
t0=[2 12 22 32 42 52 62 72 82 92 102 112];
F0=[0.77618 0.7259 0.70212 0.7011 0.69315 0.69324 0.67682 0.67658 0.67618 0.67669 0.67623 0.66831];
B2 = simplify(force2);
F2 = matlabFunction(B2,'vars', [{x(1),x(2),x(3)},t]);
%---------------------------------Error------------------------------------
funcfit1=fittype(F2,'indep','t','coefficients', {'x1','x2','x3'});
%--------------------------------------------------------------------------
F22=subs(numinteg, t, t0);
% fminsearch algorithm
fun = sum((F0-F22).^2);
%starting guess
pguess = [1000,1000,1000];
%optimise
[p,fminres] = fminsearch(fun,pguess)
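A language-agnostic way around this class of error is to evaluate the model purely numerically and hand the optimizer a plain sum-of-squares callable (in MATLAB terms: run matlabFunction first, then pass a function handle, not a symbolic expression, to fminsearch). Here is a hedged Python sketch using the question's data but a hypothetical three-parameter stand-in model; the symbolic integral itself is not reproduced:

```python
import numpy as np
from scipy.optimize import minimize

# Experimental data from the question (t0 and F0 above).
t_data = np.array([2, 12, 22, 32, 42, 52, 62, 72, 82, 92, 102, 112], float)
F_data = np.array([0.77618, 0.7259, 0.70212, 0.7011, 0.69315, 0.69324,
                   0.67682, 0.67658, 0.67618, 0.67669, 0.67623, 0.66831])

def model(params, t):
    # Hypothetical stand-in relaxation curve with 3 parameters,
    # playing the role of the symbolic integral force2.
    a, b, tau = params
    return a + b * np.exp(-t / tau)

def sse(params):
    # Purely numeric objective: sum of squared residuals.
    return np.sum((F_data - model(params, t_data)) ** 2)

# Nelder-Mead is the same derivative-free simplex method as fminsearch.
res = minimize(sse, x0=[0.67, 0.1, 10.0], method="Nelder-Mead")
```

The key point is that `sse` is an ordinary numeric function of the parameter vector; the symbolic machinery is used only once, up front, to generate it.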