Science topic

Optimization - Science topic

Explore the latest questions and answers in Optimization, and find Optimization experts.
Questions related to Optimization
  • asked a question related to Optimization
Question
2 answers
In His name is the judge
Hi,
I want to learn multi-objective optimization with NSGA-II in Python for my research.
Please recommend a good source for learning NSGA-II in Python.
Wish you the best.
Take refuge in the right.
Relevant answer
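While waiting for source recommendations, the core mechanism of NSGA-II, non-dominated sorting, can be sketched in a few lines of plain Python (a simplified illustration of the sorting step only, not the full algorithm with crowding distance and genetic operators):

```python
def dominates(p, q):
    """True if objective vector p Pareto-dominates q (minimization)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(points):
    """Partition objective vectors into Pareto fronts (front 0 is best)."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # A point belongs to the current front if nothing left dominates it.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Two objectives to minimize; (1, 1) dominates (2, 2) but not (0.5, 3).
pts = [(1, 1), (2, 2), (0.5, 3), (3, 0.5)]
print(non_dominated_sort(pts))  # [[0, 2, 3], [1]]
```

In practice, libraries such as pymoo provide a ready-made NSGA-II implementation on top of exactly this idea.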
  • asked a question related to Optimization
Question
2 answers
I have been running a GA optimization using "gamultiobj" in MATLAB. The lower and upper bounds of my design variables are [10 30 175 1] and [30 60 225 3]. But after convergence, all of the design variables sit near 20, 47, 175 and 1, and I am not getting a Pareto front. What could be the possible reasons for that?
Relevant answer
Answer
Can you please elaborate on your objective functions?
  • asked a question related to Optimization
Question
9 answers
I am using a hybrid optimization algorithm (Grey Wolf + Cuckoo Search) to find the optimal size of hybrid renewable energy system based on Total Net Present Cost of the system. Details of Optimization Problem are as follows:
Objective Function: TNPC [$]: f(number of components) = f(N_pv, N_wt, N_bat, N_inv)
Lower Bound: [1 1 1 1] Upper Bound: [1000 1000 100 100]
Constraints: 1. System size is within allowed min. and max. system size. 2. battery SOC remains within allowed limit
The algorithm does end up with the least system cost; however, the optimal size of the system comes out equal to the lower-bound values, i.e. optimal size of system: [1 1 1 1].
Optimization Algorithm Convergence Curve is attached.
Relevant answer
Answer
Steftcho P. Dokov Thank You, I will try that.
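For readers hitting the same symptom: one common culprit is constraint handling. If undersized (infeasible) candidates are not penalized, the cheapest system, i.e. the lower bound, always wins. A minimal penalty-function sketch in Python (the cost, output, and demand numbers are made up for illustration, not the poster's actual model):

```python
def penalized_cost(n, unit_cost, unit_output, demand, penalty=1e6):
    """Total system cost plus a large penalty for unmet demand."""
    cost = sum(c * x for c, x in zip(unit_cost, n))
    supply = sum(o * x for o, x in zip(unit_output, n))
    shortfall = max(0.0, demand - supply)
    return cost + penalty * shortfall

unit_cost = [500, 800, 200, 300]    # hypothetical $ per PV, WT, battery, inverter
unit_output = [1.0, 2.0, 0.0, 0.0]  # hypothetical kW contribution per unit
demand = 50.0                       # hypothetical kW demand

# Without the penalty term, [1, 1, 1, 1] is cheapest; with it, undersized
# systems lose to a properly sized one.
small = penalized_cost([1, 1, 1, 1], unit_cost, unit_output, demand)
sized = penalized_cost([30, 10, 5, 2], unit_cost, unit_output, demand)
print(small > sized)  # True
```

If the convergence curve flattens immediately at the lower bound, checking how the algorithm treats constraint violations is usually the first diagnostic.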
  • asked a question related to Optimization
Question
3 answers
I've been using GMAT in my free time to optimize trajectories, and have varied burn component values and spacecraft states, usually with success. The vary command in GMAT, with the Yukon optimizer that I am using, has the following parameters that can be changed:
  • Initial value: The initial guess. I know the gradient descent optimization method that GMAT uses is very sensitive to initial conditions and so this must be feasible or reasonably close.
  • Perturbation: The step size used to calculate the finite difference derivative.
  • Max step: The maximum allowed change in the control variable during a single iteration of the solver.
  • Additive scale factor: Number used to nondimensionalize the independent variable. This is done with the equation xn = m (xd + a), where xn is the non-dimensional parameter, xd is the dimensional parameter and this parameter is a.
  • Multiplicative scale factor: Same as above, but it is the variable m in the equation.
For the initial value, I can usually see when my chosen value is feasible by observing the solver window or a graphical display of the orbit in different iterations. The max step is the most intuitive of these parameters for me, and by trial and error, observation of the solver window and how sensitive my target variables are to changes in the control variables I can usually get it right and get convergence. It is still partially trial and error though.
However, I do not understand the effect of the other parameters on the optimization. I read a bit about finite difference and nondimensionalization/rescaling, and I think I understand them conceptually, but I still don't understand what values they have to be to get an optimal optimization process.
This is especially a problem now because I have started to vary epochs (TAIModJulian usually) or time intervals (e.g. "travel for x days" and find optimal x, or to find optimal launch windows), and I cannot get the optimizer to vary them properly, even when I use a large step size. The optimizer usually stays close to the initial values, and eventually leads to a non-convergence message.
I have noticed that using large values for the two scale factors sometimes gives me larger step sizes and occasionally what I want, but it's still trial and error. As far as perturbation goes, I do not understand its influence on how the optimization works. Sometimes for extremely small values I get array dimension errors, sometimes for very large values I get similar results to if I'm using too large a max step size, and that's about it. I usually use 1e-5 to 1e-7 and it seems to work most of the time. Unfortunately information on the topic seems sparse, and from what I can tell GMAT's documentation uses different terminology for these concepts than what I can find online.
So I guess my question is two-fold: how to understand the optimization parameters of GMAT and what they should be in different situations, and what should they be when I want GMAT to consider a wide array of possible trajectories with different values of control variables, especially when those control variables are epochs or time intervals? Is there a procedure or automatic method that takes into account the scale of the optimization problem and its sensitivity, and gives an estimate of what the optimization parameters should be?
Relevant answer
Answer
I am not sure exactly what you are looking for, but you could use automatic differentiation libraries such as CasADi or YALMIP. I prefer CasADi because it is easy to use: you can simply define your objective function and your constraints.
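On the perturbation parameter specifically: in finite-difference gradients there is a trade-off between truncation error (step too large) and floating-point cancellation (step too small), which is why mid-range values like 1e-5 to 1e-7 tend to work. A quick generic illustration in Python (plain calculus, not GMAT-specific):

```python
import math

def forward_diff(f, x, h):
    """Forward finite-difference approximation of f'(x) with step h."""
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)  # derivative of sin at x = 1
for h in (1e-1, 1e-6, 1e-14):
    err = abs(forward_diff(math.sin, 1.0, h) - exact)
    print(f"h = {h:>6g}  error = {err:.2e}")
# The error is smallest for the mid-range step: a large h suffers truncation
# error, a tiny h suffers round-off cancellation.
```

Rescaling the variables (the two scale factors) serves the same goal: it puts the control variable on a scale where one perturbation step is neither negligible nor enormous relative to the solver's tolerances.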
  • asked a question related to Optimization
Question
6 answers
I have designed an optimization experiment using the Box-Behnken approach.
What should I do if one of the factor combinations fails, for example because aggregation occurs?
Should I redo the whole optimization, or is there a method to skip that particular factor combination?
And if I need to redo the whole experiment, what method should I use to evaluate the boundary factor values? The screening methods I have seen require at least 6 factors.
Any help is appreciated.
Greetings.
  • asked a question related to Optimization
Question
6 answers
I have generated a 16-variable probability distribution in the form of a 16-dimensional NumPy array in Python. How can I determine all the peaks of this function in Python or using some software?
Relevant answer
Answer
Thank you all for the clarifications
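For readers landing here later: in one dimension a "peak" is simply a point larger than its neighbours, and scipy.signal.find_peaks automates this; in 16 dimensions each grid cell must instead be compared against all of its neighbours. A 1-D sketch in plain Python:

```python
def local_maxima(y):
    """Indices of interior points strictly larger than both neighbours."""
    return [i for i in range(1, len(y) - 1) if y[i] > y[i - 1] and y[i] > y[i + 1]]

y = [0.1, 0.5, 0.2, 0.3, 0.9, 0.4, 0.4]
print(local_maxima(y))  # [1, 4]
```

For the full 16-D array the same idea generalizes via a neighbourhood comparison along every axis, e.g. with scipy.ndimage.maximum_filter.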
  • asked a question related to Optimization
Question
2 answers
Hello everyone, I am looking for a good MPC quadratic optimization mathematical model to optimize a cost function or performance index, for a battery energy storage state space model. Would anybody suggest a good research paper or post a formulation that contains a good mathematical model for quadratic optimization? An objective function with viable constraints, which can be possible to implement in function solvers such as quadprog or cplex would be ideal. Thank you
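For reference, a generic finite-horizon MPC cost for a linear state-space model $x_{k+1} = A x_k + B u_k$ (a standard textbook formulation, not tied to any specific battery paper) is the quadratic program:

```latex
\begin{aligned}
\min_{u_0,\dots,u_{N-1}} \quad & \sum_{k=0}^{N-1}\left( x_k^\top Q x_k + u_k^\top R u_k \right) + x_N^\top P x_N \\
\text{s.t.} \quad & x_{k+1} = A x_k + B u_k, \quad k = 0,\dots,N-1, \\
& u_{\min} \le u_k \le u_{\max}, \qquad x_{\min} \le x_k \le x_{\max}.
\end{aligned}
```

After eliminating the states using the dynamics, this collapses to a dense QP of the form $\min_u \tfrac{1}{2} u^\top H u + f^\top u$ subject to linear inequalities, which is exactly the shape quadprog or CPLEX accepts. For a battery model, the state bounds typically encode the state-of-charge limits.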
  • asked a question related to Optimization
Question
17 answers
I want to compare two theorems and see which one has the larger feasibility domain, like the attached picture.
For example, I have the following matrices: A1 = [-0.5 5; a -1]; A2 = [-0.5 5; a+b -1]. They are functions of 'a' and 'b'. I want to plot, over 'a' and 'b', the points where an LMI is feasible, for example the following LMI: Ai'*P + P*Ai < 0.
Then I want to plot the points where another LMI is feasible, for example:
Ai'*Pi + Pi*Ai < 0
I have seen similar maps in many articles where the authors demonstrated that one LMI is better than another because it is applicable for more pairs (a, b).
Relevant answer
Answer
Usually, these problems are easily solved by YALMIP toolbox in MATLAB.
Here, I write the following pseudocode for your problem:
It can be solved by defining two 'for-loop' as follows:
for min(a)<a<max(a)
for min(b)<b<max(b)
solve the proposed LMI-based optimization problem
if the LMI problem is feasible
figure(1)
hold on
plot(a,b,'.r')
end
end
end
OR, you can use the following sample (with the YALMIP syntax corrected, and using the matrices from your question):
for a = 0:1:15
for b = -12:1:0
yalmip('clear')
% Model data
A1 = [-0.5 5; a -1];
A2 = [-0.5 5; a+b -1];
P1 = sdpvar(2);
P2 = sdpvar(2);
con1 = P1 >= 1e-6*eye(2);
con2 = P2 >= 1e-6*eye(2);
con3 = A1'*P1 + P1*A1 <= -1e-6*eye(2);  % YALMIP does not allow strict inequalities
con4 = A2'*P2 + P2*A2 <= -1e-6*eye(2);
constraints = [con1, con2, con3, con4];
opt = sdpsettings('solver','mosek','verbose',0);
sol = optimize(constraints, [], opt);   % feasibility problem, no objective
if sol.problem == 0                     % feasible for this (a,b) pair
figure(1)
hold on
plot(a,b,'.k','MarkerSize',5)
end
end
end
  • asked a question related to Optimization
Question
3 answers
I am trying to code this optimizer for a linear regression model. What I want to confirm is: do the updates of the model parameters happen even if they increase the cost function?
Or do we only update the coefficient values if they decrease the value of the cost function?
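To make the distinction concrete: plain gradient descent always applies the update, and the cost can increase if the learning rate is too large; only variants with a line search or backtracking explicitly reject cost-increasing steps. A tiny illustration on the cost (w - 3)^2:

```python
def gd_step(w, lr):
    """One plain gradient-descent step on cost(w) = (w - 3)**2."""
    grad = 2 * (w - 3)
    return w - lr * grad  # applied unconditionally, whatever it does to the cost

cost = lambda w: (w - 3) ** 2

w0 = 0.0
print(cost(gd_step(w0, 0.1)) < cost(w0))  # True: small step decreases the cost
print(cost(gd_step(w0, 1.1)) > cost(w0))  # True: oversized step increases it
```

So yes: the vanilla update rule does not check the cost at all; monotone decrease is only guaranteed for a sufficiently small learning rate.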
  • asked a question related to Optimization
Question
2 answers
I have a new idea (a combination of a well-known SDP formulation and a randomized procedure) for an approximation algorithm for the vertex cover problem (VCP) with a performance ratio of $\rho < 2 - \epsilon$ on arbitrary graphs.
Here, I am going to clarify the main points of the idea. I would be grateful if anyone could identify potential issues or offer informative suggestions.
You can see the last version of my paper in this open access site
https://vixra.org/abs/2107.0045 with a performance ratio of $1.999999$
https://vixra.org/abs/2202.0143 with a performance ratio of $1.885903$
It can be natural to reject new ideas right away. Yet, instead of immediate judgments and using negative words, it is better to use positive language. Even ideas that seem implausible can turn into outstanding innovations upon further exploration and development.
The Idea:
First of all, we prove that,
I. If the optimal value of the VCP is greater than $(n/2)+(n/k)$ then $\rho < (2k)/(k+2)$, and
II. If we can produce a feasible solution with objective value smaller than $(kn)/(k+1)$ then $\rho < (2k)/(k+1)$.
Hence, to obtain a performance ratio of $2 - \epsilon$ on arbitrary graphs, it is sufficient to produce a feasible solution with a suitably small objective value, or to prove that the optimal value is greater than a suitable fixed value.
Therefore, we solve the well-known SDP relaxation proposed by Kleinberg and Goemans(1998). Note that, I know for sure that just by solving any SDP formulation, we cannot approximate the VCP with a performance ratio better than 2-o(1).
Then, let $V_{-1}=\{j: V_0V_j < 0\}$, and $V_1=V-V_{-1}$ which is a feasible solution for the VCP.
If $|V_{-1}| > 0.0625n$ then $|V_1| < 0.9375n= 15n/16$ and we have (based on II) $\rho < (2\times 15)/16 < 1.885903$.
Else, let $A=\{j: V_0V_j > 0.4\}$.
If $|A| > 0.3075n$, then, we can show that the optimal value of the VCP is greater than $(n/2)+(0.03025n)$ and we have (based on I) $\rho < (2k)/(k+2) < 1.885903$, where $k=1/0.03025$.
Else, let $G_{0.4}=\{j: 0 <= V_0V_j <= 0.4\}$, where based on above results we know that $|G_{0.4}| > (1-0.0625-0.3075)n= 0.63n$.
Now, it is sufficient to introduce a suitable feasible solution based on $G_{0.4}$.
To do this, we prove that for any normalized vector $w$, the induced subgraph on $H_w=\{j: |wV_j| > 0.700001\}$ is a bipartite graph and as a result,
if $|H_w| > 0.118472n$ then we can produce a feasible solution with objective value smaller than $(1-0.118472/2)n= 0.940764n < 16n/17$, and a performance ratio of $\rho < (2\times 16)/17 < 1.885903$.
Finally, to produce such a normalized vector $w$, we show that, by introducing two random vectors $u$ and $w$, one of the sets $H_u$ or $H_w$ has more than $0.118472n$ members, and as a result we can produce a suitable feasible solution based on $G_{0.4}$.
Therefore, we could introduce an approximation ratio of $\rho < 1.885903$ on arbitrary graphs, and, based on the proposed $1.885903$-approximation algorithm for the VCP, the unique games conjecture is not true.
#Combinatorial Optimization
#Computational Complexity Theory
#Unique Games Conjecture
  • asked a question related to Optimization
Question
7 answers
In the optimization of gas turbine cycle, many researchers have used isentropic efficiency of gas turbine and air compressor as decision variables. Even I did the same. But recently while submitting a paper I got one comment from the reviewer which really made me think.
The reviewer comment:
"AC and GT isentropic efficiency are used as optimization parameters. Are these easily controllable metrics? The other metrics (pressure ratio and temperatures) are but I wonder about the isentropic efficiencies."
How should I justify?
Relevant answer
Answer
No, isentropic efficiencies are not suitable optimization parameters; there is not much control over them, and they do not vary over a wide range. They depend on the design of the compressor and turbine. Rather, you should treat them as external parameters and concentrate on heat exchanger (regenerator) efficiency, maximum cycle temperature, two- or three-stage compression with intercooling, the reheating point, pressure ratio, etc.
  • asked a question related to Optimization
Question
7 answers
Hi. I am going to optimize the layout design of a satellite with Abaqus and Isight. I designed and analyzed the Abaqus model, which is attached below. Now I want to import my model into Isight to optimize the satellite, but I face a big obstacle: Abaqus should pass the reference points and constraint points to Isight to optimize the parts' locations, but there is nothing like RF points or constraint points available. I could not find a way to solve this problem. Could someone help me figure it out?
Relevant answer
Answer
Thank you for sharing the finite element model. Do you consider additional constraints such as natural frequencies, moments of inertia, center of gravity?
  • asked a question related to Optimization
Question
6 answers
How does one optimize a set of data comprised of 3 input variables and 1 output variable (all numerical) using a genetic algorithm? How can I create a fitness function? How is a population selected for this form of data? What will the GA result look like; will it be in the form of 3 inputs and 1 output?
I do understand how the GA works; however, I am confused about how to execute it with the form of data that I have.
My data is structured as follows, just for better understanding:
3 columns of input data and a fourth column of output data (4 columns and 31 rows). The 3 input variables are used to predict the fourth variable. I want to use the GA to improve the prediction results.
Lastly, can I use decimal numbers (e.g. 0.23 or 24.45), or should I always use whole numbers for chromosomes?
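As a sketch of how such data could be wired into a GA: each chromosome holds real-valued model coefficients (so yes, decimals are fine), and the fitness is the prediction error over the 31 rows. A minimal real-coded GA in Python with a made-up linear model and synthetic data standing in for the actual 4-column dataset (illustrative only):

```python
import random

random.seed(0)

# Synthetic stand-in for the 31x4 dataset: y = 2*x1 - x2 + 0.5*x3
rows = [(random.uniform(0, 10), random.uniform(0, 10), random.uniform(0, 10))
        for _ in range(31)]
targets = [2 * a - b + 0.5 * c for a, b, c in rows]

def fitness(chrom):
    """Mean squared prediction error of the 3 coefficients (lower is better)."""
    return sum((sum(w * x for w, x in zip(chrom, row)) - t) ** 2
               for row, t in zip(rows, targets)) / len(rows)

def evolve(pop_size=40, generations=60, mut=0.3):
    pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                         # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]      # averaging crossover
            child = [g + random.gauss(0, mut) for g in child]  # Gaussian mutation
            children.append(child)
        pop = parents + children                               # elitism: parents kept
    return min(pop, key=fitness)

best = evolve()
print([round(g, 2) for g in best], round(fitness(best), 4))
```

The GA result here is the coefficient vector (the 3 "inputs" of the model), and the prediction for any row follows from applying those coefficients; real applications would replace the linear model with whatever predictor is being tuned.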
  • asked a question related to Optimization
Question
8 answers
When using Wasserstein balls to describe the uncertainty set in distributionally robust optimization, can multiple sources of uncertainty be considered at the same time, such as wind power and solar power forecast error?
  • asked a question related to Optimization
Question
4 answers
Hi
I'm working on research to develop a nonlinear model (e.g. exponential, polynomial, etc.) between a dependent variable (Y) and 30 independent variables (X1, X2, ..., X30).
As you know, I need to choose the variables that have the most impact on estimating Y.
But the question is: can I use the Pearson correlation coefficient matrix to choose the best variables?
I know that the Pearson correlation coefficient measures the linear correlation between two variables, but I want to use the variables for nonlinear modeling, and I don't know another way to choose my best variables.
I used PCA (Principal Component Analysis) to reduce my variables, but acceptable results were not obtained.
I used the HeuristicLab software to develop a Genetic Programming-based regression model, and R to develop a Support Vector Regression model as well.
Thanks
Relevant answer
Answer
Hello Amirhossein Haghighat. The type of univariable pre-screening of candidate predictors you are describing is a recipe for producing an overfitted model. See Frank Harrell's Author Checklist (link below), and look especially under the following headings:
  • Use of stepwise variable selection
  • Lack of insignificant variables in the final model
There are much better alternatives you could take a look at--e.g., LASSO (2nd link below). If you indicate what software you use, someone may be able to give more detailed advice or resources. HTH.
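To make the LASSO suggestion slightly more concrete: its coordinate-wise update relies on the soft-thresholding operator, which is what shrinks weak predictors exactly to zero rather than merely ranking them. A sketch of the operator only (not a full LASSO solver):

```python
def soft_threshold(z, t):
    """Soft-thresholding: the proximal operator of the L1 penalty."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

# Coefficients with magnitude below the penalty t are set exactly to zero,
# which is how LASSO performs variable selection jointly with model fitting.
print([soft_threshold(z, 1.0) for z in (3.0, 0.4, -0.2, -2.5)])  # [2.0, 0.0, 0.0, -1.5]
```

This is why LASSO-style selection avoids the overfitting trap of univariable pre-screening: the penalty decides which of the 30 predictors survive within the model fit itself.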
  • asked a question related to Optimization
Question
5 answers
I wish to extend a paper by incorporating a particular feature the authors have not used or considered. However, after going through the literature it isn't clear how much that feature matters; all I know is that it plays a very important role for the output I care about. For experimentation I am assuming a simple linear regression function ax + by, where a serves as the contribution of the paper I am extending and x its feature set; my goal is to find the parameter b (by MSE minimization), encoding the feature in the variable y, and thus determine the strength of y's effect.
However, there are some limitations. First, I am assuming the relationship is linear, which is a very broad assumption, and I am hoping to consider some kind of nonlinearity.
The question is how to proceed from here. Is there any mathematical equation I can consider as an initial assumption?
PS: Note that y is a continuous value here, not categorical.
Relevant answer
Answer
In my view scientific research is about explaining or predicting phenomena. And that is impossible without the use of models, whether they are implicit, or explicit.
And the data are only "visible" through models, whether you are aware of that or not. Very often the language of mathematics is used to generate these models. In that sense we are all descendants of Isaac Newton.
Very often I do not see any explicit model, but instead a multitude of procedures for processing raw data, and basically everything then becomes a question of curve fitting, or one-step-ahead prediction, where the curve is not known and the measure of goodness of fit is often not clear at all. And the scientists who use these models very often do not have a clue about the relation between data and model.
And people from statistics and machine learning barely speak each other's language. There is hard work to do in the university to redress this and to partly tear down this Tower of Babel. But as long as we get our papers published while nearly no one reads them, the atomization of research will continue.
And that worries me!
  • asked a question related to Optimization
Question
7 answers
I need MATLAB code to reproduce the attached research paper.
Relevant answer
Answer
Dear Anil,
In this paper, four optimisation methods were applied: GA, DE, ES, and BBO. There are many different source codes available for these methods. Which one are you most interested in using?
Yarpiz (2022). Biogeography-Based Optimization (BBO) (https://www.mathworks.com/matlabcentral/fileexchange/52901-biogeography-based-optimization-bbo), MATLAB Central File Exchange. Retrieved March 18, 2022.
  • asked a question related to Optimization
Question
4 answers
Can anyone provide me with PSO MATLAB code to optimize the weights of multi types of Neural Networks?
Relevant answer
Answer
Dear Murana Awad,
Application of PSO-BP Neural Network in GPS Height Fitting
  • asked a question related to Optimization
Question
1 answer
I am running a MARKAL model (a GAMS-based model), and a .lst file is generated. May I know what the following terms mean? I have read the literature but am still not able to make out their true meaning.
OPTION LIMROW=0, LIMCOL=0, SOLPRINT=ON, SYSOUT=OFF,
PROFILE=0, SOLVEOPT=REPLACE;
*OPTION NLP=MINOS5;
REPORT SUMMARY : 0 NONOPT
2076 INFEASIBLE (INFES)
SUM acr??
MAX EPS
MEAN 1.20424E+298
0 UNBOUNDED
What is the meaning of the columns Level and Marginal?
What is the meaning of EPS in the columns named "lower" and "upper"?
Thank You
Regards
Relevant answer
Answer
Level is the last value of a variable: before the optimization it is the initialization value, and after the optimization has run it is the result.
The marginal value is the partial derivative of the objective function with respect to a bounded variable. If you could increase the level of the variable beyond its bound by one, the objective function would increase/decrease by the marginal value.
EPS is a synonym for a small number that is not zero (if I remember correctly, it is assigned to columns that correspond to active constraints).
Regards
Markus
  • asked a question related to Optimization
Question
10 answers
I'm studying optimal placement of sensors in larger structures. Some metrics can be found like fisher information matrix, kinetic energy, effective independence, MAC, etc. But in your opinion and knowledge, which ones are the best?
Relevant answer
Answer
Dear Dr Joao Luiz Junho Pereira,
I invite you to read these articles:
A novel load-dependent sensor placement method for model updating based on time-dependent reliability optimization considering multi-source uncertainties
An adaptive sensor placement algorithm for structural health monitoring based on multi-objective iterative optimization using weight factor updating
Strategy for sensor number determination and placement optimization with incomplete information based on interval possibility model and clustering avoidance distribution index
Sensor placement algorithm for structural health monitoring with redundancy elimination model based on sub-clustering strategy
Sensor placement for structural health monitoring using hybrid optimization algorithm based on sensor distribution index and FE grids
Robust Optimal Sensor Placement for Uncertain Structures With Interval Parameters
Optimal sensor placement for deployable antenna module health monitoring in SSPS using genetic algorithm
An interval effective independence method for optimal sensor placement based on non-probabilistic approach
best regards,
Chen Yang
  • asked a question related to Optimization
Question
3 answers
(E1U) (E2G) (E2G) (B2U) (A1G) (E1U) (E1U) (E2G)
(E2G) (B2U)
Requested convergence on RMS density matrix=1.00D-08 within 128 cycles.
Requested convergence on MAX density matrix=1.00D-06.
Requested convergence on energy=1.00D-06.
No special actions if energy rises.
SCF Done: E(RB3LYP) = -13319.3349271 A.U. after 1 cycles
Convg = 0.2232D-08 -V/T = 2.0097
Range of M.O.s used for correlation: 1 5424
NBasis= 5424 NAE= 1116 NBE= 1116 NFC= 0 NFV= 0
NROrb= 5424 NOA= 1116 NOB= 1116 NVA= 4308 NVB= 4308
PrsmSu: requested number of processors reduced to: 4 ShMem 1 Linda.
PrsmSu: requested number of processors reduced to: 1 ShMem 1 Linda.
PrsmSu: requested number of processors reduced to: 1 ShMem 1 Linda.
PrsmSu: requested number of processors reduced to: 1 ShMem 1 Linda.
.
.
.
PrsmSu: requested number of processors reduced to: 1 ShMem 1 Linda.
PrsmSu: requested number of processors reduced to: 1 ShMem 1 Linda.
Symmetrizing basis deriv contribution to polar:
IMax=3 JMax=2 DiffMx= 0.00D+00
G2DrvN: will do 1 centers at a time, making 529 passes doing MaxLOS=2.
Estimated number of processors is: 3
Calling FoFCou, ICntrl= 3107 FMM=T I1Cent= 0 AccDes= 0.00D+00.
Calling FoFCou, ICntrl= 3107 FMM=T I1Cent= 0 AccDes= 0.00D+00.
CoulSu: requested number of processors reduced to: 4 ShMem 1 Linda.
Calling FoFCou, ICntrl= 3107 FMM=T I1Cent= 0 AccDes= 0.00D+00.
CoulSu: requested number of processors reduced to: 4 ShMem 1 Linda.
.
.
.
Calling FoFCou, ICntrl= 3107 FMM=T I1Cent= 0 AccDes= 0.00D+00.
CoulSu: requested number of processors reduced to: 4 ShMem 1 Linda.
Erroneous write. Write 898609344 instead of 2097152000.
fd = 4
orig len = 3177921600 left = 3177921600
g_write
Relevant answer
Answer
Hi, can you please tell me how you solved this problem?
I am actually facing the same problem now, so it would be helpful if you could share your input regarding this error. Mostafa Yousefzadeh Borzehandani
  • asked a question related to Optimization
Question
14 answers
Could you suggest some contemporary research topics in Operations Research (OR)?
In your opinion, which research topics of OR could be impactful in the next decade?
Thanks in advance.
Relevant answer
Answer
My scientific opinion on this question is that we must hybridize problems together and create new tools (mathematical or artificial) to solve them.
Big-data and large-scale problems will be the focus of many researchers over the next few years.
Try to combine big-data problems with the solution tools of operations research.
  • asked a question related to Optimization
Question
5 answers
Let us discuss the advantages, disadvantages, and use of powerful decomposition techniques like Benders decomposition for large-scale optimization. I invite my esteemed colleagues and fellow researchers to share important literature, ways of implementation, and potential application areas of decomposition algorithms in this forum.
Relevant answer
Answer
Dear R.K.,
You are right, we can't say no, because we don't know about new developments.
Probably one of the reasons why the Benders decomposition technique has not been applied to MCDM (and I share your opinion about its application) is that problems in MCDM are systems, and as such they can't be partitioned, other than for study.
Your last paragraph summarizes the same point better than my sketchy explanation, so we agree.
  • asked a question related to Optimization
Question
4 answers
How can I get MATLAB code for solving the multi-objective transportation problem and the traveling salesman problem?
Relevant answer
Answer
The link that Mohamed-Mourad Lafifi mentioned is useful. In addition, to finding code kindly check GitHub:
  • asked a question related to Optimization
Question
4 answers
In most AI research the goal is to achieve higher-than-human performance on a single objective.
I believe that in many cases we oversimplify the complexity of human objectives, and therefore I think we should perhaps step back from improving on human performance,
and rather focus on understanding human objectives first, by observing humans in the form of imitation learning while still exploring.
In the attachment I added a description of the approach I believe could enforce more human-like behavior.
However, I would like advice on how I could formulate a simple imitation learning environment to show a proof of concept.
One idea of mine was to build a gridworld simulating a traffic-light scenario: while the agent is only rewarded for crossing the street, we still want it to respect the traffic rules.
Kind regards
Jasper Busschers master student AI
Relevant answer
Answer
Interesting
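Since the existing answer is brief, here is one way the proposed gridworld could be formalized: the agent earns reward only for crossing, while rule compliance (waiting on red) is tracked separately so it can be learned from demonstrations. All names and numbers here are hypothetical, just a proof-of-concept scaffold:

```python
class TrafficLightGridworld:
    """1-D street: the agent starts at position 0 and is rewarded at position 3."""

    def __init__(self, light_period=4):
        self.pos, self.t = 0, 0
        self.light_period = light_period

    def light(self):
        # The light alternates: first half of each period green, second half red.
        half = self.light_period // 2
        return "green" if (self.t % self.light_period) < half else "red"

    def step(self, action):  # action: "wait" or "forward"
        crossed_on_red = action == "forward" and self.light() == "red"
        if action == "forward":
            self.pos += 1
        self.t += 1
        reward = 1.0 if self.pos >= 3 else 0.0  # reward deliberately ignores the rules
        return self.pos, reward, crossed_on_red

env = TrafficLightGridworld()
for action in ("forward", "forward", "wait", "wait", "forward"):
    pos, reward, violation = env.step(action)
print(pos, reward)  # the violation flag measures rule compliance of imitators
```

A reward-maximizing agent would cross on red; an agent imitating rule-following demonstrations should keep the violation count at zero, which gives a measurable gap between the stated objective and the demonstrated one.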
  • asked a question related to Optimization
Question
3 answers
I want to model an energy storage system that levels power consumption given some data. Where should I start looking to learn how to create one? And most importantly, what would be the best optimization software for modeling my problem: MATLAB optimization tools, GAMS, or Gurobi? Any suggestions would be appreciated.
Relevant answer
Answer
@Michael Short @Kehinde M. Adeleke Thank you for your suggestions. My question now is which optimization software would be best for modeling an energy storage system, for example as a mixed-integer linear program. I am a little more familiar with MATLAB, as that is all I have learned, but it would be good to know whether GAMS or other software would be more helpful.
  • asked a question related to Optimization
Question
3 answers
I am doing simulations of gait cycles in which I use a contact model to recreate ground reactions, and I try to optimize some parameters in my model to make it fit to my experimental measures.
I want to know if there is a standard for a good enough error in this particular field, or if it really depends on my application.
In other words, I want to know if there is a threshold at which I can consider my optimization adequate.
Relevant answer
Answer
Thank you for giving more clarifications of your project.
Unfortunately, I have not used this model. And I don't know a very good answer that helps you a lot. But here is what I am thinking:
In my opinion, it is very difficult to get a low relative difference between the prediction and the experimental measurement in such a complex biomechanical system (foot including soft tissue, bones on a contact interaction with a comparatively rigid body like the ground). Although the relative difference depends on how you define your validation metric, I think the difference of less than 20% is very good in this kind of experimental setup and modeling.
But in your case, if you want to use the model in the optimization, you can consider your model as a surrogate (a computationally efficient model that replaces a complicated one, like the fully meshed FE model). So it is enough for the surrogate to point out the local minimum; it does not necessarily need to predict the cost function with very high accuracy. So I suggest you build up your model with approximately good predictive capability (error of less than 20%), solve your optimization problem, and then, at the end, do the experiment with your optimized setup and show how much you reduced the cost function in your optimized structure. And more importantly, report your computational efficiency compared to the FE model (in fact, running the FE model of the foot with a contact interaction takes at least 17 hours on a computer featuring a Core i7-6700 CPU @ 3.4 GHz and 32 GB RAM; see the reference I mentioned in the previous response. So this computational efficiency would be your achievement).
I hope it would be helpful for you
  • asked a question related to Optimization
Question
3 answers
I am trying to run an interaction study between two molecules for hydrogen abstraction (the HAT mechanism). When I put the two molecules together in Gaussian at the 3-21G basis set, the HAT happens, but when I take the same input file and start optimizing at the higher 6-31G(d) basis set from the start, it does not result in H-atom abstraction. Can anyone suggest why this is so?
Relevant answer
Answer
Whatever you calculated with the 3-21 basis set - forget about it. 3-21 is big enough only for generating the initial guess. For the H-abstraction you need a basis set with extra diffuse functions assigned to H atoms and the active site atoms. So, 6-311++, at least, or aug-cc-pVTZ which would be much better. Don't use 6-31++, since it would be ridiculously overloaded with diffuse functions while not having many enough functions for inner and valence electrons.
  • asked a question related to Optimization
Question
10 answers
How can I select only 3 buses for adding capacitors out of 21 candidate buses, using an optimization scenario that does not use the loss sensitivity factor? What is the condition for this selection?
I put the capacitors in the 3 buses (out of 21 candidates) that have the maximum injected VAr, but when I run the code the results (locations, sizes, minimum power loss) differ in each run (not in each iteration). Is this normal?
Relevant answer
Answer
3 buses can be selected from 21 candidate buses using the command 'randsample' available in matlab. For e.g. candidate_bus= [ 2 3 4 5 6 7 8...], then x=randsample(candidate_bus,3). Here, 3 is the number of capacitor. At each run, it can generate random samples.
  • asked a question related to Optimization
Question
3 answers
Hello everyone, good day.
This is the first time I am choosing a camera; please guide me if you can.
Do cameras perform software optimizations on the image captured by the lens, for example to increase image contrast?
Is software optimization generally done by the manufacturer in the camera, or does the user have to do the desired optimization himself?
Is it possible for the user to apply software changes to increase the image quality of a camera?
Thank you for your attention.
Best regards.
  • asked a question related to Optimization
Question
7 answers
When I performed the optimization of the minimum, all four convergence criteria were met. I then carried out a frequency calculation, and all frequencies were real, but on reviewing the convergence criteria of the frequency job, two criteria were not met:
Item                                    Value      Threshold  Converged?
Maximum Force               0.000022  0.000450   YES
RMS Force                      0.000006  0.000300   YES
Maximum Displacement   0.014118  0.001800   NO
RMS Displacement          0.002399  0.001200   NO
In addition, Gaussian 09 does not report an error in the output file during the calculation.
What do I have to do to fix the convergence criteria of the frequencies?
Relevant answer
Answer
Chaima Basma Remougui In principle it is not considered as converged. For geometry optimization, all four criteria should be fulfilled.
  • asked a question related to Optimization
Question
3 answers
The molecules contain Fe, B, C, H, and F. The SCF energies are very close to converging, but they just don't converge. I tried increasing the SCF cycles, changing the input geometry, etc. I am using the def2-TZVPD basis set.
Relevant answer
Answer
Hi, I am not used to Q-Chem and RI-MP2 methods. But I have found several techniques to improve convergence on this page: https://manual.q-chem.com/5.3/sect_convergence.html
In general, it is useful to look at the last energy values of the SCF procedure. If the energy oscillates, maybe some damping is enough to favour convergence. If the energy always decreases and does not oscillate (apart from rare cases), maybe you should increase the maximum number of steps.
I found some details on how to deal with damping on Q-Chem here: https://manual.q-chem.com/5.3/sect_damp.html
I hope this may help
Best regards
  • asked a question related to Optimization
Question
5 answers
Journal of Industrial & Management Optimization (JIMO) is an open access journal: you pay a substantial amount to publish a paper. When you go to the website of its publisher, the American Institute of Mathematical Sciences (AIMS Press), it seems that it is not really based in the United States. I am not sure if it is a legitimate professional organization or a predatory publisher. They have a large number of open access journals. On the other hand, their handling of papers is terrible: extremely slow and low-tech, which is not typical for predatory journals. It may take 13 months to get an editorial rejection, for instance. Furthermore, they don't have an online submission system with user profiles; you just submit the paper on a website, and they give you a URL to check your paper's status, which makes your submission visible to anyone who has the URL. It has an impact factor of 1.3, which puzzles me. Any comments on this organization and the journal will be appreciated.
Relevant answer
Answer
Norbert Tihanyi one little warning, if you look whether a particular journal is mentioned in the Beall’s list you should not only check the journal title in the stand-alone journal list (https://beallslist.net/standalone-journals/) but also the publisher behind it (if any). In this case the publisher is not mentioned in the Beall’s list (https://beallslist.net/). Anis Hamza I suppose you mean ISSN number, this journal with ISSN 1547-5816 and/or E-ISSN:1553-166X is mentioned in Scopus (https://www.scopus.com/sources.uri?zone=TopNavBar&origin=searchbasic) and Clarivate’s Master journal list (https://mjl.clarivate.com/home).
Back to your question, it is somewhat diffuse. There are signs that you are dealing with a questionable organization:
-The contact info renders in Google as a nice residence but does not seem to correspond to an office, and I quote: “The American Institute of Mathematical Sciences is an international organization for the advancement and dissemination of mathematical and applied sciences.” https://www.aimsciences.org/common_news/column/aboutaims
-Both websites https://www.aimsciences.org/ and http://www.aimspress.com/ function more or less okay, but not flawlessly
-The journal “Journal of Industrial & Management Optimization (JIMO)“ is somewhat vague about the APC. It positions itself as hybrid (with an APC of 1800 USD), but all the papers I checked can be read as open access (although not all have a CC etc. license). It mentions something like open access for free when an agreement is signed with your institution, but how much this costs is unclear
-No problem by itself, but the majority of authors are from China, which makes you wonder about the "American Institute"…
-Editing is well…sober
On the other hand it looks like, and I quote, “AIMS is a science organization with two independent operations: AIMS Press (www.aimspress.com) and the American Institute of Mathematical Sciences (AIMS) (www.aimsciences.org ).” AIMS Press is focused on open access journals, while the journals published by AIMS (www.aimsciences.org) are, or used to be, subscription-based journals. Pretty much like Springer with their BioMed Central (BMC) journal portfolio and Bentham with their Bentham Open division.
Facts are:
-AIMS ( www.aimsciences.org ), more than 20 of their journals are indexed in SCIE and indexed in Scopus as well (under the publisher’s name: American Institute of Mathematical Sciences)
-AIMS Press (www.aimspress.com ), four journals are indexed in SCIE and thus have an impact factor and 14 journals are indexed in Clarivate’s ESCI. 7 journals are indexed in Scopus.
-AIMS Press, 20 of their journals are a member of DOAJ
-Journal of Industrial & Management Optimization (JIMO) https://www.aimsciences.org/journal/1547-5816 is indexed in Clarivate’s SCIE (impact factor 1.801, see enclosed file for latest JCR Report) and Scopus indexed CiteScore 1.8 https://www.scopus.com/sourceid/12900154727.
-For the papers I checked the time between received and accepted varies between 6 and 9 months and an additional 3-4 months before publication (it is well… not fast but not unusual)
So, overall, I think that the publisher has quite some credibility and it might be worthwhile to consider.
Best regards.
  • asked a question related to Optimization
Question
3 answers
The properties of this powder compact greatly depend on the process parameters. To obtain the best final product, the process parameters must be optimized, but which tool should be used to optimize them?
  • asked a question related to Optimization
Question
6 answers
Are there sources that explain how ANFIS code can be generated and then optimized by Particle Swarm Optimization (PSO)?
Relevant answer
Answer
  • asked a question related to Optimization
Question
8 answers
Hi all,
I am optimising 4-anilino-6,7-ethylenedioxy-5-fluoroquinazoline (a series of 4-anilino-5-fluoroquinazolines with varying R groups) for an internship.
I get slightly different optimised structures when optimising from a self-built molecule as compared to the compound downloaded from MolView. This difference may have been down to the difference in the initial state, falling into a minimum (local?) energy state.
Attached are the two orientations. They are almost identical, except for the CH2 angles, which are flipped.
What causes this and is it important for finding the global minimum energy structure and subsequent calculations?
Relevant answer
Answer
Thanks for the information! I believe I have correctly identified R and S enantiomers.
  • asked a question related to Optimization
Question
5 answers
I am doing NSGA-II optimization. For this I have developed some equations through RSM in Minitab. The problem I am facing now is setting the constraints: 4 variables, 3 objectives. I am getting optimized values which are beyond the experimental response domain. I think a proper constraint may solve this.
Relevant answer
Answer
You did not specify the type of problem you are dealing with and why you have to resort to heuristics instead of exact optimization algorithms. That's why nobody will be able to give valuable advice.
Best regards
  • asked a question related to Optimization
Question
4 answers
Can any one provide me with MATLAB code for Particle Swarm Optimization to train ANFIS ?
Relevant answer
  • asked a question related to Optimization
Question
4 answers
Hello everyone,
Is there any optimization procedure, in the Whitehead (1965) tray method, to extract nematodes from hydrophobic soils such as forest soils?
Best regards
Relevant answer
Answer
You can use the Cobb method (decantation and sieving). However, the Cobb method requires more time to process soil (per sample unit).
  • asked a question related to Optimization
Question
3 answers
I have a constrained Optimization problem:
Decision variables:
Xih = Amount of item i in item group h . Xih is a real number.
Aih = Cost of item i in item group h
Cihj = property j of item i (in each unit) in item group h.
Bj = Minimum total property j.
Dj = Maximum total property j.
mih = Minimum of Xih
Mih = Maximum of Xih
Problem Formulation:
Min/Max z1= ∑h ∑i Aih Xih
s.t.
Bj ≤ ∑h∑i Cihj * Xih ≤Dj
where j is number of properties of items
mih ≤ Xih ≤ Mih
So, how do I formulate this problem if I need only N elements from a group,
something like
OR(X1h , X2h , X3h) == 1
OR(X1h , X2h , X3h) == 2 etc.
That is, can I constrain the number of items in each group?
Relevant answer
Answer
Looks like a MILP problem (MILP = Mixed Integer Linear Programming). Check www.gams.com for modeling and suitable solvers.
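One standard linearization used in MILP models of this kind is to introduce a binary indicator per item; the y_{ih} and N_h below are new notation, not part of the original model:

```latex
% New notation (not in the original model): binary y_{ih} = 1 iff item i of
% group h is selected; N_h = required number of items in group h.
% Assumes m_{ih} >= 0, so that y_{ih} = 0 forces X_{ih} = 0.
\begin{align}
  m_{ih}\, y_{ih} \;\le\; X_{ih} \;\le\; M_{ih}\, y_{ih} &\qquad \forall i, h \\
  \sum_{i} y_{ih} \;=\; N_h                              &\qquad \forall h \\
  y_{ih} \in \{0, 1\}                                    &\qquad \forall i, h
\end{align}
```

With this, "exactly N items from group h" becomes a plain linear cardinality constraint (use ≤ or ≥ for "at most"/"at least"), and the problem stays a MILP.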
  • asked a question related to Optimization
Question
11 answers
In the case of many-objective models, the epsilon-constraint method has been seen as an efficient approach. This method is considered an exact approach, and the existing literature has discussed it extensively. So I have the two following questions:
1. How to implement this method (share the link in case of any source code available).
2. What are the other similar/different exact methods available for dealing with many objective problems.
#OperationResearch #Closed-loopSupplyChain #Optimization
Relevant answer
Answer
You can follow this research:
I welcome any cooperation.
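For question 1, the epsilon-constraint idea is easy to prototype: keep one objective, move the others into constraints f_k ≤ ε_k, and sweep the epsilons. A brute-force sketch on an invented discrete bi-objective instance (in a real model the inner minimization would be done by an exact solver, e.g. a MILP solver, with f2 ≤ ε added as a constraint):

```python
# Epsilon-constraint sketch on a discrete bi-objective problem.
# The candidate set is invented for illustration only.
candidates = [(1, 9), (2, 7), (3, 5), (4, 6), (5, 2), (6, 1)]  # (f1, f2) pairs

def eps_constraint(points, eps):
    """Minimize f1 subject to f2 <= eps; return None if infeasible."""
    feasible = [p for p in points if p[1] <= eps]
    return min(feasible, key=lambda p: p[0]) if feasible else None

# Sweep epsilon over the observed range of f2 to trace out the Pareto front.
front = []
for eps in sorted({f2 for _, f2 in candidates}):
    sol = eps_constraint(candidates, eps)
    if sol is not None and sol not in front:
        front.append(sol)
print(front)  # only non-dominated points: [(6, 1), (5, 2), (3, 5), (2, 7), (1, 9)]
```

Note that the dominated point (4, 6) is never returned: for every ε that admits it, (3, 5) is feasible with a smaller f1.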
  • asked a question related to Optimization
Question
1 answer
I have a question concerning the update algorithms used for the Hessian during optimizations and transition state searches. In a paper by J. Baker (J. Comp. Chem. 7, 385-395 (1986)) and some manuals (GAMESS, Gaussian), the statement is made that while the BFGS update is *better* for an optimization, for a transition state search the DFP update is to be preferred. Of course, no reason is provided to support this statement. One of the books frequently referred to (Fletcher: Practical Methods of Optimization) makes the following statements:
- Provided that the function being optimized is quadratic AND the line search is exact, positive definiteness of the Hessian is preserved (for both the DFP and BFGS formulas).
- For inexact line searches, global convergence of the BFGS method has been proved. However, for DFP this could not be shown.
Is this somehow related to the fact that for TS searches the DFP method is recommended? Why is this so? Does this mean that the BFGS method will lead to a positive-definite Hessian, even if the initial Hessian has the required negative eigenvalue? Any hint is appreciated! Thanks,
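For concreteness, the two Hessian-approximation updates being compared can be written down directly; both satisfy the secant condition B⁺s = y, where s is the geometry step and y the gradient change (a NumPy sketch of the textbook formulas, not of any particular program's implementation):

```python
import numpy as np

def bfgs_update(B, s, y):
    """BFGS update of a Hessian approximation B (satisfies B+ @ s == y)."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

def dfp_update(B, s, y):
    """DFP update of a Hessian approximation B (satisfies B+ @ s == y)."""
    I = np.eye(len(s))
    rho = 1.0 / (y @ s)
    V = I - rho * np.outer(y, s)
    return V @ B @ V.T + rho * np.outer(y, y)

# Both updates reproduce the new curvature information y along the step s:
B = np.eye(3)
s = np.array([1.0, 0.5, -0.2])
y = np.array([2.0, 1.0, 0.3])
print(np.allclose(bfgs_update(B, s, y) @ s, y))  # True
print(np.allclose(dfp_update(B, s, y) @ s, y))   # True
```

The preservation of positive definiteness that Fletcher proves (when yᵀs > 0) is exactly what one wants in minimization and does not want near a transition state, which is the usual informal argument behind the BFGS-for-minima, DFP-for-TS recommendation.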
  • asked a question related to Optimization
Question
2 answers
In Aspen Plus I have the following problem (see attached figure):
I have an RGibbs block which calculates a distribution for a predefined set of products. However, I would like to go beyond this conventional Gibbs energy minimization by additionally considering the calculated energy demand for the block as a problem constraint. The strategy, understood as a conditional statement, would be something like:
IF Qprocess = Qliterature THEN ---> Go on with the simulation
Else ---> Constraint product's yields ---> Repeat until condition is met
My problem is that I don't know exactly how to set those yield constraints for the declared RGibbs products. I thought of the "Inerts" declaration tab, but when you specify a mole flow, the software understands that it should be the actual output flow rather than a boundary.
Any thoughts on the issue will be much appreciated.
Relevant answer
Answer
Dear Reald Tashi and Enrique
I think the problem is how to add a new constraint to the GEM problem in Aspen, and RGibbs does not seem suited to the job, even with a loop outside the block (through a design spec or optimization). Have you considered solving the GEM problem in code (e.g. in MATLAB) and imposing the solution on the simulation model through RYield? That would also allow solving the problem by calculating excess free energies of the key components that satisfy your extra constraint.
I have spoken.
  • asked a question related to Optimization
Question
5 answers
Hello everyone,
Kindly suggest a few pros and cons of the optimisation techniques used to find the minimum number of test runs.
Relevant answer
Answer
Defining the interval ranges in design of experiments (DOE) is also important.
  • asked a question related to Optimization
Question
7 answers
in his name is the judge
hi
I wrote a subroutine in OpenSees for an active TLCD, or tuned liquid column gas damper (TLCGD), and assigned it to some structures; it seems to work correctly.
In the next step I want to optimize the TLCGD locations on each story with objectives such as maximum displacement, torsion ratio, etc., so I have to use a multi-objective optimization code or toolbox in MATLAB and Simulink (NSGA-II may be the best choice). For this purpose I want to run the NSGA-II algorithm in MATLAB, have the algorithm call my OpenSees (Tcl) code and run it, then have NSGA-II modify the damper locations (in the OpenSees code) after each time-history analysis to improve the objectives, and analyze my code again and again until the best locations for the dampers are found.
Note that I actually want the changing of damper locations to be part of the NSGA-II algorithm, so that the algorithm itself automatically relocates the dampers to get the best answer.
One good solution might be OpenSeesPy, but I think it's not freely accessible and I can't get it from Iran, so I am really in over my head in this case.
Any help is greatly appreciated.
Take refuge in the right.
Relevant answer
Answer
I think that's the best answer
  • asked a question related to Optimization
Question
6 answers
In order to measure the performance of many-objective optimization methods, artificial test problems such as MOP, DTLZ, WFG, etc. are presented, but they are not real, and the application of evolutionary methods should be assessed on real engineering problems.
I would appreciate it if you could tell me about real-world engineering optimization problems with more than 6 objectives that have been formulated and solved by you!
with all respect
Relevant answer
Answer
All sciences are integrated with each other, but managing the objectives of real-world problems must be based on specialist knowledge and experience to reach your aim.
Best regards
  • asked a question related to Optimization
Question
7 answers
I am working on the isolation and identification of ESBL (CTX-M, SHV, TEM) positive isolates. I have been troubleshooting for more than two weeks, but I still haven't got a clean band for my PCR products. I ran a gradient PCR to determine the annealing temperature and used 2.5 microliters of EtBr in a 2% gel at 80 volts, 200 amps, for 30 minutes of electrophoresis. But I keep getting the same type of band over and over again. I have attached my band picture below. I would be very grateful if anybody could help me find the problem and help me set up my PCR optimization.
Thanks in advance.
Relevant answer
Answer
The gel image is not a representative one, as you did not mention any DNA ladder or the band size you are looking for. You also need to run a positive control, a negative control, and a no-template control in every batch of PCR, as well as in gel electrophoresis, to validate the PCR. With this image it is really difficult to interpret, as there is no clue to validate the PCR run. Would you please provide the primers' reference?
  • asked a question related to Optimization
Question
5 answers
I developed an automatic tuner using Bayesian optimization.
The plant system is a real vehicle.
Furthermore, there is no stability problem, because this is a kind of semi-active control system.
Could anyone suggest a better AI algorithm for real-time calibration of this semi-active control problem, with more than 500 parameters, using real-time vehicle experiments?
This doesn't seem to be a simple problem, and it needs a lot of domain knowledge about the tuning process and control algorithms. However, I would like to find and build an automated tuner imitating what human test drivers do.
Relevant answer
Answer
Dear Youngil Sohn,
As you have said so well, this is not a simple problem; it will be necessary to choose a tool, or a combination of tools, that is both powerful and smooth, in order to reach a tuning that balances these two antagonistic criteria.
We can suggest basic AI approaches such as those used for the Traveling Salesman Problem (TSP), deep neural networks, or other techniques that are part of machine learning.
For more details and information about this subject, I suggest you see the links on the topic.
Best regards
  • asked a question related to Optimization
Question
4 answers
I have a multi-objective optimization with the following properties:
Objective function: three minimization objective functions (two non-linear functions and one linear function)
Decision variable: two real variables (Bounded)
Constraints: three linear constraints (two bounding constraints and one relationship constraint)
Problem type: non-convex
Solution required: Global optimum
I have used two heuristic algorithms to solve the problem NSGA-II and NSGA-III.
I have run NSGA-II and NSGA-III for the following instances (population size, number of generations, maximum number of functional evaluations (i.e. pop size × no. of gen)): (100, 10, 1000), (100, 50, 5000), (100, 100, 10000), (500, 10, 5000), (500, 50, 25000), and (500, 100, 50000).
My observations:
Hypervolume increases with an increase in the number of functional evaluations. However, for a given population size, as the number of generations increases, the hypervolume reduces, whereas I think it should rather increase. Why am I getting such a result?
Relevant answer
Answer
Greetings to you all.
Please how can I find MATLAB code for Accelerated Particle Swarm Optimization algorithm for tuning PID controller.
  • asked a question related to Optimization
Question
3 answers
I have designed a fuzzy logic controller by means of a MATLAB function. Then I called this function in Simulink by using an interpreted function block. The program worked, but it took too much time (about 2 days) to obtain a result. I want to speed up my controller, so how can I speed up an interpreted function in Simulink?
Note: I did not use the fuzzy logic toolbox, as I want to optimize the fuzzy membership parameters.
  • asked a question related to Optimization
Question
2 answers
For an Integer Linear Programming problem (ILP), an irreducible infeasible set (IIS) is an infeasible subset of constraints, variable bounds, and integer restrictions that becomes feasible if any single constraint, variable bound, or integer restriction is removed. It is possible to have more than one IIS in an infeasible ILP.
Is it possible to identify all possible Irreducible Infeasible Sets (IIS) for an infeasible Integer Linear Programming problem (ILP)?
Ideally, I aim to find the MIN IIS COVER, which is the smallest cardinality subset of constraints to remove such that at least one constraint is removed from every IIS.
Thanks for your time and consideration.
Regards
Ramy
  • asked a question related to Optimization
Question
5 answers
I am currently working on PSO, where I need to test my algorithm with benchmark optimization test functions. Liang et al. published this paper ( ) with 28 test functions. Earlier they had a link to their MATLAB code repository, but now I think it has been removed. Can anyone help me with the MATLAB code for all 28 of these functions?
Thanks in advance.
Relevant answer
Answer
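The official CEC 2013 functions also apply shift vectors and rotation matrices loaded from the competition's data files, so they cannot be reproduced from scratch. For a first sanity check of a PSO implementation, though, the classic unshifted base functions are easy to write down (a sketch, not the CEC code):

```python
import numpy as np

# Classic unshifted benchmark functions (NOT the CEC 2013 versions, which
# additionally apply per-function shift vectors and rotation matrices).
def sphere(x):
    return float(np.sum(x ** 2))

def rosenbrock(x):
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

def rastrigin(x):
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

# Known global minima: sphere and rastrigin at the origin, rosenbrock at all-ones.
print(sphere(np.zeros(5)), rastrigin(np.zeros(5)), rosenbrock(np.ones(5)))
```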
  • asked a question related to Optimization
Question
4 answers
Hello everyone,
I want to set a gap limit in my model in CPLEX CP Optimizer, how can I add this?
Relevant answer
I hope it will be helpful Ekin Tanır
  • asked a question related to Optimization
Question
7 answers
Respected all,
I am trying to learn MCDM techniques, and I am currently watching lectures on
Can anyone please help me:
where can I find some solved code in Python or R related to MCDM techniques?
Relevant answer
Answer
I suggest the pymcdm library, the project of which can be found on the repository:
You can install it by using in console:
>> pip install pymcdm
Examples of library usage:
  • asked a question related to Optimization
Question
4 answers
I am getting the fmin by giving random x values.
Can anybody tell me how I can find the fmin starting from x = (5.0, 5.0)?
****************************************************************************
from scipy.optimize import fmin

x0 = [-1.0, 1]

def func(x):
    f = 100*(x[1] - x[0]**2)**2 + (1 - x[0])**2
    return f

op = fmin(func, x0)
print(op)
***************************************************************************
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 100
         Function evaluations: 187
[1.00000102 1.00000004]
Relevant answer
Answer
The SciPy Python library provides an fminsearch-like optimisation method, similar to MATLAB's. For example:
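To start the search from (5.0, 5.0), just change the starting point x0; with a distant start it also helps to raise the iteration limits (a sketch using the same Rosenbrock function as in the question):

```python
from scipy.optimize import fmin

def func(x):
    return 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2

# Start the Nelder-Mead search from (5.0, 5.0); the extra iterations matter
# because the default limits can be too tight for a far-away starting point.
x0 = [5.0, 5.0]
op = fmin(func, x0, maxiter=5000, maxfun=10000, xtol=1e-8, ftol=1e-8)
print(op)  # converges to the minimizer near [1, 1]
```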
  • asked a question related to Optimization
Question
5 answers
I wonder if there are any theoretical/practical applications (papers) for minimizing a Neural Network function?
Precisely, let $f(w,x)$ be a real-valued neural network function obtained by determining its weights. Now I am interested in minimizing $f(w,x)$ over all $x$ in some space! Is there any application for this minimization?
Relevant answer
Answer
One needs to be careful when dealing with this problem, since w and x are not independent: w is determined through x (the training data). Therefore it may not be quite reasonable to fix w based on some finite values and then search the entire x domain. That said, you could consider the function to be describing your phenomenon; minimizing it then means searching for its extrema. For instance, the maximum flow of a river given certain inputs to a neural network describing it.
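One common instance of this minimization is optimizing the input of an already-trained network by gradient descent on x with w held fixed. A minimal NumPy sketch with an invented tiny network (the weights are random stand-ins for trained ones):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed network f(x) = w2 . tanh(W1 x); these weights stand in for a
# trained network (here they are random, purely for illustration).
W1 = rng.normal(size=(8, 3))
w2 = rng.normal(size=8)

def f(x):
    return float(w2 @ np.tanh(W1 @ x))

def grad_f(x):
    # d/dx [w2 . tanh(W1 x)] = W1^T (w2 * (1 - tanh(W1 x)^2))
    h = np.tanh(W1 @ x)
    return W1.T @ (w2 * (1.0 - h ** 2))

# Gradient descent over the input x with the weights held fixed; steps that
# would increase f are rejected and the step size halved (crude backtracking).
x = rng.normal(size=3)
lr = 0.1
f0 = fx = f(x)
for _ in range(300):
    x_new = x - lr * grad_f(x)
    if f(x_new) < fx:
        x = x_new
        fx = f(x_new)
    else:
        lr *= 0.5
print(f0, "->", fx)  # fx < f0: descent toward a (local) minimum over x
```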
  • asked a question related to Optimization
Question
9 answers
Hello
I have to run an optimization using the genetic algorithm GA with a defined initial population.
The same problem is optimized using PSO as well, and this is the option command for the GA
options = optimoptions(@ga,'Generations',Max_iteration,'OutputFcns',@outputfunction,'PopulationSize',50,'InitialPopulationMatrix',initialX,'TolFun',1e-10);
where initialX is my initial population.
The issue is that I am not getting the same value of the first run for both algorithms.
Any one can help me with this?
Relevant answer
Answer
Hi Djedoui,
If I understand your question correctly, I'm assuming you have a fixed, non-randomized initial population for both the GA and PSO optimizer, and that by "run" you mean "iteration". If that is the case, then you shouldn't be surprised that your first iteration produces different objective function values. Remember, for most metaheuristic algorithms, the value of the first iteration is estimated only after the positions of the search agents are updated. To simplify: the algorithm first creates the initial population (in your case, fixed for both GA and PSO), and then the search agents go through the first iteration loop, whereby positions, and therefore objective function values, are updated. The best optimized value is selected and then reported for "Iteration 1". Since both GA and PSO employ randomized operators, you will definitely get different results.
Hope that answers your question.
  • asked a question related to Optimization
Question
4 answers
In the maintenance optimization context, researchers use a structure under which the renewal reward theorem applies, and they use this theorem to minimize the long-run cost rate criterion in the maintenance optimization problem.
However, in the real world, creating structures that lead to the renewal reward theorem may not be possible. I am looking for ways of dealing with these problems.
Bests
Hasan
Relevant answer
Answer
If you want to, yes. Why not?
  • asked a question related to Optimization
Question
2 answers
I’m defining a number system where numbers form a polynomial ring with cyclic convolution as multiplication.
My DSP math is a bit rusty, so I'm asking: when does the inverse of circular convolution exist? You can easily calculate it using the FFT, but I'm uncertain when it exists. I would like to have a full number system where each number has a single well-defined inverse. Another part of my problem is differentiation. Let c be a number in my number system C[X], where the coefficients are complex numbers. Linear functions can be differentiated easily, but I'm struggling to minimize the mean squared error (i = 0..degree(C[X]); s_i(c) selects the i-th coefficient of c, s_i: C[X] -> C):
error(W) = Exy{sum(i)(0.5|s_i(Wx - y)|^2)}
I can solve the problem in the case of complex numbers W ∈ C, but not in the case of W ∈ C[X], where multiplication is circular convolution. In practice, my linear neural network code diverges to infinity when I try to minimize the squared error.
Pointing me to any online material that would help would be great.
Relevant answer
Answer
Have you tried Mathematica? It's my favorite software for Maths.
Better than Python (easier to use).
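On the mathematical question itself: under the DFT, cyclic convolution becomes pointwise multiplication, so c is invertible exactly when fft(c) has no zero entries; otherwise c is a zero divisor, so unique inverses cannot exist on all of C[X]. A NumPy check:

```python
import numpy as np

def circ_conv(a, b):
    """Cyclic convolution via the DFT (pointwise product in frequency domain)."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))

def circ_inv(c, tol=1e-12):
    """Inverse under cyclic convolution; exists iff fft(c) has no zeros."""
    C = np.fft.fft(c)
    if np.min(np.abs(C)) < tol:
        raise ValueError("c is a zero divisor: its DFT has a zero entry")
    return np.fft.ifft(1.0 / C)

c = np.array([2.0, 1.0, 0.5, -0.3])
delta = np.zeros(4)
delta[0] = 1.0  # identity element for cyclic convolution
print(np.allclose(circ_conv(c, circ_inv(c)), delta))  # True

# Example of a zero divisor: the all-ones vector has fft = [n, 0, 0, ...],
# so circ_inv(np.ones(4)) raises ValueError.
```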
  • asked a question related to Optimization
Question
19 answers
Hi everyone.
I have a question about finding a cost function for a problem. I will ask the question in a simplified form first, then I will ask the main question. I'll be grateful if you could possibly help me with finding the answer to any or both of the questions.
1- What methods are there for finding the optimal weights for a cost function?
2- Suppose you want to find the optimal weights for a problem that you can't measure the output (e.g., death). In other words, you know the contributing factors to death but you don't know the weights and you don't know the output because you can't really test or simulate death. How can we find the optimal (or sub-optimal) weights of that cost function?
I know it's a strange question, but it has so many applications if you think of it.
Best wishes
  • asked a question related to Optimization
Question
10 answers
The success of deep learning depends on appropriately setting its parameters to achieve high-quality results. The number of hidden layers and the number of neurons in each layer of a deep learning model have a major influence on the performance of the algorithm. Manual parameter setting and grid search approaches somewhat ease the user's task in setting these important parameters, but these two techniques can be very time-consuming. I have heard a lot about the potential of particle swarm optimization (PSO) to optimize parameter settings.
I want to optimize deep learning parameters to save my valuable computational resources.
Relevant answer
Answer
Why would you want to adjust DL parameters when you can approximate Error Gradients by PSO?
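As a starting point, the PSO loop itself is short. A minimal sketch minimizing a toy objective; in actual hyperparameter tuning, each dimension of a particle's position would encode a setting (e.g. number of layers, neurons per layer, suitably rounded) and the objective would be validation error:

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # Toy stand-in: in hyperparameter tuning this would be validation error
    # as a function of the encoded settings.
    return float(np.sum(x ** 2))

n_particles, dim, iters = 20, 4, 100
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()                             # personal best positions
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()     # global best position

for _ in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    better = vals < pbest_val
    pbest[better] = pos[better]
    pbest_val[better] = vals[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(objective(gbest))  # near 0, the global minimum of the toy objective
```

The expensive part in practice is that each `objective` call means training a network, which is why population size and iteration count must be kept small.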
  • asked a question related to Optimization
Question
2 answers
Hello,
I am working on the classification of two different datasets of apple fruit for checking whether apples are rotten or fresh.
I have designed a CNN model from scratch. I have changed the hyperparameters like MiniBatchSize, Epoch, Learning Rate and Optimizer.
I would like to know how to compare the results. There are two possibilities, either I can draw the graphs for each hyperparameter separately and the other is based on accuracy.
I want to show that my model works perfectly for a particular dataset with particular hyperparameters.
Please guide me.
Thanks
Relevant answer
Answer
Thank you, Sir.
  • asked a question related to Optimization
Question
8 answers
I want the following help in linear fashion:
  • I have a vector: P(n)=[1,2,3,4,5,...,n]
  • I have binary variables with length of P: y(1),y(2).....y(n). These variables, y(i) can only take 0 or 1.
A new vector Q(n) is needed such that, if y(i) is 1, then the i-th element of Q(n) should be 0. In the rest of the places, P(n) should repeat itself.
  • Example: if n=6, and y(1) and y(4) are 1, then Q(1) and Q(4) should be 0. The vector Q will be as follows:
Q=[0 1 2 0 1 2].
  • Another example: if y(3) is 1, then Q should be as follows:
Q=[1 2 0 1 2 3].
Basically, besides the zeros at the i-th positions, the vector P should repeat itself in the non-zero positions of Q.
Can you please help me formulating this problem?
Relevant answer
Answer
In MATLAB you can try this:

clc; clear;
P = 1:10;                   % works for any length of P
y = [1 0 0 1 0 0 0 0 0 0];  % y(i) == 1 marks a position to zero out
Q = zeros(size(P));
last = 0;                   % index of the most recent zeroed position
for i = 1:length(P)
    if y(i) == 1
        Q(i) = 0;
        last = i;
    else
        Q(i) = i - last;    % P restarts from 1 after each zero
    end
end
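The reset logic described in the question can also be checked quickly in Python against the two given examples:

```python
def build_q(n, ones):
    """Q(i) = 0 where y(i) = 1; elsewhere P restarts at 1 after each zero."""
    q, last_reset = [], 0
    for i in range(1, n + 1):       # 1-based indexing, as in the question
        if i in ones:               # positions where y(i) == 1
            q.append(0)
            last_reset = i
        else:
            q.append(i - last_reset)
    return q

print(build_q(6, {1, 4}))  # [0, 1, 2, 0, 1, 2]
print(build_q(6, {3}))     # [1, 2, 0, 1, 2, 3]
```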
  • asked a question related to Optimization
Question
2 answers
I am optimizing my molecule with the LC-B3LYP method and the 6-311G(d,p) basis set, but the job dies giving me the error:
''Error termination via Lnk1e in C:\G09W\l301.exe''
I am attaching my output file as well. What could be the reason?
Relevant answer
Answer
I'd like to share an outstanding summary on "How to solve error messages in Gaussian":
Have a good day!
  • asked a question related to Optimization
Question
4 answers
In decomposition-based MaOPs, for example NSGA-III, some predefined reference points are associated with the population members in order to decompose a many-objective problem into a series of multi-/single-objective problems.
So how does this work?
Relevant answer
Answer
The reference points (or reference vectors, which differ slightly between algorithms) decompose the objective space by means of the vectors from the origin to each point.
As the algorithm runs, the population members gather around their associated reference point, so each reference point makes some individuals search only the area around it, which enhances the diversity of the population.
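For reference, the predefined points used by NSGA-III are usually the structured (Das-Dennis) points: all vectors k/H with non-negative integer components summing to H, which lie uniformly on the unit simplex. A small generator sketch:

```python
from math import comb

def compositions(m, H):
    """All m-tuples of non-negative integers summing to H."""
    if m == 1:
        return [(H,)]
    points = []
    for k in range(H + 1):
        for tail in compositions(m - 1, H - k):
            points.append((k,) + tail)
    return points

def ref_points(m, H):
    """Das-Dennis reference points for m objectives and H divisions."""
    return [tuple(k / H for k in p) for p in compositions(m, H)]

pts = ref_points(3, 4)                    # 3 objectives, 4 divisions
print(len(pts), comb(4 + 3 - 1, 3 - 1))   # C(H+m-1, m-1) = 15 points
```

Each point sums to 1, i.e. it lies on the simplex that the population is spread over.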
  • asked a question related to Optimization
Question
5 answers
I want to compare metaheuristics on the optimization of Lennard-Jones clusters. There are many papers available that optimize Lennard-Jones clusters; unfortunately, none of them provide the upper and lower bounds of the search space. In order to conduct a fair comparison, all metaheuristics should search within the same bounds. I found the global minima here: http://doye.chem.ox.ac.uk/jon/structures/LJ/tables.150.html but the search space is not defined.
Can anyone please tell me what the recommended upper and lower bounds of the search space are?
Relevant answer
Answer
Miha Ravber : for me, [-2, 2] was enough because I fixed the first atom at (0, 0, 0), the second at (>= 0, 0, 0), etc. If you don't, you get free coordinates between your bounds.
You can definitely start at [-10, 10] and see what the results are, then adjust.
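For context, the objective whose search space is being bounded is the cluster's total Lennard-Jones energy in reduced units (sigma = epsilon = 1); the pair minimum sits at r = 2^(1/6) with energy -1, which gives a quick correctness check:

```python
import numpy as np

def lj_energy(coords):
    """Lennard-Jones cluster energy in reduced units (sigma = epsilon = 1)."""
    coords = np.asarray(coords).reshape(-1, 3)
    energy = 0.0
    n = len(coords)
    for i in range(n - 1):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            energy += 4.0 * (r ** -12 - r ** -6)
    return energy

# Two atoms at the pair-equilibrium distance r = 2**(1/6) give energy -1:
dimer = [[0.0, 0.0, 0.0], [2 ** (1 / 6), 0.0, 0.0]]
print(lj_energy(dimer))  # ≈ -1.0
```

Since the energy depends only on interatomic distances, fixing some coordinates (as suggested above) removes translational and rotational freedom and makes tighter bounds like [-2, 2] workable.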
  • asked a question related to Optimization
Question
6 answers
Hi all,
I have a large mixed-integer programming (MIP) optimization problem which has a high risk of infeasibility. The branch-and-cut algorithm in GLPK spends hours searching for an optimal solution, and the problem may turn out to be infeasible at the end. I want to do a pre-screening before starting the actual optimization to make sure there is a good chance of a feasible solution. I acknowledge that the only way to check feasibility exactly is to run the optimization, but any heuristic with potential false infeasibility alerts (false positives) could be helpful. My focus is on feasibility rather than optimality. Do you have any suggestions for an algorithm, software, or a Python library to do this pre-screening?
Thanks for your time and kind reply.
Relevant answer