Optimization - Science topic
Explore the latest questions and answers in Optimization, and find Optimization experts.
Questions related to Optimization
In his name is the judge
Hi
I want to learn multi-objective optimization with NSGA-II in Python for my research.
Please recommend a good source for learning NSGA-II in Python.
Wish you the best.
Take refuge in the right.
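For NSGA-II in Python, the pymoo library is a common starting point: it ships an NSGA-II implementation along with the standard ZDT/DTLZ test problems. As a learning exercise, the core building block of the algorithm, fast non-dominated sorting, can also be written in a few lines of plain Python. This is a minimal sketch for a minimization problem, not pymoo's actual implementation:

```python
def dominates(p, q):
    """p dominates q (minimization): no worse in every objective, better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(points):
    """Split objective vectors into successive Pareto fronts (by index), NSGA-II style."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                dom_count[i] += 1
    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:       # all its dominators are in earlier fronts
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

fronts = non_dominated_sort([(1, 4), (2, 2), (4, 1), (3, 3), (5, 5)])
print(fronts)  # [[0, 1, 2], [3], [4]]
```

In pymoo the equivalent run is a few lines (`NSGA2(pop_size=...)` plus `minimize(...)`); the sketch above is just to make the ranking step transparent.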
I have been running a GA optimization using "gamultiobj" in MATLAB. The lower and upper bounds of my design variables are [10 30 175 1] and [30 60 225 3]. After convergence, I see that the design variables (all of them) are near 20, 47, 175, and 1, and I am not getting a Pareto front. What could be the possible reasons for that?
I am using a hybrid optimization algorithm (Grey Wolf + Cuckoo Search) to find the optimal size of hybrid renewable energy system based on Total Net Present Cost of the system. Details of Optimization Problem are as follows:
Objective function: TNPC [$]: f(number of components) = f(N_pv, N_wt, N_bat, N_inv)
Lower Bound: [1 1 1 1] Upper Bound: [1000 1000 100 100]
Constraints: 1. System size is within allowed min. and max. system size. 2. battery SOC remains within allowed limit
The algorithm does end up with the least system cost; however, the optimal system size comes out exactly equal to the lower-bound values, i.e. optimal size of system: [1 1 1 1].
Optimization Algorithm Convergence Curve is attached.
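When a metaheuristic's "optimum" lands exactly on the lower bounds, a common cause is that the reliability/SOC constraints are not actually penalizing infeasible designs, so the cheapest (smallest) system always wins. A standard remedy is to fold constraint violations into the objective with a penalty term. The quadratic penalty form and the toy constraint below are illustrative, not the poster's actual model:

```python
def penalized_cost(cost, violations, mu=1e6):
    """Add a quadratic penalty for each violated constraint (g(x) <= 0 form)."""
    return cost + mu * sum(max(0.0, v) ** 2 for v in violations)

# Toy example: minimize x^2 subject to x >= 2 (violation measure: 2 - x).
best_x, best_f = None, float("inf")
for i in range(4001):
    x = i * 0.001                         # crude grid search over [0, 4]
    f = penalized_cost(x * x, [2.0 - x])  # unconstrained min x = 0 is now penalized
    if f < best_f:
        best_x, best_f = x, f
print(best_x)  # close to the constrained optimum x = 2
```

If the hybrid GWO/CS code evaluates only TNPC and checks SOC separately (or not at all), adding a penalty like this usually pushes the search away from the trivially small system.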
I've been using GMAT in my free time to optimize trajectories, and have varied burn component values and spacecraft states, usually with success. The vary command in GMAT, with the Yukon optimizer that I am using, has the following parameters that can be changed:
- Initial value: The initial guess. I know the gradient descent optimization method that GMAT uses is very sensitive to initial conditions and so this must be feasible or reasonably close.
- Perturbation: The step size used to calculate the finite difference derivative.
- Max step: The maximum allowed change in the control variable during a single iteration of the solver.
- Additive scale factor: Number used to nondimensionalize the independent variable. This is done with the equation xn = m (xd + a), where xn is the non-dimensional parameter, xd is the dimensional parameter and this parameter is a.
- Multiplicative scale factor: Same as above, but this parameter is the variable m in the equation.
For the initial value, I can usually see when my chosen value is feasible by observing the solver window or a graphical display of the orbit in different iterations. The max step is the most intuitive of these parameters for me, and by trial and error, observation of the solver window and how sensitive my target variables are to changes in the control variables I can usually get it right and get convergence. It is still partially trial and error though.
However, I do not understand the effect of the other parameters on the optimization. I read a bit about finite difference and nondimensionalization/rescaling, and I think I understand them conceptually, but I still don't understand what values they have to be to get an optimal optimization process.
This is especially a problem now because I have started to vary epochs (TAIModJulian usually) or time intervals (e.g. "travel for x days" and find optimal x, or to find optimal launch windows), and I cannot get the optimizer to vary them properly, even when I use a large step size. The optimizer usually stays close to the initial values, and eventually leads to a non-convergence message.
I have noticed that using large values for the two scale factors sometimes gives me larger step sizes and occasionally what I want, but it's still trial and error. As far as perturbation goes, I do not understand its influence on how the optimization works. Sometimes for extremely small values I get array dimension errors, sometimes for very large values I get similar results to if I'm using too large a max step size, and that's about it. I usually use 1e-5 to 1e-7 and it seems to work most of the time. Unfortunately information on the topic seems sparse, and from what I can tell GMAT's documentation uses different terminology for these concepts than what I can find online.
So I guess my question is two-fold: how to understand the optimization parameters of GMAT and what they should be in different situations, and what should they be when I want GMAT to consider a wide array of possible trajectories with different values of control variables, especially when those control variables are epochs or time intervals? Is there a procedure or automatic method that takes into account the scale of the optimization problem and its sensitivity, and gives an estimate of what the optimization parameters should be?
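On the perturbation specifically: it is the finite-difference step h, and it faces a classic trade-off. Too large and the truncation error of (f(x+h) - f(x))/h dominates; too small and floating-point cancellation dominates, which is consistent with the odd behavior seen at both extremes. A quick illustration, unrelated to GMAT itself:

```python
import math

def fd_error(h, x=1.0):
    """Error of the forward difference of sin at x versus the exact derivative cos(x)."""
    approx = (math.sin(x + h) - math.sin(x)) / h
    return abs(approx - math.cos(x))

for h in (1e-1, 1e-6, 1e-13):
    print(f"h={h:g}  error={fd_error(h):.2e}")
# A moderate h (around 1e-6 here) is far more accurate than either extreme.
```

A common rule of thumb is h of roughly sqrt(machine epsilon) times the variable's scale for forward differences, which is why nondimensionalizing the control variables (the scale factors) and choosing the perturbation interact: the perturbation should be sized relative to the *scaled* variable.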
I have designed the optimization experiment using Box-Behnken approach.
What should I do if one of the factor combinations fails, for example because aggregation occurs?
Should I redo the whole optimization, or is there a method to skip that particular factor combination?
And if I need to redesign the whole experiment, what method should I use to evaluate the boundary factor values? The screening methods I have seen require at least 6 factors to be screened.
Any help is appreciated.
Greetings.
I have generated a 16-variable probability distribution in the form of a 16-dimensional NumPy array in Python. How can I determine all the peaks of this function in Python or using some software?
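One way, assuming SciPy is available, is a maximum filter: a cell is a peak if it equals the maximum over its local neighborhood, which works in any number of dimensions. Note that plateaus of equal neighboring values are all reported as maxima:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_maxima(a):
    """Indices of local maxima of an n-dimensional array (3**ndim neighborhood)."""
    neighborhood_max = maximum_filter(a, size=3, mode="constant", cval=-np.inf)
    return np.argwhere(a == neighborhood_max)

a = np.array([[0, 1, 0],
              [1, 5, 1],
              [0, 1, 0]], dtype=float)
print(local_maxima(a))  # [[1 1]]
```

For a 16-dimensional array the same call applies unchanged, though the 3^16-cell neighborhood makes it memory- and time-hungry; on large grids it may be worth filtering by a height threshold first.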
Hello everyone, I am looking for a good MPC quadratic optimization mathematical model to optimize a cost function or performance index, for a battery energy storage state space model. Would anybody suggest a good research paper or post a formulation that contains a good mathematical model for quadratic optimization? An objective function with viable constraints, which can be possible to implement in function solvers such as quadprog or cplex would be ideal. Thank you
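For reference, the generic form that most MPC-for-storage formulations reduce to is a standard QP. The symbols below (state x holding the SOC, input u the charge/discharge power) are a template to adapt, not any specific paper's model:

```latex
\min_{u_0,\dots,u_{N-1}} \;
\sum_{k=0}^{N-1} \left[ (x_k - x^{\mathrm{ref}}_k)^\top Q\, (x_k - x^{\mathrm{ref}}_k)
                        + u_k^\top R\, u_k \right]
+ (x_N - x^{\mathrm{ref}}_N)^\top P\, (x_N - x^{\mathrm{ref}}_N)
\quad \text{s.t.} \quad
x_{k+1} = A x_k + B u_k, \qquad
\mathrm{SOC}_{\min} \le x_k \le \mathrm{SOC}_{\max}, \qquad
u_{\min} \le u_k \le u_{\max}.
```

With Q, R, P positive (semi)definite this is convex, and after substituting the dynamics it maps directly onto the `quadprog`/CPLEX standard form min ½ zᵀHz + fᵀz subject to linear inequalities.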
I want to compare two theorems and see which one has the largest feasibility domain. like the attached picture.
for example, I have the following matrices:
A1=[-0.5 5; a -1];
A2=[-0.5 5; a+b -1];
They are functions of 'a' and 'b'.
I want to plot, as a function of 'a' and 'b', the points where an LMI is feasible,
for example the following LMI
Ai'*P+P*Ai<0
then I want to plot the points where another LMI is feasible, for example:
Ai'*Pi+Pi*Ai<0
I have seen similar maps in many articles, where the authors demonstrate that one LMI is better than another because it is feasible for more pairs (a, b).
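For the single-matrix Lyapunov inequality, note that $A^\top P + PA \prec 0$ with $P \succ 0$ is feasible exactly when A is Hurwitz (all eigenvalues in the open left half-plane), so that particular feasibility map can be drawn with an eigenvalue test alone and no LMI solver. The common-P case (the same P for both A1 and A2, i.e. quadratic stability) genuinely needs an SDP solver such as CVXPY or YALMIP. A sketch of the grid scan for the eigenvalue case, using the matrices above:

```python
import numpy as np

def lmi_feasible(a, b=0.0):
    """A'P + PA < 0 with P > 0 is feasible iff A is Hurwitz (Lyapunov's theorem)."""
    A = np.array([[-0.5, 5.0], [a + b, -1.0]])
    return bool(np.max(np.linalg.eigvals(A).real) < 0)

# Scan the (a, b) plane and collect the feasible points for plotting.
grid = [(a, b) for a in np.linspace(-1, 1, 41)
               for b in np.linspace(-1, 1, 41)
               if lmi_feasible(a, b)]
print(len(grid), "feasible (a, b) pairs")
```

Replacing `lmi_feasible` with a call to an SDP solver (same grid loop) gives the map for the shared-P LMI, and overlaying the two regions reproduces the kind of comparison figure described.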
Hi, can anybody suggest the best tool for doing feature selection using Fruit Fly Optimization?
I am trying to code this optimizer for a linear regression model. What I want to confirm is: are the updates of the model parameters applied even if they cause an increase in the cost function?
Or do we only update the coefficient values if they decrease the value of the cost function?
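In plain fixed-step gradient descent the update is applied unconditionally, even on steps where the cost temporarily rises; only line-search or "accept-if-better" variants test the cost before committing. A minimal sketch for least-squares linear regression (the learning rate and toy data are illustrative):

```python
def gradient_descent(xs, ys, lr=0.01, epochs=5000):
    """Fit y = w*x + b by MSE; the update is applied every step, unconditionally."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w          # no check on whether the cost decreased
        b -= lr * grad_b
    return w, b

w, b = gradient_descent([0, 1, 2, 3], [1, 3, 5, 7])   # true line: y = 2x + 1
print(round(w, 2), round(b, 2))
```

With a sensible learning rate the cost still trends downward overall; if it diverges, the fix is a smaller step, not an acceptance test.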
I have a new idea (by a combination of a well-known SDP formulation and a randomized procedure) to introduce an approximation algorithm for the vertex cover problem (VCP) with a performance ratio of $\rho < 2 - \epsilon$ on arbitrary graphs.
Here, I will clarify the main points of the idea. I would be grateful if anyone could identify potential issues or offer informative suggestions.
You can see the latest versions of my paper at these open-access links:
https://vixra.org/abs/2107.0045 with a performance ratio of $1.999999$
https://vixra.org/abs/2202.0143 with a performance ratio of $1.885903$
It can be natural to reject new ideas right away. Yet, instead of immediate judgments and using negative words, it is better to use positive language. Even ideas that seem implausible can turn into outstanding innovations upon further exploration and development.
The Idea:
First of all, we prove that,
I. If the optimal value of the VCP is greater than $(n/2)+(n/k)$ then $\rho < (2k)/(k+2)$, and
II. If we can produce a feasible solution with objective value smaller than $(kn)/(k+1)$ then $\rho < (2k)/(k+1)$.
Hence, to introduce a performance ratio of $2 - \epsilon$ on arbitrary graphs, it is sufficient to produce a feasible solution with a suitable fixed objective value, or to prove that the optimal value is greater than a suitable fixed value.
Therefore, we solve the well-known SDP relaxation proposed by Kleinberg and Goemans(1998). Note that, I know for sure that just by solving any SDP formulation, we cannot approximate the VCP with a performance ratio better than 2-o(1).
Then, let $V_{-1}=\{j: V_0V_j < 0\}$, and $V_1=V-V_{-1}$ which is a feasible solution for the VCP.
If $|V_{-1}| > 0.0625n$ then $|V_1| < 0.9375n= 15n/16$ and we have (based on II) $\rho < (2\times 15)/16 < 1.885903$.
Else, let $A=\{j: V_0V_j > 0.4\}$.
If $|A| > 0.3075n$, then, we can show that the optimal value of the VCP is greater than $(n/2)+(0.03025n)$ and we have (based on I) $\rho < (2k)/(k+2) < 1.885903$, where $k=1/0.03025$.
Else, let $G_{0.4}=\{j: 0 <= V_0V_j <= 0.4\}$, where based on above results we know that $|G_{0.4}| > (1-0.0625-0.3075)n= 0.63n$.
Now, it is sufficient to introduce a suitable feasible solution based on $G_{0.4}$.
To do this, we prove that for any normalized vector $w$, the induced subgraph on $H_w=\{j: |wV_j| > 0.700001\}$ is a bipartite graph and as a result,
if $|H_w| > 0.118472n$ then we can produce a feasible solution with objective value smaller than $(1-0.118472/2)n= 0.940764n < 16n/17$, and a performance ratio of $\rho < (2\times 16)/17 < 1.885903$.
Finally, to produce such a normalized vector $w$, we show that, by introducing two random vectors $u$ and $w$, one of the sets $H_u$ or $H_w$ has more than $0.118472n$ members, and as a result we can produce a suitable feasible solution based on $G_{0.4}$.
Therefore, we could introduce an approximation ratio of $\rho < 1.885903$ on arbitrary graphs, and, based on the proposed $1.885903$-approximation algorithm for the VCP, the unique games conjecture is not true.
#CombinatorialOptimization
#ComputationalComplexityTheory
#UniqueGamesConjecture
In the optimization of gas turbine cycles, many researchers have used the isentropic efficiencies of the gas turbine and air compressor as decision variables, and I did the same. But recently, while submitting a paper, I received a reviewer comment which really made me think.
The reviewer comment:
"AC and GT isentropic efficiency are used as optimization parameters. Are these easily controllable metrics? The other metrics (pressure ratio and temperatures) are but I wonder about the isentropic efficiencies."
How should I justify?
Hi. I'm going to optimize the layout design of a satellite with Abaqus and Isight. I designed and analyzed the Abaqus model, which is attached below. Now I want to import my model into Isight to optimize the satellite, but I face a big obstacle: Abaqus should pass the reference points and constraint points to Isight in order to optimize the parts' locations, but there is nothing like RF points or constraint points there. I couldn't find a way to solve this problem. Can someone help me figure it out?
How does one optimize a data set comprised of 3 input variables and 1 output variable (all numerical) using a genetic algorithm? Also, how can I create a fitness function? How is a population selected for this form of data? And what will the GA result look like: will it be in the form of 3 inputs and 1 output?
I do understand how the GA works, however, I am confused about how to execute it with the form of data that I have.
My data is structured as follows, just for better understanding:
3 columns of input data and a fourth column of output data (4 columns and 31 rows). The 3 input variables are used to predict the fourth variable. I want to use GA to improve the prediction results.
Lastly, can I use decimal numbers (e.g. 0.23 or 24.45), or should I always use whole numbers for the chromosomes?
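A common setup for this kind of data is: (1) fit a predictive model on the 31 rows, and (2) let the GA search the 3-dimensional input space for the inputs that optimize the model's predicted output, the fitness function being simply the model evaluation. Real-valued (decimal) genes are fine with a real-coded GA; binary encoding is only needed for bitstring GAs. A hedged sketch, with a stand-in prediction function where the trained model would go:

```python
import random

def predicted_output(x):
    """Stand-in for a trained model's prediction; replace with your own model."""
    return -((x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + (x[2] - 0.5) ** 2)

def ga_maximize(fitness, bounds, pop_size=40, gens=150, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                                     # elitism: keep best two
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(scored[: pop_size // 2], 2)  # truncation selection
            child = [(a + b) / 2 for a, b in zip(p1, p2)]    # blend crossover
            for d, (lo, hi) in enumerate(bounds):            # gaussian mutation
                if rng.random() < 0.2:
                    child[d] = min(hi, max(lo, child[d] + rng.gauss(0, 0.3)))
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = ga_maximize(predicted_output, [(-5, 5)] * 3)
print([round(v, 2) for v in best])  # near the optimum of the stand-in model
```

The GA result is then a single candidate 3-input vector (plus its predicted output), not a new column of data.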
When using Wasserstein balls to describe the uncertainty set in distributionally robust optimization, can multiple sources of uncertainty be considered at the same time, such as wind power and solar power forecast error?
Hi
I'm working on research to develop a nonlinear model (e.g. exponential, polynomial, etc.) between a dependent variable (Y) and 30 independent variables (X1, X2, ..., X30).
As you know, I need to choose the variables that have the most impact on estimating Y.
But the question is: can I use the Pearson correlation coefficient matrix to choose the best variables?
I know that the Pearson correlation coefficient measures the linear correlation between two variables, but I want to use the variables for nonlinear modeling, and I don't know of another way to choose my best variables.
I used PCA (Principal Component Analysis) to reduce my variables, but acceptable results were not obtained.
I used HeuristicLab software to develop Genetic Programming - based regression model and R to develop Support Vector Regression model as well.
Thanks
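Pearson only captures linear association, but for screening variables ahead of a nonlinear model, rank-based measures are a drop-in alternative: Spearman's correlation detects any monotonic (not only linear) relationship, and mutual information (e.g. scikit-learn's `mutual_info_regression`) catches more general dependence. A small illustration of the difference, assuming SciPy is available:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

x = np.linspace(0.1, 3.0, 50)
y = np.exp(x)                      # strongly nonlinear but monotonic in x

r_pearson, _ = pearsonr(x, y)
r_spearman, _ = spearmanr(x, y)
print(f"Pearson  r = {r_pearson:.3f}")   # below 1: penalized for nonlinearity
print(f"Spearman r = {r_spearman:.3f}")  # 1.000: perfect monotonic dependence
```

Replacing the Pearson matrix with a Spearman matrix therefore costs nothing in workflow but is better matched to nonlinear (monotonic) relationships.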
I wish to extend a paper by incorporating a particular feature the authors haven't used or considered. However, after going through the literature, it isn't clear how much that feature matters; all I know is that it plays a very important role for the output I care about. For experimentation I am assuming a simple linear regression function ax + by, where a serves as the contribution of the paper I am extending and x is its feature set; my goal is to find the parameter b (by MSE minimization), encoding the new feature in the variable y, and thus determine the strength of y's effect.
However, there are some limitations. First, I am assuming the relationship is linear, which is a very strong assumption, and I am hoping to consider some kind of nonlinearity.
The question is how I should proceed from here. Is there any mathematical form I can consider as an initial assumption?
PS: Note that Y here is a continuous value, not categorical.
How can I optimize ANFIS using a Genetic Algorithm, and also with the Aquila Optimizer, in MATLAB? Is there any available code I can use?
I need MATLAB code to reproduce the attached research paper.
Can anyone provide me with PSO MATLAB code to optimize the weights of multi types of Neural Networks?
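I don't have the specific MATLAB code, but the core of PSO is short enough to sketch. Below is a minimal global-best PSO in Python minimizing a sphere function; for training neural-network or ANFIS weights, the position vector would hold the flattened parameters and the objective would be the training error. All hyperparameters here are conventional defaults, not tuned values:

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO minimizing f over [bounds]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    Pf = [f(x) for x in X]                      # personal best values
    g = min(range(n_particles), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                      # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:
                    G, Gf = X[i][:], fx
    return G, Gf

G, Gf = pso(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
print(Gf)  # near 0
```

The MATLAB translation is mechanical (vectorize over particles); the only problem-specific parts are the encoding of the weights into the position vector and the error function.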
I am running a MARKAL model (a GAMS-based model), and a .lst file is generated. May I know what the following terms mean? I have read the literature but am still not able to make out their true meaning.
OPTION LIMROW=0, LIMCOL=0, SOLPRINT=ON, SYSOUT=OFF,
PROFILE=0, SOLVEOPT=REPLACE;
*OPTION NLP=MINOS5;
REPORT SUMMARY : 0 NONOPT
2076 INFEASIBLE (INFES)
SUM acr??
MAX EPS
MEAN 1.20424E+298
0 UNBOUNDED
What is the meaning of the columns named "Level" and "Marginal"?
What is the meaning of EPS in "lower" and "upper" named columns?
Thank You
Regards
I'm studying optimal placement of sensors in large structures. Several metrics can be found, such as the Fisher information matrix, kinetic energy, effective independence, MAC, etc. But in your opinion and experience, which ones are the best?
(E1U) (E2G) (E2G) (B2U) (A1G) (E1U) (E1U) (E2G)
(E2G) (B2U)
Requested convergence on RMS density matrix=1.00D-08 within 128 cycles.
Requested convergence on MAX density matrix=1.00D-06.
Requested convergence on energy=1.00D-06.
No special actions if energy rises.
SCF Done: E(RB3LYP) = -13319.3349271 A.U. after 1 cycles
Convg = 0.2232D-08 -V/T = 2.0097
Range of M.O.s used for correlation: 1 5424
NBasis= 5424 NAE= 1116 NBE= 1116 NFC= 0 NFV= 0
NROrb= 5424 NOA= 1116 NOB= 1116 NVA= 4308 NVB= 4308
PrsmSu: requested number of processors reduced to: 4 ShMem 1 Linda.
PrsmSu: requested number of processors reduced to: 1 ShMem 1 Linda.
PrsmSu: requested number of processors reduced to: 1 ShMem 1 Linda.
PrsmSu: requested number of processors reduced to: 1 ShMem 1 Linda.
.
.
.
PrsmSu: requested number of processors reduced to: 1 ShMem 1 Linda.
PrsmSu: requested number of processors reduced to: 1 ShMem 1 Linda.
Symmetrizing basis deriv contribution to polar:
IMax=3 JMax=2 DiffMx= 0.00D+00
G2DrvN: will do 1 centers at a time, making 529 passes doing MaxLOS=2.
Estimated number of processors is: 3
Calling FoFCou, ICntrl= 3107 FMM=T I1Cent= 0 AccDes= 0.00D+00.
Calling FoFCou, ICntrl= 3107 FMM=T I1Cent= 0 AccDes= 0.00D+00.
CoulSu: requested number of processors reduced to: 4 ShMem 1 Linda.
Calling FoFCou, ICntrl= 3107 FMM=T I1Cent= 0 AccDes= 0.00D+00.
CoulSu: requested number of processors reduced to: 4 ShMem 1 Linda.
.
.
.
Calling FoFCou, ICntrl= 3107 FMM=T I1Cent= 0 AccDes= 0.00D+00.
CoulSu: requested number of processors reduced to: 4 ShMem 1 Linda.
Erroneous write. Write 898609344 instead of 2097152000.
fd = 4
orig len = 3177921600 left = 3177921600
g_write
Could you suggest some contemporary research topics in Operations Research (OR)?
In your opinion, which research topics of OR could be impactful in the next decade?
Thanks in advance.
Let us discuss the advantages, disadvantages, and use of powerful decomposition techniques like Benders decomposition for large-scale optimization. I invite my esteemed colleagues and fellow researchers to share important literature, implementation approaches, and potential application areas of decomposition algorithms in this forum.
How can I get MATLAB code for solving the multi-objective transportation problem and the traveling salesman problem?
In most AI research the goal is to achieve higher-than-human performance on a single objective.
I believe that in many cases we oversimplify the complexity of human objectives, and therefore I think we should perhaps step back from improving on human performance,
and rather focus on understanding human objectives first, by observing humans in the form of imitation learning while still exploring.
In the attachment I added a description of the approach that I believe could enforce more human-like behavior.
However, I would like advice on how I could formulate a simple imitation learning environment to show a proof of concept.
One idea of mine was to build a gridworld simulating a traffic-light scenario: while the agent is only rewarded for crossing the street, we still want it to respect the traffic rules.
Kind regards,
Jasper Busschers, master's student in AI
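A proof-of-concept environment along those lines can be tiny: a one-dimensional road with a traffic light, where the reward only pays for crossing and rule-following (waiting on red) is deliberately left out of the reward, to be supplied by the imitation data. A sketch; all names, positions, and dynamics are illustrative:

```python
import random

class TrafficCrossing:
    """Agent at position 0 must reach position 4; a light governs the crossing."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.pos, self.light, self.t = 0, "red", 0
        return (self.pos, self.light)

    def step(self, action):            # 0 = wait, 1 = move forward
        self.t += 1
        if self.t % 3 == 0:            # light toggles every 3 steps
            self.light = "green" if self.light == "red" else "red"
        if action == 1:
            self.pos += 1
        done = self.pos >= 4
        reward = 1.0 if done else 0.0  # only crossing is rewarded;
        # "respect the light" is NOT in the reward: it must come from imitation
        return (self.pos, self.light), reward, done

env = TrafficCrossing()
state = env.reset()
total, done = 0.0, False
while not done:
    state, r, done = env.step(1)       # a rule-ignoring agent still earns reward
    total += r
print(total)  # 1.0
```

A reward-maximizing agent will cross on red, which is exactly the gap between reward and intended behavior that imitation data (demonstrations that wait for green) would have to close, so the environment directly exhibits the phenomenon the attachment describes.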
I want to model an energy storage system that levels power consumption using given data. Where should I start looking to learn how to create one? And most importantly, what would be the best optimization software for modeling my problem: MATLAB's optimization tools, GAMS, or Gurobi? Any suggestions would be appreciated.
I am doing simulations of gait cycles in which I use a contact model to recreate ground reactions, and I try to optimize some parameters in my model to make it fit to my experimental measures.
I want to know if there is a standard for a good enough error in this particular field, or if it really depends on my application.
In other words, I want to know if there is a threshold at which I can consider my optimization adequate.
I am trying to run an interaction study between two molecules for hydrogen abstraction (the HAT mechanism). When I put the two molecules together in Gaussian with the 3-21G basis set, the HAT happens, but when I take the same input file and start optimizing with the higher 6-31G(d) basis set from the beginning, it does not result in H-atom abstraction. Can anyone suggest why this is so?
How can I select only 3 buses for adding capacitors, out of 21 candidate buses, using the optimization scenario that does not use loss sensitivity factors? What is the condition for this selection?
I put the capacitors in the 3 buses (out of 21 candidates) that have the maximum injected VAr, but when I run the code, the results (locations, sizes, minimum power loss) differ in each run (not in each iteration). Is this normal?
Hello everyone, good day.
This is the first time I am choosing a camera; please guide me if you can.
Do cameras perform software optimizations (on the image captured through the lens), for example to increase the image contrast?
Is software optimization generally done by the manufacturer in the camera, or does the user have to do the desired optimization himself?
Is it possible for the user to apply software changes to increase the image quality of the cameras?
Thank you for your attention.
Best regards.
When I performed the optimization, all four convergence criteria of the minimum were met. I then carried out a frequency calculation, in which all frequencies were real; but on reviewing the convergence criteria of the frequency job, two criteria were not met:
Item Value Threshold Converged?
Maximum Force 0.000022 0.000450 YES
RMS Force 0.000006 0.000300 YES
Maximum Displacement 0.014118 0.001800 NO
RMS Displacement 0.002399 0.001200 NO
In addition, the Gaussian 09 output file does not report an error during the calculation.
What do I have to do to fix the convergence criteria of the frequency calculation?
The molecules contain Fe, B, C, H, and F. The SCF energies come very close to converging, but they just don't converge. I tried increasing the number of SCF cycles, changing the input geometry, etc. I am using the def2-TZVPD basis set.
Journal of Industrial & Management Optimization (JIMO) is an open access journal. You pay a substantial amount to publish a paper. When you go to the website of its publisher, American Institute of Mathematical Sciences (AIMS Press), it seems that it is not really based in the United States. I am not sure if it is a legitimate professional organization or if it is a predatory publisher. They have a large number of open access journals. On the other hand, their handling of papers is terrible: extremely slow and low-tech, which is not typical for predatory journals. It may take 13 months to get an editorial rejection, for instance. Furthermore, they don't have an online submission system with user profiles on it, you just submit the paper on a website, and they give you a URL to check your paper's status, which makes your submission open to anyone who has the URL. It has an impact factor of 1.3, which makes me puzzled. Any comments on this organization and the journal will be appreciated.
The properties of this powder compact depend greatly on the process parameters. In order to obtain the best final product, it is necessary to optimize the process parameters, but which tools are used to optimize these parameters?
Are there sources explaining how ANFIS code can be generated so that it can be optimized by Particle Swarm Optimization (PSO)?
Hi all,
I am optimising 4-anilino-6,7-ethylenedioxy-5-fluoroquinazoline (a series of 4-anilino-5-fluoroquinazolines with varying R groups) for an internship.
I get slightly different optimized structures when optimizing from a self-built molecule compared with the compound downloaded from MolView. This difference may come down to the difference in the initial state, with the optimization falling into a (local?) minimum-energy state.
Attached are the two orientations. They are almost identical, except for the CH2 angles, which are flipped.
What causes this and is it important for finding the global minimum energy structure and subsequent calculations?
I am doing NSGA-II optimization. For this I have developed some equations through RSM in Minitab. The problem I am now facing is how to set the constraints: 4 variables, 3 objectives. I am getting optimized values that are beyond the experimental response domain. I think a proper constraint may solve this.
Can anyone provide me with MATLAB code for Particle Swarm Optimization to train ANFIS?
Hello everyone,
Is there any optimization procedure, within the Whitehead (1965) tray method, to extract nematodes from hydrophobic soils such as forest soils?
Best regards
I have a constrained Optimization problem:
Decision variables:
Xih = amount of item i in item group h; Xih is a real number.
Parameters:
Aih = cost of item i in item group h
Cihj = property j of item i (per unit) in item group h
Bj = minimum total of property j
Dj = maximum total of property j
mih = minimum of Xih
Mih = maximum of Xih
Problem Formulation:
Min/Max z1= ∑h ∑i Aih Xih
s.t.
Bj ≤ ∑h ∑i Cihj * Xih ≤ Dj
where j indexes the properties of the items
mih ≤ Xih ≤ Mih
So, how do I formulate this problem if I need only N items from a group? Something like:
something like
OR(X1h , X2h , X3h) == 1
OR(X1h , X2h , X3h) == 2 etc.
That is, can I constrain the number of items selected in each group?
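The standard way to cap the number of items chosen per group is to add a binary selection variable per item and link it to $X_{ih}$ through its bounds (this turns the model into a mixed-integer program, so a pure LP solver no longer suffices). With $N_h$ the required count for group $h$:

```latex
y_{ih} \in \{0, 1\}, \qquad
m_{ih}\, y_{ih} \;\le\; X_{ih} \;\le\; M_{ih}\, y_{ih}, \qquad
\sum_{i} y_{ih} \;=\; N_h \quad \text{for each group } h.
```

Here $y_{ih} = 0$ forces $X_{ih} = 0$ (assuming $m_{ih} \ge 0$), while $y_{ih} = 1$ restores the original bounds; replacing "=" with "$\le$" in the counting constraint allows *at most* $N_h$ items instead of exactly $N_h$.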
In the case of many-objective models, the epsilon-constraint method has been seen as an efficient approach. This method is considered an exact approach, and the existing literature has discussed it extensively. So I have the following two questions:
1. How to implement this method (share the link in case of any source code available).
2. What other similar or different exact methods are available for dealing with many-objective problems?
#OperationsResearch #Closed-loopSupplyChain #Optimization
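On question 1, the method itself is straightforward to implement with any single-objective solver: keep one objective, move each of the others into a ≤ ε constraint, and sweep ε to trace the front. A minimal sketch with SciPy on a toy bi-objective problem (the two functions are illustrative, not from any particular model):

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: x[0] ** 2 + x[1] ** 2             # objective that is kept
f2 = lambda x: (x[0] - 2.0) ** 2 + x[1] ** 2     # objective moved into a constraint

pareto = []
for eps in np.linspace(0.2, 4.0, 8):             # sweep the epsilon bound on f2
    res = minimize(f1, x0=[1.0, 0.0],
                   constraints=[{"type": "ineq",
                                 "fun": lambda x, e=eps: e - f2(x)}])
    pareto.append((f1(res.x), f2(res.x)))
print(pareto)  # (f1, f2) pairs tracing the trade-off
```

For exact MILP/LP many-objective models the same loop runs around CPLEX or Gurobi instead of SciPy; the augmented epsilon-constraint variant (AUGMECON/AUGMECON2) additionally guarantees only non-dominated points are returned.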
I have a question concerning the update algorithms used for the Hessian during optimizations and transition-state searches.
In a paper by J. Baker (J. Comput. Chem. 7, 385-395 (1986)) and in some manuals (GAMESS, Gaussian), the statement is made that while the BFGS update is *better* for a minimization, for a transition-state search the DFP update is to be preferred. Of course, no reason is provided to support this statement.
One of the books frequently referred to (Fletcher: Practical Methods of Optimization) makes the following statements:
- Provided that the function being optimized is quadratic AND the line search is exact, positive definiteness of the Hessian is preserved (for both the DFP and BFGS formulas).
- For inexact line searches, global convergence of the BFGS method has been proved; for DFP, however, this could not be shown.
Is this somehow related to the fact that the DFP method is recommended for TS searches? Why is this so? Does this mean that the BFGS method will lead to a positive-definite Hessian, even if the initial Hessian has the required negative eigenvalue?
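For reference, with $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$, the two Hessian updates in question are:

```latex
B_{k+1}^{\mathrm{BFGS}} = B_k
  - \frac{B_k s_k s_k^\top B_k}{s_k^\top B_k s_k}
  + \frac{y_k y_k^\top}{y_k^\top s_k},
\qquad
B_{k+1}^{\mathrm{DFP}} =
  \left( I - \frac{y_k s_k^\top}{y_k^\top s_k} \right) B_k
  \left( I - \frac{s_k y_k^\top}{y_k^\top s_k} \right)
  + \frac{y_k y_k^\top}{y_k^\top s_k}.
```

The two formulas are duals of each other (they swap the roles of the Hessian $B$ and its inverse), and both preserve positive definiteness when the curvature condition $y_k^\top s_k > 0$ holds, which is the property one wants for minimization but precisely *not* for retaining the negative eigenvalue a TS search needs.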
Any hint is appreciated!
Thanks,
In Aspen Plus I have the following problem (see attached figure):
I have an RGibbs block which calculates a distribution for a predefined set of products. However, I would like to go beyond this conventional Gibbs energy minimization by additionally considering the calculated energy demand of the block as a problem constraint. The strategy, understood as a conditional statement, would be something like:
IF Qprocess = Qliterature THEN ---> Go on with the simulation
Else ---> Constrain the product yields ---> Repeat until the condition is met
My problem is that I don't know exactly how to set those yield constraints for the declared RGibbs products. I thought of the "Inerts" declaration tab, but when you specify a mole flow there, the software takes it as the actual output flow rather than a bound.
Any thoughts on the issue will be much appreciated.
Hello everyone,
Kindly suggest a few pros and cons of the optimization techniques used to find the minimum number of test runs.
In his name is the judge
Hi,
I wrote a subroutine in OpenSees for an active TLCD, or tuned liquid column gas damper (TLCGD), and assigned it to some structures; it seems to work correctly.
In the next step I want to optimize the TLCGD locations on each story against objectives such as maximum displacement, torsion ratio, etc., so I have to use a multi-objective optimization code or toolbox in MATLAB and Simulink (NSGA-II may be the best choice). For this purpose I want to run the NSGA-II algorithm in MATLAB and have it call my OpenSees (Tcl) code and run it; after each time-history analysis, NSGA-II should modify the damper locations (in the OpenSees code) so as to improve the objectives, and then run the analysis again and again until the best locations for the dampers are found.
Note that I actually want changing the damper locations to be part of the NSGA-II algorithm, with the algorithm itself automatically relocating the dampers to get the best answer.
One good solution might be OpenSeesPy, but I think it is not freely accessible and I cannot get it from Iran, so I am really in over my head in this case.
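The MATLAB-calls-OpenSees coupling usually reduces to three steps inside the objective function: write the candidate damper locations to a file, invoke the OpenSees executable on the Tcl script, and parse the objective values it writes out. The same pattern, sketched in Python; the OpenSees command, file names, and output format are placeholders, and a throwaway `python -c` command stands in for the solver so the sketch actually runs:

```python
import subprocess, sys

def evaluate_design(damper_locations):
    """Run an external solver for one candidate design and return its objective."""
    # Real case: cmd = ["OpenSees", "model.tcl"], with the locations written to a
    # file that model.tcl reads. Here a stand-in command echoes a fake result.
    cmd = [sys.executable, "-c",
           f"print(sum({damper_locations!r}))"]   # placeholder 'analysis'
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return float(out.stdout.strip())              # parse the objective value

print(evaluate_design([1, 3, 5]))  # 9.0
```

In MATLAB the analogous calls are `system('OpenSees model.tcl')` plus file I/O, wrapped as the fitness function handed to `gamultiobj` or an NSGA-II toolbox, so the algorithm relocates the dampers automatically on every evaluation.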
Any help is greatly appreciated.
Take refuge in the right.
In order to measure the performance of many-objective optimization methods, artificial test problems such as MOPs, DTLZ, WFG, etc. are used, but these are not real problems, and the applicability of evolutionary methods should also be assessed on real engineering problems.
I would appreciate it if you could tell me about real-world engineering optimization problems with more than 6 objectives that you have encountered and solved!
With all respect,
I am working on the isolation and identification of ESBL (CTX-M, SHV, TEM) positive isolates. I have been troubleshooting for more than two weeks, but I still have not obtained the perfect band for my PCR products. I have run a PCR gradient to determine the annealing temperature, and used 2.5 microliters of EtBr in a 2% gel at 80 volts, 200 Amp, for 30 minutes of electrophoresis. But I still get the same type of band over and over again. I have attached a picture of my band below. I would be very grateful if anybody could help me find the problem and help me in setting up my PCR optimization.
Thanks in advance.
I developed an automatic tuner using Bayesian optimization.
The plant system is a real vehicle.
Furthermore, there is no stability problem, because this system is a kind of semi-active control system.
Could anyone suggest a better AI algorithm to use for real-time calibration of this semi-active control problem, with more than 500 parameters, in real-time vehicle experiments?
This does not seem to be a simple problem, and it needs a lot of domain knowledge about the tuning process and control algorithms. However, I would like to find and build an automated tuner imitating what human test drivers do.
I have a multi-objective optimization problem with the following properties:
Objective functions: three minimization objective functions; two nonlinear functions and one linear function
Decision variables: two real variables (bounded)
Constraints: three linear constraints (two bounding constraints and one relationship constraint)
Problem type: non-convex
Solution required: global optimum
I have used two heuristic algorithms to solve the problem, NSGA-II and NSGA-III.
I have performed NSGA-II and NSGA-III for the following instances (population size, number of generations, maximum number of function evaluations (i.e. pop size x no. of gen)): (100,10,1000), (100,50,5000), (100,100,10000), (500,10,5000), (500,50,25000), and (500,100,50000).
My observations:
Hypervolume increases with the number of function evaluations. However, for a given population size, as the number of generations increases, the hypervolume decreases, whereas I think it should rather increase. Why am I getting such a result?
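It may help to recompute the hypervolume by hand on small fronts to rule out an implementation or reference-point issue: the reference point must be dominated by (i.e. worse than) every front you compare, and it must be identical across all runs, or the values are not comparable and can appear to shrink. A minimal 2-objective (minimization) hypervolume for checking:

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimization front w.r.t. a reference point."""
    # keep only nondominated points inside the reference box, sorted by f1
    pts = []
    for f1, f2 in sorted(front):
        if f1 < ref[0] and f2 < ref[1] and (not pts or f2 < pts[-1][1]):
            pts.append((f1, f2))
    hv = 0.0
    for i, (f1, f2) in enumerate(pts):           # sum the staircase slabs
        next_f1 = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        hv += (next_f1 - f1) * (ref[1] - f2)
    return hv

print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4)))  # 6.0
```

For three objectives the same sanity checks apply (fixed reference point, objectives consistently minimized); if those hold and hypervolume still falls with more generations, the run may be losing diversity, which is worth checking by plotting the final fronts directly.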
I designed a fuzzy logic controller as a MATLAB function. Then I called this function in Simulink using an Interpreted MATLAB Function block. The program worked, but it took too much time (about 2 days) to obtain a result. I want to speed up my controller, so how can I speed up an interpreted function in Simulink?
Note: I did not use the Fuzzy Logic Toolbox because I want to optimize the fuzzy membership parameters.
For an Integer Linear Programming problem (ILP), an irreducible infeasible set (IIS) is an infeasible subset of constraints, variable bounds, and integer restrictions that becomes feasible if any single constraint, variable bound, or integer restriction is removed. It is possible to have more than one IIS in an infeasible ILP.
Is it possible to identify all possible Irreducible Infeasible Sets (IIS) for an infeasible Integer Linear Programming problem (ILP)?
Ideally, I aim to find the MIN IIS COVER, which is the smallest cardinality subset of constraints to remove such that at least one constraint is removed from every IIS.
Thanks for your time and consideration.
Regards
Ramy
I am currently working on PSO, where I need to test my algorithm on benchmark optimization test functions. Liang et al. published a paper ( ) with 28 test functions. Earlier they had a link to their MATLAB code repository, but I think it has now been removed. Can anyone help me with the MATLAB code for all 28 of these functions?
Thanks in advance.
Hello everyone,
I want to set a gap limit in my model in CPLEX CP Optimizer; how can I do this?
Respected all,
I am trying to learn MCDM techniques, so I am currently watching a lecture on
Can anyone please tell me where I can find solved code in Python or R related to MCDM techniques?
I am getting the fmin by supplying random x values.
Can anybody tell me how I can find the fmin when starting from x = (5.0, 5.0)?
****************************************************************************
from scipy.optimize import fmin

def func(x):
    # Rosenbrock function
    f = 100*(x[1] - x[0]**2)**2 + (1 - x[0])**2
    return f

x0 = [-1.0, 1.0]
op = fmin(func, x0)
print(op)
***************************************************************************
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 100
Function evaluations: 187
[1.00000102 1.00000004]
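If the goal is to start the search from the point (5.0, 5.0), it can simply be passed as the initial guess x0; a sketch (the raised iteration limits are a precaution, since Nelder-Mead may need more steps from a distant starting point):

```python
from scipy.optimize import fmin

def func(x):
    # Rosenbrock function: global minimum 0 at (1, 1)
    return 100*(x[1] - x[0]**2)**2 + (1 - x[0])**2

x0 = [5.0, 5.0]                    # start the simplex search here
op = fmin(func, x0, maxiter=5000, maxfun=10000, disp=False)
print(op)                          # should land near [1, 1]
```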
I wonder whether there are any theoretical or practical applications (papers) of minimizing a neural network function.
Precisely, let $f(w,x)$ be a real-valued neural network function whose weights $w$ have already been determined. I am now interested in minimizing $f(w,x)$ over all $x$ in some space. Is there any application for this minimization?
Hello
I have to run an optimization using the genetic algorithm (GA) with a defined initial population.
The same problem is also optimized using PSO, and this is the options command for the GA:
options = optimoptions(@ga,'Generations',Max_iteration,'OutputFcns',@outputfunction,'PopulationSize',50,'InitialPopulationMatrix',initialX,'TolFun',1e-10);
where initialX is my initial population.
The issue is that I am not getting the same first-run value for both algorithms.
Can anyone help me with this?
In the maintenance optimization context, researchers use a structure that leads to the renewal reward theorem, and they use this theorem to minimize the long-run cost rate in the maintenance optimization problem.
However, in the real world, structures that lead to the renewal reward theorem may not arise. How should one deal with such problems?
Best,
Hasan
I’m defining a number system in which the numbers form a polynomial ring with cyclic convolution as multiplication.
My DSP math is a bit rusty, so I’m asking: when does the inverse of circular convolution exist? It can be computed easily using the FFT, but I’m uncertain about the conditions for its existence. I would like a full number system in which every number has a single well-defined inverse. Another part of my problem is differentiation. Let c be a number in my number system C[X], where the coefficients are complex numbers. Linear functions can be differentiated easily, but I’m struggling to minimize the mean squared error (for i = 0..deg(C[X]), s_i(c) selects the i-th coefficient of the number, s_i: C[X] -> C):
error(W) = E_xy{ sum_i 0.5 |s_i(Wx - y)|^2 }
I can solve the problem in the case of complex numbers W ∈ C, but not in the case W ∈ C[X], where multiplication is circular convolution. In practice, my linear neural network code diverges to infinity when I try to minimize the squared error.
Any pointers to online material would be appreciated.
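On the invertibility question: under the DFT, circular convolution becomes pointwise multiplication, so an element is invertible exactly when none of its DFT coefficients is zero (elements with a vanishing DFT bin are zero divisors). A numpy sketch, with an arbitrary example vector:

```python
import numpy as np

def circ_inverse(c):
    """Inverse of c under length-n circular convolution, if it exists."""
    C = np.fft.fft(c)
    if np.any(np.isclose(C, 0.0)):   # a zero DFT bin means a zero divisor
        raise ValueError("not invertible: DFT of c has a zero coefficient")
    return np.fft.ifft(1.0 / C)

def circ_conv(a, b):
    """Circular convolution computed via the DFT."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))

c = np.array([2.0, 1.0, 0.0, 0.0])
delta = circ_conv(c, circ_inverse(c))   # should be the identity [1, 0, 0, 0]
print(np.round(delta.real, 6))
```

Because some elements are necessarily zero divisors (e.g. any vector whose DFT has a zero bin), C[X] with cyclic convolution is a ring but not a field, so a "full" number system with an inverse for every nonzero element is not achievable this way.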
Hi everyone.
I have a question about finding a cost function for a problem. I will ask a simplified version first, then the main question. I would be grateful for help with either or both.
1- What methods are there for finding the optimal weights of a cost function?
2- Suppose you want to find the optimal weights for a problem whose output you cannot measure (e.g., death). In other words, you know the factors contributing to death, but you know neither the weights nor the output, because you cannot really test or simulate death. How can we find the optimal (or sub-optimal) weights of that cost function?
I know it's a strange question, but it has many applications if you think about it.
Best wishes
The success of deep learning depends on setting its parameters appropriately to achieve high-quality results, and the number of hidden layers and the number of neurons in each layer have a major influence on performance. Manual parameter setting and grid search somewhat ease the user's task of setting these important parameters, but both techniques can be very time-consuming. I have heard a lot about the potential of particle swarm optimization (PSO) for optimizing parameter settings.
I want to optimize deep learning parameters to save my valuable computational resources.
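The idea can be sketched with a minimal PSO loop; the two decision variables stand in for, say, layer count and neuron count, and the quadratic objective is a placeholder for the real (expensive) train-and-validate evaluation:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(p):
    # Placeholder for "train the network with these hyperparameters and
    # return the validation loss"; here just a toy quadratic bowl.
    return float(np.sum((p - 3.0) ** 2))

n_particles, n_iter, dim = 20, 100, 2
lo, hi = 1.0, 10.0                           # hyperparameter bounds
w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social

x = rng.uniform(lo, hi, (n_particles, dim))  # positions
v = np.zeros_like(x)                         # velocities
pbest = x.copy()
pbest_val = np.array([objective(p) for p in x])
g = pbest[np.argmin(pbest_val)].copy()       # global best

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
    x = np.clip(x + v, lo, hi)
    vals = np.array([objective(p) for p in x])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = x[better], vals[better]
    g = pbest[np.argmin(pbest_val)].copy()

print(g)   # near the placeholder optimum [3, 3]
```

For integer-valued hyperparameters such as layer counts, positions are typically rounded before each evaluation; the main cost driver in practice is the number of objective calls, since each one means a full training run.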
Hello,
I am working on the classification of two different datasets of apple fruit, checking whether the apples are rotten or fresh.
I have designed a CNN model from scratch and have varied hyperparameters such as the mini-batch size, number of epochs, learning rate, and optimizer.
I would like to know how to compare the results. There are two possibilities: either I draw graphs for each hyperparameter separately, or I compare on accuracy alone.
I want to show that my model works well for a particular dataset with particular hyperparameters.
Please guide me.
Thanks
I need help formulating the following in a linear fashion:
- I have a vector: P(n) = [1, 2, 3, 4, 5, ..., n].
- I have binary variables of the same length as P: y(1), y(2), ..., y(n); each y(i) can only take the value 0 or 1.
- A new vector Q(n) is needed such that if y(i) is 1, then the i-th element of Q(n) should be 0; in the remaining positions, P(n) should repeat itself.
- Example: if n = 6 and y(1) and y(4) are 1, then Q(1) and Q(4) should be 0, and the vector Q will be:
Q = [0 1 2 0 1 2].
- Another example: if y(3) is 1, then Q should be:
Q = [1 2 0 1 2 3].
Basically, besides the zeros in the i-th positions, the vector P should restart from 1 in the non-zero positions of Q.
Can you please help me formulate this problem?
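The two examples satisfy the recurrence Q(i) = 0 if y(i) = 1, else Q(i) = Q(i-1) + 1 (with Q(0) = 0), which is what "P restarting after each zero" amounts to. A sketch, with the standard big-M linearization of the recurrence noted in comments for use inside a MILP:

```python
def build_q(y):
    """Q(i) = 0 where y(i) = 1; elsewhere the count restarts at 1
    after every zeroed position (1, 2, 3, ... again)."""
    q, prev = [], 0
    for yi in y:
        prev = 0 if yi == 1 else prev + 1
        q.append(prev)
    return q

# Inside a MILP, the same recurrence linearizes with big-M (any M >= n):
#   Q_i <= M * (1 - y_i)            (forces Q_i = 0 when y_i = 1)
#   Q_i <= Q_{i-1} + 1
#   Q_i >= Q_{i-1} + 1 - M * y_i    (forces the increment when y_i = 0)
#   Q_i >= 0

print(build_q([1, 0, 0, 1, 0, 0]))  # [0, 1, 2, 0, 1, 2]
print(build_q([0, 0, 1, 0, 0, 0]))  # [1, 2, 0, 1, 2, 3]
```

When y_i = 0 the second and third constraints pin Q_i to Q_{i-1} + 1 exactly, and when y_i = 1 the first and fourth pin it to 0, so the linearization reproduces the recurrence.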
I am optimizing my molecule with the LC-B3LYP method and the 6-311G(d,p) basis set, but the link dies with the error:
''Error termination via Lnk1e in C:\G09W\l301.exe''
I am also attaching my output file. What could be the reason?
In decomposition-based many-objective optimization, for example NSGA-III, a set of predefined reference points is associated with the population members in order to decompose a many-objective problem into a series of single-objective subproblems.
So how does this association work?
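The reference points themselves are generated with the Das-Dennis simplex-lattice scheme: all vectors of M nonnegative multiples of 1/H that sum to 1, giving C(H+M-1, M-1) points on the unit simplex. A sketch of the generation via stars-and-bars enumeration:

```python
from itertools import combinations

def das_dennis(M, H):
    """All points (h1/H, ..., hM/H) with nonnegative integer parts
    summing to H, enumerated via stars and bars."""
    points = []
    for bars in combinations(range(H + M - 1), M - 1):
        prev, parts = -1, []
        for b in bars:
            parts.append(b - prev - 1)   # stars between consecutive bars
            prev = b
        parts.append(H + M - 2 - prev)   # stars after the last bar
        points.append([p / H for p in parts])
    return points

pts = das_dennis(M=3, H=4)
print(len(pts))   # C(6, 2) = 15 reference points on the unit simplex
```

Roughly, NSGA-III then normalizes the objectives, computes each member's perpendicular distance to the line through the origin and each reference point, associates the member with its closest line, and uses the per-line niche counts to pick survivors from the last accepted non-domination front.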
I want to compare metaheuristics on the optimization of Lennard-Jones clusters. There are many papers available that optimize Lennard-Jones clusters. Unfortunately, none of them provides the upper and lower bounds of the search space. To conduct a fair comparison, all metaheuristics should search within the same bounds. I found the global minima here: http://doye.chem.ox.ac.uk/jon/structures/LJ/tables.150.html but the search space is not defined.
Can anyone please tell me what the recommended upper and lower bounds of the search space are?
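For reference, the objective in question is the total pairwise energy in reduced units; a common (but not standardized, hence the need for the question) convention is to box each coordinate in a cube whose side scales with the cluster size. A sketch of the energy function:

```python
import numpy as np

def lj_energy(coords):
    """Total Lennard-Jones energy in reduced units (epsilon = sigma = 1):
    E = sum over atom pairs of 4 * (r^-12 - r^-6)."""
    x = np.asarray(coords, dtype=float).reshape(-1, 3)
    e = 0.0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            r2 = float(np.sum((x[i] - x[j]) ** 2))
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 * inv6 - inv6)
    return e

# Two atoms at the pair-equilibrium distance 2**(1/6): energy is -1.
dimer = [[0.0, 0.0, 0.0], [2**(1/6), 0.0, 0.0]]
print(lj_energy(dimer))  # approximately -1.0
```

Whatever bounds are chosen, they should be reported alongside the results, since a tighter box makes the problem markedly easier for every metaheuristic being compared.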
Hi all,
I have a large mixed-integer programming (MIP) optimization problem with a high risk of infeasibility. The branch-and-cut algorithm in GLPK spends hours searching for an optimal solution, only to possibly report infeasibility at the end. I want to do a pre-screening before starting the actual optimization, to make sure there is a good chance of a feasible solution. I accept that the only certain way to check feasibility is to run the optimization, but any heuristic, even one with false infeasibility alerts (false positives), could be helpful. My focus is on feasibility rather than optimality. Do you have any suggestions for an algorithm, software package, or Python library to do this pre-screening?
Thanks for your time and kind reply.
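One cheap pre-screen is the LP relaxation: if the problem with integrality dropped is already infeasible, the MIP certainly is; if the relaxation is feasible, the MIP merely might be. A sketch with scipy, where the toy instance is hypothetical:

```python
from scipy.optimize import linprog

def lp_relaxation_screen(c, A_ub, b_ub, bounds):
    """Pre-screen a MIP by solving its LP relaxation.
    'infeasible' is definitive; 'maybe feasible' is only a good sign."""
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return "infeasible" if res.status == 2 else "maybe feasible"

# Toy instance: x1 + x2 <= 1 together with x1 >= 1 and x2 >= 1 is
# infeasible even before any integrality restriction.
print(lp_relaxation_screen(
    c=[1, 1], A_ub=[[1, 1]], b_ub=[1], bounds=[(1, None), (1, None)]))
```

GLPK can run the same check directly (glpsol's --nomip option treats all integer variables as continuous, if I recall the CLI correctly), and the relaxation typically solves in a tiny fraction of the branch-and-cut time. Note the converse does not hold: a feasible relaxation can still hide an integer-infeasible MIP.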