Questions related to Stochastic Optimization
In robust optimization, random variables are modeled as uncertain parameters belonging to a convex uncertainty set, and the decision-maker protects the system against the worst case within that set.
In the context of nonlinear multi-stage max-min robust optimization problems:
Which of the robustness models, such as strict robustness, cardinality-constrained robustness, adjustable robustness, light robustness, regret robustness, and recoverable robustness, work best in this setting?
How can max-min robust optimization problems be solved efficiently without linearization or approximation? Which algorithms are available?
How should nested robust optimization problems be approached?
For example, the problem can be security-constrained AC optimal power flow.
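For the strict (worst-case) robustness model with box uncertainty, the robust counterpart of a linear constraint can be written down exactly, with no linearization or approximation needed: for x >= 0, the worst case of a·x over a box is attained at the box's upper corner. A minimal sketch (all numbers are invented for illustration):

```python
# Strict robustness for a linear constraint a.x <= b with box uncertainty
# a_j in [a_nom_j - delta_j, a_nom_j + delta_j]. For x >= 0 the worst case
# is attained at the upper corner, so the robust counterpart is exact.
a_nom, delta, b = [1.0, 1.0], [0.2, 0.1], 10.0

def worst_case_lhs(x):
    """max over the box of a.x, for x >= 0: evaluate at the upper corner."""
    return sum((an + d) * xj for an, d, xj in zip(a_nom, delta, x))

x_nominal = [10.0, 0.0]                        # feasible only for the nominal a
x_robust = [b / (a_nom[0] + delta[0]), 0.0]    # feasible for every a in the box

print(worst_case_lhs(x_nominal))  # 12.0, exceeds b = 10: nominal solution is unsafe
print(worst_case_lhs(x_robust))   # back at the bound b: robustly feasible
```

The same corner argument is what cardinality-constrained (Bertsimas-Sim) robustness restricts, by letting only a budgeted number of coefficients reach their extremes.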
In a situation where the process simulation fails to converge for a particular set of decision variables, what would be the sequence of lines of code to close the unconverged simulation and reopen the saved file before calling the simulation file with a new set of decision variables?
Also, when an error message appears during the simulation, the remaining VBA code cannot be executed. How do I use a "GoTo" statement to skip the remaining VBA code and continue the optimization by calling the simulation file with a new set of decision variables?
Can anyone help me with a similar VBA script that I can adapt to my code?
Attached is the code for the interaction between Excel and Aspen Plus. Any ideas or comments about the code that address the issues above would be highly appreciated.
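A common pattern is to wrap each simulation call in an `On Error GoTo` handler that jumps to a cleanup label, closes the (possibly unconverged) run, and reopens the saved file on the next call. A hedged VBA sketch; the COM class name, method names, and file path below are typical of Aspen Plus automation but are assumptions to adapt to your own code:

```vba
Sub RunOneEvaluation(decisionVars As Variant)
    Dim sim As Object
    On Error GoTo SimFailed              ' jump here if the run raises an error

    ' "Apwn.Document" and the method names below are assumptions; replace
    ' them with whatever your attached interaction code actually uses.
    Set sim = CreateObject("Apwn.Document")
    sim.InitFromArchive2 "C:\path\to\model.bkp"   ' reopen the saved file (placeholder path)
    ' ... write decisionVars into the simulation tree here ...
    sim.Engine.Run2                               ' run the simulation
    ' ... read results / convergence status here ...

CleanUp:
    On Error Resume Next    ' ignore any errors raised while shutting down
    sim.Close               ' close the (possibly unconverged) simulation
    Set sim = Nothing
    Exit Sub

SimFailed:
    ' An error occurred: skip the remaining code and go straight to cleanup,
    ' so the optimizer can call this routine again with new decision variables.
    Resume CleanUp
End Sub
```

Calling this Sub once per candidate solution keeps every evaluation isolated: whether the run converges, fails, or raises a COM error, control always passes through `CleanUp` before the next set of decision variables is tried.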
When accounting for demand uncertainty in humanitarian logistics planning, one of the most common approaches is stochastic optimization, in which the demand is generally assumed to follow a certain distribution (usually normal or uniform).
My question is: how can these distributions be identified when there is no historical data (as is the case in most disasters)?
It seems that by solving the stationary form of the forward Fokker-Planck equation we can find the equilibrium solution of a stochastic differential equation.
Is the above statement true? Is it a conventional way to find the equilibrium solution of an SDE? And do SDEs always have an equilibrium solution?
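The statement is true where a stationary density exists: setting the time derivative in the Fokker-Planck equation to zero yields the invariant density, and this is indeed the conventional route. Not every SDE has one, though (plain Brownian motion, for instance, has no normalizable stationary density). For the Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW, the stationary Fokker-Planck solution is Gaussian with variance sigma^2/(2*theta); a numerical sanity check (parameters chosen arbitrarily):

```python
import math
import random

# Check the stationary Fokker-Planck prediction for the OU process by
# simulating the SDE with Euler-Maruyama well past its mixing time.
random.seed(0)
theta, sigma = 1.0, 1.0
dt, n_steps, n_paths = 0.01, 1000, 3000   # horizon T = 10 >> 1/theta

samples = []
for _ in range(n_paths):
    x = 0.0
    for _ in range(n_steps):
        x += -theta * x * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    samples.append(x)

var_emp = sum(s * s for s in samples) / n_paths   # stationary mean is 0
var_theory = sigma ** 2 / (2 * theta)             # from the stationary FPE
print(var_emp, var_theory)
```

The empirical variance should land close to the predicted 0.5, up to Monte Carlo and time-discretization error.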
It would be really good if the suggested journal does not spend much time in revision cycles, because I submitted this algorithm to the Applied Soft Computing journal a year ago, and after six revision cycles they rejected it with no real reasons given.
In process control in engineering, we often need to control a system under a performance index (optimal control) while the system is exposed to uncertainty (parameter uncertainty, disturbances, or noise), and sometimes we need constraints on the states of the system.
There are two approaches: robust optimal control and stochastic optimal control.
When we use robust optimal control (because some bounds on the uncertainty are known), we consider the worst-case scenario; we can apply optimal control, and hard constraints on the states can be satisfied. I think this is a practical approach.
On the other hand, when we cannot specify bounds on the uncertainty but the probability distribution of the uncertainty is known, we must use stochastic optimal control. In this case, hard constraints cannot be defined, and we should use chance constraints, meaning each constraint is satisfied with some level of probability.
Now my question is: is such a definition practical in real-world applications, and is it really applied in industry?
Most constraints exist for safety. For example, we want the temperature of a boiler to be bounded; it is dangerous if the temperature is only bounded with some probability. So I want to know: is the chance constraint a practical definition in real-world engineering applications?
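One reason chance constraints do get used in practice (e.g. in stochastic MPC) is that for Gaussian uncertainty they have an exact deterministic equivalent: the probabilistic bound becomes an ordinary hard bound, tightened by a safety margin that grows with the required confidence level. A minimal sketch (all numbers are invented):

```python
from statistics import NormalDist

# Deterministic equivalent of a Gaussian chance constraint:
# require P(T_setpoint + noise <= T_max) >= level, noise ~ N(0, sigma^2).
# Equivalent hard constraint: T_setpoint <= T_max - z_level * sigma.
T_max, sigma, level = 100.0, 2.0, 0.95    # illustrative boiler numbers

z = NormalDist().inv_cdf(level)           # standard normal quantile
T_setpoint_max = T_max - z * sigma        # tightened deterministic bound

# Check: operating exactly at the tightened bound satisfies the original
# probabilistic constraint with probability exactly `level`.
prob = NormalDist(mu=T_setpoint_max, sigma=sigma).cdf(T_max)
print(T_setpoint_max, prob)
```

So in implementation the controller still enforces a hard constraint; the probability level only sets how much back-off from the true safety limit is applied. For hard safety limits one simply pushes the level toward 1 (or combines the chance constraint with a robust bound on the worst credible disturbance).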
A question for GAMS and/or SDDP experts!
I have two Benders-based LP models, static and dynamic (based on SDDP), for a power plant with storage. Given the capacities of the plant, both models give the same objective function. However, if I add the generation expansion problem on top as the Benders master, the duals generated by the dynamic model for energy storage (in particular) are wrong.
If I calculate the dual manually by increasing the storage capacity by one unit, the objective function changes by the correct dual (which I know from my static model) and not by the number generated by GAMS.
Modeling these processes requires specifying, for each operation of the production process, an interval of authorized duration.
Given the strength of Petri nets in modeling synchronization, parallelism, conflicts, and resource sharing, this tool is seen as a promising research direction for the modeling and evaluation of robustness.
Are there versions of CMA-ES specifically designed for high-dimensional search spaces? Are there any implementations available (preferably in MATLAB)?
I know that polynomial chaos expansion can deal with many distributions, such as the normal, beta, and gamma distributions. But if they occur simultaneously in the equations, how can I handle that case?
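One standard answer is generalized (Wiener-Askey) polynomial chaos: for independent inputs, pair each distribution with its orthogonal polynomial family (Hermite for normal, Jacobi for beta, Laguerre for gamma) and use tensor products of the one-dimensional polynomials as the multivariate basis. A minimal sketch with the recurrences written out; conventions are the usual ones, but verify the weights against your reference (Laguerre shown for the unit-exponential case of the gamma family):

```python
def hermite_e(n, x):
    """Probabilists' Hermite polynomial He_n (orthogonal w.r.t. N(0,1))."""
    h0, h1 = 1.0, x
    if n == 0:
        return h0
    for k in range(1, n):                      # He_{k+1} = x He_k - k He_{k-1}
        h0, h1 = h1, x * h1 - k * h0
    return h1

def laguerre(n, x):
    """Laguerre polynomial L_n (orthogonal w.r.t. the Exp(1) weight)."""
    l0, l1 = 1.0, 1.0 - x
    if n == 0:
        return l0
    for k in range(1, n):                      # (k+1)L_{k+1} = (2k+1-x)L_k - k L_{k-1}
        l0, l1 = l1, ((2 * k + 1 - x) * l1 - k * l0) / (k + 1)
    return l1

def mixed_basis(i, j, xi_normal, xi_gamma):
    """Tensor-product basis Psi_{ij} for one normal and one gamma input."""
    return hermite_e(i, xi_normal) * laguerre(j, xi_gamma)

print(hermite_e(2, 2.0))            # He_2(x) = x^2 - 1      -> 3.0
print(laguerre(1, 0.5))             # L_1(x)  = 1 - x        -> 0.5
print(mixed_basis(2, 1, 2.0, 0.5))  # product of the above   -> 1.5
```

Because the inputs are independent, orthogonality of each one-dimensional family carries over to the product basis, so the usual projection formulas for the PCE coefficients apply unchanged.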
I am interested in optimizing for the fittest metabolic network topology, i.e., the one that survives under evolutionary pressure. Can anyone help me with an evolutionary optimization algorithm? I am having trouble setting up the multi-objective fitness function.
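One common way around a hand-tuned multi-objective fitness function is to avoid scalarizing at all and instead rank candidates by Pareto dominance, as NSGA-II-style algorithms do. A minimal sketch (the objective values below are made up):

```python
# Rank candidates by Pareto dominance instead of a weighted-sum fitness.
# Assumes all objectives are to be maximized.

def dominates(a, b):
    """True if fitness vector a Pareto-dominates b (>= everywhere, > somewhere)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population):
    """Members not dominated by any other member of the population."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Toy example: (growth yield, robustness) scores for candidate topologies.
pop = [(0.9, 0.2), (0.5, 0.5), (0.4, 0.4), (0.1, 0.9)]
print(pareto_front(pop))   # (0.4, 0.4) drops out: dominated by (0.5, 0.5)
```

Selection then favors nondominated candidates (plus a diversity measure such as crowding distance), so no weights between the objectives ever need to be chosen.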
A simple function such as 1/x is not defined at x = 0, and this extends to its first and second derivatives. From what angle can we view the optimal behaviour of this function? A source for a comprehensive discussion of this type of function in the area of optimization would be appreciated.
My proposed answer is that they know what it is that they want to know. They do not know that the optimal statistic exists, let alone that it is linear! Irving Fisher told them that estimation is a choice of functions and so it is a problem in stochastic optimal control, or simply the calculus of variations. Instead of doing the work, they massage the data set until it tells them exactly what the paymasters wish for it to tell. Thus they pride themselves on being his master's voice. Apologies to the author of the RCA Victor logo.
I ran this by two econometricians who are economics Nobel acolytes. They told me, essentially, that these are the only objections they had to econometrics (they advocate that no one should teach econometrics!). Their reason is that the "results" do not confirm the economic prejudices of either one of them. Can you help?
I'm looking for a good way to approximate a multivariate log-normal distribution by a discrete distribution with a finite state space. The discrete distribution should have the same first two moments.
Thanks for your help!
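For each univariate marginal, a moment-matched support can be written down in closed form; the multivariate case can then be handled on a product grid, or by imposing the cross-moment (covariance) conditions in a small moment-matching optimization. A sketch of the two-point construction for one marginal (parameters arbitrary; note the lower atom must stay positive to be plausible for a log-normal):

```python
import math

# Two-point discrete approximation matching the first two moments of a
# log-normal marginal exactly: equally likely atoms at mean +/- std dev.
mu, s = 0.0, 0.5                                       # X = exp(N(mu, s^2))
m = math.exp(mu + s * s / 2)                           # mean of the log-normal
v = (math.exp(s * s) - 1) * math.exp(2 * mu + s * s)   # variance

atoms = [m - math.sqrt(v), m + math.sqrt(v)]
probs = [0.5, 0.5]

mean_d = sum(p * a for p, a in zip(probs, atoms))
var_d = sum(p * (a - mean_d) ** 2 for p, a in zip(probs, atoms))
print(mean_d, m, var_d, v)                             # moments match exactly
```

With more atoms per marginal the extra degrees of freedom can match higher moments or the covariance structure; Gaussian-quadrature nodes in the underlying normal variable are a common refinement.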
I want to develop a stochastic optimization model that handles inventories in a just-in-time setting.
When I use ACO to detect image edges, there are more than 9 parameters that need to be set. Following previous methods, I chose these parameters experimentally, but this is not rigorous due to the lack of mathematical derivation. How can I remedy this deficiency? Thank you!
Data center load basically consists of server workload and the cooling load associated with it. In the literature there are ways to model the IT load (delay-tolerant and delay-sensitive) and also the cooling load, but how can we use such a model to capture the uncertainty of the load? Is the uncertainty handled by specifying a range of demand, or can the problem be solved as a stochastic optimization over different scenarios?
As far as I know, decision making under uncertainty can often be formalized as a stochastic problem. I have seen a constraint called "nonanticipativity" in most papers on stochastic programming.
I would like to know that
1) What is its concept and role?
2) Is it essential for all stochastic optimization problems, or just for some special stochastic problems?
3) Can I ignore it?
I would greatly appreciate any help.
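For what it's worth, nonanticipativity just says that a decision may depend only on information revealed before it is made; in a two-stage model it forces the first-stage decision to be identical across all scenarios, while the recourse may differ per scenario. It cannot be ignored: dropping it lets the model "peek" at the future and gives optimistically biased solutions. A toy sketch (all numbers invented):

```python
# Two-stage, two-scenario problem: capacity x is chosen before demand is
# observed (nonanticipative), recourse y_s covers the per-scenario shortfall.
scenarios = {"low": {"prob": 0.5, "demand": 4.0},
             "high": {"prob": 0.5, "demand": 10.0}}
cost_x, cost_y = 1.0, 3.0     # recourse (emergency) capacity costs more

def expected_cost(x):
    # Recourse is scenario-dependent; x is the single here-and-now decision.
    return cost_x * x + sum(s["prob"] * cost_y * max(s["demand"] - x, 0.0)
                            for s in scenarios.values())

# Enforcing nonanticipativity = optimizing over ONE x shared by all scenarios
# (here by brute-force search over a grid, for illustration only).
best_x = min((k * 0.1 for k in range(0, 151)), key=expected_cost)
print(best_x, expected_cost(best_x))
```

In scenario-decomposition methods (e.g. progressive hedging) the same condition appears explicitly as the constraint x_s = x for every scenario s, which is exactly the equation labeled "nonanticipativity" in the papers.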
Part of my problem requires solving a finite-horizon, discrete-time MDP in which the state in each slot is drawn i.i.d. and does not depend on the action. Are there any simple policies that obtain the optimal solution? What if we add a total cost constraint? Thanks!
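If the state really is i.i.d. and unaffected by the actions, the horizon decouples and the myopic policy a*(s) = argmax_a r(s, a) is optimal in each slot; with a total cost constraint the problem becomes a constrained MDP, typically handled via a Lagrangian or LP formulation (where the optimum may require randomization). A tiny check of the unconstrained claim (numbers invented):

```python
import itertools

# When transitions do not depend on the action and slots are i.i.d., the
# finite-horizon MDP decomposes: each slot contributes E_s[max_a r(s, a)].
states, actions, horizon = [0, 1], [0, 1], 3
p = {0: 0.3, 1: 0.7}                                   # i.i.d. state law
r = {(0, 0): 1.0, (0, 1): 0.2, (1, 0): 0.5, (1, 1): 2.0}

greedy_value = horizon * sum(p[s] * max(r[(s, a)] for a in actions)
                             for s in states)

# Brute force over deterministic Markov policies; since slots are identical
# and decoupled, one stationary policy per state suffices here.
def policy_value(pi):
    return horizon * sum(p[s] * r[(s, pi[s])] for s in states)

best_value = max(policy_value(dict(zip(states, pi)))
                 for pi in itertools.product(actions, repeat=len(states)))
print(greedy_value, best_value)   # the greedy policy attains the optimum
```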
Are there research papers (e.g., variations of Q-learning) on reinforcement learning in partially observable, model-free environments? I am interested in knowing the future research directions as well as the challenges of this area. How useful can the theory of stochastic approximation be here?
Another question: from the sequence of observations and actions alone, how close can I get to the optimal deterministic policy of the underlying MDP of the POMDP?
I have a sample of 2500 data points, each with 9 attributes. I divided the set 75%/25% for training and testing (random selection for testing). In the SGB model I used a learning rate (shrinkage factor) of 0.05 and a sub-sample fraction of 0.5 for bagging. Each tree has 15 terminal nodes, and all feature (attribute) interactions are allowed. By growing 20,000 trees (i.e., iterating 20,000 times) sequentially, I get an R2 (R-square) of 99.7 on the training data and 98.8 on the testing data. In 10-fold cross-validation I get an R2 of 97.6.
As I have used a very low learning rate together with bagging, I assume the accuracy I am getting is not due to overfitting, and the MSE vs. number-of-trees graph decreases gradually without any spikes.
But since I am iterating 20,000 times and still getting this much accuracy, I am a little confused about the overfitting concept. Please tell me whether my approach and understanding are correct.
There are methods such as the sample average approximation (SAA) method and the progressive hedging algorithm for solving stochastic mixed-integer programming problems.
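As a concrete illustration of the first of these, sample average approximation replaces the true distribution by N sampled scenarios and solves the resulting deterministic problem over them. A toy newsvendor sketch (prices and the demand model are invented; a real stochastic MIP would hand the sampled problem to a MIP solver instead of enumerating):

```python
import random

# SAA: draw demand scenarios, then maximize the empirical average profit.
random.seed(1)
price, cost = 5.0, 3.0
samples = [random.gauss(100, 20) for _ in range(2000)]   # demand scenarios

def saa_profit(q):
    """Average profit of order quantity q over the sampled scenarios."""
    return sum(price * min(q, d) - cost * q for d in samples) / len(samples)

q_saa = max(range(50, 151), key=saa_profit)
print(q_saa, saa_profit(q_saa))
```

For this instance the true optimum is the critical fractile F^{-1}((price - cost)/price) = F^{-1}(0.4), roughly 95 here, and the SAA solution converges to it as the number of scenarios grows; solving several independent SAA replications gives the usual optimality-gap estimates.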
Stochastic Gradient Descent (SGD) is fast for optimizing many convex objectives. But why does it fail to produce sparse solutions? Is there an intuitive explanation?
To further clarify my question: while coordinate descent is "naturally" suited to producing sparse solutions, why does GD lack this ability before any fix is added to it?
In other words, what is the difference between CD and GD that makes one sparsity-inducing and the other not?
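A concrete way to see the difference: under an L1 penalty, coordinate descent solves each one-dimensional subproblem exactly by soft-thresholding, whose flat region maps small coefficients to exactly zero, whereas (sub)gradient descent only takes small steps against lam*sign(w) and generically oscillates near zero without ever landing on it. A sketch on a tiny lasso problem (full-batch subgradient descent is used in place of SGD so the run is deterministic, but the zero-crossing argument is the same):

```python
# Objective: (1/2n)||y - Xw||^2 + lam*||w||_1, two decoupled features.
X = [[1, 0], [0, 1], [1, 0], [0, 1]]
y = [1.0, 0.05, 1.0, 0.05]
n, lam = len(y), 0.1

def soft_threshold(rho, lam):
    return (rho - lam) if rho > lam else (rho + lam) if rho < -lam else 0.0

def sign(v):
    return (v > 0) - (v < 0)

# Coordinate descent: exact 1-D minimization via soft-thresholding.
w_cd = [0.0, 0.0]
for _ in range(50):
    for j in range(2):
        zj = sum(row[j] ** 2 for row in X) / n
        rho = sum(row[j] * (yi - sum(r * w for r, w in zip(row, w_cd))
                            + row[j] * w_cd[j])
                  for row, yi in zip(X, y)) / n
        w_cd[j] = soft_threshold(rho, lam) / zj

# Subgradient descent: small steps, never an exact zero.
w_gd = [0.0, 0.0]
for _ in range(2000):
    grads = [sum(row[j] * (sum(r * w for r, w in zip(row, w_gd)) - yi)
                 for row, yi in zip(X, y)) / n for j in range(2)]
    w_gd = [w_gd[j] - 0.01 * (grads[j] + lam * sign(w_gd[j]))
            for j in range(2)]

print(w_cd, w_gd)   # w_cd[1] is exactly 0.0; w_gd[1] is tiny but nonzero
```

This is also why the standard fixes for SGD (truncated gradient, proximal/ISTA steps, regularized dual averaging) all reintroduce a soft-thresholding-like operator into the update.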
Using Petri nets (PN) as an analysis technique for large systems produces a huge number of states, and in some cases the resulting analysis model cannot be solved; therefore we have to apply reduction (minimization) techniques to avoid this issue. For Markov chains (MC) there are some such approaches, and I'm looking for similar approaches for PNs. Note that the PN solver transforms the PN into an MC in order to solve it, but I need to apply the reduction at the PN level.