Adaptive Relaxation based Non-Conservative Chance Constrained Stochastic MPC
Avik Ghosh, Cristian Cortes-Aguirre, Yi-An Chen, Adil Khurram, and Jan Kleissl
Abstract—Chance constrained stochastic model predictive controllers (CC-SMPCs) trade off full constraint satisfaction for economical plant performance under uncertainty. Previous CC-SMPC works are over-conservative in constraint violations, leading to worse economic performance. Other past works require a-priori information about the uncertainty set, limiting their application. This paper considers a discrete LTI system with hard constraints on inputs and chance constraints on states, with unknown uncertainty distribution, statistics, or samples. This work proposes a novel adaptive online update rule to relax the state constraints based on the time-average of past constraint violations, to achieve reduced conservativeness in closed-loop. Under an ideal control policy assumption, it is proven that the time-average of constraint violations asymptotically converges to the maximum allowed violation probability. The method is applied to optimal battery energy storage system (BESS) dispatch in a grid-connected microgrid with PV generation and load demand, with chance constraints on BESS state-of-charge (SOC). Realistic simulations show the superior electricity cost-saving potential of the proposed method as compared to the traditional economic MPC without chance constraints, and a state-of-the-art approach with chance constraints. We satisfy the chance constraints non-conservatively in closed-loop, effectively trading off increased cost savings with minimal adverse effects on BESS lifetime.
Index Terms—Stochastic model predictive control, chance constraints, forecast uncertainty, discrete LTI systems, uncertainties, non-conservative, microgrids, battery energy storage systems
NOMENCLATURE
$\alpha$    Maximum probability of violation of state constraints
$\gamma$    Constant of proportionality in online $h$ update rule
$\hat{w}$    Width of the critical region
$\kappa$, $\bar{\kappa}$    Critical region
$P$    Probability measure
$\mathcal{F}_t$    Filtration
$\tilde{h}$    Adaptive state constraint tightening parameter
$A$    System state transition matrix
$B$    System control input matrix
$d$    Control input coupling vector dimension
$E$    System state uncertainty matrix
$F$    Control input uncertainty matrix
$G$    State chance constraint matrix
$g$    State chance constraint vector
$h$    Adaptive state constraint relaxing parameter
$k$, $t$    Time index
$M$    Control input coupling matrix
$m$    Control input dimension
$N$    MPC prediction horizon length
$n$    State dimension
$p$    Uncertainty dimension / probability of violation of state constraints
$q$    Control input constraint vector dimension
$r$    State chance constraint vector dimension
$S$    Control input constraint matrix
$s$    Control input constraint vector
$u$    Control input
$V$    State constraint violation tracker
$w$    Uncertainty
$x$    State
$Y$    Time-average of state constraint violations
$Z$    Absolute difference between $\alpha$ and $Y$
$c$    Control input coupling vector
NCDP    Non-coincident demand peak
OPDP    On-peak demand peak
BESS    Battery energy storage system
JCC    Joint chance constraints
LTI    Linear time invariant
MG    Microgrid
MPC    Model predictive control
NCDC    Non-coincident demand charge
OPDC    On-peak demand charge
PV    Photovoltaic
SMPC    Stochastic MPC
SOC    State-of-charge
VRES    Variable renewable energy sources

A. Ghosh (corresponding author), C. Cortes-Aguirre, Y. Chen, and A. Khurram are with the Department of Mechanical and Aerospace Engineering, University of California, San Diego, CA 92093 USA (UCSD), email: {avghosh,ccortesa,yic002,akhurram}@ucsd.edu. J. Kleissl is the Director of the Center for Energy Research in the Department of Mechanical and Aerospace Engineering at UCSD, email: jkleissl@ucsd.edu.
I. INTRODUCTION
A. Motivation
Currently, there is great emphasis on integrating variable renewable energy sources (VRES), such as wind and PV generators, into the electricity grid, with the goal of decarbonizing power production. VRES are, however, intermittent by nature, which can potentially lead to power imbalance in the electric grid, thereby risking grid stability [1]. Battery energy storage systems (BESS) can minimize power fluctuations caused by the integration of VRES into the grid [1], and can additionally be used for energy arbitrage, peak load shaving, valley-filling, and ancillary services. However, to maximize the benefits of installing BESS, optimal BESS scheduling strategies need to be devised that maximize electricity bill savings while providing services to the grid.

It is possible to optimally dispatch BESS using model predictive control (MPC) based scheduling algorithms that
include grid constraints. However, uncertainty in forecasts can
significantly reduce performance and should be taken into
account when formulating MPCs. Classical open-loop min-
max formulation based robust MPC can be used to factor in
uncertainty but it leads to over-conservative solutions which
may not be economical from an operational perspective [2].
Other variations of robust MPC such as closed-loop min-max
formulation (commonly known as “Feedback MPC”) suffer
from prohibitive complexity [2]. Tube-based MPC requires
specification of a bounded uncertainty set a-priori [3] (a
problem in common with robust MPC), which may be difficult
to specify non-conservatively for a complex practical system
such as a VRES integrated MG, which involves a variety
of forecasts. An over-conservative uncertainty set negatively
impacts economical system performance.
Stochastic MPC (SMPC) methods based on chance constraints strike a trade-off between economic operation and full constraint satisfaction [3]. Chance constraints allow the MPC to operate in a more economical way by respecting a maximum probability of constraint violation. The superior economic performance, lower complexity, and weaker assumption requirements of chance constrained SMPC are desirable for BESS operation in VRES-intensive microgrids (MGs), especially under uncertainty in VRES and load forecasts, and chance constrained SMPC is thus the primary focus of this work.
B. Literature Review
Chance constrained SMPC algorithms have found appli-
cations in problems involving building climate control [3]–
[6], optimal power flow [7], and optimal microgrid (MG)
dispatch [8]–[18]. Chance-constrained SMPC problems are
solved by converting them into an approximate deterministic
form. If the uncertainties are Gaussian, or follow other known
distributions [19], [20], standard procedures exist to convert
the stochastic problem into a deterministic one. However,
in practical scenarios such as VRES and load forecasts, the
uncertainty distributions, and additionally, uncertainty statis-
tics (moments like mean, variance, skewness, etc.) may be
unknown and can vary with time (i.e., seasonally/yearly).
Other methods of reformulating the SMPC problem into a
deterministic one, such as using Chebyshev inequalities [21]
require a-priori knowledge about the uncertainty statistics
(mean and covariance), while using Chernoff bounds suffer
from high conservatism [3]. Sampling based approaches [22],
[23] suffer from high computational demand and may require
a prohibitive number of samples [3].
Chance constraints are generally enforced pointwise-in-
time within an MPC prediction horizon, without includ-
ing past behavior of the system, which can also lead to
over-conservativeness (i.e., less than desired constraint vi-
olations) in closed-loop [4], [6]. However, reducing over-
conservativeness is of paramount importance for MG operators
to reduce electricity costs. Thus, in this work we re-interpret
the pointwise-in-time chance constraints as time-average of
violations (or time-average of some loss function of violations)
in closed-loop similar to [3], [4], [6], [24], [25], and first
focus our literature review specifically on such online chance
constrained SMPC methods with theoretical advancements.
The re-interpretation keeps the spirit of occasional constraint
violations of the original chance constraint [4] while keeping
memory of past behavior of the system to aid in reducing
over-conservativeness in closed-loop.
The work in [6] adaptively relaxed and tightened the MPC
state constraints online aided by the amount of violations
quantified by a loss function empirically weighted averaged
over time. The authors defined a family of stochastic robust
control invariant (SRCI) sets for implementing their control
online and proved that the empirical weighted average loss is
bounded either in expected value or robustly with probability
1, and derived bounds on the convergence time. However,
drawbacks of the work include high computational cost to
parameterize the SRCI sets, and a-priori knowledge about the
distribution/statistics of the uncertainty.
The authors in [24] adaptively relaxed the MPC state
constraints online based on the time-average of (i) the number
of constraint violations in the first method, and (ii) a loss
metric based on a convex loss function of constraint violations
in the other method. Instead of providing asymptotic bounds
on constraint violations as done in previous works like [3],
[4], the authors provided stronger robust bounds in closed-
loop over finite time periods. A practical limitation of [24] for
our application (economic MG dispatch) is the assumption
of an objective function that is composed of stage costs
with quadratic penalties on predicted control inputs and state
deviations from an a-priori defined robust positively invariant
target set. Construction of a non-conservative robust positively
invariant target state set is difficult. Additionally, electricity
costs due to grid imports in a MG with BESS cannot be
expressed by stage costs with quadratic penalties on control
inputs and state deviations, because economic stage costs are
not necessarily positive definite with respect to the target set
of states and/or control inputs [26].¹ Moreover, the uncertainty
is incorporated by bounding the predicted states in state tubes
which require a-priori specification of the uncertainty set, and
the computation time is similar to a corresponding robust MPC
problem which is more expensive than the nominal MPC.
The authors in [3], [4], [25] adaptively tightened the state
constraints online during MPC computations: (i) using an
update rule based on the time-average of past state constraint
violations in [3], [4], and (ii) by iteratively employing a data-
driven Gaussian process binary regression based approach
depending on the observed state constraint violations in the
training data in [25]. In [4], using stochastic approximation, the authors also proved the convergence of the time-average of constraint violations in probability to the allowable 'least conservative' level.

¹Economic MG dispatch uses the electricity cost function directly as the objective function of the MPC controller. Electricity costs incurred by the MG to the utility involve time-of-use energy charges and demand charges. Energy charges ($/kWh) are incurred based on the volumetric import of electricity from the grid, while demand charges ($/kW) are incurred based on the maximum load import from the grid over the month. Two distinct time-of-use demand peaks are used: the maximum grid import over the whole month, called the non-coincident demand peak (NCDP), and the maximum grid import between 16:00-21:00 h over all days of the month, called the on-peak demand peak (OPDP), whose charge is applied on top of the NCDP charge. The demand charges associated with the NCDP and OPDP are called non-coincident demand charges (NCDC) and on-peak demand charges (OPDC), respectively. For commercial and industrial customers, demand charge costs are typically 30-70% of the monthly electricity costs [27].

A few limitations of [4] are the a-priori
assumption of the uncertainty distribution, and the strong
assumption of terminal stability region of the state in closed-
loop, which is unrealistic for economic MG dispatch using
BESS. While [25] relaxes the requirement of a-priori knowl-
edge of the uncertainty distribution, iteratively learning the
final optimal tightening parameter is data-intensive. The entire
solution framework exhibits significantly more computation
cost for satisfying the chance constraints in the long run as
compared to the nominal MPC. For our application, [25] may
be unable to perform economically or may cause significant vi-
olation of the chance constraints if the underlying uncertainty
distribution changes with time (as the final optimal tightening
parameter is learned from past data), which is undesirable for
economic MG operation.
Some of the limitations of [4], [6], [24], [25] are avoided
by [3] which can be applied to systems with unknown un-
certainty distribution/statistics. Also, [3] does not need the
assumptions of terminal stability of the state and the type of
the objective function (except convexity assumptions), and the
implementation of the SMPC is computationally inexpensive
and has similar computation cost as that of a nominal MPC.
The above mentioned properties make the work in [3] ideal
for economic MG dispatch, and forms the primary reference
on which we develop our work.
In [3], the authors developed an adaptive state constraint
tightening rule that allows the time-average of violations of
the system to converge to the maximum allowable violation
probability in closed-loop under an ideal control policy as-
sumption (which may be unmet for practical implementation).
However, as clarified by the authors, the convergence was
argued intuitively without rigorous proof. Also, despite its
advantages, [3] is still not robust to significant violation of
chance constraints under time-varying uncertainty distribution.
While significant theoretical advancements have been made
to develop SMPC with varying degrees of simplifying as-
sumptions, computational cost, and ability to avoid over-
conservativeness in closed-loop chance constraint satisfaction,
there exists a gap in the literature of exploiting these non-
conservative methods for economic MG dispatch under un-
certainty. Some recent applied works involving economic MG
dispatch in VRES intensive grids with chance constraints are
explored in [8]–[16].
None of the works [8]–[16] considered demand charges in
their electricity cost. The work in [8] required extra steps to
generate scenarios of uncertainty at every time-step for the
MPC prediction horizon and considered the scenarios to be
Gaussian. The works [9], [11], [12], [14], [15] considered
the uncertainties to be Gaussian or other commonly known
distributions, which significantly limits the practical appli-
cation of these studies. The authors in [10], [13] employed
ambiguity sets to model the uncertainties from historical
data, and employed distributionally robust chance constrained
optimization (DRCC). The authors in [16] used adaptive kernel
density estimation to estimate the nonparametric uncertainty
distribution for VRES from historical data, and adjusted the
confidence levels according to estimated uncertainties to en-
sure constraint satisfaction within predefined confidence levels.
However, the approaches in [10], [13], [16] are data-intensive
and the performance is substantially influenced by the quality
and volume of historical data available.
To tackle the aforementioned problems, [17] and the authors
of this paper in a previous work [18] presented an online
adaptive SMPC model inspired by [3]. The authors in [17],
[18] minimized the VRES integrated MG operating cost, and
satisfied chance constraints on states (BESS SOC) in closed-
loop without making any assumption about the probability
distribution or statistics of the uncertainty. To further reduce
over-conservativeness, the works [17], [18] employed online
adaptive constraint relaxation in the nominal MPC. However,
the online adaptive constraint relaxation rules used in [17]
and [18] are based on intuition having no convergence guar-
antees or theoretical analyses, and both the works are appli-
cation specific. From an economic MG dispatch perspective,
only [18] included demand charges while [17] did not. The
authors in [17] used sample historical data of uncertainties for
initial constraint relaxation which [18] avoided by practical
engineering approximation. Additionally, [17], [18] considered
aggregate constraint violations, without any preference for the
time when violations should occur. For economic MG dispatch
with demand charges, it becomes critical, if necessary, to
preferentially be able to violate BESS state constraints during
a predefined on-peak period from 16:00 to 21:00 h, to reduce
grid import power peaks, as OPDC are charged on top of
NCDC.
The present work is an extension of the previous work [18]
by the authors, presented in a generic discrete linear time
invariant (LTI) setting with additional convergence properties
and proofs, related theoretical analyses, and additional case
studies. The proposed online adaptive SMPC (OA-SMPC)
minimizes a generic convex cost function over a finite re-
ceding horizon, subject to hard input constraints and chance
constraints on states. After presenting the theoretical results of
the OA-SMPC, a case study is presented for a grid-connected
MG operation with PV, load, and BESS using realistic data
for a full year of operation in an economic MPC (EMPC)
framework. The performance of the OA-SMPC is compared
with a traditional EMPC without chance constraints, and
a state-of-the-art approach from the literature with chance
constraints having similar computational cost [3]. The OA-
SMPC outperforms both the methods with respect to the cost
saving potential and non-conservative satisfaction of chance
constraints.
C. Contributions
The contributions of the present work are as follows:
1) To the best of the authors' knowledge, adaptive state constraint relaxation frameworks such as the present one are scarce in the literature compared with the more common adaptive state constraint tightening. Under the novel adaptive
relaxation rule of the present work, it is proven that the
time-average of the constraint violations asymptotically
converges to the maximum allowable violation probabil-
ity under an ideal control policy assumption similar to [3].
However, a rigorous proof is provided here which was not
provided in [3]. Also, for practical implementation (i.e.,
without the simplifying ideal control policy assumption),
while the proposed method cannot guarantee the afore-
mentioned convergence, it still encourages it.
2) The present work also proves that the time-average of the constraint violations exhibits martingale-like behavior asymptotically for practical implementation.
3) The present work does not require either any a-priori
assumption about the probability distribution of the un-
certainty set or its statistics, or sample uncertainties from
historical data. The present work is also robust to signif-
icant violation of chance constraints under time-varying
uncertainty distribution for practical implementation, pro-
vided an additional post-processing step is incorporated.
4) The present work incorporates operational adjustments
in the online adaptive relaxation rule to account for
temporal preference in state constraint violation, which is
critical for economic MG dispatch. Additionally, unlike
the methods in [3], [17], the present method prevents
excessive overcharging/overdischarging of the BESS to
correct for large forecast uncertainties in real-time by the
post-processing step, which otherwise might harm the
BESS and leave the MG vulnerable for future demand
peaks.
5) The majority of the earlier works for chance constrained
SMPC based economic MG dispatch presented results
over a short time (24 h), or one or two months. However,
for MG operators, it is important to have at least year-long studies to determine how the algorithm performs
under realistic seasonal variations in loads, VRES gener-
ation, and forecasts, a gap which the present work fills.
The rest of the paper is organized as follows. Section II
presents the notations and standard definitions. Section III in-
troduces the original SMPC problem formulation with chance
constraints, approximates the original formulation to frame the
OA-SMPC formulation, along with presenting the convergence
proofs and related theoretical analysis. Section IV presents the
case study for a realistic grid connected MG with PV, load, and
BESS, with results and discussions in Section V. Section VI
concludes the paper summarizing the takeaways of the study.
II. MATHEMATICAL PRELIMINARIES
A. Notations
The set of $n$-tuples of real numbers is denoted by $\mathbb{R}^n$. Positive and negative real number sets are denoted by $\mathbb{R}_{0+}$ and $\mathbb{R}_{0-}$, respectively. The set of natural numbers including 0 is denoted by $\mathbb{N}$. A set of consecutive natural numbers $\{i, i+1, \ldots, j\}$ is denoted by $\mathbb{N}_i^j$. The $n$-tuple of ones is denoted by $\mathbf{1}_n$. States, control inputs and uncertainties are denoted by $x \in X \subseteq \mathbb{R}^n$, $u \in U \subseteq \mathbb{R}^m$, and $w \in W \subseteq \mathbb{R}^p$, respectively. The prediction horizon of the MPC is denoted by $N \in \mathbb{N}$. Actual states at time $t \in T \subseteq \mathbb{N}$ are denoted by $x(t)$, while predicted states, obtained at $t$ by MPC computation $k \in \mathbb{N}_1^N$ time-steps in the future, are denoted by $x(t+k|t)$. Similarly, predicted control inputs over the MPC prediction horizon are denoted by $u(t+k|t)$, with $k \in \mathbb{N}_0^{N-1}$. An ordered collection of vectors (such as states) over the MPC prediction horizon obtained at time $t$ is denoted by bold letters, $\mathbf{x}(t+1) := [x(t+1|t), x(t+2|t), \ldots, x(t+N|t)]$. For matrices $A$ and $B$ of equal dimensions, the operators $\{<, \le, =, >, \ge\}$ hold component-wise. The right inverse of a matrix $A \in \mathbb{R}^{m \times n}$ with rank $m < n$ is denoted by $A^{\dagger}$. The $i$th row, and the element in the $i$th row and $j$th column, of a matrix $A$ are denoted by $A_i$ and $A_{ij}$ respectively, while the $i$th element of a vector $x$ is denoted by $x_i$, unless mentioned otherwise. A vector of the first $a \in \mathbb{N}$ elements of a vector $x$ is denoted by $x_{1:a}$. The expected value of a random variable $Z$ is denoted by $\mathbb{E}[Z]$. $|x|$ denotes the 1-norm of a vector $x$. The 'logical not' and 'logical and' operators are denoted by $\neg$ and $\wedge$, respectively.
B. Standard Definitions
Definition 1 (Filtered probability space [28]). A filtered probability space is defined by $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$. $(\Omega, \mathcal{F}, P)$ is a probability triple with sample space $\Omega$, $\sigma$-algebra (event space) $\mathcal{F}$, and probability measure $P$ on $(\Omega, \mathcal{F})$. $\mathcal{F}_t$ is a filtration, which is an increasing family of sub-$\sigma$-algebras of $\mathcal{F}$ such that $\mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F}$, $\forall t \ge s$, where $t, s \in T$.

Definition 2 (Almost surely [28]). An event $E \in \mathcal{F}$ happens almost surely if $P(E) = 1$. It is denoted by a.s.

Definition 3 (Adapted stochastic process [28]). A stochastic process $Z := (Z(t) : t > 0)$ is called adapted to the filtration $\{\mathcal{F}_t\}$ if $Z(t)$ is $\mathcal{F}_t$-measurable $\forall t$.

Definition 4 (Supermartingale [28]). A stochastic process $Z$ is called a discrete-time supermartingale relative to $(\{\mathcal{F}_t\}, P)$ if it satisfies the following:
(a) $Z$ is an adapted process,
(b) $\mathbb{E}[|Z(t)|] < \infty$, $\forall t$,
(c) $\mathbb{E}[Z(t+1)\,|\,\mathcal{F}_t] \le Z(t)$, a.s. $\forall t$.
A discrete-time martingale $Z$ relative to $(\{\mathcal{F}_t\}, P)$ is defined similarly, with (c) replaced by $\mathbb{E}[Z(t+1)\,|\,\mathcal{F}_t] = Z(t)$, a.s. $\forall t$.

Definition 5 (Monotone convergence theorem for decreasing sequences [29]). Let $X = (x_n : n \in \mathbb{N})$ be a sequence of real numbers which is monotonically decreasing in the sense that $x_{n+1} \le x_n$, $\forall n$. Then the sequence converges if and only if it is bounded, in which case $\lim_{n \to \infty} x_n = \inf\{x_n\}$.
III. PROBLEM FORMULATION
A. System Description
The dynamics of the discrete LTI system are governed by
$$x(t+1) = Ax(t) + Bu(t) + Ew(t), \quad \forall t, \tag{1}$$
where $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, and $E \in \mathbb{R}^{n \times p}$.

Assumption 1 (System). (a) At each time $t$, a measurement of the state is available. (b) The sets of admissible control inputs $U$ and states $X$ are polytopes containing the origin.

Assumption 2 (Uncertainties). The set of uncertainties $W$ is bounded and contains the origin.

In this setup, $\mathcal{F} = \sigma(\{w : w(t) \in W\} : t \in T)$, and $\mathcal{F}_t = \sigma(\{w(s) : w(s) \in W\} : s < t)$. The system is subject to hard control input constraints and chance constraints on states. The control input constraints are formulated as
$$Su(t) \le s, \quad \forall t, \tag{2}$$
where $S \in \mathbb{R}^{q \times m}$, $s \in \mathbb{R}^{q}$. The time-varying equality constraints coupling the control inputs are formulated as
$$Mu(t) = c(t) + Fw(t), \quad \forall t, \tag{3}$$
where $M \in \mathbb{R}^{d \times m}$, $c \in \mathbb{R}^{d}$, and $F \in \mathbb{R}^{d \times p}$. In previous works like [3], (3) is not considered, but for applications such as economic MG dispatch, (3) is important for incorporating physical constraints such as power balance of the MG with the main grid (discussed in detail in Section III-F). However, if (3) is considered in the problem formulation, the $Ew(t)$ term in the RHS of (1) is dropped, as the $Fw(t)$ term in the RHS of (3) accommodates the uncertainty.² Additionally, note that constraint (3) is application specific and is independent of the method and theoretical results presented in this paper.

The chance constraints on the states are formulated as
$$P[Gx(t) \le g] \ge \bar{1} - \bar{\alpha}, \quad \forall t, \tag{4}$$
where $G \in \mathbb{R}^{r \times n}$, $g \in \mathbb{R}^{r}$, and $\bar{1} = \mathbf{1}_r$ for individual chance constraints. $\bar{\alpha} = [\alpha_1, \ldots, \alpha_r]$ is the vector of the pointwise-in-time maximum probabilities of constraint violation, where $\alpha_i \in (0, 0.5)$ $\forall i \in \mathbb{N}_1^r$. In the individual chance constraint form, (4) can be expressed as $P[G_i x \le g_i] \ge 1 - \alpha_i$, $\forall i \in \mathbb{N}_1^r$. In the joint chance constraint (JCC) form, a single violation probability denoted by $\alpha \in (0, 0.5)$ can be defined for simultaneous satisfaction of all state constraints as
$$P[G_1 x \le g_1 \wedge G_2 x \le g_2 \wedge \cdots \wedge G_r x \le g_r] \ge 1 - \alpha.$$

Note that in this work, we re-interpret the maximum probability of violation of state constraints pointwise-in-time given by the chance constraints (4) as the maximum time-average of state constraint violations in closed-loop, similar to [3], [4], [6], [24], [25]. $Gx(t) \le g$ is referred to as the original constraint with respect to which violations are measured.
B. Online Adaptive SMPC (OA-SMPC)
Over the MPC prediction horizon $N$, computed from time $t$, we define the ordered collections of states, control inputs, uncertainties and coupling vectors as
$$\mathbf{x}(t+1) := [x(t+1|t), x(t+2|t), \ldots, x(t+N|t)] \in \mathbb{R}^{Nn},$$
$$\mathbf{u}(t) := [u(t|t), u(t+1|t), \ldots, u(t+N-1|t)] \in \mathbb{R}^{Nm},$$
$$\mathbf{w}(t) := [w(t|t), w(t+1|t), \ldots, w(t+N-1|t)] \in \mathbb{R}^{Np},$$
$$\mathbf{c}(t) := [c(t|t), c(t+1|t), \ldots, c(t+N-1|t)] \in \mathbb{R}^{Nd}.$$

²The $E$ matrix is still required for assigning a unique control input after accommodating the uncertainty in closed-loop for multi-input systems. See details in Section III-F.

The system dynamics can be written in expanded form as
$$x(t+k|t) = A^k x(t|t) + \sum_{i=0}^{k-1} A^{k-1-i} B u(t+i|t) + \sum_{i=0}^{k-1} A^{k-1-i} E w(t+i|t), \quad \forall k \in \mathbb{N}_1^N, \ \forall t, \tag{5}$$
where $x(t|t) = x(t)$. Writing (5) in compact form yields
$$\mathbf{x}(t+1) = \mathbf{A} x(t) + \mathbf{B}\mathbf{u}(t) + \mathbf{E}\mathbf{w}(t), \quad \forall t, \tag{6}$$
where $\mathbf{A} \in \mathbb{R}^{Nn \times n}$, $\mathbf{B} \in \mathbb{R}^{Nn \times Nm}$ and $\mathbf{E} \in \mathbb{R}^{Nn \times Np}$.
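For illustration, a minimal sketch (ours, not from the paper) of how the lifted matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{E}$ of (6) can be assembled from (5) is shown below; all function and variable names are illustrative.

```python
import numpy as np

def lift_dynamics(A, B, E, N):
    """Build the stacked prediction matrices of Eq. (6) from Eq. (5), so that
    x_pred = Abig @ x0 + Bbig @ u_seq + Ebig @ w_seq over the horizon N."""
    n, m = B.shape
    p = E.shape[1]
    Abig = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    Bbig = np.zeros((N * n, N * m))
    Ebig = np.zeros((N * n, N * p))
    for k in range(1, N + 1):            # block row k predicts x(t+k|t)
        for i in range(k):               # contribution of u(t+i|t) and w(t+i|t)
            Aki = np.linalg.matrix_power(A, k - 1 - i)
            Bbig[(k - 1) * n:k * n, i * m:(i + 1) * m] = Aki @ B
            Ebig[(k - 1) * n:k * n, i * p:(i + 1) * p] = Aki @ E
    return Abig, Bbig, Ebig
```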
The hard control input constraints over the MPC prediction horizon are formulated as
$$Su(t+k|t) \le s, \quad \forall k \in \mathbb{N}_0^{N-1}, \ \forall t. \tag{7}$$
The equality constraints coupling the control inputs over the MPC prediction horizon are formulated as
$$Mu(t+k|t) = c(t+k|t) + Fw(t+k|t), \quad \forall k \in \mathbb{N}_0^{N-1}, \ \forall t. \tag{8}$$
Generally, the chance constraints in (4) are interpreted for the MPC prediction horizon pointwise-in-time by (9a), which is over-conservative in closed-loop (see Remark 1). The corresponding relaxed deterministic reformulation of (9a), as implemented in the MPC prediction horizon by some previous works [3], [4], is given by (9b). The aim of the deterministic reformulation is to tighten the state constraints under nominal MPC computations (resulting from ignoring uncertainties, i.e., $\mathbf{w}(t) = 0$) by an adaptive tightening parameter $\tilde{h} \in \mathbb{R}^r$ given by
$$P[Gx(t+k|t) \le g] \ge \bar{1} - \bar{\alpha}, \quad \forall k \in \mathbb{N}_1^N, \ \forall t, \tag{9a}$$
$$Gx(t+k|t) \le g - \tilde{h}(t+k|t), \quad \forall k \in \mathbb{N}_1^N, \ \forall t, \ \mathbf{w}(t) = 0, \tag{9b}$$
where $\tilde{h}_i > 0$, $\forall i \in \mathbb{N}_1^r$, and $\tilde{h}$ is updated based on the time-average of past state constraint violations in closed-loop. Note that violations of the state constraints (i.e., $Gx(t) > g$) can occur in closed-loop, as the uncertainties come into effect. The adaptive constraint tightening in (9b) attempts to reduce the conservatism inherent to (9a) by incorporating past state constraint violation behavior of the system in closed-loop, but can still be over-conservative (see Remark 1).
Remark 1 (Over-conservativeness of previous approaches). (9a) approximates (4) conservatively [4], [6, Sec. II-A], as (9a) requires the constraint satisfaction conditionally on $x(t)$ (i.e., for $x(t)$ that can be reached at time $t$ by the given control policy under the uncertainty sequence). Equation (4), however, requires constraint satisfaction in a more relaxed average sense (i.e., over all realizations of the uncertainty sequence up to $t$). Moreover, (9a) does not consider the memory of past state constraint violations, which is critical in the present time-average re-interpretation of chance constraints. Incorporating past constraint violations by using the adaptive tightening in (9b) can still be conservative in satisfying (4) in closed-loop due to the over-estimation of the tightening parameter $\tilde{h}$ [4]. Additionally, in (9b), the nominal MPC solutions never violate the state constraints over the prediction horizon, as a result of restricting the size of the feasible state set (despite a larger feasible state set being available to the controller as compared to the nominal MPC when accommodating for uncertainty), which the MPC optimizer can theoretically exploit to further reduce conservativeness in closed-loop.
Based on Remark 1, which shows that both (9a) and (9b) can be over-conservative in satisfying (4) in closed-loop, we propose to adaptively relax the state constraints in the nominal MPC instead of tightening them. The adaptive relaxation allows for state constraint violations over the nominal MPC prediction horizon ($Gx(t+k|t) > g$ with $\mathbf{w}(t) = 0$), to push the system towards reduced conservativeness. We relax the satisfaction of (9a) and approximate (4) by reformulating the nominal state constraints as
$$Gx(t+k|t) \le g - h(t), \quad \forall k \in \mathbb{N}_1^N, \ \forall t, \ \mathbf{w}(t) = 0, \tag{10}$$
where $h \in \mathbb{R}^r$ is the adaptive relaxing parameter with $h_i < 0$, $\forall i \in \mathbb{N}_1^r$. It should be noted that the sign of $h_i$ in (10) is opposite to $\tilde{h}_i$ in (9b). We also observe that decreasing $h(t)$ in (10) expands the feasible state set, pushing the system more towards state constraint violations (i.e., $Gx(t+k|t) > g$), while increasing $h(t)$ contracts the feasible state set, pulling the system away from state constraint violations.³ The initial value of $h$ at $t = 0$ can be calculated based on domain knowledge [18], which obviates the requirement of past uncertainty samples, as in [3]. The initial value of $h$ is not important since $h$ gets adapted as the system evolves with time [7]. The ordered collection of adaptive relaxation parameters along the MPC prediction horizon is denoted as
$$\mathbf{h}(t) := [h(t), h(t), \ldots, h(t)] \in \mathbb{R}^{Nr}.$$

³Note that in economic MG dispatch with BESS in VRES grids, where the objective function is the actual economic cost of system operation, such as the electricity bill, and not necessarily only a penalty on the control input (BESS dispatch), relaxing the state constraints in the nominal MPC does not automatically lead the system to predicted nominal solutions that violate the (original) state constraints pathologically over the nominal MPC prediction horizon. The state (BESS SOC) in these applications tries to exploit the full feasible state set to best reduce the economic cost over the MPC prediction horizon. Nevertheless, the case where the proposed formulation can result in pathological constraint violations is averted in closed-loop by a post-processing step described later in Section III-F and Remark 7.
The nominal OA-SMPC, which is assumed to be a convex optimization problem, is then formulated as
$$\mathbf{u}^*(t) = \arg\min_{\mathbf{u}(t) \in \mathbb{R}^{Nm}} J(x(t), \mathbf{u}(t), \mathbf{w}(t) = 0), \tag{11a}$$
$$\text{subject to } \mathbf{x}(t+1) = \mathbf{A}x(t) + \mathbf{B}\mathbf{u}(t), \tag{11b}$$
$$\mathbf{S}\mathbf{u}(t) \le \mathbf{s}, \tag{11c}$$
$$\mathbf{M}\mathbf{u}(t) = \mathbf{c}(t), \tag{11d}$$
$$\mathbf{G}\mathbf{x}(t+1) \le \mathbf{g} - \mathbf{h}(t), \tag{11e}$$
where $J : \mathbb{R}^n \times \mathbb{R}^{Nm} \times \mathbb{R}^{Np} \to \mathbb{R}$ is an arbitrary convex function, $\mathbf{S} \in \mathbb{R}^{Nq \times Nm}$, $\mathbf{s} \in \mathbb{R}^{Nq}$, $\mathbf{M} \in \mathbb{R}^{Nd \times Nm}$, $\mathbf{c} \in \mathbb{R}^{Nd}$, $\mathbf{G} \in \mathbb{R}^{Nr \times Nn}$, $\mathbf{g} \in \mathbb{R}^{Nr}$. Note that dropping (11d) makes the problem setup similar to [3]. Note that in (11e), the MPC state constraints are applied for $x(t+k|t)$, $\forall k \in \mathbb{N}_1^N$, and not for the present state corresponding to $k = 0$, to allow for the present state to be outside of the feasible state set of the nominal OA-SMPC. Assumption 3, discussed next, ensures recursive feasibility and existence of an ideal control policy (described later in Assumption 5).
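For concreteness, a minimal sketch (ours, not the authors' implementation) of one nominal OA-SMPC solve (11), using the lifted matrices of (6), is shown below; it assumes the generic convex cost is supplied as a cvxpy expression, and all names (Abig, Bbig, cost_fn, etc.) are illustrative.

```python
import cvxpy as cp

def solve_oa_smpc(x0, Abig, Bbig, Sbig, sbig, Mbig, cbig, Gbig, gbig, hbig, cost_fn):
    """One nominal OA-SMPC solve, Eq. (11): w(t) = 0 and relaxed state bounds g - h.
    cost_fn maps (x_pred, u_seq) to a convex cvxpy expression (application specific)."""
    u = cp.Variable(Bbig.shape[1])             # stacked u(t|t), ..., u(t+N-1|t)
    x_pred = Abig @ x0 + Bbig @ u              # Eq. (11b), nominal prediction
    constraints = [
        Sbig @ u <= sbig,                      # Eq. (11c), hard input constraints
        Mbig @ u == cbig,                      # Eq. (11d), input coupling
        Gbig @ x_pred <= gbig - hbig,          # Eq. (11e), relaxed state constraints
    ]
    prob = cp.Problem(cp.Minimize(cost_fn(x_pred, u)), constraints)
    prob.solve()
    return u.value   # only the first input block u*(t|t) is applied in closed-loop
```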
Assumption 3 (Control inputs [3]). (a) The control input constraints (11c) are such that the system can provide enough control input to bring the predicted state at the next time-step, from any present state $x(t)$, to the feasible region of the nominal OA-SMPC (11e). Specifically,
$$\exists\, u(t|t) \ \text{s.t.} \ Su(t|t) \le s, \ Mu(t|t) = c(t|t), \ G\big(x(t+1|t)\big) \le g - h(t),$$
where $x(t+1|t) = Ax(t) + Bu(t|t)$, for all $t$. (b) The system is one-step controllable.

The condition for testing Assumption 3(a), which ensures recursive feasibility (similar to [3], [30]) of the nominal OA-SMPC, is given in the Appendix, and is excluded here for brevity. Assumption 3(a) ensures that after handling the uncertainty from the previous time step in closed-loop, resulting in the present state $x(t)$, which may be outside the feasible state set of the nominal OA-SMPC (11e), the computed control input is strong enough to bring the predicted system state at the next time-step back to the feasible state set.

Assumption 3(a), while it can be restrictive in theory, is generally not restrictive for practical applications such as microgrids or HVAC systems, as these systems are generally designed to have enough control input power to handle uncertainties [3]. Assumption 3(b) is more restrictive and is only used for ensuring sufficient conditions for the existence of an ideal control policy at every time step (see Assumption 5). Assumption 3(b) can be relaxed for practical applications such as the one described in the case study in Section IV.
Remark 2 (Structure of the input matrix). Note that Assumption 3(b) is sufficient for saying that the system has at least as many control inputs as states (i.e., $n \le m$) and $B$ has full row rank. The assumption implies that if $n = m$, $B$ has an inverse, while if $n < m$, $B$ has a right inverse.

Assumption 4 (Form of the $h$ update rule). The online $h$ (adaptive relaxing parameter) update rule can be written as $h_i(t) := h_i(t-1)[1 + K_i(t)]$, where $K_i(t) > -1$, $\forall t$, ensures $h_i(t) < 0$, $\forall t$ [18, Eq. (12)].

Remark 3 (Behavior of the $h$ update rule). In Assumption 4, $K_i(t) > 0$ decreases $h_i(t)$, expanding the state constraints and pushing the system more towards state constraint violations, while $K_i(t) < 0$ increases $h_i(t)$, contracting the state constraints and pulling the system away from state constraint violations.
C. Observed Violations
In this section, for consistency with earlier works like [3], we drop (3) and (11d). Thus, the nominal OA-SMPC computed optimal control inputs for the first time-step of the prediction horizon are implemented in closed-loop. The observed states get corrected once the uncertainties are realized, by using (1). The case where (3) and (11d) are considered in the problem formulation is discussed in Section III-F, which additionally post-processes the nominal OA-SMPC computed control inputs to correct for the uncertainty.

Without loss of generality, consider the $i$th state constraint in (4). Let $V_i(t+1) \in \{0, 1\}$ track whether the state constraint is violated in closed-loop at time $t+1$, while $Y_i(t+1) \in [0, 1]$ keeps track of the time-average of violations up to time $t+1$. Note that the control input applied at time $t$ (along with the uncertainty realized at $t$) is manifested in the updated system states, which can be observed only at $t+1$, i.e., there is a one time-step delay in observing violations (or non-violations) from the time when the control inputs and uncertainties are applied.
$$V_i(t+1) := \begin{cases} 1, & G_i\big(Ax(t) + Bu(t|t) + Ew(t)\big) > g_i, \\ 0, & G_i\big(Ax(t) + Bu(t|t) + Ew(t)\big) \le g_i. \end{cases} \tag{12a}$$
$$Y_i(t+1) := \frac{\sum_{j=1}^{t+1} V_i(j)}{t+1}. \tag{12b}$$

The framework for tracking the state constraint violations and the time-average of violations in the case of JCC is the same as that of the individual chance constraints described in (12). The only difference in the case of JCC is that a violation occurs if any one of the constraints (involved in the JCC) violates its specific state constraint bounds in closed-loop.
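A minimal sketch of the violation bookkeeping in (12) is given below (ours, with illustrative names); the running-average form avoids storing the whole history, and the JCC case simply registers a violation if any row is violated.

```python
import numpy as np

class ViolationTracker:
    """Tracks V_i and the time-average Y_i of Eq. (12) for each state constraint row."""

    def __init__(self, G, g, joint=False):
        self.G, self.g = np.asarray(G), np.asarray(g)
        self.joint = joint          # True: joint chance constraint (JCC) bookkeeping
        self.t = 0
        self.Y = np.zeros(1 if joint else len(g))

    def update(self, x_next):
        """Call once per closed-loop step with the realized state x(t+1)."""
        V = (self.G @ x_next > self.g).astype(float)   # Eq. (12a), per constraint row
        if self.joint:
            V = np.array([float(V.any())])             # JCC: any violated row counts once
        self.t += 1
        self.Y += (V - self.Y) / self.t                # running mean, Eq. (12b)
        return self.Y
```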
D. Convergence Properties of Y(t)
In this section, as in Section III-C, without loss of generality we limit our discussion to the $i$th state constraint in (4), with $i \in \mathbb{N}_1^r$ and its corresponding adaptive relaxation parameter $h_i \in \mathbb{R}_{0-}$. The conditions established in this paper for ensuring the convergence of the time-average of state constraint violations to the maximum allowable violation probability use a similar simplifying assumption as in [3]. The simplifying assumption (see Assumption 5) allows the controller to apply ideal control inputs at the current time-step, leading to a desired probability of violation of state constraints at the next time-step, under unknown bounded uncertainties.

Denote by $Z_i(t) = |\alpha_i - Y_i(t)|$ the absolute difference between the maximum allowable violation probability and the time-average of violations of the $i$th state constraint observed at time $t$. We have to ensure that $Y_i$ tends to $\alpha_i$ in closed-loop as the system evolves with time for non-conservative chance constraint satisfaction. The non-conservative strategy ideally leads to lower costs without violating the state constraints beyond the maximum allowable violation probability.

As $Z_i(t)$ is non-negative, the convergence of $Z_i$ can be guaranteed a.s. if $Z_i(t)$ is a supermartingale [28]. Following Section II-B, the three conditions for $Z_i$ being a supermartingale are investigated below:
(a) Let $\mathcal{F}_t = \sigma(\{w(0), w(1), \ldots, w(t-1)\})$ be a $\sigma$-algebra on uncertainties realized up to time $t-1$. $Z_i(t)$ depends on $Y_i(t)$, which depends on the realization of all the uncertainties up to time $t-1$. Since $Z_i(t)$ is exactly known with information available up to time $t-1$, $Z_i(t)$ is $\mathcal{F}_t$-measurable $\forall t > 0$, and is thus adapted. Also, since $Z_i(t+1)$ is random with information available in $\mathcal{F}_t$, the process $Z_i$ is stochastic.
(b) As $V_i(t) \in \{0, 1\}$, $Y_i(t) \in [0, 1]$, and $\alpha_i \in (0, 0.5)$, thus $Z_i(t) = |\alpha_i - Y_i(t)| \in [0, 1)$. Thus, $\mathbb{E}[|Z_i(t)|] < 1 < \infty$, $\forall t > 0$.
(c) It remains to show $\mathbb{E}[Z_i(t+1)\,|\,\mathcal{F}_t] \le Z_i(t)$ a.s., $\forall t > 0$, which we will show to hold under similar simplifying assumptions as in [3]. The assumption involves replacing the stochastic $Z_i(t+1)|\mathcal{F}_t$ by its ideal surrogate $Z_i^*(t+1)|\mathcal{F}_t$, as explained next.
From (12b), $Y_i(t+1)$ can be written as
$$Y_i(t+1) = \sum_{j=1}^{t+1} \frac{V_i(j)}{t+1} = \frac{tY_i(t)}{t+1} + \frac{V_i(t+1)}{t+1}. \tag{13}$$
Denoting $\mathbb{E}[Z_i(t+1)\,|\,\mathcal{F}_t] - Z_i(t)$ by $\Delta_i(t)$ and substituting (13) in $\Delta_i(t)$ results in
$$\Delta_i(t) = \mathbb{E}\left[\left|\alpha_i - \frac{tY_i(t)}{t+1} - \frac{V_i(t+1)}{t+1}\right| \,\middle|\, \mathcal{F}_t\right] - |\alpha_i - Y_i(t)|. \tag{14}$$
Let $p_i(t+1) := P(V_i(t+1) = 1 \,|\, \mathcal{F}_t)$, which means that $p_i(t+1)$ is the probability of observing a constraint violation at time $t+1$. Similarly, $1 - p_i(t+1) = P(V_i(t+1) = 0 \,|\, \mathcal{F}_t)$ is the probability of not observing a violation at time $t+1$. We notice that the only stochastic part within the expectation in the RHS of (14) is $V_i(t+1)$, based on $\mathcal{F}_t$. Therefore, (14) can be rewritten in terms of $p_i(t+1)$ to replace the expectation as
$$\Delta_i(t) = p_i(t+1)\left|\alpha_i - \frac{tY_i(t)}{t+1} - \frac{1}{t+1}\right| + \big(1 - p_i(t+1)\big)\left|\alpha_i - \frac{tY_i(t)}{t+1}\right| - |\alpha_i - Y_i(t)|. \tag{15}$$
Simplifying yields
$$\Delta_i(t) = p_i(t+1)\beta_i(t) + \left|\alpha_i - \frac{tY_i(t)}{t+1}\right| - |\alpha_i - Y_i(t)|, \tag{16}$$
$$\beta_i(t) = \left|\alpha_i - \frac{tY_i(t)}{t+1} - \frac{1}{t+1}\right| - \left|\alpha_i - \frac{tY_i(t)}{t+1}\right|. \tag{17}$$
To keep the analysis applicable to any arbitrary probability distribution of $w(t)$, we consider the sign of $\beta_i(t)$ to decide the ideal control policy [3], for which we introduce Assumption 5.

Assumption 5 (Ideal control policy). (a) There exists an ideal control policy (or ideal control input) at time $t$, applying which makes $p_i(t+1) = p_i^*(t+1)$, where $p_i^*(t+1) \in \{0, 1\}$, $\forall t$. (b) When $\beta_i(t) < 0$, we apply ideal control inputs at $t$ leading to $p_i^*(t+1) = 1$, whereas if $\beta_i(t) > 0$, we apply ideal control inputs at $t$ leading to $p_i^*(t+1) = 0$.

Definition 6 (Ideal surrogate of a variable). The manifestation of a variable under the ideal control policy in Assumption 5 is defined as the ideal surrogate of that variable. It is denoted by an asterisk after the variable.
Assumption 5(a) ensures that there exists a control input which causes a constraint violation a.s. at the next time step $t+1$ in closed-loop from any present state $x(t)$, i.e., $p_i^*(t+1) = 1$. Similarly, $p_i^*(t+1) = 0$ means that the controller can drive the system to prevent a constraint violation a.s. at $t+1$ in closed-loop. $Z_i(t)$ and $\Delta_i(t)$ under the ideal control policy are referred to as $Z_i^*(t)$ and $\Delta_i^*(t)$ respectively,⁴ consistent with Definition 6. Assumption 5(b) ensures that when $\beta_i(t) < 0$, we choose ideal control inputs leading to $p_i^*(t+1) = 1$ so that $\Delta_i^*(t)$ may be $\le 0$ a.s. Similarly, when $\beta_i(t) > 0$, Assumption 5(b) ensures that we choose ideal control inputs leading to $p_i^*(t+1) = 0$ so that $\Delta_i^*(t)$ may be $\le 0$ a.s. Note that despite the application of ideal control inputs, the second and third terms in the RHS of (16) can lead to $\Delta_i^*(t) > 0$, which is used to derive the critical region $\kappa(\alpha_i, t)$ in Theorem 1 later. The critical region signifies a region where, if $Y_i(t) \in \kappa(\alpha_i, t)$, then $\Delta_i^*(t) > 0$ a.s.

The case when $\beta_i(t) = 0$ implies that $p_i^*(t+1)$ does not have an effect on $\Delta_i(t)$, as the first term in the RHS of (16) vanishes regardless of the value of $p_i(t+1)$. From (17), it can be shown that $\beta_i(t) = 0 \Leftrightarrow Y_i(t) = \frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{t+1}}$, and further solving for $\Delta_i(t) > 0$ in (16) yields $\alpha_i > \frac{1}{2(t+1)}$, which becomes more likely to be satisfied as $t$ increases. However, the case $\beta_i(t) = 0$ can be avoided by a particular choice of $\alpha_i$. From (17),
$$\beta_i(t) = 0 \Leftrightarrow Y_i(t) = \frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{t+1}}.$$
Since $Y_i(t) = \frac{\sum_{j=1}^{t} V_i(j)}{t}$ and $\sum_{j=1}^{t} V_i(j) \in \mathbb{N}$, therefore
$$Y_i(t) = \frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{t+1}} \Leftrightarrow \sum_{j=1}^{t} V_i(j) = (t+1)\alpha_i - \frac{1}{2},$$
which can be ensured by an appropriate choice of $\alpha_i$; specifically, $(t+1)\alpha_i - \frac{1}{2} \notin \mathbb{N}$, $\forall t$. The online adaptive relaxation rule is introduced next in Section III-D1, with its behavior under practical scenarios being discussed in Section III-D2.
1) $h$ update rule: $\beta_i(t) < 0$ implies $\alpha_i - Y_i(t) + \frac{2Y_i(t) - 1}{2(t+1)} > 0$,⁵ and is associated with violation of state constraints at $t+1$, which is achieved by expanding the state constraint limits in (11e). Similarly, $\beta_i(t) > 0$ implies $\alpha_i - Y_i(t) + \frac{2Y_i(t) - 1}{2(t+1)} < 0$ and is associated with non-violation of state constraints at $t+1$, which is achieved by contracting the limits in (11e). From Assumption 4, the $h_i$ update rule can be framed with $K_i(t) \propto \alpha_i - Y_i(t) + \frac{2Y_i(t) - 1}{2(t+1)}$ as
$$h_i(t) = h_i(t-1)\left[1 + \frac{\alpha_i - Y_i(t) + \frac{2Y_i(t)-1}{2(t+1)}}{\gamma_i}\right], \tag{18}$$
where $\gamma_i \in \mathbb{R}_{0+}$ is a constant of proportionality that adjusts the rate of the $h_i$ update, ensuring $\frac{\alpha_i - Y_i(t) + \frac{2Y_i(t)-1}{2(t+1)}}{\gamma_i} > -1$, $\forall t$. Note that Theorem 1, discussed next, implicitly assumes that (18) is able to enforce Assumption 5. However, it may be possible to devise other $h$ update rules enforcing Assumption 5, wherein Assumption 4 (and consequently the use of (18)) can be relaxed.

⁴$Z_i^*(t) = |\alpha_i - Y_i^*(t)|$ and $\Delta_i^*(t) = Z_i^*(t+1) - Z_i(t)$.

⁵$\beta_i(t) = |\zeta_i(t) - \frac{1}{t+1}| - |\zeta_i(t)|$, where $\zeta_i(t) = \alpha_i - \frac{tY_i(t)}{t+1}$. Solving for $\beta_i(t) < 0$ leads to $\zeta_i(t) > \frac{1}{2(t+1)}$, which implies $\alpha_i - Y_i(t) + \frac{2Y_i(t)-1}{2(t+1)} > 0$.

In practical applications, while (18) cannot guarantee satisfaction of Assumption 5 in closed-loop, it still encourages it (see Section III-D2).
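A minimal sketch of the update rule (18), applied element-wise after each closed-loop step, is given below (ours; names are illustrative, and the choice of gamma is application specific).

```python
import numpy as np

def update_h(h_prev, alpha, Y, t, gamma):
    """Adaptive relaxation update of Eq. (18); arguments are per-constraint arrays.
    h_prev < 0 is the previous relaxing parameter, Y the time-average of violations
    up to time t, alpha the allowed violation probability, gamma > 0 the rate constant."""
    K = (alpha - Y + (2.0 * Y - 1.0) / (2.0 * (t + 1.0))) / gamma
    # Assumption 4 requires K > -1 so that h stays negative; gamma should be chosen
    # large enough for this to hold (the clipping below is our own safeguard).
    K = np.maximum(K, -1.0 + 1e-6)
    return h_prev * (1.0 + K)
```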
Theorem 1. Let Assumptions 1, 2, 3, and 5 hold. Given $\alpha_i > \frac{1}{2(t_0+1)}$, $Z_i^*(t)$, which is the ideal surrogate of the desired supermartingale $Z_i(t)$, is monotonically decreasing a.s. $\forall t \ge t_0$, if and only if $Y_i(t) \notin \kappa(\alpha_i, t)$, $\forall t \ge t_0$, where $\kappa(\alpha_i, t)$ is a neighborhood of $\alpha_i$ defined as
$$\kappa(\alpha_i, t) := \left(\frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{2(t+1)}}, \; \frac{\alpha_i}{1 - \frac{1}{2(t+1)}}\right).$$

Proof. Theorem 1 says that given $\alpha_i > \frac{1}{2(t_0+1)}$, $P \Leftrightarrow Q$, where $P$ is defined as $\Delta_i^*(t) \le 0$ a.s., $\forall t \ge t_0$, and $Q$ is defined as $Y_i(t) \notin \kappa(\alpha_i, t)$, $\forall t \ge t_0$. To prove $P \Leftrightarrow Q$, we will first prove the inverse as true, i.e., $\neg P \Rightarrow \neg Q$, which implies $Q \Rightarrow P$. Then we will prove the contrapositive as true, i.e., $\neg Q \Rightarrow \neg P$, which implies $P \Rightarrow Q$, which will complete the proof.

To prove $\neg P \Rightarrow \neg Q$, assume $\neg P$ is true, i.e., $\Delta_i^*(t) > 0$ a.s. To show that $\neg Q$ holds, we consider Cases 1, 2 and 3 based on the sign of $\beta_i(t)$ below.

Case 1.
$$\beta_i(t) < 0 \Rightarrow p_i^*(t+1) = 1 \ \Leftrightarrow\ \alpha_i - Y_i(t) + \frac{2Y_i(t)-1}{2(t+1)} > 0 \ \Leftrightarrow\ Y_i(t) < \frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{t+1}}. \tag{19a}$$
Solving for $\Delta_i^*(t) > 0$ when $p_i^*(t+1) = 1$ yields
$$Y_i(t) > \frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{2(t+1)}}. \tag{19b}$$
Equation (19) implies the critical region of Case 1 is $\kappa_1(\alpha_i, t) = \left(\frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{2(t+1)}}, \frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{t+1}}\right)$, implying that if $Y_i(t) \in \kappa_1(\alpha_i, t)$, despite applying ideal control inputs to the system at $t$ leading to $p_i^*(t+1) = 1$, $\Delta_i^*(t) > 0$ a.s.

Case 2.
$$\beta_i(t) > 0 \Rightarrow p_i^*(t+1) = 0 \ \Leftrightarrow\ \alpha_i - Y_i(t) + \frac{2Y_i(t)-1}{2(t+1)} < 0 \ \Leftrightarrow\ Y_i(t) > \frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{t+1}}. \tag{20a}$$
Solving for $\Delta_i^*(t) > 0$ when $p_i^*(t+1) = 0$ yields
$$Y_i(t) < \frac{\alpha_i}{1 - \frac{1}{2(t+1)}}. \tag{20b}$$
Equation (20) implies the critical region of Case 2 is $\kappa_2(\alpha_i, t) = \left(\frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{t+1}}, \frac{\alpha_i}{1 - \frac{1}{2(t+1)}}\right)$, implying that if $Y_i(t) \in \kappa_2(\alpha_i, t)$, despite applying ideal control inputs to the system at $t$ leading to $p_i^*(t+1) = 0$, $\Delta_i^*(t) > 0$ a.s.

Case 3. $\beta_i(t) = 0$ leads to $\Delta_i^*(t) > 0$ a.s. because $\alpha_i > \frac{1}{2(t_0+1)} \ge \frac{1}{2(t+1)}$, $\forall t \ge t_0$. Additionally, $\beta_i(t) = 0 \Leftrightarrow Y_i(t) = \frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{t+1}} =: \kappa_3(\alpha_i, t)$.

The total critical region from Cases 1, 2 and 3 yields
$$\kappa(\alpha_i, t) = \kappa_1(\alpha_i, t) \cup \kappa_2(\alpha_i, t) \cup \kappa_3(\alpha_i, t) = \left(\frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{2(t+1)}}, \; \frac{\alpha_i}{1 - \frac{1}{2(t+1)}}\right),$$
which shows that if $\Delta_i^*(t) > 0$ a.s., then $Y_i(t) \in \kappa(\alpha_i, t)$, which proves $\neg P \Rightarrow \neg Q$.

To prove $\neg Q \Rightarrow \neg P$, assume $\neg Q$, i.e., $Y_i(t) \in \kappa(\alpha_i, t)$. $\neg Q$ is subdivided into Cases 4, 5 and 6.

Case 4. $\beta_i(t) < 0 \Rightarrow p_i^*(t+1) = 1$, which yields (19a). (19a) and $Y_i(t) \in \kappa(\alpha_i, t)$ yield $Y_i(t) \in \left(\frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{2(t+1)}}, \frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{t+1}}\right) = \kappa_1(\alpha_i, t)$. However, from Case 1, we know that when $p_i^*(t+1) = 1$ and $Y_i(t) \in \kappa_1(\alpha_i, t)$, then $\Delta_i^*(t) > 0$ a.s.

Case 5. $\beta_i(t) > 0 \Rightarrow p_i^*(t+1) = 0$, which yields (20a). (20a) and $Y_i(t) \in \kappa(\alpha_i, t)$ yield $Y_i(t) \in \left(\frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{t+1}}, \frac{\alpha_i}{1 - \frac{1}{2(t+1)}}\right) = \kappa_2(\alpha_i, t)$. However, from Case 2, we know that when $p_i^*(t+1) = 0$ and $Y_i(t) \in \kappa_2(\alpha_i, t)$, then $\Delta_i^*(t) > 0$ a.s.

Case 6. $\beta_i(t) = 0 \Leftrightarrow Y_i(t) = \kappa_3(\alpha_i, t)$, which leads to $\Delta_i^*(t) > 0$ a.s. because $\alpha_i > \frac{1}{2(t+1)}$.

Cases 4, 5 and 6 together prove $\neg Q \Rightarrow \neg P$, which completes the proof.
Remark 4 (Extension of Theorem 1). Theorem 1 establishes the conditions necessary and sufficient for $\Delta_i^*(t) \le 0$ a.s., $\forall t \ge t_0$, which implies that $Z_i^*(t)$ is monotonically decreasing a.s., $\forall t \ge t_0$. From Cases 1, 4, and Cases 2, 5, if the critical region is instead defined by the closed interval $\bar{\kappa}(\alpha_i, t) := \left[\frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{2(t+1)}}, \; \frac{\alpha_i}{1 - \frac{1}{2(t+1)}}\right]$, then $\Delta_i^*(t) < 0$ a.s., $\forall t \ge t_0$, which implies that $Z_i^*(t)$ is strictly decreasing a.s., $\forall t \ge t_0$. As $Z_i^*(t)$ is bounded from below by 0, we conclude $\lim_{t \to \infty} Z_i^*(t) = \inf\{Z_i^*(t)\} = 0$ a.s. from the monotone convergence theorem, which implies the asymptotic convergence of $Y_i^*$ to $\alpha_i$ a.s., if $\alpha_i > \frac{1}{2(t_0+1)}$, $p_i(t+1) = p_i^*(t+1)$, and $Y_i(t) \notin \bar{\kappa}(\alpha_i, t)$, $\forall t \ge t_0$.⁶

⁶Note that $Y_i(t_0)$ need not necessarily be $Y_i^*(t_0)$.

Remark 5 (Width of the critical region [3]). The width of the critical region $\kappa(\alpha_i, t)$ is $\frac{1}{2t+1}$, while $Y_i(t) \in \{0, \frac{1}{t}, \frac{2}{t}, \ldots, 1\}$. The granularity in possible values of $Y_i(t)$ is $1/t$, $\forall t$. As $\frac{1}{2t+1} < \frac{1}{t}$, there is at most one critical value of $Y_i(t)$ such that, despite adapting the system to take ideal control inputs leading to $p_i(t+1) = p_i^*(t+1)$, it is possible that $\Delta_i^*(t) \ge 0$, which can lead to the deviation of $Y_i^*$ from $\alpha_i$ momentarily. However, the width of the critical region monotonically decreases as $t$ increases; therefore, the assumption of $Y_i(t) \notin \kappa(\alpha_i, t)$ is weak and can be easily satisfied as $t$ increases. The width of the critical region in the present work is also smaller than in [3], implying more relaxed conditions for guaranteeing convergence of $Y_i^*$ to $\alpha_i$.
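As a quick numerical illustration (ours, not from the paper): with $\alpha_i = 0.05$ and $t = 99$,
$$\kappa(0.05, 99) = \left(\frac{0.05 - \tfrac{1}{200}}{1 - \tfrac{1}{200}}, \; \frac{0.05}{1 - \tfrac{1}{200}}\right) \approx (0.0452, \; 0.0503),$$
whose width is $1/199 \approx 0.0050$, while the attainable values of $Y_i(99)$ are spaced $1/99 \approx 0.0101$ apart, so at most one attainable value of the time-average can fall inside the critical region.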
Lemma 1. Let Assumptions 1, 2, 3, and 5 hold. Given any initial time $t_0$, with $\alpha_i > \frac{1}{2(t_0+1)}$ and $Y_i(t_0) \notin \kappa(\alpha_i, t_0)$, there exists some time $t' = \inf\{t > t_0 \,|\, Y_i^*(t) \in \kappa(\alpha_i, t)\}$ a.s.

Proof. The lemma states that if the initial time-average of violations of state constraints is outside of the critical region, and we apply ideal control inputs from thereon, then at some future time $t'$, the time-average of violations comes inside the critical region a.s. Lemma 1 implies the a.s. strict decrease of $Z_i^*(t)$ is violated for $t \ge t'$ (see Remark 4). Lemma 1 can be proved by showing that when $Z_i^*(t)$ decreases, then $\frac{\delta \hat{w}_i(t)}{\delta t} > \frac{\delta Z_i^*(t)}{\delta t}$, where $\hat{w}_i(t)$ is the width of the critical region at time $t$, and $\delta$ is the forward finite difference operator. $\hat{w}_i(t) := \frac{1}{2t+1}$, implying $\delta \hat{w}_i(t) := \hat{w}_i(t+1) - \hat{w}_i(t) = \frac{-2}{(2t+1)(2t+3)}$, where $\delta t := 1$. Thus, $\hat{w}_i(t)$ decreases in the order of $O(\frac{1}{t^2})$.

Similarly, $\delta Z_i^*(t) := Z_i^*(t+1) - Z_i(t) = \Delta_i^*(t)$. $\Delta_i^*(t)$ can be subdivided into Cases 1 and 2.

Case 1. When $p_i^*(t+1) = 1$, $\Delta_i^*(t) = \left|\alpha_i - \frac{tY_i(t)}{t+1} - \frac{1}{t+1}\right| - |\alpha_i - Y_i(t)|$.

Case 2. When $p_i^*(t+1) = 0$, $\Delta_i^*(t) = \left|\alpha_i - \frac{tY_i(t)}{t+1}\right| - |\alpha_i - Y_i(t)|$.

Substituting $Y_i(t) = \frac{\sum_{j=1}^{t} V_i(j)}{t}$, where $\sum_{j=1}^{t} V_i(j) \le t$, in both Cases 1 and 2, we see that when $Z_i^*(t)$ decreases a.s. by virtue of $Y_i(t) \notin \kappa(\alpha_i, t)$, then $Z_i^*(t)$ decreases most modestly (i.e., with the least magnitude of decrease) in the order of $O(\frac{1}{t})$, which proves that $\frac{\delta \hat{w}_i(t)}{\delta t} > \frac{\delta Z_i^*(t)}{\delta t}$, which completes the proof.
Lemma 2. Let Assumptions 1, 2, 3, and 5 hold. Given any initial time $t_0$, with $\alpha_i > \frac{1}{2(t_0+1)}$ and $Y_i(t_0) \in \kappa(\alpha_i, t_0)$, there exists some time $t' = \inf\{t > t_0 \,|\, Y_i^*(t) \notin \kappa(\alpha_i, t)\}$ a.s.

Proof. The lemma states that if the initial time-average of violations of state constraints is inside the critical region, and we apply ideal control inputs from thereon, then at some future time $t'$, the time-average of violations goes outside of the critical region a.s. Lemma 2 prevents the monotonic increase of $Z_i^*(t)$ for $t \ge t'$. When $Y_i(t) \in \kappa(\alpha_i, t)$, $\frac{\delta Z_i^*(t)}{\delta t} \ge 0$, while $\frac{\delta \hat{w}_i(t)}{\delta t} < 0$, thus leading to $\frac{\delta Z_i^*(t)}{\delta t} > \frac{\delta \hat{w}_i(t)}{\delta t}$, which completes the proof.
Theorem 2. Let Assumptions 1, 2, 3, and 5 hold. Given any initial time $t_0$ with $\alpha_i > \frac{1}{2(t_0+1)}$, and time-average of violations $Y_i(t_0)$, $Y_i^*$ asymptotically converges to $\alpha_i$ a.s.

Proof. The theorem states that if ideal control inputs are applied to the system starting from any arbitrary time $t_0$ with $\alpha_i > \frac{1}{2(t_0+1)}$, then the time-average of violations converges to the maximum probability of violations of state constraints a.s. The proof follows from Theorem 1, Remarks 4 and 5, and Lemmas 1, 2, and relaxes the assumption of $Y_i(t) \notin \kappa(\alpha_i, t)$, $\forall t \ge t_0$, of Remark 4. Theorem 2 also rigorously proves the result which was intuitively argued in [3].

From Remark 5, the width of the critical region $\hat{w}_i(t)$ is $\frac{1}{2t+1}$, which monotonically decreases with time. As the critical region $\kappa(\alpha_i, t)$ is a neighborhood of $\alpha_i$ by construction, decrease of $\hat{w}_i(t)$ implies decrease of the width of the neighborhood. $\lim_{t \to \infty} \hat{w}_i(t) = 0$, which implies $\lim_{t \to \infty} \kappa(\alpha_i, t) = \alpha_i$, i.e., the critical region becomes a point. With $\alpha_i > \frac{1}{2(t_0+1)}$ and $p_i(t+1) = p_i^*(t+1)$, $\forall t \ge t_0$, two cases are possible:

Case 1. When $Y_i(t_0) \notin \kappa(\alpha_i, t)$, Lemma 1 concludes that there exists some $t' = \inf\{t > t_0 \,|\, Y_i^*(t) \in \kappa(\alpha_i, t)\}$ a.s., which implies that $Y_i^*(t')$ is closer to $\alpha_i$ as compared to $Y_i(t_0)$ a.s. (see Remark 4). Then, from Lemma 2, we can conclude that there exists some $t'' = \inf\{t > t' \,|\, Y_i^*(t) \notin \kappa(\alpha_i, t)\}$ a.s., which implies that $Y_i^*(t'')$ is no closer to $\alpha_i$ as compared to $Y_i^*(t')$ a.s. (see Remark 4). The process repeats, with $Y_i^*$ coming in and out of the critical region a.s. as time progresses. However, as the width of the critical region vanishes as $t \to \infty$, making the critical region the point $\alpha_i$, we conclude $Y_i^*$ converges to $\alpha_i$ a.s. as $t \to \infty$.

Case 2. When $Y_i(t_0) \in \kappa(\alpha_i, t)$, a similar argument can be made as in Case 1 involving Lemmas 1 and 2, with $Y_i^*$ coming in and out of the critical region a.s. until ultimately converging to $\alpha_i$ a.s. as $t \to \infty$.

Both Cases 1 and 2 complete the proof.
2) Deviation of the practical control policy from the ideal control policy: Remark 5 establishes that the assumption of $Y_i(t) \notin \kappa(\alpha_i, t)$ is weak and can be easily satisfied as $t$ increases. Thus, given $Y_i(t) \notin \kappa(\alpha_i, t)$, if at some time $t$, $Y_i(t) < \frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{2(t+1)}}$, then $Y_i(t) < \frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{t+1}} < \alpha_i$ holds, which results in $h_i(t) < h_i(t-1)$ on applying (18). The decrease of $h_i$ expands the feasible state set for $x(t+1)$ in (11) and thereby encourages constraint violations at $t+1$ in closed-loop. While under the ideal control policy (Assumption 5), $h_i(t) < h_i(t-1)$ would have guaranteed constraint violation at time $t+1$ a.s., in practical scenarios, the expansion of the feasible state set alone cannot guarantee violation. A similar argument can be made when $Y_i(t) > \frac{\alpha_i}{1 - \frac{1}{2(t+1)}}$, which by virtue of $Y_i(t) > \alpha_i > \frac{\alpha_i - \frac{1}{2(t+1)}}{1 - \frac{1}{t+1}}$ and (18) leads to $h_i(t) > h_i(t-1)$, contracting the feasible state set for $x(t+1)$ and thereby discouraging constraint violations at $t+1$ in closed-loop. Thus, under deviation of the practical control policy from the ideal one, while asymptotic convergence of $Y_i$ to $\alpha_i$ cannot be guaranteed, it is still encouraged by (18).
E. Asymptotic Behavior of the Practical System
It is important to determine the asymptotic behavior (i.e., as $t \to \infty$) of the practical system in which the simplifying assumption of applying an ideal control policy referred to in Assumption 5 is dropped.

Theorem 3. Let Assumptions 1, 2, and 3(a) hold. The expected value of $Y_i(t+1)|\mathcal{F}_t$ asymptotically converges to $Y_i(t)$.

Proof. Applying the limit $t \to \infty$ to (16), and rearranging to bypass the indeterminate $\frac{\infty}{\infty}$ form, we get
$$\lim_{t \to \infty} \Delta_i(t) = \lim_{t \to \infty} \left\{ p_i(t+1)\left[\left|\alpha_i - \frac{Y_i(t)}{1 + \frac{1}{t}} - \frac{1}{t+1}\right| - \left|\alpha_i - \frac{Y_i(t)}{1 + \frac{1}{t}}\right|\right] + \left|\alpha_i - \frac{Y_i(t)}{1 + \frac{1}{t}}\right| - |\alpha_i - Y_i(t)| \right\}. \tag{21}$$
As $p_i(t+1) \in [0, 1]$, substituting $t \to \infty$ in (21) yields $\lim_{t \to \infty} \Delta_i(t) = 0$, which implies either:

Case 1. $\lim_{t \to \infty} Y_i(t) = \lim_{t \to \infty} \mathbb{E}[Y_i(t+1)|\mathcal{F}_t]$.

Case 2. $\lim_{t \to \infty} Y_i(t) = \alpha_i + l$ and $\lim_{t \to \infty} \mathbb{E}[Y_i(t+1)|\mathcal{F}_t] = \alpha_i - l$, where $l \in [-\alpha_i, \alpha_i]$.

Taking the expectation on both sides of (13) yields
$$\mathbb{E}[Y_i(t+1)|\mathcal{F}_t] = \frac{tY_i(t)}{t+1} + \frac{\mathbb{E}[V_i(t+1)|\mathcal{F}_t]}{t+1}. \tag{22}$$
Taking the limit $t \to \infty$ in (22), and substituting the values of $\lim_{t \to \infty} Y_i(t)$ and $\lim_{t \to \infty} \mathbb{E}[Y_i(t+1)|\mathcal{F}_t]$, yields
$$\alpha_i - l = \lim_{t \to \infty} \frac{\alpha_i + l}{1 + \frac{1}{t}} + \lim_{t \to \infty} \frac{\mathbb{E}[V_i(t+1)|\mathcal{F}_t]}{t+1}. \tag{23}$$
As $\mathbb{E}[V_i(t+1)|\mathcal{F}_t] = p_i(t+1) \in [0, 1]$, evaluating the limit in (23) yields $l = 0$. Hence Case 2 implies Case 1 (but not the other way around).

Case 1 completes the proof.

The above proof shows that the time-average of the state constraint violations has a martingale-like behavior asymptotically, which may be useful for practical operation of the proposed OA-SMPC to avoid unpredictable violation behavior in the long run as the system evolves.
F. Post-processing for Real-time System Operation
In previous works such as [3], after the nominal MPC
computed optimal control inputs for the first time-step of
the prediction horizon are implemented, the observed states
get corrected in closed-loop to account for real-time un-
certainties by (1). Accommodation of the entire uncertainty
(uncertainty in weather forecast) by the state is reasonable for
building climate control applications where the state (room
temperature) and control input (heating/cooling effect from
the air-conditioner) are not coupled to the same physical
equipment [3]. However, in certain applications where the
states and control inputs are coupled to the same equipment
(like BESS, where state of charge (SOC) is the state, and
charging/discharging power is a control input), the first time-
step optimal control inputs (computed by the nominal MPC)
can also be post-processed in real-time to correct for the
realized uncertainty. For example, the BESS can alter its
nominal MPC computed optimal control inputs and states to
correct for the VRES and gross load forecast uncertainty in
real-time to maintain power balance of the MG with the main
grid.
Assumption 6 (Post-processing to correct for uncertainties
in real-time).For each chance constrained state affected by
uncertainties: (a) There are two mutually coupled sources of
control with one source having the primary responsibility of
handling the uncertainties in closed-loop. During correction
of the observed states in closed-loop to account for the
uncertainty, the feasible altered control input and state set
is the same as defined by s1:q(11c)and g1:rh1:r(t)(11e)
respectively, which are the time-varying design limitations. The
secondary control source always has enough control input
11
available to handle the remaining part of the uncertainty
(through satisfaction of (3)) that cannot be handled by the
primary source due to the possibility of violation of the time-
varying design limitations by the primary control source in
closed-loop, (b) The state transition is not dependent on the
secondary control source, (c) The primary and secondary
control sources are coupled only to each other, (d) The state
transition is not dependent on the control sources associated
with other states.
s1:q is a hard constraint and does not vary with time, but as h1:r(t) is time-varying, we refer to these design limitations as time-varying when considered together. After updating the states and control inputs, h(t+1) is updated by (18), and the optimization in (11) is repeated. The schematic diagram of the complete OA-SMPC operational framework with post-processing is shown in Fig. 1, with the algorithm presented in Algorithm 1. The results from Theorems 1, 2 and 3 are
independent of the post-processing framework, and still hold
with post-processing. Additionally, the restrictive one step
controllability in Assumption 3(b) can be relaxed for practical
systems while still maintaining the structure of the input matrix
in Remark 2 due to Assumption 6.
Note that the computational cost of the proposed OA-SMPC is the same as that of a nominal MPC, with the h update happening outside of the MPC framework, which makes the present method highly scalable for practical implementation. Also, similar to [3, Sec. VII], as the linear structure of the system is not leveraged for Theorems 1, 2 and 3, the results hold for nonlinear systems too, provided the relevant assumptions hold.
Figure 1. OA-SMPC operational framework with post-processing.
Example. To demonstrate post-processing, consider a simple system with x(t) = [x1(t)] as the state subjected to chance constraints, control inputs u(t) = [u1(t) u2(t)]ᵀ, where u1 and u2 are the primary and secondary control sources respectively, and the uncertainty is denoted by w(t). The system matrices are A, B = [B11 0] and E. The structure of B follows from Assumption 6(b). u1(t) and u2(t) are coupled by (3), following Assumption 6(c). Let ˜x1(t+1) and ˜u1 be the state and primary control upper limits based on the time-varying design limitations following Assumption 6(a). The
system described is similar to a MG with BESS, VRE, local
load demand and grid connectivity, where the BESS SOC is
the state, BESS charging/discharging power is the primary
control source, and the grid import power is the secondary
control source. Both the control sources are coupled by the
grid power balance equation, while the uncertainty is the real-
time VRES and load forecast error.
When correcting for the uncertainties in closed-loop, first
it is ensured that the primary control source handles only the
part of the uncertainty that still keeps it within the feasible
set. The altered primary control input can be formulated as,
u1(t) := min{u∗1(t|t) + D1 w(t), ˜u1},   (24a)
where D = B†E, and u∗1(t|t) is the optimal MPC computed
primary control input. The state update equation using As-
sumptions 6(a) and 6(b) is formulated as
x1(t+1) = min{A x1(t) + B11 u1(t), ˜x1(t+1)}.   (24b)
If from (24b), x1(t+1) = ˜x1(t+1), we re-compute u1(t) = (B11)⁻¹(˜x1(t+1) − A x1(t)). Finally, the altered secondary control input u2(t) is computed by satisfying (3).
In the case of the time-varying design limitations giving the lower limits of the state and primary controls, ˜x1(t+1) and ˜u1 respectively, (24a) and (24b) are modified by replacing the min by the max function. Assumptions 6(a) and 6(b)
ensure that the post-processing steps give unique solutions,
and Assumptions 6(c) and 6(d) ensure that the correction for
uncertainties affecting a chance constrained state and related
control inputs does not unnecessarily affect other states and
control inputs which may lead to inconsistency in the post-
processed solutions. After post-processing, the state constraint
violations in closed-loop are tracked as (25) with time-average
of violations calculated similar to (12b).
V1(t+1) := { 1, if G1 x(t+1) > g1;  0, if G1 x(t+1) ≤ g1 }.   (25)
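As a concrete illustration of the post-processing steps above, the following Python sketch implements (24a), (24b), the re-computation of u1, the coupling-based update of u2, and the violation indicator (25) for the scalar example. The numerical values, the power-balance form of the coupling constraint (3), and the restriction to upper limits are simplifying assumptions, not the paper's exact implementation.

```python
# Minimal sketch of post-processing (24a), (24b), and (25); values are illustrative.

A, B11, E = 1.0, 0.25 / 2500.0, 0.25 / 2500.0   # example scalars (dt / BESS energy capacity)
u1_tilde = 700.0                                 # primary control upper limit (assumed)
D1 = E / B11                                     # maps the uncertainty into the primary control

def post_process(x1, u1_star, u2_star, w, x1_tilde_next, g1):
    """Correct the nominal MPC solution for the realized uncertainty w."""
    # (24a): the primary source absorbs the uncertainty, clipped to its upper limit
    u1 = min(u1_star + D1 * w, u1_tilde)
    # (24b): state update, clipped to the time-varying state upper limit
    x1_next = min(A * x1 + B11 * u1, x1_tilde_next)
    # If the state limit is active, recompute the primary control consistently
    if x1_next == x1_tilde_next:
        u1 = (x1_tilde_next - A * x1) / B11
    # The secondary source picks up the remainder through the coupling (3),
    # written here as the microgrid power balance u1 - u2 = c + w.
    c = u1_star - u2_star
    u2 = u1 - (c + w)
    # (25): track a violation of the original state constraint
    V1 = 1 if x1_next > g1 else 0
    return x1_next, u1, u2, V1
```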
Remark 6 (Significance of the post-processing framework).
In related previous works with two mutually coupled sources of
control to handle uncertainties on chance constrained states
such as [17], the time-varying design limitations as that of
Assumption 6(a) are not considered. Large primary control
inputs to handle large uncertainties can, thus, potentially lead
to damaging the primary controller in [17]. In other works
like [3], MPC computed control inputs are not altered to
account for uncertainties, making the state handle the entire
uncertainty in closed-loop. Large uncertainties can thus steer
the system away from the OA-SMPC feasible region, which due
to Assumption 3(a) can lead to the application of expensive
control input at the next time-step to bring the system back to
feasibility. Such an expensive control input can lead to high
economic cost, a problem not considered in both [3] and [17].
Remark 7 (Non-conservative chance constraint satisfac-
tion in a ‘practical sense’ in closed-loop even under
time-varying uncertainty distribution, and repeated large
uncertainties). As the nominal OA-SMPC always has access to relaxed feasible states, it is possible for the predicted nominal solutions to always violate the (original) state constraints ∀k, t (as satisfaction of (9a) is not mandatory in our formulation). In closed-loop, thus, if the constraints are continuously violated more than the maximum prescribed level, hi increases until hi ≥ 0 according to (18). Mathematical violations can still persist in Vi(t+1) after hi ≥ 0, which fails to give the benefit of adaptive relaxation. These violations can be ignored during practical implementation by adding a small ε > 0 to gi, resulting in considering violations only if Gi x(t+1) > gi + ε. The consideration of ε in tracking violations is an operational step and can be removed by the user when hi decreases sufficiently to relax the state constraint and reduce conservatism. This adaptive behavior of our system along with post-processing ensures that the chance constraints in (4) are satisfied in a ‘practical sense’
in closed-loop, which previous works [3], [17], [25] may not
be able to satisfy under repeated large uncertainties when the
mixed worst-case disturbance sequence is not known a-priori,
or if the uncertainty distribution changes with time. Note
that the mixed worst-case disturbance sequence computed via
scenarios of uncertainty in [3], [17] may in itself be over-
conservative [31].
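A small sketch of the operational violation check described in Remark 7 is given below, assuming the ε tolerance is applied only while hi ≥ 0; the switching rule and the value of ε are user choices, not prescribed by the paper.

```python
def track_violation(Gi_x_next, gi, hi, eps=1e-3):
    """Closed-loop violation indicator with the practical tolerance of Remark 7.
    Dropping eps once hi < 0 (i.e., once the constraint is relaxed again) is an
    assumed operational choice."""
    tol = eps if hi >= 0 else 0.0
    return 1 if Gi_x_next > gi + tol else 0
```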
Algorithm 1 Online Adaptive SMPC (OA-SMPC)
Initialization
1. Choose x(0).
2. Choose h(0) from domain knowledge.
Online solution
1. Solve (11) to get u∗(t|t).
2a. If post-processing of nominal OA-SMPC computed optimal control inputs is not allowed: Set u(t) ← u∗(t|t) and account for the uncertainties in x(t+1) by (1). Calculate V(t+1) from (12a).
2b. If post-processing of nominal OA-SMPC computed optimal control inputs is allowed: Post-process by (24) and (3) to account for the uncertainties and calculate x(t+1) and u(t). Calculate V(t+1) similar to (25).
3. Calculate Y(t+1) from (12b).
4. Calculate h(t+1) from (18).
5. Set t ← t+1 and repeat from Step 1 of Online solution.
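For orientation, a Python skeleton of the online loop in Algorithm 1 (with post-processing enabled) might look as follows. Here solve_nominal_mpc and post_process stand in for the optimization (11) and steps (24)/(3), and the proportional form used for the relaxation update is an assumption, since the exact rule (18) is not restated here.

```python
import numpy as np

def oa_smpc_loop(x0, h0, alpha, gamma, T, solve_nominal_mpc, post_process):
    """Skeleton of Algorithm 1; the helper functions are assumed, not library calls."""
    x, h = x0, np.array(h0, dtype=float)
    V_sum = np.zeros_like(h)                      # running count of violations
    Y = np.zeros_like(h)
    for t in range(T):
        u_star = solve_nominal_mpc(x, h)          # Step 1: solve (11)
        x_next, u, V = post_process(x, u_star)    # Step 2b: correct for uncertainty via (24) and (3)
        V_sum += V
        Y = V_sum / (t + 1)                       # Step 3: time-average of violations (12b)
        h = h - (alpha - Y) / gamma               # Step 4: adaptive relaxation; the exact
                                                  # form of update (18) is assumed here
        x = x_next                                # Step 5: advance to the next time-step
    return x, h, Y
```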
IV. CASE STUDY
A. Overview
In this section, we implement our proposed method (OA-
SMPC) for simulating the optimal BESS dispatch strategy for
a practical MG with PV, load and connection to the main
grid in an EMPC framework. The MG setup is from the real-
life MG at the Port of San Diego, described in [18]. The
MG model incorporates electricity prices with demand and
energy charges, realistic load and PV forecast, and a post-
processing step for incorporating the forecast uncertainties
in BESS dispatch. The yearly (2019) electricity costs of the MG were compared for the traditional MPC with hard constraints on the state (BESS SOC) and our OA-SMPC method with chance constraints on the state. The motivation
for using chance constraints on BESS SOC in the OA-SMPC is
to leverage some extra BESS capacity to reduce demand peaks
and thus, demand charges, leading to significant electricity cost
savings, while staying within a maximum violation probability
bound to avoid adverse effects on BESS life. Additionally,
we also compare our proposed OA-SMPC to the traditional
EMPC method without chance constraints, and a state-of-the-
art approach [3] from the literature with chance constraints
and similar computational cost.
B. PV and Load Forecast
The MG model uses day-ahead PV and gross load forecasts
with 15 minute time resolution as inputs. The k-Nearest
Neighbor (kNN) algorithm is used for the gross load forecast.
The training data comprise 15-min resolution historical load observations from November 1, 2018 to November 20, 2019. For every MPC horizon, gross load observations for the previous 24 h (feature vector) are compared with the training sample and the k = 29 nearest neighbors are identified by the kNN algorithm. Finally, the gross load forecast is calculated by averaging the gross load of the selected neighbors at every time-step of the forecast horizon [32]. The root mean square error (RMSE), mean absolute error (MAE), and mean bias error (MBE) for the gross load forecast for the entire year are 22.4 kW, 17.3 kW, and 1.2 kW, respectively.
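A sketch of this kNN forecast using scikit-learn is shown below; the array layout and variable names are assumptions, with each training sample pairing a past-24 h feature window with the 24 h of load that followed it.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_load_forecast(hist_features, hist_targets, last_24h, k=29):
    # hist_features: (n_samples, 96) past-24 h windows from the training period
    # hist_targets:  (n_samples, 96) the 24 h of load that followed each window
    # last_24h:      (96,) most recent observations forming the query feature vector
    nn = NearestNeighbors(n_neighbors=k).fit(hist_features)
    _, idx = nn.kneighbors(np.asarray(last_24h).reshape(1, -1))
    # Forecast = average of the selected neighbors at each step of the horizon
    return hist_targets[idx[0]].mean(axis=0)
```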
The PV generation forecast for the upcoming 24 h utilizes the kNN (with k = 30) algorithm as well. The feature vector is formed by three data sets: numerical weather prediction (NWP) model forecasts for the upcoming 24 h, the averages of the preceding 1, 2, 3, and 4 h PV power generation, and the
current time of the day. The NWP forecast utilized was the
High-Resolution Rapid Refresh (HRRR) model developed by
NOAA [33]. The PV power generation dataset was obtained
by running simulations for the PV plant in the Solar Advisor
Model (SAM) using the irradiance observation data obtained
from the NSRDB database as an input. The training sample
for PV power generation forecast includes NWP forecast and
irradiance observations between January 1, 2019 and December 31, 2019. The RMSE, MAE and MBE for the PV forecast for the entire year are 20.3 kW, 9.1 kW, and 0.8 kW, respectively.
C. Microgrid (MG) Model
The system state x(t) = [x1(t)] is the BESS state of charge (SOC). The control input is u(t) = [u1(t) u2(t)]ᵀ, where u1(t) is the BESS dispatch power (primary control source for handling uncertainty), and u2(t) is the grid import power (secondary control source for handling uncertainty). u1(t) > 0 denotes charging, while u2(t) > 0 denotes power import from the main grid to the MG. The PV generation and gross load are denoted by PV(t) and L(t) respectively and are used as forecast inputs to the MPC. The uncertainty w(t) = [w1(t)] is the difference between the gross load and PV generation forecast uncertainties, i.e., w1(t) = (Lf(t) − Lr(t)) − (PVf(t) − PVr(t)), where the superscripts f and r denote forecasted and real values. The MPC prediction horizon is one day ahead, subdivided into N = 96 equal time-steps of Δt = 0.25 h (15 minutes) each.
The system matrices are A = [1], B = [Δt/BESSen  0], and E = [Δt/BESSen], where BESSen is the energy capacity of the BESS. The system matrices handle the SOC update of the battery due to charging/discharging. For the hard control input constraints, S = [1 0; −1 0] and s = [BESSmax  BESSmax]ᵀ, which constrain the maximum charging/discharging power of the BESS. For the time-varying equality constraints coupling the control inputs, M = [1 −1], c(t) = [PVf(t) − Lf(t)], and F = [1], which ensures power balance of the MG with
the main grid. The MPC also has a terminal state constraint
defined as x1(t+N|t) ≥ x̂1.
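The matrices described above can be assembled as in the following sketch; the sign conventions for E and for the second row of S follow the reconstruction in the text and should be treated as assumptions.

```python
import numpy as np

dt = 0.25                                # h, 15-minute time-step
BESS_en, BESS_max = 2500.0, 700.0        # from Table I
N = 96                                   # prediction horizon length

A = np.array([[1.0]])
B = np.array([[dt / BESS_en, 0.0]])      # SOC responds only to the BESS power u1
E = np.array([[dt / BESS_en]])           # forecast error absorbed by the SOC if uncorrected (assumed sign)
S = np.array([[1.0, 0.0], [-1.0, 0.0]])  # hard constraint |u1| <= BESS_max (assumed second row)
s = np.array([BESS_max, BESS_max])
M = np.array([[1.0, -1.0]])              # power balance u1 - u2 = PV^f - L^f (+ w in real time)
F = np.array([[1.0]])
```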
For this case study, we consider joint chance constraints (JCC) on the BESS SOC, which consider a violation if the BESS SOC goes above or below predefined upper (SOCmax) and lower (SOCmin) bounds in closed-loop. The goal is to reduce electricity import costs from the grid by having controlled violations beyond the predefined upper and lower bounds, making a larger BESS capacity available for dispatch. Unrestricted violations are avoided as they can adversely affect the BESS lifetime. The chance constraints are defined by G = [1  −1]ᵀ, g = [SOCmax  −SOCmin]ᵀ and h(t) = [h1(t) h2(t)]ᵀ. We choose h1(t) = h2(t) for adapting both state constraints simultaneously by the same parameter as they are set up in JCC form. The maximum probability of the JCC violation is predefined by α. For the traditional EMPC (without chance constraints), violations are avoided and the state constraints are formulated as (26), while for the OA-SMPC the state constraints are formulated as (10).

G x(t+k|t) ≤ g,   ∀k ∈ {1, . . . , N}, ∀t.   (26)
In the JCC formulation, when updating h(t) by (18), it is ensured that h1(t) = max(SOCmax − 1, h1(t)) and h2(t) = max(−SOCmin, h2(t)), so that the state constraints do not violate the physical limits of SOC above 1 or below 0. However, in our case study, hi(t) given by (18) never violates these physical limits, obviating the above correction. We have practically ensured this by setting a high value of γi in (18), which is a design choice, at the cost of slower system adaptation (i.e., rate of change of hi(t)), and by setting the initial constraint relaxation parameter hi(0) such that gi − hi(0) is sufficiently far away from the physical limits for i ∈ {1, 2}.
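A short sketch of the JCC data and the physical-limit clipping described above is given next; the sign convention (negative hi relaxing the SOC band) follows from the clipping rule and is otherwise an assumption.

```python
import numpy as np

SOC_max, SOC_min = 0.8, 0.2
G = np.array([[1.0], [-1.0]])
g = np.array([SOC_max, -SOC_min])        # nominal MPC uses G x <= g - h(t)

def clip_to_physical_limits(h):
    """Keep the relaxed SOC band inside [0, 1]."""
    h = np.asarray(h, dtype=float)
    h[0] = max(SOC_max - 1.0, h[0])      # keeps the upper SOC bound <= 1
    h[1] = max(-SOC_min, h[1])           # keeps the lower SOC bound >= 0
    return h
```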
The objective function is formulated as in [18], [34], and is given by,

J(t) = RNC max{u2(t+k|t)}_{k=0}^{N−1} + ROP max{u2(t+l|t)}_{l∈I(t)} + REC Δt [ Σ_{k=0}^{N−1} u2(t+k|t) + ((1−η)/2) Σ_{k=0}^{N−1} |u1(t+k|t)| ],   (27)
where RNC is the non-coincident demand charge (NCDC) rate
charged on the maximum grid import during the prediction
horizon. Similarly, ROP is the on-peak demand charge (OPDC)
rate, charged on the maximum grid import between 16:00 and
21:00 h, called on-peak (OP) hours of the prediction horizon,
and I(t)represents indices of prediction horizon time-steps
coinciding with the OP hours. Naturally, the indices of OP
hours in the prediction horizon are a function of the starting
time-step tof the MPC prediction horizon. REC is the energy
charge rate and η is the round-trip efficiency of the BESS
accounting for BESS losses. After the net load for the entire
month is realized, the monthly NCDC is computed based
on maximum load demand from the grid during the month,
while the monthly OPDC is computed based on maximum
load demand between 16:00 and 21:00 h of all days of the
month. The predefined parameters of the MG for the OA-
SMPC operation are shown in Table I and a block diagram of the MG operational framework is shown in Fig. 1. Note that the one-step controllability in Assumption 3(b) and the ideal control policy in Assumption 5 are relaxed for the case study for realistic simulations. The satisfaction of Assumption 3(a) is ensured by choosing a sufficiently large γi, which constrains the rate of hi(t) increase depending on the available control input power to satisfy (A.5). Note that due to Assumption 6(a), in our case study, the risk of violation of Assumption 3(a) can only arise when hi(t) increases.7

7The maximum absolute forecast uncertainty throughout the year for the case study is 261 kW, which is still less than half of the BESS power capacity of 700 kW. Thus, even in the case where Assumption 6(a) is relaxed, and the BESS is set up to handle all the uncertainty until reaching the physical limits, the practical viability of Assumption 3(a) is reinforced.
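For illustration, the objective (27) above can be written in CVXPY as sketched below. The paper's simulations use CVX in MATLAB; this Python version, including the name op_idx for the on-peak index set I(t), is an assumed equivalent formulation.

```python
import cvxpy as cp

def empc_objective(u1, u2, op_idx, R_NC=24.48, R_OP=19.19, R_EC=0.1,
                   eta=0.8, dt=0.25):
    # u1, u2: CVXPY Variables of length N (BESS dispatch power, grid import power)
    ncdc = R_NC * cp.max(u2)                       # non-coincident demand charge term
    opdc = R_OP * cp.max(u2[op_idx])               # on-peak demand charge over I(t)
    energy = R_EC * dt * (cp.sum(u2) + 0.5 * (1 - eta) * cp.sum(cp.abs(u1)))
    return ncdc + opdc + energy                    # convex objective J(t) as in (27)
```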
Table I
Design parameters of the OA-SMPC for the MG.

Parameter                                   Symbol    Value
NCDC rate                                   RNC       $24.48/kW
OPDC rate                                   ROP       $19.19/kW
Energy rate                                 REC       $0.1/kWh
BESS round-trip efficiency                  η         0.8
BESS energy capacity                        BESSen    2,500 kWh
BESS power capacity                         BESSmax   700 kW
Upper bound of SOC for traditional EMPC 1   SOCmax    0.8
Lower bound of SOC for traditional EMPC 1   SOCmin    0.2
Maximum violation probability               α         0.1
Terminal state constraint                   x̂1        0.5
Initial state                               x1(0)     0.5
Initial constraint relaxing parameter       h(0)      [−0.1  −0.1]ᵀ
Proportionality constants                   γ1, γ2    15
D. Operation Strategy
This section presents the real-time operation strategy of
the OA-SMPC for the economic MG dispatch problem under consideration. Although (18) updates hi at every time-step, given the emphasis on additional OPDC penalties in our cost function, whenever the starting time-step of the MPC coincides with the daily OP hours, we restrict the hi increase between two time-steps, overriding (18) when required. Decreasing hi during the daily OP hours is still allowed, and if hi increases it is reset manually to the previous value. The reasoning behind restricting the increase of hi during OP times is to avoid the additional OPDC costs (on top of NCDC) by incurring preferential violations during OP hours (by deeper BESS discharge due to relaxed state constraints). The preferential OP hour violations may lead to state violations beyond the maximum allowable limit temporarily, but the adaptive rule (by virtue of Y > α) pushes the BESS to compensate by lowering violations due to a rapid increase of hi during other
times (when risk of penalty on peaks is lower). The simulations
are carried out in CVX, a package for solving convex programs
in the MATLAB environment [35], [36].
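The on-peak override can be expressed compactly as below, where h_prev is the relaxation parameter from the previous time-step and h_candidate is the value proposed by the update rule (18); the list-based representation is an assumption for illustration.

```python
def apply_op_override(h_prev, h_candidate, in_on_peak_hours):
    """During on-peak hours, allow h to decrease (more relaxation) but roll back
    any increase to its previous value, as described in the operation strategy."""
    if in_on_peak_hours:
        return [min(hp, hc) for hp, hc in zip(h_prev, h_candidate)]
    return list(h_candidate)
```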
V. RESULTS AND DISCUSSION
We compare results of yearly simulations from 4 test cases:
(i) Traditional EMPC 1 described in Section IV-C to demon-
strate the case without chance constraints; (ii) OA-SMPC
which is our proposed method described in Section IV-C to
demonstrate the case with chance constraints on state; (iii)
SMPC Lit from [3] which is similar in computational cost
to our proposed method, with the chance constraints being
represented by (9b), a corresponding adaptive constraint tight-
ening rule given by ˜
hi(t) = ˜
hi(t1)1αiYi(t)+ 2αi1
2t
γi,
7The maximum absolute forecast uncertainty throughout the year for the
case study is 261 kW, which still is lesser than half of the BESS power
capacity of 700 kW. Thus, even in the case where Assumption 6(a) is relaxed,
and the BESS is set up to handle all the uncertainty until reaching the physical
limits, the practical viability of Assumption 3(a) is reinforced.
14
with ˜
hi(t)>0,t,i {1,2}, and ˜
h1(t) = ˜
h2(t). As [3]
only tightens the state constraints, violation is allowed in
closed-loop by changing the feasible state set in the post-
processing step in Assumption 6(a) to the physical limits of
the system (i.e., SOC limits of 0 and 1). Similar to the OA-
SMPC, ˜hi increase during OP hours is avoided by resetting it manually to its previous value. The design parameters for SMPC Lit are the same as those of OA-SMPC (see Table I), with ˜h(0) = −h(0); (iv) Traditional EMPC 2, which modifies Case (i) with g = [SOCmax − h1(0)  −SOCmin − h2(0)]ᵀ to demonstrate the MG performance with the state constraints always relaxed by the same initial relaxing parameter as in Case (ii), violating SOCmax/SOCmin in closed-loop but without adaptation.
From the PV and gross load forecasting errors, we find that the mean and standard deviation (SD) of the uncertainty are 0.5 kW and 30.3 kW, respectively, for the entire year of 2019. A greater (lesser) value of the mean uncertainty would push the states higher (lower) in general, causing more constraint violations due to the BESS exceeding (falling below) SOCmax (SOCmin). A higher SD of the uncertainty would cause the closed-loop behavior of the system to differ significantly from the open-loop solutions given by (11), which, in addition to increasing the likelihood of violations, may also compel expensive BESS dispatch to handle the uncertainty and satisfy Assumption 3(a).
Table II
Results for the 4 test cases for the year 2019. Monthly costs are added to compute the yearly cost.

Costs               Traditional 1   OA-SMPC    SMPC Lit [3]   Traditional 2
NCDC                $66,586         $64,579    $66,960        $57,690
OPDC                $1,959          $1,274     $2,087         $1,148
Energy Cost         $14,382         $14,385    $14,382        $14,399
BESS loss           $8,290          $9,071     $8,309         $10,447
Total Cost          $91,217         $89,309    $91,738        $83,683
Total BESS cycles   165.8           181.4      166.2          208.9
Y at year end       0%              10.1%      2.2%           20.4%
Figure 2. Yearly time-series for the: (a) time-average of state constraint violations (Y) for the OA-SMPC, SMPC Lit [3], and Traditional EMPC 2 case studies, and the maximum allowable state constraint violation probability (α = 0.1), (b) adaptive state constraint relaxing parameters (h1 and ˜h1, with h1 = h2, and ˜h1 = ˜h2 in these case studies) for the OA-SMPC and SMPC Lit, (c) closed-loop behavior of the control inputs for the OA-SMPC, (d) closed-loop behavior of the state for the OA-SMPC.
Figure 2 and Table II summarize the simulation results of
the 4 test cases for the year 2019. Note that the energy costs
are similar across all cases as there is no arbitrage in the MG
model. The minute differences in energy costs are due to the
different ending BESS SOC at the end of the year for the
different test cases.
The comparison between the Traditional EMPC 1 and OA-
SMPC demonstrates the superior economic performance of our
proposed algorithm while still staying within the maximum
allowable violation probability bound. NCDC and OPDC de-
crease by 3% and 35% between Traditional EMPC 1 and OA-
SMPC. The OPDC savings (as a %) are significantly more than
NCDC because of the operation strategy (see Section IV-D)
allowing preferential violations during OP hours. As extra
BESS capacity is available for the OA-SMPC as compared to
the Traditional EMPC 1 due to the adaptive relaxation of BESS
SOC constraints beyond the 0.2–0.8 SOC range, the BESS is used more aggressively in OA-SMPC, resulting in 9.4% more BESS losses, amounting to 15.6 extra BESS yearly cycles.
Overall, OA-SMPC leads to 2.1% total yearly cost savings as
compared to the Traditional EMPC 1. SMPC Lit performs the worst as it tightens the feasible SOC range and leads to the highest yearly cost. Some violations are caused by the effect of uncertainty in closed-loop, but SMPC Lit is unable to reduce costs because the timing of the uncertainty is critical in reducing demand charges. The NCDC and OPDC are greater in SMPC Lit as compared to both the Traditional EMPC 1 and OA-SMPC. SMPC Lit has similar BESS cycles as the Traditional EMPC 1 with 0.6% extra yearly costs. This serves as a proof of the concept alluded to in Remark 1 that adaptive constraint tightening methods (like SMPC Lit) can be over-conservative, being unable to exploit the allowable violation limit, which can be particularly disadvantageous in EMPC frameworks.
Figure 2(a) shows the variation of Y for the OA-SMPC (green line) and SMPC Lit (black dotted line) for the entire year. For the OA-SMPC, Y is initially 0 until the first constraint violation occurs, after which it overshoots α, oscillating with a high frequency and magnitude until the first week of January. Then, consistent with the goal of Y converging to a value less than or equal to α, Y decreases and oscillates about α with lower frequency and magnitude as the system evolves, finally reaching a value of 10.1% at the end of the year (which is within a small margin of α). Y is expected to decrease below α if the OA-SMPC is run over a longer time period, resulting in satisfying the chance constraints, albeit non-conservatively, in closed-loop.
Figure 2(b) shows that for the OA-SMPC, h1 and h2 increase and decrease depending on the overshoot and undershoot of Y with respect to α, respectively, and are thereby able to expand and contract the feasible state set accordingly to keep Y near α. Figures 2(c) and 2(d) demonstrate the yearly closed-loop behavior of the BESS dispatch and grid import, and the SOC, respectively, for the OA-SMPC. For a significant portion of the year, the grid import is negative because the PV system at the location is oversized, generating excess power which is fed back to the grid. Significant violation of constraints can be seen in Figure 2(d) in the months of May and October, which are the two months that aid most in cost savings.
Figure 2(b) also shows that for the SMPC Lit, ˜h1 and ˜h2 quickly drop down to 0 in the first week of January, trying to encourage violations initially as the initial constraint tightening was too conservative. However, the system's Y still stays far below α, with the result that ˜h1 and ˜h2 stay close to 0+ throughout the year, so the nominal MPC behaves essentially like the Traditional EMPC 1 while allowing the SOC limits to range from 0 to 1 in closed-loop. In the months of February and March (not shown in the figures), SMPC Lit creates more NCDC than Traditional EMPC 1, because just before the non-coincident demand peak (NCDP) time, a high uncertainty forces the SOC to go below the lower limit of 0.2, which causes a large BESS charging action at the next time-step to climb back to the feasible SOC range above 0.2, creating demand peaks, a problem alluded to in Remark 6.
Traditional EMPC 2 demonstrates that relaxing the state
(BESS SOC) constraints by the same initial relaxation pa-
rameter as in OA-SMPC without any adaptation causes the
violations to exceed the maximum allowable violation prob-
ability bound in closed-loop (blue line in Fig. 2(a)). The
Traditional EMPC 2 has lower total yearly electricity costs
(due to lower NCDC and OPDC), than the OA-SMPC because
the objective function in (27) penalizes peaks in grid import
power (u2) more than BESS dispatch power (u1), resulting
in prolonged aggressive dispatch of the BESS to serve the
net load (LPV). The aggressive BESS dispatch is more
prolonged in Traditional EMPC 2 than OA-SMPC because
the feasible BESS SOC range in Traditional EMPC 2 is
not adapted due to past violations, and the feasible state
set continues being larger than OA-SMPC throughout. The
Traditional EMPC 2 shows a higher frequency of oscillation and magnitude of overshoot of Y above α as compared to OA-SMPC, and additionally shows that Y consistently remains above 2α for the majority of the year. Thus, the Traditional EMPC 2 is unable to fulfill the chance constraints and may significantly harm BESS life, resulting in 27.5 more yearly BESS cycles as compared to OA-SMPC. The analysis serves as a proof of concept that it is the adaptive relaxation rule in OA-SMPC, rather than the nature of the uncertainties, that ensures non-conservative chance constraint satisfaction in closed-loop.
Figure 3 and Table III present the results of the OA-SMPC analysis when it is repeated with different values of α ∈ {0.05, 0.15, 0.2}, with the other design parameters the same as in Table I. Figure 3 shows that in all the cases, similar behavior is observed as in Fig. 2(a), with Y oscillating about α with lower frequency and magnitude as the system evolves along the year, while ultimately settling at a value below α at year-end, thereby tightly satisfying the chance constraints. Table III demonstrates that superior cost savings are attained as α increases, at the cost of more BESS cycles. Comparing the case with α = 0.2 in Table III with Traditional EMPC 2 in Table II shows that the OA-SMPC is more effective at reducing costs, with fewer violations and BESS cycles than the Traditional EMPC 2, due to the OA-SMPC's online adaptivity.
VI. CONCLUSIONS AND FUTURE WORK
This work presents a novel online adaptive state constraint
relaxation based stochastic MPC (OA-SMPC) framework for
non-conservative chance constraint satisfaction in closed-loop.
Table III
Results for the OA-SMPC with different values of the maximum probability of violation of state constraints (α) for the year 2019.

Costs               α = 0.05   α = 0.15   α = 0.2
NCDC                $66,294    $59,564    $57,323
OPDC                $1,865     $1,036     $540
Energy Cost         $14,383    $14,395    $14,395
BESS loss           $8,473     $9,747     $10,336
Total Cost          $91,014    $84,742    $82,593
Total BESS cycles   169.5      194.9      206.7
Y at year end       4.5%       14.3%      19.1%
Figure 3. Yearly time-series of the time-average of state constraint violations (Y) for the OA-SMPC for the maximum allowable state constraint violation probabilities α ∈ {0.05, 0.15, 0.2}.
An adaptive state constraint relaxation rule is developed for
a generic discrete LTI system based on the time-average of
past constraint violations without any a-priori assumptions
about the probability distribution of the uncertainty set or
its statistics, or sample uncertainties from historical data.
The time-average of the state constraint violations, under the
assumption of ideal control inputs which can cause/prevent
constraint violations almost surely, is proven to asymptotically
converge to the maximum allowable violation probability. The
time-average of the state constraint violations is also proven to
exhibit martingale-like behavior asymptotically, even without
the ideal control input assumption.
The proposed method (OA-SMPC) is applied for minimiz-
ing monthly electricity costs by optimal BESS dispatch for a
grid connected microgrid (MG) in an economic MPC (EMPC)
framework. We perform simulations for the Port of San Diego
MG using realistic PV and load forecast data for the year 2019.
Chance constraints are applied on the BESS SOC to make
use of excess BESS capacity in our proposed OA-SMPC as
compared to the traditional EMPC without chance constraints
(which uses hard constraints on BESS SOC). The OA-SMPC
outperforms the traditional EMPC and a state-of-the-art chance
constrained approach from the literature, striking an effective
trade off between high BESS utilization (i.e., higher cost
savings) and full SOC constraint satisfaction (i.e., longer
BESS lifetime). The OA-SMPC lowers MG electricity costs
by non-conservative chance constraint satisfaction in closed-
loop, thereby having minimal adverse effect on BESS lifetime.
Future work will incorporate the BESS degradation cost and
a life-cycle analysis of the MG.
APPENDIX
A test for Assumption 3(a) entails solving the following optimization problem [37, Section 5.8.1] at each time t, given x(t), c(t|t) and h(t):

f∗ = min 0,   (A.1)
subject to x(t+1|t) = Ax(t) + Bu(t|t),   (A.2)
Su(t|t) ≤ s,   (A.3)
Mu(t|t) = c(t|t),   (A.4)
G x(t+1|t) ≤ g − h(t).   (A.5)

If f∗ = 0, then the optimization problem defined above is feasible and we can guarantee that Assumption 3(a) holds. If f∗ = ∞, then the optimization problem above is infeasible and Assumption 3(a) fails to hold. Note that if (3) and (11d) are not considered in the problem formulation, we can drop (A.4).
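A sketch of this feasibility test as a convex feasibility problem in CVXPY is given below; it is an assumed Python counterpart of the check, with the solver and status handling shown for illustration only.

```python
import cvxpy as cp
import numpy as np

def assumption_3a_holds(A, B, S, s, M, c, G, g, h, x):
    """Return True if the one-step feasibility problem (A.1)-(A.5) is feasible."""
    u = cp.Variable(B.shape[1])
    x_next = A @ x + B @ u                         # (A.2): one-step state prediction
    cons = [S @ u <= s,                            # (A.3): hard input constraints
            M @ u == c,                            # (A.4): coupling equality constraint
            G @ x_next <= g - h]                   # (A.5): relaxed state constraint
    prob = cp.Problem(cp.Minimize(0), cons)        # (A.1): minimize the constant 0
    prob.solve()
    return prob.status in ("optimal", "optimal_inaccurate")
```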
ACKNOWLEDGMENTS
The authors would like to thank Dr. Sonia Martínez,
Professor of Mechanical and Aerospace Engineering at UC
San Diego, for the fruitful discussions during writing the
manuscript. The authors would also like to extend their deepest
gratitude to the anonymous reviewers whose comments greatly
improved the content of the paper.
REFERENCES
[1] P. Kou, F. Gao, and X. Guan, “Stochastic predictive control of battery
energy storage for wind farm dispatching: Using probabilistic wind
power forecasts, Renewable Energy, vol. 80, pp. 286–300, aug 2015.
[2] S. Singh, M. Pavone, and J.-J. E. Slotine, “Tube-based mpc: a contraction theory approach,” 2016.
[3] F. Oldewurtel, D. Sturzenegger, P. M. Esfahani, G. Andersson,
M. Morari, and J. Lygeros, Adaptively constrained stochastic model
predictive control for closed-loop constraint satisfaction, in American
Control Conference, 2013, pp. 4674–4681.
[4] D. Muñoz-Carpintero, G. Hu, and C. J. Spanos, “Stochastic model pre-
dictive control with adaptive constraint tightening for non-conservative
chance constraints satisfaction,” Automatica, vol. 96, pp. 32–39, 2018.
[5] Y. Long and L. Xie, “Iterative learning stochastic mpc with adaptive
constraint tightening for building hvac systems, IFAC-PapersOnLine,
vol. 53, no. 2, pp. 11 577–11 582, 2020.
[6] M. Korda, R. Gondhalekar, F. Oldewurtel, and C. N. Jones, “Stochastic
mpc framework for controlling the average constraint violation, IEEE
Transactions on Automatic Control, vol. 59, no. 7, pp. 1706–1721, 2014.
[7] F. Oldewurtel, L. Roald, G. Andersson, and C. Tomlin, “Adaptively
constrained stochastic model predictive control applied to security
constrained optimal power flow,” in American Control Conference, 2015,
pp. 931–936.
[8] X. Guo, Z. Bao, H. Lai, and W. Yan, “Model predictive control
considering scenario optimisation for microgrid dispatching with wind
power and electric vehicle, The Journal of Engineering, vol. 2017,
no. 13, pp. 2539–2543, 2017.
[9] G. Liu, M. Starke, B. Xiao, X. Zhang, and K. Tomsovic, “Microgrid op-
timal scheduling with chance-constrained islanding capability, Electric
Power Systems Research, vol. 145, pp. 197–206, Apr 2017.
[10] Z. P. Yuan, J. Xia, and P. Li, “Two-Time-Scale Energy Management
for Microgrids with Data-Based Day-Ahead Distributionally Robust
Chance-Constrained Scheduling,” IEEE Transactions on Smart Grid,
vol. 12, no. 6, pp. 4778–4787, Nov 2021.
[11] K. Garifi, K. Baker, B. Touri, and D. Christensen, “Stochastic model predictive control for demand response in a home energy management system,” 2018.
[12] M. Gulin, J. Matusko, and M. Vasak, “Stochastic model predictive
control for optimal economic operation of a residential dc microgrid,”
vol. 2015-June. Institute of Electrical and Electronics Engineers Inc.,
6 2015, pp. 505–510.
[13] Y. Ding, T. Morstyn, and M. D. McCulloch, “Distributionally robust joint
chance-constrained optimization for networked microgrids considering
contingencies and renewable uncertainty,” IEEE Transactions on Smart
Grid, vol. 13, no. 3, pp. 2467–2478, 2022.
[14] F. H. Aghdam, N. T. Kalantari, and B. Mohammadi-Ivatloo, “A stochas-
tic optimal scheduling of multi-microgrid systems considering emis-
sions: A chance constrained model,” Journal of Cleaner Production,
vol. 275, p. 122965, 2020.
[15] H. Wang, H. Xing, Y. Luo, and W. Zhang, “Optimal scheduling of
micro-energy grid with integrated demand response based on chance-
constrained programming,” International Journal of Electrical Power &
Energy Systems, vol. 144, p. 108602, 2023.
[16] O. Ciftci, M. Mehrtash, and A. Kargarian, “Data-driven nonparametric
chance-constrained optimization for microgrid energy management,”
IEEE Transactions on Industrial Informatics, vol. 16, no. 4, pp. 2447–
2457, 2019.
[17] X. Guo, Z. Bao, Z. Li, and W. Yan, “Adaptively Constrained Stochastic
Model Predictive Control for the Optimal Dispatch of Microgrid,
Energies 2018, Vol. 11, Page 243, vol. 11, no. 1, p. 243, Jan 2018.
[18] A. Ghosh, C. Cortes-Aguirre, Y.-A. Chen, A. Khurram, and J. Kleissl,
“Adaptive chance constrained mpc under load and pv forecast uncer-
tainties,” in 2023 IEEE PES Grid Edge Technologies Conference &
Exposition (Grid Edge). IEEE, 2023, pp. 1–5.
[19] B. Kouvaritakis, M. Cannon, and D. Muñoz-Carpintero, “Efficient
prediction strategies for disturbance compensation in stochastic mpc,”
International Journal of Systems Science, vol. 44, no. 7, pp. 1344–1353,
2013.
[20] B. Kouvaritakis, M. Cannon, S. V. Raković, and Q. Cheng, “Explicit use
of probabilistic distributions in linear predictive control, Automatica,
vol. 46, no. 10, pp. 1719–1724, 2010.
[21] M. Farina, L. Giulioni, L. Magni, and R. Scattolini, “An approach
to output-feedback mpc of stochastic linear discrete-time systems,”
Automatica, vol. 55, pp. 140–149, 2015.
[22] G. C. Calafiore and L. Fagiano, “Stochastic model predictive control of
lpv systems via scenario optimization,” Automatica, vol. 49, no. 6, pp.
1861–1866, 2013.
[23] D. Bernardini and A. Bemporad, “Scenario-based model predictive
control of stochastic constrained linear systems,” in Proceedings of the
48h IEEE Conference on Decision and Control (CDC) held jointly with
2009 28th Chinese Control Conference. IEEE, 2009, pp. 6333–6338.
[24] J. Fleming and M. Cannon, “Time-average constraints in stochastic
model predictive control, in 2017 American Control Conference (ACC).
IEEE, 2017, pp. 5648–5653.
[25] A. Capone, T. Brüdigam, and S. Hirche, “Online constraint tightening
in stochastic model predictive control: A regression approach, IEEE
Transactions on Automatic Control, 2024.
[26] J. B. Rawlings, D. Angeli, and C. N. Bates, “Fundamentals of economic
model predictive control, in 2012 IEEE 51st IEEE conference on
decision and control (CDC). IEEE, 2012, pp. 3851–3861.
[27] S. Mullendore, “An introduction to demand charges, Clean Energy
Group, National Renewable Energy Laboratory (NREL), 2017.
[28] D. Williams, Probability with martingales. Cambridge university press,
1991.
[29] R. G. Bartle and R. G. Bartle, The elements of real analysis. Wiley
New York, 1964, vol. 2.
[30] G. Schildbach, G. C. Calafiore, L. Fagiano, and M. Morari, “Randomized
model predictive control for stochastic linear systems, in 2012 American
Control Conference (ACC). IEEE, 2012, pp. 417–422.
[31] M. Lorenzen, F. Dabbene, R. Tempo, and F. Allgöwer, “Constraint-
tightening and stability in stochastic model predictive control, IEEE
Transactions on Automatic Control, vol. 62, no. 7, pp. 3165–3177, 2016.
[32] R. Zhang, Y. Xu, Z. Y. Dong, W. Kong, and K. P. Wong, A composite
k-nearest neighbor model for day-ahead load forecasting with limited
temperature forecasts,” vol. 2016-November. IEEE Computer Society,
11 2016.
[33] S. G. Benjamin, S. S. Weygandt, J. M. Brown, M. Hu, C. R. Alexander,
T. G. Smirnova, J. B. Olson, E. P. James, D. C. Dowell, G. A. Grell,
H. Lin, S. E. Peckham, T. L. Smith, W. R. Moninger, J. S. Kenyon,
and G. S. Manikin, “A north american hourly assimilation and model
forecast cycle: The rapid refresh,” Monthly Weather Review, vol. 144,
no. 4, pp. 1669–1694, 2016.
[34] A. Ghosh, M. Z. Zapata, S. Silwal, A. Khurram, and J. Kleissl, “Effects
of number of electric vehicles charging/discharging on total electricity
costs in commercial buildings with time-of-use energy and demand
charges,” Journal of Renewable and Sustainable Energy, vol. 14, no. 3,
2022.
[35] M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex
programming, version 2.1,” http://cvxr.com/cvx, Mar. 2014.
[36] ——, “Graph implementations for nonsmooth convex programs,” in
Recent Advances in Learning and Control, ser. Lecture Notes in Control
and Information Sciences, V. Blondel, S. Boyd, and H. Kimura, Eds.
Springer-Verlag Limited, 2008, pp. 95–110, http://stanford.edu/~boyd/graph_dcp.html.
[37] S. Boyd and L. Vandenberghe, Convex optimization. Cambridge
university press, 2004.