# Efficient Customer Selection for Sustainable Demand Response in Smart Grids

Vasileios Zois, Marc Frincu, Charalampos Chelmis, Muhammad Rizwan Saeed, Viktor Prasanna
University of Southern California
Email: {vzois,frincu,chelmis,saeedm,prasanna}@usc.edu
Department of Computer Science, Department of Electrical Engineering
Abstract—Regulating power consumption to avoid peaks in demand is a well-known method. Demand Response is used as a tool by utility providers to minimize costs and avoid network overload during peaks in demand. Although it has been used extensively, there is a shortage of solutions dealing with real-time scheduling of DR events. Past attempts focus on minimizing the load demand while not dealing with the uncertainty induced by customer intervention, which hinders sustainability of the reduced load. In this paper we describe a smart selection algorithm that solves the problem of scheduling DR events over the broad spectrum of customers observed in common smart grid infrastructures. We deal with both the problem of real-time operation and the sustainability of the reduced load while factoring in customer comfort levels. Real data from the USC campus micro grid were used in our experiments. On the overall achievable reduction, the results produced a maximum average approximation error of ≈ 0.7%. Sustainability of the targeted load was achieved with a maximum average error of less than 3%. It is also shown that our solution fulfils the requirements for Dynamic Demand Response, providing a solution in a reasonable amount of time.
Keywords: smart grid, scheduling, optimization, sustainability, demand response, selection algorithm, real time, change making, customer comfort
I. INTRODUCTION
As demand for power increases so does the complexity involved in safe and reliable [20] energy distribution. Recent innovations in power grids have provided smart tools to help utility providers monitor and predict [3] [4] power demand. Altogether, an automated power distribution network was created on top of the old power grid infrastructure, now known as a smart grid [13] cyberphysical system. It consists of components responsible for power management and security. Smart meters [6], capable of bi-directional communication, are a vital part of the smart grid. They are used for real-time monitoring of power consumption, helping utility providers predict future demand. Fulfilling the necessary energy requirements is based on these predictions. However, installing additional power generation capability to meet peak demands is sometimes neither feasible nor sufficient. Demand Response (DR) [2] is a well known method employed by energy providers to control demand. It is used to find the equilibrium between energy production and demand.
Energy providers use different paradigms in their attempt to control customer load, including direct control [10], price incentives [19], as well as voluntary participation. Although these techniques have been broadly used in practice, they are still unable to produce good results when dealing with the uncertainty induced by customer behaviour. Real-time techniques need to be employed to cope with unexpected peak demands or situations where adaptation during a DR event is needed (dynamic DR) to sustain a consistent power reduction under a defined safe threshold. A DR event is defined as a schedule consisting of customers and their corresponding strategies for a specific period of the day. It is initiated by energy providers to achieve a combined load reduction from the participating customers.
A DR event is said to be sustainable if it achieves a consistent reduction throughout its duration. A consistent reduction is considered to be a reduction of the typical power consumption under a specified threshold. This threshold is usually defined by the utility providers. The goal can be to ensure reliability in power distribution, protect the equipment on the grid or simply maximize profit. Sustainable load reduction is a hard problem. Customers employed on top of the power grid are inherently unpredictable. This uncertainty hinders attempts at achieving sustainability. Customer comfort [23] is defined as the ability of users in the power grid to decide independently about the time and way they consume power. Each one has a different footprint characterized by their behaviour and different reactions to exogenous events (e.g. a rise in temperature). When scheduling a DR event all of this has to be taken into consideration. It is imperative to deal with uncertainty [9] and sustain a consistent reduction. Hence it is also important to react quickly (dynamic DR) and adapt to changes induced by the uncertain behaviour of individual customers. If the above points are to be considered, then a sophisticated scheduling algorithm is needed. This algorithm should produce a selection of customers to participate in a DR event, aiming to minimize uncertainty while maximizing comfort. These two notions are complementary since a comfortable customer is more likely to comply with a DR event. This paper addresses the above gap by introducing an algorithm that fits the aforementioned properties.
The contributions of this paper can be summarized in the following points:
• The proposed algorithm deals with customers as indivisible entities. A detailed description of the consumption of individual devices is not needed. We focus only on the consumption patterns initiated by different DR strategies.
• Customer comfort is not included as an extra variable. We argue that large deviations in power reduction throughout a DR event represent customer discomfort. A selection of unique strategies defines different levels of intrusiveness.
• Sustainable reduction throughout the DR event was our initial goal. We achieve it by analysing the potential reduction of each customer in a coarse-grained manner (fixed intervals).
• The complexity of the algorithm confirms the efficiency claim. We achieve polynomial complexity in the number of customers. Our experimental results show a linear and polynomial increase in the relative execution time, consistent with the complexity analysis.
We start by describing the related work in section II. Next, we formulate our problem in section III. An analytical description of the developed algorithm is provided in section IV. We conclude with the analysis of the conducted experiments in section V.
II. RELATED WORK
Some of the earliest work focused on directly controlling device schedules to minimize power consumption. In [11] [10] dynamic programming was employed to minimize the controllable load. In [21] linear programming in combination with customer grouping was used, based on a profit-based formulation.
A recent trend in the field of load manipulation is factoring in customer comfort. Arguments in its favour include the uncertainty induced by users who may override the system and cause peaks in demand. Different approaches emerged based on modelling the consumption of specific devices according to their usage patterns. In [23] particle swarm optimization was used to control demand by focusing on water heaters, which accounted for 30% of the overall load. The same technique was employed in [12] and [18]. The former focused on the special case of Plug-in Hybrid Electric Vehicles (PHEVs). The latter examined residential cases where a schedule was provided based on the optimal time to operate domestic appliances.
In [17] load manipulation was studied conforming to constraints induced by dynamic pricing policies. A heuristic approach was proposed, providing a solution to the scheduling problem in O(ATN^2 log TN) time. Other approaches include a game theoretic analysis of the minimization of load demand. This was addressed in [9] using a residential environment with real-time pricing as the case study. Also, in [19] a similar technique was employed, relying on smart pricing as an incentive to achieve the necessary load reduction. These approaches rely on distributed calculation to provide a real-time solution to the scheduling of power reduction across different customers. Our solution also considers the need for real-time execution.
Other proposed solutions with similar goals include [22], where a multi-objective optimization problem was formulated adhering to specific constraints induced by the need to minimize consumption and maximize utility. An evolutionary algorithm was used to solve the constrained problem. In [16] an office environment was the use case. Power consumption was minimized based on dynamic pricing and the production of electricity from renewable energy sources. The ultimate goal was to leave productivity unaffected. Finally, in [30], a fine-grained description of a smart home is used to schedule the usage of individual appliances with respect to residential needs.
It is important to note at this point that much of the research done so far deals with residential cases [14] [19]. Many approaches also focus on specific devices [15] [18] [25] [8] [7]. This scenario is unrealistic for energy providers as they cannot maintain information for all the different appliances and their consumption patterns. A similar problem statement to ours has been made in [24], where the authors study the case of a feeder failure in the distribution network. They made a mixed-integer programming formulation of the scheduling problem and proposed three approximate methods to solve it. Contrary to ours, their work assumes customer compliance during the DR event. It also deals with distributing a portion of the load demand to other transformers. We deal only with finding a schedule of customers to reduce the load demand.
Our solution considers the abstract notion of a customer associated with available strategies. These strategies are the different plans made available by utility providers. Each one defines a specific level of intrusiveness. Comfort is implied by the overall achievable reduction and the observed inconsistencies of the reduced load across the whole event. A dynamic pricing policy associated with each strategy can be used as an incentive for participation in curtailment. Similar incentives have been assumed in [5]. We avoided a device-based analysis since it is heavily dependent on individual device characteristics. This is a realistic scenario since in many cases utility providers do not have access to this kind of information. Our goal was to capture the overall customer behaviour during a DR event and use it as a reference to schedule future ones.
III. PROBLEM DEFINITION
We formulate the problem as follows. We are given a set of n customers, each associated with m available strategies. As mentioned before, each strategy represents an individual plan provided by an energy provider. The strategy provides some possible incentives in exchange for voluntary participation in a DR event. The achievable reduction is strongly related to the incentives provided. For each customer-strategy combination there exists a curtailment vector <r_1, r_2, ..., r_k>. Each value r_k ∈ ℝ is calculated at fixed time intervals. It represents the curtailment level, which is the difference between the predicted baseline consumption and the predicted consumption during a DR event. A positive number indicates a successful power reduction while a negative number indicates an increase in consumption. The baseline is based on the observed consumption of the customer during normal operation. The accuracy of the values indicating the level of curtailment depends on the prediction methods used. Discussing the different kinds of prediction methods in more detail is out of the scope of this paper.
Let R be the targeted overall reduction throughout the whole DR event. Our goal is to find a subset of the available customers which should cumulatively curtail R/t per interval, where t is the number of intervals in a specific DR event. Each customer must conform to a single strategy from the available ones throughout the whole DR time frame. Based on the above definition, our problem is defined as finding a set A of customer-strategy combinations that collectively achieves R/t reduction per interval. This is described more formally in (1), where r_{i,k} is the reduction achieved by the i-th customer-strategy combination in A at the k-th interval of the curtailment vector.

\sum_{i=1}^{|A|} r_{i,k} = \frac{R}{t}    (1)
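The per-interval constraint in (1) can be checked directly. Below is a minimal sketch, assuming each selected customer-strategy combination is represented by its per-interval curtailment vector; the function and variable names are ours, not the paper's.

```python
# Minimal check of constraint (1): the selected customer-strategy curtailment
# vectors must jointly deliver R/t at every interval k. Names are illustrative.
def meets_target(curtailment_vectors, R, t, tol=0.05):
    """curtailment_vectors: one per-interval reduction vector r_{i,k} per
    selected customer-strategy combination; tol is a relative tolerance."""
    target = R / t
    for k in range(t):
        total = sum(vec[k] for vec in curtailment_vectors)
        # Reject if this interval deviates from R/t by more than tol.
        if abs(total - target) > tol * target:
            return False
    return True

# Two customers over a 4-interval event with R = 40 kWh -> 10 kWh/interval.
A = [[6.0, 5.9, 6.1, 6.0], [4.0, 4.1, 3.9, 4.0]]
print(meets_target(A, R=40.0, t=4))  # True
```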
Our problem can also be expressed as an Integer Linear Programming (ILP) problem [28]. It is known that ILP is NP-hard [28]. This means that for a small number of customers a solution is achievable in reasonable time, but this is not the case for a large number of customers. We can also treat it as a knapsack problem [29]. However, dynamic programming does not fit our definition of the problem well: a real-time solution independent of the targeted reduction is needed, it is important to use the minimum number of participating customers, and we are dealing with real numbers for weights while needing to solve a 0-1 knapsack problem. In the next section we present our approximative solution, which is based on the change making problem [27].
IV. PROPOSED ALGORITHM
In this section we focus on describing the developed algorithm. At the beginning we discuss the motivation behind our choice to implement an approximative solution. An overview of the major steps which constitute the algorithm is presented. Finally, we discuss further some parts which affect the accuracy, sustainability and complexity of our approach. In the previous section we made a strong case in favour of sustainability. We also emphasized the need for dynamic demand response scheduling to deal with uncertainty. These are two of the strongest points which impelled us to provide an approximative solution. Real-time scheduling of DR events is important. As mentioned before, solutions provided by formulating our problem as a 0-1 knapsack or as an ILP are feasible but not efficient. Moreover, we need a way to incorporate into the selection procedure the uncertainty induced by the customers. This can be illustrated by considering the control of an AC unit during a warm summer day: the discomfort induced can force a customer to override the DR event, causing unpredictable peaks in demand. Finally, we can see that eliminating discomfort provides us with a predictable load reduction. This makes it easier to sustain a stable reduction throughout the event time frame. In the next section the proposed solution is described in more detail following the properties defined above.
A. Notation
Before we begin describing the algorithm in detail we need to define some common notation:
• U: set of representatives of the n customers.
• u_j: j-th estimate from set U.
• x_i: number of customers in the i-th bin.
• C̃: set of coins.
• c̃_i: i-th coin from the US denomination.
• C: set of bin values.
• c_i: i-th bin value from C, c_i = c̃_i · v.
• v: unit value used to define the bin ranges.
• M: reduction per interval.
• M̃: amount to be paid using change making.
• B_i: i-th building in a bin.
• B_{ij}: j-th strategy of the i-th customer.
• r_k: reduction value of the k-th interval.
• BN: set of bin ranges.
B. Algorithm Description
Our procedure involves formulating the solution based on the change making problem. The initial step is to distribute the customers into bins. Each bin has a specific range determined by a quantity we call the unit value (v). The upper value of each bin is defined as the bin value. Customers are distributed into specific bins according to an estimated reduction value. We refer to this value with the term representative of a customer.

\min_{c_i \in C,\; B_i \in (c_{i-1}, c_i]} \sum_{r_k \in B_{ij}} (c_i - r_k)^2    (2)

After distribution we conform each customer to a specific strategy. We decide on this by choosing the strategy which minimizes the accumulated error produced by the difference between the bin value and the reduction per interval values (2). This is done over all strategies available to each customer. The last step is to greedily iterate over the bins and select the customer-strategy combinations to pay for the needed reduction. The change making algorithm is used at this point to index the necessary bins.
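The conformance step above can be sketched as follows, under the assumption that each strategy is given as a per-interval curtailment vector; the helper names are illustrative, not the paper's.

```python
# Sketch of the conformance step in (2): for a customer placed in the bin
# with bin value c_i, choose the strategy whose per-interval reductions r_k
# minimise the squared deviation from c_i. Names are illustrative.
def best_strategy(bin_value, strategies):
    """strategies: dict mapping strategy id -> curtailment vector."""
    def error(vec):
        # Accumulated squared difference between bin value and reductions.
        return sum((bin_value - r) ** 2 for r in vec)
    return min(strategies, key=lambda s: error(strategies[s]))

strategies = {
    "s1": [9.0, 11.0, 10.5],  # hovers around the bin value
    "s2": [4.0, 16.0, 10.0],  # same mean, much larger deviations
}
print(best_strategy(10.0, strategies))  # s1
```

Note that a strategy with small deviations is preferred over one with the same average but large swings, which is exactly how the paper ties comfort to consistency.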
We now describe the above procedure more formally. Consider that we need a reduction of M kWh per interval across a DR event of size k (M = R/k). Assuming a known unit value, we get M̃ = M/v as the amount needed to be paid in order for the necessary reduction to be achieved. By using the change making approach we get (3). Starting from the biggest coin which is equal to or less than M̃, we use zero or more coins of value c̃_i to pay for the amount M̃. The set of coins C̃ = {1, 2, 5, 10, 25, 50, 100} is based on the US denominations, which always yield an optimal solution using the minimum number of coins. If we multiply (3) by the unit value v we get (4). This way we can construct seven bins using the values from C̃ multiplied by v. Doing so gives the bin ranges in (5). Every customer with a reduction per interval in the range defined by the boundaries of the i-th bin from (5) belongs in that bin. The estimate for the reduction is the customer's representative. Pseudo code describing the major steps of our change making scheduler is presented in Algorithm 1.

ChangeMakingScheduler(buildings, M)
  Data: List of building-strategy reduction vectors, reduction needed M
  Result: List of building-strategy combinations
  representatives = buildings.representatives();
  u = calc_unit_value(representatives);
  for i ← 1 to c.size do
      c[i] = c̃[i] · u
  end
  M̃ = M/u;
  bins = distribute(c, buildings);
  for i ← 1 to bins.size do
      sort(bins[i])
  end
  for i ← c.size downto 1 do
      j = 0
      while M̃ − c̃[i] ≥ 0 do
          while c[i] − bins[i].building[j].reduction ≥ 0 and j ≤ bins[i].length do
              c[i] = c[i] − bins[i].building[j].reduction
              j = j + 1
          end
          M̃ = M̃ − c̃[i]
      end
  end
Algorithm 1: Change Making Scheduler
\tilde{M} = \sum_{i=1}^{7} \tilde{c}_i \cdot x_i, \quad x_i \in \mathbb{N}    (3)

M = \sum_{i=1}^{7} \tilde{c}_i \cdot v \cdot x_i, \quad x_i \in \mathbb{N}    (4)

BN = \{(0, v], (v, 2v], (2v, 5v], \ldots, (50v, 100v]\}    (5)
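The bin ranges in (5) follow mechanically from the coin set and the unit value; a small illustrative helper (names are ours):

```python
# Bin ranges from (5): the US coin values scaled by the unit value v give
# seven half-open intervals (0, v], (v, 2v], (2v, 5v], ..., (50v, 100v].
COINS = [1, 2, 5, 10, 25, 50, 100]

def bin_ranges(v):
    bounds = [0] + [c * v for c in COINS]
    return [(bounds[i], bounds[i + 1]) for i in range(len(COINS))]

print(bin_ranges(2.0))
# [(0, 2.0), (2.0, 4.0), (4.0, 10.0), (10.0, 20.0), (20.0, 50.0), (50.0, 100.0), (100.0, 200.0)]
```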
The most crucial step of the mapping described above is selecting a suitable unit value. A unit value is suitable if we can find at least one customer-strategy combination in each bin which achieves the bin value per interval. An inaccurate decision at this point will limit the true potential reduction of a customer. In another case, a customer-strategy may effectively be omitted by being placed in a bigger bin: since customer-strategies are sorted in each bin before the bins are indexed, choices which deviate much from the bin value are not chosen. These limitations would always provide us with a suboptimal solution that does not achieve the set target. This affects both sustainability and the overall achievable reduction throughout the whole DR event. Much of this paper is devoted to devising sophisticated procedures for choosing the unit value. In the next subsection we describe them in detail.
C. Unit Value
Finding a suitable unit value must be a compromise between accuracy and speed. From the equality M̃ = M/v described before, we only know M. So we must find a way to assume an approximate value for one of the variables on the right hand side. We start by assuming that v = M and build the bins accordingly. This technique is defined as the greedy approach. All the customers under consideration will be limited to the first bin, since bins of larger value will overshoot our target and won't be considered in the final solution. Although simple and efficient, this technique does not achieve the best accuracy. This is because iterating greedily through the bins provides us with a solution containing customer-strategy combinations of the highest reduction level less than M. It may be the case that there is a selection of intermediate customer-strategy combinations that is able to achieve the targeted reduction. For that reason we need a solution adaptable to the specific reduction patterns of each customer during a DR event.
Taking into account customer behavioural patterns, we assumed that there must be a suitable unit value in the set of representatives of the customers. Based on this assumption we developed three procedures to select a suitable value from this set. Our solution begins by examining each individual representative separately, using it as a potential unit value to build the bin ranges. Then, assuming those bin ranges, an error measure is employed to decide which representative to select. This error measure estimates the effect of our choice on the final solution. The representative that minimizes the corresponding error measure is selected as the unit value.
The first technique, which we call Minimum Goal Accumulated Bin Error (MGABE), focuses on minimizing two quantities (6) connected to the choice of the unit value. The first quantity is the accumulated error produced by the absolute difference of the bin value from the reduction level estimated by the representatives of that bin. The second quantity is the absolute difference between the target reduction M and the sum of the bin values produced and used in the solution provided by the change making algorithm. This approach is expected to produce relatively good results with regard to sustainability, since it considers minimizing the error from the bin values. However, it will have a hard time matching the overall reduction needed since it focuses on two targets independently. It will be argued from the experiments that minimizing the latter quantity does not produce better results. However, we can see that the maximum error produced by that quantity will not exceed 0.5 due to the rounding of M/v.

\min_{u_j \in U} \sum_{c_i \in C} \left( c_i - \max_{c_{i-1} \le u_k \le c_i} u_k \right) + \min_{u_j \in U} \left( M - \sum_{c_i \in C_M} c_i \right)    (6)
A simpler technique is to consider minimizing only the quantity connected with the bin values. The second technique we developed, called Minimum Accumulated Average Bin Error (MAABE), does exactly that. It considers the average reduction of the representatives that belong in each bin and tries to minimize their difference from the bin value. Again, this is done for all the representatives from U. We describe this minimization problem in equation (7). We expect to get better results here with respect to the overall reduction, since we don't have the independent quantities of (6). Also, in the matter of sustainability, since we are dealing with the reduction per interval we will manage to acquire a solution achieving a sustainable reduction.

\min_{u_j \in U} \sum_{c_i \in C} \left( c_i - \frac{1}{x_i} \cdot \sum_{c_{i-1} \le u_k \le c_i} u_k \right)    (7)
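A minimal sketch of the MAABE selection in (7), assuming representatives are given as plain numbers and each candidate unit value is drawn from that set (the helper names are ours):

```python
# Sketch of MAABE: each candidate unit value u_j (taken from the set of
# representatives) defines bin ranges; score u_j by the summed difference
# between every populated bin's value and the mean of the representatives
# falling in it, as in (7), and keep the minimiser.
COINS = [1, 2, 5, 10, 25, 50, 100]

def maabe_unit_value(representatives):
    def score(v):
        bounds = [0.0] + [c * v for c in COINS]
        total = 0.0
        for i in range(len(COINS)):
            members = [u for u in representatives
                       if bounds[i] < u <= bounds[i + 1]]
            if members:
                total += abs(bounds[i + 1] - sum(members) / len(members))
        return total
    return min(representatives, key=score)

print(maabe_unit_value([1.0, 2.0, 5.0]))  # 1.0
```

With representatives {1, 2, 5}, the unit value 1.0 wins because its bin values coincide exactly with the representatives, giving zero accumulated error.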
Both of the described techniques consider all bins when calculating the overall error. It would be more advantageous to focus only on bins that are going to be used in the end result, because minimizing fewer goals gives a better result. It is important to use a good heuristic in order to get a realistic estimate of the bins used. Our approach was to estimate M̃, since we already have a potential v. Using this information we can decide which bins are used in the end result. Then we select v from the representatives to minimize the accumulated error induced by these bins. This technique, Minimum Coin Error (MCE), is similar to (6), although we only deal with choosing the representative that minimizes the accumulated bin error. Another difference is that we consider M̃ and not M. We describe MCE using (8). Our results are expected to be strongly dependent on the estimate we have for the bins being used.

\min_{u_j \in U} \sum_{c_i \in C} \left( c_i - \max_{c_{i-1} \le u_k \le c_i} u_k \right)    (8)
The presented methods focused on searching for the unit value in the representative set. The major drawback of this is that it assumes a suitable unit value is present in that set. Clearly there may be cases where that is not true. This is the reason behind the development of our last technique, called Unit Data Trend (UDT). Our reasoning behind this method is that we need to try to fit the patterns in the dataset to the constructed bins. Only in this way can we achieve sustainability, which will also ensure achieving the overall load reduction. We focus on using as bin values the initial values of the coins from the change making problem. Then we state that a unit value is needed to fit the dataset and provide corresponding bin ranges. We initially distribute the buildings into the corresponding bins. Based on this distribution we calculate the weighted average (9), which serves as the unit value. The weight is the number of customers in a bin and the value is the maximum reduction from the representatives in that bin. We expect to get good results with respect to sustainability as we try to match the patterns existing in our dataset. The overall targeted reduction might not be achieved, since sustainability might produce a reduction close to the needed one but slightly less or more than that.

v = \frac{\sum_{i=1}^{7} \left( x_i \cdot \max_{u_j \in U,\; c_{i-1} < u_j \le c_i} u_j \right)}{\sum_{i=1}^{7} x_i}    (9)
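The weighted average in (9) can be sketched as follows, assuming the coin values themselves are used as the initial bin boundaries, as described above (names are ours):

```python
# Sketch of UDT (9): distribute the representatives into the coin-valued
# bins, then return the average of each populated bin's maximum
# representative, weighted by the bin's population.
COINS = [1, 2, 5, 10, 25, 50, 100]

def udt_unit_value(representatives):
    bounds = [0.0] + [float(c) for c in COINS]
    num, den = 0.0, 0
    for i in range(len(COINS)):
        members = [u for u in representatives
                   if bounds[i] < u <= bounds[i + 1]]
        if members:
            num += len(members) * max(members)  # x_i * max u_j in bin i
            den += len(members)                 # x_i
    return num / den if den else 0.0

print(udt_unit_value([0.5, 1.0, 4.0]))  # 2.0
```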
D. Representatives
The previous section made the role of the representatives clear. They are used as estimates for the achievable reduction of each customer. Based on them we created heuristic methods to calculate a suitable unit value. We also use them to decide how buildings are divided into bins. Clearly their role is important and affects the robustness of the scheduling algorithm. Making a wrong estimate will give an unsuitable unit value, which in turn limits the potential reduction of each customer. This is because we conform each customer to the single strategy that provides the lowest deviation from the bin value (2). Realizing this, we decided to test different ways of calculating representatives. Our choices were motivated by the need to show estimates that produce good results as well as bad ones. It was our goal to give a clear understanding of the importance of these selections to the end result. Moreover, we focused on finding simple and efficient solutions which should not add much to the overall complexity. We present briefly the three methods used and get into a detailed discussion of their effect in the experiments section. The first method (MAX) calculates the maximum value over all intervals of all possible strategies for a customer. The second method (AVG) calculates the average reduction over all intervals of each available strategy for a customer. Finally, the third method (MAVG) uses as a representative the maximum average reduction among the individual strategies available to a customer. In general we expect to get the worst results from AVG, since large deviations between strategies are going to provide an inaccurate estimation. It is important to note that the representatives affect the techniques which are heavily based on them, i.e. all the techniques which use them to calculate an approximate unit value. Between the other two methods (MAX and MAVG) we expect better results from the former. This is connected to the bin values which constrain the selection of strategies. MAVG uses an averaging estimation for the whole interval, giving a bad estimation in cases where large deviations in the reduction exist. So our safest choice will be MAX since it is not so restricted by the maximum bin values constructed.
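The three estimates can be sketched as follows for a single customer, assuming strategies map to per-interval curtailment vectors; this is a minimal interpretation and the function names are ours.

```python
# Minimal interpretation of the three representative estimates for one
# customer whose strategies map to per-interval curtailment vectors.
def rep_max(strategies):
    # MAX: maximum value over all intervals of all strategies.
    return max(r for vec in strategies.values() for r in vec)

def rep_avg(strategies):
    # AVG: average over all intervals of all strategies.
    values = [r for vec in strategies.values() for r in vec]
    return sum(values) / len(values)

def rep_mavg(strategies):
    # MAVG: the largest per-strategy average reduction.
    return max(sum(vec) / len(vec) for vec in strategies.values())

s = {"s1": [2.0, 4.0], "s2": [1.0, 9.0]}
print(rep_max(s), rep_avg(s), rep_mavg(s))  # 9.0 4.0 5.0
```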
E. Complexity
For the complexity analysis we can divide the algorithm into two parts. First we have the part where some common steps are executed (e.g. distribution of customers into bins, representative calculation, etc.). This part is common to all methods and contributes a fixed complexity. The second part deals with the individual methods used to calculate the unit value, where the complexity differs among methods.

In general our algorithm consists of some common steps independent of the technique used to calculate the unit value. These include calculating the representatives, distributing the customers into bins and conforming them to a specific strategy, sorting the customer-strategy combinations in each bin according to their corresponding error, and finally iterating greedily over the bins to produce the indices used to select which customers are going to participate in the DR event. The size of the input is defined as n·m·k, where n is the number of customers, m is the number of available strategies and k is the number of intervals in the DR event. In practice we have 3-10 strategies to consider and at most 96 intervals of 15 minute granularity in a day. These choices were made during our experiments and present a realistic scenario. We conclude that the size of the input is linear in the number of customers. The common steps we described before have complexities of O(n), O(n), O(n log n) and O(n) respectively, considering an input size of n customers. So, in conclusion, the overall complexity is O(n log n) for the first part.

Except for the greedy technique, calculating the unit value adds an extra computational cost. In MGABE and MCE, there is an extra computational cost of O(n log n). This is because we consider at most n representatives and for each we need to find the one closest to the bin value. In MAABE the extra cost is O(n^2) because we calculate, for each of the n representatives, the average of the customer reductions that belong in each bin. The final method (UDT) calculates the weighted average, for which we need to iterate over all the representatives.

It can be argued that the devised algorithm fulfils the requirement for an efficient solution. In any case we need at most polynomial time to provide a solution. It will be shown in the experiments that the above complexities are verified. It is also noted that bottleneck operations like calculating the representatives can be executed as a preprocessing step.
V. EXPERIMENTS & RESULTS
The experiments we conducted were designed to test the accuracy of the algorithm in terms of the targeted reduction and the sustainability of the reduced load. We also measured the number of buildings used in the solution to argue about the level of intrusiveness of our solution. Finally, we simulated consecutive DR events and measured the collective execution time for a solution to be provided.

The algorithm was implemented in Java and was executed on the Windows operating system. The experiments were executed on a single quad-core CPU system (Intel Core i7-3632QM @ 2.20 GHz) with 8.00 GB of Dual-Channel DDR3 @ 665 MHz system memory.
A. Dataset Used & Experiment Categories
To test our implementation we used measurements from
meters in the USC campus micro grid. The dataset is populated
with values representing the average power reduction(KWh)
for a ﬁxed periods of time(15 minute granularity).In total 33
buildings participated in 380 DR tests by employing speciﬁc
reduction strategies or a combination of them.The time frame
of the DR events is from 1:00 - 5:00 PM. We chose this time
frame because it is the time of the day where peak demand
is observed. The available strategies for each building consist
of equipment groupings which are controlled directly during
a DR event. Measurements from past DR events conducted on campus buildings were used as input to the ARIMA [26] prediction model, which we needed in order to predict the actual building consumption during past DR events. The Southern California Edison baseline (CASCE) [1] was used to define the actual consumption on a regular day for each building. As mentioned before, we measured the curtailment level by taking the difference between the baseline and the predicted consumption during a DR event. CASCE was used because previous work [3] found that it gave the most accurate results in terms of predicted consumption during a normal day.
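The curtailment definition above can be sketched as follows (a minimal illustration with hypothetical array names, not the authors' code): for every 15-minute interval, curtailment is the baseline (normal-day) consumption minus the consumption predicted for the same interval of the DR event.

```java
public class Curtailment {
    // Curtailment per 15-minute interval: baseline consumption minus the
    // consumption predicted for the same interval of the DR event.
    public static double[] curtailment(double[] baselineKwh, double[] predictedKwh) {
        double[] c = new double[baselineKwh.length];
        for (int i = 0; i < c.length; i++) {
            c[i] = baselineKwh[i] - predictedKwh[i]; // positive => load was reduced
        }
        return c;
    }

    public static void main(String[] args) {
        double[] c = curtailment(new double[]{100.0, 120.0}, new double[]{90.0, 100.0});
        System.out.println(java.util.Arrays.toString(c)); // prints [10.0, 20.0]
    }
}
```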
We consider two cases in our experiments. The first case includes all the buildings and strategies available in our dataset. We do this to evaluate the accuracy of each developed technique in terms of overall load reduction and sustained load during a DR event; absolute percentage error was used as the evaluation metric. The second case randomly selects buildings and strategies to be eliminated, with every building or strategy having a 50% probability of being selected. This decision was made to simulate a harder instance of the problem being solved, and the results were used to evaluate the robustness of each technique. In this case we ran a thousand requests and used the Mean Absolute Percentage Error (MAPE) as the metric for evaluating each individual technique.
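The MAPE metric used here can be sketched as below (our illustration; names are ours): the absolute percentage error between targeted and achieved reduction, averaged over all requests.

```java
public class Mape {
    // Mean Absolute Percentage Error between targeted and achieved values,
    // expressed in percent. Assumes no target value is zero.
    public static double mape(double[] target, double[] achieved) {
        double sum = 0.0;
        for (int i = 0; i < target.length; i++) {
            sum += Math.abs((target[i] - achieved[i]) / target[i]) * 100.0;
        }
        return sum / target.length;
    }

    public static void main(String[] args) {
        // Errors of 10% and 5% average to a MAPE of 7.5%.
        System.out.println(mape(new double[]{100.0, 200.0}, new double[]{110.0, 190.0})); // prints 7.5
    }
}
```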
The overall performance of the developed methods is evaluated against two random selection approaches used as baselines. The first baseline is a uniform random selection of buildings and their corresponding strategies. The second baseline implements a probabilistic selection, based on the distribution of buildings into fixed-length bins according to the reduction estimated by their representatives. Bins containing more buildings have a greater probability of being selected. The bin ranges are built in increments of 10 starting from 0. A few buildings exhibited negative reduction, but we ignored them in the baselines.
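The probabilistic baseline described above can be sketched as follows (our own interpretation with illustrative names, not the paper's code): buildings are counted into bins of width 10 starting at 0, negative reductions are skipped, and a bin is drawn with probability proportional to its count.

```java
import java.util.Arrays;
import java.util.Random;

public class ProbabilisticBaseline {
    // Count buildings per bin of the given width, starting from 0.
    public static int[] binCounts(double[] reductions, int numBins, double width) {
        int[] counts = new int[numBins];
        for (double r : reductions) {
            if (r < 0) continue; // buildings with negative reduction are ignored
            int b = Math.min((int) (r / width), numBins - 1);
            counts[b]++;
        }
        return counts;
    }

    // Draw a bin index with probability proportional to its building count.
    public static int sampleBin(int[] counts, Random rng) {
        int total = 0;
        for (int c : counts) total += c;
        int draw = rng.nextInt(total);
        for (int b = 0; b < counts.length; b++) {
            draw -= counts[b];
            if (draw < 0) return b;
        }
        return counts.length - 1;
    }

    public static void main(String[] args) {
        int[] counts = binCounts(new double[]{5.0, 12.0, 15.0, 25.0, -3.0}, 3, 10.0);
        System.out.println(Arrays.toString(counts)); // prints [1, 2, 1]
        System.out.println(sampleBin(counts, new Random(7))); // some bin in 0..2
    }
}
```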
B. Overall Reduction
Achieving an optimal reduction is strongly related to the method used for calculating the unit value. Since many of the developed methods are based on the representatives, it is imperative to have a good initial estimate for them. As stated earlier, a good estimate is one that does not limit the potential reduction of each customer; limitation exists if the bin value confines a customer to a strategy with lower reduction than the maximum achievable. Each individual technique was tested against the different estimations presented in Section IV. As expected, using AVG as the estimate for the representatives gave the worst result (maximum average error ≈ 2.5%) (Fig. 1). Although AVG produces the worst results compared to the MAX and MAVG estimates (maximum average error ≈ 0.7% and ≈ 1.3% respectively), it still outperforms our baselines by a large margin. In general, the estimate that gives the best result is MAX. This correlates with our need not to be limited by the bin value when choosing a strategy: if we build our bins using this estimate, we can always fit the maximum reduction provided by a specific strategy. In the case of MAVG we are more restricted by the bin value than with MAX, but less restricted than with AVG, so results in between MAX and AVG were expected. In (Fig. 2) we present the results produced by the MAX estimate.
Fig. 1: Case 1 - Overall Targeted Reduction (AVG).
In order to compare the effectiveness of the developed techniques we plotted the Cumulative Distribution Function (CDF) of the normalized error. In (Fig. 3) we present the result obtained using the MAX estimate for the representatives. Although we argued against the greedy technique, it appears to achieve better results than the other approaches. This can be explained by the nature of the dataset: the campus contains many similar buildings with low deviation in their achievable reduction. Taking this into account, a greedy choice of building-strategy combinations will approximate the target reduction accurately most of the time. However, this is not the case for higher reduction values. There are a few buildings with larger reduction that need to be included in the final solution, so the relative error increases for these values.
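One simple reading of such a greedy baseline is sketched below (our interpretation under stated assumptions, not necessarily the exact rule used in the paper): building-strategy reductions are added largest-first as long as the running total does not overshoot the target. With many similar low-reduction buildings this lands close to the target, but a few large-reduction buildings can leave a gap.

```java
import java.util.Arrays;

public class GreedySelect {
    // Add reductions largest-first while the running total stays at or
    // below the target; returns the total reduction achieved.
    public static double greedyTotal(double[] reductions, double target) {
        double[] sorted = reductions.clone();
        Arrays.sort(sorted); // ascending order
        double total = 0.0;
        for (int i = sorted.length - 1; i >= 0; i--) { // iterate largest-first
            if (total + sorted[i] <= target) {
                total += sorted[i];
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // Target 65: picks 50, skips 30 and 20 (would overshoot), picks 10.
        System.out.println(greedyTotal(new double[]{50.0, 30.0, 20.0, 10.0}, 65.0)); // prints 60.0
    }
}
```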
A clearer view of the robustness of each approach can be
realized in the second part of the experiments we conducted.
Fig. 2: Case 1 - Overall Targeted Reduction (MAX).
In general the results showed that MAABE is the technique which provides the best accuracy and stability. This is because we focus on minimizing only one quantity when deciding on the unit value to be selected. Also, by using an averaging method to estimate the achievable bin value, we can always get the best result given a good representative estimate, as explained before: an average closer to the bin value always considers the maximum potential reduction of the customers in a bin. MGABE produces the worst error from the targeted reduction. Given the reasoning just presented, this was expected: the independent minimization of the two quantities is responsible for this performance. Selecting a suitable unit value is based on the specific targeted reduction, yet we consider all the bins for the accumulated error, which results in conflicting objectives. MCE was developed on that reasoning and focuses on minimizing the accumulated error over the specific bins indexed by the change-making algorithm.
As the results show, it produces a lower approximation error, outperforming MGABE. It keeps up with MAABE but shows less stability and eventually performs worse. The problem in this case is that each bin is assumed to contain the necessary number of buildings, which in practice is not always true, as the construction of bin ranges sometimes produces empty ones. Finally, we consider the solution provided by UDT. Here the results follow a pattern similar to that of MCE. This approach is strongly correlated with the data distribution. A pitfall exists when the deviation between the reductions provided by the individual buildings is large: the selected unit value fits the buildings with a small reduction and restricts the ones with a large reduction, because the bin values are small relative to the chosen unit value. In our case the deviation between the buildings is small, which is why this effect is not clearly visible here; it becomes clearer in the next part of the experiments.
The second round of experiments aimed at testing the robustness of each method. In (Fig. 4) we present the results obtained using AVG for the calculation of the representatives. The results are consistent with our claims about the importance of choosing a good estimate for the representatives: in this case (AVG) none of the developed methods can match the performance of the greedy technique.
Fig. 3: Case 1 - Accuracy of Unit Value Techniques (MAX).
In (Fig. 5) we chose to
present the results for the case of MAX, since they are similar to those of MAVG although slightly worse. The results follow the same pattern as in the first round of experiments. Our claim in favour of MAABE is supported, since it manages to match and outperform every other method. The greedy approach ranks second along with MCE. We also see that MGABE has an unstable performance in terms of accuracy: it follows the accuracy of the other methods in some cases but is unpredictable for the most part. Finally, UDT manages to match the error produced by the greedy approach, as expected. A point we need to make is that the overall error in this round of results has increased significantly: since we discard randomly selected buildings from the original set, there might not be a solution that fulfils our request. The error curve presented in Fig. 5 is expected. On campus there are many buildings of low reduction which can be combined to achieve an overall low reduction, but to achieve a bigger reduction we need to include buildings with high reduction levels, and those are eliminated more easily from the final set since few of them exist.
Fig. 4: Case 2 - Overall Targeted Reduction (AVG).
C. Reduction Sustainability
A strong asset of the selection algorithm we developed is achieving sustainability. We argue in favour of this by presenting the results for the first case of our experiments.
Fig. 5: Case 2 - Overall Targeted Reduction (MAX).
We focused on the highest overall reduction we could achieve in our experiments (3000 kWh). We present the deviation from the optimal reduction per interval, using the absolute percentage error as a metric. Our choice is based on the collective number of buildings used in the DR event; the goal behind this decision is to show how the behaviour of different buildings hinders our ability to achieve sustainability.
It is important to point out that sustainability is not directly connected to achieving the overall load reduction across a DR event. A schedule with a large average per-interval deviation from the target may still provide the overall reduction we want, by establishing lower reduction at the beginning of the event and higher reduction close to the end.
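The per-interval deviation metric just described can be sketched as follows (our illustration; names are ours). Note that schedules of {100, 100} and {80, 120} both meet an overall target of 200, but the second deviates 20% from the per-interval target in every interval, which is exactly what this metric exposes.

```java
public class IntervalDeviation {
    // Absolute percentage deviation of the achieved reduction from the
    // per-interval target, reported for each interval of the DR event.
    public static double[] deviation(double perIntervalTarget, double[] achieved) {
        double[] d = new double[achieved.length];
        for (int i = 0; i < achieved.length; i++) {
            d[i] = Math.abs(achieved[i] - perIntervalTarget) / perIntervalTarget * 100.0;
        }
        return d;
    }

    public static void main(String[] args) {
        double[] d = deviation(100.0, new double[]{80.0, 120.0});
        System.out.println(java.util.Arrays.toString(d)); // prints [20.0, 20.0]
    }
}
```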
Fig. 6: Case 1 - Deviation of Sustainable Load (MAX)
The average deviation from the optimal sustainable reduction was less than 3%. The results (Fig. 6) showed that we achieved the lowest deviation in the cases of MGABE, MAABE and UDT, while higher deviation was observed for the greedy method and MCE. The results are related to the method used to calculate the unit value. Although MGABE gives a high approximation error for the overall load reduction, it is among the best at achieving sustainability of the load. Similar results are observed for MAABE and UDT. A common property of these methods is that they try to construct bin values that fit the patterns in the dataset. At this point our decision to present the results for an overall reduction achievable by the largest number of buildings is justified. In our attempt to provide a solution, we index all the bins and choose at least one building from each. Creating a sustainable schedule requires us to minimize the error from the bin values. Since the bin values are calculated based on the unit value, every approach that accounts for the existence of curtailment vectors fitting the reduction defined by the bin values can achieve the best sustainability. That is why MAABE, UDT and MGABE achieve the best results. Given this statement one would expect the same results for MCE; however, this is not verified by the results, nor is it expected to be. MCE is set to minimize the error considering only specific bins, and those bins are assumed to contain the needed number of buildings, which is not always the case. So MCE makes a wrong estimation that results in a unit value that poorly fits the dataset, hindering sustainability of the curtailed load. Similar reasoning can be applied to the greedy method, although there it is more obvious, since customer reduction patterns are not considered from the start.
Fig. 7: Case 1 - Average Number of Buildings Selected.
D. Comfort Level
Since we used the change-making approach to index the bins, we expect to get the minimum number of building-strategy combinations needed to achieve the necessary reduction. In (Fig. 7) we see that, across the different techniques for calculating the representatives, 15-34% of the campus buildings were used to achieve the overall targeted reduction. The increase in building selection when using AVG is caused by the choice of the unit value: as noted before, the unit value limits the potential reduction of each building through the resulting bin values, so the overall reduction is only achievable by selecting more buildings. It was stated that the comfort levels are implied by the intrusiveness of each strategy. This is reflected in the potential sustainable reduction patterns of each building associated with a specific strategy. It is the goal of the algorithm to detect those patterns and discard the ones that do not achieve a consistent reduction. In (Fig. 8) a heat map was created including the buildings of the provided solutions. Each building has an available strategy, depicted with a different code in the second column. To construct the heat map, strategies of the same building were grouped together. Higher and lower reduction per interval is depicted by levels of green and red respectively, while reduction levels in the middle are drawn with yellow and orange. It can be seen that our solution selects buildings with consistently high reduction. We can associate the curtailment vectors which produce inconsistent load reduction with intrusive strategies; for those we would observe consistently low reduction, or inconsistent patterns whose reduction deviates highly among intervals. It is important to note that the algorithm returns the optimal selection of building-strategy combinations with respect to the ones available.
Fig. 8: Case 1 - Heat Map of Building - Strategy.
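One plausible way to flag inconsistent curtailment vectors, of the kind the heat map exposes, is sketched below (our own illustration; the paper does not specify this exact rule): the coefficient of variation of a strategy's per-interval reductions measures how far the vector deviates from a flat, sustainable pattern.

```java
public class Consistency {
    // Ratio of standard deviation to mean of a curtailment vector;
    // small values indicate a flat, consistent per-interval reduction.
    public static double coefficientOfVariation(double[] vector) {
        double mean = 0.0;
        for (double v : vector) mean += v;
        mean /= vector.length;
        double var = 0.0;
        for (double v : vector) var += (v - mean) * (v - mean);
        double std = Math.sqrt(var / vector.length);
        return mean == 0.0 ? Double.POSITIVE_INFINITY : std / mean;
    }

    // A vector is deemed consistent when its relative spread is small.
    public static boolean isConsistent(double[] vector, double maxCv) {
        return coefficientOfVariation(vector) <= maxCv;
    }

    public static void main(String[] args) {
        System.out.println(coefficientOfVariation(new double[]{10.0, 10.0, 10.0, 10.0})); // prints 0.0
        System.out.println(isConsistent(new double[]{1.0, 19.0}, 0.1)); // prints false
    }
}
```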
E. Execution Time
Evaluation of the efficiency was done using synthetic data. A simulation of 1000 DR requests provided us with the maximum average execution time; we present the results in (Fig. 9). The number of customers ranged from 1000 to 32000, each having 10 available strategies, and the DR event time frame was 4 hours (16 intervals). In our results we observed an almost linear increase in the execution time for all methods except MAABE, where the increase was polynomial. All results were expected and derive directly from the complexity analysis in subsection IV-E. It is important to note that the bottleneck in the execution was the calculation of the unit value. Since the algorithm inherently does not have any data dependencies, a distributed election algorithm can be used to decide on the unit value.
VI. CONCLUSION & FUTURE WORK
In this paper we focused on solving the problem of Dynamic DR scheduling. Our goals were to achieve a sustainable targeted reduction while factoring in the uncertainty induced by customer discomfort. It was shown that our proposed solution achieves a sustainable load reduction with respect to the targeted one, and that it detects behavioural patterns implied in the reduction levels of customers associated with specific strategies. The provided solution fulfils the dynamic requirements, as it provides a schedule in a reasonable amount of time with respect to the number of customers.
Fig. 9: Timings of Consecutive DR events.
Our future work will focus on two parts. Although the algorithm has a low complexity, it presents a bottleneck when retrieving the customer information, and it is imperative to provide a distributed solution to overcome this pitfall. Moreover, we need to deal with the uncertainty induced by changes in customer behaviour across multiple DR events. This again affects the dynamic nature of the algorithm; it is important to factor in on-demand updates of the dataset to keep our scheduling accurate.
VII. ACKNOWLEDGMENT
This material is based upon work supported by the United States Department of Energy under Award Number DE-OE0000192, and by the Los Angeles Department of Water and Power (LA DWP). The views and opinions of the authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof, the LA DWP, or any of their employees.
REFERENCES
[1] 10-day average baseline and day-off adjustment. Technical report,
Southern California Edison, 2011.
[2] M. H. Albadi and E. F. El-Saadany. Demand response in electricity markets: An overview. In IEEE Power Engineering Society General Meeting, pages 1–5, 2007.
[3] S. Aman, M. Frincu, C. Charalampos, U. Noor, Y. Simmhan, and
V. Prasanna. Empirical comparison of prediction methods for electricity
consumption forecasting. Technical Report 14-942, University of South-
ern California, 2014. URL: http://www.cs.usc.edu/assets/007/89887.pdf
(accessed Mar 6, 2014).
[4] S. Aman, Y. Simmhan, and V. K. Prasanna. Improving energy use
forecast for campus micro-grids using indirect indicators. In Data
Mining Workshops (ICDMW), 2011 IEEE 11th International Conference
on, pages 389–397. IEEE, 2011.
[5] S. Bahrami, M. Parniani, and A. Vafaeimehr. A modiﬁed approach for
residential load scheduling using smart meters. In Innovative Smart
Grid Technologies (ISGT Europe), 2012 3rd IEEE PES International
Conference and Exhibition on, pages 1–8, Oct 2012.
[6] F. Benzi, N. Anglani, E. Bassi, and L. Frosini. Electricity smart meters
interfacing the households. Industrial Electronics, IEEE Transactions
on, 58(10):4487–4494, 2011.
[7] B. Chai, Z. Yang, and J. Chen. Optimal residential load scheduling in
smart grid: A comprehensive approach. In Control Conference (ASCC),
2013 9th Asian, pages 1–6, June 2013.
[8] C. Chen, J. Wang, Y. Heo, and S. Kishore. Mpc-based appliance
scheduling for residential building energy management controller. Smart
Grid, IEEE Transactions on, 4(3):1401–1410, Sept 2013.
[9] J. Chen, B. Yang, and X. Guan. Optimal demand response scheduling
with stackelberg game approach under load uncertainty for smart grid.
In Smart Grid Communications (SmartGridComm), 2012 IEEE Third
International Conference on, pages 546–551, Nov 2012.
[10] W.-C. Chu, B.-K. Chen, and C.-K. Fu. Scheduling of direct load
control to minimize load reduction for a utility suffering from generation
shortage. Power Systems, IEEE Transactions on, 8(4):1525–1530, Nov
1993.
[11] A. I. Cohen and C. Wang. An optimization method for load management
scheduling. Power Systems, IEEE Transactions on, 3(2):612–618, May
1988.
[12] J. Ding, R. Yu, and Y. Liu. Optimal phevs charging management by
tradeoff between utility cost and user satisfaction. In Communications
and Networking in China (CHINACOM), 2013 8th International ICST
Conference on, pages 936–941, Aug 2013.
[13] X. Fang, S. Misra, G. Xue, and D. Yang. Smart grid - the new and improved power grid: A survey. Communications Surveys & Tutorials, IEEE, 14(4):944–980, 2012.
[14] N. Gatsis and G. Giannakis. Cooperative multi-residence demand
response scheduling. In Information Sciences and Systems (CISS), 2011
45th Annual Conference on, pages 1–6, March 2011.
[15] N. Gatsis and G. Giannakis. Residential demand response with inter-
ruptible tasks: Duality and algorithms. In Decision and Control and
European Control Conference (CDC-ECC), 2011 50th IEEE Conference
on, pages 1–6, Dec 2011.
[16] I. Georgievski, V. Degeler, G. Pagani, T. A. Nguyen, A. Lazovik, and
M. Aiello. Optimizing energy costs for ofﬁces connected to the smart
grid. Smart Grid, IEEE Transactions on, 3(4):2273–2285, Dec 2012.
[17] H. Goudarzi, S. Hatami, and M. Pedram. Demand-side load scheduling
incentivized by dynamic energy prices. In Smart Grid Communications
(SmartGridComm), 2011 IEEE International Conference on, pages 351–
356, Oct 2011.
[18] F. Mangiatordi, E. Pallotti, P. Del Vecchio, and F. Leccese. Power
consumption scheduling for residential buildings. In Environment and
Electrical Engineering (EEEIC), 2012 11th International Conference on,
pages 926–930, May 2012.
[19] A.-H. Mohsenian-Rad, V. Wong, J. Jatskevich, R. Schober, and A. Leon-
Garcia. Autonomous demand-side management based on game-theoretic
energy consumption scheduling for the future smart grid. Smart Grid,
IEEE Transactions on, 1(3):320–331, Dec 2010.
[20] K. Moslehi and R. Kumar. A reliability perspective of the smart grid.
Smart Grid, IEEE Transactions on, 1(1):57–64, 2010.
[21] K.-H. Ng and G. Sheble. Direct load control-a proﬁt-based load man-
agement using linear programming. Power Systems, IEEE Transactions
on, 13(2):688–694, May 1998.
[22] S. Salinas, M. Li, and P. Li. Multi-objective optimal energy consumption
scheduling in smart grids. Smart Grid, IEEE Transactions on, 4(1):341–
348, March 2013.
[23] A. Sepulveda, L. Paull, W. Morsi, H. Li, C. Diduch, and L. Chang.
A novel demand side management program using water heaters and
particle swarm optimization. In Electric Power and Energy Conference
(EPEC), 2010 IEEE, pages 1–5, Aug 2010.
[24] H. Simão, H. Jeong, B. Defourny, W. Powell, A. Boulanger, A. Gagneja, L. Wu, and R. Anderson. A robust solution to the load curtailment problem.
[25] S. Tang, Q. Huang, X.-Y. Li, and D. Wu. Smoothing the energy consumption: Peak demand reduction in smart grid. In INFOCOM, 2013 Proceedings IEEE, pages 1133–1141, April 2013.
[26] Wikipedia. Autoregressive integrated moving average (ARIMA) — Wikipedia, the free encyclopedia, 2014. [Online; accessed 15-March-2014].
[27] Wikipedia. Change-making problem — Wikipedia, the free encyclope-
dia, 2014. [Online; accessed 15-March-2014].
[28] Wikipedia. Integer programming — Wikipedia, the free encyclopedia,
2014. [Online; accessed 15-March-2014].
[29] Wikipedia. Knapsack problem — Wikipedia, the free encyclopedia,
2014. [Online; accessed 15-March-2014].
[30] D. Zhang, L. G. Papageorgiou, N. J. Samsatli, and N. Shah. Optimal
scheduling of smart homes energy consumption with microgrid. In
ENERGY 2011, The First International Conference on Smart Grids,
Green Communications and IT Energy-aware Technologies, pages 70–
75, 2011.
... The first is to forgo accuracy guarantees in favor of performance. Techniques such as [39,27,50,40] develop fast algorithms that can have arbitrarily large errors in the objective function (utility maximization, cost minimization, etc.). Authors in [39] develop a genetic-algorithm-based heuristic, while [50] presents a heuristic based on change making. ...
... Techniques such as [39,27,50,40] develop fast algorithms that can have arbitrarily large errors in the objective function (utility maximization, cost minimization, etc.). Authors in [39] develop a genetic-algorithm-based heuristic, while [50] presents a heuristic based on change making. The algorithm developed in [40] uses Linear Programming, whose solutions need to be rounded to integral values and can have large errors (unbounded integrality gap). ...
... We also compare our algorithm against demand curtailment selection techniques such as those developed in [50] and [17]. We observed that these techniques typically incur errors (constraint violations) of around 5% to 20% and in the worst case can go as high as 95%. ...
Article
Mitigating supply-demand mismatch is critical for smooth power grid operation. Traditionally, load curtailment techniques such as demand response have been used for this purpose. However, these cannot be the only component of a net-load balancing framework for smart grids with high PV penetration. These grids sometimes exhibit supply surplus, causing overvoltages. Currently, these are mitigated using voltage manipulation techniques such as Volt-Var Optimizations, which are computationally expensive, thereby increasing the complexity of grid operations. Taking advantage of recent technological developments that enable rapid selective connection of PV modules of an installation to the grid, we develop a unified net-load balancing framework that performs both load and solar curtailment. We show that when the available curtailment values are discrete, this problem is NP-hard and we develop bounded approximation algorithms. Our algorithms produce fast solutions, given the tight timing constraints required for grid operation, while ensuring that practical constraints such as fairness, network capacity limits, and so forth are satisfied. We also develop an online algorithm that performs net-load balancing using only data available for the current interval. Using both theoretical analysis and practical evaluations, we show that our net-load balancing algorithms provide solutions that are close to optimal in a small amount of time.
... The notion of achieving sustainable DR over a peak period divided into subintervals was proposed in [20] using a change making heuristic to evenly distribute curtailment over intervals. However, a detailed analysis (omitted due to space constraints) shows that it achieves consistency between intervals without reference to the target leading to unbounded errors which is also demonstrated by our experimental results. ...
... Since the ILPs are solved exactly, the respective errors are optimal. We compare the optimal minimal errors with the actual errors achieved by the stateof-the-art heuristic [20]. ...
Article
Full-text available
Demand Response (DR) is a widely used technique to minimize the peak to average consumption ratio during high demand periods. We consider the DR problem of achieving a given curtailment target for a set of consumers equipped with a set of discrete curtailment strategies over a given duration. An effective DR scheduling algorithm should minimize the curtailment error - the difference between the targeted and achieved curtailment values - to minimize costs to the utility provider and maintain system reliability. The availability of smart meters with fine-grained customer control capability can be leveraged to offer customers a dynamic range of curtailment strategies that are feasible for small durations within the overall DR event. Both the availability and achievable curtailment values of these strategies can vary dynamically through the DR event and thus the problem of achieving a target curtailment over the entire DR interval can be modeled as a dynamic strategy selection problem over multiple discrete sub-intervals. We argue that DR curtailment error minimizing algorithms should not be oblivious to customer curtailment behavior during sub-intervals as (expensive) demand peaks can be concentrated in a few sub-intervals while consumption is heavily curtailed during others in order to achieve the given target, which makes such solutions expensive for the utility. Thus in this paper, we formally develop the notion of Sustainable DR (SDR) as a solution that attempts to distribute the curtailment evenly across sub-intervals in the DR event. We formulate the SDR problem as an Integer Linear Program and provide a very fast $\sqrt{2}$-factor approximation algorithm. We then propose a Polynomial Time Approximation Scheme (PTAS) for approximating the SDR curtailment error to within an arbitrarily small factor of the optimal. 
We then develop a novel ILP formulation that solves the SDR problem while explicitly accounting for customer strategy switching overhead as a constraint. We perform experiments using real data acquired from the University of Southern California’s smart grid and show that our sustainable DR model achieves results with a very low absolute error of 0.001-0.05 kWh range.
... In this case, the utility will monitor the demand of each home and take actions based on predicted consumption. By employing customer selection algorithms [10], [11], utilities can target specific areas without impacting the same subset of customers repeatedly, hence reducing the impact on their comfort. However, despite DR being an effective means of curtailment energy consumption, utilities may face the reluctance of customers to yield control of their homes. ...
Conference Paper
As smart homes and smart grids become ubiquitous their interactions will become crucial for optimizing energy consumption at large scale at residential level. Scalable solutions will be required to enable fast and reliable control during demand response. While management solutions have been proposed they do not focus on the scalability issues of the processing system. Handling continuous and variable Big Data streams can easily saturate existing systems. In this paper we propose a scalable cloud based architecture and prototype system for handling smart home data ﬂows. The system can support near real time decisions for 10,000 customers each having 10 sensors with only 35 commodity machines running free cloud software. The platform is automated and can be used to directly control the customers’ smart home or to send recommendations. Some initial experiments are performed to show the beneﬁts of smart recommendations.
Article
Full-text available
Mitigating Supply-Demand mismatch is critical for smooth power grid operation. Traditionally, load curtailment techniques such as Demand Response (DR) have been used for this purpose. However, these cannot be the only component of a net-load balancing framework for Smart Grids with high PV penetration. These grids can sometimes exhibit supply surplus causing over-voltages. Supply curtailment techniques such as Volt-Var Optimizations are complex and computationally expensive. This increases the complexity of net-load balancing systems used by the grid operator and limits their scalability. Recently new technologies have been developed that enable the rapid and selective connection of PV modules of an installation to the grid. Taking advantage of these advancements, we develop a unified optimal net-load balancing framework which performs both load and solar curtailment. We show that when the available curtailment values are discrete, this problem is NP-hard and develop bounded approximation algorithms for minimizing the curtailment cost. Our algorithms produce fast solutions, given the tight timing constraints required for grid operation. We also incorporate the notion of fairness to ensure that curtailment is evenly distributed among all the nodes. Finally, we develop an online algorithm which performs net-load balancing using only data available for the current interval. Using both theoretical analysis and practical evaluations, we show that our net-load balancing algorithms provide solutions which are close to optimal in a small amount of time.
Conference Paper
Maintaining the balance between electricity supply and demand is one of the major concerns of utility operators. With the increasing contribution of renewable energy sources in the typical supply portfolio of an energy provider, volatility in supply is increasing while the control is decreasing. Real time pricing based on aggregate demand, unfortunately cannot control the non-linear price sensitivity of deferrable/flexible loads and leads to other peaks [4, 5] due to overly homogenous consumption response. In this paper, we present a day-ahead group-based real-time pricing mechanism for optimal demand shaping. We use agent-based simulations to model the system-wide consequences of deploying different pricing mechanisms and design a heuristic search mechanism in the strategy space to efficiently arrive at an optimal strategy. We prescribe a pricing mechanism for each groups of consumers, such that even though consumption synchrony within each group gives rise to local peaks, these happen at different time slots, which when aggregated result in a flattened macro demand response. Simulation results show that our group-based pricing strategy out-performs traditional real-time pricing, and results in a fairly flat peak-to-average ratio.
Article
Full-text available
Microgrid is taken as the future Smart Grid, and can work as a local energy provider to domestic buildings and reduce energy expenses. To further lower the cost, a Smart Homes idea is suggested. Smart Homes of the future will include automation systems and could provide lower energy consumption costs and comfortable and secure living environment to end users. If the energy consumption tasks across multiple homes can be scheduled based on the users' requirement, the energy cost and peak demand could be reduced. In this paper the optimal scheduling of smart homes' energy consumption is presented using mixed-integer linear programming. In order to minimize a one-day forecasted energy consumption cost, operation and electricity consumption tasks are scheduled based on different electricity tariffs, electricity task time window and forecasted renewable energy output. A case study of thirty homes with their own microgrid indicates the possibility of cost saving and asset utilization increase.
Technical Report
Recent years have seen an increasing interest in providing accurate prediction models for electrical energy consumption. In Smart Grids, energy consumption optimization is critical to enhance power grid reliability and avoid supply-demand mismatches. Utilities rely on real-time power consumption data from individual customers in their service area to forecast future demand and initiate energy curtailment programs. Currently, however, little is known about the differences in consumption characteristics of various customer types and their impact on prediction accuracy. While many studies have concentrated on aggregate loads, showing that accurate consumption prediction at the building level can be achieved, there is a lack of results regarding individual customers' consumption prediction. In this study, we perform an empirical quantitative evaluation of various methods for predicting kWh energy consumption of two distinct customer types: 1) small, highly variable individual customers, and 2) aggregated, more stable consumption at the building level. We show that prediction accuracy heavily depends on customer type. Contrary to previous studies, we consider consumption data at very fine granularity (i.e., 15-min intervals) and focus on very short-term predictions (the next few hours). As Smart Grids move closer to dynamic curtailment programs, which enable demand response (DR) events not only on weekdays but also during weekends, existing DR strategies prove inadequate. Here, we relax the constraint of workdays and include weekends, where ISO models consistently underperform. Nonetheless, we show that simple ISO baselines and short-term Time Series methods, which depend only on recent historical data, achieve superior prediction accuracy. This result suggests that large amounts of historical training data are not required; rather, they should be avoided.
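The short-term Time Series baseline described above can be sketched as follows (illustrative data and a simple moving-average predictor, assumed here rather than taken from the study): the next 15-min reading is forecast from only the last few readings, and accuracy is scored with a percentage error.

```python
# Predict the next 15-min interval from the k most recent intervals only,
# in the spirit of the "recent historical data" baselines discussed above.

def moving_average_forecast(history, k=4):
    """Forecast the next interval as the mean of the last k intervals."""
    recent = history[-k:]
    return sum(recent) / len(recent)

def mape(actual, predicted):
    """Absolute percentage error for a single prediction."""
    return abs(actual - predicted) / actual * 100.0

# Hypothetical kWh readings at 15-min granularity; the last one is a spike.
consumption = [5.0, 5.2, 5.1, 4.9, 5.0, 6.8]
pred = moving_average_forecast(consumption[:-1])   # uses the last 4 readings
err = mape(consumption[-1], pred)
```

Only four recent points feed the forecast, which mirrors the finding that long historical training windows are unnecessary for very short-term prediction; sudden spikes (the 6.8 reading) remain the hard case for any such baseline.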
Conference Paper
Curtailment prediction and efficient demand response (DR) strategy selection challenge the effectiveness of smart grid applications. Here we present solutions to both challenges, taking a bottom-up approach to demonstrate that curtailment estimated at the equipment level, based on the equipment's mechanical properties and models, can be used to efficiently estimate curtailment and to optimize human comfort during a DR event across a subset of energy consumers. We focus on a controlled microgrid environment on the University of Southern California campus and address two HVAC-oriented DR strategies: variable frequency drive and global temperature reset. Strong correlation between equipment- and building-level consumption was found in less than half the cases, showing that global consumption can also be used successfully in those scenarios. Several fast heuristics aiming to optimize human comfort during equipment-level DR proved successful, and showed that the best method depends on the DR strategy. Finally, relying on equipment specifications and consumption models for our analyses circumvents the common challenges associated with statistical and machine-learning approaches that rely on large amounts of historical data.
Conference Paper
This paper proposes a Stackelberg game approach to demand response (DR) scheduling under load uncertainty, based on real-time pricing (RTP) in a residential setting. We formulate the optimization problems for the service provider, which acts as the leader, and for the users, who are the multiple followers. We derive the Stackelberg Equilibrium (SE), consisting of the optimal real-time electricity price and each user's optimal power consumption. Simulation results show that the proposed DR scheduling can not only control the total power consumption but is also beneficial to the service provider's revenue. Moreover, load uncertainty increases the service provider's revenue and decreases each user's payoff.
Conference Paper
In this paper, residential load scheduling, a fundamental problem in the smart grid, is studied in a comprehensive way. The main contributions are twofold. First, three indices, i.e., the power consumption expense, the robustness of the schedule subject to uncertain electricity prices, and customer satisfaction, are taken into full consideration. We propose to optimize the three indices simultaneously via convex optimization. Second, in order to fully characterize the operating states of appliances, both binary and continuous variables are used, which results in a hybrid optimization problem. A relaxation technique is utilized to tackle this hybrid problem. The performance of the proposed approach is illustrated by simulations. Both the peak-to-average ratio and the variation of the power load are reduced.
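The peak-to-average ratio (PAR) that this and the earlier abstracts use as a quality metric is simply the maximum load divided by the mean load. A short illustrative computation (toy load profiles, not data from either paper):

```python
# Peak-to-average ratio: 1.0 means a perfectly flat profile; larger
# values mean sharper peaks that scheduling tries to smooth out.

def par(load):
    """Peak-to-average ratio of a per-slot load profile."""
    return max(load) / (sum(load) / len(load))

before = [2.0, 2.0, 8.0, 4.0]   # kW per slot, with an evening peak
after = [4.0, 4.0, 4.0, 4.0]    # same total energy, shifted across slots

par_before = par(before)        # 8 kW peak over a 4 kW average
par_after = par(after)          # flat profile
```

Both profiles carry the same 16 kWh of energy; scheduling only moves it across slots, and the PAR drops from 2.0 to the ideal 1.0.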
Conference Paper