Adaptive Pricing for Optimal Coordination in Networked Energy
Systems with Nonsmooth Cost Functions
Jiayi Li1, Jiale Wei2, Matthew Motoki1, Yan Jiang2, and Baosen Zhang1
Abstract— Incentive-based coordination mechanisms for dis-
tributed energy consumption have shown promise in aligning
individual user objectives with social welfare, especially under
privacy constraints. Our prior work proposed a two-timescale
adaptive pricing framework, where users respond to prices by
minimizing their local cost, and the system operator iteratively
updates the prices based on aggregate user responses. A key
assumption was that the system cost depends smoothly on
the aggregate of the user demands. In this paper, we relax this
assumption by considering the more realistic model where
the cost is determined by solving a DCOPF problem with
constraints. We present a generalization of the pricing update
rule that leverages the generalized gradients of the system cost
function, which may be nonsmooth due to the structure of
DCOPF. We prove that the resulting dynamic system converges
to a unique equilibrium, which solves the social welfare opti-
mization problem. Our theoretical results provide guarantees on
convergence and stability using tools from nonsmooth analysis
and Lyapunov theory. Numerical simulations on networked
energy systems illustrate the effectiveness and robustness of
the proposed scheme.
I. INTRODUCTION
Modern energy systems are undergoing a significant trans-
formation, marked by the increasing prevalence of distributed
energy resources (DERs), responsive loads, and the emer-
gence of more autonomous devices. These developments
have created opportunities for customers to actively par-
ticipate in system operations. However, unlike dispatchable
resources, customers often cannot be directly controlled
by an operator¹ and must be coordinated through some form of incentives [3]. However, the system and its customers often have competing objectives: system operators strive to achieve global goals such as efficiency, reliability, fairness, and stability of the network, while individual users optimize private costs and preferences that are often unknown or unobservable. In this paper, we study how to achieve alignment between the system objective and the user objectives while keeping most of the information about the users private.
1J. Li, M. Motoki and B. Zhang are with the Department of Electrical and Computer Engineering, University of Washington, Seattle, WA. They are partially supported by NSF grant ECCS-2153937 and the Washington Clean Energy Institute. {ljy9712,mmotoki,zhanbao}@uw.edu
2J. Wei and Y. Jiang are with the School of Science and Engineering,
The Chinese University of Hong Kong, Shenzhen, GD 518172, CHN (email:
jialewei@link.cuhk.edu.cn;yjiang@cuhk.edu.cn).
Y. Jiang is partially supported by the CUHKSZ University Development Fund. Part of this research was performed while the author was visiting the Institute for Mathematical and Statistical Innovation at the University of Chicago, which is supported by NSF grant DMS-1929348.
¹Direct load control exists and has been implemented, but such programs are often constrained by the number of times they can be called and their duration [1], [2]; we do not explore this class of resources in this paper.
Incentive-based coordination mechanisms have received
extensive attention and are one of the main features of power
systems with communication capabilities. In the context
of demand response in electricity markets, incentives can
take many different forms, ranging from alert/text-based
signals [4] to pricing [5]. In this paper, we focus on price-
based incentives: a system operator broadcasts prices, users
respond by adjusting their consumption to minimize their
individual costs, the operator adjusts the prices based on the
user responses, etc. Ideally, this iterative interaction should
converge to an optimal solution that balances user cost and
system performance. The major obstacle is that the operator
typically lacks access to users’ cost functions, either due
to privacy concerns or because users themselves rely on
complex or black-box control strategies (e.g., reinforcement
learning) [6]–[8]. This limits the effectiveness of many
pricing schemes and makes theoretical analysis difficult.
Our previous work [9] proposed a two-timescale adaptive pricing framework, adapted from a dynamic incentive design [10], in which the incentive evolves with the actions of the users. In this
framework, users act as price takers, optimizing their local
behavior in response to a broadcast price signal, while the
operator iteratively updates prices based on observed aggre-
gate consumption. Its iterative update circumvents the need
for user-specific knowledge. The main result showed that
under mild conditions–such as monotonicity of user response
with respect to price–this adaptive scheme converges to the
solution of a global social welfare optimization problem.
However, [9] made a key simplifying assumption: that prices and the operator's objective are functions of the aggregate demand alone (hence the prices are uniform across the users).
Of course, in real-world power systems, electricity must
be delivered over a physical network, where supply and
demand must balance at each node, and transmission line
capacities impose additional constraints. On top of these
constraints, the operator solves an optimization problem, in
this paper modeled as a DCOPF problem, that determines
the best way to satisfy the demands. This introduces new
layers of complexity, since the cost depends nonlinearly
and nonsmoothly on the demand, and the prices can exhibit
discontinuities.
The nonsmoothness of the price arises quite naturally. In
DCOPF problems, the feasible regions are polytopic, and
when the generator costs are linear, the optimal solutions
occur at the vertices of the feasible region [11]. Therefore,
a small change in load can change the set of binding
constraints and in turn cause discontinuous jumps in the
prices [12]. The algorithms in [9] and [10] use prices to infer
gradient information about the system cost, but when the cost
in DCOPF is nonsmooth and the prices are discontinuous,
gradients are no longer well-defined.
This paper extends the adaptive pricing framework by embedding DCOPF constraints directly into the system operator's objective and designing pricing updates based on generalized gradients of the (possibly nonsmooth) cost, thereby accounting for the non-differentiability induced by the network constraints. Our main contributions are:
1) We design a vectorized price update rule based on the
generalized gradient of the nonsmooth system cost in-
duced by DCOPF, enabling implementation in realistic
grid models.
2) We prove that the proposed iterative mechanism con-
verges to a unique equilibrium that aligns user behavior
with the social welfare solution. The proof handles
both linear and quadratic cost structures, using tools
from convex analysis and Lyapunov stability theory for
nonsmooth systems.
This work offers a scalable and theoretically grounded ap-
proach to aligning local and global objectives in networked
energy systems, opening the door to practical decentral-
ized control under realistic grid constraints. The proposed mechanism also respects user privacy: the operator requires only demand observations, and users do not need to disclose their cost functions or internal constraints. We also demonstrate through simulations
on networked scenarios that the mechanism effectively in-
duces socially optimal behavior while maintaining system
feasibility under DCOPF.
II. PROBLEM FORMULATION
A. Planner’s Optimization Problem
We consider a supply-demand balancing electricity market with n users indexed by i ∈ N := {1, . . . , n}. The power demand of user i is denoted by x_i ∈ R. For a given demand profile x := (x_i, i ∈ N) ∈ R^n, i.e., the column vector obtained by concatenating the demands of all users, the disutility (or cost) of the power consumption of user i is given by f_i(x_i), while the system cost of serving the demand profile is given by J(x).
We now discuss them separately:
Assumption 1 (Cost assumption). The following assump-
tions on cost functions are made throughout this manuscript:
• Each user disutility function f_i(x_i) is strictly convex and twice continuously differentiable;
• The system cost function J(x) is the optimal value of a parametric program determined by the DCOPF problem with linear generation costs:

    J(x) := min_ξ  c^T ξ                                        (1)
            s.t.  linear constraints on ξ depending on x,

where c := (c_i, i ∈ G) ∈ R^|G| denotes the vector of generation cost coefficients and ξ := (ξ_i, i ∈ G) ∈ R^|G| denotes the power generation from the set of generators G.
A nice feature of the optimal cost J(x) is that it is a convex function of the user demand profile x [13]. However, although J(x) is continuous, it is not differentiable everywhere.
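For concreteness, one common way to instantiate the linear constraints in (1) is through a DC power flow model with line and generation limits. The sketch below is only an illustration (the symbols G, B, F, f̄, ξ̄ are introduced here for this sketch only), and the analysis in this paper does not depend on this particular form.

```latex
% One possible instantiation of (1): DCOPF with an explicit DC power flow.
% G maps generators to buses, \theta are voltage angles, B is the bus
% susceptance matrix, F maps angles to line flows, and \bar{f}, \bar{\xi}
% are line and generator limits (all introduced only for this sketch).
\begin{aligned}
J(x) \;=\; \min_{\xi,\,\theta}\;\; & c^{\mathsf{T}} \xi \\
\text{s.t.}\;\; & G\xi - x = B\theta          && \text{(nodal power balance)} \\
                & -\bar{f} \le F\theta \le \bar{f} && \text{(line flow limits)} \\
                & 0 \le \xi \le \bar{\xi}     && \text{(generation limits)}
\end{aligned}
```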
Then the system operator is interested in solving the
following global social welfare problem:
    min_{x ∈ R^n}  C(x) := Σ_{i∈N} f_i(x_i) + J(x),             (2)

which minimizes the sum of the total disutility of all users and the system cost of serving them.
Remark 1 (Linear generation cost assumption). Note that
the linear cost in (1) is in some sense the most difficult cost function to deal with, at least in our setting. If the cost is a strongly convex function, for example, a quadratic cost, then J becomes differentiable everywhere. All of the results in the
paper still hold since in that case the generalized gradient is
the (standard) gradient and all sets are singletons. Therefore,
we focus on linear cost functions in this paper.
As discussed before, we adopt the standard assumption
that each disutility function f_i(x_i) is strictly convex and twice continuously differentiable [14], [15], while the system cost function J(x) is convex and locally Lipschitz². Hence, it is easy to see that the entire objective function is strictly convex and locally Lipschitz, which implies the existence of a unique global minimizer x⋆ to problem (2) [17, Proposition 3.1.1]. By [18, Theorem 8.2], x⋆ is such a minimizer if and only if

    0 ∈ ∂( Σ_{i∈N} f_i(x⋆_i) + J(x⋆) ) = (∇f_i(x⋆_i), i ∈ N) + ∂J(x⋆),    (3)
where the equality is due to the sum rule of the generalized gradient for convex functions [19, Chapter 2.4]. The generalized gradient is a counterpart of the gradient for nonsmooth functions, often known as the subdifferential in the optimization community. As mentioned in Assumption 1, J(x) is continuous but not differentiable everywhere, which forces us to borrow the generalized gradient concept.
Definition 1 (Generalized gradient [19]). If g : R^d → R is a locally Lipschitz continuous function, then its generalized gradient ∂g : R^d → B(R^d) at z ∈ R^d is defined by

    ∂g(z) := co { lim_{k→∞} ∇g(z_k) : z_k → z, z_k ∉ Ω_g ∪ S },

where co denotes the convex hull, Ω_g ⊂ R^d denotes the set of points where g fails to be differentiable, and S ⊂ R^d is a set of measure zero that can be chosen arbitrarily to simplify the computation.
²Every convex function is locally Lipschitz [16]. We list the locally Lipschitz property explicitly for emphasis.
Remark 2 (Relation to gradient). Unlike a gradient which
gives a single vector, a generalized gradient is a set-valued
map. The generalized gradient is the generalization of the
gradient in the sense that, if g is differentiable at z, then
∂g(z) = {∇g(z)}.
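A standard one-dimensional example may help fix ideas: the absolute value function is convex and locally Lipschitz but not differentiable at the origin, and its generalized gradient is

```latex
% Generalized gradient of g(z) = |z| on R:
\partial g(z) =
\begin{cases}
\{-1\},   & z < 0,\\
[-1,\,1], & z = 0,\\
\{+1\},   & z > 0,
\end{cases}
```

so the map is single-valued wherever g is differentiable and fills in the convex hull of the neighboring gradient limits at the kink, exactly as in Definition 1.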
However, in practice, the planner's optimization problem in (2) is not directly implementable because, due to privacy concerns, the exact disutility functions of the users are unknown to the operator. This poses challenges for the system operator in realizing economic dispatch by solving (2) directly. One important way for the system operator to address this issue is to iteratively update the power price p_i ∈ R for each user based on how users adjust their desired power. By doing so, the system operator hopes to encourage users to align their individual goals of cost minimization with the goal of problem (2). The design of such an adaptive price update is the core of this manuscript and will be discussed later.
B. User’s Optimization Problem
All users are assumed to be rational price takers. More
precisely, given the power price p_i, each user i adjusts its power consumption by solving the following optimization problem:

    min_{x_i}  f_i(x_i) + p_i^T x_i,                             (4)

which minimizes the total cost of user i induced by disutility and payment for power consumption. Since (4) is an unconstrained convex optimization problem, the necessary and sufficient condition for x*_i to be a minimizer is [20, Chapter 4.2.3]

    ∇f_i(x*_i) + p_i = 0,                                        (5)

which yields a unique global solution x*_i by the strict convexity of f_i(x_i) [17, Proposition 3.1.1]. Basically, as the system operator updates its price signal p_i, user i adjusts its power demand x*_i accordingly to satisfy (5) in a unique way. To put it another way, for any given price p_i, the demand x*_i is unique. Hence, x*_i is clearly a function [21, Definition 2.1] of the current price p_i and can be expressed as

    x*_i(p_i) := arg min_{x_i}  f_i(x_i) + p_i^T x_i.
An important feature of this function x*_i(p_i) is that it is a continuously differentiable and strictly decreasing function, which is highlighted by the following lemma.

Lemma 1 (Bijective demand update). Under Assumption 1, the demand update x*_i(p_i) is a bijection given by a continuously differentiable and strictly decreasing function

    x*_i(p_i) = ∇⁻¹f_i(−p_i),                                    (6)

which naturally has the properties that ∇x*_i(p_i) < 0 and, ∀ p̃_i, p̂_i ∈ R, if p̃_i ≠ p̂_i, then x*_i(p̃_i) ≠ x*_i(p̂_i).
Proof. First, by [22, Theorem 2.14], the strict convexity of f_i(x_i) ensures that its gradient ∇f_i(x_i) is a strictly increasing function. Then, this strict monotonicity implies that ∇f_i(x_i) is a bijection, which further implies that ∇f_i(x_i) has a unique inverse function, written as ∇⁻¹f_i, that is also a bijection. Now, we note that, as an optimal solution, x*_i(p_i) must satisfy (5), i.e.,

    ∇f_i(x*_i(p_i)) + p_i = 0.                                   (7)

Since ∇⁻¹f_i is well-defined, we can represent x*_i(p_i) in (7) as (6), from which it is easy to see that x*_i(p_i) is a bijection since ∇⁻¹f_i is a bijection.
Moreover, an important property of any bijective function is that it is one-to-one, which means that every element in the codomain is mapped to by at most one element in the domain. Thus, ∀ p̃_i, p̂_i ∈ R, if x*_i(p̃_i) = x*_i(p̂_i), then p̃_i = p̂_i, which is logically equivalent to its contrapositive, i.e., ∀ p̃_i, p̂_i ∈ R, if p̃_i ≠ p̂_i, then x*_i(p̃_i) ≠ x*_i(p̂_i).
Finally, we would like to show that x*_i(p_i) is a continuously differentiable function. By Assumption 1, f_i(x_i) is twice continuously differentiable, which implies that ∇f_i(x_i) is continuously differentiable. That is, ∇²f_i(x_i) exists everywhere and is continuous. As mentioned at the beginning of the proof, ∇f_i(x_i) is strictly increasing, which implies that ∇²f_i(x_i) > 0 everywhere. By the inverse function theorem, ∇⁻¹f_i is continuously differentiable and its derivative at (−p_i) is given by 1/∇²f_i(∇⁻¹f_i(−p_i)) = 1/∇²f_i(x*_i(p_i)) > 0 since ∇²f_i(x_i) > 0 everywhere. Thus, ∇x*_i(p_i) = −1/∇²f_i(∇⁻¹f_i(−p_i)) = −1/∇²f_i(x*_i(p_i)) < 0, which is clearly continuous since ∇²f_i(x_i) is continuous. This concludes the proof that x*_i(p_i) is continuously differentiable and strictly decreasing.
Lemma 1 shows that the demand update x*_i(p_i) is a bijective function which naturally enjoys a nice property: ∀ p̃_i, p̂_i ∈ R, if p̃_i ≠ p̂_i, then x*_i(p̃_i) ≠ x*_i(p̂_i). That is, it is impossible for distinct price signals to induce the same power demand. As the analysis unfolds later, this "uniqueness" plays a role in the convergence of the pricing mechanism that we will propose.
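As a concrete illustration of Lemma 1, consider the quadratic disutility f_i(x_i) = (x_i − x̄_i)² that will also be used in the numerical experiments of Section V; the demand update (6) then has a closed form:

```latex
% Demand update of Lemma 1 for f_i(x_i) = (x_i - \bar{x}_i)^2:
\nabla f_i(x_i) = 2(x_i - \bar{x}_i)
\;\Longrightarrow\;
x_i^{*}(p_i) = \nabla^{-1} f_i(-p_i) = \bar{x}_i - \frac{p_i}{2},
\qquad
\nabla x_i^{*}(p_i) = -\frac{1}{2} < 0,
```

which is indeed continuously differentiable, strictly decreasing, and bijective on R.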
Therefore, our goal is to design a suitable update for the price profile p := (p_i, i ∈ N) ∈ R^n that leverages the demand update x*(p) := (x*_i(p_i), i ∈ N) ∈ R^n induced by each user's optimization problem (4) in every iteration, so as to gradually steer the demand profile x of the users toward the minimizer of (2). This incentive pricing mechanism allows the system operator to achieve the desired solution to the planner's optimization problem (2) without solving it directly.
III. ADAPTIVE PRICE UPDATE UNDER NONSMOOTHNESS
For a similar but simpler setting, our recent work [9] proposes price dynamics that utilize the gradient of the system cost function J(·) to incentivize users to adjust their power consumption towards a point where the planner's problem and the users' problems are simultaneously solved. However, the underlying assumption that J(·) is smooth does not hold in our case due to the particular choice of J(·) as (1), which makes J(·) convex and locally Lipschitz but not differentiable everywhere. Thus, we propose an adaptive price update leveraging the generalized gradient in Definition 1 as follows:

    ṗ ∈ ∂J(x*(p)) − p.                                           (8)

This is well-defined since the fact that J(x) is a locally Lipschitz continuous function ensures that J(x) has a nonempty compact set as its generalized gradient at any x ∈ R^n [23, Proposition 6].
Based on (8), we now illustrate the incentive pricing mechanism in more detail. As shown in Fig. 1, we consider a two-timescale design of the incentive pricing mechanism, where individual users solve (4) for x*_i(p_i) much faster than the system operator updates the price p via (8). This timescale separation allows users to treat the price signal p as static when solving for x*(p). Thus, following any given price p provided by the system operator, users adjust their power consumption towards x*(p) almost immediately by solving (5), and then the system operator updates the price p according to (8) in response to the current demand profile x*(p). It should be intuitively clear that (8) provides users with incentives to align their own interests with social welfare, given that adjustments to p are intended to reduce the difference between the marginal cost of the individual users quantified by p and the marginal cost of the system operator characterized by ∂J(x*(p)).
Fig. 1: Two-timescale design of incentive pricing mechanism.
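To make the two-timescale interaction concrete, the following minimal sketch discretizes the price dynamics (8) with a forward-Euler step. The quadratic disutility and the subgradient oracle `subgrad_J` are illustrative assumptions: `subgrad_J` stands for any routine that returns an element of ∂J(x*(p)), for instance obtained from the dual solution of the DCOPF problem (1).

```python
import numpy as np

def best_response(p, x_bar):
    """User problem (4)-(5) for the illustrative disutility
    f_i(x_i) = (x_i - x_bar_i)^2: solving grad f_i(x_i) + p_i = 0
    gives x_i^*(p_i) = x_bar_i - p_i / 2 (cf. Lemma 1)."""
    return x_bar - p / 2.0

def adaptive_pricing(subgrad_J, x_bar, p0, step=0.05, iters=500):
    """Forward-Euler discretization of the price update (8):
    p <- p + step * (v - p), where v is an element of the generalized
    gradient of J evaluated at the users' best response x*(p)."""
    p = np.asarray(p0, dtype=float).copy()
    for _ in range(iters):
        x = best_response(p, x_bar)   # fast timescale: users react to p
        v = subgrad_J(x)              # operator queries a subgradient of J
        p += step * (v - p)           # slow timescale: discretized (8)
    return p, best_response(p, x_bar)
```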
With this in mind, as the system operator iteratively updates the price p, the nonsmooth dynamical system composed of (5) and (8) should ideally settle down at a point that achieves the optimal solution to the planner's optimization problem (2). That is, at the equilibrium price p⋆ := (p⋆_i, i ∈ N) ∈ R^n, we would like x*(p⋆) to satisfy (3), which is captured by the following theorem.
Theorem 1 (Unique equilibrium with aligned incentives). Under Assumption 1, the demand profile x*(p⋆) occurring at the unique equilibrium price p⋆ of the dynamical system composed of (5) and (8) is the unique global minimizer of the planner's optimization problem (2), i.e.,

    0 ∈ ∂J(x*(p⋆)) + (∇f_i(x*_i(p⋆_i)), i ∈ N).                  (9)
Proof. The point p⋆ is an equilibrium of the price update (8) if and only if [19, Chapter 4.4]

    0 ∈ ∂J(x*(p⋆)) − p⋆.                                         (10)

Note that x*(p⋆) generated from the demand update satisfies (5), i.e.,

    ∇f_i(x*_i(p⋆_i)) + p⋆_i = 0,  ∀ i ∈ N,

from which we know

    p⋆ = −(∇f_i(x*_i(p⋆_i)), i ∈ N).                             (11)

Substituting (11) into (10) yields (9), which is exactly the optimality condition (3) for the planner's optimization problem (2). Thus, x*(p⋆) corresponding to the equilibrium price p⋆ is the unique global minimizer of (2).
Now, it remains to show that the equilibrium price p⋆ is unique. By way of contradiction, suppose that both p⋆ and p◦ satisfy (10), where p⋆ ≠ p◦. Then, by a similar argument as above, we know that both x*(p⋆) and x*(p◦) must satisfy the optimality condition (3). Thus, x*(p⋆) and x*(p◦) are both optimizers of problem (2). Moreover, by Lemma 1, our assumption p⋆ ≠ p◦ directly implies x*(p⋆) ≠ x*(p◦). We thus have two distinct optimizers of problem (2), which contradicts the fact that problem (2) has a unique minimizer. This concludes the proof of the uniqueness of the equilibrium price p⋆.
Theorem 1 verifies that the proposed incentive pricing mechanism admits a unique equilibrium price p⋆ whose corresponding demand profile x*(p⋆) is exactly the unique global minimizer of the planner's optimization problem (2). In other words, by adopting the proposed adaptive price update, the system operator can encourage users to align their individual benefits with the social welfare. Thus, the system objective of economic dispatch is achieved without requiring users to disclose private information.
IV. NONSMOOTH STABILITY ANALYSIS
Having characterized the equilibrium point and confirmed
the incentive alignment at that point, we are now ready to
investigate the stability of the nonsmooth dynamical system
composed of the demand update (5) and the price update
(8) by performing the natural extension of Lyapunov stabil-
ity analysis provided in [23, Theorem 1]. More precisely,
the stability under the incentive pricing mechanism can be
certified by finding a well-defined Lyapunov function that is
decreasing along the trajectories of the system comprising (5)
and (8). The main result of this section is presented below,
whose proof is enabled by a sequence of intermediate results
that we discuss next.
Theorem 2 (Asymptotic stability). Under Assumption 1, the dynamical system composed of (5) and (8) is strongly asymptotically stable at the unique equilibrium characterized by (9).
Of course, before showing the stability of the system, we need to show that the dynamical system has a solution. Here, we take the solution to be in the Caratheodory sense, which roughly says that there is a trajectory that satisfies (8) except on a set of times t of Lebesgue measure zero [23]. We do this by checking the conditions in [23, Proposition S2]. We use B(R^d) to denote the collection of all subsets of R^d and B(a, r) to denote the ball centered at a with radius r.
Since ∂J(x*(p)) involves the composition of functions, we develop the following lemma to facilitate our analysis.
Lemma 2 (Property preservation in composition). Assume that h : R^m → R^d is continuous at z ∈ R^m and u is the composite of h and g : R^d → B(R^d) defined by

    u(z) := (g ∘ h)(z) := g(h(z)).                               (12)

• If g : R^d → B(R^d) is upper semicontinuous at h(z) ∈ R^d, then u is upper semicontinuous at z.
• If g : R^d → B(R^d) is locally bounded at h(z) ∈ R^d, then u is locally bounded at z.
Proof. We study the two cases separately.
For upper semicontinuity of a set-valued map [23], we need to show that, ∀ ϵ > 0, ∃ δ > 0 such that

    u(z̃) ⊂ u(z) + B(0; ϵ),  ∀ z̃ ∈ B(z; δ).                      (13)

To this end, we first note that, if g is upper semicontinuous at h(z) ∈ R^d, then for any given ϵ > 0, ∃ η > 0 such that, whenever y ∈ B(h(z); η), it holds that [23]

    g(y) ⊂ g(h(z)) + B(0; ϵ).                                    (14)

Next, since h is continuous at z ∈ R^m, for any given η > 0, ∃ δ > 0 such that, whenever z̃ ∈ B(z; δ), it holds that [21, Definition 4.5]

    h(z̃) ∈ B(h(z); η).

Now, we combine the above two arguments by setting y = h(z̃) in (14), which yields

    g(h(z̃)) ⊂ g(h(z)) + B(0; ϵ),  ∀ z̃ ∈ B(z; δ).                (15)

Finally, from (12), we know g(h(z̃)) = u(z̃) and g(h(z)) = u(z), which combined with (15) gives exactly the claim (13) that we would like to prove.
For local boundedness of a set-valued map [23], we need to show that ∃ δ > 0 and some constant M > 0 such that

    ∥µ∥_2 ≤ M,  ∀ z̃ ∈ B(z; δ), µ ∈ u(z̃).                        (16)

With this aim, we first note that, if g is locally bounded at h(z) ∈ R^d, then ∃ η > 0 and some constant M > 0 such that [23]

    ∥µ∥_2 ≤ M,  ∀ y ∈ B(h(z); η), µ ∈ g(y).                      (17)

Again, since h is continuous at z ∈ R^m, for any given η > 0, ∃ δ > 0 such that [21, Definition 4.5]

    h(z̃) ∈ B(h(z); η),  ∀ z̃ ∈ B(z; δ).                          (18)

Now, we combine the above two arguments by setting y = h(z̃) in (17), which yields

    ∥µ∥_2 ≤ M,  ∀ z̃ ∈ B(z; δ), h(z̃) ∈ B(h(z); η), µ ∈ g(h(z̃)).

Here, the second condition can be removed since it is directly implied by the first condition due to (18), which yields

    ∥µ∥_2 ≤ M,  ∀ z̃ ∈ B(z; δ), µ ∈ g(h(z̃)).                     (19)

Finally, from (12), we know g(h(z̃)) = u(z̃), which substituted into (19) gives exactly the claim (16) that we would like to prove.
Lemma 2 paves the way for us to show the existence of a Caratheodory solution of our dynamical system by checking the conditions in [23, Proposition S2], which is the core of the next lemma.

Lemma 3 (Existence of a Caratheodory solution). Under Assumption 1, there exists a Caratheodory solution of the dynamical system composed of (5) and (8) for any initial condition p(0).

Proof. By [23, Proposition S2], it suffices to show that the set-valued map p ↦ [∂J(x*(p)) − p] takes nonempty compact convex values and is also upper semicontinuous as well as locally bounded³.
Clearly, it is the term ∂J(x*(p)) associated with the generalized gradient in the above mapping that makes our dynamics different from ordinary differential equations. Thus, we focus our analysis on the properties of ∂J(x*(p)), which is a composition of ∂J(x) and x*(p).
We start by investigating ∂J(x). Based on [23, Proposition 6], it follows directly from the local Lipschitz continuity of J(x) that ∂J(x) is a nonempty compact convex set at any x and that the set-valued map x ↦ ∂J(x) is upper semicontinuous and locally bounded at any x.
As for x*(p) := (x*_i(p_i), i ∈ N), it is a continuous vector-valued function since each component x*_i(p_i) is a continuous function by Lemma 1 [24, Theorem 2.4].
With the above information about ∂J(x) and x*(p), we are now ready to examine the properties of ∂J(x*(p)). First, given that ∂J(x) is a nonempty compact convex set at any x, it must be true that ∂J(x*(p)) is a nonempty compact convex set at any p as well, since x*(p) is a bijective function by Lemma 1. This can be understood by noting that, no matter what particular value the price signal p takes, ∂J(x) is evaluated at the corresponding point x = x*(p), which must produce a nonempty compact convex set ∂J(x*(p)). Second, the upper semicontinuity and local boundedness of ∂J(x*(p)) at any p follow from Lemma 2 by setting h = x*(p), which is continuous, and g = ∂J(x), which is upper semicontinuous and locally bounded.
Finally, the term (−p) has no influence on the above properties. First, it only translates the nonempty compact convex set ∂J(x*(p)) by (−p), which remains a nonempty compact convex set. Thus, p ↦ [∂J(x*(p)) − p] takes nonempty compact convex values. Second, (−p) can be considered a continuous function, which is inherently upper semicontinuous and locally bounded at p; the sum of two upper semicontinuous maps is still upper semicontinuous, and the sum of two locally bounded maps is still locally bounded. Thus, p ↦ [∂J(x*(p)) − p] is also upper semicontinuous and locally bounded. The result follows from [23, Proposition S2].
³There is no need to check measurability here since (8) does not explicitly depend on time t.
Having established through Lemma 3 the existence of a Caratheodory solution of the system from any initial point, we now examine the stability of the nonsmooth system by constructing a candidate Lyapunov function. We seek a function V(p) that is locally Lipschitz and regular and also satisfies V(p⋆) = 0 and V(p) > 0, ∀ p ≠ p⋆. The monotonicity of this Lyapunov candidate along the system trajectories is more complicated to establish than in a standard analysis since we need to study the Lie derivative in a nonsmooth setting.
We consider the following Lyapunov function candidate:

    V(p) := C(x*(p)) − C(x*(p⋆)),                                (20)

where C(·) denotes the objective function of the planner's optimization problem (2) and p⋆ corresponds to the unique equilibrium point of the system satisfying (9). The next result shows that this is a well-defined Lyapunov function candidate.
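Although the formal argument below relies on nonsmooth analysis, the decrease of this candidate can also be checked numerically along simulated trajectories. The minimal sketch below assumes hypothetical helpers `total_cost` (implementing C(·) from (2), e.g., by calling the DCOPF solver) and `best_response` (the users' demand update), together with a recorded sequence of price iterates.

```python
def lyapunov_values(total_cost, best_response, p_traj, p_star):
    """Evaluate V(p_k) = C(x*(p_k)) - C(x*(p_star)) from (20) along a
    recorded price trajectory and report whether the sequence is
    (numerically) nonincreasing."""
    v_star = total_cost(best_response(p_star))
    values = [total_cost(best_response(p)) - v_star for p in p_traj]
    nonincreasing = all(b <= a + 1e-9 for a, b in zip(values, values[1:]))
    return values, nonincreasing
```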
Lemma 4 (Well-defined Lyapunov function). Under Assumption 1, V(p) defined in (20) is a locally Lipschitz and regular function that satisfies V(p⋆) = 0 and V(p) > 0, ∀ p ≠ p⋆.
Proof. As discussed in Section II-A, the entire objective function C(x) of the planner's optimization problem (2) is locally Lipschitz and strictly convex, which together with the continuous differentiability of each x*_i(p_i) by Lemma 1 allows us to show that V(p) is locally Lipschitz and regular. We now illustrate this in detail.
We begin with the local Lipschitz continuity of C(x*(p)). Clearly, the continuous differentiability of each x*_i(p_i) implies that each x*_i(p_i) is locally Lipschitz [25, Chapter 17.2]. Now, since each component of x*(p) := (x*_i(p_i), i ∈ N) is locally Lipschitz and C(x) is locally Lipschitz as well, their composition C(x*(p)) is locally Lipschitz by the chain rule [23].
We next investigate the regularity of C(x*(p)). First of all, x*(p) := (x*_i(p_i), i ∈ N) is a continuously differentiable vector-valued function since each component x*_i(p_i) is a continuously differentiable function [26, Theorem 2.8]. Moreover, C(x) is locally Lipschitz and strictly convex, which further ensures that C(x) is regular [19, Proposition 2.4.3]. By [27, Theorem 8.18], as the composite of x*(p) and C(x), C(x*(p)) is regular. Hence, V(p) is locally Lipschitz and regular since the other term in V(p) in (20) is a constant.
Clearly, V(p⋆) = 0 by construction. To see why V(p) > 0, ∀ p ≠ p⋆, we first note that x*(p⋆) is the unique global minimizer of problem (2) by Theorem 1, which means that

    ∀ x ≠ x*(p⋆),  C(x) > C(x*(p⋆)).                             (21)

Moreover, we know from Lemma 1 that, ∀ p ≠ p⋆, it holds that x*(p) ≠ x*(p⋆). Therefore, setting x = x*(p) in (21), we get C(x*(p)) > C(x*(p⋆)), i.e., C(x*(p)) − C(x*(p⋆)) > 0, which is equivalent to V(p) > 0 by our construction of V(p) in (20). This confirms that V(p) > 0, ∀ p ≠ p⋆, as desired.
Next, we verify the monotonic decrease of V(p) along the system trajectories via the notion of the Lie derivative in the nonsmooth setting, which requires max L̃V(p) < 0, ∀ p ≠ p⋆, with L̃V(p) being the set-valued Lie derivative of V with respect to [∂J(x*(p)) − p] in (8) at p, defined by [23], [28]

    L̃V(p) := { a ∈ R : ∃ v ∈ ∂J(x*(p)) − p such that ζ^T v = a, ∀ ζ ∈ ∂V(p) }
            = ∩_{ζ ∈ ∂V(p)} ζ^T [∂J(x*(p)) − p].                 (22)
Lemma 5 (Negativity of the Lie derivative). Under Assumption 1, the set-valued Lie derivative of V(p) defined in (22) satisfies max L̃V(p) < 0, ∀ p ≠ p⋆.
Proof. Before delving into L̃V(p), we need to characterize ∂V(p), which can be computed as

    ∂V(p) = ∂(C ∘ x*)(p)
          = ∇x*(p) ∂C(x*(p))
          := { ∇x*(p) η : η ∈ ∂C(x*(p)) }
          = { diag(∇x*_i(p_i), i ∈ N) η : η ∈ ∂C(x*(p)) }.       (23)

In the second equality, the chain rule of the generalized gradient [27, Theorem 8.18] can be used with equality since, as discussed in the proof of Lemma 4, the conditions that C(x) is locally Lipschitz and regular and that x*(p) is continuously differentiable both hold.
To get a more explicit expression for (23), we now derive ∂C(x*(p)) as

    ∂C(x*(p)) = ∂( Σ_{i∈N} f_i(x*_i(p_i)) + J(x*(p)) )
              = (∇f_i(x*_i(p_i)), i ∈ N) + ∂J(x*(p))
              = ∂J(x*(p)) − p,                                   (24)

where the first equality is due to the definition of C(x) in (2), the second equality is due to the sum rule of the generalized gradient for convex functions [19, Chapter 2.4] as mentioned in Section II-A, and the last equality uses the relation (∇f_i(x*_i(p_i)), i ∈ N) = −p resulting from the optimality condition (7) of the user's problem, as discussed in the proof of Lemma 1.
Substituting (24) into (23) yields

    ∂V(p) = { diag(∇x*_i(p_i), i ∈ N) η : η ∈ ∂J(x*(p)) − p },   (25)
which will be applied to (22) for investigating the sign of max L̃V(p). The challenging part is that J(·) is continuous but not differentiable everywhere, which means that there exists a set of points p for which J(·) fails to be differentiable at the corresponding x*(p). For ease of notation, we denote this set of p by Ω_{J(x*(p))} ⊂ R^n. Note that we only care about points p different from the equilibrium p⋆ in this particular analysis, i.e., points p satisfying

    0 ∉ ∂J(x*(p)) − p                                            (26)

by (10) in the proof of Theorem 1. This allows us to consider two cases based on whether such a p is in Ω_{J(x*(p))} or not.
1) If p ≠ p⋆ and p ∉ Ω_{J(x*(p))}: The generalized gradient ∂J(x*(p)) reduces to a singleton, i.e.,

    ∂J(x*(p)) = {∇J(x*(p))}.                                     (27)

Thus, ∂V(p) in (25) reduces to a singleton as well, i.e.,

    ∂V(p) = { diag(∇x*_i(p_i), i ∈ N) F(p) }                     (28)

with

    F(p) = ∇J(x*(p)) − p

in this case, which together with (27) simplifies (22) to

    L̃V(p) = { a ∈ R : ∃ v ∈ {F(p)} such that ζ^T v = a,
               ∀ ζ ∈ { diag(∇x*_i(p_i), i ∈ N) F(p) } }
           = { (diag(∇x*_i(p_i), i ∈ N) F(p))^T F(p) }
           = { F(p)^T diag(∇x*_i(p_i), i ∈ N) F(p) }.            (29)

We claim that

    F(p)^T diag(∇x*_i(p_i), i ∈ N) F(p) < 0                      (30)

for two reasons. First, diag(∇x*_i(p_i), i ∈ N) ≺ 0 since each ∇x*_i(p_i) < 0 by Lemma 1. Second, since p ≠ p⋆, we know from (26) that F(p) ≠ 0. Then, (30) follows directly. Combining (29) and (30), we know

    max L̃V(p) = F(p)^T diag(∇x*_i(p_i), i ∈ N) F(p) < 0.        (31)
2) If p ≠ p⋆ and p ∈ Ω_{J(x*(p))}: Substituting (25) into (22) yields

    L̃V(p) = { a ∈ R : ∃ v ∈ ∂J(x*(p)) − p such that ζ^T v = a,
               ∀ ζ ∈ { diag(∇x*_i(p_i), i ∈ N) η : η ∈ ∂J(x*(p)) − p } }.    (32)

If L̃V(p) = ∅, then max L̃V(p) = −∞ by convention [23]. If L̃V(p) ≠ ∅, we claim that every a ∈ L̃V(p) in (32) satisfies a < 0. To see this, we first note that, for any such a, ∃ v ∈ ∂J(x*(p)) − p such that ζ^T v = a, ∀ ζ ∈ { diag(∇x*_i(p_i), i ∈ N) η : η ∈ ∂J(x*(p)) − p }. Clearly, we can pick η = v and then ζ = diag(∇x*_i(p_i), i ∈ N) v to solve for

    a = ζ^T v = (diag(∇x*_i(p_i), i ∈ N) v)^T v
              = v^T diag(∇x*_i(p_i), i ∈ N) v < 0.

Here, the inequality is due to an argument similar to that in the previous case: diag(∇x*_i(p_i), i ∈ N) ≺ 0 and v ≠ 0 from (26). We have now shown that, if L̃V(p) ≠ ∅, then every a ∈ L̃V(p) in (32) satisfies a < 0, which ensures that max L̃V(p) < 0. Therefore, whether L̃V(p) is empty or not, it must be true that max L̃V(p) < 0.
In sum, max L̃V(p) < 0, ∀ p ≠ p⋆.
According to [23, Theorem 1], with the aid of Lemmas 3, 4, and 5, we now have all the elements necessary to establish the strong asymptotic stability of the unique equilibrium, as summarized in Theorem 2.
V. NUMERICAL ILLUSTRATIONS
In this section, we present simulation results to show the
convergence of our proposed incentive pricing mechanism
to the desired optimal solution to the global social welfare
problem. The simulations are conducted on the IEEE 14-bus system, which contains 5 generators and 20 transmission lines. The generation cost of each generator is assumed to be linear as in (1), with cost coefficient c_i generated uniformly at random from [5, 20]. We assume that there is one user on each bus and that each user i has disutility f_i(x_i) = (x_i − x̄_i)², where x̄_i is a constant representing, for example, the targeted consumption of user i.
In order to achieve the optimal solution to the global social welfare problem (2) without solving it directly, we randomly initialize the price signal for individual users from [5, 15] and run our proposed incentive pricing mechanism, whose dynamics, together with the evolution of the user demand profile, are shown in Fig. 2. Both the price signals p and the user demands x converge very quickly. In particular, x successfully converges to the optimal solution of problem (2). Note that the price at each bus converges to one of only a few values, which is typical when a small number of lines are congested [29].
Fig. 2: Convergence of the price signals and user demands under our proposed mechanism in the IEEE 14-bus system with 14 users, where the dashed lines represent the optimal demand profile for the global social welfare problem.
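A minimal sketch of how the DCOPF oracle J(x) in (1) and a candidate subgradient can be implemented is given below. It uses a toy 3-bus network rather than the IEEE 14-bus system reported above; all network data, as well as the sign convention of the recovered dual prices, are illustrative assumptions to be checked against the chosen solver. The duals of the nodal balance constraints (locational marginal prices) provide a subgradient of the optimal DCOPF cost with respect to nodal demand, which is exactly the quantity needed by the price update (8).

```python
import cvxpy as cp
import numpy as np

# Toy 3-bus data (illustrative assumptions, not the IEEE 14-bus case).
c = np.array([8.0, 15.0])               # linear generation costs, cf. (1)
gen_bus = [0, 1]                        # buses hosting the two generators
g_max = np.array([2.0, 2.0])            # generation limits
lines = [(0, 1), (1, 2), (0, 2)]        # (from, to) bus pairs
b_line = np.array([10.0, 10.0, 10.0])   # line susceptances
f_max = np.array([0.6, 0.6, 0.6])       # line flow limits
n_bus = 3

def dcopf(x):
    """Return J(x) from (1) and a candidate element of its generalized
    gradient at x, recovered from the duals of the nodal balance
    constraints (locational marginal prices).  Dual sign conventions
    differ across solvers; a quick sanity check is that the prices
    should lie near the generation cost coefficients."""
    xi = cp.Variable(len(c), nonneg=True)    # generator outputs
    theta = cp.Variable(n_bus)               # DC voltage angles
    flow = cp.hstack([b_line[k] * (theta[i] - theta[j])
                      for k, (i, j) in enumerate(lines)])
    # power flowing out of each bus along its incident lines
    out = [sum(flow[k] * (1 if i == m else -1 if j == m else 0)
               for k, (i, j) in enumerate(lines)) for m in range(n_bus)]
    # local generation minus local demand at each bus
    inj = [sum(xi[g] for g, bus in enumerate(gen_bus) if bus == m) - x[m]
           for m in range(n_bus)]
    balance = [out[m] == inj[m] for m in range(n_bus)]
    cons = balance + [cp.abs(flow) <= f_max, xi <= g_max, theta[0] == 0]
    prob = cp.Problem(cp.Minimize(c @ xi), cons)
    prob.solve()
    lmp = np.array([float(bal.dual_value) for bal in balance])
    return prob.value, lmp

# Example query of the oracle for one demand profile.
J_val, prices = dcopf(np.array([0.5, 0.4, 0.8]))
```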
VI. CONCLUSIONS AND OUTLOOK
This paper extends adaptive pricing mechanisms for social
welfare optimization to network-constrained energy systems
with nonsmooth cost structures. By embedding DCOPF
constraints into the operator’s objective and introducing a
generalized gradient-based price update rule, we establish
a provably convergent and privacy-preserving incentive de-
sign framework. Our theoretical analysis demonstrates the
existence, uniqueness, and strong asymptotic stability of
the equilibrium. Simulation results validate the practical
effectiveness of the proposed mechanism in guiding user
behavior toward globally optimal outcomes under realistic
power network constraints.
Looking ahead, several important extensions remain open.
First, practical systems are subject to uncertainty from re-
newable generation and stochastic user demand. Extending
the current framework to handle uncertainty explicitly, either
through robust or stochastic formulations of DCOPF, is
a natural next step. Second, applying the method to AC
power flow models would enhance its applicability to real-
world grids, though this introduces significant nonconvexity.
Third, while our current pricing update relies on analytical
computation of generalized gradients, a promising direction
is to develop data-driven or learning-based approximations
for the operator’s update rule, especially in settings where
exact DCOPF gradients are computationally expensive or
unavailable in real time. Finally, although our current results
focus on the single time-step case, extending the convergence
and stability guarantees to multi-time-step scenarios is an
important direction for future work, especially for dynamic
and time-coupled systems.
REFERENCES
[1] N. Ruiz, I. Cobelo, and J. Oyarzabal, “A direct load control model
for virtual power plant management,” IEEE Transactions on Power
Systems, vol. 24, no. 2, pp. 959–966, 2009.
[2] C. Chen, J. Wang, and S. Kishore, “A distributed direct load control
approach for large-scale residential demand response,” IEEE Transac-
tions on Power Systems, vol. 29, no. 5, pp. 2219–2228, 2014.
[3] F. Rahimi and A. Ipakchi, “Demand response as a market resource
under the smart grid paradigm,” IEEE Transactions on smart grid,
vol. 1, no. 1, pp. 82–88, 2010.
[4] M. Peplinski and K. T. Sanders, “Residential electricity demand on
caiso flex alert days: a case study of voluntary emergency demand
response programs,” Environmental Research: Energy, vol. 1, no. 1,
2023.
[5] J. S. Vardakas, N. Zorba, and C. V. Verikoukis, “A survey on demand
response programs in smart grids: Pricing methods and optimization
algorithms,” IEEE Communications Surveys & Tutorials, vol. 17, no. 1,
pp. 152–178, 2014.
[6] P. Li, H. Wang, and B. Zhang, “A distributed online pricing strategy
for demand response programs,” IEEE Transactions on Smart Grid,
vol. 10, no. 1, pp. 350–360, 2017.
[7] K. Khezeli and E. Bitar, “Risk-sensitive learning and pricing for
demand response,” IEEE Transactions on Smart Grid, vol. 9, no. 6,
pp. 6000–6007, 2017.
[8] X. Kong, D. Kong, J. Yao, L. Bai, and J. Xiao, “Online pricing of
demand response based on long short-term memory and reinforcement
learning,” Applied energy, vol. 271, p. 114945, 2020.
[9] J. Li, M. Motoki, and B. Zhang, “Socially optimal energy usage
via adaptive pricing,” Electric Power Systems Research, vol. 235, p.
110640, Oct. 2024.
[10] C. Maheshwari, K. Kulkarni, M. Wu, and S. S. Sastry, “Inducing social
optimality in games via adaptive incentive design,” in Conference on
Decision and Control (CDC). IEEE, 2022, pp. 2864–2869.
[11] B. Stott, J. Jardim, and O. Alsaç, "Dc power flow revisited," IEEE
Transactions on Power Systems, vol. 24, no. 3, pp. 1290–1300, 2009.
[12] L. Zhang, Y. Chen, and B. Zhang, “A convex neural network solver for
dcopf with generalization guarantees,” IEEE Transactions on Control
of Network Systems, vol. 9, no. 2, pp. 719–730, 2021.
[13] D. Bertsimas and J. N. Tsitsiklis, Introduction to linear optimization.
Belmont, Mass. : Athena Scientific, 1997.
[14] E. Mallada, C. Zhao, and S. Low, “Optimal load-side control for
frequency regulation in smart grids,” IEEE Transactions on Automatic
Control, vol. 62, no. 12, pp. 6294–6309, Dec. 2017.
[15] F. Dörfler and S. Grammatico, "Gather-and-broadcast frequency con-
trol in power systems,” Automatica, vol. 79, pp. 296–305, May 2017.
[16] Mathematics Department at Wayne State University, “Every convex
function is locally lipschitz,” The American Mathematical Monthly,
vol. 79, no. 10, p. 1121–1124, Dec. 1972.
[17] D. P. Bertsekas, Convex Optimization Theory. Athena Scientific,
2009.
[18] S. J. Wright and B. Recht, Optimization for Data Analysis. Cambridge
University Press, 2022.
[19] F. H. Clarke, Y. S. Ledyaev, R. J. Stern, and P. R. Wolenski, Nonsmooth
Analysis and Control Theory. Springer-Verlag, 1998.
[20] S. Boyd and L. Vandenberghe, Convex optimization. Cambridge
university press, 2004.
[21] W. Rudin, Principles of Mathematical Analysis, 3rd ed. McGraw
Hill, 2013.
[22] R. T. Rockafellar and R. J. B. Wets, Variational Analysis. Springer,
1998.
[23] J. Cortes, “Discontinuous dynamical systems,” IEEE Control Systems
Magazine, vol. 28, no. 3, pp. 36–73, June 2008.
[24] P. D. Lax and M. S. Terrell, Multivariable Calculus with Applications,
1st ed. Springer, 2017.
[25] M. W. Hirsch, S. Smale, and R. L. Devaney, Differential Equations,
Dynamical Systems, and an Introduction to Chaos, 3rd ed. Elsevier,
2013.
[26] M. Spivak, Calculus on Manifolds: A Modern Approach to Classical
Theorems of Advanced Calculus. Addison-Wesley Publishing Com-
pany, 1965.
[27] C. Clason, Nonsmooth Analysis and Optimization. University of Graz,
2024.
[28] D. Shevitz and B. Paden, “Lyapunov stability theory of nonsmooth
systems,” IEEE Transactions on Automatic Control, vol. 39, no. 9, pp.
1910–1914, Sept. 1994.
[29] B. Zhang, R. Rajagopal, and D. Tse, “Network risk limiting dispatch:
Optimal control and price of uncertainty,” IEEE Transactions on
Automatic Control, vol. 59, no. 9, pp. 2442–2456, 2014.