Article

The effect of short term storage operation on resource adequacy

... The two most widespread indicators used for resource adequacy assessments of power systems across industry, academia, and policy-making are arguably the loss of load expectation (LOLE) and the expected energy not supplied (EENS) [1]-[4]. Electricity markets around the world are currently undergoing a transformation towards renewable electricity generation with conventional generators gradually being pushed out of the market. ...
... This is particularly crucial since the maximum demand curtailment is sometimes used as a proxy for the required additional dispatchable capacity in a power system. The described issue has recently been analyzed in [4]. Importantly, the above-mentioned particularities of FBMC also play a crucial role regarding the temporal dimension. ...
... 3) Expected Maximum Power Not Supplied (EMPNS): In [4], the EMPNS is defined as the expected maximum depth of load shedding per scarcity event. We use a slightly different definition, which is based on the expected maximum load loss in a given zone per year as shown in (4). ...
Preprint
Full-text available
In interconnected electricity systems dominated by intermittent renewable electricity generation, resource adequacy assessments need to consider cross-zonal and intertemporal dependencies. This calls for a decision-making mechanism that determines a fair and unique allocation of inevitable demand curtailment across space and time. Otherwise, traditional resource adequacy indicators like the loss of load expectation (LOLE) or the expected energy not supplied (EENS) become arbitrary at the national level. Against this background, we develop a conceptual framework for evaluating the spatial and temporal fairness of demand curtailment allocation. We then introduce two model formulations – a linearized approach and a post-processing approach – that allow a demand curtailment allocation mechanism to be implemented in any dispatch optimization model. Simulation results from a Europe-wide case study show that, at the overall system level, both approaches substantially increase the spatial fairness of the demand curtailment allocation while only insignificantly increasing the total EENS across all zones. However, only the linearized approach also achieves a fairer temporal allocation across time steps. The implications for an individual zone, e.g. Germany, are stark: depending on the applied configuration, the LOLE varies between 12 and 116 h/a and the EENS between 94.9 and 293.2 GWh/a.
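As an illustration of how the indicators named in this abstract are typically computed, the sketch below derives LOLE, EENS, and the yearly-maximum load loss mentioned in the citing snippet from an hourly series of unserved energy for one zone. The array contents, function name, and one-hour resolution are assumptions made for this example, not details taken from the preprint.

```python
import numpy as np

def adequacy_indicators(unserved_mwh: np.ndarray) -> dict:
    """LOLE, EENS and yearly-maximum load loss for one simulated year of hourly data."""
    return {
        "LOLE_h": float(np.count_nonzero(unserved_mwh > 0.0)),   # hours with any shedding
        "EENS_MWh": float(unserved_mwh.sum()),                   # total shed energy
        "max_loss_MWh": float(unserved_mwh.max()),               # largest hourly load loss
    }

# Example year with two scarcity events (3 h and 2 h long).
shed = np.zeros(8760)
shed[100:103] = [50.0, 120.0, 30.0]
shed[4000:4002] = [200.0, 80.0]
print(adequacy_indicators(shed))
# {'LOLE_h': 5.0, 'EENS_MWh': 480.0, 'max_loss_MWh': 200.0}
```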
Article
Full-text available
Existing indicators of electricity system adequacy need to be supplemented with economic performance indicators. As power systems are decarbonized, energy storage technologies are being developed and demand is becoming more flexible. Reliability standards need to reflect the price-elasticity of these sources of flexibility. During scarcity situations, this increased demand flexibility may prevent outages, but still lead to high electricity prices. If the average electricity price is well above the average cost of power supply, this can be an indication that the system is not adequate, even if the outage rate does not exceed the current reliability standards.
Article
Full-text available
Capacity credits, i.e., metrics that describe the contribution of different technologies to meeting the load during peak periods, are widely used in long-term energy-system optimization models to ensure a pre-defined level of firm capacity. In the same vein, such capacity credits may be used in capacity markets to reflect the availability of an asset during periods of peak load. For storage technologies, there appears to be a discrepancy between the capacity credit that correctly captures the capacity contribution to the capacity target and the capacity credit that correctly values the storage capacity. This is illustrated in a case study, which shows the differences in planning model outcomes when different capacity credit interpretations are used. Our results indicate that defining the capacity credit according to the contribution to the capacity target overvalues storage technologies, causing overinvestments. On the contrary, defining the capacity credit to reflect the value of the storage capacity triggers the correct amount of storage investments but underestimates the true peak reduction potential, which results in overinvestments in other firm capacity providers. In this regard, a novel iterative approach is introduced that leverages both capacity credit interpretations simultaneously to remove any bias in the solution.
Article
Full-text available
Research has found an extensive potential for utilizing energy storage within the power system sector to improve reliability. This study aims to provide a critical and systematic review of the reliability impacts of energy storage systems in this sector. The systematic literature review (SLR) is based on peer-reviewed papers published between 1996 and early 2018. Firstly, findings reveal that energy storage utilization in power systems is significant in improving system reliability and minimizing the costs of transmission upgrades. Secondly, the introduction of policies to shift from fossil fuels to renewable energy positively affects energy storage system development. Thirdly, North America was an early pioneer of power system reliability and energy storage system studies; however, Asia has recently taken over that role, with China being the main driver. Research gaps within this field are also identified. This review can serve as a basis for scholars in advancing the theoretical understanding of the reliability impacts of energy storage systems and in addressing the gaps within this field.
Article
Full-text available
Electrical energy storage (EES) is a promising flexibility source for prospective low-carbon energy systems. In recent years, many studies on EES capacity planning have been produced. However, these resulted in a very broad range of power and energy capacity requirements for storage, making it difficult for policymakers to identify clear storage planning recommendations. Therefore, we studied 17 recent storage expansion studies pertinent to the U.S., Europe, and Germany. We then systemized the storage requirement per variable renewable energy (VRE) share and generation technology. Our synthesis reveals that with increasing VRE shares, the EES power capacity increases linearly and the energy capacity exponentially. Further, analysis of the outliers shows that the EES energy requirements can be at least halved. It becomes clear that grids dominated by photovoltaic energy call for more EES, while large shares of wind rely more on transmission capacity. Taking the energy mix into account clarifies, to a large degree, the apparent conflict between the storage requirements of the existing studies. Finally, there may be a negative bias towards storage because transmission costs are frequently optimistic (neglecting execution delays and social opposition) and storage can cope with uncertainties, yet these issues are rarely acknowledged in the planning process.
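To make the reported scaling concrete, the following sketch fits the two trend forms named in the abstract, a linear fit for power capacity and an exponential fit for energy capacity, to a set of (VRE share, requirement) points. The data points are synthetic placeholders, not values extracted from the 17 reviewed studies.

```python
import numpy as np

vre_share  = np.array([0.2, 0.4, 0.6, 0.8, 1.0])         # VRE share of generation
power_gw   = np.array([5.0, 11.0, 19.0, 26.0, 33.0])      # storage power requirement (placeholder)
energy_gwh = np.array([8.0, 20.0, 60.0, 190.0, 600.0])    # storage energy requirement (placeholder)

# Linear fit for power capacity: P(s) = a*s + b
a, b = np.polyfit(vre_share, power_gw, 1)

# Exponential fit for energy capacity: E(s) = c*exp(k*s), fitted in log space
k, log_c = np.polyfit(vre_share, np.log(energy_gwh), 1)

print(f"power:  ~{a:.1f} GW per unit VRE share (+{b:.1f} GW offset)")
print(f"energy: ~{np.exp(log_c):.1f} GWh * exp({k:.2f} * share)")
```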
Article
Full-text available
This paper is concerned with assessing the contribution of grid-scale storage to generation capacity adequacy. Results are obtained for a utility-scale exemplar involving the Great Britain power system. All stores are assumed, for the purpose of capacity adequacy assessment, to be centrally controlled by the system operator, with the objective of minimising the Expected Energy Not Served over the peak demand season. The investigation is limited to stores that are sufficiently small such that discharge on one day does not restrict their ability to support adequacy on subsequent days. We argue that for such stores, the central control assumption does not imply loss of generality for the results. Since it may be the case that stores must take power export decisions without the benefit of complete information about the state of the system, a methodology is presented for calculating bounds on the value of such information for supporting generation adequacy. A greedy strategy is proven to be optimal for the case where decisions can be made immediately after a generation shortfall event has occurred, regardless of the decision maker’s risk aversion. The adequacy contribution of multiple stores is examined, and algorithms for coordinating their responses are presented.
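The sketch below illustrates the kind of greedy rule discussed in this abstract: during a shortfall the store discharges as hard as its power rating and remaining energy allow, and it recharges only from surplus hours. It is a simplified single-store illustration with invented parameter names, not the paper's implementation.

```python
import numpy as np

def greedy_dispatch(margin_mw: np.ndarray, p_max_mw: float, e_max_mwh: float) -> float:
    """margin_mw[t] = available generation minus demand in hour t (negative = shortfall).
    Returns total unserved energy (MWh) when the store discharges greedily and
    recharges only from surplus, within its power and energy limits."""
    soc = e_max_mwh                    # assume the store enters the peak season full
    unserved = 0.0
    for m in margin_mw:
        if m < 0.0:                    # shortfall: discharge as much as possible
            discharge = min(-m, p_max_mw, soc)
            soc -= discharge
            unserved += (-m) - discharge
        else:                          # surplus: recharge up to power/energy limits
            soc = min(e_max_mwh, soc + min(m, p_max_mw))
    return unserved

margins = np.array([50.0, -120.0, -60.0, 200.0, -300.0])
print(greedy_dispatch(margins, p_max_mw=100.0, e_max_mwh=150.0))   # -> 230.0
```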
Article
Full-text available
The use of electrical energy storage (EES) and demand response (DR) to support system capacity is attracting increasing attention. However, little work has been done to investigate the capability of EES/DR to displace generation while providing prescribed levels of system reliability. In this context, this paper extends the generation-oriented concept of capacity credit (CC) to EES/DR, with the aim of assessing their contribution to adequacy of supply. A comprehensive framework and relevant numerical algorithms are proposed for the evaluation of EES/DR CC, with different “traditional” generation-oriented CC metrics being extended and a new CC metric defined to formally quantify the capability of EES/DR to displace conventional generation for different applications (system expansion, reliability increase, etc.). In particular, specific technology-agnostic models have been developed to illustrate the implications of the energy capacity, power ratings, and efficiency of EES, as well as the payback characteristics and customer flexibility (which often also depend on the different forms of storage available to customers) of DR. Case studies are performed on the IEEE RTS to demonstrate how the different characteristics of EES/DR can affect their CC. The framework developed can thus support the important debates on the role of EES/DR in smart grid planning and market development.
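One of the "traditional" generation-oriented metrics that such frameworks extend is the equivalent firm capacity (EFC). The sketch below shows the standard bisection used to back out an EFC once an adequacy model is available; the callable interface and the toy numbers are assumptions for illustration, not the paper's algorithms. The same search works with LOLE or any other monotone adequacy metric in place of EENS.

```python
from typing import Callable

def efc(eens_with_firm: Callable[[float], float],
        eens_with_resource: float,
        hi_mw: float, tol_mw: float = 0.1) -> float:
    """Bisect for the firm capacity x such that eens_with_firm(x) matches
    eens_with_resource. eens_with_firm must be non-increasing in x."""
    lo, hi = 0.0, hi_mw
    while hi - lo > tol_mw:
        mid = 0.5 * (lo + hi)
        if eens_with_firm(mid) > eens_with_resource:
            lo = mid          # still worse than with the resource: need more firm MW
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy adequacy model: EENS falls linearly with added firm capacity.
print(efc(lambda x: max(0.0, 1000.0 - 4.0 * x), eens_with_resource=600.0, hi_mw=500.0))
# -> ~100 MW, i.e. the resource is "worth" about 100 MW of firm capacity
```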
Article
Full-text available
Critical peak pricing and peak time rebate programs offer benefits by increasing system reliability, and therefore, reducing capacity needs of the electric power system. These benefits, however, decrease substantially as the size of the programs grows relative to the system size. More flexible schemes for deployment of demand response can help address the decreasing returns to scale in capacity value, but more flexible demand response has decreasing returns to scale as well.
Article
We show how to value both variable generation and energy storage so that they can be integrated fairly and optimally into electricity capacity markets. We develop a theory based on balancing expected energy unserved against the costs of capacity procurement, in which the optimal procurement is that necessary to meet an appropriate reliability standard. For conventional generation the theory reduces to that already in common use. Further, the valuation of both variable generation and storage coincides with the traditional risk-based approach based on equivalent firm capacity. The determination of the equivalent firm capacity of storage requires particular care, both because of the flexibility with which storage added to an existing system may be scheduled and because, when any resource is added to an existing system, storage already within that system may be flexibly rescheduled. We illustrate the theory with an example based on the GB system.
Article
This paper studies the consistency between two contradictory policies in the electricity industry. On the one hand, electricity systems are increasingly interconnected. On the other hand, reliability standards, whose values were typically set when countries were hardly interconnected, are still enforced at the national level. We show that enforcing autarky reliability standards may still reach the welfare optimum in the presence of interconnections, but only under two conditions. First, installed generation capacities should be determined jointly, considering the whole power system. Second, reliability calculations should fully internalize the external adequacy benefits occurring in neighboring systems. We run a numerical application for a set of European countries and find that existing interconnections may lead to generation adequacy benefits of around one billion euros per year, by enabling an 18.9 GW decrease in generation capacity. In our case study, regional coordination is found to be more important than fully internalizing external reliability benefits in adequacy simulations.
Article
As more variable renewable energy (VRE) and energy storage (ES) facilities are installed, accurate quantification of their contributions to system adequacy becomes crucial. We propose a definition of capacity credit (CC) for valuing the adequacy contributions of these resources based on their marginal capability to reduce expected unserved energy. We show that such marginal credits can incentivize system-optimal investments in markets with installed capacity requirements and energy price caps. We simulated such markets using an LP-based capacity expansion planning model with convexified unit commitment (UC) constraints and ES. The impacts of UC and ES on capacity credits are investigated. Furthermore, we analyze the technology and system cost distortions resulting from implementing inaccurate CCs in the capacity market. The results show that ignoring UC constraints can overestimate the CCs of VRE and ES. Building ES increases the CCs of VRE resources with higher capacity factors and a negative correlation with load. Assigning the wrong credit to VRE can significantly distort resource mixes and system cost. Implementing the proposed CCs can, in theory, eliminate those distortions and achieve the same overall optimum as a theoretical market without energy price caps.
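A hedged sketch of the marginal capacity-credit definition described here: credit a resource with its marginal reduction of expected unserved energy, normalized by the marginal effect of firm capacity. The abstracted eue callable and the toy numbers are my own assumptions, not the paper's planning model.

```python
from typing import Callable

def marginal_cc(eue: Callable[[float, float], float],
                x_res_mw: float, x_firm_mw: float, d_mw: float = 1.0) -> float:
    """eue(res_mw, firm_mw): expected unserved energy of the system.
    Returns dEUE/d(resource MW) divided by dEUE/d(firm MW), both evaluated at
    the current operating point via one-sided finite differences."""
    d_res  = eue(x_res_mw, x_firm_mw) - eue(x_res_mw + d_mw, x_firm_mw)
    d_firm = eue(x_res_mw, x_firm_mw) - eue(x_res_mw, x_firm_mw + d_mw)
    return d_res / d_firm

# Toy model: each MW of the resource relieves unserved energy like 0.6 MW of firm capacity.
toy_eue = lambda s, f: 1000.0 * 0.995 ** (0.6 * s + f)
print(marginal_cc(toy_eue, x_res_mw=100.0, x_firm_mw=0.0))   # ~0.6
```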
Article
Energy storage systems (ESS) are being established as an important component of modern power systems along with renewable energy sources. These new resources contribute to clean energy generation by offsetting conventional fossil-fuel-based generators. However, maintaining a balance between the sustainability of these new resources and providing acceptable system reliability is extremely challenging. These resources adapt their operating strategies to different objectives based on ownership, regulatory requirements, and revenue opportunities. The benefits from an ESS to the owner and the system ultimately depend on its operating strategy. Non-utility-owned resources operate to maximize their profits from the electricity market. The impact of such market-oriented ESS operation on reliability needs to be monitored. In that regard, this paper considers market-driven ESS operation for operational adequacy planning. An analytical operational adequacy evaluation framework that incorporates the concepts of state enumeration, a time series model of wind, and energy storage systems, in conjunction with a dynamic system probability estimation approach, is used for that purpose. Furthermore, these scenarios are analyzed in terms of their environmental and economic performance.
Article
Providing peaking capacity could be a significant U.S. market for energy storage. Of particular focus are batteries with 4-h duration due to rules in several regions along with these batteries’ potential to achieve life-cycle cost parity with combustion turbines compared to longer-duration batteries. However, whether 4-h energy storage can provide peak capacity depends largely on the shape of electricity demand. Under historical grid conditions, beyond about 28 GW nationally the ability of 4-h batteries to provide peak capacity begins to fall. We find that the addition of renewable generation can significantly increase storage’s potential by changing the shape of net demand patterns; for example, beyond about 10% penetration of solar photovoltaics, the national practical potential for 4-h storage to provide peak capacity doubles. The impact of wind generation is less clear and likely requires more detailed study considering the exchange of wind power across multiple regions.
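The dependence on demand shape can be illustrated with a small calculation: a battery of power x and 4 h duration can lower the peak by x only if the daily energy above the new peak level fits within 4x. The function and load profile below are my own simplified illustration (no inter-day carryover, perfect foresight), not the paper's methodology. The narrow needle peak barely binds the 4-h limit, while the broad peak caps the usable power, mirroring the saturation effect described in the abstract.

```python
import numpy as np

def max_peak_reduction(load_mw: np.ndarray, duration_h: float, step_mw: float = 1.0) -> float:
    """Largest x such that shaving the load to (max - x) never needs more than
    duration_h * x MWh of discharge within any single day."""
    days = load_mw.reshape(-1, 24)
    x = 0.0
    while True:
        target = load_mw.max() - (x + step_mw)
        energy_above = np.clip(days - target, 0.0, None).sum(axis=1)   # MWh per day
        if (energy_above <= duration_h * (x + step_mw)).all():
            x += step_mw
        else:
            return x

# Synthetic two-day profile: a narrow evening peak vs. a broad afternoon peak.
base = 60.0 + 10.0 * np.sin(np.linspace(0, 2 * np.pi, 24, endpoint=False))
day1 = base.copy(); day1[18:20] += 40.0        # 2-h needle peak
day2 = base.copy(); day2[12:20] += 30.0        # 8-h broad peak
print(max_peak_reduction(np.concatenate([day1, day2]), duration_h=4.0))
```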
Article
The value of lost load (VOLL) is an essential parameter for power system reliability. It represents the cost of unserved energy during power interruptions. Various studies have estimated this parameter for different countries and more recently, for different interruption characteristics – such as interruption duration, time of interruption and interrupted consumer. However, it is common practice in system operation and the literature to use only one uniform VOLL. Our theoretical analysis shows that using more-detailed VOLL data leads to more cost-effective transmission reliability decisions. Using actual consumer- and time-differentiated VOLL data from Norway, Great Britain and the United States, numerical simulations of short-term power system reliability management indicate a potential operational cost decrease of up to 43% in a five-node network, and between 2% and 18% in a more realistic 118-node network – mainly because of lower preventive redispatch costs in response to lower expected interruption costs. However, changed reliability practices could lead to opposition, if some consumers are disproportionately interrupted and not adequately compensated. Although the first policy measures to collect more detailed and harmonized VOLL data have been taken, future policy should improve transmission-distribution coordination and enable the participation of all consumers in curtailment programs, through smart meters and smart appliances.
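A minimal numeric illustration of the uniform-versus-differentiated VOLL point: when the operator must decide whom to curtail, segment-specific values change both the decision and the expected interruption cost. The segments and euro figures below are placeholders, not the Norwegian, British, or US survey data.

```python
# Two consumer/time segments with very different interruption costs (placeholders).
voll_eur_per_mwh = {"residential_night": 1_500.0, "industrial_day": 12_000.0}
uniform_voll = 6_750.0          # a single average value, as often used in practice
need_mwh = 10.0                 # energy that has to be curtailed somewhere

# With differentiated VOLL the cheapest segment is curtailed first.
cheapest = min(voll_eur_per_mwh, key=voll_eur_per_mwh.get)
print(cheapest, need_mwh * voll_eur_per_mwh[cheapest])    # residential_night 15000.0

# A uniform VOLL prices the same curtailment at the average and gives the
# operator no reason to prefer one segment over the other.
print(need_mwh * uniform_voll)                             # 67500.0
```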
Article
This paper considers the optimal dispatch of energy-constrained heterogeneous storage units to maximise security of supply. A policy, requiring no knowledge of the future, is presented and shown to minimise unserved energy during supply-shortfall events, regardless of the supply and demand profiles. It is accompanied by a graphical means to rapidly determine unavoidable energy shortfalls, which can then be used to compare different device fleets. The policy is well-suited for use within the framework of system adequacy assessment; for this purpose, a discrete time optimal policy is conceived, in both analytic and algorithmic forms, such that these results can be applied to discrete time systems and simulation studies. This is exemplified via a generation adequacy study of the British system.
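As a rough illustration of foresight-free dispatch of a heterogeneous fleet, the sketch below serves each hour's shortfall by discharging devices in descending order of their remaining time-to-go (energy divided by power), which tends to keep every device usable for as long as possible. This is a simplified heuristic for intuition only and is not claimed to be the policy proven optimal in the paper.

```python
def serve_shortfall(shortfall_mw: float, fleet: list[dict]) -> float:
    """fleet: [{'p': power MW, 'e': stored MWh}, ...], mutated in place.
    Returns the unserved power (MW) in this one-hour step."""
    remaining = shortfall_mw
    for dev in sorted(fleet, key=lambda d: d["e"] / d["p"], reverse=True):
        out = min(dev["p"], dev["e"], remaining)   # limited by power, energy, and need
        dev["e"] -= out
        remaining -= out
        if remaining <= 0.0:
            break
    return max(remaining, 0.0)

fleet = [{"p": 50.0, "e": 100.0}, {"p": 20.0, "e": 200.0}]
print(serve_shortfall(60.0, fleet), fleet)   # 0.0, with both devices partially discharged
```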
Article
We present a simple method to calculate the marginal capacity credit of energy-limited resources with increasing penetration. Energy-limited resources are defined as any resource with limited hours of dispatch across a day, month, or year. This includes emission-limited resources, time-limited demand response, and diurnal energy storage such as batteries and pumped storage. This paper focuses on four-hour energy-limited resources with daily dispatch. The method modifies the well-established effective load carrying capability (ELCC) methodology by optimally allocating the limited capacity to the highest loss-of-load hours. Subsequent energy-limited resources are iteratively stacked upon the prior solution, creating a marginal capacity credit curve with increasing penetration of energy-limited resources. Two systems are tested using this method. The initial capacity credit depends on system characteristics such as load and renewable penetration. Regardless, the capacity credit for energy-limited resources declines steeply with penetration.
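The core allocation step can be sketched as follows: for each day, place the capacity of a 4-hour, daily-cycling resource in the four hours with the highest loss-of-load probability, after which the usual ELCC recalculation would be run. The LOLP values and function name are placeholders of mine, not data from the two tested systems.

```python
import numpy as np

def allocate_limited_capacity(lolp: np.ndarray, cap_mw: float, hours: int = 4) -> np.ndarray:
    """lolp: hourly loss-of-load probabilities covering whole days (len % 24 == 0).
    Returns an hourly MW schedule that credits cap_mw in the `hours` riskiest
    hours of every day."""
    schedule = np.zeros_like(lolp)
    for day in range(len(lolp) // 24):
        day_lolp = lolp[24 * day: 24 * (day + 1)]
        riskiest = 24 * day + np.argsort(day_lolp)[-hours:]   # riskiest hours of the day
        schedule[riskiest] = cap_mw
    return schedule

lolp = np.zeros(48)
lolp[17:21] = [0.02, 0.05, 0.04, 0.01]    # day 1: evening risk spike
lolp[36:42] = 0.03                         # day 2: broad risk block
print(allocate_limited_capacity(lolp, cap_mw=100.0).nonzero()[0])
```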
Article
An electricity generation system adequacy assessment aims to generate statistically significant adequacy indicators given projected developments in, among others, renewable and conventional generation, demand, demand response, and energy storage availability. Deterministic unit commitment (DUC) models with exogenous reserve requirements, as often used in today's adequacy studies to represent day-to-day power system operations, do not account for the contribution of operating reserves to the adequacy of the system. Hence, the adequacy metrics obtained from such an analysis represent a worst-case estimate and should be interpreted with care. In this paper, we propose to use a DUC model with a set of state-of-the-art probabilistic reserve constraints (DUC-PR). The performance of the DUC-PR model in the context of adequacy assessments is studied in a numerical case study. The Expected Energy Not Served (EENS) volume obtained with the DUC model is shown to be a poor estimate of the true EENS volume. In contrast, the DUC-PR methodology yields an accurate estimate of the EENS volume without significantly increasing the computational burden. Policy makers should encourage adopting novel operational power system models, such as the DUC-PR model, to accurately estimate the contribution of operating reserves to system adequacy.
Article
As the penetration of variable renewable energy in electricity markets grows, there is increasing need for capacity markets to account for the contribution of renewables to system adequacy. An important issue is the inconsistent industry definition of capacity credits for resources whose availability may be limited, such as renewable generation. Inaccurate credits can subsidize or penalize different resources, and consequently distort investment between renewables and non-renewables, and also among different types and locations of renewables. Using Electric Reliability Council of Texas (ERCOT) data, we use a market equilibrium model to quantify the resulting loss of efficiency due to capacity credits alone and in combination with renewable tax subsidies and portfolio standards. Layering inaccurate capacity credits with existing US federal tax subsidies decreases efficiency as much as 6.3% compared to optimal capacity crediting under those subsidies. Compensating producers based on their marginal contributions to system adequacy, considering how renewable penetration affects the timing of net load peaks, can yield an efficient capacity market design.
Conference Paper
With rising shares of fluctuating renewable energy generation, the role of electricity storage becomes increasingly important. However, European pumped storage plants (PSPs) are currently confronted with economic challenges, and their potential for smoothing the residual load curve is consequently not fully exploited. In this paper, the long-term effects of load-smoothing and of the typical price-based PSP operation are compared, with an emphasis on the development of wholesale electricity prices and generation adequacy. The study is conducted using an agent-based simulation model of the day-ahead electricity market that has been extended by a methodology to heuristically dispatch the PSPs in Germany. The results show that the contribution load-smoothing PSP operation can make to improving generation adequacy is rather small. Putting a strategic reserve in place reduces the differences between price-based and load-smoothing PSP operation even further. Moreover, load-smoothing PSP operation goes along with slightly increased wholesale electricity prices compared to price-based operation.
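For intuition, the following is a minimal load-smoothing heuristic of the kind compared in the paper: pump when the residual load is below its mean and generate when it is above, within power and reservoir limits. It is my simplification (single plant, no efficiency losses, the mean known in advance), not the dispatch routine of the agent-based model.

```python
import numpy as np

def smooth_residual_load(residual_mw: np.ndarray, p_mw: float, e_mwh: float) -> np.ndarray:
    """Return the residual load after a simple mean-tracking pump/generate rule."""
    target = residual_mw.mean()
    soc, out = 0.5 * e_mwh, np.zeros_like(residual_mw)
    for t, r in enumerate(residual_mw):
        if r > target:                                    # generate towards the mean
            out[t] = min(r - target, p_mw, soc)
            soc -= out[t]
        else:                                             # pump (negative output)
            out[t] = -min(target - r, p_mw, e_mwh - soc)
            soc += -out[t]
    return residual_mw - out                              # smoothed residual load

residual = np.array([40.0, 55.0, 90.0, 120.0, 70.0, 30.0])
print(smooth_residual_load(residual, p_mw=30.0, e_mwh=60.0))
```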
Article
This paper addresses the adequacy of a generating system considering the impact of the operation strategies of storage and hydro energy resources. It is assumed that the installed storage capacity of a system can be assigned to two purposes: first, economic operation, which uses storage to shift electric energy and smooth the load curve; and second, reliability-based operation, which deploys storage to avoid load curtailments. The economic operation strategy is represented by starting from time-domain data and using the load curve modification method to synthesize the overall impact of storage. The reliability-based operation is modelled with a new method based on a Markov chain model that considers the dynamic behaviour of storage over time. Moreover, a novel state reduction method is presented for decreasing the number of Markov states in applications to large systems. The effectiveness of the presented methods is evaluated by running several simulation scenarios, with results presented for three test systems. The results demonstrate that evaluating system reliability in the presence of energy storage by considering the frequency and duration of system states is more effective than using methods based on the Capacity Outage Probability Table (COPT).
Conference Paper
An understanding of the capacity value of demand response, which represents the contribution it could make to power system adequacy, could provide an indication of its potential economic value and allow for comparison with other resources. This paper presents a preliminary methodology for estimating the capacity value of demand response utilizing demand response availability profiles and applying a response duration constraint. The results highlight the sensitivity of the capacity value of demand response to the energy limitation of the resource, the need to target different load types in different systems and the relatively small size of the demand response resources examined in relation to overall system size.
Article
The paper assesses the impact of electric-drive vehicles (EDVs) on power system reliability. For this purpose, it introduces direct optimization of the reliability indices LOLE (Loss of Load Expectation) and EENS (Expected Energy Not Served). The analysis is performed with the proposed optimization model applied to different strategies for charging/discharging the EDV batteries. In general, it is observed that numerous EDVs increase the system loading, resulting in weakened system reliability. However, the paper comes to the conclusion that EDVs could support the system to some extent, depending on the penetration level of EDVs, if an appropriate charging/discharging strategy is applied. Besides this technical question, the paper also addresses the costs of the system reserve provision required to support system reliability. A system operator could engage additional power plants in order to maintain system reliability or, if this is more cost effective, the support could be provided by EDVs applying the appropriate charging/discharging strategy. The paper proposes a new approach for the techno-economic assessment of possible solutions, which are ranked by their price-performance ratio.
Article
We present a method to estimate the capacity value of storage. Our method uses a dynamic program to model the effect of power system outages on the operation and state of charge of storage in subsequent periods. We combine the optimized dispatch from the dynamic program with estimated system loss of load probabilities to compute a probability distribution for the state of charge of storage in each period. This probability distribution can be used as a forced outage rate for storage in standard reliability-based capacity value estimation methods. Our proposed method has the advantage over existing approximations that it explicitly captures the effect of system shortage events on the state of charge of storage in subsequent periods. We also use a numerical case study, based on five utility systems in the U.S., to demonstrate our technique and compare it to existing approximation methods.
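The idea of propagating a probability distribution over the storage state of charge and turning it into an availability-style derating can be sketched very simply. The two-state call rule and the numbers below are my own simplifications, not the paper's dynamic program or its loss-of-load probabilities.

```python
import numpy as np

# A 100 MW, 4 h store whose state of charge is tracked in whole 1-hour blocks (0..4).
soc_dist = np.zeros(5)
soc_dist[-1] = 1.0                           # starts full
lolp = [0.01, 0.05, 0.10, 0.05]              # assumed system LOLP per peak hour

availability = []                            # P(store can deliver full power this hour)
for p_call in lolp:
    availability.append(float(soc_dist[1:].sum()))
    moved = np.zeros_like(soc_dist)
    moved[:-1] += soc_dist[1:] * p_call      # called and able to respond: drop one block
    moved[0]   += soc_dist[0] * p_call       # called but already empty: stays empty
    soc_dist = soc_dist * (1.0 - p_call) + moved

print(availability)   # derating / "forced outage" inputs for a standard capacity value method
```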
Conference Paper
Energy storage is an important feature in renewable-energy-based power systems, especially in small isolated applications. This paper describes a sequential Monte Carlo simulation method for incorporating energy storage capability in the generating capacity adequacy evaluation of such systems. Time series models were used to simulate the generation/load characteristics of a power system. Energy storage state time series were obtained from the load time series and the available generation time series and incorporated in the overall system adequacy evaluation. The impact of energy storage on power system reliability performance is investigated using example systems containing wind energy and/or solar energy.
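A bare-bones sequential Monte Carlo loop in the spirit described above is sketched below: for every sampled year, hourly generation and load are combined, the storage state is updated chronologically, and LOLE/EENS are accumulated. The single two-state conventional unit, the synthetic load and renewable series, and the per-hour availability draw are my simplifications of the method.

```python
import numpy as np

rng = np.random.default_rng(42)
hours, years = 8760, 100
load = 80.0 + 20.0 * rng.random(hours)            # MW, synthetic
renewables = 60.0 * rng.random(hours)             # MW, synthetic
p_store, e_store = 30.0, 120.0                    # storage power (MW) and energy (MWh)
unit_cap, unit_for = 60.0, 0.08                   # conventional unit and its forced outage rate

lole_sum = eens_sum = 0.0
for _ in range(years):
    unit_up = rng.random(hours) > unit_for        # simple per-hour availability draw
    soc, lole, eens = e_store, 0, 0.0
    for t in range(hours):
        margin = unit_cap * unit_up[t] + renewables[t] - load[t]
        if margin >= 0.0:                         # surplus: recharge
            soc = min(e_store, soc + min(margin, p_store))
        else:                                     # deficit: discharge, then count shortfall
            discharge = min(-margin, p_store, soc)
            soc -= discharge
            shortfall = -margin - discharge
            if shortfall > 1e-9:
                lole += 1
                eens += shortfall
    lole_sum += lole
    eens_sum += eens

print(f"LOLE ~ {lole_sum / years:.1f} h/a, EENS ~ {eens_sum / years:.1f} MWh/a")
```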
Article
The theory of loss-of-load probability mathematics has been generalized so that the effective load carrying capability of a new generating unit may be estimated using only graphical aids. A parameter m is introduced to characterize the loss-of-load probability as a function of reserve megawatts. Once m is known or estimated, the effective load carrying capability of a new generating unit may be related to its rating and its forced outage rate. Alternate unit additions may be compared on the basis of their effective capabilities. Comparable expansion patterns may be developed on the basis of equal load carrying capabilities. Numerical examples are used to illustrate the application of the effective capability concept to the evaluation of changes in the rating of a new unit and to the strategic design of expansion plans.
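Garver's construction reduces to a closed-form expression once the loss-of-load probability is approximated as an exponential function of the reserve margin with characteristic slope m. The sketch below evaluates that textbook formula; the megawatt values are illustrative, not figures from the 1966 paper.

```python
import math

def garver_elcc(rating_mw: float, forced_outage_rate: float, m_mw: float) -> float:
    """Effective load carrying capability under the approximation LOLP ~ K * exp(-reserve / m):
    ELCC = -m * ln(r + (1 - r) * exp(-c / m)) for a unit of rating c and forced outage rate r."""
    r = forced_outage_rate
    return -m_mw * math.log(r + (1.0 - r) * math.exp(-rating_mw / m_mw))

print(garver_elcc(rating_mw=500.0, forced_outage_rate=0.05, m_mw=150.0))   # ~372 MW
```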
Redefining Resource Adequacy for Modern Power Systems
D. Stenclik, A. Bloom, W. Cole, A. Acevedo, G. Stephen, A. Tuohy