25th International Conference on Electricity Distribution Madrid, 3-6 June 2019
Paper n° 1099
CIRED 2019 1/5
LARGE SCALE AGENT BASED SIMULATION OF DISTRIBUTION GRID LOADING AND
ITS PRACTICAL APPLICATION
Chris KITTL, Johannes HIRY, Christian WAGNER, Christian PFEIFFER, Christian REHTANZ
TU Dortmund University – Germany
chris.kittl@tu-dortmund.de
Christoph ENGELS
Univ. of Appl. Sciences Dortmund – Germany
christoph.engels@fh-dortmund.de
ABSTRACT
Academic studies and long-term planning demand a highly sophisticated simulation of the distribution system's usage, considering operational actions and the repercussions of market-driven measures when applied on a large scale. This paper presents enhancements to the SIMONA tool enabling a large-scale distribution system simulation of a lifelike 50,000-node model.
INTRODUCTION AND MOTIVATION
For efficient strategic energy system development, the long-term planning stage as well as academic studies try to anticipate the system's usage for periods of several years. Within this time horizon, new, yet unknown system usage patterns may arise. One of the most important steps in planning is to check whether a given energy system is able to serve an assumed energy demand. Time-coupled assets in particular, such as storage systems or electric vehicles, challenge conventional methods like static power flow calculations and require a time-series based assessment. A system is explicitly suitable when system operation staff is able to operate the foreseen system in real time without the system reaching a prohibited state. Therefore, the suitability check shall be able to simulate operational actions as well. Moreover, the strategic assessment may also identify suitable incentive-based measures to reduce conventional grid reinforcement needs. Those measures may provoke unwanted and unintended interdependencies when applied to a large number of entities. The agent-based simulation environment SIMONA [1] is capable of fulfilling all these requirements. Its modular bottom-up design allows for simulating grids of any theoretical size, while accounting for the individual aims and strategies of single customers at the same time. The downside of this approach is a high computational effort and a huge amount of data.
Within the present paper, we introduce the key features of SIMONA that are mandatory for a large-scale simulation and afterwards apply the tool in a case study with a real distribution grid model comprising approx. 50,000 nodes and nearly the same number of branches spanning five voltage levels.
RELEVANT SIMULATION FEATURES
Assessing the distribution system state of the aforementioned model for a period of one year in hourly resolution means calculating at least 438 million complex nodal powers – not accounting for the iterative simulation of control schemes. SIMONA is a bottom-up simulation framework, determining each single nodal power based on the individual behaviour of all approx. 20,000 connected assets. This gives some impression of the computational effort raised by such a case study. In the following, the simulation features needed to handle such a comprehensive distribution grid model and the computational complexity it raises are described.
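For orientation, the order of magnitude follows directly from the model size and the hourly resolution (taking roughly 50,000 nodes as in the case study model):

$$50{,}000\ \text{nodes} \times 8760\ \text{h/a} \approx 4.38 \cdot 10^{8}\ \text{complex nodal powers}$$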
Tap-changing three winding transformers
Higher voltage levels often comprise special and complex
assets. One of those assets is a three winding transformer.
Whilst the tap changer of a two winding transformer may be located on either the high or the low voltage side, for a three winding transformer it can only be located on the high voltage side.
Based on [2], the authors model a three winding transformer as an adapted T-equivalent one-line diagram as shown in Figure 1. The virtual node belongs to voltage level A, whilst the branch admittances (denoted with an apostrophe) are referred to voltage level A. One main design aspect of SIMONA is to assign galvanically separated subnets to distinct NetAgents. This is justified by the advantage that the simulation of different subnets can be dispersed across different computers, as the underlying JADE framework allows message delivery across physical networks. The different voltage levels are connected via messages sent during a forward backward sweep through the levels. The lowest levels perform a power flow calculation and send the apparent power exchanged via the interconnection nodes to the higher voltage levels until the highest level is reached. That subnet then sends the calculated nodal voltages at the interconnection nodes to its inferior grids, which recalculate their power flow so that they can in turn forward their nodal voltages. This is repeated back and forth until the exchanged power values no longer change between iterations.
Figure 1: Circuit symbol and equivalent one-line diagram of
a tap-changing three winding transformer
For two winding transformers this process is quite simple, as by design we choose to account for them in the lower voltage level. With three winding transformers, it becomes more complicated, as the exchanged power has to be handled at the virtual node. Therefore, we split up the equivalent circuit and disperse it to the three concerned NetAgents as shown in Figure 2. Following the above-mentioned forward backward sweep, NetAgents B and C first announce their apparent residual power via message (1). If a voltage measurement is assigned to the given transformer in one of the inferior subnets, the respective NetAgent also attaches a voltage regulation request with its favoured increase or decrease in nodal voltage at the connecting node. NetAgent A receives the messages, adds up the apparent powers and assigns them as the node's apparent power for the later power flow calculation. Additionally, it balances the received voltage regulation requests and adjusts the tap changer accordingly. During the backward part of the forward backward sweep, NetAgent A sends the newly calculated nodal voltage as well as the chosen tap changer position to its inferior NetAgents B and C via message (2).
Figure 2: Message transfer between different NetAgents
Although the subnets A, B and C are strongly coupled in a triangular dependency, the approach presented above allows for easy parallelisation of the grid simulation and thereby enables simulations of large-scale grids.
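As an illustration of this forward-backward sweep, a minimal sketch follows; the class and function names (`Subnet`, `solve_power_flow`, `sweep`) and all numerical values are illustrative assumptions and do not reflect SIMONA's actual JADE-based message passing:

```python
# Illustrative sketch of the forward-backward sweep between subnets.
# In SIMONA, the announced powers and voltages are exchanged as agent messages.

class Subnet:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)   # inferior subnets (lower voltage levels)
        self.boundary_voltage = 1.0      # p.u. voltage at the interconnection node
        self.exchanged_power = 0.0       # apparent power announced to the superior level

    def solve_power_flow(self):
        """Placeholder for the subnet-internal power flow (e.g. Newton-Raphson)."""
        own_load = 0.05  # dummy load, only to make the loop converge to something
        self.exchanged_power = (own_load
                                + sum(c.exchanged_power for c in self.children)) \
            / self.boundary_voltage

def forward(subnet):
    """Forward sweep: lower levels calculate first and announce powers upwards."""
    for child in subnet.children:
        forward(child)
    subnet.solve_power_flow()

def backward(subnet):
    """Backward sweep: the superior level hands calculated voltages downwards."""
    for child in subnet.children:
        child.boundary_voltage = 0.98 * subnet.boundary_voltage  # dummy voltage result
        backward(child)

def sweep(root, tol=1e-6, max_rounds=20):
    """Repeat forward and backward passes until exchanged powers stop changing."""
    previous = None
    for _ in range(max_rounds):
        forward(root)
        backward(root)
        if previous is not None and abs(root.exchanged_power - previous) < tol:
            break
        previous = root.exchanged_power
    return root

grid = Subnet("EHV", [Subnet("HV", [Subnet("MV", [Subnet("LV")])])])
sweep(grid)
```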
Multiple slack nodes
In addition to three winding transformers, high voltage grids commonly exhibit meshed grid structures fed at more than one node from the extra high voltage level. Keeping in mind that SIMONA divides the total grid model into galvanically decoupled grid models, multiple coupling points to superior levels impose the need to model multiple slack nodes per subnet.
By default, the used Newton-Raphson (NR) power flow algorithm does not allow for multiple slack nodes. To overcome this shortcoming, we introduce a SlackEmulator model. One node is arbitrarily chosen as the “real” slack node, whereas the others serve as connecting points for the previously mentioned SlackEmulators. Those are dummy elements providing or consuming a fixed nodal apparent power and storing the target voltage of their connecting node.
We establish an additional loop around the NR calculation: For the first iteration, the nodal residual powers are summed up to estimate the total subnet's residual power that later has to be balanced by the available slack nodes. We evenly assign this apparent residual power to all slack nodes – or rather to all available SlackEmulators – of this subnet. Given a valid power flow result, the actual residual power is recalculated and once again dispersed to all SlackEmulators. The additional loop ends when the change in all SlackEmulators' power is less than a pre-set threshold. During the backward stage of the power flow algorithm, the given NetAgent receives the calculated nodal voltage at each coupling point from its superior NetAgent. In order to account for this recalculated nodal voltage, all nodes serving as emulated slack nodes are modelled as PV nodes having the received voltage magnitude as target voltage.
In this way, the total subnet's residual power is evenly divided among all coupling points. For future development, an impedance-weighted balancing is intended to allow for an even more detailed calculation when the coupling points differ strongly in their impedance-based distance to the total grid model's slack node.
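A minimal sketch of this additional loop around the NR calculation, assuming a hypothetical callback `run_newton_raphson` that returns the recalculated residual power of the subnet for a given set of SlackEmulator injections; the uniform split corresponds to the even distribution described above, while an impedance-weighted split would simply replace it:

```python
import numpy as np

def distribute_slack(run_newton_raphson, initial_residual, n_slack,
                     tol=1e-6, max_outer=20):
    """Outer loop balancing a subnet's residual power over emulated slack nodes.

    run_newton_raphson(slack_powers) is assumed to solve the subnet power flow
    and return the recalculated total residual apparent power to be covered.
    """
    # First guess: split the summed-up residual power evenly over all slack nodes.
    slack_powers = np.full(n_slack, initial_residual / n_slack, dtype=complex)
    for _ in range(max_outer):
        residual = run_newton_raphson(slack_powers)          # valid power flow result
        new_powers = np.full(n_slack, residual / n_slack, dtype=complex)
        if np.max(np.abs(new_powers - slack_powers)) < tol:
            break                                            # SlackEmulators settled
        slack_powers = new_powers
    return slack_powers
```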
Simple continuous power flow calculation
In SIMONA the power flow calculation is realised as a numeric NR calculation. Based on an initial guess of the nodal voltages describing the system's state, the non-linear system equations are solved until the system state converges. The grid usage poses different classes of challenges to the NR algorithm [3]. The proposed approach addresses performance improvements for well-conditioned power flow problems as well as reducing the risk of not finding a solution for ill-conditioned problems with small regions of attraction [3]. Various sophisticated methods have been developed to improve the performance of power flow calculations [4]. However, the authors intend to exploit the special properties that time series based power flow calculation offers to the classic NR algorithm: time series based distribution system assessment simulates continuous system usage. Hence, we assume that a) the system does not change drastically between time steps and b) the usage pattern does not change too much either. Therefore, we use the last known information about the system to make a better starting guess for the NR calculation of the next time step.
The nodal residual apparent powers $\underline{\boldsymbol{s}}$ and Kirchhoff's law describe the system state – expressed by the nodal voltages $\underline{\boldsymbol{v}}$ – as a system of non-linear equations:

$$\underline{\boldsymbol{s}} = \underline{\boldsymbol{v}} \circ \underline{\boldsymbol{i}}^{*} = \underline{\boldsymbol{v}} \circ \left([\underline{\boldsymbol{Y}}]\,\underline{\boldsymbol{v}}\right)^{*} \qquad (1)$$

In equation (1) the nodal admittance matrix is denoted as $[\underline{\boldsymbol{Y}}]$ and $\circ$ denotes the Hadamard product – the element-wise product of two vectors. The NR algorithm is an iterative approach and its basic principle is to linearise the quadratic system equations in each $k$-th iteration step by means of a multi-dimensional Taylor expansion and to apply the corrections to the solution of the previous iteration step $(k-1)$:
$$\begin{bmatrix}\boldsymbol{\delta}^{(k)}\\ \boldsymbol{v}^{(k)}\end{bmatrix} = \begin{bmatrix}\boldsymbol{\delta}^{(k-1)}\\ \boldsymbol{v}^{(k-1)}\end{bmatrix} + \begin{bmatrix}\Delta\boldsymbol{\delta}^{(k)}\\ \Delta\boldsymbol{v}^{(k)}\end{bmatrix} \qquad (2)$$

with

$$\begin{bmatrix}\Delta\boldsymbol{\delta}^{(k)}\\ \Delta\boldsymbol{v}^{(k)}\end{bmatrix} = \left[\boldsymbol{J}^{(k)}\right]^{-1}\begin{bmatrix}\Delta\boldsymbol{p}^{(k)}\\ \Delta\boldsymbol{q}^{(k)}\\ \Delta\boldsymbol{v}^{2\,(k)}\end{bmatrix} \qquad (3)$$

$$\begin{bmatrix}\Delta\boldsymbol{p}^{(k)}\\ \Delta\boldsymbol{q}^{(k)}\\ \Delta\boldsymbol{v}^{2\,(k)}\end{bmatrix} = \begin{bmatrix}\boldsymbol{p}^{\mathrm{set}}-\boldsymbol{p}\!\left(\boldsymbol{\delta}^{(k-1)},\boldsymbol{v}^{(k-1)}\right)\\ \boldsymbol{q}^{\mathrm{set}}-\boldsymbol{q}\!\left(\boldsymbol{\delta}^{(k-1)},\boldsymbol{v}^{(k-1)}\right)\\ \boldsymbol{v}^{2,\mathrm{set}}-\left(\boldsymbol{v}^{(k-1)}\right)^{2}\end{bmatrix} \qquad (4)$$

Equations (2)-(4) comprise the current iteration step $k$, the Jacobian matrix $\boldsymbol{J}^{(k)}$ of this iteration step, the node voltage corrections $\Delta\boldsymbol{\delta}^{(k)}$ resp. $\Delta\boldsymbol{v}^{(k)}$ and the vector of changes in active power $\Delta\boldsymbol{p}^{(k)}$ and reactive power $\Delta\boldsymbol{q}^{(k)}$ for each PQ node as well as the change in squared voltage magnitude $\Delta\boldsymbol{v}^{2\,(k)}$ in each PV node in comparison to the previous iteration step $(k-1)$.
Given the aforementioned assumption that both the grid structure and the grid usage – described by the vector of nodal apparent powers – are not expected to differ much from time step $(t-1)$ to $t$, the last known Jacobian matrix $\boldsymbol{J}_{t-1}$ may help in making a good estimation for the start vector:

$$\begin{bmatrix}\boldsymbol{\delta}^{(0)}_{t}\\ \boldsymbol{v}^{(0)}_{t}\end{bmatrix} = \begin{bmatrix}\boldsymbol{\delta}_{t-1}\\ \boldsymbol{v}_{t-1}\end{bmatrix} + \left[\boldsymbol{J}_{t-1}\right]^{-1}\begin{bmatrix}\boldsymbol{p}_{t}-\boldsymbol{p}_{t-1}\\ \boldsymbol{q}_{t}-\boldsymbol{q}_{t-1}\\ \boldsymbol{v}^{2}_{t}-\boldsymbol{v}^{2}_{t-1}\end{bmatrix} \qquad (5)$$

With the help of the known nodal powers of time step $(t-1)$, this start vector lies satisfactorily close to the final result, reducing the number of iterations. Although a single iteration does not take a long time, each saved iteration considerably increases the scalability of the simulation due to the high number of power flow calculations.
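A minimal numerical sketch of this warm start (cf. equation (5)), assuming hypothetical callbacks `mismatch` and `jacobian` and a square Jacobian as in the standard NR formulation; the names are illustrative and not SIMONA's API:

```python
import numpy as np

def warm_start(x_prev, jac_prev, setpoints_prev, setpoints_now):
    """Estimate the NR start vector for the next time step (cf. equation (5)).

    x_prev         : converged state [delta; v] of time step t-1
    jac_prev       : last Jacobian of time step t-1
    setpoints_prev : nodal set values [p; q; v^2] of time step t-1
    setpoints_now  : nodal set values [p; q; v^2] of time step t
    """
    delta_rhs = setpoints_now - setpoints_prev
    return x_prev + np.linalg.solve(jac_prev, delta_rhs)

def newton_raphson(x0, mismatch, jacobian, tol=1e-8, max_iter=20):
    """Plain NR iteration; mismatch(x) and jacobian(x) are problem-specific callbacks."""
    x = x0.copy()
    for k in range(max_iter):
        f = mismatch(x)
        if np.max(np.abs(f)) < tol:
            return x, k          # converged after k iterations
        x = x + np.linalg.solve(jacobian(x), f)
    return x, max_iter
```

In the time loop, the converged state and Jacobian of step $t-1$ seed step $t$, which is where the saved inner iterations come from.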
Wide area voltage regulation
Time series based distribution grid simulation reduces the gap between the planning process and operation, as operative measures or control schemes should favourably be accounted for in planning as well. One interesting aspect in this context is the simulation of transformer tap control schemes.
In general, there are two schemes used in practical application. Local transformer tap control simply compares the voltage magnitude at the transformer's secondary bus to pre-set thresholds. The wide area control scheme, on the other hand, accounts for measurements submitted by voltage measurement devices installed at nodes prone to extremal voltage magnitudes.
To account for this, we introduce measurement system models to SIMONA. They may be placed at selected nodes in the grid and define a restriction on which simulation values are available to a given control scheme. Within the simulation's configuration stage, the user is able to define a trigger model – comprising minimum and maximum voltage magnitude thresholds as well as a list of available measurement systems – and assign it to the given transformers, both two and three winding. With the help of those measurement systems, SIMONA is capable of simulating both local and wide area tap control schemes, examining the impact of different measurement placement strategies and developing further control schemes based on the availability of measurements in the grid under test.
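A possible shape of such a trigger model, sketched with illustrative field names (the actual SIMONA configuration format is not specified here):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MeasurementSystem:
    node_id: str                 # node the measurement device is placed at

@dataclass
class TapTriggerModel:
    v_min: float                 # lower voltage magnitude threshold in p.u.
    v_max: float                 # upper voltage magnitude threshold in p.u.
    measurements: List[MeasurementSystem] = field(default_factory=list)

    def requests_tap_change(self, measured_voltages: Dict[str, float]) -> bool:
        """True if any visible measurement violates the configured thresholds."""
        visible = [measured_voltages[m.node_id] for m in self.measurements]
        return any(v < self.v_min or v > self.v_max for v in visible)

# Local control: only the transformer's secondary bus is "measured".
local = TapTriggerModel(0.96, 1.04, [MeasurementSystem("mv_bus_1")])
# Wide area control: additional devices at nodes prone to extremal voltages.
wide_area = TapTriggerModel(0.96, 1.04, [MeasurementSystem("mv_bus_1"),
                                         MeasurementSystem("feeder_end_17")])
```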
RESULT PRESENTATION
The outcome of such a large-scale simulation is a huge
amount of data, which needs to be presented and analysed
appropriately. In conformance with the Gartner definition,
large-scale simulation is regarded as part of Big Data [5].
Big Data analysis requires extensive data access when joining different data sources. Usually this includes full table scans over the entire data volume, which are expensive database operations. Here the authors follow a Deep Data approach, which takes the data gathered and pairs it with industry experts who have in-depth knowledge of the area. Deep Data pares down the massive
amount of information into useful sections, excluding
redundancy. Instead of just thinking "big" when it comes
to data, the approach is to start thinking "deep". The Deep
Data framework is based on the premise that a small
number of information-rich data sources, when leveraged
properly, can yield greater value than vast volumes of
data [6],[7]. The approach starts with the definition of
appropriate granularity levels derived from business use
cases of grid planning or asset management using methods
like the Kimball Enterprise-Bus-Matrix [8]. The
identification of coarser granularity allows for pre-
aggregated data representations and smaller data volumes.
The Kimball matrix is used to derive the information-rich
data sources as an efficient foundation of further analysis.
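As a small illustration of such a pre-aggregation to a coarser granularity (hypothetical column names, pandas-based), hourly asset loadings could for instance be condensed to daily maxima per voltage level before any further joins:

```python
import pandas as pd

# Hypothetical raw result table: one row per asset and time step.
results = pd.DataFrame({
    "time":     pd.to_datetime(["2016-07-01 00:00", "2016-07-01 01:00"] * 2),
    "asset_id": ["line_1", "line_1", "line_2", "line_2"],
    "volt_lvl": ["MV", "MV", "LV", "LV"],
    "loading":  [0.42, 0.57, 0.81, 0.76],   # loading in p.u. of rated power
})

# Pre-aggregation to a coarser granularity: daily maximum loading per voltage level.
daily_max = (results
             .assign(day=results["time"].dt.date)
             .groupby(["day", "volt_lvl"], as_index=False)["loading"]
             .max())
print(daily_max)
```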
Figure 3: Visualisation of Key Performance Values
The proposed approach includes the preparation of
measures like asset loading, voltage magnitude and angle
in geospatial, schematic and tabular views. These views
can be controlled by rich filter functions restricting the key
performance values to dedicated dimension elements like
scenarios, time intervals, regions or voltage levels on
aggregated and detailed levels. Figure 3 shows the
aggregated asset loading as the selected key performance
indicator in a mid-voltage grid sector.
CASE STUDY
To demonstrate the presented concept, we carry out a case
study. Please notice, that it has not been focus yet to find a
good system state by trimming the model parameters, but
to show the large scale applicability of our agent-based
simulation environment SIMONA.
Simulation model
A lifelike distribution grid model from the project Agent.GridPlan [9] has been used in this paper. It comprises two medium voltage (MV) levels as well as one extra high (EHV), one high (HV) and one low voltage (LV) level, with the key values listed in Table 1.
Table 1: Key values of the simulation model
Volt. lvl.   Subnets   Nodes     Branches   Shunts
5            568       47,661    47,814     20,403
The shunt elements represent both consuming and generating assets. Loads are modelled as standard household loads as per the German standard load profile [10]. All generating assets are modelled following a bottom-up approach, calculating the apparent power output based on (non-)electrical fundamental data [1]. Simulations are carried out with weather data for one day in July in hourly resolution. As the correct geographical location of the nodes is known, all weather dependent assets are served with the geographically correct weather data.
Investigation A: Performance increase with
simple continuous power flow
The first investigation targets the potential performance increase of the aforementioned approach of guessing improved start vectors for the power flow calculation in comparison to simply using the target voltages. A PC with an Intel Xeon E5-1650 CPU and 128 GB RAM serves as the simulation platform.
The total simulation times shown in Table 2 reveal that, with the improved guessing of start vectors, the simulation time can be reduced by approx. 0.98 % depending on the actual simulation. In order to determine the final distribution grid state of one subnet in each time step, a lot of power flow calculations have to be carried out. Iteration loops are introduced by, among others,
• the forward backward sweep to integrate all voltage levels,
• balancing out the subnet's residual power on the different slack nodes and
• control schemes (like Q(U) control and the traffic light concept [11]) or negotiations.
Therefore, if one of four to six inner iterations can be saved, this has a major impact on the overall performance.
Table 2: Total simulation time for both calculation approaches
                    Simple        Extended
Simulation time     640.79 s      629.68 s
+ Export time       1,140.82 s    1,180.10 s
Moreover, the comparison between simulations with and without result export highlights the urgency to think about the further usage of simulation results. Persisting everything takes around 180 % to 190 % of the pure simulation time. Although using buffers and parallelisation decouples the actual simulation from the input/output processes, further steps can only take place when all results are available in the database. Therefore, the following remarks should be considered when applying time series based simulations on a larger scale:
1) The (needed) output of the simulation shall be
specified properly. Data filtering and information
compression shall be used where possible and
loss-free in terms of information – Deep Data
instead of Big Data as already mentioned.
2) Integration of processes and tools plays a major role. When data can be kept in memory and directly handed over to the next process steps, major savings can be made. Therefore, an increased discussion about open interface definitions and open source tools is appropriate.
How iteratively interacting process modules and information compression can be realised was also part of the Agent.GridPlan project and can be reviewed in [12].
Investigation B: Wide area monitoring system
As a simple, realistic application example, we conduct a comparative assessment of the local vs. the wide area tap control scheme.
The permissible voltage band of ± 10 % [13] at each end customer's connection point has been split into ± 4 % for the MV level and ± 6 % for the LV level, the latter also comprising the voltage drop over the secondary substation transformer, as usually applied in German distribution grid studies [14]. To determine the transformer trigger settings, two initial simulations are made with the following settings: Transformers have a fixed tap position that leads to a secondary bus voltage close to 1.03 p.u. The first simulation A is run with load only to determine each subnet $i$'s maximum voltage drop $\Delta v_{\mathrm{drop,max},i}$. Analogously, a case with high infeed and low load (30 %) determines $\Delta v_{\mathrm{rise,max},i}$ (simulation B). To ensure compliance with the voltage thresholds, the triggers are set to:

$$v_{\mathrm{trigger,min},i} = 0.96\ \mathrm{p.u.} + \Delta v_{\mathrm{drop,max},i} \qquad (6)$$
$$v_{\mathrm{trigger,max},i} = 1.04\ \mathrm{p.u.} - \Delta v_{\mathrm{rise,max},i} \qquad (7)$$
Additionally, simulations A and B define the candidates for the voltage measurements used in the wide area control scheme (simulation C) as the nodes with the extremal voltages. The triggers for simulation C are set to:

$$v_{\mathrm{trigger,min},i} = 0.96\ \mathrm{p.u.} + \varepsilon \qquad (8)$$
$$v_{\mathrm{trigger,max},i} = 1.04\ \mathrm{p.u.} - \varepsilon \qquad (9)$$

with $\varepsilon$ denoting a small security margin.
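The derivation of the trigger settings from the two preparatory simulations can be sketched as follows; the function and variable names as well as the example values are illustrative:

```python
def local_triggers(dv_drop_max, dv_rise_max, v_lower=0.96, v_upper=1.04):
    """Equations (6)/(7): shift the band limits by the worst-case drop/rise per subnet."""
    return v_lower + dv_drop_max, v_upper - dv_rise_max

def wide_area_triggers(margin=0.0, v_lower=0.96, v_upper=1.04):
    """Equations (8)/(9): measurements sit at the extremal nodes, only a margin remains."""
    return v_lower + margin, v_upper - margin

# Example for one MV subnet: 2.5 % worst-case drop, 1.8 % worst-case rise.
v_min_loc, v_max_loc = local_triggers(0.025, 0.018)   # -> (0.985, 1.022)
```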
Figure 4: Nodal voltages in both control schemes (green: local
control, orange: wide area control)
Figure 4 shows that local tap control is not able to prevent violations in the LV level, whereas no violation in the MV level is apparent. Moreover, the wide area control scheme is also not able to relieve the violations in the LV level. Obviously the assumed voltage limits per voltage level are not suitable and need to be revised. This highlights SIMONA's potential in assisting planning engineers in their decision-making.
CONCLUSION AND OUTLOOK
The present paper gives insight into the simulation (model) complexity arising when time series based grid performance assessment shall be used. With the shown adaptations, SIMONA proves to be a powerful tool for academic studies, like [12], as well as for use in long-term planning processes.
Future work will mostly focus on how time series based grid performance assessment can be incorporated into easy-to-use and comprehensive future-ready planning processes. The main topics of interest are data reduction, recognition of repeating usage patterns as well as decision support functionalities.
ACKNOWLEDGEMENTS
The authors gratefully thank Westnetz GmbH for
supporting the presented research by granting access to the
mentioned real-life grid model during their participation in
the research project Agent.GridPlan. The European Fund
for Regional Development has funded the project under
grant agreement number EU-1-1-006.
REFERENCES
[1] J. Kays, C. Rehtanz, 2016, “Planning process for
distribution grids based on flexibly generated time
series considering RES, DSM and storages”, IET
Generation, Transmission & Distribution (IET GTD),
vol. 10, 3405-3412. DOI: 10.1049/iet-gtd.2015.0825.
[2] F. de Leon, J. A. Martinez, 2009, “Dual Three-Winding Transformer Equivalent Circuit Matching Leakage Measurements”, IEEE Transactions on Power Delivery, DOI: 10.1109/TPWRD.2008.2007012.
[3] F. Milano, 2009, “Continuous Newton's Method for
Power Flow Analysis”, IEEE Transactions on Power
Systems, vol. 24, 50-57, DOI:
10.1109/TPWRS.2008.2004820.
[4] J. F. Gutiérrez, M. F. Bedriñana, C. A. Castro, 2011,
“Critical comparison of robust load flow methods for
ill-conditioned systems”, Proceedings of 2011 IEEE
Trondheim PowerTech, Trondheim, DOI:
10.1109/PTC.2011.6019405.
[5] M. Beyer, “Gartner Says Solving ‘Big Data’ Challenge Involves More Than Just Managing Volumes of Data”, available: https://www.gartner.com/newsroom/id/1731916
[6] K. Matthews, 2016, “The Difference Between Big
Data And Deep Data - Understanding the difference
will be important for 2017” available:
https://channels.theinnovationenterprise.com/articles/t
he-difference-between-big-data-and-deep-data.
[7] B. Raghavan, 2017, “Defining Deep Data: What It Is and How to Use It”, available: http://www.itbusinessedge.com/slideshows/defining-deep-data-what-it-is-and-how-to-use-it-05.html.
[8] R. Kimball, M. Ross, 2002, “The Data Warehouse
Toolkit: The Complete Guide to Dimensional
Modeling”, Wiley.
[9] L. Jendernalik, D. Giavarra, C. Engels, J. Hiry, C. Kittl,
C. Rehtanz, 2017, “Holistic network planning
approach: enhancement of the grid expansion using the
flexibility of network participants”, 24th International
Conference & Exhibition on Electricity Distribution
(CIRED), Glasgow
[10] H. Meier, C. Fünfgeld, T. Adam, B. Schlieferdecker,
1999, “Repräsentative VDEW-Lastprofile”, available:
https://www.bdew.de/media/documents/1999_Reprae
sentative-VDEW-Lastprofile.pdf
[11] J. Kays, A. Seack, U. Häger, 2016, “Consideration of
innovative distribution grid operation concepts in the
planning process”, Proceedings of 2016 IEEE PES
Innovative Smart Grid Technologies Conference
Europe (ISGT-Europe), Ljubljana, DOI:
10.1109/ISGTEurope.2016.7856331.
[12] J. Hiry, C. Kittl, L. Willmes, S. Schimmeyer, C.
Rehtanz, 2019, "Automated Time Series Based Grid
Extension Planning Using a Coupled Agent Based
Simulation and Genetic Algorithm Approach",
Proceedings of the 25th International Conference on
Electricity Distribution (CIRED), Madrid
[13] European Standard EN 50160:2011, “Voltage
Characteristics of Electricity Supplied by Public
Distribution Systems”
[14] Deutsche Energie-Agentur GmbH (editor), 2012,
“Ausbau- und Innovationsbedarf der Stromverteilnetze
in Deutschland bis 2030.”, Berlin