A Cloud-Based Platform for Service Restoration in
Active Distribution Grids
Maliheh Haghgoo, Alberto Dognini and Antonello Monti, Senior Member, IEEE
Abstract—In modern distribution grids, the access to the
growing amount of data from various sources, the execution
of complex algorithms on-demand, and the control of sparse
actuators require on-demand scalability to support fluctuating
workloads. Cloud computing technologies represent a viable solu-
tion for these requirements. To ensure that data can be exchanged and shared efficiently, and to fully achieve the benefits of cloud computing in support of the advanced analytics and mining required in smart grids, applications can be empowered with semantic information integration. This paper adopts the semantic web in a cloud-based platform to analyze power distribution grid data and applies a service restoration application to re-energize loads after an electrical fault. The exemplary
implementation of the demo is powered by FIWARE, which is
based on open-source and customizable building blocks for future
internet applications and services, and the SARGON ontology
for the energy domain. The tests are deployed by integrating the
semantic information, based on the IEC 61850 data model, in the
cloud-based service restoration application and interfacing the
field devices of the distribution grids. The platform performance, measured as network latency and computation time, confirms the feasibility of the proposed solution, constituting a reference for future deployments of smart energy platforms.
Index Terms—Smart energy platform, Service-oriented, Mid-
dleware, FIWARE, Service restoration, Cloud-based platform,
Semantic web
ACRONYMS
API Application Programming Interface
CAIDI Customer Average Interruption Duration Index
CB Context Broker
CDC Common Data Class
CIM Common Information Model
CT Current Transformer
DER Distributed Energy Resources
DMS Distribution Management Systems
DO Data Object
FLISR Fault Location Isolation and Service Restoration
GTNET Gigabit Transceiver Network
HILP High Impact Low Probability
IED Intelligent Electronic Device
LD Logical Device
LN Logical Node
NGSI-LD NGSI Linked Data
OMA Open Mobile Alliance
RESTful Representational State Transfer
RTDS Real Time Digital Simulator
RTU Remote Terminal Unit
SAIDI System Average Interruption Duration Index
SAREF Smart Appliance Reference
SCADA Supervisory Control And Data Acquisition
SOA Service Oriented Architecture
SOAP Simple Object Access Protocol
SR Service Restoration
The authors are with the E.ON Energy Research Center, Institute for Automation of Complex Power Systems, RWTH Aachen University, Aachen 52074, Germany (e-mail: mhaghgoo@eonerc.rwth-aachen.de; adognini@eonerc.rwth-aachen.de; amonti@eonerc.rwth-aachen.de).
I. STATE OF THE ART
The electrical grid network must be continuously monitored
through data gathered from measurement instruments, to guar-
antee the prompt intervention of protection components in case
of fault, minimize the consequences of outages on the power
delivery and recover to healthy conditions in minimum time
[1]. The occurrence of electricity interruptions causes several
economic and social impacts on the network and worsens
the reliability indices such as System Average Interruption
Duration Index (SAIDI) or Customer Average Interruption
Duration Index (CAIDI), which represent the quality of power
delivery [2]. The utilities aim at improving these performance indicators to avoid sanctions and increase the end-users' satisfaction, by equipping the distribution grid with smart automation solutions [3]. Currently, many electrical networks
are not fully automated, and corrective actions for the fault
management are performed by human operators. This process
introduces delays and criticalities, which reduce the reliability
of the system, depending upon the size of the outage [4].
Fault Location Isolation and Service Restoration (FLISR) is
a key enabler of the self-healing of Distribution Management
Systems (DMS) in automated electrical grids [5], [6]. FLISR
consists of two parts: at first the faulted area is isolated by
tripping the nearest upstream circuit breaker and opening the
downstream switches. Then the network is re-configured to
energize the restorable loads, which had been disconnected
being downstream of the isolated faulted area. This paper focuses on the Service Restoration (SR) functionality, on which the duration of the service interruption, and therefore the reliability indices, depend.
The automated SR requires a vast amount of data (switching
devices statuses, measurements from metering devices, load
and generation profiles as well as network parameters) that
need to be analyzed, requiring high computing power and time
consuming algorithms. Consequently, different solutions have
been suggested by researchers based on cloud computing to
provide scalability on-demand, at fault occurrence. Previous
research activities investigated distributed and parallel com-
puting in future power systems. A grid computing method is
proposed in [7], a fast parallel processing of mass data in cloud
computing is given in [8] and a study of running a smart grid
control software in cloud computing is presented in [9], to
identify and examine technical needs for building the smart
grid automation and the computing infrastructure.
Furthermore, to ease and improve the development of distributed architectures, cloud vendors offer distributed system services, called middleware, which have standard Application Programming Interfaces (API) and protocols. Several studies focus on middleware, in the frame of projects or research, as compared and classified in [10], [11]. Recently, middleware development has become of central importance to the concept and design of a Service Oriented Architecture (SOA). In SOA, application components provide services to other components via a communication protocol and web standards such as the Simple Object Access Protocol (SOAP) and Representational State Transfer (RESTful) [12], [13]. SOA enables SR application development through discrete units of functionality, which are self-contained, interoperable and technology-neutral.
Hence, the traditional cloud computing paradigm of data centers and cloud services must be extended to address cross-domain requirements and include diverse types of data sources. Indeed, such a system must address the challenges of accessing the different resources.
Accordingly, one of the primary requirements of such an infrastructure is the provisioning and discovery of various data sources, including registration, removal, querying, etc. Therefore, a semantic-based approach has been adopted, which is introduced at the early stages of cross-domain information access and tackles the discovery of various data sources [14].
Moreover, the smart grid is known as a complex cyber-physical system due to its decentralized infrastructure. To address the challenges introduced in smart grids at the substation automation level, IEC 61850 and IEC 61499 are taken into consideration as reference industrial standards. Several research works have applied ontology modeling to these two standards. To harmonize the industrial standard IEC 61850 and the Common Information Model (CIM), [15], [16] use ontologies based on the industrial standards IEC 61970 and IEC 61968. The authors of [17] developed a requirement modeling framework that automates the translation from requirements to ontologies based on IEC standards. Nevertheless, the main contribution of the mentioned works is modeling the smart grid rather than demonstrating the usage of a semantic information model in the system. In [18], a semantic information model is employed to provide sufficient interoperability and reduce the costs associated with implementing advanced controls, fault detection and diagnostics. Although considerable effort was invested in developing the information model, that work is limited to smart building technologies and use cases.
Building on the aforementioned studies, in this article a cloud-based platform for SR in active distribution grids is extended with a semantic information model based on the IEC 61850 standard. The features of the semantic model ease future graph analysis in support of network reconfiguration. Furthermore, advanced analytics can be established by engaging the semantic model of the smart grid system.
Fig. 1. Three main layers in cloud-based platform.
Fig. 2. Architecture mapped to the services.
To accelerate the implementation process, FIWARE [19] has been used. FIWARE is an open-source, SOA-based architecture platform that extended its interface with NGSI Linked Data (NGSI-LD) [20] to include the semantic web in its core model. The NGSI-LD model helps to cross-cut context information and precisely communicate the nature of context information for a given service, such as its period of validity, its geographical constraints and other relevant semantic information.
II. ARCHITECTURE OVERVIEW
This work presents a cloud-based SR according to the SOA. This goal is achieved via the combination of a multi-layered architecture and SOA based on three main layers: a data acquisition and translation middleware, generic and domain-specific middleware, and an open API, as shown in Figure 1.
Layer 1 is responsible for data acquisition, collection, translation and transmission to layer 2. It accommodates several standard protocols (such as IEC 61850 MMS, GOOSE and DNP3) and performs data translation. This layer supports the mapping of raw data coming from Intelligent Electronic Devices (IED) to a standardized data representation in the cloud.
Layer 2 is a combination of generic and domain-specific middleware that are loosely coupled to perform a distributed SR in the cloud. The scenario of this work uses services to manage, analyze time series and visualize information. The services in this layer are taken from the FIWARE catalogue and presented in the following. Layer 3 is a publicly available API that provides developers with open access to the back-end data.
Figure 2 represents the detailed mapping of middleware
with respect to the layered architecture explained before and
Fig. 3. Context element conceptual diagram [19].
presented in Figure 1. The yellow and green boxes in Figure 2 represent the first and second layer middleware, respectively. The IoT agent, shown in yellow, is located in layer 1 of the architecture and is used to collect and translate the data from the field. CrateDB, Quantum Leap, the Context Broker (CB), MongoDB, time-series visualization, and FLISR are the generic and specific services that are located in layer 2 of the architecture and shown in green.
In the following, the platform services are explained. Furthermore, SR is a domain-specific service implemented by composing FIWARE services. In particular, the embedding of SR into FIWARE services and the feasibility of the presented cloud-based platform for SR are analyzed in Sections III and IV, respectively.
A. FIWARE Services
The FIWARE framework is used as a central system to manage and store the data; moreover, this framework contains a list of open-source services to facilitate and accelerate the development of smart internet-related applications in various domains [19]. In the following, a brief overview of the services powered by FIWARE that are used in the SR platform is given.
The CB is a key service in the FIWARE catalogue. It eases the development and provisioning of innovative applications that require management, processing and exploitation of context information as well as data streams, in real time, at massive scale and in a distributed format. This broker is used to develop applications dealing with the publishing/subscribing of data through NGSI, which is currently a de-facto standard released by the Open Mobile Alliance (OMA) [21]. Contextual elements defined based on NGSI are referred to as entities. Entities are physical objects (e.g., sensors or actuators), hardware or software, as represented by the generic data structure in Figure 3.
Based on this conceptual definition, entities are uniquely identified by an ID. Each entity can have attributes related to its characteristics. These attributes have static or dynamic values and are represented by the triplet <name, type, value>. NGSI-LD includes semantic data representation within entities and their attributes and is published as a standard by ETSI [20]. NGSI-LD is expressive enough to connect and federate other existing information models, using JSON-LD. It is also compatible with RDF, so that triple stores and application logic, e.g. in SPARQL or DataCube software, can be applied. The NGSI-LD information model is defined at two levels, consisting of the core meta-model, i.e. the cross-domain ontology, and the domain-specific ontologies, as shown
in Figure 4. Based on different studies in the agricultural domain with several sensors [22], this conceptual definition of context information is compatible with IoT infrastructures and easily extendable.
Fig. 4. NGSI-LD information model [20].
Fig. 5. FIWARE IoT Platform Architecture [19].
In order to publish the data of physical devices (the IoT devices) through the cloud, NGSI-LD interfaces the IoT devices with the context management services. To accomplish this transaction of information, NGSI-LD uses a RESTful API via HTTP. The data itself is stored in an underlying MongoDB instance to which the CB refers. The CB thus provides storage-like access to the data; therefore, to make this service more powerful, other building blocks are considered. Each block has its own technological focus, dealing, for example, with the storage of time-series data (Quantum Leap and CrateDB), big data processing or visualization.
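As an illustration of this data structure, the following minimal sketch shows how the status of a circuit breaker could be modelled as an NGSI-LD entity in Python; the entity identifier, attribute names and relationship are hypothetical and serve only to exemplify the <name, type, value> structure and the linked-data context.

```python
# Minimal sketch of an NGSI-LD entity for a circuit breaker status.
# Identifier, attribute names and relationship are hypothetical.
circuit_breaker = {
    "id": "urn:ngsi-ld:CircuitBreaker:XCBR1",  # unique entity identifier
    "type": "CircuitBreaker",                  # entity type
    "status": {                                # attribute as <name, type, value>
        "type": "Property",
        "value": "closed",
        "observedAt": "2021-03-27T10:00:00Z",
    },
    "controlsLine": {                          # relationship to another entity
        "type": "Relationship",
        "object": "urn:ngsi-ld:Line:L1-L2",
    },
    "@context": [
        "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"
    ],
}
```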
IoT agents establish the communication of devices or smart resources with the cloud platform based on defined standard protocols. The FIWARE IoT platform contains two blocks: the IoT Backend and the IoT Edge. The IoT Backend focuses on the management of IoT devices by providing a list of functions and services in the cloud; it connects the physical objects with the platform. The IoT Edge is a protocol or standard adapter for all on-field IoT physical objects, connecting them with the IoT Backend. The IoT Edge offers the communication layer to the host environment. The overall view of the FIWARE IoT platform is presented in Figure 5.
B. Domain Specific Ontology
The previous work regarding a cloud-based platform for SR in active distribution grids is described in [23], which assesses
the SR as a cloud-based service. To integrate the semantic web into the implemented SR cloud-based platform, this article adopts the SARGON ontology [24], which extends the Smart Appliance Reference (SAREF) ontology [25] to cross-cut domain-specific information, representing the smart energy domain, and includes the joint management of building and electrical grid automation. The SARGON ontology is based on the IEC 61850 and CIM standards and is developed for real use cases such as the monitoring and control of electrical grids via IEDs, the automation of medium voltage distribution grids, the control of the energy demand in buildings, and energy management with residential/non-residential involvement.
Additionally, in the smart energy domain, the role of the CIM and IEC standards is undeniable. Therefore, referencing these two standards as a pattern is useful to extract the domain model and the components to monitor and protect smart grids. Indeed, it improves interoperability in the information layer.
Considering the aforementioned concept, the SARGON ontology is made of several interconnected domain ontologies that are linked into the core SARGON ontology. Figure 6 presents an overview of the SARGON ontology. In its core, different types of devices in the smart energy domain are defined, either grid- or building-related, to extend the SAREF ontology. Smart energy devices are the main building blocks in the SARGON ontology and add several properties and classes to SAREF, based on the mentioned standards CIM and IEC 61850.
Concerning the NGSI-LD information model presented in Figure 4, the SARGON ontology, developed for the smart energy domain, is taken as the domain-specific ontology. Furthermore, it is mapped into the cross-domain ontology of the NGSI-LD information model.
As a feasibility check, the assessment cases adopt the SARGON ontology to provision, govern, discover, and query the data of the smart grid network.
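As an illustration of such a discovery query, the following sketch uses the rdflib Python library to list device subclasses from a local copy of the ontology; the file name and the filter pattern are assumptions for illustration, not part of the assessment setup.

```python
# Sketch: discover grid-related device classes in the SARGON ontology.
# The ontology file name and the filter pattern are illustrative assumptions.
from rdflib import Graph

g = Graph()
g.parse("sargon.ttl", format="turtle")  # hypothetical local copy of the ontology

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?device WHERE {
    ?device rdfs:subClassOf ?parent .
    FILTER regex(str(?parent), "Device")
}
"""
for row in g.query(query):
    print(row.device)  # IRIs of the discovered device subclasses
```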
C. SR as a Domain Specific Service
To perform the SR functionality, this paper uses an implementation of a rule-based optimization (RBO) algorithm, specifically adapted for the cloud-based platform presented in this paper, which has been developed as a first reaction in critical situations [26]. This algorithm has proved to be fast and simple and, thanks to its adaptability to unstable grid conditions, suitable for High Impact Low Probability (HILP) events such as cascade faults, which cause multiple de-energized sections in the electrical network [1].
In this study, the RBO algorithm for service restoration of
active distribution grids has been implemented as a service. It
is applied to radial electrical grids that host Distributed Energy
Resources (DER) along the feeders and are fed through several
primary substations, assumed to behave as power sources,
energizing the loads connected along their own feeders. The
feeders of different primary substations are connected via
normally open switches, called bus-tie units. As soon as an
electrical fault (e.g. a short circuit) occurs in the grid, the
nearest switching devices, upstream and downstream, open to
isolate the fault area. After the fault is cleared, the service restoration is activated to re-energize the loads downstream of the fault area, which were disconnected due to the actuation of the protection scheme.
The RBO algorithm analyzes the priority factors of the de-energized loads, from the grid parameters data set, to select the node of the most crucial customer (e.g. hospitals, gas network pumps, critical infrastructures), which becomes the objective of the iterative restoration process. The type of demand in the network determines the criticality of the corresponding network node and, hence, the priority factor. Considering the
grid topology based on the actual statuses of the switches,
the algorithm identifies which is the most suitable primary
substation to re-energize the selected node with an alternative
path, by closing a normally open bus-tie unit. Each network
topology, obtained by closing a different bus-tie unit for
each substation in the network that can reconnect the target
node, is evaluated with a state estimation approach, which
considers as measurements the voltage at the slack bus and the power injections (active and reactive) at every other bus of the feeder [27], [28]. These measurements are ordinarily available for medium voltage distribution grids, whose nodes are associated with secondary substations. If the technical constraints are verified (voltage, thermal and network radiality), the restoration scheme having the minimum total power losses is implemented. Sections of the feeders that include
DERs assume a particular priority because they reduce the
power requested from the primary substation and facilitate
compliance with the technical constraints. After the selected
tie unit successfully closes, the process restarts and contin-
ues until all the de-energized loads are reconnected or the
constraints are violated. The algorithm can manage any network change that may occur during the restoration process and considers real-time measurements from field devices or forecast data, together with their accuracy, in the computation process. The algorithm is developed in the Python programming language, suitable for integration into the cloud-based architecture with the presented FIWARE services: it has been adapted to retrieve the necessary data from the CB and to provide the computed solution, according to a common data structure. As a specific service in the platform, it is completely independent of the grid topology and electrical parameters.
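The following condensed sketch illustrates the iterative logic described above; the data structures and the helpers for state estimation, constraint checking and loss computation are placeholders standing in for the actual implementation of [26].

```python
# Condensed sketch of the RBO restoration loop; the helper functions below
# are illustrative placeholders for the actual implementation described in [26].
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    priority: int
    energized: bool

def estimate_state(tie):            # placeholder for the state estimation step
    return tie

def satisfies_constraints(state):   # placeholder: voltage, thermal, radiality
    return True

def total_losses(state):            # placeholder for the total power losses
    return 0.0

def restore(nodes, open_bus_ties, close_switch):
    """Iteratively re-energize de-energized loads, most critical first."""
    while True:
        deenergized = [n for n in nodes if not n.energized]
        if not deenergized:
            break                                    # all loads reconnected
        target = max(deenergized, key=lambda n: n.priority)
        candidates = []
        for tie in open_bus_ties(target):            # ties able to reach the target
            state = estimate_state(tie)
            if satisfies_constraints(state):
                candidates.append((total_losses(state), tie))
        if not candidates:
            break                                    # constraints would be violated
        _, best = min(candidates, key=lambda c: c[0])  # minimum total power losses
        close_switch(best)                           # actuate via the CB
        target.energized = True
```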
III. CLOUD-BASED PLATFORM FOR SERVICE RESTORATION
The presented architecture is used to implement an auto-
mated cloud-based SR. The services introduced in the last
section are assembled in OpenStack, which is a free and open-
source software platform for cloud infrastructure management.
With respect to Section II, the data from measuring devices are sent to the platform via communication protocols such as MQTT, CoAP, etc. Since the CB supports HTTP and a RESTful API, data from measuring devices can be sent directly to the CB based on the SARGON ontology. The FIWARE IoT enabler, as an interface to the field, supports the data stream to the platform by translating and transferring the data of the measuring devices to the CB with different communication protocols.
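As an illustration of this direct transmission, the following sketch publishes a measurement entity to the CB through the NGSI-LD RESTful API using the Python requests library; the broker URL and the entity payload are illustrative assumptions.

```python
# Sketch: publish a field measurement directly to the Context Broker (NGSI-LD).
# Broker URL and entity identifiers are illustrative assumptions.
import requests

BROKER = "http://localhost:1026"  # hypothetical CB endpoint

measurement = {
    "id": "urn:ngsi-ld:Measurement:NodeB2",
    "type": "PowerMeasurement",
    "activePower": {"type": "Property", "value": 350, "unitCode": "KWT"},
    "@context": ["https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"],
}

resp = requests.post(
    f"{BROKER}/ngsi-ld/v1/entities",
    json=measurement,
    headers={"Content-Type": "application/ld+json"},
)
resp.raise_for_status()  # 201 Created on success
```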
Fig. 6. SARGON Ontology Network Structure [24].
In the test setup the main focus is on the cloud-based SR; therefore, the given architecture is validated in the simplest possible case. In this case, measurement values are provided according to the SARGON ontology to the cloud platform via Ethernet using the HTTP protocol. Since all interactions between the services are initiated by HTTP, the IoT enabler is omitted in this setup. Additionally, FIWARE includes many services for context data processing, analysis and visualization, which are however not used in the SR platform configuration.
proposed platform aims to enhance the data management
and the associated services for real distribution grids. Its
application consists of the integration within the existing DMS:
the deployed software components make use of a semantic
information model to efficiently integrate the field devices
and manage their data in a scalable, cloud-based platform that
supports on-demand workloads. This approach is particularly
relevant due to the increasing amount of data to be managed
by the Supervisory Control And Data Acquisition (SCADA)
systems of the grid operators. Several services to operate the distribution grids can be further integrated into the platform: our application focuses on the automated service restoration component, to efficiently re-energize the MV nodes disconnected due to fault occurrences.
A. IEC 61850 Object model in SARGON
IEC 61850 is the leading standard for substation automation; a survey can be found in [29]. IEC 61850 defines an object-oriented model of the data and processes required for system automation and has been primarily defined for the exchange of information within substations in part IEC
61850-7-4 [30]. Figure 7 presents the main structure of the
IEC 61850 information model. The main class is the server, hosted by physical devices, corresponding to the controller part of IEDs. The server can host one or more Logical Devices
Fig. 7. IEC 61850 Information Model [31].
(LD), which are virtual representations of devices designed for the supervision, protection or control of automation systems. LDs combine several Logical Nodes (LN), which define the device functionality interfaces. Data Objects (DO) are instances of a Common Data Class (CDC) from IEC 61850-7-3 or IEC 61850-7-2 [30]. Furthermore, the type and structure of the data for each CDC are described within the LN.
According to the aforementioned description of IEC 61850, a logical device class has been defined in the SARGON ontology to represent the subclass of devices for the grid-related domain. Furthermore, the hierarchical data model according to IEC 61850-7 as developed in SARGON is shown in Figure 8, which represents the various developed IEC 61850 elements. To adopt SARGON in the platform, a serialization into NGSI-LD is required, which has been performed manually and is accurately described here. For instance, "Relay1.XCBR1.ST.Pos.stVal" is sent to the CB, representing the status of a circuit breaker switch. It is constituted by the following components: Relay1: logical device, the protection relay related to the circuit breaker; XCBR1: logical node "circuit breaker" of instance "1"; ST: functional constraint for "status information"; Pos:
data object "position", accessed to verify the switch position; stVal: data attribute indicating the status value of the data.
Fig. 8. Data Representation based on IEC Standard Presented in Protege.
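The following small sketch, purely illustrative and not part of the platform code, decomposes such a reference string into its IEC 61850 constituents:

```python
# Illustrative decomposition of an IEC 61850 object reference (not platform code).
def parse_iec61850_reference(ref: str) -> dict:
    """Split 'LD.LN.FC.DO.DA' into its IEC 61850 components."""
    ld, ln, fc, do, da = ref.split(".")
    return {
        "logical_device": ld,         # e.g. the protection relay
        "logical_node": ln,           # e.g. XCBR1, circuit breaker instance 1
        "functional_constraint": fc,  # e.g. ST, status information
        "data_object": do,            # e.g. Pos, switch position
        "data_attribute": da,         # e.g. stVal, status value
    }

print(parse_iec61850_reference("Relay1.XCBR1.ST.Pos.stVal"))
```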
B. Service Restoration Middleware Control Flow
Figure 9 shows the overall control flow of SR: detecting
the fault via CB, performing the service restoration process
and sending the command to actuate the computed solution.
In this setup, both the CB and the SR services run in the
same network. Data are provided from the smart devices
in the electrical network to the platform via HTTP and are
formulated based on the data structure of CB.
The grid data include the switch information (status and tripped condition) and the real-time measurements necessary for the state estimation. The switch statuses are provided to the CB as changes occur, whereas the field measurements are updated every 2 seconds, in line with the cyclic polling rate from Remote Terminal Units (RTU) to SCADA systems in distribution networks [32]. In order to execute the state
estimation, static data are stored and accessible in the CB:
network model (topology and parameters of the lines) as well
as configuration and accuracies of measurement devices.
The CB is capable of receiving data and notifying an event when an acquired value matches a condition on a specific entity. In this specific implementation, the tripped condition of circuit breakers due to a fault in the grid is monitored to activate the SR. If network data that indicate the presence of a fault are updated in the CB, these values satisfy the notification condition: the CB sends a message to the SR service, which starts to process the information and retrieves the data related to the network from the CB. The SR algorithm continuously iterates to re-energize each disconnected load, until all possible
loads are reconnected or the safety electrical constraints are violated, and provides the solution to the CB.
Fig. 9. Control flow of the service restoration process.
Then, the CB
issues the network reconfiguration commands, i.e. closing of
specific switches, to the devices in the grid network. Once the
closing operation is successfully performed, data are updated
in the CB.
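A minimal sketch of a subscription that could realize this trigger, following the NGSI-LD subscription format, is given below; the entity type, the attribute condition and the notification endpoint are assumptions for illustration.

```python
# Sketch: NGSI-LD subscription notifying the SR service when a breaker trips.
# Entity type, attribute condition and endpoint are illustrative assumptions.
import requests

subscription = {
    "id": "urn:ngsi-ld:Subscription:fault-detection",
    "type": "Subscription",
    "entities": [{"type": "CircuitBreaker"}],
    "q": "tripped==true",                            # notify only on tripped breakers
    "notification": {
        "endpoint": {
            "uri": "http://sr-service:8000/notify",  # hypothetical SR callback
            "accept": "application/json",
        }
    },
    "@context": ["https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"],
}

requests.post(
    "http://localhost:1026/ngsi-ld/v1/subscriptions",
    json=subscription,
    headers={"Content-Type": "application/ld+json"},
).raise_for_status()
```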
C. Electrical Grid Emulation
The electrical network is emulated through the Real Time
Digital Simulator (RTDS) with its NovaCor hardware plat-
form. The dedicated hardware components guarantee the accurate simulation of grid behavior, i.e. the measurements of
electrical quantities that are exchanged with the implemented
software platform. Grid modelling is carried out via the dedicated software RSCAD: in addition to the power system components (such as electrical loads, distributed generators, switches, etc.), the IEDs and RTUs are emulated. These
devices publish the grid measurements (node voltages and
power injections) and switches information (tripping condition
and position status) into the communication network, reaching
the interface layer of the platform. In particular, the Gigabit
Transceiver Network (GTNET) card of RTDS provides a
real-time communication link to and from the simulator via
Ethernet. Similarly, the network reconfiguration commands
(as outcomes of the performed SR algorithm) are issued to
the emulated RTUs/IEDs and, consequently, to the modelled
switching devices.
The setup also includes industrial protection relays for the detection of electrical faults and, consequently, the coordination to isolate the fault area through the tripping of nearby (upstream and downstream) switches. In particular, the tests are conducted
by integrating two ABB REF615 feeder protection and control
relays, equipped with IEC 61850 communication feature. The
grid measurement is emulated by injecting current, from a
controlled source, in the Current Transformer (CT) termi-
nals of one relay: as the value overcomes the pre-defined
threshold, the overcurrent protection is triggered and the IEC
61850 GOOSE message is published, via the logical node
PTOC (time overcurrent). The message is subscribed by the
GTNET card of RTDS, issuing a tripping command for the
circuit breaker associated with the relay (upstream of the fault).
Additionally, the same GOOSE message is subscribed by
the second relay, which publishes its own GOOSE message
via the logical node PTRC (protection trip conditioning), in
order to accomplish the fault zone isolation and similarly
open the associated switch (downstream of the fault) in the grid model. The complete setup of the implemented hardware and software components, together with the exchanged data flows, is represented in Figure 10.
The setup has been implemented in order to replicate the operational behavior toward the DMS in a real electrical grid. Particular focus has been placed on the deployment of standard communication protocols for the automation of electrical grids, specifically IEC 61850 and CIM: the use of these data models allows interfacing realistic gateways with the implemented platform. This aspect is also ensured by the integration of industrial protection relays for the coordination of fault management actions. The accuracy of the measured quantities, reflecting the network behavior in real time, is guaranteed by the high quality of the professional RTDS system. The described setup allows obtaining precise and reliable statistical information related to fault occurrences and the consequent responses for each feature of the deployed platform.
IV. ASSESSMENT CASES
The analysis aims at evaluating the latency of the cloud-based platform to handle SR in case of a fault occurrence in the grid network; the communication networks of the grid itself are not considered, and only the latency of the FIWARE services supporting SR is tested. All tests were performed fifty times to statistically characterize the latency between services to activate SR. The CB performs not only the data publishing but also the context information management to handle SR. Therefore, the reported time to activate SR depends on the grid size and pattern. In the following, a description of the grids used in this evaluation is presented.
A. First Case: 40 Nodes Network
Each node of the electrical grid includes a primary substation, a passive load or a DER, whereas the electrical lines, which connect two nodes, can host switching devices.
Figure 11 shows the single line diagram of the electrical grid used to perform the test [33]. It represents a medium voltage distribution grid at 13.8 kV, with four primary substations whose feeders are connected by normally open tie units, indicated with white squares in the figure. The black squares indicate the normally closed switches. Each node hosts a load, in the range from 100 kW to 1 MW, with the exception of nodes I2 and L1, to which DERs of 200 kW and 250 kW, respectively, are connected. The overall data of the network are reported in [26]; they are mapped according to the contextual information model of the CB as described in Section II.
The tripped condition of the circuit breakers is the monitored attribute which, when set, indicates the presence of a fault and determines the activation of the SR with a notification by the CB. Once the SR middleware is initiated, the algorithm retrieves from the CB the information regarding the topology of the network, the statuses (open/close) of the switching devices, the parameters of the electrical lines, the power consumption/generation and the priority indexes of the loads.
In the test case, a fault is assumed to occur at node A1.
Hence, the protection system opens the switches indicated by numbers 1, 2, 3 and 19, as shown in Figure 11, to isolate the fault area. The switch statuses are updated in the CB; among them, the information about the tripped breakers satisfies the CB subscription for fault occurrence and the SR is activated. The SR retrieves all the network data from the CB; then the algorithm determines the loads that became de-energized (indicated by green circles in Figure 11). Considering the loads' criticality, B2 is the one with the highest priority; therefore, it is considered as the target node for the restoration. There are two feasible restoration paths which preserve the radiality of the network: either from substation SE 2, by closing switch 7, or from substation SE 3, by closing switch 10. Since the latter option guarantees lower power losses, this restoration topology is implemented: the SR provides the closing command, related to switch 10, to the CB via the HTTP protocol. This command is sent to the switch in the grid by the CB, which in the subsequent collection of measurements verifies that the closing operation has been successfully performed and updates the CB table. It is worth mentioning that, since the nodes B1 and B2 are electrically connected, the closing of switch 10 re-energizes both of them.
Then, the algorithm retrieves the network data from CB and
repeats the analysis of the disconnected loads, for which C2
becomes the target node. Its restoration can be performed by
Fig. 10. Setup of the implemented platform with the emulated electrical grid.
Fig. 11. Network model for the first test case.
substation SE 4, by closing switch 17, or by substation SE 2, by closing switch 6. In this case, the restoration topology from SE 2 is the most convenient. Once the closed status of switch 6 is updated in the CB, the RBO algorithm terminates because all the possible loads have been reconnected. The reconnection of the loads inside the fault area (A1, A2 and A3) involves the inspection/repair of the fault and is beyond the scope of the service restoration.
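As an illustration, the closing command for switch 10 could be expressed as an NGSI-LD attribute update on the corresponding entity, as sketched below with hypothetical entity and attribute names.

```python
# Sketch: issue a switch-closing command as an NGSI-LD attribute update.
# Entity ID and attribute name are hypothetical.
import requests

BROKER = "http://localhost:1026"  # hypothetical CB endpoint
entity_id = "urn:ngsi-ld:Switch:10"

command = {
    "status": {"type": "Property", "value": "closed"},
    "@context": ["https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"],
}

requests.patch(
    f"{BROKER}/ngsi-ld/v1/entities/{entity_id}/attrs",  # partial attribute update
    json=command,
    headers={"Content-Type": "application/ld+json"},
).raise_for_status()
```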
B. Second Case: 400 Nodes Network
This scenario considers a larger network model, composed of 400 nodes. Figure 12 represents this model, in which the basic network model refers to the grid reported in Figure 11 and used for the first test case: starting from its nodes S3 and I3, nine subgrid blocks are connected to each other in series.
Each subgrid block is composed of one substation, 39 passive loads in the same range from 100 kW to 1 MW, and 21 switches. As in the first test case, in this scenario the fault occurs at node A1; the restoration from any of the primary substations in the added subgrid blocks is not possible, since the radiality of the network would be lost.
Hence, the restoration proceeds as explained in the previous
paragraph. This larger grid model has been introduced to test
Fig. 12. Expanded network model for the second test case.
the feasibility of the algorithm to manage a network having
greater dimensions and to evaluate the communication latency.
V. SERVICE RESTORATION PLATFORM EVALUATION
To investigate the validity of the proposed setup, the communication network latency is recorded: it corresponds to the time that elapses between sending data to the CB and the detection of the fault occurred in the electrical network, the consequent activation of the SR middleware, the computation of the restoration solution provided to the CB, and the implementation of the closing commands in the field devices. Table I reports the time between the data transmission to the CB and the initiation of the SR due to a fault condition in the grid; it presents detailed measurements of the average communication network latency for the two grid networks.
Fig. 13. Time chart of the SR process (CB receives data and notifies the fault: 1-2 s; computation of SR: 4-100 s; closing/opening switches for SR and update of the CB: 1-2 s; power restored to downstream healthy sections of feeder).
TABLE I
NETWORK LATENCY TO ACTIVATE SR
Number of Nodes   Min       Max       Average   STDEV
40                0.136 s   0.192 s   0.157 s   0.015 s
400               1.145 s   1.481 s   1.319 s   0.101 s
With respect to an implementation in the field, the communication latency to receive data from meters, relays and IEDs is not included; this latency is anyway negligible with respect to the presented results. The internal network latency of the cloud infrastructure is reduced by running the CB and SR in the same network. Therefore, the major latency is the time to publish data into the CB and the event detection.
The fourth column shows a difference of over 1 second between the two grid networks; the comparison between the two test cases demonstrates a relation between the communication time of the services and the grid network size. As indicated in [32], the central control system typically receives data from remote terminal units every 2-4 seconds for high-priority data. This is in line with the proposed SR platform, confirming its feasibility.
According to the test setup, the effect of the grid network size on the latency is evident. In the future, to reduce the computation time of SR in large grid networks, several SR instances can be deployed with a load balancer to distribute the workload in the platform.
Moreover, the processing time of the SR algorithm is measured: starting from the activation of the SR algorithm, it includes the provision of the computed solution to the CB and the verification that the specific switches successfully implemented the received closing commands. The computation time needed for applying SR in the faulty grid network is reported in Table II.
The computation time of SR depends on the size of the grid and varies between 4 and 100 seconds in the test setup with the networks having 40 and 400 nodes, respectively. Figure 13 represents the timing of the cloud-based SR integrated into the FIWARE services, for which, according to the results of the time measurements, the CB needs around 1-2 seconds to notice the fault presence from the data collected in the electrical grid.
TABLE II
COMPUTATION TIME OF SR IN DIFFERENT TEST CASES
Number of Nodes   Min        Max       Average   STDEV
40                4.846 s    5.574 s   5.21 s    0.013 s
400               84.121 s   88.78 s   86.45 s   0.123 s
Fig. 14. Post-fault timing diagram in automated SR (stages: fault occurrence, opening of the upstream circuit breaker, fault isolation, opening of the upstream sectionalizer, reclosing of the upstream circuit breaker with power restored to the upstream healthy section of feeder, closing/opening switches for SR with power restored to the downstream healthy sections of feeder; indicative times: 300 ms, 80 ms, < 50 s, < 200 s).
Figure 14 presents the timing measured in smart grids that deploy advanced distribution automation, as reported in [34]. According to the given information, the duration of the entire FLISR process is estimated at 5 minutes. Since the initial step of the presented test is the acquisition of grid data, for which the fault isolation has already been performed by opening the nearest switches, only the SR operation should be considered, whose duration is evaluated at 3 minutes [34]. Hence, with respect to this time range, the computation time results of the proposed implementation confirm the suitability for a field implementation. Moreover, [35] reports the values of the reliability indices of European networks in 2016; for the German electrical grid, the CAIDI value, indicating the average time required to restore the service in case of interruption, corresponds to 40 minutes. The shorter computation time obtained in the conducted test confirms the feasibility of deploying the service restoration to reduce the duration of power outages, decreasing the CAIDI and consequently the SAIDI indices, and improving the reliability of utilities in automated grids.
VI. CONCLUSIONS
This paper presents a cloud-based platform powered by
FIWARE and based on SOA for the re-energization of loads
after the occurrence of electrical faults in active distribution
grids. Embedding the required FIWARE services with the implemented SR as a domain-specific service leads to an easy setup and to benefiting from the properties of cloud-based solutions, such as on-demand computation power. Furthermore, the platform integrates the semantic information model and the functional requirements from the SARGON ontology for the semantic provisioning, governing, discovering, and querying of the smart grid network, based on the IEC 61850 standard for substation automation. The conducted tests and the results obtained on different grid sizes confirm the viability of integrating the proposed platform with the DMS of distribution grids, to efficiently manage the continuously increasing amount of data handled by SCADA systems.
In fact, the recorded communication network latency and the
SR computation time are in line with standard values of smart
automated systems, for which FLISR contributes considerably
to increasing grid reliability.
In further developments, in addition to the SR, different services for the smart energy system can be included. In particular, new services for network operations may pose additional requirements on computation time and data availability, for which additional features of the platform within the DMS
(as load balancers or distributed approaches) are needed.
Moreover, the coupling with cross-sectorial activities by the
system operators will introduce the necessity to integrate
new communication standards and data models within the
architecture components.
ACKNOWLEDGEMENT
This work was supported by the project HYPERRIDE,
funded by the European Union's Horizon 2020 research and
innovation programme under Grant Agreement No. 957788.
REFERENCES
[1] M. Panteli and P. Mancarella, “The grid: Stronger, bigger, smarter?: Presenting a conceptual framework of power system resilience,” IEEE Power and Energy Magazine, vol. 13, no. 3, pp. 58–66, May 2015.
[2] “IEEE Guide for Electric Power Distribution Reliability Indices,” IEEE
Std 1366-2012 (Revision of IEEE Std 1366-2003), pp. 1–43, May 2012.
[3] A. Zidan, M. Khairalla, A. M. Abdrabou, T. Khalifa, K. Shaban, A. Abdrabou, R. E. Shatshat, and A. M. Gaouda, “Fault detection, isolation, and service restoration in distribution systems: State-of-the-art and future trends,” IEEE Transactions on Smart Grid, vol. 8, no. 5, pp. 2170–2185, 2017.
[4] A. Zidan and E. F. El-Saadany, “A cooperative multiagent framework
for self-healing mechanisms in distribution systems,” IEEE Transactions
on Smart Grid, vol. 3, no. 3, pp. 1525–1539, Sep. 2012.
[5] P. Parikh, I. Voloh, and M. Mahony, “Fault location, isolation, and
service restoration (FLISR) technique using IEC 61850 GOOSE,” in
2013 IEEE Power Energy Society General Meeting, pp. 1–6.
[6] L. Duy Phuc, B. Duong Minh, N. Cao Cuong, and L. Anh My Thi,
“FLISR approach for smart distribution networks using E-Terra software
- a case study,” Energies, vol. 11, no. 12, 2018.
[7] R. Al-Khannak, “Conceptual development of redundant power system
philosophy by grid computing,” Bolton University and South Westphalia
University of Applied Sciences, 2012.
[8] H. Bai, Z. Ma, and Y. Zhu, “The application of cloud computing in
smart grid status monitoring,” in Internet of Things. Springer, 2012,
pp. 460–465.
[9] K. P. Birman, L. Ganesh, and R. Van Renesse, “Running smart grid
control software on cloud computing architectures,” Proc. Workshop
Comput. Needs Next Gener. Elect. Grid, pp. 1–28, April 2011.
[10] P. Balamuralidhara, P. Misra, and A. Pal, “Software platforms for internet of things and M2M,” Journal of the Indian Institute of Science, vol. 93, no. 3, pp. 487–498, 2013.
[11] J. Mineraud, O. Mazhelis, X. Su, and S. Tarkoma, “A gap analysis of
internet-of-things platforms,” Computer Communications, vol. 89, pp.
5–16, 2016.
[12] H. Zhou, The internet of things in the cloud: a middleware perspective.
CRC press, 2012.
[13] D. Guinard, V. Trifa, S. Karnouskos, P. Spiess, and D. Savio, “Interacting with the SOA-based internet of things: Discovery, query, selection, and on-demand provisioning of web services,” IEEE Transactions on Services Computing, vol. 3, no. 3, pp. 223–235, July 2010.
[14] S. Zhao, Y. Zhang, and J. Chen, “An ontology-based IoT resource model for resources evolution and reverse evolution,” in International Conference on Service-Oriented Computing. Springer, 2012, pp. 779–789.
[15] R. Santodomingo, S. Rohjans, M. Uslar, J. A. Rodríguez-Mondéjar, and M. A. Sanz-Bobi, “Facilitating the automatic mapping of IEC 61850 signals and CIM measurements,” IEEE Transactions on Power Systems, vol. 28, no. 4, pp. 4348–4355, 2013.
[16] R. Santodomingo, J. A. Rodríguez-Mondéjar, and M. A. Sanz-Bobi, “Using semantic web resources to translate existing files between CIM and IEC 61850,” IEEE Transactions on Power Systems, vol. 27, no. 4, pp. 2047–2054, 2012.
[17] C.-W. Yang, V. Dubinin, and V. Vyatkin, “Automatic generation of
control flow from requirements for distributed smart grid automation
control,” IEEE Transactions on Industrial Informatics, vol. 16, no. 1,
pp. 403–413, 2019.
[18] H. Bergmann, C. Mosiman, A. Saha, S. Haile, W. Livingood, S. Bushby,
G. Fierro, J. Bender, M. Poplawski, J. Granderson et al., “Semantic
interoperability to enable smart, grid-interactive efficient buildings,”
Lawrence Berkeley National Lab.(LBNL), Berkeley, CA (United States),
Tech. Rep., 2020.
[19] FIWARE, “FIWARE-NGSI v2 specification.” [Online]. Available: http://fiware.github.io/specifications/ngsiv2/stable
[20] ETSI, “NGSI-LD API,” ETSI GS CIM 009, January 2019. [Online]. Available: https://www.etsi.org/deliver/etsigs/CIM
[21] Open Mobile Alliance, “Next generation service interfaces approved version,” May 2012. [Online]. Available: http://www.openmobilealliance.org
[22] R. Martínez, J. Pastor, B. Álvarez, and A. Iborra, “A testbed to evaluate the FIWARE-based IoT platform in the domain of precision agriculture,” Sensors, vol. 16, no. 11, p. 1979, 2016.
[23] M. Haghgoo, A. Dognini, and A. Monti, “A cloud-based platform
for service restoration in active distribution grids,” in 2020 6th IEEE
International Energy Conference (ENERGYCon), 2020, pp. 841–846.
[24] M. Haghgoo, I. Sychev, A. Monti, and F. H. Fitzek, “SARGON – smart energy domain ontology,” IET Smart Cities, vol. 2, no. 4, pp. 191–198, 2020.
[25] L. Daniele, M. Solanki, F. den Hartog, and J. Roes, “Interoperability
for smart appliances in the iot world,” in International Semantic Web
Conference. Springer, 2016, pp. 21–29.
[26] A. Dognini and A. Sadu. [Online]. Available: https://www.fein-aachen.org/en/projects/rbosr/ (accessed on 3 July 2019).
[27] A. Abur and A. G. Exposito, Power System State Estimation : Theory
and Implementation. CRC Press, Mar. 2004.
[28] R. Stengel, Optimal Control and Estimation, ser. Dover Books on
Mathematics. Dover Publications, 2012.
[29] M. A. Aftab, S. S. Hussain, I. Ali, and T. S. Ustun, “IEC 61850 based substation automation system: A survey,” International Journal of Electrical Power & Energy Systems, vol. 120, p. 106008, 2020. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0142061520304646
[30] International Electrotechnical Commission, “IEC 61850: Communication networks and systems for power utility automation,” IEC standards, parts 1–10, edition 2.0, 2011. [Online]. Available: https://webstore.iec.ch/home (accessed on 27 March 2021).
[31] S. Cavalieri, “Semantic interoperability between IEC 61850 and oneM2M for IoT-enabled smart grids,” Sensors, vol. 21, no. 7, p. 2571, 2021.
[32] J. Northcote-Green and R. Wilson, Control and Automation of Electrical
Power Distribution Systems. CRC Press, 2007.
[33] N. R. M. Fontenele, L. S. Melo, R. P. S. Leao, and R. F. Sampaio,
“Application of multi-objective evolutionary algorithms in automatic
restoration of radial power distribution systems,” in 2016 IEEE Confer-
ence on Evolving and Adaptive Intelligent Systems (EAIS), May 2016,
pp. 33–40.
[34] E. Shirazi and S. Jadid, “Autonomous self-healing in smart distribution
grids using multi agent systems,” IEEE Transactions on Industrial
Informatics, 2018.
[35] Council of European Energy Regulators, CEER Benchmarking Report
6.1 on the Continuity of Electricity and Gas Supply, July 2018.