Energy and AI 6 (2021) 100106
Available online 29 July 2021
2666-5468/© 2021 Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license.
Semantic 3D City Database — An enabler for a dynamic geospatial
knowledge graph
Arkadiusz Chadzynski a, Nenad Krdzavac a, Feroz Farazi c, Mei Qi Lim a, Shiying Li b,
Ayda Grisiute b, Pieter Herthogs b, Aurel von Richthofen b, Stephen Cairns b, Markus Kraft a,c,d,
aCambridge Centre for Advanced Research and Education in Singapore (CARES), CREATE Tower, 1 Create Way, Singapore, 138602, Singapore
bSingapore-ETH Centre, CREATE Tower, 1 Create Way, Singapore, 138602, Singapore
cDepartment of Chemical Engineering and Biotechnology, University of Cambridge, Philippa Fawcett Drive, West Site, CB3 0AS Cambridge, UK
dSchool of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, Singapore, 637459, Singapore
Keywords:
Urban planning
Semantic web
Knowledge graph
Decision support system
Artificial intelligence
Geospatial modelling
Geospatial search
Abstract

This paper presents a dynamic geospatial knowledge graph as part of The World Avatar project, with an
underlying ontology based on CityGML 2.0 for three-dimensional geometrical city objects. We comprehensively
evaluated, repaired and refined an existing CityGML ontology to produce an improved version that could pass
the necessary tests and complete unit test development. A corresponding data transformation tool, originally
designed to work alongside CityGML, was extended. This allowed for the transformation of original data into a
form of semantic triples. We compared various scalable technologies for this semantic data storage and chose
Blazegraph™ as it provided the required geospatial search functionality. We also evaluated scalable hardware
data solutions and file systems using the publicly available CityGML 2.0 data of Charlottenburg in Berlin,
Germany as a working example. The structural isomorphism of the CityGML schemas and the OntoCityGML
Tbox allowed the data to be transformed without loss of information. Efficient geospatial search algorithms
allowed us to retrieve building data from any point in a city using coordinates. The use of named graphs and
namespaces for data partitioning ensured the system performance stayed well below its capacity limits. This
was achieved by evaluating scalable and dedicated data storage hardware capable of hosting expansible file
systems, which strengthened the architectural foundations of the target system.
1. Introduction
General context of the paper — problem space
Development of sustainable digitisation practices is widely recognised as an important part of roadmaps at organisational, industry [1],
national [2] as well as international levels [3]. Radermacher [4] points
to the fact that global governing bodies, such as the UN, G20 and
the World Bank, all agree on the importance of adopting digitisation standards for achieving international comparability. Despite the
complexity of existing standards and the time-consuming adoption
and implementation of information systems into a digital form, those
bodies agree that the benefits greatly outweigh the costs. Radermacher
[4] also notices that roadmaps strengthening comparability include
technological solutions designed with the intention to support systems
interoperability [5].
Corresponding author at: Department of Chemical Engineering and Biotechnology, University of Cambridge, Philippa Fawcett Drive, West Site, CB3 0AS
Cambridge, UK.
E-mail address: (M. Kraft).
The World Avatar (TWA) is an all-encompassing dynamic knowledge graph. It is built on agent-based system [2,6] architectural principles as well as semantic web standards and recommendations provided
by the W3C. The system can be regarded as an example of a general knowledge graph, capable of multi-domain knowledge representation [2,7–14]. Addressing inter-domain interoperability problems is at its core.
Designed as one of the critical TWA components, the J-Park Simulator (JPS) [15–20] includes representations of built environments and agent-based subsystems capable of simulating emissions dispersion from various types of air pollution sources as well as optimising
designs of Eco-Industrial Parks (EIPs) with respect to their carbon
footprint [21].
The system architecture for the Semantic 3D City Database proposed in this paper aims at closing some of the gaps, particularly
related to current built environment representation within JPS and
TWA. The gaps, elaborated on in the next three paragraphs, could be
Received 4 June 2021; Received in revised form 16 July 2021; Accepted 17 July 2021
generally found in other publicly available city models with geospatial
information. Basing the architecture on reusable Open Source components and standardised interfaces encourages wider adoption throughout other information systems that require a scalable and interoperable
three-dimensional representation of such environments.
Cities and geospatial information
Existing City Information Models (CIM) already integrate large urban datasets in order to represent multiple aspects of cities such as the built environment, energy management, transport, etc. [22]. Linking across domains and securing scalability are key challenges to
developing urban Digital Twins (DT) [23]. Representing cities as three-dimensional models of built environments is a crucial step in adding more urban data and knowledge [24] to representations of urban environments as they are developed and planned.
One of the common ways of 3D modelling for built environments in
various information systems is to use the CityGML standard, provided
by the Open Geospatial Consortium (OGC) [25]. This can be used as a
data exchange standard for city landscape management and planning
systems or even as a file-based data source for applications visualising
3D city landscapes on the web. It is possible to encode information
about different domains within this format through domain-specific extensions as well as combine purely geospatial concerns with any others in order to analyse or support decisions regarding digitised urban design blueprints. A digital twin of the Manchester landscape in CityGML
2.0, with solar irradiation projected on the roofs of the buildings [26],
is just one of the plethora of examples currently available on the web.
However, developing applications built on static files lacks flexibility.
Apart from compliance with standards, flexibility is a key ingredient
to achieving interoperability [5]. It also keeps standards-based systems
open to future innovation. Dynamism of the stored data is required
to be able to perform simulations under various conditions and hot
swap certain information in representations on demand. Moreover,
dynamic representation allows the gaps in static city models arising
from the constant evolution of the entities within built environments
to be addressed.
The open source 3D City Database developed at the Technische Universität München (TUM) was meant to close some of those gaps [27].
3D City Database is a suite of tools to transform data encoded in
flat CityGML 2.0 files into a more flexible database format and to
store and visualise 3D city landscapes. It has been under development
since 2003 [28]. The flagship examples, which showcase how to store
and visualise city data with the help of those tools, are models of
Berlin and New York in Level Of Detail 2 (LOD2). This approach
demonstrates a possibility of storing city data and adhering to the
CityGML 2.0 standard in a different way than by using static XML
files. However, the relational database backends of this solution limit the
implementation of semantic data interoperability. While some authors
have reported on first attempts to use graph databases for geospatial
data [29], discussions contain predominantly general ideas and partial
results [30]. Adopting a semantic data store allows for turning a bare
3D City Database into a knowledge base [6] with inference, truth
maintenance and reasoning engines. This makes the resulting Semantic
3D City Database an enabler for the dynamic geospatial knowledge
graph in TWA.
Research at the University of Geneva, focused on the other side of the problem spectrum, led to the production of a CityGML ontology. It turned out to be possible to generate an ontology, with one-to-one matching between concepts, by applying XSLT transformations to the original CityGML 2.0 schemas. The ontology produced by applying those techniques could
be regarded as a step towards bringing the standard and applications
providing semantic interoperability together. However, closer examination of the available ontology reveals a number of issues concerning
its quality and, because of that, suitability to be used as a Tbox
(an ontology schema) for reliable applications adhering to semantic
web standards and recommendations. Apart from that, there is also
a lack of data transformation tools that would allow population of
the ontology with data and produce instances for an Abox. Schema
and instances are equally needed to build any application able to
operate three-dimensional geospatial data. Such applications also need
to provide reliable geospatial search functionality with acceptable data
retrieval times [31]. Although there are semantic triple stores implementing geospatial search, there is a lack of examples of semantic
web applications operating on the multitude of geometries required
to represent entire cities. Any software application satisfying such
requirements needs appropriate hardware to facilitate this specific
functionality. Fully semantic 3D city database architecture definitions,
bringing together all the above-specified components in order to provide foundations for semantic applications operating on dynamic city
models, are not currently available.
The purpose of this paper is to present such an architecture definition as well as the steps necessary to produce proof of concept solutions
that address the mentioned problems. The ontology refinement process
and its evaluation with regard to quality and correctness, required to
ensure Tbox reliability, are elaborated on in the following Section 2.
Next, the use of refined ontology concepts in the process of augmenting
data transformation tools is presented, based on existing open source
solutions (Section 3). Evaluation of existing semantic data stores, carried out before producing an Abox by utilising terms of the refined ontology, is described in Section 4. The final Section 5 is devoted
to presenting estimated hardware requirements, as a result of evaluating geospatial functionality on tens of thousands of buildings with
over 2000 different types of two-dimensional and three-dimensional
geometrical shapes.
2. Refined ontology for CityGML
Producing a proof of concept Semantic 3D City Database required
ensuring reliability of the ontology, used as its schema, in the first
step. As a result, following the general principle of reusability, such a schema – OntoCityGML – is based on an existing ontology reflecting the
CityGML 2.0 standard and developed at the University of Geneva [32,
33]. Because of the methods used to produce it, it is referred to as
CityGML ontology in the following subsections, where the steps and
methodology undertaken in the process of refining it to a version
suitable for the proof of concept solutions are elaborated on.
Apart from cross-checking the ontology with domain experts, the
main set of common tools utilised during the refinement process consisted of the Protégé ontology editor, the HermiT reasoner and the OntoDebug plugin for Protégé. Accuracy issues, detected and highlighted by the
editor, were resolved by manually eliminating all properties in the
ontology that caused accuracy problems. Evaluation for conciseness
required insight from domain experts, which allowed the detection of
all of the entities (classes and properties) in the ontology that were irrelevant to the TWA domain. Some of these entities had to be removed from
the ontology using the Protégé editor. Similarly, domain expert knowledge and experience was required to check whether all the terms
implemented in the ontology cover the TWA domain. Coherence and
Computational efficiency tests required the use of more specific plugins
for the editor. Consistency was checked by using HermiT reasoner
integrated into Protégé.
2.1. An evaluation of the CityGML ontology
The publicly available CityGML ontology [32], which served as a
base for OntoCityGML ontology, was developed as a result of applying
XSLT transformations to CityGML 2.0 schema [25] as well as some
manual mapping needed to generate it [32]. The ontology implements
185 classes, 281 object properties and 92 data properties. It also contains 1254 implemented axioms. Categorisation of criteria used during
its evaluation in order to check suitability for the proof of concept
presented in this paper is enumerated in Table 1. The suite of tools
and plugins available in the Protégé ontology editor [34] was utilised
during this process.
The following errors were reported by those tools in the CityGML
ontology, and then manually fixed during evaluation, based on all of
the above metrics. First, it did not pass the Accuracy test. There were
a number of Illegal redeclarations of entities: reuse of entity errors. For
instance, the term year of construction, also present in the original Charlottenburg CityGML 2.0 data, was implemented as owl:ObjectProperty and owl:DatatypeProperty entities at the same time. More than fifty errors of this nature were reported by the editor. Second, the Conciseness test checked whether the base ontology defined entities that
were irrelevant to the domain to be covered [35]. Conducting it on
the CityGML ontology revealed entities that were irrelevant as ontology
elements with regard to the TWA domain. For example, the term class was implemented as owl:ObjectProperty for entities which were irrelevant to the TWA domain. The base ontology contained more than ten entities of
this nature. Third, the Completeness test measured whether the domain
of interest was appropriately covered [35]. It revealed that the CityGML
ontology did not completely cover the TWA domain. For instance, a term
such as envelope, essential for the proof of concept, was defined in the
CityGML 2.0 open data model but was not implemented in the CityGML
ontology [32]. Fourth, the HermiT [36] reasoner revealed the Coherence
and Consistency of the CityGML ontology. Finally, the Computational efficiency test showed that the expressivity of Description Logics (DLs) of
the CityGML ontology was equivalent to  ()DLs. The described
assessment ensured that the CityGML ontology could serve as a good
starting point for the development of an OntoCityGML ontology and
provide some foundations for creating a dynamic geospatial knowledge graph.
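The Accuracy failure described above, entities declared with more than one type, amounts to a simple consistency check over the ontology's declarations. The following is a hypothetical sketch of such a check; the declaration list and term names are illustrative, not extracted from the actual CityGML ontology file.

```python
# Hypothetical sketch of the "Accuracy" check described above: find terms
# declared both as an owl:ObjectProperty and an owl:DatatypeProperty
# (the "illegal redeclaration: reuse of entity" error reported by the editor).
# The declaration list is illustrative, not read from the real ontology file.

def find_redeclared_terms(declarations):
    """declarations: iterable of (term, kind) pairs.
    Returns the set of terms declared with more than one kind."""
    kinds_by_term = {}
    for term, kind in declarations:
        kinds_by_term.setdefault(term, set()).add(kind)
    return {term for term, kinds in kinds_by_term.items() if len(kinds) > 1}

declarations = [
    ("yearOfConstruction", "owl:ObjectProperty"),
    ("yearOfConstruction", "owl:DatatypeProperty"),  # the reported clash
    ("boundedBy", "owl:ObjectProperty"),
]
print(find_redeclared_terms(declarations))  # {'yearOfConstruction'}
```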
2.2. An evaluation of the OntoCityGML ontology
The OntoCityGML ontology, which served as a Tbox [37] for the proof of concept Semantic 3D City Database, is an extension of the CityGML ontology and a result of resolving the previously mentioned issues. As in
the case of the base ontology, tools and plugins available for the Protégé
ontology editor [34] were used for its evaluation. The OntoCityGML ontology implements 344 classes, 272 object properties and 78 datatype
properties. It also contains 3363 implemented axioms.
The Computational efficiency test shows that the expressivity of the OntoCityGML ontology DLs is equivalent to  ()DLs. Due to the DLs’
expressivity of the ontology falling between DL-Lite [38] and 
DL [39], the OntoCityGML cannot be used to query city data stored
in relational databases by the means of the ontology-based data access
(OBDA) technologies [40]. The HermiT reasoner is able to classify the
OntoCityGML ontology. In debugging mode, it also detects that this
ontology is Consistent and Coherent. The OntoCityGML ontology fully
passed Accuracy,Conciseness and Completeness tests as well. Protégé
does not show any errors related to the illegal declaration of entities
or reuse of entities.
To cover the TWA domain appropriately and to the extent needed
for the proof of concept, sixty-nine new terms were implemented
into the OntoCityGML ontology. The terms were checked for one-to-one correspondence between the implementation in the ontology and
the CityGML 2.0 specification [25]. The list of terms corresponds to
the unique list of CityGML 2.0 tags found in the Charlottenburg–
Wilmersdorf data used in this proof of concept. OntoCityGML axioms relevant to this list are included in Appendix A.
Additionally, each of the new terms implemented in the OntoCityGML was covered by unit tests. This step ensures that any further
changes to the OntoCityGML ontology preserve its structure. Sample
test cases are included in Appendix B. Furthermore, the OntoCityGML ontology has been placed under the git version control system
in order to make tracking such changes transparent.
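A minimal sketch of the kind of structure-preserving unit test described above: parse an ontology serialisation and assert that a required term is declared. The OWL/XML fragment below is a toy stand-in for the real OntoCityGML Tbox, and "EnvelopeType" is a hypothetical class name echoing the envelope term mentioned earlier; the real test suite works against the actual ontology file.

```python
# Toy sketch of a unit test guarding ontology structure. The OWL/XML
# fragment and class names are hypothetical stand-ins for OntoCityGML.

import xml.etree.ElementTree as ET

OWL_NS = "{http://www.w3.org/2002/07/owl#}"

OWL_XML = """<?xml version="1.0"?>
<Ontology xmlns="http://www.w3.org/2002/07/owl#">
  <Declaration><Class IRI="#EnvelopeType"/></Declaration>
  <Declaration><Class IRI="#BuildingType"/></Declaration>
</Ontology>"""

def declared_classes(owl_xml):
    """Collect class names from <Declaration><Class IRI="#..."/> elements."""
    root = ET.fromstring(owl_xml)
    return {cls.attrib["IRI"].lstrip("#")
            for decl in root.iter(OWL_NS + "Declaration")
            for cls in decl.findall(OWL_NS + "Class")}

# The structural guarantee: any further change must keep the term declared.
assert "EnvelopeType" in declared_classes(OWL_XML)
```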
3. Augmented data transformation tools
For the proof of concept, data validation mechanisms of the augmented Importer/Exporter tool, originating from TUM [27], were used
to transform CityGML 2.0 data to the Semantic 3D City Database [37]
which uses OntoCityGML terms to describe city models. In this process,
every CityGML object is validated by the tool prior to its instantiation
to a corresponding Java object. The data transformation process is
described in Section 3.1.
3.1. 3D City database Importer/Exporter tool
Depending on the level of detail, CityGML models can form quite
complex and, when measured by the average present-day computing
capabilities, relatively large datasets. An architecture of any application
designed to work with such models needs to be developed with this
in mind. In the particular case of the semantic geospatial knowledge graph, it has to ensure the efficiency of SPARQL queries on an
Abox as well as balance performance and optimal data storage. While
citygml4j [41], an open source Java class library and API, provides a
very good start to work with CityGML 2.0 models programmatically,
there is a lack of tools that would be able to turn such models into
semantic triples forming an Abox of a geospatial knowledge graph.
After exploring options, the closest existing data transformation tool able to fulfil such requirements is the 3D City Database Importer/Exporter [42]. It is also based on citygml4j and is available as an open
source project. The TUM tool is optimised to work with large CityGML
2.0 models and uses multithreading to read the data, transform it
and write it into a database [27]. Hence, it is more computationally
efficient than a raw library when the potential number of city model
objects being processed simultaneously is taken into consideration. This
matters in the case of large and detailed models. The unmodified tool
supports Oracle and PostGIS relational databases and makes use of
the Java Database Connectivity (JDBC) API with respective database
connectors. This particular design allowed the reuse and augmentation
of large parts of its code to work with Semantic 3D City Database based
on a non-relational graph triple store.
In order to augment the tool in such a way, Jena JDBC, a SPARQL-over-JDBC driver framework [43], was utilised for the proof of concept presented in this paper. Its Remote Endpoint driver was chosen, connecting to a SPARQL Protocol compliant triple store that exposes SPARQL query and SPARQL update endpoints. Adding the new driver
allowed augmentation of the tool and preservation of its original functionality. The majority of the new features were added into two of
the tool’s original code packages: impexp-client and impexp-core. Modifying the first allowed the addition of components enabling the selection of the new database type, while augmenting the second made the tool capable of establishing a connection to a triple store via JDBC. This required adding a new database backend adapter and incorporating five new classes into the foundation codebase.
Namely, BlazegraphAdapter, GeometryConverterAdapter, SchemaManagerAdapter, SQLAdapter and UtilAdapter classes had to be implemented, at
minimum, for the tool to be able to facilitate connectivity to a semantic
triple store with the new driver.
As the Importer/Exporter tool was originally designed to work with relational databases at a very high level, it validated the CityGML models before instantiating model members into Java objects, which were, in turn, persisted in a database by means of corresponding SQL statements. For the tool to be fit for purpose when used as a
data transformation tool with the Semantic 3D City Database, the last
step had to be augmented with the functionality to generate equivalent
SPARQL statements for the respective Java objects. Preserving the current data structures leveraged the many years of development spent on query and storage optimisation at TUM while developing and fine-tuning the tool over that time [27].
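The augmented last step, emitting a SPARQL statement where the original tool emitted SQL, can be sketched as follows. This is an illustrative reimplementation in miniature, not the actual Importer/Exporter code, and all IRIs below are hypothetical placeholders.

```python
# Illustrative sketch of the augmented persistence step: instead of an SQL
# INSERT, emit an equivalent SPARQL INSERT DATA statement placing one set of
# same-subject triples into a named graph. All IRIs are hypothetical.

def to_sparql_insert(graph_iri, subject_iri, properties):
    """Build a SPARQL INSERT DATA statement for one subject."""
    triples = " ;\n      ".join(
        f"<{pred}> {value!r}" for pred, value in properties.items()
    )
    return (
        "INSERT DATA {\n"
        f"  GRAPH <{graph_iri}> {{\n"
        f"    <{subject_iri}> {triples} .\n"
        "  }\n"
        "}"
    )

stmt = to_sparql_insert(
    "http://example.org/citieskg/building/",  # hypothetical named graph
    "http://example.org/citieskg/building/BLDG_000300000007a403",
    {"http://example.org/ontocitygml/yearOfConstruction": "1905"},
)
print(stmt)
```

In the real tool the equivalent statements are filled in as JDBC prepared statements, so the statement text is built once and the object data bound per CityGML feature.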
Fig. 1. CityGML 2.0 data transformation tool augmented to support Blazegraph™ as a data store back-end. In the depicted menu, connection to the semantic database was established and the tool is ready to start importing the data. In this augmented version of the Importer/Exporter tool [27], city model data will be imported via executing SPARQL statements with OntoCityGML vocabulary against the semantic data store, instead of SQL statements against a relational database with the predefined 3DCityDB schema, as in the original version. The original functionality is preserved and relational database types can still be used.
In order to produce a semantic twin of the 3D City Database
representation for the Charlottenburg-Wilmersdorf district of Berlin,
the following classes of the org.citydb.citygml.importer.database.content
module had to be modified to work with SPARQL JDBC prepared
statements instead of SQL JDBC prepared statements: DBCityObject,
icSurface. New methods generating SPARQL prepared statements were
added to each of these classes and covered by appropriate unit tests.
The existing code was augmented to fill in those statements with
CityGML objects data when the semantic backend is specified as a
chosen option for the 3D City Database Importer/Exporter tool. As
depicted in Fig. 1, the modified tool is able to produce the Semantic
3D City Database Abox, using OntoCityGML as a Tbox. It also pro-
duces a semantic ‘‘mirror twin’’ of the relational 3D City Database,
with additional properties specific to semantic knowledge bases. Those
properties are described in more detail in the next subsection.
3.2. Relational schema to graph mapping
The heart and soul of much mathematics consists of the fact that the
‘‘same’’ object can be presented to us in different ways [44]. The present
section elucidates this statement while taking into consideration the
results of data transformations, which are outcomes of the augmented
data transformation tool. The augmented Importer/Exporter is able to
produce the original database as well as the Semantic 3D City Database.
Structural isomorphism of the 3D City Database and its semantic twin,
illustrated in Fig. 2, shows their equivalence.
Table 1
Categorisation of ontology evaluation criteria.
Source: Updated from [35].

Evaluation perspective: Metrics
Ontology correctness: Completeness; Consistency & Coherence
Ontology quality: Computational efficiency

There is one-to-one correspondence [45] between the schemas of the proof of concept databases, in terms of the number of schema objects as well as their names: 𝑅𝐷𝐵 ≅ 𝑆𝐷𝐵. Both make use of names defined in the CityGML 2.0 conceptual schema. The number of tables and graphs is equal in both databases. Graph names are equivalent to table names when underscores are removed from table names. For each table and a graph with the corresponding name, the number of columns in the table is equal to the number of edge names in the graph. There exists an
equivalent graph edge name in a graph with name equivalent to a table
name, for each table column name, when underscores are removed
from table names and column names. The number of rows in each table
is equal to the number of sets of triples sharing the same subject in
the graph with the equivalent name. For each value of each row of
each table, there exists an equivalent graph vertex, in a graph with the
name equivalent to that table name, which is an endpoint of a graph
edge with a name equivalent to a column name to which this value
belongs in that table, when underscores are removed from table names
and column names. In other words, the above describes morphisms 𝑓: 𝑅𝐷𝐵 → 𝑆𝐷𝐵 and 𝑔: 𝑆𝐷𝐵 → 𝑅𝐷𝐵, where 𝑔 is an inverse of 𝑓. The
established isomorphism stays in accordance with the transformation
approach described by Tim Berners Lee as one of the ways of publishing
data from relational databases as linked data [46].
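The morphisms 𝑓 and 𝑔 can be sketched on a toy example: tables become named graphs (underscores removed), columns become edge names, and rows become same-subject triple sets. Table and column names below are illustrative, subjects stand in for IRIs, and the sketch assumes no two columns collapse to the same edge name.

```python
# Toy sketch of f: RDB -> SDB and g: SDB -> RDB as described in the text.

def f(tables):
    """Map {table_name: [ {column: value} ]} to a set of quads
    (graph, subject, edge, value). Graph/edge names drop underscores."""
    quads = set()
    for table, rows in tables.items():
        graph = table.replace("_", "")
        for i, row in enumerate(rows):
            subject = f"{graph}/{i}"  # stands in for an IRI
            for column, value in row.items():
                quads.add((graph, subject, column.replace("_", ""), value))
    return quads

def g(quads, schema):
    """Inverse map, given the original schema {table: [columns]}."""
    tables = {t: {} for t in schema}
    graph_to_table = {t.replace("_", ""): t for t in schema}
    for graph, subject, edge, value in quads:
        table = graph_to_table[graph]
        row = tables[table].setdefault(subject, {})
        for column in schema[table]:
            if column.replace("_", "") == edge:
                row[column] = value
    return {t: list(rows.values()) for t, rows in tables.items()}

schema = {"city_object": ["gml_id", "name"]}
rdb = {"city_object": [{"gml_id": "BLDG_01", "name": "town hall"}]}
assert g(f(rdb), schema) == rdb  # g is an inverse of f on this example
```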
The open world assumption (OWA) in the semantic representation [37] is what makes it different from the relational database representation with closed world assumption (CWA) [47]. An analogy could
be drawn to the 10 = 9.99999... equation. It is possible to say much
more about the world when one has the realm of real numbers at hand
to describe it – as on the right side of the relation – than it is when
one is left to do so with only decimal numbers at hand – as on the
left side of the relation. The same statement holds when considering
representations built on elements listed in Table 2.
Fig. 2. An example of structural isomorphism illustrating equivalence of the city model representations within the original 3D City Database and its semantic twin. Named graphs
of addresses and buildings correspond to relational database tables with the same names. IRIs identifying entities correspond to table records with unique and sequential IDs as
primary keys of the original database. Address to building binding occurs by linking data via their IRIs, instead of via records containing appropriate foreign keys in the original
binding table.
Table 2
High level overview of building blocks of a dynamic city model representation as well as their implementations within the original 3D City Database and its semantic twin (from left to right).

Building block: 3D City Database | Semantic 3D City Database
Schema: SQL database schema | OntoCityGML Tbox
Features: Tables | Named graphs
Features’ properties: Columns | OntoCityGML predicates
Instance representation: Rows | Same subject triple sets
Instance data: Rows’ values of each column | Triples’ objects
Instance IDs: Sequential numbers | URIs
Instance relations: Numerical foreign keys | URI subjects
Query language: SQL | SPARQL
The following example illustrates one of the potential consequences
of CWA and OWA when considering relational versus semantic 3D
City Databases used to build the proof of concept described in this
paper. There is a clear impact on data interoperability improvement
while working with heterogeneous datasets under the OWA. Both
databases contain information concerning a building identified by gml
id BLDG_000300000007a403, which could be also found in the origi-
nal CityGML 2.0 representation of Charlottenburg-Wilmersdorf down-
loaded from
s/download-portal/ for the purpose of building this proof of concept.
Both databases also contain information about an address, identified by
the gml id UUID_76daf80a-2fef-443d-88bb-b9bc0c24fffb belonging to
this building. While querying for information concerning the building
and its address in both databases, it is possible to find that this building
is in Berlin at 36 Tauroggener Str. One can also find out that the
building is bounded by a polygon specified by the following set of coordinates: {384781.38838, 5820924.88493, 33.54, 384789.35344, 5820924.88493, 33.54, 384789.35344, 5820938.34387, 36.13122, 384781.38838, 5820938.34387, 36.13122}, as well as that it has one roof specified by another polygon with a slightly different set of coordinates, in both databases.
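The envelope term discussed in Section 2 can be illustrated with these very coordinates: a bounding envelope is simply the per-axis minimum and maximum over a geometry's (x, y, z) points. A small sketch using the four corner points quoted above:

```python
# Compute the bounding envelope of the polygon quoted in the text.

corners = [
    (384781.38838, 5820924.88493, 33.54),
    (384789.35344, 5820924.88493, 33.54),
    (384789.35344, 5820938.34387, 36.13122),
    (384781.38838, 5820938.34387, 36.13122),
]

def envelope(points):
    """Return ((min_x, min_y, min_z), (max_x, max_y, max_z))."""
    return (tuple(min(p[i] for p in points) for i in range(3)),
            tuple(max(p[i] for p in points) for i in range(3)))

lower, upper = envelope(corners)
print(lower)  # (384781.38838, 5820924.88493, 33.54)
print(upper)  # (384789.35344, 5820938.34387, 36.13122)
```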
However, the relational database address record for the building
contains NULL in place of the country, whereas the semantic database
contains a BLANK NODE as a vertex, which is an endpoint of the
country edge connected to the address of the building on the other side.
In the case of the relational database, under the CWA, this could be
interpreted as ‘‘The building with this address does not belong to any
country’’. In the case of its semantic twin and under the OWA, on the
other hand, it is possible to interpret that as ‘‘It is not known to which country the building with this address belongs’’.
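The distinction just described can be sketched in a few lines: under the CWA an absent fact is treated as false, while under the OWA it is merely unknown. The record below mirrors the NULL / blank-node country example, with None standing in for both; field names are illustrative.

```python
# Sketch of CWA vs OWA interpretation of a missing country value.

address = {"city": "Berlin", "street": "Tauroggener Str.", "country": None}

def cwa_in_some_country(record):
    # CWA: NULL means "does not belong to any country"
    return record["country"] is not None

def owa_in_some_country(record):
    # OWA: a blank node (here None) means "country not known", not "no country"
    return True if record["country"] is not None else "unknown"

print(cwa_in_some_country(address))  # False
print(owa_in_some_country(address))  # unknown
```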
This would matter if both databases were integrated into a bigger
system, like TWA, and turned into knowledge bases. For example,
one could imagine a subsystem integrating and making interoperable
multiple different information sources coming from a few neighbouring
European countries in order to find out the most suitable roof locations
for solar panels. It is not hard to imagine that some of those
information sources would not contain any geospatial information
about the buildings, but would rather make possible retrieval of some
information for buildings by postal address. Geospatial search, as one of
the features of the dynamic geospatial knowledge graph in TWA, would
allow retrieval of information concerning buildings with one roof in a square area spanning those countries and a comparison of this information with the one retrieved from the information sources containing no
geospatial information whatsoever.
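The kind of geospatial retrieval described above can be sketched as a query rectangle intersected with building footprints (reduced here to 2D bounding boxes). Names and coordinates are illustrative; a production triple store such as Blazegraph™ answers such queries through purpose-built geospatial indexes rather than the linear scan shown here.

```python
# Toy bounding-box search over building footprints (illustrative data).

def intersects(box_a, box_b):
    """Axis-aligned boxes given as (min_x, min_y, max_x, max_y)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

buildings = {
    "BLDG_A": (0, 0, 10, 10),
    "BLDG_B": (50, 50, 60, 60),
}

def search(query_box, footprints):
    """Return the names of buildings whose footprint meets the query box."""
    return sorted(name for name, box in footprints.items()
                  if intersects(box, query_box))

print(search((5, 5, 55, 55), buildings))  # ['BLDG_A', 'BLDG_B']
```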
It would not be possible to easily integrate those systems and make
them interoperable under the CWA. On the one hand, one would end
up with the information containing the one roof buildings in a few
countries, specified by the coordinates of the square. On the other
hand, one would end up with buildings with either no country, under
the CWA, or unknown country, under the OWA. It would be easier
to add the country information to such buildings by narrowing down
the geospatial search to the country level after that and filling in the
missing information under the OWA.
This way, it would be possible to say that ‘‘The building identified
by gml id BLDG_000300000007a403 is in Germany’’ and integrate this
information with other systems, which do not contain any geospatial in-
formation for European buildings. However, under the CWA, one would
end up with two contradictory statements: ‘‘The building identified by
gml id BLDG_000300000007a403 is in Germany’’, and ‘‘The building
identified by gml id BLDG_000300000007a403 does not belong to any
country’’. Contradictory statements contain no information and, there-
fore, it is possible to say much more about the world under the OWA.
Adding the new information ‘‘The building with the address identified
by the gml id BLDG_000300000007a403 is in Germany’’ to a CWA system
changes that system and invalidates previous inferences,
whereas no such thing takes place in OWA systems. This
makes them more flexible and allows heterogeneous data sources to be made
interoperable with less computational overhead and fewer inferencing errors.
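The difference between the two assumptions can be sketched in a few lines of Python. This is an illustration only, not the paper's tooling; the building identifier comes from the example above, while the triple layout and helper names are simplified assumptions:

```python
# Illustrative sketch of CWA vs. OWA query answering over a tiny triple set.
# A blank node records that a country edge exists, but its value is unknown.
BLANK = object()  # stands in for the semantic database's blank node

triples = {
    ("BLDG_000300000007a403", "hasAddress", "addr"),
    ("addr", "country", BLANK),
}

def address_of(ts, building):
    return next(o for s, p, o in ts if s == building and p == "hasAddress")

def cwa_in_country(ts, building, country):
    """Closed world: whatever is not stated is taken to be false."""
    return (address_of(ts, building), "country", country) in ts

def owa_in_country(ts, building, country):
    """Open world: an unstated fact is unknown (None), not false."""
    if (address_of(ts, building), "country", country) in ts:
        return True
    return None  # 'it is not known to which country the building belongs'
```

Adding the triple `("addr", "country", "Germany")` later is monotonic under the OWA (the answer moves from unknown to true), whereas under the CWA it overturns the previously inferred false.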
4. Semantic 3D city data store
Results from working on the OntoCityGML ontology and augmented
data transformation tool, described in previous sections, ensured the
possibility of city model representations compliant with W3C and OGC
standards at the same time. In order to obtain this result, promising
when looked at from the point of view of sustainable digitalisation
practices, realised in a form of a dynamic geospatial knowledge graph,
Semantic 3D City Database had to be created within a scalable and W3C
compliant triple store as well. To keep the architecture open to further
collaborations and maximise the potential of its reuse and innovative
modifications in the future, during the proof of concept stage of the
database, research presented here was focused on open source stores.
From the scalability point of view, the store must be capable of
accommodating city data in the form of semantic triples. Furthermore, it
Energy and AI 6 (2021) 100106
A. Chadzynski et al.
must be possible to add more data without significantly reducing the
performance of geospatial queries. In addition to that, the store must be
capable of ensuring multidomain interoperability by allowing city data
to be linked with any other data in a semantic form and be queried for
such relationships. The following describes the results of the research,
briefly summarised in Fig. 3, as well as motivations of the final triple
store technology choice as a target solution for this proof of concept.
Considering Eclipse RDF4J, an open source framework for process-
ing Resource Description Framework (RDF) data [48], as a triple store
for the Semantic 3D City Database was motivated mainly by its relative
popularity as well as familiarity. It would allow the implementation
of TWA data interoperability within a dynamic geospatial knowledge
graph without the need for data migration. At the moment, there are a
few existing TWA components that use it as a data store backend. The
framework has a modular architecture. It is composed of a parser/writer
API, model API, repository API and a storage and inference layer
(SAIL) API. Its repository API supports SPARQL 1.1 query and update
language. Different core database implementations are also supported,
such as memory store, native store and elastic search store. On top of
these core databases, the RDF4J API can be extended with SPARQL
Inferencing Notation (SPIN) rule-based reasoning functionalities [49].
The RDF4J framework implements GeoSPARQL functions, but it fails
almost all of the GeoSPARQL benchmark tests [50]. The SAIL interface
can be successfully used for communication between the RDF4J frame-
work and an Apache HBase database in order to process petabytes of
heterogeneous RDF data [51]. However, limited geospatial support as
well as limited out of the box scalability motivated further research on
the triple store of choice for the semantic geospatial database storing
city models.
Another open source SPARQL server project, Apache Jena Fuseki,
was also considered because of its relative popularity and familiarity
(use of and interoperability with TWA components). It has been used
as a triple store backend for some of TWA components as well. Users
can run it as an operating system service, Java web application or a
standalone server. The server follows the SPARQL 1.1 protocol to query
and update RDF data [52]. It also provides a graph store protocol [53].
A Fuseki SPARQL server evaluation test showed that it is too slow
for intensive production use [54]. Apache
Jena Fuseki supports an HTTP server component that conforms to
the GeoSPARQL standard [55]. A GeoSPARQL compliance benchmark
test used thirty benchmark requirements to prove that Jena Fuseki
can handle geographical vector data representation literals. The Jena
Fuseki server supports top level spatial and topological relation vocab-
ulary components, as well as Resource Description Framework Schema
(RDFS) entailment [50]. Because geospatial search support has been
set as an essential requirement for the dynamic geospatial knowledge
graph, the lack of it, as well as limited scalability, motivated further
research on the triple store of choice for the Semantic 3D City Database
proof of concept.
Although the spatiotemporal store Strabon, provided as an open
source project by the University of Athens, has never been used with
TWA or any of its components before, it caught initial attention during
research on semantic data stores for the dynamic geospatial knowledge
graph as well. This was partly for the reasons discussed before: it
uses the familiar RDF4J backend as one of its components. Moreover,
the store provides rich geospatial support and an implementation
of GeoSPARQL, stSPARQL, GML and WKT literals [56]. Its architecture
is also based on using named graphs to separate data. Strabon
is known to show good performance on a single machine and on
synthetic datasets [57]. The novelty of the underlying stRDF model and
stSPARQL query language consists of adding temporal extensions to
the semantic representations [58]. Although the mentioned query language
and model have not yet entered the realm of W3C standards,
from their authors’ perspective they provide a major advantage over
pure GeoSPARQL. However, in the context of TWA, a time-varying
knowledge graph was already considered during the development of
its Parallel World Framework [21], which was conceptualised to address
a much broader problem spectrum; within TWA, that approach will be
explored further instead. Strabon scales up to 500 million
triples [58]. This is less than double the estimated number of triples
required to transform the entire Berlin CityGML 2.0 data available at
the moment. It would be hard to link other datasets with the city data
and provide a sufficiently rich multi-domain interoperability for TWA
under those limits. Moreover, apart from the limited scalability of the
pure RDF4J backend, there are still a number of open problems related
to scalability and inferencing with stSPARQL and its underlying data
model [49]. Some of them are solved by using PostGIS in addition
to RDF4J within the system. However, the Semantic 3D City Database is a
proof of concept demonstrating the possibility of utilising fully non-
relational data stores for geospatial representations, so that they can be
incorporated into dynamic geospatial knowledge graphs. Strabon also
appears less than production-ready for larger deployments, due to its
relatively rudimentary documentation [59] when compared to other
RDF stores. An additional point is the lack of technical community forums
that would allow users to find answers to common questions and
help with resolving any arising issues. Geospatial search as a
feature is also not mentioned in the currently available documentation
or publications concerning the store.
Blazegraph™ is an active open source project and a triple store that
met the requirements listed at the beginning of this section. It is a W3C
compliant semantic data store released under a GPL-2.0 Licence [60].
Because of its compliance with standards, it proved to be relatively easy
to migrate other TWA data to this triple store as well, even if it had not
been used within the system before. The latest stable version, 2.5.1, was
released on the 19th of March 2019. The latest release candidate, 2.6.1,
was released on the 4th of February 2020. Blazegraph™ is in production
use at Fortune 500 companies such as EMC and Autodesk, as well as at the
Wikimedia Foundation’s Wikidata Query Service. The
Charlottenburg–Wilmersdorf CityGML 2.0 LOD2 dataset contains 20,570
buildings; its semantic transformation results in 24,244,610 triples, materialised and stored
in Blazegraph™ across multiple named graphs in a single namespace.
Blazegraph™ supports up to 50 billion edges on a single machine and
it seems to be capable of accommodating the whole Berlin city data,
which is split into 12 parts (Charlottenburg–Wilmersdorf being one
part). Assuming that each part contains between 20,000 and 25,000
buildings, there will be a need to accommodate between 240,000 and
300,000 buildings in order to semantically represent the whole of
Berlin. Assuming uniform complexity of the buildings in other parts
of the city, it would result in between 282,873,427 and 353,591,784
semantic triples generated. This is still quite far from reaching the
single machine limit and shows the possibility of integrating various
additional layers of heterogeneous data to complement the city data
and achieve high levels of multi-domain data interoperability within a
general knowledge graph, such as TWA.
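The whole-Berlin estimate above follows from simple linear extrapolation of the Charlottenburg–Wilmersdorf figures, assuming (as the text does) uniform building complexity across the remaining districts:

```python
# Extrapolate the whole-Berlin triple count from the Charlottenburg-
# Wilmersdorf sample, assuming triples scale linearly with building count.
SAMPLE_BUILDINGS = 20_570     # buildings in the transformed district
SAMPLE_TRIPLES = 24_244_610   # triples materialised for that district

def estimate_triples(n_buildings: int) -> int:
    return SAMPLE_TRIPLES * n_buildings // SAMPLE_BUILDINGS

low = estimate_triples(240_000)    # 12 parts x ~20,000 buildings
high = estimate_triples(300_000)   # 12 parts x ~25,000 buildings
# low  -> 282,873,427 triples
# high -> 353,591,784 triples
```

Both bounds sit well below the stated 50-billion-edge single-machine limit, which is what leaves headroom for linking additional heterogeneous data layers.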
Blazegraph™ supports geospatial search via SPARQL queries. Partitioning
data into namespaces allows for query optimisation by executing
queries on smaller portions of data, potentially in parallel. Using
named graphs for different parts of city objects (i.e. walls, roofs, etc.)
allows querying smaller graphs within namespaces independently. In-
formation resulting from such independent queries could be combined
into information about larger objects, as well as various interdepen-
dencies between such objects. The linked data approach allows OntoC-
ityGML buildings’ data to be combined with other semantic data, either
stored within one and the same namespace, or across named graphs in
separate namespaces. Federated queries are supported by Blazegraph™
too. Scale-out and High Availability features are available in the Enterprise
editions of Blazegraph™, which also support GPU query optimisation,
amongst many other features. Transactions, very high concurrency and
very high aggregate IO rates are supported in all of its editions.
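As a rough illustration of how such a geospatial search is expressed, the sketch below assembles a SPARQL query against Blazegraph's geospatial SERVICE in its customFields mode. The datatype IRI, predicate IRI, field names and bounds are hypothetical placeholders, not values taken from the deployed system:

```python
# Illustrative only: assemble a SPARQL query against Blazegraph's geospatial
# SERVICE in customFields mode. All IRIs, field names and bounds below are
# hypothetical placeholders, not values taken from the deployed system.
GEO_PREFIX = "PREFIX geo: <http://www.bigdata.com/rdf/geospatial#>"

def custom_fields_query(datatype_iri, predicate_iri, fields, lower, upper):
    """Search for subjects whose custom geospatial fields lie within bounds;
    Blazegraph separates multi-field values with '#'."""
    f, lo, up = "#".join(fields), "#".join(lower), "#".join(upper)
    return f"""{GEO_PREFIX}
SELECT ?cityObject WHERE {{
  SERVICE geo:search {{
    ?cityObject geo:predicate <{predicate_iri}> .
    ?cityObject geo:searchDatatype <{datatype_iri}> .
    ?cityObject geo:customFields "{f}" .
    ?cityObject geo:customFieldsLowerBounds "{lo}" .
    ?cityObject geo:customFieldsUpperBounds "{up}" .
  }}
}}"""

QUERY = custom_fields_query(
    "http://example.org/datatype/SOLID-3-15-15-15-15-15-15",  # hypothetical
    "http://example.org/hasCoordinates",                      # hypothetical
    ("X", "Y", "Z"),
    ("13.28", "52.49", "0"),
    ("13.34", "52.53", "50"),
)
```

The query string would then be posted to the store's SPARQL endpoint like any other query.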
In conclusion, the Blazegraph™ triple store has been used for this proof
of concept because of its scalability as well as the geospatial search
algorithms already implemented as part of its functionality. Enabling
Fig. 3. Triple stores taken under consideration while selecting the most suitable data storage, providing scalable geospatial search functionality for the Semantic 3D City Database.
Pros and cons of each solution are listed next to their logos. Blazegraph™ was selected as the most promising one.
this functionality in Blazegraph™ required the development of a custom
vocabulary class as well as a datatype configuration properties file.
Due to the variety of the geometrical shape types found in the proof
of concept data, it was not possible to create such a configuration
manually. Therefore, a functionality of automatically creating such
datatype configurations as well as corresponding vocabulary items was
added to the augmented TUM data transformation tool’s
GeometryConverterAdapter class. Together with a newly introduced
BlazegraphConfigBuilder class, based on a thread-safe singleton design pattern, the tool
detects any new shape type not previously encountered in the data and
creates appropriate configurations based on its geometrical properties.
Those properties are also encoded in vocabulary item names to make
it easier to break down the stored data into underlying geometries.
This way, for instance, it is possible to find out that coordinates
stored under the datatype ending with SOLID-3-15-15-15-15-15-15, as
the last part of the IRI, contain information about a cube. The first
number – 3 – describes the dimensionality of the stored geometry
type. The rest of the numbers say that the stored data consists of 6
parts of such a cube and each piece describes a polygon in terms
of 15/3 = 5 points encoded in a coordinate system. This algorithm
allowed detection and building of datatype configurations automat-
ically for over 2000 different geometrical shape types found in the
Charlottenburg–Wilmersdorf LOD2 building data. Those were required
to fully complete and materialise the Semantic 3D City Database proof
of concept, enabling the possibility of dynamic geospatial knowledge
graph components within TWA. Sample live query results performed
on the Charlottenburg–Wilmersdorf data, with the Semantic 3D City
Database already integrated into TWA, and particularly relevant to the
city planning, are presented in the next section together with hardware
requirements, estimates and recommendations.
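The datatype naming scheme described above can be decoded mechanically. A minimal sketch, assuming only the suffix layout the text describes (shape name, dimensionality, then one coordinate count per surface part):

```python
def decode_geometry_datatype(suffix: str) -> dict:
    """Decode a vocabulary suffix such as 'SOLID-3-15-15-15-15-15-15':
    the shape name, the dimensionality, and one coordinate count per part,
    where each part encodes count/dimensionality points of a polygon."""
    shape, dim, *counts = suffix.split("-")
    dim = int(dim)
    counts = [int(c) for c in counts]
    return {
        "shape": shape,
        "dimension": dim,
        "parts": len(counts),
        "points_per_part": [c // dim for c in counts],
    }
```

Decoding the cube example from the text yields a 3-dimensional solid with 6 parts of 5 points each.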
5. Hardware requirements
The Semantic 3D City Database, populated with Charlottenburg–Wilmersdorf
LOD2 building data by transforming the original CityGML 2.0
representation using the augmented Importer/Exporter tool and the
OntoCityGML ontology, was integrated into TWA, enabling dynamic
geospatial knowledge graph capabilities within the wider system. Cities
Knowledge Graph (CKG) is a TWA subsystem under active development
and research [61] in collaboration between the Cambridge Centre for
Advanced Research and Education in Singapore (CARES) [62] and the
Singapore-ETH Centre (SEC) [63]. A decision support system for Smart
City planning, based on the CKG, is a showcase of sustainable
urbanisation practices aided by the sustainable digitisation
practices described in the previous sections. Results of sample
questions, important from the city planning perspective, translated into
the appropriate SPARQL queries executed against the CKG are listed
in Tables 3 and 4. The number of query solutions, as well as elapsed
time, are listed next to each query.
The system is deployed to a server running Microsoft Windows Server
2016 Standard as an operating system, with 1 TB of storage space, 200
GB of RAM and 2 Intel® Xeon® E5-2620 v3 @ 2.40 GHz CPUs. Out
of the total 200 GB, 32 GB of RAM is assigned solely to Blazegraph™,
deployed in Nano SPARQL Server mode. The current journal file
used by it to store the available city data is 6.13 GB in size. System performance
on the type of queries illustrated in Tables 3 and 4 is satisfactory in
single-user mode.
The following are the results of executing multiple geospatial search
queries simultaneously on the CKG, using the same hardware. This
incremental geospatial search concurrency test follows the first few
numbers of the Fibonacci Sequence, each multiplied by 10.
• 10 concurrent geospatial search queries completed in 611 ms, increasing overall system CPU utilisation by 3% and memory utilisation by 0%. Queries returned 3913 city objects contained in variable square size areas in total.
• 20 concurrent geospatial search queries completed in 1617 ms, increasing overall system CPU utilisation by 4% and memory utilisation by 0%. Queries returned 28 653 city objects contained in variable square size areas in total.
• 30 concurrent geospatial search queries completed in 2771 ms, increasing overall system CPU utilisation by 8% and memory utilisation by 0%. Queries returned 52 196 city objects contained in variable square size areas in total.
• 50 concurrent geospatial search queries completed in 3959 ms, increasing overall system CPU utilisation by 9% and memory utilisation by 0%. Queries returned 76 689 city objects contained in variable square size areas in total.
• 80 concurrent geospatial search queries completed in 5766 ms, increasing overall system CPU utilisation by 10% and memory utilisation by 0%. Queries returned 110 318 city objects contained in variable square size areas in total.
• 130 concurrent geospatial search queries completed in 9274 ms, increasing overall system CPU utilisation by 11% and memory utilisation by 0%. Queries returned 184 496 city objects contained in variable square size areas in total.
• 210 concurrent geospatial search queries completed in 12 292 ms, increasing overall system CPU utilisation by 11% and memory utilisation by 0%. Queries returned 255 711 city objects contained in variable square size areas in total.
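The incremental load pattern above can be reproduced with standard-library threading. A sketch under stated assumptions: the query sender is left as any zero-argument callable, since the endpoint and query payloads are deployment-specific:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fibonacci_levels(n: int, scale: int = 10) -> list:
    """Concurrency levels used above: the first n Fibonacci numbers
    (1, 2, 3, 5, 8, 13, 21, ...), each multiplied by `scale`."""
    levels, a, b = [], 1, 2
    for _ in range(n):
        levels.append(a * scale)
        a, b = b, a + b
    return levels

def run_concurrent(n: int, send_query):
    """Fire n copies of send_query at once; return results and elapsed ms."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n) as pool:
        results = list(pool.map(lambda _: send_query(), range(n)))
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return results, elapsed_ms
```

In the live test, `send_query` would issue an HTTP GET against the /citieskg/namespace/berlin/sparql endpoint; here any callable can stand in for timing purposes.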
Table 3
City planning related questions answered by the live CKG subsystem of TWA.
Query No. Question No. of solutions Elapsed time
1. Ask if a certain building function exists in the dataset. 1 2 ms
2. Ask if a certain street name exists in the dataset. 1 219 ms
3. Return all data about a building in a specified address. 39 193 ms
4. Return all distinct generic attribute names found in the dataset. 40 602 ms
5. Return all distinct building function codes in the dataset. 167 155 ms
6. Return all generic attribute names and their values found in the dataset. 956 823 22 118 ms
7. Return specified generic attribute ‘‘Qualitaet’’ with all its values found in the dataset and order solutions by the value
in descending order.
2 798 6048 ms
8. Return all distinct street names found in the dataset. Count the number of buildings in every street. 738 199 ms
9. Return all distinct street names that have building function code ‘‘1134’’ in it. Count how many times that function
occurred and order streets by the number of function occurrence in descending order.
7 155 ms
10. Return all distinct street names that have building function code ‘‘1444’’ in it. Count the ratio of the specified function
in each street. Order results by the ratio in descending order.
89 480 ms
Table 4
City planning questions answered by the live CKG subsystem of TWA. (continued from Table 3).
Query No. Question No. of solutions Elapsed time
11. Return all street names that have building function code ‘‘1171’’ in them. Return other existing functions in those
streets. Count and order other functions by their occurrence in descending order.
66 415 ms
12. Return the average height of each function code found in the dataset and order functions by average height in
descending order.
167 172 ms
13. Return all distinct street names found in the dataset. Count the average, minimum and maximum height of each street.
Order solutions by average height in a descending order.
689 219 ms
14. Return all distinct streets found in the dataset. Order solutions by the number of buildings in the street and by the
average height in ascending order.
689 225 ms
15. Return all distinct street names that have at least one of the specified function codes. Order solutions by the number
of matched function codes in descending order and by the average street height in ascending order.
71 338 ms
16. Return all addresses of buildings with function code ‘‘2921’’ found in the dataset. Order results by street name and
number in ascending order.
55 156 ms
17. Return all addresses and envelope coordinates of buildings with function code ‘‘2921’’ found in the dataset. 40 164 ms
18. Return all building Ids and their function codes found within a given boundary. 93 5073 ms
19. Return function codes found within a given boundary. Count function code occurrence and order functions by their
occurrence in descending order.
12 4938 ms
20. Return a list of building function codes found within the given boundary. Count the ratio of these functions in the
given boundary. Order results by ratio in descending order.
6 4152 ms
Geospatial search queries were sent to the /citieskg/namespace/berlin/sparql
endpoint in the TWA as collections of HTTP GET requests,
using the Postman v8.1.0 [64] web API testing tool, over the Internet.
The proof of concept CKG, populated with Charlottenburg–Wilmersdorf
LOD2 building data and integrated into TWA, shows satisfactory results
in tests of concurrent execution of geospatial search queries as well,
when looked at from the point of view of expected workloads.
Recommendations for larger and more intensive workloads vary.
For instance, Nguyen and Kolbe [30] test graph comparison algorithms
on city data using a machine running SUSE Linux Enterprise Server 12
SP1 (64 bit), equipped with an Intel® Xeon® CPU E5-2667 v3 at 3.20 GHz
(16 CPUs + Hyper-threading), a PCIe Solid-State Drive (SSD) Array
and 1 TB of main memory. The RDF GAS API, implemented in
Blazegraph™, on the other hand, does not seem particularly demanding
in terms of memory and CPU; instead, it emphasises the importance of SSD
technology to achieve close to 1 million traversed edges per second on a
MacBook Air [65]. Upon consultation, one of the market-leading server
hardware vendors recommended the following configuration for the
CKG — at least, as a starting point to a system able to store and query
between 282,873,427 and 353,591,784 semantic triples generated in
case of the whole Berlin and integrated within TWA:
1. Frontend web server:
   • 12 core/2.9 GHz CPU
   • 2 x 300 GB HDD
2. Virtualisation server for more intensive operations on geospatial data:
   • 32 core/2.3 GHz CPU
   • 256 GB RAM
   • 2 x 300 GB HDD
3. Expandable Network Attached Storage system for VMs:
   • 16 TB usable, hot-swappable and expandable SSD storage.
6. Conclusions and future work
Applying sustainable digitisation practices in order to build scalable
technological solutions, based on standards and open source compo-
nents, could be used to create intelligent decision support systems.
The architecture definition for one of them, in the form of the CKG
elaborated on in this paper, shows that incorporation of the Semantic 3D
City Database into TWA enables dynamic geospatial knowledge graph
capabilities within it and makes it able to aid sustainable urbanisation
practices and decision-making processes.
Previous sections demonstrate how to build knowledge graphs ca-
pable of semantic representation of three-dimensional geometrical city
objects and, in this way, provide comprehensive insights based on
multi-domain data interoperability. This process required the invention
and implementation of some novel system components. Section 2 treats
the OntoCityGML ontology, which required rearrangements of
concepts found in the original CityGML ontology. The two ontologies are
considered different from the logical point of view; making use
of the OntoCityGML ontology would not lead to the inference errors
that would occur while using the CityGML ontology. Section 3 introduced
the augmentation of the Importer/Exporter tool in more detail. The implementation
required: (a) new methods generating SPARQL prepared
statements, and augmentation of existing code to make
use of those methods; (b) new database adapters making the existing
tool capable of working with non-relational, W3C compliant, semantic
graph stores. The new adapters were developed as extensions of the
Abstract*Adapter classes and conforming to the required interfaces.
The abstract classes are well documented in the 3D City Database
documentation and their code is also available on GitHub. Links to both
are included in the bibliography section. The last paragraph of
Section 4 treats the originally developed and implemented algorithms
that generate geospatial vocabularies for Blazegraph™ in order to
automatically produce configurations facilitating geospatial search
on models containing thousands of geometry types. The manual process
of procuring such configurations is not feasible.
A refined OntoCityGML ontology, presented in Section 2, is an ex-
ample of bringing a well-defined and specified international standard,
namely CityGML 2.0, into the world of semantic web and making
it compliant with W3C recommendations and standards at the same
time. The demonstration of using such ontology with augmented data
transformation tools proved the suitability of the ontology to serve
as a schema for the semantic twin of the 3D City Database, designed
and optimised at TUM for many years. CKG leverages this past work
by keeping data in named graphs organised within namespaces.
Advantages of the new semantic representation, arising from the implied
OWA, have also been shown for cases where adding intelligence to a bare
database is of interest. The last sections show the possibility of the
materialisation of such a database within a scalable triple store with
geospatial search capabilities. The store also adheres to W3C standards
and makes it possible to integrate geospatial data within a
general knowledge graph, such as TWA, and to provide multi-domain
data interoperability. Such systems could be hardware demanding.
Sample single-user and concurrent query tests of a live CKG gauge
existing TWA capabilities in this regard, allowing a better
understanding of how to evolve this aspect of the system further. The
resulting recommendations conclude the last section of this paper.
TWA, at its core, is an agent-based system where intelligent au-
tonomous agents operate on the knowledge graph. A system of agents
specific to the CKG has not been considered during its proof of concept
as presented in this paper. In order to do that, further strengthening
of the OntoCityGML would be required, mainly consisting of cross-
checking more CityGML 2.0 concepts not found in the sample city data
used in this proof of concept. This would allow more data to be added
into the CKG and enable it to serve as a base for even broader insights,
with such data linked to other datasets already available in TWA.
At the same time, further extensions of the TUM data transformation
tools would be needed as well. The higher level geospatial search
functionalities currently implemented in Blazegraph™, namely inCircle and
inRectangle [66], failed in some of the tests conducted using different
coordinate reference systems. Therefore, the CKG makes use of only the
customFields geospatial search feature at the moment. Resolving issues with
those higher level types of search would add more capabilities and
enable the implementation of certain functionalities going forward as well.
Making use of the Parallel World Framework, already implemented
in TWA, with the city data would allow users to simulate, analyse,
dynamically visualise and evaluate various urbanisation scenarios and
enable the possibility for cross-domain sustainability impact analyses.
One cannot forget that TWA, as an information system, also occu-
pies some physical space in a data centre – a group of buildings used to
house computer systems and their associated components. Improving
the sustainability of such buildings has come into focus for many
big technology companies in recent years. Very often they look into
making use of more efficient cooling as well as utilisation of renewable
energy sources, such as installation of solar panels on their roofs, as
well as multi-tenancy or maximisation of optimal computing resource
utilisation, eliminating idle but energy-consuming times. When looked
at from this angle, the presented proof of concept may be regarded as
a brick paving the road towards self-sustainable knowledge graphs.
List of abbreviations
EIP Eco-Industrial Park
HTTP Hypertext Transfer Protocol
URL Uniform Resource Locator
URI Uniform Resource Identifier
IRI Internationalized Resource Identifier
JPS J-Park Simulator
JSON JavaScript Object Notation
OWL Web Ontology Language
RDF Resource Description Framework
TUM Technische Universität München
SQL Structured Query Language
SPARQL SPARQL Protocol and RDF Query Language
W3C World Wide Web Consortium
XML Extensible Markup Language
GML Geography Markup Language
JDBC Java Database Connectivity
TWA The World Avatar
OWA Open World Assumption
CWA Closed World Assumption
DL Description Logic
OBDA Ontology-Based Data Access
OGC Open Geospatial Consortium
WKT Well-Known Text
CKG Cities Knowledge Graph
CARES Cambridge Centre for Advanced Research and
Education in Singapore
SEC Singapore-ETH Centre
LOD2 Level Of Detail 2
CIM City Information Model
DT Digital Twin
Declaration of competing interest
The authors declare that they have no known competing finan-
cial interests or personal relationships that could have appeared to
influence the work reported in this paper.
Acknowledgements

This research is supported by the National Research Foundation,
Prime Minister’s Office, Singapore under its Campus for Research Ex-
cellence and Technological Enterprise (CREATE) programme. Markus
Kraft gratefully acknowledges the support of the Alexander von Hum-
boldt foundation, Germany.
The research was conducted as part of an Intra-CREATE collabo-
rative project involving CARES (Cambridge Centre for Advanced Re-
search and Education in Singapore), which is University of Cambridge’s
presence in Singapore, and Future Cities Laboratory at the Singapore-
ETH Centre, which was established collaboratively between ETH Zurich
and the National Research Foundation Singapore.
Appendix A. OntoCityGML knowledge-base expressed in a DLs
Concept inclusion (CI) axioms:
𝐶𝑖𝑡𝑦𝑀 𝑜𝑑𝑒𝑙𝑇 𝑦𝑝𝑒 ⊑𝐴𝑏𝑠𝑡𝑟𝑎𝑐𝑡𝐹 𝑒𝑎𝑡𝑢𝑟𝑒𝐶 𝑜𝑙𝑙𝑒𝑐𝑡𝑖𝑜𝑛𝑇 𝑦𝑝𝑒
𝐶𝑖𝑡𝑦𝑀 𝑜𝑑𝑒𝑙𝑇 𝑦𝑝𝑒 _𝐺𝑒𝑛𝑒𝑟𝑖𝑐𝐴𝑝𝑝𝑙 𝑖𝑐𝑎𝑡𝑖𝑜𝑛𝑃 𝑟𝑜𝑝𝑒𝑟𝑡𝑦𝑂𝑓 𝐶 𝑖𝑡𝑦𝑀 𝑜𝑑𝑒𝑙
.𝑎𝑛𝑦𝑇 𝑦𝑝𝑒
𝐶𝑖𝑡𝑦𝑀 𝑜𝑑𝑒𝑙𝑇 𝑦𝑝𝑒 𝑎𝑝𝑝𝑒𝑎𝑟𝑎𝑛𝑐𝑒𝑀 𝑒𝑚𝑏𝑒𝑟.𝐴𝑝𝑝𝑒𝑎𝑟𝑎𝑛𝑐𝑒𝑇 𝑦𝑝𝑒
𝐶𝑖𝑡𝑦𝑀 𝑜𝑑𝑒𝑙𝑇 𝑦𝑝𝑒 𝑏𝑜𝑢𝑛𝑑𝑒𝑑 𝐵𝑦.𝐸𝑛𝑣𝑒𝑙𝑜𝑝𝑒𝑇 𝑦𝑝𝑒
𝐶𝑖𝑡𝑦𝑀 𝑜𝑑𝑒𝑙𝑇 𝑦𝑝𝑒 ⊑𝐶𝑖𝑡𝑦𝐺 𝑀𝐿𝐶 𝑜𝑟𝑒𝑀𝑜𝑑 𝑢𝑙𝑒
𝐶𝑖𝑡𝑦𝑀 𝑜𝑑𝑒𝑙𝑇 𝑦𝑝𝑒 𝑐𝑖𝑡𝑦𝑂𝑏𝑗 𝑒𝑐𝑡𝑀 𝑒𝑚𝑏𝑒𝑟.𝐴𝑏𝑠𝑡𝑟𝑎𝑐𝑡𝐶𝑖𝑡𝑦𝑂 𝑏𝑗𝑒𝑐𝑡𝑇 𝑦𝑝𝑒
Energy and AI 6 (2021) 100106
A. Chadzynski et al.
⊤ ⊑𝑐𝑖𝑡𝑦𝑂𝑏𝑗 𝑒𝑐𝑡𝑀 𝑒𝑚𝑏𝑒𝑟.𝐴𝑏𝑠𝑡𝑟𝑎𝑐𝑡𝐹 𝑒𝑎𝑡𝑢𝑟𝑒𝑇 𝑦𝑝𝑒
𝐵𝑢𝑖𝑙𝑑 𝑖𝑛𝑔𝑇 𝑦𝑝𝑒 ⊑𝐴𝑏𝑠𝑡𝑟𝑎𝑐𝑡𝐵𝑢𝑖𝑙𝑑𝑖𝑛𝑔 𝑇 𝑦𝑝𝑒
𝐴𝑏𝑠𝑡𝑟𝑎𝑐𝑡𝐵𝑢𝑖𝑙 𝑑𝑖𝑛𝑔𝑇 𝑦𝑝𝑒 ⊑𝐵𝑢𝑖𝑙𝑑𝑖𝑛𝑔 𝑀𝑜𝑑 𝑢𝑙𝑒
𝐵𝑢𝑖𝑙𝑑 𝑖𝑛𝑔𝑀𝑜𝑑𝑢𝑙𝑒 ⊑𝐶 𝑖𝑡𝑦𝐺𝑀𝐿𝑀 𝑜𝑑𝑢𝑙𝑒
𝐶𝑖𝑡𝑦𝐺𝑀𝐿𝑀 𝑜𝑑𝑢𝑙𝑒 ⊑𝑐 𝑖𝑡𝑦𝑔𝑚𝑙
𝑐𝑖𝑡𝑦𝑔𝑚𝑙 ⊑⊤
⊤ ⊑𝐵𝑢𝑖𝑙𝑑 𝑖𝑛𝑔.𝐵𝑢𝑖𝑙𝑑 𝑖𝑛𝑔𝑇 𝑦𝑝𝑒
𝐵𝑢𝑖𝑙𝑑 𝑖𝑛𝑔𝑇 𝑦𝑝𝑒 _𝐺𝑒𝑛𝑒𝑟𝑖𝑐𝐴𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛𝑃 𝑟𝑜𝑝𝑒𝑟𝑡𝑦𝑂 𝑓 𝐵𝑢𝑖𝑙𝑑 𝑖𝑛𝑔
.𝑎𝑛𝑦𝑇 𝑦𝑝𝑒
⊤ ⊑_𝐵𝑜𝑢𝑛𝑑 𝑎𝑟𝑦𝑆𝑢𝑟𝑓 𝑎𝑐 𝑒
.𝐴𝑏𝑠𝑡𝑟𝑎𝑐𝑡𝐵𝑜𝑢𝑛𝑑 𝑎𝑟𝑦𝑆 𝑢𝑟𝑓 𝑎𝑐𝑒𝑇 𝑦𝑝𝑒
⊤ ⊑𝐶𝑒𝑖𝑙 𝑖𝑛𝑔𝑆𝑢𝑟𝑓 𝑎𝑐 𝑒.𝐶 𝑒𝑖𝑙𝑖𝑛𝑔𝑆 𝑢𝑟𝑓 𝑎𝑐𝑒𝑇 𝑦𝑝𝑒
⊤ ⊑𝐶𝑙 𝑜𝑠𝑢𝑟𝑒𝑆𝑢𝑟𝑓 𝑎𝑐 𝑒.𝐶𝑙𝑜𝑠𝑢𝑟𝑒𝑆 𝑢𝑟𝑓 𝑎𝑐𝑒𝑇 𝑦𝑝𝑒
⊤ ⊑𝐹 𝑙𝑜𝑜𝑟𝑆 𝑢𝑟𝑓 𝑎𝑐𝑒.𝐹 𝑙𝑜𝑜𝑟𝑆 𝑢𝑟𝑓 𝑎𝑐𝑒𝑇 𝑦𝑝𝑒
⊤ ⊑𝐺𝑟𝑜𝑢𝑛𝑑𝑆 𝑢𝑟𝑓 𝑎𝑐𝑒.𝐺 𝑟𝑜𝑢𝑛𝑑𝑆 𝑢𝑟𝑓 𝑎𝑐𝑒𝑇 𝑦𝑝𝑒
⊤ ⊑𝐼𝑛𝑡𝑒𝑟𝑖𝑜𝑟𝑊 𝑎𝑙𝑙 𝑆𝑢𝑟𝑓 𝑎𝑐 𝑒
.𝐼𝑛𝑡𝑒𝑟𝑖𝑜𝑟𝑊 𝑎𝑙𝑙 𝑆𝑢𝑟𝑓 𝑎𝑐 𝑒𝑇 𝑦𝑝𝑒
Concept inclusion (CI) axioms (continue):
𝑂𝑢𝑡𝑒𝑟𝐶 𝑒𝑖𝑙𝑖𝑛𝑔𝑆 𝑢𝑟𝑓 𝑎𝑐𝑒𝑇 𝑦𝑝𝑒 ⊑𝐴𝑏𝑠𝑡𝑟𝑎𝑐𝑡𝐵𝑜𝑢𝑛𝑑 𝑎𝑟𝑦𝑆 𝑢𝑟𝑓 𝑎𝑐𝑒𝑇 𝑦𝑝𝑒
𝐴𝑏𝑠𝑡𝑟𝑎𝑐𝑡𝐵𝑜𝑢𝑛𝑑 𝑎𝑟𝑦𝑆 𝑢𝑟𝑓 𝑎𝑐𝑒𝑇 𝑦𝑝𝑒 ⊑𝐵𝑢𝑖𝑙𝑑 𝑖𝑛𝑔𝑀 𝑜𝑑𝑢𝑙𝑒
𝑂𝑢𝑡𝑒𝑟𝐹 𝑙𝑜𝑜𝑟𝑆 𝑢𝑟𝑓 𝑎𝑐𝑒𝑇 𝑦𝑝𝑒 ⊑𝐴𝑏𝑠𝑡𝑟𝑎𝑐𝑡𝐵𝑜𝑢𝑛𝑑 𝑎𝑟𝑦𝑆 𝑢𝑟𝑓 𝑎𝑐𝑒𝑇 𝑦𝑝𝑒
⊤ ⊑𝑅𝑜𝑜𝑓 𝑆𝑢𝑟𝑓 𝑎𝑐 𝑒.𝑅𝑜𝑜𝑓 𝑆𝑢𝑟𝑓 𝑎𝑐 𝑒𝑇 𝑦𝑝𝑒
⊤ ⊑𝑊 𝑎𝑙 𝑙𝑆𝑢𝑟𝑓 𝑎𝑐 𝑒.𝑊 𝑎𝑙 𝑙𝑆𝑢𝑟𝑓 𝑎𝑐 𝑒𝑇 𝑦𝑝𝑒
𝑀𝑢𝑙 𝑡𝑖𝑆𝑢𝑟𝑓 𝑎𝑐𝑒𝑇 𝑦𝑝𝑒 ⊑𝐺𝑒𝑜𝑚𝑒𝑡𝑟𝑖𝑐 𝐴𝑔𝑔𝑟𝑒𝑔𝑎𝑡𝑒𝑇 𝑦𝑝𝑒
𝐺𝑒𝑜𝑚𝑒𝑡𝑟𝑖𝑐𝐴𝑔 𝑔𝑟𝑒𝑔𝑎𝑡𝑒𝑇 𝑦𝑝𝑒 ⊑𝐺𝑒𝑜𝑚𝑒𝑡𝑟𝑦𝑇 𝑦𝑝𝑒
𝑀𝑢𝑙 𝑡𝑖𝑆𝑢𝑟𝑓 𝑎𝑐𝑒𝑇 𝑦𝑝𝑒 ⊑𝐺𝑒𝑜𝑚𝑒𝑡𝑟𝑦𝑇 𝑦𝑝𝑒
𝐺𝑒𝑜𝑚𝑒𝑡𝑟𝑦𝑇 𝑦𝑝𝑒 ⊑𝑐 𝑖𝑡𝑦𝑔𝑚𝑙
𝑃 𝑜𝑙𝑦𝑔𝑜𝑛𝑇 𝑦𝑝𝑒 ⊑𝑆 𝑢𝑟𝑓 𝑎𝑐𝑒𝑇 𝑦𝑝𝑒
𝑆𝑢𝑟𝑓 𝑎𝑐𝑒𝑇 𝑦𝑝𝑒 ⊑𝐺𝑒𝑜𝑚𝑒𝑡𝑟𝑖𝑐𝑃 𝑟𝑖𝑚𝑖𝑡𝑖𝑣𝑒𝑇 𝑦𝑝𝑒
𝐿𝑖𝑛𝑒𝑎𝑟𝑅𝑖𝑛𝑔𝑇 𝑦𝑝𝑒 ⊑𝐺𝑒𝑜𝑚𝑒𝑡𝑟𝑦𝑇 𝑦𝑝𝑒
𝐶𝑜𝑚𝑝𝑜𝑠𝑖𝑡𝑒𝑆 𝑢𝑟𝑓 𝑎𝑐𝑒𝑇 𝑦𝑝𝑒 ⊑𝐶 𝑢𝑟𝑣𝑒𝑇 𝑦𝑝𝑒
𝐶𝑢𝑟𝑣𝑒𝑇 𝑦𝑝𝑒 ⊑𝐺𝑒𝑜𝑚𝑒𝑡𝑟𝑖𝑐𝑃 𝑟𝑖𝑚𝑖𝑡𝑖𝑣𝑒𝑇 𝑦𝑝𝑒
𝐺𝑒𝑜𝑚𝑒𝑡𝑟𝑖𝑐𝑃 𝑟𝑖𝑚𝑖𝑡𝑖𝑣𝑒𝑇 𝑦𝑝𝑒 ⊑𝐺 𝑒𝑜𝑚𝑒𝑡𝑟𝑦𝑇 𝑦𝑝𝑒
𝐶𝑜𝑚𝑝𝑜𝑠𝑖𝑡𝑒𝑆 𝑢𝑟𝑓 𝑎𝑐𝑒𝑇 𝑦𝑝𝑒 ⊑𝐺𝑒𝑜𝑚𝑒𝑡𝑟𝑖𝑐 𝐶𝑜𝑚𝑝𝑙𝑒𝑥𝑇 𝑦𝑝𝑒
𝐺𝑒𝑜𝑚𝑒𝑡𝑟𝑖𝑐𝐶 𝑜𝑚𝑝𝑙𝑒𝑥𝑇 𝑦𝑝𝑒 ⊑𝐺𝑒𝑜𝑚𝑒𝑡𝑟𝑦𝑇 𝑦𝑝𝑒
𝐺𝑒𝑜𝑚𝑒𝑡𝑟𝑖𝑐𝐶 𝑜𝑚𝑝𝑙𝑒𝑥𝑇 𝑦𝑝𝑒 ⊑𝑇 𝑟𝑎𝑛𝑠𝑝𝑜𝑟𝑡𝑎𝑡𝑖𝑜𝑛𝑃 𝑟𝑜𝑝𝑒𝑟𝑡𝑦
𝑇 𝑟𝑎𝑛𝑠𝑝𝑜𝑟𝑡𝑎𝑡𝑖𝑜𝑛𝑃 𝑟𝑜𝑝𝑒𝑟𝑡𝑦 ⊑𝑃 𝑟𝑜𝑝𝑒𝑟𝑡𝑦
𝑃 𝑟𝑜𝑝𝑒𝑟𝑡𝑦 ⊑⊤
𝑆𝑜𝑙𝑖𝑑 𝑇 𝑦𝑝𝑒 ⊑𝐺𝑒𝑜𝑚𝑒𝑡𝑟𝑖𝑐𝑃 𝑟𝑖𝑚𝑖𝑡𝑖𝑣𝑒𝑇 𝑦𝑝𝑒
𝑆𝑜𝑙𝑖𝑑 𝑇 𝑦𝑝𝑒 𝑒𝑥𝑡𝑒𝑟𝑖𝑜𝑟.𝑆𝑢𝑟𝑓 𝑎𝑐 𝑒𝑇 𝑦𝑝𝑒
𝐴𝑑𝑑 𝑟𝑒𝑠𝑠𝑇 𝑦𝑝𝑒 ⊑𝐴𝑏𝑠𝑡𝑟𝑎𝑐 𝑡𝐹 𝑒𝑎𝑡𝑢𝑟𝑒𝑇 𝑦𝑝𝑒
Concept inclusion (CI) axioms (continued):
AbstractFeatureType ⊑ citygml
AbstractFeatureType ⊑ ∃name.Datatype_string
⊤ ⊑ ∀Address.AddressType
externalReferenceType ⊑ CityGMLCoreProperty
CityGMLCoreProperty ⊑ Property
externalReferenceType ⊑ ∃externalObject.ExternalObjectReferenceType
AbstractGenericAttributeType ⊑ CityGMLCoreProperty
StringAttributeType ⊑ AbstractGenericAttributeType
⊤ ⊑ ∀stringAttribute.StringAttributeType
IntAttributeType ⊑ AbstractGenericAttributeType
⊤ ⊑ ∀intAttribute.IntAttributeType
DoubleAttributeType ⊑ AbstractGenericAttributeType
⊤ ⊑ ∀doubleAttribute.DoubleAttributeType
AbstractGenericAttributeType ⊑ CityGMLCoreProperty
DateAttributeType ⊑ AbstractGenericAttributeType
⊤ ⊑ ∀dateAttribute.DateAttributeType
⊤ ⊑ ∀BuildingPart.BuildingPartType
BuildingFunctionType ⊑ BuildingProperty
BuildingProperty ⊑ Property
RoofTypeType ⊑ BuildingProperty
MimeTypeType ⊑ AppearanceProperty
AppearanceProperty ⊑ Property
⊤ ⊑ ∀TexCoordList.TexCoordListType
WrapModeType ⊑ AppearanceProperty
Concept inclusion (CI) axioms (continued):
ParameterizedTextureType ⊑ TextureType
ParameterizedTextureType ⊑ ∃_GenericApplicationPropertyOfParameterizedTexture.anyType
ParameterizedTextureType ⊑ ∃target.TextureAssociationType
TextureType ⊑ AbstractSurfaceDataType
AbstractSurfaceDataType ⊑ AbstractFeatureType
AbstractSurfaceDataType ⊑ AppearanceModule
AppearanceModule ⊑ CityGMLModule
AppearanceType ⊑ AppearanceModule
AppearanceType ⊑ ∃_GenericApplicationPropertyOfAppearance.anyType
AppearanceType ⊑ ∃surfaceDataMember.AbstractSurfaceDataType
⊤ ⊑ ∀X3DMaterial.X3DMaterialType
X3DMaterialType ⊑ AbstractSurfaceDataType
X3DMaterialType ⊑ ∃_GenericApplicationPropertyOfX3DMaterial.anyType
X3DMaterialType ⊑ ∃ambientIntensity.doubleBetween0and1
X3DMaterialType ⊑ ∃diffuseColor.Color
X3DMaterialType ⊑ ∃emissiveColor.Color
X3DMaterialType ⊑ ∃shininess.doubleBetween0and1
X3DMaterialType ⊑ ∃specularColor.Color
X3DMaterialType ⊑ ∃transparency.doubleBetween0and1
∃hasGeometry.⊤ ⊑ GeometryType
∃hasEnvelope.⊤ ⊑ EnvelopeType
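Taken together, the atomic concept inclusions above form a subsumption hierarchy, over which a reasoner entails indirect subclass relations (for example, PolygonType ⊑ GeometryType via SurfaceType and GeometricPrimitiveType). A minimal sketch in plain Python illustrates that entailment as a transitive closure; the concept names are copied from the listing, and this is an illustration only, not the HermiT-based reasoning used for OntoCityGML in the paper:

```python
# Illustration only: the atomic concept inclusions listed above entail
# further subsumptions by transitivity of the subclass relation.
# Concept names are copied from the axiom listing.

CI_AXIOMS = {
    "MultiSurfaceType": {"GeometricAggregateType", "GeometryType"},
    "GeometricAggregateType": {"GeometryType"},
    "PolygonType": {"SurfaceType"},
    "SurfaceType": {"GeometricPrimitiveType"},
    "SolidType": {"GeometricPrimitiveType"},
    "GeometricPrimitiveType": {"GeometryType"},
}

def superclasses(concept, axioms=CI_AXIOMS):
    """All concepts entailed to subsume `concept` (transitive closure)."""
    seen, stack = set(), [concept]
    while stack:
        for parent in axioms.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# PolygonType ⊑ SurfaceType ⊑ GeometricPrimitiveType ⊑ GeometryType
print(sorted(superclasses("PolygonType")))
# -> ['GeometricPrimitiveType', 'GeometryType', 'SurfaceType']
```

A production reasoner also handles the existential and universal restrictions above; the closure here covers only the atomic ⊑ hierarchy.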
Role inclusion (RI) axioms:
_FeatureCollection ⊑ ⊤ × ⊤
CityModel ⊑ _FeatureCollection
featureMember ⊑ ⊤ × ⊤
cityObjectMember ⊑ featureMember
_Site ⊑ _CityObject
_AbstractBuilding ⊑ _Site
Building ⊑ _AbstractBuilding
_Feature ⊑ ⊤ × ⊤
_CityObject ⊑ _Feature
_BoundarySurface ⊑ _CityObject
CeilingSurface ⊑ _BoundarySurface
ClosureSurface ⊑ _BoundarySurface
FloorSurface ⊑ _BoundarySurface
GroundSurface ⊑ _BoundarySurface
InteriorWallSurface ⊑ _BoundarySurface
RoofSurface ⊑ _BoundarySurface
WallSurface ⊑ _BoundarySurface
interior ⊑ ⊤ × ⊤
exterior ⊑ ⊤ × ⊤
Address ⊑ _Feature
externalObject ⊑ ⊤ × ⊤
externalReference ⊑ ⊤ × ⊤
xalAddress ⊑ ⊤ × ⊤
Role inclusion (RI) axioms (continued):
stringAttribute ⊑ _genericAttribute
_genericAttribute ⊑ _GenericApplicationPropertyOfCityObject
_GenericApplicationPropertyOfCityObject ⊑ GenericApplicationPropertyRelation
GenericApplicationPropertyRelation ⊑ ⊤ × ⊤
doubleAttribute ⊑ _genericAttribute
dateAttribute ⊑ _genericAttribute
intAttribute ⊑ _genericAttribute
lod2Solid ⊑ lod2Relation
lod2Relation ⊑ ⊤ × ⊤
BuildingPart ⊑ _AbstractBuilding
_AbstractBuilding ⊑ _Site
BuildingPart ⊑ _Site
measuredHeight ⊑ ⊤ × ⊤
roofType ⊑ ⊤ × ⊤
consistsOfBuildingPart ⊑ ⊤ × ⊤
mimeType ⊑ ⊤ × ⊤
TexCoordList ⊑ _TextureParameterization
_TextureParameterization ⊑ _GML
_GML ⊑ ⊤ × ⊤
specularColor ⊑ ⊤ × ⊤
imageURI ⊑ URI
borderColor ⊑ ⊤ × ⊤
Role inclusion (RI) axioms (continued):
wrapMode ⊑ ⊤ × ⊤
texWrapMode ⊑ wrapMode
target ⊑ ⊤ × ⊤
ambientIntensity ⊑ ⊤ × ⊤
surfaceDataMember ⊑ ⊤ × ⊤
textureCoordinates ⊑ ⊤ × ⊤
Appearance ⊑ ⊤ × ⊤
diffuseColor ⊑ ⊤ × ⊤
emissiveColor ⊑ ⊤ × ⊤
shininess ⊑ ⊤ × ⊤
transparency ⊑ ⊤ × ⊤
X3DMaterial ⊑ _SurfaceData
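Range axioms of the form ⊤ ⊑ ∀R.C, such as ⊤ ⊑ ∀RoofSurface.RoofSurfaceType above, let a reasoner type the object of every R statement. A minimal sketch in plain Python of that one inference rule; the role-to-range map is copied from the listing, while the instance identifiers and triples are invented for illustration:

```python
# Minimal sketch of the entailment licensed by range axioms ⊤ ⊑ ∀R.C:
# the object of every R-statement is an instance of C.
# Roles and ranges are copied from the listing; triples are invented.

RANGE_AXIOMS = {
    "RoofSurface": "RoofSurfaceType",
    "WallSurface": "WallSurfaceType",
    "Address": "AddressType",
}

def infer_types(triples, ranges=RANGE_AXIOMS):
    """Yield (object, 'rdf:type', concept) for statements with a declared range."""
    for s, p, o in triples:
        if p in ranges:
            yield (o, "rdf:type", ranges[p])

triples = [
    ("building1", "RoofSurface", "surface7"),   # typed by the range axiom
    ("building1", "measuredHeight", "12.3"),    # no range axiom listed: skipped
]
print(list(infer_types(triples)))
# -> [('surface7', 'rdf:type', 'RoofSurfaceType')]
```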
Appendix B. Sample of unit test cases — covering CityModelType term in OntoCityGML
<ontodebug:testCase>
  <rdf:Description>
    <ontodebug:axiom rdf:datatype="http://www.w3.org/2001/XMLSchema#string">
      CityModelType SubClassOf
    </ontodebug:axiom>
    <ontodebug:type rdf:datatype="http://www.w3.org/2001/XMLSchema#boolean">
      false
    </ontodebug:type>
  </rdf:Description>
</ontodebug:testCase>
<ontodebug:testCase>
  <rdf:Description>
    <ontodebug:axiom rdf:datatype="http://www.w3.org/2001/XMLSchema#string">
      CityModelType SubClassOf AbstractFeatureCollectionType
    </ontodebug:axiom>
    <ontodebug:type rdf:datatype="http://www.w3.org/2001/XMLSchema#boolean">
    </ontodebug:type>
  </rdf:Description>
</ontodebug:testCase>
<ontodebug:testCase>
  <rdf:Description>
    <ontodebug:axiom rdf:datatype="http://www.w3.org/2001/XMLSchema#string">
      CityModelType SubClassOf
    </ontodebug:axiom>
    <ontodebug:type rdf:datatype="http://www.w3.org/2001/XMLSchema#boolean">
    </ontodebug:type>
  </rdf:Description>
</ontodebug:testCase>
<ontodebug:testCase>
  <rdf:Description>
    <ontodebug:axiom rdf:datatype="http://www.w3.org/2001/XMLSchema#string">
      CityModelType SubClassOf
    </ontodebug:axiom>
    <ontodebug:type rdf:datatype="http://www.w3.org/2001/XMLSchema#boolean">
    </ontodebug:type>
  </rdf:Description>
</ontodebug:testCase>
<ontodebug:testCase>
  <rdf:Description>
    <ontodebug:axiom rdf:datatype="http://www.w3.org/2001/XMLSchema#string">
      CityModelType SubClassOf
    </ontodebug:axiom>
    <ontodebug:type rdf:datatype="http://www.w3.org/2001/XMLSchema#boolean">
      false
    </ontodebug:type>
  </rdf:Description>
</ontodebug:testCase>
<ontodebug:testCase>
  <rdf:Description>
    <ontodebug:axiom rdf:datatype="http://www.w3.org/2001/XMLSchema#string">
      CityModelType SubClassOf
    </ontodebug:axiom>
    <ontodebug:type rdf:datatype="http://www.w3.org/2001/XMLSchema#boolean">
    </ontodebug:type>
  </rdf:Description>
</ontodebug:testCase>
... TWA could be regarded as an example of a new paradigm for the GeoWeb information systems that go beyond Web 2.0. Being based on the Semantic 3D City Database (Chadzynski et al., 2021) at its core, it ports existing GIS standards to a new graph database and takes advantage of the Open World Assumption (OWA), which is absent in the equivalent relational geospatial databases (Stadler et al., 2009). Coupling it with a system of intelligent autonomous agents based on cognitive architecture extends and scales existing geospatial data transformation tools. ...
... City objects are modeled within the Semantic 3D City Database that is a graph equivalent of the 3D City DB. The objects are described in a form of quads (RDF 1.1 N-Quads, 2014) that assign every statement to a named graph that is an equivalent to a corresponding original relational database table (Chadzynski et al., 2021). It allows mapping the semantically stored objects to software models in Java by using an object graph mapper (OGM) engine developed for this purpose within the TWA. ...
... The web interface to the semantic representation of Berlin in the TWA presented at the Figure 4, showcases its capabilities to handle the so-called five V problems in the smart city data management (Amović et al., 2021). As described by Chadzynski et al. (2021), its underlying Semantic 3D City Database uses Blazegraph™ as a data store. Because of that, this technology supports up to 50 billion edges on a single machine, and it is fully compliant with the semantic web standards; TWA is the only knowledge graph of this kind that is capable of handling Volumes of data needed to store models of large cities, so far described in the literature on the subject matter. ...
Full-text available
This article presents a system architecture and a set of interfaces that can build scalable information systems capable of large city modeling based on dynamic geospatial knowledge graphs to avoid pitfalls of Web 2.0 applications while blending artificial and human intelligence during the knowledge enhancement processes. We designed and developed a GeoSpatial Processor, an SQL2SPARQL Transformer, and a geospatial tiles ordering tasks and integrated them into a City Export Agent to visualize and interact with city models on an augmented 3D web client. We designed a Thematic Surface Discovery Agent to automatically upgrade the model’s level of detail to interact with thematic parts of city objects by other agents. We developed a City Information Agent to help retrieve contextual information, provide data concerning city regulations, and work with a City Energy Analyst Agent that automatically estimates the energy demands for city model members. We designed a Distance Agent to track the interactions with the model members on the web, calculate distances between objects of interest, and add new knowledge to the Cities Knowledge Graph. The logical foundations and CityGML-based conceptual schema used to describe cities in terms of the OntoCityGML ontology, together with the system of intelligent autonomous agents based on the J-Park Simulator Agent Framework, make such systems capable of assessing and maintaining ground truths with certainty. This new era of GeoWeb 2.5 systems lowers the risk of deliberate misinformation within geography web systems used for modeling critical infrastructures.
... Semantic Web Technologies (SWTs) [85] and Dynamic Geospatial Knowledge Graphs (DGKGs) [18] that implement them and adhere to the other well established geospatial standards at the same time are believed to be a modern technological answer to facilitate such interoperability using sustainable digitisation practices [83]. Cities Knowledge Graph (CKG) [22], built upon the Semantic 3D City Database [18], Cognitive Agents System Architecture [20], and GeoWeb interfaces [19], is a working prototype of such information systems. ...
... Semantic Web Technologies (SWTs) [85] and Dynamic Geospatial Knowledge Graphs (DGKGs) [18] that implement them and adhere to the other well established geospatial standards at the same time are believed to be a modern technological answer to facilitate such interoperability using sustainable digitisation practices [83]. Cities Knowledge Graph (CKG) [22], built upon the Semantic 3D City Database [18], Cognitive Agents System Architecture [20], and GeoWeb interfaces [19], is a working prototype of such information systems. It is capable of storing city-related knowledge, spanning multiple domains of interest [76] and relating to each other, analysing it and discovering new facts as well as providing collaborative interfaces [36] that allow to blend human and artificial intelligence [42], amongst others. ...
... Fortunately, progress in computer systems design and engineering has been fast enough since the initial RDF and OWL conceptualisation to allow to base TWA on a dynamic RDF store that already supports concurrency and transactions. Evaluation of triple stores conducted by Chadzynski et al. [18] resulted in choosing Blazegraph™ as a data store for TWA. It is an active open-source project, released under a GPL-2.0 ...
Full-text available
This paper presents revised novel semantic web systems reference architecture for inferences and components that can store and operate on knowledge in the form of a fully dynamic graph to infer new statements by an intelligent autonomous agent capable of making informed choices based on long-term memories about its tasks that implement inference algorithms of all currently known classes. An Owlconverter tool was designed and developed as a new component which can produce fully dynamic knowledge graphs without information loss that otherwise occurs while attempting to store complex concept definitions in existing open-source dynamic RDF stores. An Inference Agent with extended cognitive capabilities of making informed choices based on long-term knowledge was designed and developed to act as an extended inference engine supporting all currently known classes of knowledge graphs inference algorithms. This capability is supported by the newly developed OntoInfer ontology that encodes the taxonomy of those algorithms linked to instances of the agent’s tasks allowing the agent to make choices based on the knowledge stored in the knowledge graph. This extended architecture can demonstrate the implementation of tasks designed to work as independently executed threads containing examples of known inference algorithms using existing libraries and reasoning engines (Jena Jung andHermiT). Multi-domain reasoning capabilities on city object descriptions in terms of OntoCityGML, OntoZoning and OntoBuildableSpace were showcased on plot data provided by Urban Redevelopment Authority of Singapore converted into OWL 2 compliant knowledge base.
... An ontology is a key part of the semantic web technology stack which enables computers to understand the meaning of and relationships between different kinds of information (Rudman et al., 2016). The present work is part of the Cities Knowledge Graph project, a broader research effort to support urban planning through improved data representation, access and evaluation byusing knowledge graphs, triple stores and autonomous software agents (Chadzynski et al., 2021Grisiute et al., 2022aGrisiute et al., , 2022b. This broader infrastructure allows evaluating the usefulness and practical implications of the OntoZoning ontology as part of a semantic web technology stack. ...
... Besides the URA Master Plan plot data, several other datasets were used, such as building footprints. The data were converted from KML into CityGML format and uploaded into the Cities Knowledge Graph, which uses Blazegraph as its graph database (Chadzynski et al., 2021). ...
Full-text available
Semantic web technologies have the potential to significantly improve urban regulatory data access, integration and usability, with potentially large implications for planning practice. Ontologies are a cornerstone of the semantic web. In this paper, we describe OntoZoning, an ontology representing relationships between zoning types, land uses and programmes (more specific land uses) in Singapore. We link the ontology to geospatial data stored in a knowledge graph, which allows executing multi-domain queries on urban data. We demonstrate how such a semantic web based approach can improve access to and usability of land use regulation data, and in particular facilitate site selection and exploration. We also discuss the difficulty of defining some concepts in the land use regulation field, and how OntoZoning could be linked to a broader semantic-web based urban planning regulatory framework.
... While the idea of using diverse city indicators to better represent the complexity of a city isn't new, existing solutions (e.g. Knowledge Graphs, Planning Support Systems, City Information Models, Digital Twins) pose significant barriers to adoption due to immense computational and human resource requirements 21,22 . Not discounting the importance of aforementioned approaches, the lack of accessible tools persistently limits our ability to conduct open science and reason with the complexity of our urban environments [23][24][25][26] . ...
Full-text available
Urban networks play a vital role in connecting multiple urban components and developing our understanding of cities and urban systems. Despite the significant progress we have made in understanding how city networks are connected and spread out, we still have a lot to learn about the meaning and context of these networks. The increasing availability of open data offers opportunities to supplement urban networks with specific location information and create more expressive urban machine-learning models. In this work, we introduce Urbanity, a network-based Python package to automate the construction of feature-rich urban networks anywhere and at any geographical scale. We discuss data sources, the features of our software, and a set of data representing the networks of five major cities around the world. We also test the usefulness of added context in our networks by classifying different types of connections within a single network. Our findings extend accumulated knowledge about how spaces and flows within city networks work, and affirm the importance of contextual features for analyzing city networks.
... These advanced computational tools include big data analysis (Li et al., 2016), cloud computing (Yao et al., 2019) (such as Google Earth Engine Gorelick et al., 2017), knowledge graphs (Ma, 2022), natural language processing (Sit et al., 2019), etc. The integration of these advanced computational tools with geospatial analysis and Earth observation has led to essential advancements in geospatial big data (Yang et al., 2020), geospatial cloud computing (Yao et al., 2019), geospatial knowledge graphs (Chadzynski et al., 2021), and geocomputation for social sciences . GeoAI has been a driving force in these advancements (Janowicz et al., 2020). ...
Full-text available
Geocomputation and geospatial artificial intelligence (GeoAI) have essential roles in advancing geographic information science (GIS) and Earth observation to a new stage. GeoAI has enhanced traditional geospatial analysis and mapping, altering the methods for understanding and managing complex human–natural systems. However, there are still challenges in various aspects of geospatial applications related to natural, built, and social environments, and in integrating unique geospatial features into GeoAI models. Meanwhile, geospatial and Earth data are critical components in geocomputation and GeoAI studies, as they can effectively reveal geospatial patterns, factors, relationships, and decision-making processes. This editorial provides a comprehensive overview of geocomputation and GeoAI applications in mapping, classifying them into four categories: (i) buildings and infrastructure, (ii) land use analysis, (iii) natural environment and hazards, and (iv) social issues and human activities. In addition, the editorial summarizes geospatial and Earth data in case studies into seven categories, including in-situ data, geospatial datasets, crowdsourced geospatial data (i.e., geospatial big data), remote sensing data, photogrammetry data, LiDAR, and statistical data. Finally, the editorial presents challenges and opportunities for future research.
Conference Paper
Urban data analytics is helping to shape current and future cities, but the process of generating urban analytical indicators is often difficult to scale and automate. For instance, planners determine allowable Gross Floor Area (GFA) on a plot by manually cross-referencing multi-domain policies. As allowable GFA governs potential future developments, it is imperative to quantify and understand its values city-wide.This paper presents the first steps of a research effort to develop an automated semantic spatial policy model to estimate allowable GFA for plots in Singapore. We use ontologies and Knowledge Graph (KG) platforms to address regulatory data interoperability and automation challenges. We filtered regulation concepts that determine buildable area and volume at Level of Detail 1 (LoD1) and standardised these concepts across different regulatory sources. Then, we modelled concept-related policies and automated the generation of possible GFA values per plot. Finally, we developed an ontology to store these values in a dynamic geospatial KG. Our approach presents two key benefits: 1) a generated dataset of allowable GFA eliminates the need for manual calculation by field experts, and 2) a graph data structure is ideally suited for unstructured regulatory data, like planning regulations.We conclude that semantic spatial policy models improve the interoperability between multi-domain regulatory data and plan to generate a dataset for the entire Singapore as well as integrate regulatory data for mixed-use plots.KeywordsRegulatory DataKnowledge GraphUrban IndicatorsOntologySemantic WebLand Use Planning
Full-text available
Today, technological developments are ever-growing yet fragmented. Alongside inconsistent digital approaches and attitudes across city administrations, such developments have made it difficult to reap the benefits of city digital twins. Bringing together experiences from five research projects, this paper discusses these digital twins based on two digital integration methodologies-systems and semantic integration. We revisit the nature of the underlying technologies, and their implications for interoperability and compatibility in the context of planning processes and smart urbanism. Semantic approaches present a new opportunity for bidirectional data flows that can inform both governance processes and technological systems to co-create, cross-pollinate, and support optimal outcomes. Building on this opportunity, we suggest that considering the technological dimension as a new addition to the trifecta of economic, environmental, and social sustainability goals that guide planning processes, can aid governments to address this conundrum of fragmentation, interoperability, and compatibility. Policy Significance Statement As cities across the globe aspire to become smarter, the rapid pace of siloed technological developments and their growing complexities and pitfalls have become too significant for city administrations and politicians to ignore. This is exacerbated by the novel developments of city digital twins based on a diversity of software and technologies. We scrutinize a variety of digital twins to discern opportunities to address interoperability and compatibility. In overcoming technological lock-ins driven by business interests, we conclude that software developments need to pay greater attention to practical realities. We contend that city administrations would also have to step up to spearhead, rather than sway toward these technologies for their processes.
GeoSPARQL is an important standard for the geospatial linked data community, given that it defines a vocabulary for representing geospatial data in RDF, defines an extension to SPARQL for processing geospatial data, and provides support for both qualitative and quantitative spatial reasoning. However, what the community is missing is a comprehensive and objective way to measure the extent of GeoSPARQL support in GeoSPARQL-enabled RDF triplestores. To fill this gap, we developed the GeoSPARQL compliance benchmark. We propose a series of tests that check for the compliance of RDF triplestores with the GeoSPARQL standard, in order to test how many of the requirements outlined in the standard a tested system supports. This topic is of concern because the support of GeoSPARQL varies greatly between different triplestore implementations, and the extent of support is of great importance for different users. In order to showcase the benchmark and its applicability, we present a comparison of the benchmark results of several triplestores, providing an insight into their current GeoSPARQL support and the overall GeoSPARQL support in the geospatial linked data domain.
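The GeoSPARQL representation and filter functions mentioned in this abstract can be illustrated with a short, hedged sketch. The prefixes below follow the OGC GeoSPARQL 1.0 vocabulary; the property names used for features and the specific query shape are illustrative assumptions, not taken from the benchmark itself, and the query string would need a GeoSPARQL-compliant triplestore to execute.

```python
# Illustrative sketch only: composing a GeoSPARQL query that selects
# features within a given distance of a WGS84 point. Prefixes follow the
# OGC GeoSPARQL 1.0 vocabulary; everything else is an assumption.

GEOSPARQL_PREFIXES = (
    "PREFIX geo:  <http://www.opengis.net/ont/geosparql#>\n"
    "PREFIX geof: <http://www.opengis.net/def/function/geosparql/>\n"
)

def buildings_near(lon: float, lat: float, radius_m: float) -> str:
    """Return a SPARQL query string selecting features whose geometry
    lies within `radius_m` metres of the given point (geof:distance)."""
    return GEOSPARQL_PREFIXES + f"""
SELECT ?feature ?wkt WHERE {{
  ?feature geo:hasGeometry ?geom .
  ?geom geo:asWKT ?wkt .
  FILTER(geof:distance(?wkt,
         "POINT({lon} {lat})"^^geo:wktLiteral,
         <http://www.opengis.net/def/uom/OGC/1.0/metre>) < {radius_m})
}}
"""

query = buildings_near(13.30, 52.51, 500.0)
```

Different triplestores support this function set to varying degrees, which is precisely the kind of variation the compliance benchmark measures.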
Interoperability of systems is a critical factor for firms to make informed operational and strategic decisions and achieve a competitive edge in the marketplace. As a result, open systems which have a higher level of interoperability with secured and stable operations have significant relevance in today's global economy. Interoperability is accomplished through appropriate system architecture and design. Thus, to achieve the open system interoperability, this paper proposes a framework that looks at system architecture at various levels of abstraction/implementation and identifies the required attributes at each of these levels. This framework can be used as a reference to analyse and determine interoperability requirements at all levels and prioritize the required aspects of interoperability.
In the age of virtualization, rapid urbanization and fierce competition, more and more "digital twins" of real cities are being created as a time- and cost-efficient and especially user-oriented solution to many problems in urban planning and management. One prominent task is to efficiently detect the progress made by a city based on its virtual 3D city models recorded over the years, and then interpret it accordingly with respect to different groups of users and stakeholders involved in the process. The first half of the problem, namely automated change detection in city models, has been addressed in recent studies. The other half of the problem however, namely a user-oriented interpretation of detected changes, still remains. Thus, based on the current findings, this research extends the conceptual models and definition of different types of edit operations between city models using a graph database, where the graph representations of city models are also stored. New rules and conditions are then provided to further categorize these changes based on their semantic contents. Considering the different expectations and requirements of different groups of users and stakeholders, the research aims to provide a multi-perspective interpretation of such categorized changes.
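The idea of edit operations between two versions of a city model can be sketched minimally as an attribute-level diff. This is an illustrative toy, not the cited paper's actual rule set or graph-database implementation; the attribute names are invented.

```python
# Illustrative toy (not the cited paper's rule set): classify edits between
# two versions of a city object's attribute set into insert/delete/update.

def diff_attributes(old: dict, new: dict) -> dict:
    """Return edit operations keyed by attribute name."""
    ops = {}
    for key in old.keys() | new.keys():
        if key not in old:
            ops[key] = ("insert", new[key])
        elif key not in new:
            ops[key] = ("delete", old[key])
        elif old[key] != new[key]:
            ops[key] = ("update", old[key], new[key])
    return ops

# Two hypothetical snapshots of the same building object:
v2019 = {"storeys": 4, "roofType": "flat"}
v2021 = {"storeys": 5, "roofType": "flat", "function": "office"}
ops = diff_attributes(v2019, v2021)
```

A semantic categorization layer, as the paper proposes, would then assign meaning to each operation (e.g. an extra storey versus a change of use) depending on the stakeholder group.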
This paper presents Parallel World Framework as a solution for simulations of complex systems within a time-varying knowledge graph and its application to the electric grid of Jurong Island in Singapore. The underlying modeling system is based on the Semantic Web Stack. Its linked data layer is described by means of ontologies, which span multiple domains. The framework is designed to allow what-if scenarios to be simulated generically, even for complex, inter-linked, cross-domain applications, as well as conducting multi-scale optimizations of complex superstructures within the system. Parallel world containers, introduced by the framework, ensure data separation and versioning of structures crossing various domain boundaries. Separation of operations belonging to a particular version of the world is taken care of by a scenario agent. It encapsulates functionality of operations on data and acts as a parallel world proxy to all of the other agents operating on the knowledge graph. Electric network optimization for carbon tax is demonstrated as a use case. The framework makes it possible to model and evaluate electrical networks corresponding to set carbon tax values by retrofitting different types of power generators and optimizing the grid accordingly. The use case shows the possibility of using this solution as a tool for CO2 reduction modeling and planning at scale due to its distributed architecture.
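One common way to realise data separation of the kind the parallel world containers provide is to scope queries to a named graph per scenario. The sketch below is an assumption about how such scoping could look in SPARQL, not the framework's actual mechanism; the base IRI is invented.

```python
# Hypothetical sketch: scoping a SPARQL query to a single "parallel world"
# by addressing a scenario-specific named graph. The IRI scheme is
# illustrative only and not taken from the Parallel World Framework.

BASE = "http://example.org/scenario/"

def scoped_query(scenario_id: str) -> str:
    """Return a query that only sees triples in one scenario's graph."""
    return f"""
SELECT ?s ?p ?o WHERE {{
  GRAPH <{BASE}{scenario_id}> {{ ?s ?p ?o }}
}}
"""

q = scoped_query("carbon-tax-50")
```

Because each scenario reads and writes only its own graph, independently retrofitted and optimized versions of the grid can coexist in one knowledge graph without interfering.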
In this paper, we demonstrate through examples how the concept of a Semantic Web based knowledge graph can be used to integrate combustion modeling into cross-disciplinary applications and in particular how inconsistency issues in chemical mechanisms can be addressed. We discuss the advantages of linked data that form the essence of a knowledge graph and how we implement this in a number of interconnected ontologies, specifically in the context of combustion chemistry. Central to this is OntoKin, an ontology we have developed for capturing both the content and the semantics of chemical kinetic reaction mechanisms. OntoKin is used to represent the example mechanisms from the literature in a knowledge graph, which itself is part of the existing, more general knowledge graph and ecosystem of autonomous software agents that are acting on it. We describe a web interface, which allows users to interact with the system, upload and compare the existing mechanisms, and query species and reactions across the knowledge graph. The utility of the knowledge-graph approach is demonstrated for two use-cases: querying across multiple mechanisms from the literature and modeling the atmospheric dispersion of pollutants emitted by ships. As part of the query use-case, our ontological tools are applied to identify variations in the rate of a hydrogen abstraction reaction from methane as represented by 10 different mechanisms.
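The rate-variation query described in this abstract rests on the modified Arrhenius form commonly used to encode reaction rates in kinetic mechanisms, k(T) = A·Tⁿ·exp(−Ea/(R·T)). The sketch below evaluates that standard expression; the parameter values in the comments are placeholders, not coefficients from any of the ten cited mechanisms.

```python
import math

# Standard modified Arrhenius expression, as commonly stored per reaction
# in chemical kinetic mechanisms: k(T) = A * T**n * exp(-Ea / (R * T)).
# Parameter values used below are placeholders, not from any real mechanism.

R = 8.314462618  # universal gas constant, J mol^-1 K^-1

def rate_constant(A: float, n: float, Ea: float, T: float) -> float:
    """Evaluate the modified Arrhenius rate constant at temperature T (K).

    A  : pre-exponential factor
    n  : temperature exponent
    Ea : activation energy, J mol^-1
    """
    return A * T**n * math.exp(-Ea / (R * T))

# Comparing mechanisms then amounts to evaluating each mechanism's (A, n, Ea)
# triple for the same reaction over a shared temperature range.
k_low = rate_constant(1e6, 0.0, 5e4, 1000.0)
k_high = rate_constant(1e6, 0.0, 5e4, 1500.0)
```

Plotting such evaluations across mechanisms is one way the kind of rate variation reported for the methane hydrogen-abstraction reaction becomes visible.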
Knowledge management in multi-domain, heterogeneous industrial networks like an Eco-Industrial Park (EIP) is a challenging task. In this paper, an ontology based management system has been proposed for addressing this challenge. It focuses on the power systems domain and provides a framework for integrating this knowledge with the other domains of an EIP. The proposed ontology, OntoPowSys, is expressed using Description Logics (DL) syntax and implemented in the OWL 2 language. It is then used as a part of the Knowledge Management System (KMS) in a virtual EIP called the J-Park Simulator (JPS). The advantages of the proposed approach are demonstrated by conducting two case studies on the JPS. The first case study illustrates the application of optimal power flow (OPF) in the electrical network of the JPS. The second case study plays an important role in understanding the cross-domain interactions between the chemical and electrical engineering domains in a bio-diesel plant of the JPS. These case studies are available as web services on the JPS website. The results showcase the advantages of using ontologies in the development of decision support tools. These tools are capable of taking into account contextual information on top of data during their decision-making processes. They are also able to exchange knowledge across different domains without the need for a communication interface.
Description logics (DLs) have a long tradition in computer science and knowledge representation, being designed so that domain knowledge can be described and so that computers can reason about this knowledge. DLs have recently gained increased importance since they form the logical basis of widely used ontology languages, in particular the web ontology language OWL. Written by four renowned experts, this is the first textbook on description logics. It is suitable for self-study by graduates and as the basis for a university course. Starting from a basic DL, the book introduces the reader to their syntax, semantics, reasoning problems and model theory and discusses the computational complexity of these reasoning problems and algorithms to solve them. It then explores a variety of reasoning techniques, knowledge-based applications and tools and it describes the relationship between DLs and OWL.
The first chapters of this book have presented the drivers, influencing factors and technologies of digitisation. In addition to IT as a driver of digitisation, the innovative technologies and solutions relevant to the automotive industry were described and the influence of digital natives as future employees and customers was discussed in particular. Following on from this, Chap. 5 looked at changes in customer expectations and buying behaviour, analysed the current level of digitisation maturity in relation to the main technological changes in the market regions and described in detail a vision of the digitised automotive industry up to the year 2030. In this chapter detailed proposals are developed for the necessary actions to be taken in each transformation area, based on the author's many years of studies and projects, supplemented by relevant experiences and references presented in studies and specialist literature. As a basis for a comprehensive digitisation roadmap, a “digitisation house” is developed as an overall framework, divided into four focus areas and two cross-cutting themes. For each focus topic, the necessary steps for implementation are deepened in this chapter to establish an integrated roadmap to achieve the 2030 goals.
The digitalization of the urban development process is driven by the need for informed, evidence-based, collaborative and participative urban planning and decision-making, epitomized in the concept of Smart Cities. This digital transformation is enabled by information technology developments in fields such as 3D city models, Digital Twins, Urban Analytics and Informatics, Geographic Information Systems (GIS), and Planning Support Systems (PSS). In this context, City Information Modelling (CIM) has recently emerged as a concept related to these various technological driving forces. In this article, we review the state of the art of CIM (definitions and applications) in the academic literature and propose a definition and a general conceptual framework. By highlighting how the different disciplines are related to each other within this conceptual framework, we offer a context for transdisciplinary work, and focus on integration challenges, for research and development, both in academia and industry. This will contribute to moving forward the debate on digitalization of the built environment development process in the field of Smart Cities.
The chemical industry is increasingly relying on agents for data acquisition, optimization, and simulation. To enable efficient management of agents, Knowledge Graphs (KGs) together with agent composition frameworks are applied. However, a method to assess the reliability of agents for such systems is absent. Therefore, this paper proposes a Smart Contract-based agent marketplace for composition frameworks to estimate the reliability of agents. In this agent marketplace, we improved the feedback-based reputation system by leveraging Smart Contracts to eliminate fraudulent ratings and to enable automation. The marketplace incorporates a rating-dependent payment mechanism as well, to further enhance trust. The paper also illustrates how this marketplace is integrated into the J-Park Simulator (JPS) agent composition framework for the automated agent selection and transaction.