The Semantic Web3D: Towards Comprehensive Representation of 3D Content on the Semantic Web


Abstract

One of the main obstacles for wide dissemination of immersive virtual and augmented reality environments on the Web is the lack of integration between 3D technologies and web technologies, which are increasingly focused on collaboration, annotation and semantics. This gap can be filled by combining VR and AR with the Semantic Web, which is a significant trend in the development of the Web. The use of the Semantic Web may improve creation, representation, indexing, searching and processing of 3D web content by linking the content with formal and expressive descriptions of its meaning. Although several semantic approaches have been developed for 3D content, they are not explicitly linked to the available well-established 3D technologies, cover a limited set of 3D components and properties, and do not combine domain-specific and 3D-specific semantics. In this paper, we present the main motivations, concepts and development of the Semantic Web3D approach. It enables semantic ontology-based representation of 3D content built upon the Extensible 3D (X3D) standard. The approach can integrate the Semantic Web with interactive 3D technologies within different domains, thereby serving as a step towards building the next generation of the Web that incorporates semantic 3D contents.
Jakub Flotyński (Poznań University of Economics and Business, Poznań, Poland), Don Brutzman (Naval Postgraduate School, Monterey, CA, USA), Felix G. Hamza-Lup (Georgia Southern University, Savannah, GA, USA), Athanasios Malamos (Hellenic Mediterranean University, Heraklion, Greece), Nicholas Polys (Virginia Tech, Blacksburg, VA, USA), Leslie F. Sikos (Edith Cowan University, Perth, WA, Australia), Krzysztof Walczak (Poznań University of Economics and Business, Poznań, Poland)
Index Terms— virtual reality, Web3D, X3D, Semantic Web, ontologies, knowledge bases
1. Introduction

Immersive virtual reality (VR) and augmented reality (AR)
environments are becoming more and more popular in vari-
ous application domains due to the increasing network band-
width as well as the availability of affordable advanced presentation and interaction devices, such as headsets and motion
tracking systems. One of the most powerful and promising
platforms for immersive VR/AR environments is the Web. It
offers suitable conditions for collaborative development and
use of VR/AR environments, including indexing, searching
and processing of interactive 3D content of the environments.
Development of web-based VR and AR has been enabled by
various 3D formats (e.g., VRML [40] and X3D [41]), pro-
gramming libraries (e.g., WebGL [3] and WebXR [38]) and
game engines (e.g., Unreal [4] and Unity [32]).
These opportunities have been further enhanced with the
advent of the Semantic Web [8], which is currently a promi-
nent trend in the evolution of the Web. It transforms the Web
into a network that links structured content with formal and
expressive semantic descriptions. Semantic descriptions are
enabled by structured data representation standards (in par-
ticular, the Resource Description Framework, RDF [36]), and
by ontologies, which are explicit specifications of a conceptu-
alization [19], i.e. knowledge organization systems that pro-
vide a formal conceptualization of the intended semantics of
a knowledge domain or common sense human knowledge.
Ontologies consist of statements that describe terminology
(conceptualization)—particular classes and properties of ob-
jects. Ontologies are intended to be understandable to humans
and processable by computers [8, 19]. In the 3D/VR/AR
domain, ontologies can be used to specify data formats and
schemes with comprehensive properties and relationships be-
tween data elements. In turn, collections of individuals of
a knowledge domain, including their properties and relation-
ships between them are referred to as knowledge bases [29].
Knowledge bases consist of statements about particular ob-
jects using classes and properties that have been defined in on-
tologies. Hence, in the 3D/VR/AR domain, knowledge bases
can be used to represent individual 3D scenes and objects.
The Resource Description Framework Schema (RDFS)
[37] and the Web Ontology Language (OWL) [34] are lan-
guages for building statements in RDF-based ontologies and
knowledge bases. In turn, SPARQL [35] is the most widely
used query language for RDF-based ontologies and knowledge
bases. In contrast to other techniques of content representa-
tion, ontologies and knowledge bases enable reasoning over
the content. Reasoning leads to inferred tacit (implicit) state-
ments on the basis of statements explicitly specified by the
authors. These, in turn, represent implicit content properties.
The overall knowledge obtained from reasoning can be sub-
ject to semantic queries. For instance, connections between
3D objects that form hierarchies in scenes can be subject to
reasoning and querying about the scenes’ complexity. Simi-
larly, position and orientation interpolators in a 3D scene can
be subject to reasoning and querying about the motion cate-
gories of objects (linear, curved, rotary, etc.). A semantically
represented 3D piston engine can be subject to reasoning to
infer and query about its type on the basis of the cylinder ar-
rangement (in-line, multi-row, star or reciprocating).
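To make the interpolator example concrete, a query of roughly the following form could be posed; this is only a sketch, and the x3do: namespace as well as the routing property names used below are illustrative assumptions rather than terms of a published vocabulary:

```sparql
# Illustrative sketch: find objects whose position is driven by a
# PositionInterpolator, i.e. candidates for the "linear motion" category.
# The x3do: namespace and property names are assumptions.
PREFIX x3do: <https://example.org/x3d#>
SELECT DISTINCT ?object WHERE {
  ?interp a x3do:PositionInterpolator .
  ?route x3do:fromNode ?interp ;
         x3do:toNode   ?object .
}
```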
A number of approaches use semantic web technologies
to improve creation, representation and processing of vari-
ous types of media, including text, images, audio and video.
However, comprehensive standardized solutions for semantic
creation, representation and processing of 3D content are yet
to be developed. This gap is the major obstacle for integration
and wide dissemination of VR and AR on the Web.
The main contribution of this paper is the Semantic Web3D
approach developed by the X3D Semantic Web Working
Group [42], which is a part of the Web3D Consortium. The
approach enables ontology-based representation of 3D con-
tent on top of the available 3D technologies, including 3D
formats. The representation includes different levels of speci-
ficity: 3D-specific and domain-specific knowledge. At every
level, different classes, objects and properties may be used.
The 3D-specific level is constituted by the X3D Ontology,
which is a semantic counterpart to the Extensible 3D (X3D)
[41]. X3D is a widely used standardized 3D format (ISO/IEC
19775) for web-based applications. It has been developed
by the Web3D Consortium as the successor to the Virtual Re-
ality Modeling Language (VRML) [40]. The domain-specific
level can be described using arbitrary domain ontologies, e.g.,
pertaining to cultural heritage, medicine, design, engineering
or e-commerce. Ontologies at both levels are linked by map-
pings. The Semantic Web3D has the following advantages
over the previous approaches to semantic 3D representation:
1. It is strictly integrated with leading standardized 3D web
technologies by an automatic transformation of the X3D
format to the X3D Ontology, which is the foundation of
our approach.
2. It covers a comprehensive and up-to-date set of 3D com-
ponents and properties, including geometry, structure, pre-
sentation and animation, since it is generated from X3D.
3. It combines 3D-specific semantics with domain-specific
semantics, thereby being applicable to arbitrary areas. Se-
mantic querying, reasoning and processing of 3D content
can be performed for both: inherent 3D components and
properties (understandable to technical users) as well as
domain components and properties (related to a particular
usage of the approach and understandable to domain experts).

The remainder of this paper is structured as follows. Sec-
tion 2 provides an overview of the current state of the art
in semantic representation of 3D content. In Section 3, we
overview the Semantic Web3D approach. The X3D Ontol-
ogy, which is a key element of the approach, is presented in
Section 4. Examples of queries utilizing the ontology are dis-
cussed in Section 5. Finally, Section 6 concludes the paper
and indicates possible future research.
2. Related Work

Several works have been devoted to the use of ontologies for
3D content representation. A comparison of such solutions is
presented in Table 1. The 3D and domain specificity levels
are almost equally addressed by the ontologies. The ontolo-
gies also enable representation of different features of 3D con-
tent, such as geometry, structure, appearance and animation.
In most cases, only some content features are represented by
a single ontology. All the ontologies enable representation
of 3D structure, in particular spatial relations and hierarchies
between 3D objects. Only one third of the ontologies sup-
port representation of animation, making it the least covered
feature. Five ontologies enable representation of all content
features. An extensive comparison of 3D content representa-
tions has been presented in [18].
The available solutions have the following limitations:
1. They are not integrated with 3D formats. It hinders trans-
formation between knowledge bases, which can be used
for reasoning and querying, and 3D scenes, which can be
rendered using available browsers.
2. They do not combine 3D and domain specificity levels.
This hinders the use of content by average users and do-
main experts who are not IT specialists.
3. They do not cover important areas related to 3D representation, such as humanoid animation, geospatial data, CAD, printing and scanning, nor do they integrate with separate formats designed for such areas.
3. The Semantic Web3D Approach

The main contribution of this paper is the Semantic Web3D approach, which is an extension of the approach proposed in [39]. The Semantic Web3D encompasses a queryable ontology-based 3D content representation, which enables creation, modification and analysis of 3D content (Fig. 1). The representation is described in Section 3.1. Semantic queries possible with the proposed representation are discussed in Section 3.2.
In Section 3.3, we analyze the possible use contexts of the
Semantic Web3D, which determine new research and appli-
cation areas, and provide the main motivations for the further
development of the approach.
Fig. 1. The Semantic Web3D approach.
Table 1. Comparison of 3D ontologies
Columns: specificity level (3D, domain) and content features (Geom., Struct., Appear., Anim.); ✓ = supported.

[9]        ✓ ✓ ✓ ✓ ✓
[1, 30]    ✓ ✓ ✓ ✓ ✓
[20]       ✓ ✓ ✓ ✓ ✓
[21]       ✓ ✓ ✓ ✓ ✓
[26]       ✓ ✓
[7]        ✓ ✓
[11]       ✓ ✓ ✓
[10]       ✓ ✓ ✓ ✓
[22]       ✓ ✓
[33]       ✓ ✓ ✓ ✓
[5]        ✓ ✓
[46]       ✓ ✓ ✓
[14]       ✓ ✓ ✓ ✓ ✓
[15, 16]   ✓ ✓ ✓
[24, 28]   ✓ ✓ ✓ ✓ ✓
[25]       ✓ ✓
[12]       ✓ ✓ ✓
[31]       ✓ ✓ ✓
[23]       ✓ ✓ ✓
[27]       ✓ ✓ ✓
[13, 17]   ✓ ✓ ✓
3.1. Ontology-Based 3D Content Representation
The ontology-based 3D content representation, which is the
main element of the Semantic Web3D, is a stack of ontolo-
gies and knowledge bases. Ontologies specify 3D content
schemes at different levels of specificity, whereas knowledge
bases specify 3D models and scenes in line with the ontolo-
gies. The representation includes two levels of specificity: the
3D-specific level and the domain-specific level.
1. The 3D-specific level uses classes, objects and properties
that are related to 3D content, including geometry (e.g.,
vertices, edges and faces), structure (e.g., hierarchy of ob-
jects), appearance (e.g., textures and materials) and anima-
tion (e.g., event generators and interpolators). 3D-specific
classes and properties are defined in a 3D ontology, which
has been generated from a 3D format schema. So far,
we have automatically generated the X3D Ontology from
the X3D Unified Object Model (X3DUOM) using an XSL
transformation (cf. Section 4). The ontology is a counter-
part to the X3D format, and consists of classes and prop-
erties that are equivalents of X3D elements and attributes.
Hence, the X3D Ontology can be suited to a wide range
of practical 3D applications, including humanoid anima-
tion, geospatial visualization, CAD, printing and scanning.
Other 3D ontologies for different formats can also be used
at this specificity level. 3D ontologies are intended to be an
augmentation of available 3D formats (implemented by 3D
browsers) with reasoning and queries. However, in some
cases, it may be useful to treat ontologies as independent
(semantic) 3D formats directly processable by (semantic)
3D browsers (cf. Section 3.3/9). 3D ontologies can be
subject to 3D-specific meta-queries (cf. Section 3.2) for
information retrieval (cf. Section 3.3/6).
Collections of information about particular 3D models and
scenes specified using classes and properties defined in a
3D ontology are referred to as 3D knowledge bases. 3D
knowledge bases may be created by content authors within
knowledge-based 3D modeling (cf. Section 3.3/1), or auto-
matically generated from 3D models and scenes encoded
in a textual or binary 3D format, using the Data Format
Description Language (DFDL) [6] (cf. Section 3.3/8). 3D
knowledge bases can be subject to 3D-specific concrete
queries (cf. Section 3.2) for query-based 3D modeling,
editing and information retrieval (cf. Section 3.3/3 and 6).
2. The domain-specific level uses classes, objects and prop-
erties that are related to an arbitrary domain, which is de-
termined by a particular use case of the approach. For in-
stance, in cultural heritage, classes may correspond to dif-
ferent artifacts (weapons, armors, decorations, etc.), while
properties can describe features of the artifacts (types of
swords, materials used to make jewelry, etc.). Domain
classes and properties are defined in a domain ontology,
which is determined by a particular Semantic Web3D ap-
plication. Domain ontologies can be subject to domain-
specific meta-queries (cf. Section 3.2) for information re-
trieval (cf. Section 3.3/6).
Collections of information about particular domain objects
and properties that build 3D models and scenes using classes
and properties defined in a domain ontology are referred
to as domain knowledge bases. Domain knowledge bases
may be created by content authors within domain-oriented
3D content creation (cf. Section 3.3/2) or automatically
generated from 3D knowledge bases via discovering do-
main knowledge (cf. Section 3.3/5). Domain knowledge
bases can be subject to domain-specific concrete queries
(cf. Section 3.2) for query-based 3D modeling, editing and
information retrieval (cf. Section 3.3/3 and 6).
Ontologies at both levels of specificity are aligned using
mapping ontologies. A mapping ontology is a specification of
how domain-specific classes and properties are represented
by 3D-specific classes and properties. Hence, it enables vi-
sualization of domain-specific concepts. A mapping ontol-
ogy is created by a content author or automatically generated
by machine learning techniques within generating mappings
(cf. Section 3.3/4). A mapping ontology is a specialization
of the Mapping Meta-Ontology, which defines basic, general
concepts for mapping. Classes and properties of a mapping
ontology are inherited from classes and properties of the Map-
ping Meta-Ontology. They are specific to a particular Seman-
tic Web3D application. An individual mapping ontology is
used for a distinct pair of a 3D ontology and a domain on-
tology. Hence, it can be reused for different 3D models and
scenes built with these ontologies.
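For illustration, a mapping ontology fragment could take roughly the following Turtle form; all names below (the mmo: meta-ontology terms, the dom: domain terms and the x3do: terms) are hypothetical placeholders, not terms defined in this paper:

```turtle
@prefix mmo:  <https://example.org/mapping-meta#> .
@prefix dom:  <https://example.org/cultural-heritage#> .
@prefix x3do: <https://example.org/x3d#> .
@prefix :     <https://example.org/mapping#> .

# Hypothetical mapping: domain-level swords are visualized
# as X3D shapes; the sword material determines the texture.
:SwordMapping a mmo:ClassMapping ;
    mmo:mapsDomainClass dom:Sword ;
    mmo:maps3DClass     x3do:Shape .

:SwordMaterialMapping a mmo:PropertyMapping ;
    mmo:mapsDomainProperty dom:madeOf ;
    mmo:maps3DProperty     x3do:hasTexture .
```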
Knowledge bases at both levels of specificity are linked
by a mapping knowledge base, which is a collection of in-
formation about how particular domain-specific objects and
properties are represented by particular 3D-specific objects
and properties. For such a specification, classes and properties defined in the corresponding mapping ontology are used.
Hence, a mapping knowledge base specifies visual represen-
tations of particular domain objects in a 3D scene, e.g., cars,
exhibits and appliances. It is automatically generated during
domain-oriented 3D content creation (cf. Section 3.3/2).
3.2. Queries to the Representation
Possible queries to the ontology-based representation of 3D
content may be distinguished in terms of the target dataset
type, specificity level, encoding standards used, and initiated
activity. These four classifications are orthogonal, i.e. every
query fits all of them.
1. Classification of queries in terms of the target dataset type:
(a) Meta-queries are about schemes of 3D models and
scenes, e.g., data types of properties of particular 3D
components, classes of components for which particu-
lar properties are used, specializations and hierarchies
of components.
(b) Concrete queries are about particular 3D models and
scenes, e.g. the distance between two objects in a
scene, the number of objects of a particular class in
a scene, the value of an object property.
2. Classification of queries in terms of the specificity level:
(a) 3D-specific queries are related to 3D components and
properties, e.g., the number of vertices and faces of a
model, the period of an animation, the color of a material.
(b) Domain-specific queries are related to a particular do-
main for which the target model or scene has been cre-
ated, e.g., the age of a virtual museum exhibition, the
species of plants in a virtual garden, the functionality
of virtual home appliances.
3. Classification of queries in terms of the encoding standards used:
(a) SPARQL queries are encoded in SPARQL [35], which
is the primary query language for ontologies and knowl-
edge bases on the Semantic Web.
(b) RDF/RDFS/OWL queries are knowledge bases combined with the target dataset (ontology or knowledge base) and then used to accomplish reasoning. RDF-, RDFS- and OWL-based queries have the same encoding as the target datasets. On the one hand, this makes the solution syntactically more uniform than using SPARQL and liberates content consumers from applying additional software for query processing. Moreover, it makes it possible to determine the computational properties of the overall dataset, in particular decidability. On the other hand, since RDF, RDFS and OWL are knowledge representation formats rather than query languages, they lack some query-specific constructs that are available in SPARQL, e.g., ordering, limiting the number of results and selecting only distinct results. In addition, they do not permit numerical operations.
4. Classification of queries in terms of the initiated action:
(a) Information retrieval provides information about 3D
models or scenes, e.g., get the coordinates of a shape,
get the trajectory of a moving object.
(b) Modeling and editing 3D content creates or modi-
fies 3D models or scenes, e.g., add a shape to a scene,
change the trajectory of a moving object.
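As a sketch of how these classifications combine (the prefixes and property names here are assumed for illustration only), a 3D-specific meta-query and a domain-specific concrete query might look as follows in SPARQL:

```sparql
# (a) Meta-query at the 3D-specific level: which classes of
#     components may carry the translation property?
#     Names are illustrative assumptions.
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX x3do: <https://example.org/x3d#>
SELECT ?class WHERE { x3do:translation rdfs:domain ?class . }

# (b) Concrete query at the domain-specific level: how many
#     dishwashers appear in a virtual exhibition scene?
PREFIX dom: <https://example.org/appliances#>
SELECT (COUNT(DISTINCT ?d) AS ?num) WHERE { ?d a dom:Dishwasher . }
```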
3.3. Contexts of Use
The queryable ontology-based representation of 3D content
enables the following activities related to content creation and
analysis (marked by blue arrows in Fig. 1).
1. Knowledge-based 3D modeling, which is a 3D model-
ing process supported by knowledge contained in a 3D
ontology. The result of this activity is a 3D knowledge
base, which represents models or scenes at the 3D-specific
level. The use of a 3D ontology can facilitate modeling
of 3D content, e.g., by suggesting components and proper-
ties, with data types and ranges, that can be set for a par-
ticular object. In contrast to available 3D modeling tools,
which provide proprietary implementations of such func-
tions, ontologies can describe such features in a standard-
ized way, while reasoning engines can process such de-
scriptions using standard, well-known algorithms.
2. Domain-oriented 3D content creation, within which 3D
content is created using a domain ontology with domain-
specific classes, objects and properties, without appealing
to 3D-specific classes, objects and properties (like in typi-
cal 3D modeling). For instance, a marketing expert designs
an exhibition of home appliances including stoves, dish-
washers and washing machines. In this activity, first, a do-
main knowledge base, which represents models or scenes
at the domain-specific level, is created. Next, due to a
mapping ontology, which determines 3D representations
of domain concepts, final 3D scenes are generated upon
the domain knowledge base.
3. Query-based 3D modeling and editing, in which con-
crete queries are issued by content consumers to create or
edit content at different specificity levels—using 3D or do-
main knowledge bases. Such queries can specify new or
modify existing objects and properties, e.g., move an arti-
fact to a museum room with a collection dated to the ap-
propriate historical period.
4. Generating mappings may be useful for domain ontolo-
gies that have no mapping ontologies linking them to 3D
ontologies. Therefore, they cannot be used for domain-
oriented content creation, query-based modeling and edit-
ing, or information retrieval. However, there are some ex-
amples of mapping knowledge bases linking domain knowl-
edge bases to 3D knowledge bases. In such a case, ma-
chine learning software can generalize the available ex-
amples to produce a mapping ontology. For instance, the availability of multiple examples of 5 regularly arranged shapes may allow inferring how a table can be constructed (a countertop and 4 legs).
5. Discovering domain knowledge can be useful for 3D knowledge bases that have no associated domain knowledge bases, because they have been modeled by content authors (knowledge-based 3D modeling—p. 1) or automatically generated from models and scenes encoded in 3D formats (transforming 3D content—p. 8). Since this activity requires a mapping ontology, it can follow generating mappings.
6. Information retrieval is possible from ontologies (about
schemes of content) and knowledge bases (about individ-
ual models and scenes) at different specificity levels. For
example, select positions of emergency vehicles in a vir-
tual city.
7. Validating 3D content allows content authors and consumers to automatically verify the correctness of 3D models and scenes at different specificity levels against corresponding 3D and domain ontologies, in particular: the use of appropriate classes as well as data types and cardinality of properties. Content validation can be performed by standard reasoning algorithms for RDF, RDFS and OWL implemented by semantic environments, e.g., plugins to Protégé [2]. For instance, a virtual car must have 4 wheels; the vertices of a mesh must form polygons.
8. Transforming available 3D content to semantic 3D con-
tent, which is enabled by automatic transformation of 3D
format schemes to 3D ontologies, and automatic transfor-
mation of 3D content encoded in the formats to 3D knowl-
edge bases compliant with these ontologies. XSLT can
be used to transform XML-based 3D formats and content,
e.g., in case of X3D, whereas the Data Format Description
Language (DFDL) [6] can be used for any (textual or bi-
nary) format and content. This opens new opportunities to
convert the available repositories and libraries of 3D con-
tent to their semantic equivalents, thus enabling the range
of new operations on content described in this section.
9. Rendering ontology-based 3D scenes can be done in two ways:
(a) Maintaining the conformance of 3D ontologies to their
underlying 3D formats will enable transformation of
3D knowledge bases (compliant with the ontologies)
to 3D scenes encoded in the formats. This will inte-
grate our approach with the currently available tech-
nologies and enable 3D visualization with a number of
well established, efficient content browsers. However,
final 3D content encoded in a 3D format can no longer
be subject to reasoning and queries.
(b) The development of semantic 3D browsers is possible
to permit direct visualization of 3D knowledge bases.
In such a case, transformation of the content could be
implicitly accomplished within a browser, while main-
taining the possibility of semantic reasoning and queries
over dynamically changing content properties with their
temporal values, e.g., the volatile position of an object
moving in a 3D scene.
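The validation activity listed above (point 7) can be sketched as an OWL cardinality restriction; the dom: class and property names below are illustrative assumptions, not terms from the ontologies discussed in this paper:

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix dom:  <https://example.org/vehicles#> .

# Hypothetical constraint: a virtual car must have exactly 4 wheels.
# A standard OWL reasoner can flag individuals violating it.
dom:Car rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty  dom:hasWheel ;
    owl:cardinality "4"^^xsd:nonNegativeInteger
] .
```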
4. The X3D Ontology

The X3D Ontology [44], which is an RDF/RDFS/OWL doc-
ument, is a 3D ontology we have developed for the Semantic
Web3D approach. It is the successor to the 3D Modeling On-
tology (3DMO) [24]. 3DMO has been developed manually
based on the X3D format. Therefore, modifications of the on-
tology necessary to keep its consistency with new versions of
the X3D format were problematic. The goal of the Semantic
Web3D is to provide flexible integration of available 3D tech-
nologies with semantic web technologies. Hence, the X3D
Ontology, as the evolution of 3DMO, is automatically gener-
ated from the X3D schema, which is described by the X3D
Unified Object Model (X3DUOM).
The X3DUOM is a description of the X3D schema, which
is a set of object-oriented interfaces for X3D nodes and fields
[45]. The X3DUOM is encoded as an XML document that
contains a list of the names of the X3D nodes, interfaces and
fields, information about inheritance of the nodes and fields,
and the fields' data types. This is useful to implement various encodings of X3D as well as bindings to programming languages.
The X3D Ontology is generated using an XSL transfor-
mation [43]. A fragment of the XSLT document, whose output is in the Turtle format, is presented in Listing 1. The code transforms X3D
XML elements to declarations of individual classes in the on-
tology. It processes every XML element (line 1) by extracting
its name attribute (2) and printing it as the subject of a new
RDF statement in the ontology. The subject is a new class
within the local namespace in the ontology (3–4). The predi-
cate in the statement is a(5), which is a shorthand notation for
rdf:type. The object in the statement is owl:Class (6).
In addition, if the processed XML element has sub-elements
with the path InterfaceDefinition/Inheritance,
including the baseType attribute (7), it is used to specify
the superclass of the class (8–11).
Listing 1. A fragment of the XSLT document describing transformation of the X3DUOM to the X3D Ontology in Turtle.

1  <xsl:template match="*"> <!-- process each element -->
2  <xsl:variable name="elementName" select="@name"/>
3  <xsl:text>:</xsl:text><!-- local namespace -->
4  <xsl:value-of select="$elementName"/>
5  <xsl:text> a </xsl:text>
6  <xsl:text>owl:Class</xsl:text>
7  <xsl:if test="(string-length(InterfaceDefinition/Inheritance/@baseType) > 0)">
8  <xsl:text> ;&#10; </xsl:text><!-- new line -->
9  <xsl:text>rdfs:subClassOf </xsl:text>
10 <xsl:text>:</xsl:text><!-- local namespace -->
11 <xsl:value-of select="InterfaceDefinition/Inheritance/@baseType"/>
An example of an X3DUOM fragment transformed us-
ing the XSLT document is presented in Listing 2. Like every
element, Shape (line 1) is transformed to a class, while infor-
mation about the inheritance of the Shape node (its
baseType, line 3) is transformed to the superclass speci-
fication. The resulting statements are:
:Shape a owl:Class ; rdfs:subClassOf :X3DShapeNode .
Listing 2. A fragment of the X3DUOM document describing
the X3D Shape node.
1 <ConcreteNode name="Shape">
2 <InterfaceDefinition specificationUrl="https://www.web3d
3 <Inheritance baseType="X3DShapeNode"/>
Fragments of the generated hierarchies of classes as well as object and datatype properties of the X3D Ontology visualized in the Protégé ontology editor are depicted in Fig. 2.
Another XSLT document has been developed to enable
transformation of X3D scenes to X3D knowledge bases com-
pliant with the X3D Ontology.
Fig. 2. Hierarchies of classes as well as object and datatype properties of the X3D Ontology presented in Protégé.
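As a hedged illustration of this scene-level transformation (the attribute-to-property correspondences are simplified here and the fragment is a reconstruction, not the original scene file), statements like those later shown for the altar could originate from an X3D fragment of roughly this form:

```xml
<!-- Illustrative X3D fragment: a transform containing a textured box.
     Node names and values follow the altar example; the exact markup
     of the original scene is an assumption. -->
<Transform DEF="Colonna1" translation="0.7 0 -0.7">
  <Shape DEF="woodenElement1">
    <Appearance DEF="WoodAppearance">
      <ImageTexture url=".../Wood.jpg"/>
    </Appearance>
    <Box size="0.4 1.2 0.4"/>
  </Shape>
</Transform>
```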
5. Query Examples

In this section, we present an example of transforming an X3D scene to an X3D knowledge base compliant with the X3D Ontology. The scene presents the San Carlos Cathedral in Monterey, CA, USA (Fig. 3).
Listing 3 includes a fragment of the generated X3D knowl-
edge base, covering some scene properties as well as the altar.
The scene has a background with a sky color represented by
an RDF list of values (lines 3–6). In addition, there is a trans-
form node applied to a shape that is a wooden element of the
altar (7–11). The shape of the element is determined by a box
with a given size (12–14). Like sky color, translation and size
are also represented by RDF lists. In addition, the element
has appearance with an image texture (15–18).
Listing 3. A fragment of an X3D knowledge base describing
the altar in the San Carlos Cathedral.
1 # Prefixes 'x3do', ':', 'rdf' and 'owl' indicate the X3D
2 #   Ontology, the knowledge base, RDF and OWL, respectively.
3  :scene rdf:type owl:NamedIndividual , x3do:Scene .
4  :scene x3do:hasBackground :background .
5  :background rdf:type owl:NamedIndividual , x3do:Background ;
6    x3do:skyColor (0.7216 0.8 0.9922) .
7  :scene x3do:hasTransform :Colonna1 .
8  :Colonna1 rdf:type owl:NamedIndividual , x3do:Transform ;
9    x3do:translation (0.7 0 -0.7) .
10 :Colonna1 x3do:hasShape :woodenElement1 .
11 :woodenElement1 rdf:type owl:NamedIndividual , x3do:Shape .
12 :woodenElement1 x3do:hasBox :woodenElement1Box .
13 :woodenElement1Box rdf:type owl:NamedIndividual , x3do:Box ;
14   x3do:size (0.4 1.2 0.4) .
15 :woodenElement1 x3do:hasAppearance :WoodAppearance .
16 :WoodAppearance rdf:type owl:NamedIndividual , x3do:Appearance .
17 :WoodAppearance x3do:hasTexture :Wood .
18 :Wood rdf:type owl:NamedIndividual , x3do:ImageTexture ;
     x3do:url ".../Wood.jpg" .
Fig. 3. An X3D model of the San Carlos Cathedral (Monterey, CA, USA): a view from outside and the altar.
Every X3D knowledge base can be subject to semantic
queries. The following SPARQL query provides the number
of shapes composing the altar. The result of the query is: 14.
SELECT (count(distinct ?shape) as ?num) WHERE {
  ?shape rdf:type x3do:Shape . }
The following query provides the paths of all textures used
within the scene. The result is the wood texture:
.../Wood.jpg (cf. Listing 3, line 18).
SELECT ?textureUrl WHERE {
  ?x x3do:hasTexture ?texture .
  ?texture x3do:url ?textureUrl . }
ORDER BY ASC(?textureUrl)
The following query retrieves the color of the sky used in
the scene. The result is the following list of RGB values:
0.7216 0.8 0.9922 (cf. Listing 3, line 6).
SELECT ?skyColorListVal WHERE {
  ?background rdf:type x3do:Background ;
    x3do:skyColor/rdf:rest*/rdf:first ?skyColorListVal . }
6. Conclusions and Future Work

In this paper, we have presented the concept of the Semantic
Web3D approach, which has been developed by the X3D Se-
mantic Web Working Group. The approach enables compre-
hensive ontology-based representation of 3D content at differ-
ent specificity levels, which integrates with available 3D tech-
nologies. This sets directions to a variety of new 3D/VR/AR
applications in different domains.
The primary implementation of the approach described in this paper encompasses the XSL transformation of the X3DUOM to the X3D Ontology, the XSL transformation of X3D scenes to X3D knowledge bases, and test queries against the ontology and knowledge bases. We plan to continue the development of DFDL-based transformations of other textual and binary 3D formats, tools for semantic 3D scene validation, and semantic 3D browsers that directly render 3D knowledge bases.
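The scene-to-knowledge-base conversion described above is performed by the Web3D Consortium's XSL export stylesheet. Purely as an illustration of the general mapping pattern (not the actual implementation), the following Python sketch turns a small X3D XML fragment into Turtle-like triples, using node and field names taken from Listing 3:

```python
import xml.etree.ElementTree as ET

# Illustrative sketch only: the real Semantic Web3D conversion uses the
# Web3D Consortium's XSL export stylesheet. This maps one X3D fragment
# to triples following the naming pattern of Listing 3.
x3d_fragment = """
<Transform DEF="Colonna1" translation="0.7 0 -0.7">
  <Shape DEF="woodenElement1">
    <Box DEF="woodenElement1Box" size="0.4 1.2 0.4"/>
  </Shape>
</Transform>
"""

def to_triples(xml_text):
    triples = []
    def visit(elem):
        name = elem.get("DEF", elem.tag)
        # Each X3D node becomes an individual typed by its X3D node class.
        triples.append((f":{name}", "rdf:type", f"x3do:{elem.tag}"))
        # Each field becomes a datatype-like property (values as lists).
        for attr, value in elem.attrib.items():
            if attr != "DEF":
                triples.append((f":{name}", f"x3do:{attr}", f"({value})"))
        # Parent-child nesting becomes an x3do:has<Node> object property.
        for child in elem:
            child_name = child.get("DEF", child.tag)
            triples.append((f":{name}", f"x3do:has{child.tag}", f":{child_name}"))
            visit(child)
    visit(ET.fromstring(xml_text))
    return triples

for s, p, o in to_triples(x3d_fragment):
    print(s, p, o, ".")
```

This reproduces the structure of Listing 3, e.g. `:Colonna1 x3do:hasShape :woodenElement1` and `:woodenElement1Box x3do:size (0.4 1.2 0.4)`, and shows why a purely structural transformation suffices: the X3D scene graph already carries the node types, fields and nesting that the ontology-based representation makes explicit.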
[2] Protégé, 2019.
[3] WebGL.
[4] Unreal engine.
unreal-engine-4, 2019.
[5] Sven Albrecht, Thomas Wiemann, Martin Günther, and Joachim Hertzberg. Matching CAD object models in semantic mapping. In Proceedings ICRA 2011 Workshop: Semantic Perception, Mapping and Exploration, SPME, 2011.
[6] Apache. Data Format Description Language (DFDL)
v1.0 Specification.
docs/dfdl/, 2014.
[7] Marco Attene, Francesco Robbiano, Michela Spagnuolo, and Bianca Falcidieno. Semantic Annotation of 3D Surface Meshes Based on Feature Characterization. In Semantic Multimedia, pages 126–139. Springer Berlin Heidelberg, 2007.
[8] Tim Berners-Lee, James Hendler, and Ora Lassila. The semantic web. Scientific American, 284(5):34–43, May 2001.
[9] W. Bille, B. Pellens, F. Kleinermann, and O. De Troyer. Intelligent modelling of virtual worlds using domain ontologies. In Proceedings of the Workshop of Intelligent Computing (WIC), held in conjunction with the MICAI 2004 conference, pages 272–279, Mexico City, Mexico, 2004.
[10] Yu-Lin Chu and Tsai-Yen Li. Realizing semantic virtual
environments with ontology and pluggable procedures.
Applications of Virtual Reality, 2012.
[11] Leila De Floriani, Annie Hui, Laura Papaleo, May
Huang, and James Hendler. A semantic web environ-
ment for digital shapes understanding. In Semantic Mul-
timedia, pages 226–239. Springer, 2007.
[12] Pierre Drap, Odile Papini, Jean-Chrisophe Sourisseau,
and Timmy Gambin. Ontology-based photogrammetric
survey in underwater archaeology. In European Seman-
tic Web Conference, pages 3–6. Springer, 2017.
[13] Jakub Flotyński, Marcin Krzyszkowski, and Krzysztof Walczak. Semantic Composition of 3D Content Behavior for Explorable Virtual Reality Applications. In Proceedings of EuroVR 2017, Lecture Notes in Computer Science, pages 3–23. Springer, 2017.
[14] Jakub Flotyński and Krzysztof Walczak. Semantic Multi-layered Design of Interactive 3D Presentations. In Proceedings of the Federated Conference on Computer Science and Information Systems, pages 541–548, Kraków, Poland, September 8–11, 2013. IEEE.
[15] Jakub Flotyński and Krzysztof Walczak. Conceptual knowledge-based modeling of interactive 3D content. The Visual Computer, pages 1287–1306, August 2014.
[16] Jakub Flotyński and Krzysztof Walczak. Customization of 3D content with semantic meta-scenes. Graphical Models, 88:23–39, 2016.
[17] Jakub Flotyński and Krzysztof Walczak. Knowledge-based Representation of 3D Content Behavior in a Service-oriented Virtual Environment. In Proceedings of the 22nd International Conference on 3D Web Technology, Web3D '17, pages 14:1–14:10, New York, NY, USA, 2017. ACM.
[18] Jakub Flotyński and Krzysztof Walczak. Ontology-Based Representation and Modelling of Synthetic 3D Content: A State-of-the-Art Review. Computer Graphics Forum, 35:329–353, 2017.
[19] Tom Gruber. Encyclopedia of database systems.
2007.htm, 2009.
[20] Mario Gutiérrez, Alejandra García-Rojas, Daniel Thalmann, Frédéric Vexo, Laurent Moccozet, Nadia Magnenat-Thalmann, Michela Mortara, and Michela Spagnuolo. An ontology of virtual humans: Incorporating semantics into human shapes. Vis. Comput., 23(3):207–218, February 2007.
[21] Evangelos Kalogerakis, Stavros Christodoulakis, and
Nektarios Moumoutzis. Coupling ontologies with
graphics content for knowledge driven visualization. In
VR ’06 Proceedings of the IEEE conference on Vir-
tual Reality, pages 43–50, Alexandria, Virginia, USA,
March 25-29, 2006.
[22] Patrick Kapahnke, Pascal Liedtke, Stefan Nesbigall,
Stefan Warwas, and Matthias Klusch. ISReal: An Open
Platform for Semantic-Based 3D Simulations in the 3D
Internet. In International Semantic Web Conference (2),
pages 161–176, 2010.
[23] Konstantinos Kontakis, Malvina Steiakaki, Kostas
Kapetanakis, and Athanasios G. Malamos. DEC-O: An
Ontology Framework and Interactive 3D Interface for
Interior Decoration Applications in the Web. In Pro-
ceedings of the 19th International ACM Conference on
3D Web Technologies, Web3D ’14, pages 63–70, New
York, NY, USA, 2014. ACM.
[24] Leslie F. Sikos. 3D Modeling Ontology (3DMO).
[25] Yuliana Perez-Gallardo, Jose Luis López Cuadrado, Ángel García Crespo, and Cynthya García de Jesús. GEODIM: A Semantic Model-Based System for 3D Recognition of Industrial Scenes. In Current Trends on Knowledge-Based Systems, pages 137–159. Springer.
[26] Fabio Pittarello and Alessandro De Faveri. Semantic
Description of 3D Environments: A Proposal Based on
Web Standards. In Proceedings of the Eleventh Interna-
tional Conference on 3D Web Technology, Web3D ’06,
pages 85–95, New York, NY, USA, 2006. ACM.
[27] Peter J. Radics, Nicholas F. Polys, Shawn P. Neuman,
and William H. Lund. OSNAP! Introducing the open
semantic network analysis platform. In David L. Kao,
Ming C. Hao, Mark A. Livingston, and Thomas Wis-
chgoll, editors, Visualization and Data Analysis 2015,
volume 9397, pages 38–52. International Society for
Optics and Photonics, SPIE, 2015.
[28] Leslie F. Sikos. A novel ontology for 3D semantics:
ontology-based 3D model indexing and content-based
video retrieval applied to the medical domain. Interna-
tional Journal of Metadata, Semantics and Ontologies,
12(1):59–70, 2017.
[29] Leslie F. Sikos. Description Logics in Multimedia Rea-
soning. Springer Publishing Company, Incorporated, 1st
edition, 2017.
[30] Michela Spagnuolo and Bianca Falcidieno. The Role of
Ontologies for 3D Media Applications, pages 185–205.
Springer London, 2008.
[31] M. Trellet, N. Férey, J. Flotyński, M. Baaden, and P. Bourdot. Semantics for an integrative and immersive pipeline combining visualization and analysis of molecular data. Journal of Integrative Bioinformatics, 15(2):1–19, 2018.
[32] Unity Technologies. Unity, 2019.
[33] George Vasilakis, Alejandra García-Rojas, Laura Papaleo, Chiara Eva Catalano, Francesco Robbiano, Michela Spagnuolo, Manolis Vavalis, and Marios Pitikakis. Knowledge-Based Representation of 3D Media. International Journal of Software Engineering and Knowledge Engineering, 20(5):739–760, 2010.
[34] W3C Consortium. OWL.
syntax/, 2012.
[35] W3C Consortium. SPARQL.
sparql11-query/, 2013.
[36] W3C Consortium. RDF.
concepts/, 2014.
[37] W3C Consortium. RDFS.
schema/, 2014.
[38] W3C Consortium. WebXR.
webxr/, 2019.
[39] Krzysztof Walczak and Jakub Flotyński. Inference-based creation of synthetic 3D content with ontologies. Multimedia Tools and Applications, 78(9):12607–12638, May 2019.
[40] Web3D Consortium. VRML.
MarkUp/VRML/, 1995.
[41] Web3D Consortium. X3D.
X3D.html, 2013.
[42] Web3D Consortium. X3D Semantic Web Working Group. semantic-web/, 2018–present.
[43] Web3D Consortium. Export stylesheet to convert X3D XML models into Turtle RDF/OWL triples.
[44] Web3D Consortium. X3D Ontology for Semantic Web.
semantics.html, 2019.
[45] Web3D Consortium. X3D Unified Object Model. X3DUOM.html, 2019.
[46] Dennis Wiebusch and Marc Erich Latoschik. Enhanced
Decoupling of Components in Intelligent Realtime In-
teractive Systems using Ontologies. In Software Engi-
neering and Architectures for Realtime Interactive Sys-
tems (SEARIS), proceedings of the IEEE Virtual Reality
2012 workshop, 2012.
This book illustrates how to use description logic-based formalisms to their full potential in the creation, indexing, and reuse of multimedia semantics. To do so, it introduces researchers to multimedia semantics by providing an in-depth review of state-of-the-art standards, technologies, ontologies, and software tools. It draws attention to the importance of formal grounding in the knowledge representation of multimedia objects, the potential of multimedia reasoning in intelligent multimedia applications, and presents both theoretical discussions and best practices in multimedia ontology engineering. Readers already familiar with mathematical logic, Internet, and multimedia fundamentals will learn to develop formally grounded multimedia ontologies, and map concept definitions to high-level descriptors. The core reasoning tasks, reasoning algorithms, and industry-leading reasoners are presented, while scene interpretation via reasoning is also demonstrated. Overall, this book offers readers an essential introduction to the formal grounding of web ontologies, as well as a comprehensive collection and review of description logics (DLs) from the perspectives of expressivity and reasoning complexity. It covers best practices for developing multimedia ontologies with formal grounding to guarantee decidability and obtain the desired level of expressivity while maximizing the reasoning potential. The capabilities of such multimedia ontologies are demonstrated by DL implementations with an emphasis on multimedia reasoning applications.