This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI
10.1109/ACCESS.2019.2948115, IEEE Access
Intuitive ontology-based SPARQL
queries for RDF data exploration
1 Visual Analytics and Information Visualization Group, Department of Computer Science and Automation, University of Salamanca, 37002 Salamanca, Spain
2 Visual Analytics and Information Visualization Group, Department of Computer Science and Automation, University of Salamanca, 37002 Salamanca, Spain
3 Austrian Centre for Digital Humanities, Austrian Academy of Sciences, 1010 Vienna, Austria
4 ADAPT Centre, Dublin City University, Dublin, Ireland
5 Austrian Centre for Digital Humanities, Austrian Academy of Sciences, 1010 Vienna, Austria
6 Visual Analytics and Information Visualization Group, Department of Computer Science and Automation, University of Salamanca, 37002 Salamanca, Spain
Corresponding author: Alejandro Rodríguez Díaz
This research was funded by the Nationalstiftung of the Austrian Academy of Sciences under the funding scheme: Digitales kulturelles
Erbe, grant number DH2014/22, as part of the exploreAT! project, carried out in collaboration with the VisUSAL Group, Universidad de
Salamanca, Spain and the ADAPT Centre for Digital Content Technology at Dublin City University, Ireland which is funded under the
Science Foundation Ireland Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development
Fund. This work was partially supported by the CHIST-ERA programme under national (MINECO Spain) Grant PCIN-2017-064.
ABSTRACT This paper introduces a strategy for both the retrieval and analysis of linked open data (LOD) based on the use of visual tools. Retrieving and understanding data from triplestores (e.g., through SPARQL endpoints) requires technical knowledge and proves challenging with large datasets, and the mental overload increases when unknown ontologies are involved in the creation of complex queries. Both problems benefit greatly from visual techniques that allow these tasks to be executed in an easier and more intuitive manner. These techniques have already been applied to each problem separately; however, we propose combining them to lower the complexity of triplestore data retrieval and empower its exploration and analysis through specific data visualizations. To demonstrate the suitability of this strategy, a web-based tool was implemented. It allows for the creation of interactive queries using node-link ontology representations, and a subsequent filtering and analysis through a configurable dashboard with different data visualizations. This paper also presents a user-centred design process and evaluation of the proposed tool.
INDEX TERMS Semantic Web, Visualization, Software tools, Linked Open Data
I. INTRODUCTION

VISUALIZATION is a key human capability for understanding information [1]. Visual representations are aimed at leveraging the user's understanding of datasets. The rapid understanding of online databases is a well-known problem in the visualization of Semantic Web resources.
While the majority of approaches have focussed on either the visualization of ontologies or the underlying linked open data (LOD), methods offering a holistic view of semantically-enabled datasets by combining these two aspects are scarce. In particular, combining several views that offer different perspectives on the same dataset is supported by the use of linked views and dashboards [2], [3]. Previous work has differentiated two tasks in the exploration of data using the Resource Description Framework (RDF) standard: the retrieval and the exploration of data. These are two separate tasks, requiring the use of different types of data; thus, they are performed with different tools.
In our study, we focus on the exploration of data exposed through SPARQL endpoints and the asserted ontology where the relationships are defined. Other tools use ontologies to reason about the stored information, applying semantics to the retrieved data and allowing for a more structured and less technologically coupled manner of accessing such data. Often, node-link graphs are used to represent the ontologies. This paper introduces a workflow for retrieving and
exploring RDF data, where a task-specific view is designed to accommodate the user's needs. By performing the tasks in different views, it is possible to take into account task-specific needs and provide visual tools that best suit the mental models and techniques associated with each task. We implemented a web application to demonstrate this approach. This tool provides a three-step workflow where the sources are selected, the data is retrieved by means of visual queries, and a configurable dashboard for exploring the retrieved data is provided. Dashboards can offer an extra degree of flexibility by allowing users to select and combine the visualizations that they consider most relevant to answer the questions at hand. In our case, the SPARQL query language might not be readily accessible for casual and/or inexperienced users due to its steep learning curve, a fact that calls for the creation of novel tools that circumvent this problem. More concretely, there are examples in the literature, such as the work done in [2], [4]–[7], which show how this can be tackled by means of well-established visualization techniques. Thus, one main contribution of the work presented in this paper is the automatic generation of SPARQL queries employing visual tools that are accessible to a broad profile of end users.

Additionally, we conceive the filtering of data as an interactive task done once the user has retrieved the data and is able to explore it, rather than as a previous step done in the database query. This decision leads to a shorter and more intuitive initial phase, where queries only require an entity and attribute selection.
The present article covers both the workflow and its implementation as an interactive tool in the following order: Section II discusses previous work in ontology visualization, RDF data visual exploration, and dashboard usage in the field. This is followed by a presentation of the proposed workflow in Section III. To demonstrate this workflow, an exploratory tool was implemented; thus, in Section IV, the iterative usability testing design process is explained, followed by an in-depth explanation of the tool implementation in Section V, and exemplified with two use cases in Section VI. In Section VII, we discuss the results of the final tool evaluation and how this tool compares, feature-wise, with previous work on the subject. Finally, we present the conclusions in Section VIII.
II. RELATED WORK

The use of visualization tools for Web Ontology Language (OWL) ontologies has been thoroughly addressed before [8]–[11]. One of the major contributions of our paper relates to bringing together three well-separated tasks of semantic visualization that can be found in the literature. These are: 1) ontology visualization, 2) visual query building, and 3) interactive exploration of RDF data. Our initial motivation was to create a platform on which the user is able to create on-demand, configurable dashboards that allow the RDF data to be explored with a minimal informational trade-off.
The two main sources of information used by the platform are OWL-defined ontologies and RDF data. We have mentioned before that the ontologies defined in OWL represent interrelationships of entities within a specific knowledge domain. This knowledge is modelled by means of a) classes, b) sub-classes, and c) individuals, along with their respective specific properties. RDF is a standard modelling language based on a definition of resources given by subject-predicate-object triples that allow for data interchange.
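To make the triple model concrete, the following sketch uses plain Python tuples with illustrative names rather than a real RDF library; SPARQL basic graph patterns generalize exactly this kind of wildcard matching:

```python
# Minimal illustration of the RDF data model: each fact is a
# (subject, predicate, object) tuple; a query is a pattern in which
# None acts as a wildcard variable.
triples = [
    ("ex:Lemma1", "rdf:type", "ex:Lemma"),
    ("ex:Lemma1", "ex:hasSource", "ex:Questionnaire7"),
    ("ex:Questionnaire7", "rdf:type", "ex:Questionnaire"),
]

def match(pattern, store):
    """Return all triples matching a (s, p, o) pattern; None = wildcard."""
    return [t for t in store
            if all(p is None or p == v for p, v in zip(pattern, t))]

# All subjects that are instances of ex:Lemma:
lemmas = match((None, "rdf:type", "ex:Lemma"), triples)
```

A triplestore performs the same pattern matching at scale, with joins between patterns.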
We make use of the ontology to access the RDF database in a visual and user-friendly interface that does not require prior programming experience or knowledge of the underlying query language (SPARQL). In the data exploration process, however, both the task and the tools are different; for that reason, we present a different interface, which makes use of proven techniques that assist with exploring the data from different points of view. Both of the aforementioned tasks require the user to understand the different information and tasks involved in order to create a mental model based on their understanding and interpretation; this mental model will then be used to reason about what is presented and what can be done with the available information. This process of creating the mental model is called visualization and constitutes the most important step for successfully performing a task.
This problem, usually addressed through the visual channel, has been present in the field of the Semantic Web for different tasks: providing visual representations and tools that help the user understand the underlying semantics (usually provided in the form of an ontology) [12], [13], navigating through different datasets available in LOD repositories [14], or performing a more in-depth exploration and analysis of the data [2], [4]–[6], [15].
The visual representation and manipulation of ontologies is commonly solved by means of node-link diagrams. This technique has proven successful in correctly representing the hierarchical structure of an ontology over other potential alternatives. For example, treemaps [16] and circle packing [17] are other popular hierarchy visualizations in which nodes are given more relative importance and, thus, are more applicable to tasks such as finding files in a file system [18] or comparing hierarchical levels. Sunburst visualizations [19] are more suited to showing the different depth levels in a hierarchy. If the aim is to highlight the relationships between the parts, other visualizations such as chord diagrams, Sankey diagrams [20], and arc diagrams [21] have been documented to be more appropriate.
Different node-link diagrams are used depending on the user's needs: Unified Modelling Language (UML) representations for a better understanding of class hierarchies and restrictions [22], spring layouts with different links for representing each type of relationship [4], or combined multiple views, showing the class nodes with the attributes inside of them to easily understand the class
structure along with the hierarchical trees and allow quick access to classes, properties, and scope. Another example of the use of multiple simultaneous views for ontology visualization is the work introduced in [23], which shows different hierarchy-based visualizations (hyperbolic tree, 2.5D node-link graph, icicle tree) to provide both a global and a detailed view for specific classes.
Although the literature is rich with examples of ontology
visualizations, especially for ontology editors and navigators,
less work has been done in visual navigation of RDF sources
using these ontology visualizations.
Three main approaches can be distinguished, whereby the RDF data is navigated, explored, and analysed. A clear example of the first approach is the use of lattices (proposed in [5]), where the RDF data is used to provide a navigation space to reason about the relationships and properties among the entities. Other tools, such as [4], [14], integrate RDF into the ontology visualization, providing an exploration-oriented interface. Examples of the third approach, which provides data access and analysis together in the same application, can be found in [15], where a node-link diagram with the structure of the retrieved data is shown after the SPARQL queries are performed, and a second view is used to select the visualization which best summarizes the data and provides insight.
III. PROPOSED WORKFLOW

We propose a workflow for retrieving and analysing LOD that focusses on aiding new users to perform the tasks and lowering their cognitive load. One key aspect of the workflow is the separation of concerns among views in order to diminish its complexity, as opposed to other approaches that start with complex queries requiring:
- previous in-depth understanding of the data;
- knowledge of query languages such as SPARQL; and
- a well-defined need and question to be answered.

Similar to the strategy for separating task concerns proposed in [2], where three main tasks were identified and a different view was provided for each of them, we present a three-step workflow in Figure 1, as detailed below. Each step requires different actions and is expected to provide different results; thus, using a dedicated view allows for making the best use of the visualizations for each task. This approach is more difficult with other single-view applications. Thus, our workflow consists of:
1) Selecting the sources.
2) Selecting and retrieving the data:
a) Selecting classes and relationships; and
b) Selecting properties.
3) Visually exploring the data, which comprises:
a) Selecting a set of variables and aggregations;
b) Selecting a visualization;
c) Creating and using the visualization; and
d) Filtering data through interactive controls.
As illustrated in Figure 1, the workflow focusses on reaching data visualization, exploration, and filtering early by reducing the steps in the first and second tasks. This reduces the overload of data retrieval and promotes spending more time in the later exploration and analysis.

This is often not encouraged by traditional approaches, where the entry point is usually a complete query that requires extensive knowledge of the stored data and the ontologies used. This approach is error-prone, challenging the researcher to have a clear question and provide a sophisticated query, including all necessary entities, properties, and filters.
Interactive approaches, such as the flowcharts used in [2], aid in the construction of a query. However, the user still needs to know what specific attributes and filters can be applied. We believe the retrieval phase should be simple enough to be used iteratively, abstracting the selection process to better show how the selection relates to the underlying ontology structure. This can help to reduce the number of decisions and the input needed from the user. Additionally, the design and implementation of this interface should favour the three basic exploration tasks described in [24]:
- Finding Gestalt: Through projections on the variables, rearranging elements, and zooming (among others), the user can focus on a view and identify Gestalt features, such as outliers, linearities, groups, and discontinuities.
- Posing queries: Being able to answer visually originated questions (from the previous task) through visual means, such as highlighting, colouring, or brushing information related to a subset of the data being explored.
- Making comparisons: Being able to create multiple views and arrange them in such a way that the layout is meaningful for the user and aids in the task of comparing the data.

Figure 1. Proposed workflow. Diagram showing the user's natural workflow and the separation of concerns through the use of different views.
Moreover, the issue of providing multiple visualizations to perform several different tasks in the same view is addressed by Silva et al. in [25], where a taxonomy for classifying tasks and a model for visual analytics dashboards are proposed.

We designed our workflow to promote a top-down strategy for the higher-level tasks, which translates into different views for each one of them: providing first the overview of the ontology, then attribute details as needed, and lastly retrieving data for a specific subset of the data. Meanwhile, as described in [25], the exploration of the data is a bottom-up approach that starts with a selection of data, its exploration involving the three fundamental tasks from [24] mentioned previously, and a refining process that involves reconsidering the chosen data subset and the visualizations created until the research goal is fulfilled. The workflow is detailed in the following subsections.
The sources involved in the analysis are: 1) the data endpoint, which exposes the triples; and 2) the ontology schema, which contains the definition of the entities and the relationships present in the data. Either of these two resources may not be publicly accessible; this may happen due to privacy issues, security concerns, or because the resource is a product in development. Moreover, it is common practice to develop a domain-specific ontology for triplestore datasets. For this reason, it is important to provide a flexible alternative that allows the use of published, private, or local datasets and RDF schemas. The approach we propose allows the user to enter the URL for the endpoint and to upload a local file with the ontology schema. This allows the endpoint to be local or external to the machine, and publicly accessible ontologies can be downloaded directly from the browser.
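As an illustration, a SPARQL endpoint of this kind can be reached with a plain HTTP GET request. The sketch below uses only the Python standard library; the localhost endpoint URL is a placeholder, not the tool's actual backend:

```python
import json
import urllib.parse
import urllib.request

def build_request(endpoint_url, query):
    """Build a GET request for a SPARQL endpoint, asking for JSON results."""
    params = urllib.parse.urlencode({"query": query})
    return urllib.request.Request(
        endpoint_url + "?" + params,
        headers={"Accept": "application/sparql-results+json"},
    )

def run_query(endpoint_url, query):
    """Send the query and return the parsed result bindings."""
    with urllib.request.urlopen(build_request(endpoint_url, query)) as resp:
        return json.load(resp)["results"]["bindings"]

# Building (not sending) a request against a placeholder local endpoint:
req = build_request("http://localhost:3030/ds/sparql",
                    "SELECT * WHERE { ?s ?p ?o } LIMIT 10")
```

Because the protocol is just HTTP, the endpoint can be local or remote, exactly as described above.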
Due to its complexity, the selection and retrieval of RDF data can be intimidating to users. The main cognitive load in this task is knowing in advance what information is available, what data provides this information, and the specific syntax of the query language used. Such needs are often unfulfilled at the moment of consulting the database, for example, due to an exploratory motivation or a lack of understanding of the data structure (the ontology in this case).

The process of creating the queries benefits from a visual approach that abstracts away the query language. However, in order to lower the complexity of this initial step even more, it is necessary to reduce the number of decisions to be made [26]. For example, traditional retrieval using query languages not only involves specifying graphs, entities, and properties, but also any aggregation or filter needed. However, these latter aspects are better set once the user has had the opportunity to explore the data.
One key difference from other approaches (interactive or traditional query-based) is that less understanding of the data is needed regarding what properties are available or the namespaces used. Selecting only unfiltered and unaggregated data lowers the overhead and simplifies this step, postponing filtering to a later phase when the data can be seen and understood, and reducing the number of decisions to make.
This task also benefits from applying Shneiderman's mantra: 'overview first, zoom and filter, then details on demand' [26]. While having all the information available at once may seem to provide more informed decisions, ontologies can grow complex, and properties may be numerous enough to obscure the view, making the visual abstractions too difficult to understand or use. Therefore, data selection can be divided into two different steps: class and relationship selection, and property selection, each of which should take into account the advised task-domain information actions described in the task by data type taxonomy [26]. They represent the seven main abstract actions performed in an interface, and translate into different features depending on the main goal of the view. We provide a set of distinctions for each of the two steps in Table 1. Additionally, the implementation should allow for undoing selections made by the user, and for extracting sub-collections that can be stored for later use and shared with other users; for data selection, this would be in the form of queries.
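As an illustrative sketch (not the tool's actual code), such a stored selection of classes, relationships, and properties can be mechanically translated into a SPARQL query; all class and property URIs below are hypothetical:

```python
def selection_to_sparql(classes, links, attributes):
    """
    Translate a visual selection into a SPARQL query.

    classes:    {variable_name: class_uri} for each selected class node
    links:      [(subject_var, predicate_uri, object_var)] selected relationships
    attributes: [(subject_var, property_uri, value_var)] selected attributes
    """
    patterns = [f"?{v} a <{cls}> ." for v, cls in classes.items()]
    patterns += [f"?{s} <{p}> ?{o} ." for s, p, o in links]
    patterns += [f"?{s} <{p}> ?{o} ." for s, p, o in attributes]
    out_vars = " ".join(f"?{v}" for _, _, v in attributes) or "*"
    return f"SELECT {out_vars} WHERE {{\n  " + "\n  ".join(patterns) + "\n}"

# One class with one attribute selected; no filters or aggregations yet,
# matching the unfiltered, unaggregated retrieval described above.
query = selection_to_sparql(
    classes={"lemma": "http://example.org/Lemma"},
    links=[],
    attributes=[("lemma", "http://example.org/writtenForm", "form")],
)
```

The generated query stays simple on purpose: filters and aggregations are deferred to the exploration phase.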
The different types of data retrieved through SPARQL require different visualization techniques. Moreover, the combined use of these techniques is what helps most to gain insights into the retrieved data. Dashboards allow for such combined use with interactive spaces where different visualizations can be created, arranged, and manipulated. They promote an exploratory approach that would not be possible with a traditional approach. Filters and aggregations can be created visually, rather than relying on the user's understanding of the data to plan such selections in advance.

Examples of online data dashboards are Google Data Studio, Visual.is, and Grafana. However, these tools do not allow for creating queries visually, a feature that helps with the effective use of RDF for analysis.
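For instance, once rows have been retrieved, a dashboard can aggregate and filter them entirely on the client side; a minimal sketch with made-up data:

```python
from collections import Counter

def aggregate(rows, group_by):
    """Count retrieved rows per value of one variable (e.g. for a pie chart)."""
    return Counter(row[group_by] for row in rows)

def apply_filter(rows, var, allowed):
    """Interactive filtering: keep only rows whose value is in the selection."""
    return [row for row in rows if row[var] in allowed]

rows = [
    {"region": "Tirol", "lemma": "Bub"},
    {"region": "Tirol", "lemma": "Dirndl"},
    {"region": "Wien", "lemma": "Beisl"},
]
counts = aggregate(rows, "region")                 # e.g. Tirol -> 2, Wien -> 1
tirol_only = apply_filter(rows, "region", {"Tirol"})
```

Because both operations work on the already-retrieved rows, the user never has to anticipate them in the original query.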
Classes and relationship selection
- Overview: Providing an overview of the collection implies allowing the user to get a sense, at a glance, of how much information there is for a certain class.
- Zoom: The functionality that allows focusing on a specific item of the ontology.
- Filter: A task which allows the user to avoid getting distracted by classes, relationships, and properties that are not accessible or relevant to his/her question.
- Details-on-demand: Showing the specific name of a class and relationship just when the user needs it, in order to reduce clutter.
- Relate: See how different items of the query are related to each other.

Properties selection
- Overview: Provide an overview of the current query including classes, relationships, properties, and order of addition.
- Zoom: Allow the user to focus on items of interest, providing more information upon interaction.
- Filter: Allow the user to filter out information, such as properties that are not relevant to his/her question.
- Details-on-demand: Show only those properties relevant to the selected entities.
- Relate: Help the user relate the selection of data with the whole it belongs to (the ontology schema).

Table 1. Correspondence between actions in the workflow and tasks described in the task by data type taxonomy.
IV. USER-CENTRED DESIGN PROCESS

It is important to take into account how different target groups of users can introduce their own needs, influenced by their research objectives and mental models regarding data presentation and tool interaction. In our case, we used an iterative process which helped to introduce changes to meet users' expectations and allowed for reducing friction in the user experience. It also provided information about the mental process followed by users when performing the tasks; this is often not expressed explicitly, but it can be identified by how they use the tool. An example of this insight is detecting an interest in the results of building visual queries. This led to a feature for displaying the SPARQL query built interactively, an appreciated addition useful for understanding how interactions translate into a database query.
Figure 2. Screenshots from the InvisionApp platform showing two of the designed views of the screen mock-up.
We used usability testing to drive the design and implementation process towards a more successful product, involving 'representative users attempting representative tasks in representative environments' [27]. Each of the two phases had its own test environment and sought to obtain different insights. Evaluation during the design phase was carried out using low-fidelity prototypes, which sought to adapt the workflow to the user's needs. Additionally, we used cognitive walkthroughs for the ongoing evaluation in the implementation phase, with the intention of detecting usability issues regarding the tool's ease of use, similar to the work done in [28]–
1) Design
Evaluation in the early phases of design and development
(known as formative evaluation [31]) was done using low-
fidelity interfaces; therefore, it did not influence the user
response much and made it easier to make changes.
Task Description
1. Select your sources for the data and the ontology.
2. Tell how many different entities there are in the ontology.
3. Select what data you want to explore.
4. You now have some data selected. Can you add a visualization for it?
5. How are entries of a set entity distributed by some attribute? How would you get that information using the tool?

Table 2. Tasks performed by users during the paper prototyping evaluations.
Question Description
1. What entities are you going to start your exploration with?
2. Among all the data you have selected, which entity do you want to use for the visualization?
3. What is your desired visualization?
4. The pie chart may have given an overview of the distribution. Try to add another view to show the same information in another fashion.
5. Now that there are multiple visualizations, try redistributing them.

Table 3. Additional questions asked during the paper prototyping evaluation.
We used paper prototyping during these early stages of the project, as it is a UX design evaluation tool that helps to iterate quickly, easily, and inexpensively through ideas and concepts. This approach is effective if it meets certain conditions, such as having at least 3-4 people for evaluation. The first of the two prototypes was drawn with a graphic editor (Inkscape), printed on paper, and cut into small components which were moved according to the user's interactions. Additional components were drawn on the fly to try new approaches for each task.

A higher-fidelity screen mock-up (see Figure 2) was done using the InvisionApp tool. This second version of the paper prototype was then used on the second day with the same group of people, and it was provided as an online resource to a second (remote) group, using the comment utility of the platform as a way of communicating issues.

Both versions of the prototype were designed to be used in sessions of less than 15 minutes and focussed on the set of tasks listed in Table 2. An additional set of questions (see Table 3) was asked to promote the use of other functionalities.
2) Implementation

Due to limitations on travel distances and schedule flexibility, remote usability testing was carried out after each subsequent iteration of the tool implementation. Each evaluation was carried out with a varying five-person group, with the focus on a specific set of tasks that involved using most of the tool's available functionality.

In addition to information regarding the success of performing said tasks, more qualitative and subjective data regarding ease of use and usability was obtained in the form of comments. This extra information not only allowed for evaluating how well the tool performed, but also for understanding other issues related to the users' own mental models and knowledge. An example of this kind of information was the need for a bubble graph, because users found it easier to interpret than the streamgraph.
The development cycle covered four different iterations, where changes were made according to the issues identified in the intermediate evaluations. The collected feedback was analysed carefully to avoid observations turning into reports and key features changing without the needed impact assessment. The main usability issues and the changes made in each iteration can be seen in Table 4.
V. IMPLEMENTATION

An RDF data retrieval and visual analysis tool was implemented to demonstrate how the workflow could be applied and how the needs previously described could be approached. This tool can be accessed through a public URL. It consists of three main pages: the landing page, the application itself, and a help page where the usage of the tool is explained.
As previously discussed, the proposed workflow can be implemented for any kind of data source, retrieval strategy, and visual catalogue. In this specific implementation, data is accessed from SPARQL endpoints, an ontology schema uploaded from the user's machine is used to query the data, and a set of nine different visualizations allows the visual exploration. This tool has been implemented with web technologies, helping with its distribution and testing. The development of the tool is framed in the exploreAT! project, which addresses the exploration of digital data regarding the German language in Austria. Following the proposed workflow, the application comprises three screens, each of which addresses one of the three tasks identified earlier. The tool has a top bar with controls to look at the sources and data selected, and to undo actions taken while using the application, and a central section which holds the view for the specific task.
Querying the database is an important first step, which determines the information that the user will be able to gain. To this extent, our goal in the present proposal is to help the user create this query easily and consciously. We start with the naive idea of what the user will get after the retrieval: a subset of the whole database which, indeed, will have its underlying relationships and properties given by the same ontology. Thus, making the query visually similar to the node-link diagram that shows the ontology allows the user to understand the relationships between the ontology, the query, and the retrieved data.
A common problem of force-directed node-link diagrams is how poorly they scale with larger amounts of information to be displayed. Therefore, the ontology only shows
the classes and relationships, while class-specific properties
are only shown when the user adds a class to the query.
Both class names and properties (including relationships and attributes) are shown in a simplified form, without the namespace, to make this large amount of information easier on the eye, especially for entry-level users.
The nodes’ sizes in the node-link diagram are used to inform the user about the number of instances of the different classes, as proposed in [5]. In the process of creating
the query, the user starts with an initial node selection, and subsequent nodes are added by clicking on the available links (in blue). This approach helps the user make a query that is supported by the ontology without having to actively take into account link directionality.
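Since the user clicks a link without worrying about its direction, the emitted triple pattern must follow the direction the ontology actually defines. A minimal sketch of that resolution, using our own illustrative edge list rather than any real schema:

```python
# Illustrative ontology edges as (domain, property, range) triples.
ONTOLOGY_EDGES = [
    ("Question", "isQuestionOf", "Questionnaire"),
    ("Questionnaire", "hasAuthor", "Author"),
]

def triple_for_click(class_a: str, class_b: str) -> str:
    """Return an ontology-valid SPARQL triple pattern connecting two
    classes, regardless of the order in which the user picked them."""
    for dom, prop, rng in ONTOLOGY_EDGES:
        if {dom, rng} == {class_a, class_b}:
            return f"?{dom} :{prop} ?{rng} ."
    raise ValueError(f"{class_a} and {class_b} are not linked")

# The user clicks Author after Questionnaire; the pattern still comes
# out in the direction the ontology defines:
print(triple_for_click("Author", "Questionnaire"))
```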
In the lower part of the screen, a similar node-link visualization shows the subset of the classes selected and the attributes that each of them has. By clicking the attribute names, they are added to the query and highlighted in green.
These two views (see the interface in Figure 3) allow the user to
A. Rodríguez-Díaz et al.: Intuitive ontology-based SPARQL queries for RDF data exploration
Observed issues and actions taken to improve the interface:

First iteration
- Issue: Users expected data to follow a numerical or alphabetical order, while the displayed data followed the retrieval order. Action: Ascending numerical or alphabetical order was used by default, with an option to change the sorting.
- Issue: Users found it harder to use visualizations with longer text in the labels. Action: Labels were truncated to a fixed length by default and shown complete in tables and upon mouse hover.
- Issue: Relying just on interactive filters was cumbersome for some users. Action: Textual filtering was implemented for easier searching by attribute and value.
- Issue: There was an interest in the effect of interactions on the resulting query. Action: The generated SPARQL queries are shown through a control in the top bar of the application.

Second iteration
- Issue: Complex visualizations were less understood and, thus, not used frequently. Action: Textual and visual help was provided in the visualization selection control.
- Issue: Users found it distracting to have to restart the app for a new exploration. Action: All views were given controls for going back and forth in the workflow.
- Issue: Less-experienced users found it difficult to deal with RDF-specific aspects, such as ontology prefixes and namespaces. Action: The ontology prefix is inferred from the ontology schema, and RDF-specific details were hidden to help novice users focus more on the data.

Third iteration
- Issue: Users did not find text-based controls explanatory enough for complex tasks such as modifying a hierarchy in circle-packing graphs. Action: More illustrative controls were implemented, showing their effect on the visualization.
- Issue: More explanations were needed when first using the interface. Action: A help page was added, which explains the use of each visualization and showcases different use cases.
- Issue: Users expected URLs appearing in text results to be navigable. Action: URLs were made navigable by click interaction.

Fourth iteration
- Issue: Users tried refreshing the page to see changes when the ontology processing time was too long. Action: A loading widget was added, showing the progress and the current task being done for the ontology load.
- Issue: Users found it confusing to have different available visualizations for each data selection. Action: Visualization descriptions now include a requirements section, showing the unfulfilled requirements for each unavailable visualization.

Table 4. Usability issues identified and changes made in each of the implementation iterations.
Figure 3. Data selection screen. Screenshot with the Oldcan ontology loaded
(see use case in Section VI-A).
perform each of the seven actions described in [26]: they provide an overview of the ontology, show details upon hovering, and allow filtering through relationships, adding classes for property selection, and undoing previous actions. They also relate the query being made to the whole it belongs to by maintaining the same node-link representation, which leverages the mental model the user has for recognizing hierarchical data.
The dashboard page provides an expandable workspace, which is used for placing new visualizations. The window system allows frames to be dragged by their top bar and resized by clicking and dragging their bottom-right corner. This allows the user to create the layout that best fits his/her research needs. An always-present window can be used to give a name to the visualization and select variables or aggregations (see Figure 4).
Figure 4. Variable selection for Vis creation. Screenshot from the tool.
After data is selected, a subset
of visualizations is presented to the user; a different set is proposed based on the type, cardinality, and number of variables selected (the reasons for a visualization not being eligible can be accessed at any moment, next to the visualization miniatures). A total of nine different visualizations have been implemented in the tool: table, bar chart, pie chart, parallel coordinates, violin plot, jitter violin
plot, circle packing diagram, streamgraph, and bubble chart. An additional text field is also provided through which the user can filter by any combination of desired variables.
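The rule that a selection enables only some visualizations can be sketched as a requirements table; the thresholds below are invented for illustration and the real tool's limits may differ:

```python
# Hypothetical per-visualization requirements (illustrative thresholds).
REQUIREMENTS = {
    "bar chart": {"min_vars": 1, "max_vars": 2, "max_cardinality": 100},
    "pie chart": {"min_vars": 1, "max_vars": 1, "max_cardinality": 12},
    "parallel coordinates": {"min_vars": 2, "max_vars": 10,
                             "max_cardinality": 10000},
}

def eligible(num_vars: int, cardinality: int) -> list:
    """List the visualizations whose requirements the current selection
    satisfies; the rest can be shown with their unmet requirements."""
    return [name for name, r in REQUIREMENTS.items()
            if r["min_vars"] <= num_vars <= r["max_vars"]
            and cardinality <= r["max_cardinality"]]

print(eligible(num_vars=1, cardinality=8))    # ['bar chart', 'pie chart']
print(eligible(num_vars=3, cardinality=500))  # ['parallel coordinates']
```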
In order to facilitate the use of more complex visual
techniques for novice users, the selection screen shows a
representation of the resulting visualization and a textual
description that displays upon hovering and explains the use,
result, and possible configuration of the visualization.
This section presents two case studies to demonstrate how the proposed workflow can facilitate retrieving, filtering, and exploring the data. The first use case focuses on easily filtering through data without understanding or writing complicated SPARQL queries, and the second one focuses on the use of an ontology to access the data through a specific point of view.
For this particular use case, the RDF-explorer is employed to
explore data from the Ontology for Lexicographic Data Col-
lection and ANalysis (OLDCAN) [32], [33]. This data is de-
rived from a non-standard legacy language collection (Daten-
bank der Bairischen Mundarten in Österreich/Database of
Bavarian Dialects in Austria [DBÖ]) [34], [35] containing
records for questionnaires, related answers on paper slips,
excerpts of vernacular dictionaries and folklore literature, and
multimedia content covering a period from the early 20th
century until the present.
The OLDCAN ontology is organised around the question-
naires used to collect traditional, cultural and lexicographic
data. Thus, it contains the questionnaire entity at the heart of
the ontology and links the questionnaires to Questions, Au-
thors, Collectors and Respondents. Each of these entities is
linked to the questionnaires using object properties, including
hasQuestion/isQuestionOf, hasAuthor/isAuthorOf, and other
object relationships. Furthermore, questions are linked to different entities, including Answers, PaperSlips, Collectors, etc. Answers, in turn, are linked to Lemmas. Most of these
entities contain their original form in a multimedia format,
such as images, which makes them interconnected with a
specialized entity called Multimedia. The ontology links the
core entities of the collection and further provides several
data properties with the respective domains and ranges.
A detailed description of the ontology is available in [16],
and the current version of the OLDCAN ontology can be
accessed at
OLDCAN is used to semantically annotate the collection, creating approximately 3 million interconnected RDF triples. These triples are generated through an R2RML mapping of a relational database stored in MySQL. This dataset is made available as a REST API, a SPARQL endpoint, and a downloadable file9.
9A REST API is available at
For this use case, four different questions will be answered with the use of the platform:
- What information related to the questionnaires and answers is available for exploration?
- What information can we retrieve from the collection on gender distribution?
- How are questionnaires distributed through time?
- How was the collection conceptualized in terms of cultural knowledge? Which aspects of cultural knowledge can be explored and analysed?
The analysis follows the proposed workflow. An initial entity and property selection is done guided by the visual interface, followed by the creation of views, which can be used to filter the data and answer the questions. The above questions require both the retrieval of detailed information regarding location and topics for questionnaires, and aggregations of the data based on the publication date and the author's gender. This is not possible with a single query using plain SPARQL. Moreover, using a complete SPARQL query with filters for a specific question limits the analysis that can be done for the rest of them. However, the proposed approach delays filtering until details for the retrieved data are available and data visualizations have been applied. This helps to assess which questions can be answered with the current data, thereby allowing for better reasoning about which filters and aggregations are more useful. Another advantage of this technique is how it leverages the retrieved data, allowing the user to focus on different aspects depending on the current research question. This is not possible if multiple complex queries have to be written.
1) Source selection and data retrieval
The very first step for using the dashboard is selecting a local RDF schema (OLDCAN) and the SPARQL endpoint (ds=/oldcan-2) that will be queried. These sources will be used when selecting both the entities and the attributes, and allow performing the two basic tasks of data retrieval and analysis, which involve the selection of graphs, entities, properties, filters, and aggregations for the later detailed listing and information summary. This process benefits greatly from the strategy of making simple visual queries in the first place and applying filters and aggregations through visualizations. Selecting the entities and attributes is done by clicking on a first Answer entity, and then proceeding by clicking on the available (blue) links in order to add the rest of the entities. Once they are added, the visual query in the lower part of the screen allows for adding attributes to the selected entities and proceeding with the retrieval of the data by clicking on 'Go to dashboard'. Not only is the retrieval phase simplified using visual tools, but it also allows for understanding how much information is available for each entity and the properties that it holds. Thus, it allows for answering the first question. For instance, each questionnaire is related
to the entities Author, Question, and Multimedia; it holds
information regarding the year of publication, the type of
questionnaire, its topics, and other information regarding its
physical form, such as its label, notes on the paper, and notes
that could have been written on top of it.
Figure 5. Visual query for retrieving the label, title, topic, and publication year
of questionnaires, and the gender of the author.
With other tools, the user needs to know this information in
advance to retrieve the data. However, the provided interface already shows what can be queried from the endpoint and needs no further filtering. We believe this strategy makes
the selection easier and shorter, lowering the entry barrier
for novice SPARQL users and newer datasets. The example
query shown in Figure 5 would translate into the following
SPARQL query:
PREFIX oldcan: <...>

SELECT ?Answer ?Question ?Questionnaire ?Author
       ?type ?hasTopic ?note ?label
WHERE {
  GRAPH ?a { ?Answer <...> oldcan:Answer } .
  GRAPH ?b { ?Answer oldcan:isAnswerOf ?Question } .
  GRAPH ?c { ?Question oldcan:isQuestionOf ?Questionnaire } .
  GRAPH ?d { ?Questionnaire oldcan:hasAuthor ?Author } .
  GRAPH ?e { ?Answer oldcan:paperSlipRecordClassification
             ?paperSlipRecordClassification } .
  GRAPH ?f { ?Question <...> ?type } .
  GRAPH ?g { ?Questionnaire oldcan:hasTopic ?hasTopic } .
  GRAPH ?h { ?Questionnaire oldcan:note ?note } .
  GRAPH ?i { ?Questionnaire <...> ?label }
}
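The GRAPH-per-pattern shape of the generated query suggests a straightforward serialization from the visual query's edge list. A minimal sketch under that assumption; `build_query` is our own name, not the tool's, and the prefix IRI is a placeholder:

```python
def build_query(prefix_decl: str, select_vars, patterns) -> str:
    """Serialize a visual query into a SPARQL string, wrapping each
    triple pattern in its own named-graph block as in the query above."""
    graph_blocks = [
        f"  GRAPH ?g{i} {{ {s} {p} {o} }} ."
        for i, (s, p, o) in enumerate(patterns)
    ]
    return (prefix_decl + "\n"
            + "SELECT " + " ".join(select_vars) + "\n"
            + "WHERE {\n" + "\n".join(graph_blocks) + "\n}")

q = build_query(
    "PREFIX oldcan: <http://example.org/oldcan#>",  # placeholder IRI
    ["?Questionnaire", "?Author"],
    [("?Questionnaire", "oldcan:hasAuthor", "?Author")],
)
print(q)
```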
2) Interactive exploration and filtering of data
The dashboard screen shows up after data has been retrieved from the SPARQL endpoint, providing the user with an initial view for selecting variables and creating visualizations.
These variables are the retrieved classes, relationships, and
properties, which, for the query in Figure 5, hold information
regarding the questionnaire topics, types and classification,
and the author's gender.
The first view used is a bar chart that makes it easy to see how much data there is for each topic and which cultural topics are present in the collection. We can also see which topics were deemed important at the time of conceptualization by having more than one questionnaire for a specific topic. All visualizations allow filtering by means of click interactions on their elements (such as a bar in a bar chart, a sector in a pie chart, or an area in a streamgraph), applying a filter that selects just the elements that fall under that group. The created filters affect all the views, making them update and possibly enabling visualizations that were previously not eligible because of too much available data.
Figure 6. Dashboard screenshot showing the result of using a filter
component for textual search.
We focus on human-related topics for this specific exploration. Questionnaires are selected by creating a filter view and searching for the term human in the hasTopic property.
After a filter is created, views are automatically updated to
use the resulting data subset. Figure 6 shows a workspace with a pie chart showing gender information, and a table with details about the author, year of publication, and questions per questionnaire.
Further filtering can be done using the author’s gender,
either by clicking the sectors of the pie chart or searching in
the filter component. Using SPARQL would require adding
the following lines to the query:
FILTER(CONTAINS(STR(?topic), "string")) .
FILTER(STR(?gender) = "male/female")
For this specific example, the interface updates to show just the questionnaires related to human topics and published by female authors; thus, it shows the 243 questions that belong to the four questionnaires published in 1920. These questionnaires and questions can be read and further analysed by clicking on the corresponding entry in the table.
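These filters are applied client-side, over the already-retrieved rows, rather than by re-querying the endpoint. A sketch of that behaviour with made-up rows; the field names are illustrative only:

```python
# Invented rows standing in for data retrieved from the endpoint.
rows = [
    {"hasTopic": "human body", "gender": "female", "year": 1920},
    {"hasTopic": "trade",      "gender": "male",   "year": 1921},
    {"hasTopic": "human body", "gender": "male",   "year": 1920},
]

def apply_filters(rows, contains=None, equals=None):
    """Narrow retrieved rows: `contains` maps fields to substrings,
    `equals` maps fields to exact values. All views share the result."""
    out = rows
    for field, sub in (contains or {}).items():
        out = [r for r in out if sub in str(r[field])]
    for field, val in (equals or {}).items():
        out = [r for r in out if str(r[field]) == val]
    return out

subset = apply_filters(rows, contains={"hasTopic": "human"},
                       equals={"gender": "female"})
print(subset)  # [{'hasTopic': 'human body', 'gender': 'female', 'year': 1920}]
```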
As shown, the questionnaires have a topic assigned to them
through the hasTopic property, which allows filtering through
them in order to get relevant answers for said topic. However,
it also allows us to get a sense of how these questionnaires
Figure 7. Dashboard screenshot showing the distribution of questionnaires
according to topic and year.
were conceptualized. In Figure 7, a three-view dashboard layout was used to filter the year 1920 through a bar chart, to show the distribution of document count by topic, and to show each specific group of questionnaires for a topic. A total of 64 questionnaires published in 1920 hold 67 different topics, some of which are shared across questionnaires; such sharing is the least frequent case, as the top-right jitter violin plot shows: each dot represents a topic, while the area to the right represents the number of topics that appear the corresponding number of times. This second visualization shows how most of the topics are related to a single questionnaire, but the topics of wedding, human body, bespoke tailoring, trade, and disease do appear in more questionnaires, indicating a major interest in these topics during 1920 in the corpus that these questionnaires and questions make up.
The third view is a bubble graph in which two aggregations have been selected. The first aggregation allows seeing topics represented by bubbles whose size encodes the number of questionnaires where each topic appears, and the second shows the questionnaires for each topic. This allows filtering by a specific topic and switching to the second aggregation in order to see which questionnaires are involved; for instance, questionnaires 7 to 11 have information related to the topic wedding.
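The two aggregations behind the bubble graph amount to grouping topic-questionnaire links twice; a sketch with invented link data:

```python
from collections import defaultdict

# Invented (topic, questionnaire) links for illustration.
links = [("wedding", "Q7"), ("wedding", "Q8"), ("wedding", "Q11"),
         ("trade", "Q9")]

by_topic = defaultdict(set)
for topic, questionnaire in links:
    by_topic[topic].add(questionnaire)

# First aggregation: bubble size = number of questionnaires per topic.
sizes = {topic: len(qs) for topic, qs in by_topic.items()}
print(sizes)                        # {'wedding': 3, 'trade': 1}

# Second aggregation: drill into one topic to list its questionnaires.
print(sorted(by_topic["wedding"]))  # ['Q11', 'Q7', 'Q8']
```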
The exploration of SPARQL data endpoints is done with a reference RDF/OWL schema, which is used to create the previously discussed visual query. This second case study demonstrates the use of the tool to explore endpoints for which no ontology has been specifically designed; rather, already existing ontologies are used to model the data.
Data regarding TV and radio programmes broadcast by the BBC are accessed through the endpoint provided at the datahub10. The data endpoint is published alongside an RDF schema in Turtle format which, in our proposal, is not supported. However, this ontology is partially based on the friend of a friend (FOAF) ontology11, which can be used to query the endpoint. This shows how the available questions change depending on the ontology used.
10 Visited on 1st April 2019.
Figure 8. Visual query to retrieve data for exploring university-related organizations.
Next, the organizations available in the dataset are explored using the query shown in Figure 8. This class has a manageable amount of information and a wide range of attributes, which can provide information regarding organizations that were involved in the BBC programmes. The questions in this case study were:
1) There is information regarding affiliation. What are these affiliations, and how much information is there regarding them?
2) How many undergraduate students do BBC programme-related universities from America have? What are their names?
3) Explore how the organizations were founded.
1) Religious affiliation of organizations
Figure 9. Religious affiliation for BBC programme-related organizations.
The use of a pie chart (Figure 9) provides a summary of
the organizations that have the information that was queried;
a total of five organizations out of the 3,693 available in the
dataset have a value for this attribute.
11 Visited on 1st April 2019
This information can be expanded by making use of a
table that provides per-entity information, leading to understanding that the majority are related to academia: Episcopal Collegiate School, Friends School of Minnesota, Brooklyn Friends School, Michigan Lutheran Seminary, and Avila College. Additionally, as data in the Semantic Web is defined
with the use of URIs, further analysis can be done by accessing the specific resource by clicking on it in the table.
2) American universities’ size
Interactive filtering can be hard when the attributes' cardinality is too high for the effective use of visualizations. This is the case when selecting universities by their type of organization. For this task, we can use a table as an auxiliary view to get a sense of the retrieved data and then search for entities containing 'American' in their description using the filter component.
A total of 21 universities from America are accessible from the SPARQL endpoint, with the number of undergraduate students ranging from 83 (American Public University System) to 18,656 (Empire State College). By looking at the visualization, two organizations with an unusual student count can be spotted. The first is the American Public University System, an online learning institution offering associate, bachelor's and master's degrees, and other educational programmes. The second is a non-American university present in the current set because it has 'America' in its description; this university, located in Quezon City, Philippines, was founded by the American colonial government on 18th June 1908, and has a total of 60,889 students: 43,927 undergraduates (the attribute selected in the visualization), 14,777 postgraduates, and 2,185 students with no information.
3) Organization founding
When a SPARQL endpoint is queried for attributes, only
entities that have values for such predicates or objects are
retrieved. In this third example, common attributes are used
in order to get information regarding how these organizations
are distributed and how they were founded.
Figure 10. Distribution of organization founding by the year.
The jitter violin plot in Figure 10 shows how the number of founded organizations ranges between 1 and 6 for the majority of years. Further information could be extracted if we
made use of the ’foundingYear’ variable to see its temporal
evolution, a task that can easily be done with streamgraphs.
In this case, organizations are shown alongside their founding year in a visualization where coincident organizations appear stacked. Two main time periods can be seen when a larger number of organizations were founded: from 1950 to 1978, and from 1990 to 2004. If we want to formulate the same question just for companies, we can use a textual filter with the 'type' variable. Figure 11 shows the result of creating this filter, allowing for the completion of the initial task of 'exploring how organizations were founded'. No more than two companies appear stacked; the second such occurrence is in the year 1953, within the first range mentioned. During 1953, the two founded companies were ABC-CLIO and Litton Industries, whose information can be expanded if the 'comment' property is selected as part of the visual query or by clicking on the entity in a table view.
Figure 11. Companies distributed by their founding year on a horizontal axis.
Usability testing is a process for obtaining a measurable and comparable score for the tool's performance (in terms of effectiveness, efficiency, and satisfaction). The evaluation protocol used in this project consisted of 50-minute sessions with different users in which the time to perform basic tasks with the tool was measured, the users' impressions while using the tool were gathered with a concurrent think-aloud protocol, and the perceived satisfaction was quantified using the System Usability Scale (SUS).
The evaluations were carried out with five subjects, ranging from 22 to 50 years old, from different study areas: computer engineering, humanities, and management. All of the intervening users were familiar with some of the technical terms related to databases, although only one had previously worked with SPARQL.
VOLUME 4, 2016 11
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI
10.1109/ACCESS.2019.2948115, IEEE Access
A. Rodríguez-Díaz et al.: Intuitive ontology-based SPARQL queries for RDF data exploration
1. Tell how many different organizations there are in the dataset.
2. I found the system unnecessarily complex.
3. Use a visualization to represent the amount of web pages
related to the lion’s species.
4. Find the founding year for the religious university with the
highest number of students.
5. Use a visualization to see how organization founding changed
through time.
Table 5. Tasks performed by users during the evaluation.
We provided a brief introduction during the first 10 minutes, covering the tool's usage, the dataset being used, and the tasks that they were going to perform. This was followed by 10 minutes of free time for exploration. After this period, the users were given 30 minutes to perform the tasks shown in Table 5, which were timed (see Table 6). These tasks covered understanding the ontology representation, and four different explorations that involved filtering, aggregating data, and changing the data selection.
User Task 1 Task 2 Task 3 Task 4 Task 5
User 1 8” 8’ 7’30” 4’ 4’
User 2 6” 7’20” 5’ 5’2” 3’
User 3 3” 7’10” 5’ 3’24” 3’23”
User 4 4” 9’10” 8’30” 5’30” 6’15”
User 5 2’ 12’43” 6’10” 6’ 5’40”
Mean 28” 8’53” 6’26” 4’53” 5’17”
Table 6. Completion rate and execution times for each task done in the evaluation.
We observed that the first two tasks covering the whole workflow (tasks #2 and #3) took the longest for users to
complete (see Table 6), while the remaining tasks were done
faster. Moreover, users changed the data selection more often
in subsequent tasks, a sign indicating that perhaps an iterative
exploration was more efficient for these tasks12.
As expected, most of the users (four out of five) used
simple visualizations to complete the third task. However,
all the users tried to use more visualizations during the
remaining exploration, involving the custom jitter violin plot and the bubble graph. We associate this observation with an
increase in the users’ confidence using the tool.
After the exploration, we asked users to answer a satisfaction questionnaire following the System Usability Scale [36] (see the questions in Table 7). Applying the SUS metric to the obtained results (see Table 8) provided us with a score that compares favourably with the standard average SUS
score (68). We obtained a mean score of 75 (see Table 9), which corresponds to an adjective rating of good; scores between 68 and 80.3 are rated good, and scores higher than 80.3 are rated excellent.
12Further research would be needed to decide whether the increased performance could be due simply to adaptation to the interface.
1. I think that I would like to use this system frequently.
2. I found the system unnecessarily complex.
3. I thought the system was easy to use.
4. I think I would need the support of a technical person to be
able to use the system.
5. I found the various functions in this system were well integrated.
6. I thought there was too much inconsistency in this system.
7. I would imagine that most people would learn to use this
system very quickly.
8. I found the system very cumbersome to use.
9. I felt very confident using the system.
10. I needed to learn a lot of things before I could get going with
this system.
Table 7. Questions used in the satisfaction questionnaire.
Question Mean SD
1. 3.8 0.75
2. 1.8 0.75
3. 3.4 0.45
4. 1.8 0.75
5. 4.6 0.49
6. 1.8 0.4
7. 4 0.63
8. 1.4 0.49
9. 3.2 0.4
10. 2.2 0.79
Table 8. System-user evaluation results.
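The SUS scoring can be reproduced from the mean responses in Table 8; the following is the standard SUS computation, not code from the tool:

```python
def sus_score(responses):
    """Standard SUS: odd-numbered items contribute (response - 1), even-
    numbered items contribute (5 - response); the sum is scaled by 2.5
    to a 0-100 range."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Mean responses per question from Table 8:
means = [3.8, 1.8, 3.4, 1.8, 4.6, 1.8, 4.0, 1.4, 3.2, 2.2]
print(round(sus_score(means), 1))  # 75.0, the reported mean SUS score
```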
Along with the questionnaires, users were asked about their experiences using the tool. A common feeling was that getting started with the tool was not immediate, although after performing a few tasks it was easy to get used to. They agreed on the practicality of the tool, although a common issue raised regarding the filter component was that users found its location, under the visualization selection view, misleading. They also expected the filter to show results instead of applying them to the other views. However, they easily understood how it worked and applied it to the use cases.
Scale scores: 80, 80, 70, 70, 75 | Mean: 75 | SD: 4.47
Table 9. SUS scale results per evaluation.
This subsection compares the implemented tool with previous work on the subject. Given the initial goal of combining ontology visualization, data retrieval, and data visualization, only a subset of the reviewed works covers a workflow similar to the one proposed. Because these tools could not be used to replicate the tasks, due to unavailability or discontinuation, a feature-wise comparison is provided in Table 10. It can be seen that few of the reviewed works cover most of the evaluated features. The main reason for this is the gap in the existing literature when it comes to tackling two different problems at the same time; much work has been done on ontology editing and visualization, such as [4], [5], [7], [15],
and RDF data visualization [2], [6], [14]. However, fewer
have covered the complete workflow, starting with ontology
exploration, followed by query formulation based on the
learned information, and a later exploration of the retrieved
data, making use of different data visualizations and high-
level tasks such as linked-view manipulation and interactive
filtering (described in [26]). The approach proposed in [25] covers most of the features considered, and uses semantic information (described by an ontology) to assist the process of view selection for data visualization. However, it starts from a data source that is ready to be visualized, in contrast to integrating the query phase in an assistive manner into the tool itself.
Here, a new approach for LOD retrieval and analysis has been
presented. The approach moves the focus from the selection
and retrieval to the exploration and analysis of the data. Some of the decisions made when creating queries with traditional tools are moved to the step where the user has access to the data and is able to make better decisions.
We successfully implemented a tool demonstrating that relying just on visual tools is sufficient to obtain information from data whose retrieval would otherwise have required a complex query. The separation of concerns among views allowed for the use of techniques specific to each of the tasks: it enables seeing how an ontology's topology maps to the data stored in a SPARQL endpoint and making visual queries using that same mental model. It also provides a configurable dashboard interface to perform the next, exploratory, phase of the process. This strategy has been useful for the implementation of the RDF explorer, and it can also be helpful for the future development of other exploratory tools that need different approaches in each step of the process. Finally, the tool was validated by a final user study, which ensured that the proposed design aligned with the users' mental model and permitted a correct exploration of the test dataset.
[1] A. Benito-Santos and R. Therón Sánchez, “Cross-domain visual exploration of academic corpora via the latent meaning of user-authored keywords,” IEEE Access, vol. 7, pp. 98144–98160, 2019.
[2] J. Klímek, J. Helmich, and M. Nečaský, “Payola: Collaborative linked data analysis and visualization framework,” ser. The Semantic Web: ESWC 2013 Satellite Events. Springer Berlin Heidelberg, Conference Proceedings, pp. 147–151.
[3] A. Vázquez-Ingelmo, F. J. García-Peñalvo, and R. Therón, “Information dashboards and tailoring capabilities – a systematic literature review,” IEEE Access, vol. 7, pp. 109673–109688, 2019.
[4] C. Anutariya and R. Dangol, “VizLOD: Schema extraction and visualization of linked open data,” in 15th International Joint Conference on Computer Science and Software Engineering (JCSSE). New York: IEEE, 2018, Conference Proceedings, pp. 465–470.
[5] M. Alam and A. Napoli, “Interactive exploration over RDF data using formal concept analysis,” in Proceedings of the 2015 IEEE International Conference on Data Science and Advanced Analytics (IEEE DSAA 2015), pp. 529–538.
[6] G. Tschinkel, E. Veas, B. Mutlu, and V. Sabol, “Using semantics for interactive visual analysis of linked open data,” in ISWC 2014 Posters & Demonstrations Track, ser. CEUR Workshop Proceedings, vol. 1272, 2014, pp. 133–136.
[7] E. Motta, P. Mulholland, S. Peroni, M. d’Aquin, J. M. Gomez-Perez, V. Mendez, and F. Zablith, “A novel approach to visualizing and navigating ontologies,” The Semantic Web – ISWC 2011, Part I, vol. 7031, pp. 470+, 2011.
[8] T. D. Wang and B. Parsia, “CropCircles: topology sensitive visualization of OWL class hierarchies,” in International Semantic Web Conference. Berlin, Heidelberg: Springer.
[9] S. Krivov, R. Williams, and F. Villa, “GrOWL: A tool for visualization and editing of OWL ontologies,” Web Semantics: Science, Services and Agents on the World Wide Web, vol. 5, no. 2, pp. 54–57, 2007.
[10] S. Lohmann et al., “WebVOWL: Web-based visualization of ontologies,” in International Conference on Knowledge Engineering and Knowledge Management. Cham: Springer.
[11] F. J. García-Peñalvo et al., “Using OWL-VisMod through a decision-making process for reusing OWL ontologies,” Behaviour & Information Technology, vol. 33, no. 5, 2014.
[12] M. Dudas, S. Lohmann, V. Svatek, and D. Pavlov, “Ontology visualization methods and tools: a survey of the state of the art,” Knowledge Engineering Review, vol. 33, p. 39, 2018.
[13] S. Lohmann, S. Negru, F. Haag, and T. Ertl, “Visualizing ontologies with
vowl,” Semantic Web, vol. 7, no. 4, pp. 399–419, 2016.
[14] D. Valerio Camarda, S. Mazzini, and A. Antonuccio, “LodLive, exploring the web of data,” 2012.
[15] S. Hartmann, F. Hallay, L. Brinkschulte, J. Rebstadt, L. Gesterkamp,
A. Enders, N. Kewitz, and R. Mertens, “Aspect-oriented visual ontology
editor (avoned): Visual language, aspect-oriented editing concept and
implementation,” International Journal of Semantic Computing, vol. 11,
no. 2, pp. 229–274, 2017.
[16] B. Johnson and B. Shneiderman, “Tree-maps: A space-filling approach to
the visualization of hierarchical information structures,” Proceedings of
the IEEE Visualization ’91, pp. 284–291, 10 1991.
[17] B. Rodin and D. Sullivan, “The convergence of circle packings to the Riemann mapping,” Journal of Differential Geometry, vol. 26, 1987.
[18] B. B. Bederson, B. Shneiderman, and M. Wattenberg, “Ordered and quantum treemaps: Making effective use of 2D space to display hierarchies,” in The Craft of Information Visualization, ser. Interactive Technologies, B. B. Bederson and B. Shneiderman, Eds. San Francisco: Morgan Kaufmann, 2003, pp. 257–278.
[19] J. Stasko and E. Zhang, “Focus+context display and navigation techniques for enhancing radial, space-filling hierarchy visualizations,” IEEE Symposium on Information Visualization 2000 (INFOVIS 2000), Proceedings, pp. 57–65, 2000.
[20] D. Willis, W. W. Braham, K. Muramoto, and D. A. Barber, Energy
accounts: Architectural representations of energy, climate, and the future.
Routledge, 2016.
[21] T. L. Saaty, “The minimum number of intersections in complete graphs,” Proceedings of the National Academy of Sciences of the United States of America, vol. 52, no. 2, p. 688, 1964.
[22] Z. Cai, K. Shi, and H. Yang, “A novel visualization for ontologies of
semantic web representation,” in 2015 International Conference on Com-
putational Intelligence and Communication Networks (CICN), Dec 2015,
pp. 1371–1374.
[23] K. Nazemi and D. Burkhardt, “Visual analytical dashboards for comparative analytical tasks – a case study on mobility and transportation,” Procedia Computer Science, vol. 149, pp. 138–150, 2019, ICTE in Transportation and Logistics 2018 (ICTE 2018).
[24] A. Buja, D. Cook, and D. F. Swayne, “Interactive high-dimensional data visualization,” Journal of Computational and Graphical Statistics, vol. 5, no. 1, pp. 78–99, 1996.
[25] I. C. S. Silva, G. Santucci, and C. M. D. S. Freitas, “Visualization and analysis of schema and instances of ontologies for improving user tasks and knowledge discovery,” Journal of Computer Languages, vol. 51, pp. 28–47, 2019.
[26] B. Shneiderman, “The eyes have it: a task by data type taxonomy for
information visualizations,” in Proceedings 1996 IEEE Symposium on
Visual Languages, Sep. 1996, pp. 336–343.
Feature Our [23] [25]
Ontology visualization and data retrieval
Uses ontology X X - X - X - X - - - X
Uses RDF data X X X - - X - X X - -
Ontology schema visualization X X - X - X - X - - - X
Ontology shown along with data X X - - - - - - - - -
Handle equivalent classes X X - - X - X - - - X
Handle subclass properties X X - X X - X X - - X
Class selection X X - - X - X - - - X
Property selection X - X - X - X - - - X
The user can query data X - X - - X - - X X X
Data exploration and visualization
Raw data visualization X X - - - X - X - - -
Multiple visualization options X - X - - X - - X X X
Simultaneous views X - - - - - - - X X X
Linked views functionality X - - - - - - - - - X
Textual filters X - X(a) - X(b) X - - - - X
Interactive filters X - - - - X - X X X X
a) Done in the process of querying.
b) Filtering is done on the ontology classes, not on the retrieved data.
Table 10. Feature comparison between the proposed tool and previous work.
[27] J. Lewis, “Sample sizes for usability studies: Additional considerations,” Human Factors, vol. 36, pp. 368–378, 1994.
[28] J. R. Tidal, “One site to rule them all, redux: The second round of usability
testing of a responsively designed web site,” 2017.
[29] M. Jacobs, I. Henselmans, D. L. Arts, M. ten Koppel, S. S. Gisbertz, S. M.
Lagarde, M. I. van Berge Henegouwen, M. A. G. Sprangers, H. C. J. M.
de Haes, and E. M. A. Smets, “Development and feasibility of a web-based
question prompt sheet to support information provision of health-related
quality of life topics after oesophageal cancer surgery,” European Journal
of Cancer Care, vol. 27, no. 1, p. e12593, 2018.
[30] M. Stojmenović, T. Oyelowo, A. Tkaczyk, and R. Biddle, “Building website certificate mental models,” in Persuasive Technology, J. Ham, E. Karapanos, P. P. Morita, and C. M. Burns, Eds. Cham: Springer International Publishing, 2018, pp. 242–254.
[31] J. Lazar, J. H. Feng, and H. Hochheiser, Research methods in human-
computer interaction. Morgan Kaufmann, 2017.
[32] Y. Abgaz, A. Dorn, B. Piringer, E. Wandl-Vogt, and A. Way, “Semantic modelling and publishing of traditional data collection questionnaires and answers,” Information, vol. 9, no. 12, 2018.
[33] Y. Abgaz, A. Dorn, B. Piringer, E. Wandl-Vogt, and A. Way, “A semantic model for traditional data collection questionnaires enabling cultural analysis,” in Proceedings of the LREC 2018 Workshop “6th Workshop on Linked Data in Linguistics (LDL-2018)”, Miyazaki, Japan, 2018.
[34] Österreichische Akademie der Wissenschaften, “Datenbank der bairischen
Mundarten in Österreich [Database of Bavarian Dialects in Austria]
(DBÖ),” 2018.
[35] E. Wandl-Vogt, “Datenbank der bairischen Mundarten in Österreich electronically mapped (dbo@ema),” beschreibung/, 2012, retrieved January 17, 2018.
[36] J. Brooke, “SUS – a quick and dirty usability scale,” in Usability Evaluation in Industry, 1996.
Computer Engineering graduate from the University
of Salamanca, where he is completing his master’s
degree in Intelligent Systems. He is currently part
of the technical and support staff at the Visual
Analytics Group VisUSAL (within the Recog-
nized Research Group GRIAL), where he aids
in the design of new visualization techniques for
leveraging the understanding of complex data in
digital humanities. His areas of interest are data
visualization, process automation, and graphic design. Alejandro has special
interest in the design of creative and interactive tools for multivariate data.
Assistant and Lecturer at the Department of Com-
puter Science and Automation at the University
of Salamanca (Spain), which he joined in 2016.
Alejandro completed his BSc in Computer Engi-
neering at the same university, from which he also
obtained a MSc in Intelligent Systems in 2016.
He is a member of the Visual Analytics Group
VisUSAL (within the Recognized Research Group
GRIAL), where he is currently completing his
PhD under the supervision of Dr. Roberto Therón. In his thesis, he applies
visual analytics in a broad range of interdisciplinary research contexts such
as the digital humanities, sports science or linguistics. His interests lie in the
areas of human-computer interaction, design, statistics and education. He
has taught HCI and Introduction to Python Programming for Statisticians at
the Faculty of Sciences of Salamanca in the past.
AMELIE DORN earned her PhD in Linguistics
from Trinity College Dublin, Ireland (2014) fo-
cusing on the analysis and description of the in-
tonation of Donegal Irish, a Northern variety of
Irish. She further holds an MPhil in Linguistics
from Trinity College Dublin (TCD), and an MA
in European Studies and a BA in French, Span-
ish and Portuguese both from University College
Dublin (UCD). Since 2015, Amelie Dorn has been a Senior Researcher at the Austrian Centre for Digital Humanities at the Austrian Academy of Sciences (ACDH-OeAW). Within
the core unit “Methods and Innovation”, she works in exploration space @ ACDH, discovering processes to open up cultural collections. She also acts as
a principal investigator in the ChIA project – accessing & analysing cultural
images with new technologies (2019-2021). Amelie Dorn has experience and knowledge in Open Innovation methods and practices, having been a selected member of the 2nd Lab for Open Innovation in Science (LOIS II) at the Ludwig Boltzmann Gesellschaft Vienna (AT), the world’s first training program in Open Innovation in Science.
YALEMISEW ABGAZ earned his PhD in computer science from Dublin City University (2013), focusing on ontology evolution and the analysis of the impact of changes, a Master’s degree in information science, and a Bachelor’s degree in library and information science from Addis Ababa University.
Yalemisew Abgaz is a research fellow at the ADAPT Centre at Dublin City University, working in the area of Semantic Web technologies. He is affiliated with the Austrian Centre for Digital Humanities at the Austrian Academy of Sciences, and he is a principal investigator of the ChIA project – accessing & analysing cultural images with new technologies. Yalemisew is
also an associate faculty at National College of Ireland, School of Comput-
ing. Before joining Adapt, he was a postdoctoral researcher (2014-2017) at
Maynooth University and was a lecturer at Addis Ababa University (2006-
2008). Yalemisew Abgaz is mainly interested in topics at the intersection
between Semantic Web technologies, knowledge representation and natural
language processing including: ontology development, ontology evolution,
semantic search, semantic publishing, information retrieval, computational
creativity, etc. Yalemisew has an interest in natural language processing and semantic modelling of Amharic digital collections. He serves as a reviewer for various conferences and workshops.
EVELINE WANDL-VOGT is an experimentalist, network facilitator, cultural lexicographer, and founder and coordinator of exploration space (11.2017-), part of the working group “Methods and Innovation” at the Austrian Centre for Digital Humanities at the Austrian Academy of Sciences. The working group is
an open space for innovation and experimentation,
with the aim of stimulating, designing, enabling
and scientifically analyzing new forms of knowl-
edge production at the interface of science, tech-
nology and society. Beyond Humanities, methods of Open Innovation in
Science, Geography and Environmental studies as well as Arts and Design
are applied. She has a multidisciplinary university background (German
Philology, Geography, Informatics, Theater studies, Education Science,
Social Innovation) as well as dedicated experience in Lexicography, Data
Curation and Design Thinking. She is also a distinguished member of the world’s first Lab for Open Innovation in Science (2017-2018). Eveline
Wandl-Vogt is a visiting scholar at Harvard University, Department for
Romance Languages and Literatures (2018-) as well as research affiliate
at metaLAB (at) Harvard at Berkman Klein Center Harvard (2019-). She
acts as European research manager and initiator on various international
boards and serves as expert in various global initiatives, mainly in the area
of technical and social infrastructures, such as ADHO, ALLEA, COST
actions, DARIAH, ECSA, and standardization bodies. Eveline is a network facilitator, passionate about creating cross-sectoral, cross-organizational, value-driven innovation networks of purpose.
ROBERTO THERÓN received the Diploma de-
gree in computer science from the University of
Salamanca, the B.S. degree from the University
of A Coruña, the B.S degree in communication
studies and the B.A. degree in humanities from the
University of Salamanca, and the PhD degree from
the Research Group Robotics, University of Sala-
manca. His PhD thesis was on parallel calculation
of the configuration space for redundant robots. He
is currently the Manager of the VisUSAL Group
(within the Recognized Research Group GRIAL), University of Salamanca,
which focusses on the combination of approaches from computer science, statistics, graphic design, and information visualisation to obtain an adequate understanding of complex datasets. He has authored over 100
articles in international journals and conferences. In recent years, he has
been involved in developing advanced visualisation tools for multidimen-
sional data, such as genetics or paleo-climate data. In the field of visual
analytics, he maintains productive collaborations with internationally recognised groups and institutions, such as the Laboratory of Climate Sciences and the Environment, France, and the Austrian Academy of Sciences, Austria. He
received the Extraordinary Doctoral Award for his PhD thesis.
... First, by providing users new exploration paths (SPARQL Query Templates) to query the collections using the newly added ontology concepts and relationships as used in [45,46]. Second, by supporting visualisation of the collection using the new annotation as a criterion for aggregating images as in [47]. Third, the use of interactive chatbots that are trained based on the annotated data to support queries that are based on precompiled templates. ...
... The methodology further provided mechanisms for efficient exploration of the resources by enabling exploration paths and templates. The following SPARQL templates are introduced to support the explorations [47]. The exploration of the triples is not restricted to the exploration paths, however becomes open and can be used for interlinking images with selected abstract cultural queries. ...
Full-text available
Cultural heritage images are among the primary media for communicating and preserving the cultural values of a society. The images represent concrete and abstract content and symbolise the social, economic, political, and cultural values of the society. However, an enormous amount of such values embedded in the images is left unexploited partly due to the absence of methodological and technical solutions to capture, represent, and exploit the latent information. With the emergence of new technologies and availability of cultural heritage images in digital formats, the methodology followed to semantically enrich and utilise such resources become a vital factor in supporting users need. This paper presents a methodology proposed to unearth the cultural information communicated via cultural digital images by applying Artificial Intelligence (AI) technologies (such as Computer Vision (CV) and semantic web technologies). To this end, the paper presents a methodology that enables efficient analysis and enrichment of a large collection of cultural images covering all the major phases and tasks. The proposed method is applied and tested using a case study on cultural image collections from the Europeana platform. The paper further presents the analysis of the case study, the challenges, the lessons learned, and promising future research areas on the topic.
... Visualization is the capability of presenting information for better understanding [14] by leveraging the users to understand datasets and simulation (in the context of semantic based applications) is more focused on importing and integrating data items in nodes from multiple query sources [18]. So, considering the semantic Web context, our end user interface simulated the data queried from different interlinked sources and visualized it in different formats including the iconic representation of data (as shown in Fig. 13). ...
Full-text available
bold xmlns:mml="" xmlns:xlink="">Background: The trend in producing linked open data to publish high-quality interlinked data has gained widespread traction in recent years. Various sectors are producing linked open data to increase public access and ensure transparency, in addition to a better utilization of government data, namely linked open government data. Problem Definition: As compared to the developed countries, Saudi Arabia lags behind in benefiting from this new era of ubiquitous web of data, despite its publication of government related data in non-linked format. In the context of Saudi open government data, the full potential of multi-category data published by various government agencies at different portals is not being realized as the data are not published in open data format and remain unlinked to other existing datasets. Methodology: To bridge this gap, this study presents a framework to extract and generate semantically enriched data from various data sources under different domains. The framework was used to produce the Saudi linked open government data cloud by interlinking data entities with each other and with external existing open datasets. Results: The effectiveness of our approach is validated by applying it to a socially significant issue, i.e., divorce rate, in Saudi Arabia. By posing smart queries to semantically enriched data, we were able to perform an in-depth analysis of different factors related to increasing divorce rates in Saudi Arabia. Arguably, without using linked open data and related technologies such analysis would not have been possible. Finally, we also present a simulated visual environment for better understanding and communication of such analysis for decision and policy makers.
... Expression and Affix and integration of the resulting data into the visualisation system (Rodríguez Díaz et al., 2019) developed for the exploreAT! project. ...
Conference Paper
Full-text available
This paper describes the conversion of a lexicographic collection of a non-standard German language dataset (Bavarian Dialects) into a Linguistic Linked Open Data (LLOD) format within the framework of ExploreAT! Project. The collection is divided into three parts: 1) conceptual content for unique corpus collection-questionnaire dataset (DBÖ questionnaires) which contains details of the questionnaires and associated questions, 2) metadata regarding the collection framework-including collectors and hierarchical system of localisations, and 3) lexical dataset (DBÖ entries)-both unique data collections as answers to the questions and unique data collections as excerpts of already published sources. In its current form, the DBÖ entries dataset is available in a TEI/XML format separately from the questionnaire dataset. This paper presents the mapping of the lexical entries from the TEI/XML into an LLOD format using the Ontolex-Lemon model. We present the resulting lexicon of Bavarian Dialect and the approach used to interlink the data collection questionnaires with their corresponding answers (lexical entries). The output complements DBÖ questionnaires dataset, which is already in an LLOD format, by semantically interlinking the original questions with the answers and vice-versa.
Full-text available
This article provides insights into dealing with complexities in the Digital Humanities project exploreAT!. By exploring a non-standard language collection for cultural insights, a three-fold approach is presented looking into concrete realizations and solutions of tackling challenges in terms of Open Innovation infrastructure, technology and the topic of choice, food. Methods and processes applied and developed in the project are aimed to serve as examples for future projects with similar data sets.
Full-text available
Recently, Virzi (1992) presented data that support three claims regarding sample sizes for usability studies: (1) observing four or five participants will allow a us-ability practitioner to discover 80% of a product's usability problems, (2) observing additional participants will reveal fewer and fewer new usability problems, and (3) more severe usability problems are easier to detect with the first few participants. Results from an independent usability study clearly support the second claim, partially support the first, but fail to support the third. Problem discovery shows diminishing retums as a function of sample size. Observing four to five participants will uncover about 80% of a product's usability problems as long as the average likelihood of problem detection ranges between 0.32 and 0.42, as in Virzi. If the average likelihood of problem detection is lower, then a practitioner will need to observe more than five participants to discover 80% of the problems. Using behavioral categories for problem severity (or impact), these data showed no correlation between problem severity (impact) and rate of discovery. The data provided evidence that the binomial probability formula may provide a good model for predicting problem discovery curves, given an estimate of the average likelihood of problem detection. Finally, data from economic simulations that estimated return on investment (ROI) under a variety of settings showed that only the average likelihood of problem detection strongly influenced the range of sample sizes for maximum ROI.
Full-text available
The design and development of information dashboards are not trivial. Several factors must be accounted; from the data to be displayed to the audience that will use the dashboard. However, the increase in popularity of these tools has extended their use in several and very different contexts among very different user profiles. This popularization has increased the necessity of building tailored displays focused on specific requirements, goals, user roles, situations, domains, etc. Requirements are more sophisticated and varying; thus, dashboards need to match them to enhance knowledge generation and support more complex decision-making processes. This sophistication has led to the proposal of new approaches to address personal requirements and foster individualization regarding dashboards without involving high quantities of resources and long development processes. The goal of this work is to present a systematic review of the literature to analyze and classify the existing dashboard solutions that support tailoring capabilities and the methodologies used to achieve them. The methodology follows the guidelines proposed by Kitchenham and other authors in the field of software engineering. As results, 23 papers about tailored dashboards were retrieved. Three main approaches were identified regarding tailored solutions: customization, personalization, and adaptation. However, there is a wide variety of employed paradigms and features to develop tailored dashboards. The present systematic literature review analyzes challenges and issues regarding the existing solutions. It also identifies new research paths to enhance tailoring capabilities and thus, to improve user experience and insight delivery when it comes to visual analysis.
Full-text available
Nowadays, scholars dedicate a substantial amount of their work to the querying and browsing of increasingly large collections of research papers on the Internet. In parallel, the recent surge of novel interdisciplinary approaches in science requires scholars to acquire competencies in new fields for which they may lack the necessary vocabulary to formulate adequate queries. This problem, together with the issue of information overload, poses new challenges in the fields of natural language processing (NLP) and visualization design that call for a rapid response from the scientific community. In this respect, we report on a novel visualization scheme that enables the exploration of research paper collections via the analysis of semantic proximity relationships found in author-assigned keywords. Our proposal replaces traditional string queries with a bag-of-words (BoW) extracted from a user-generated auxiliary corpus that captures the intentionality of the research. Continuing along the lines established by other authors in the fields of literature-based discovery (LBD), NLP and visual analytics (VA), we combine novel advances in the fields of NLP with visual network analysis techniques to offer scholars a perspective of the target corpus that better fits their research interests. To highlight the advantages of our proposal, we conduct two experiments employing a collection of visualization research papers and an auxiliary cross-domain BoW. Here, we showcase how our visualization can be used to maximize the effectiveness of a browsing session by enhancing the language acquisition task, which allows for effectively extracting knowledge that is in line with the users’ previous expectations.
Full-text available
Mobility, logistics and transportation are emerging fields of research and application. Humans’ mobility behavior plays an increasing role for societal challenges. Beside the societal challenges these areas are strongly related to technologies and innovations. Gathering information about emerging technologies plays an increasing role for the entire research in this ares. Humans’ information processing can be strongly supported by Visual Analytics that combines automatic modelling and interactive visualizations. The juxtapose orchestration of interactive visualization enables gathering more information in a shorter time. We propose in this paper an approach that goes beyond the established methods of dashboarding and enables visualizing different databases, data-sets and sub-sets of data with juxtaposed visual interfaces. Our approach should be seen as an expandable method. Our main contributions are an in-depth analysis of visual task models and an approach for juxtaposing visual layouts as visual dashboards to enable solving complex tasks. We illustrate our main outcome through a case study that investigates the area of mobility and illustrates how complex analytical tasks can be performed easily by combining different visual interfaces.
Full-text available
Extensive collections of data of linguistic, historical and socio-cultural importance are stored in libraries, museums and national archives with enormous potential to support research. However, a sizable portion of the data remains underutilised because of a lack of the required knowledge to model the data semantically and convert it into a format suitable for the semantic web. Although many institutions have produced digital versions of their collection, semantic enrichment, interlinking and exploration are still missing from digitised versions. In this paper, we present a model that provides structure and semantics to a non-standard linguistic and historical data collection on the example of the Bavarian dialects in Austria at the Austrian Academy of Sciences. We followed a semantic modelling approach that utilises the knowledge of domain experts and the corresponding schema produced during the data collection process. The model is used to enrich, interlink and publish the collection semantically. The dataset includes questionnaires and answers as well as supplementary information about the circumstances of the data collection (person, location, time, etc.). The semantic uplift is demonstrated by converting a subset of the collection to a Linked Open Data (LOD) format, where domain experts evaluated the model and the resulting dataset for its support of user queries.
Full-text available
Various ontology visualization tools using different visualization methods exist and new ones are being developed every year. The goal of this paper is to follow up on previous surveys with an updated classification of ontology visualization methods and a comprehensive survey of available tools. The tools are analyzed for the used visualization methods, interaction techniques and supported ontology constructs. It shows that most of the tools apply two-dimensional node-link visualizations with a focus on class hierarchies. Color and shape are used with little variation, support for constructs introduced with version 2 of the OWL Web Ontology Language is limited, and it often remains vague what tasks and use cases are supported by the visualizations. Major challenges are the limited maturity and usability of many of the tools as well as providing an overview of large ontologies while also showing details on demand. We see a high demand for a universal ontology visualization framework implementing a core set of visual and interactive features that can be extended and customized to respective use cases.
Treemaps are a space-filling visualization method capable of representing large hierarchical collections of quantitative data in a compact display. A treemap works by dividing the display area into a nested sequence of rectangles whose areas correspond to an attribute of the data set, effectively combining aspects of a Venn diagram and a pie chart. Several algorithms are available to create more useful displays by controlling the aspect ratios of the rectangles that make up a treemap. While these algorithms do improve the visibility of small items in a single layout, they introduce instability over time in the display of dynamically changing data, fail to preserve the order of the underlying data, and create layouts that are difficult to search visually. In addition, continuous treemap algorithms are not suitable for displaying fixed-size objects, such as images, within them. This chapter introduces a new "strip" treemap algorithm, which addresses these shortcomings. The chapter analyzes other recently developed pivot algorithms and shows the trade-offs between the strip algorithm and these algorithms. Using experimental evidence from Monte Carlo trials and from actual stock market data, it is found that ordered treemaps are more stable than other layouts while maintaining relatively favorable aspect ratios of the constituent rectangles. The chapter also discusses the quantum treemap algorithms, which modify the layout of the continuous treemap algorithms to generate rectangles that are integer multiples of an input object size. The quantum treemap algorithm has been applied to PhotoMesa, an application that supports the browsing of large numbers of images.
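The core idea of dividing a display area into rectangles proportional to a data attribute can be sketched with the classic slice-and-dice layout, the baseline that the strip and pivot algorithms above improve on. This is a minimal illustrative sketch for a flat list of weights, not the chapter's actual implementation.

```python
# Slice-and-dice treemap layout sketch: split a rectangle along one axis so
# each item's area is proportional to its weight. Real treemap algorithms
# recurse into each rectangle for child nodes and alternate the split axis.

def slice_and_dice(weights, x, y, w, h, vertical=True):
    """Return one (x, y, w, h) rectangle per weight, tiling the input rect."""
    total = sum(weights)
    rects = []
    offset = 0.0
    for wt in weights:
        frac = wt / total
        if vertical:              # split along the x axis
            rects.append((x + offset, y, w * frac, h))
            offset += w * frac
        else:                     # split along the y axis
            rects.append((x, y + offset, w, h * frac))
            offset += h * frac
    return rects

rects = slice_and_dice([6, 3, 1], 0, 0, 100, 50)
# areas are 3000, 1500 and 500: proportional to the weights 6, 3, 1
```

Because every item is sliced along the same axis, small weights yield very thin rectangles with poor aspect ratios, which is exactly the problem the strip and squarified variants address.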
Ontologies are an important resource for knowledge representation. Their structure can be complex due to role relations between several concepts, distinct attributes, and different instances. In this paper, we discuss a Visual Analytics solution that relies on multiple coordinated views for exploring different ontology aspects and on a novel use of the degree of interest (DoI) suppression technique to reduce the complexity of the ontology's visual representation. Visual Analytics facilitates the understanding of the domain and tasks represented by ontologies, thus allowing analysts to carry out exploratory analysis that improves the comprehension of data semantics, including non-explicit relationships between data. Through the DoI technique, we place the main concept in focus, distinguishing it from less relevant information and facilitating the analysis and understanding of correlated data. We evaluated all the devised solutions, and the results reinforce the importance of providing visualization and analysis techniques dedicated to the schema and instance levels of ontologies for the discovery of non-explicit information.
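Degree-of-interest suppression, in the Furnas fisheye-view tradition, typically scores each node as its a-priori interest minus its distance to the focus, then hides nodes below a threshold. The sketch below illustrates that general idea on a tiny concept hierarchy; the tree, the a-priori term (minus depth) and the threshold are illustrative assumptions, not the scoring used by the paper's tool.

```python
# DoI(node) = a-priori interest (here: minus the node's depth)
#           - tree distance from the node to the focus node.
# Nodes whose DoI falls below a threshold are suppressed from the view.

def doi_scores(parent, focus):
    """parent maps each node to its parent (the root maps to None)."""
    def path_to_root(n):
        path = []
        while n is not None:
            path.append(n)
            n = parent[n]
        return path

    fpath = path_to_root(focus)
    scores = {}
    for node in parent:
        npath = path_to_root(node)
        # tree distance via the lowest common ancestor with the focus
        common = next(a for a in npath if a in fpath)
        dist = npath.index(common) + fpath.index(common)
        depth = len(npath) - 1
        scores[node] = -depth - dist
    return scores

tree = {"Thing": None, "Agent": "Thing", "Person": "Agent", "Place": "Thing"}
visible = {n for n, s in doi_scores(tree, "Person").items() if s >= -3}
# "Place" (score -4) is suppressed; "Thing", "Agent" and "Person" remain
```

With the focus on "Person", concepts on or near the focus path keep high scores, while siblings in unrelated branches drop below the threshold and disappear, which is how DoI suppression keeps large ontologies readable.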
Research Methods in Human-Computer Interaction is a comprehensive guide to performing research, covering both quantitative and qualitative methods, and is essential reading for HCI researchers. Since the first edition was published in 2009, the book has been adopted for use at leading universities around the world, including Harvard University, Carnegie-Mellon University, the University of Washington, the University of Toronto, HiOA (Norway), KTH (Sweden), Tel Aviv University (Israel), and many others. Chapters cover a broad range of topics relevant to the collection and analysis of HCI data, going beyond experimental design and surveys to cover ethnography, diaries, physiological measurements, case studies, crowdsourcing, and other essential elements in the well-informed HCI researcher's toolkit. Continual technological evolution has led to an explosion of new techniques and a need for this updated second edition, which reflects the most recent research in the field and newer trends in research methodology. This revision contains updates throughout, including more detail on statistical tests, coding qualitative data, and data collection via mobile devices and sensors. Other new material covers performing research with children, older adults, and people with cognitive impairments.