Digital Interaction: Where Are We Going?
Tiziana Catarci
Sapienza University of Rome
Rome, Italy
catarci@diag.uniroma1.it
Massimo Amendola
Ministry of Economic Development
Rome, Italy
massimo.amendola@mise.gov.it
Francesca Bertacchini
University of Calabria
Rende, Italy
frncescabertacchini@live.it
Eleonora Bilotta
University of Calabria
Rende, Italy
bilotta@unical.it
Marco Bracalenti
University of Perugia
Perugia, Italy
marco.bracalenti90@gmail.com
Paolo Buono
University of Bari Aldo Moro
Bari, Italy
paolo.buono@uniba.it
Antonello Cocco
Ministry of Economic Development
Rome, Italy
antonello.cocco@mise.gov.it
Maria Francesca Costabile
University of Bari Aldo Moro
Bari, Italy
maria.costabile@uniba.it
Giuseppe Desolda
University of Bari Aldo Moro
Bari, Italy
giuseppe.desolda@uniba.it
Francesco Di Nocera
Sapienza University of Rome
Rome, Italy
francesco.dinocera@uniroma1.it
Stefano Federici
University of Perugia
Perugia, Italy
stefano.federici@unipg.it
Giancarlo Gaudino
Ministry of Economic Development
Rome, Italy
giancarlo.gaudino@mise.gov.it
Rosa Lanzilotti
University of Bari Aldo Moro
Bari, Italy
rosa.lanzilotti@uniba.it
Andrea Marrella
Sapienza University of Rome
Rome, Italy
marrella@diag.uniroma1.it
Maria Laura Mele
University of Perugia
Perugia, Italy
marialaura.mele@gmail.com
Pietro S. Pantano
University of Calabria
Rende, Italy
piepa@unical.it
Isabella Poggi
Roma Tre University
Rome, Italy
isabella.poggi@uniroma3.it
Laura Tarantino
University of L’Aquila
L’Aquila, Italy
laura.tarantino@univaq.it
ABSTRACT
In the framework of the AVI 2018 Conference, the interuniversity center ECONA has organized a thematic workshop on “Digital Interaction: where are we going?”. Six contributions from ECONA members investigate different perspectives on this theme.
CCS CONCEPTS
• Human-centered computing → Human computer interaction (HCI); Interaction design; Visualization; • Security and privacy → Human and societal aspects of security and privacy.
Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for third-party components of this work must be honored.
For all other uses, contact the owner/author(s).
AVI ’18, May 29-June 1, 2018, Castiglione della Pescaia, Italy
©2018 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-5616-9/18/05.
https://doi.org/10.1145/3206505.3206606
KEYWORDS
Visual Interfaces, Multimodal Interaction, Accessibility, Usability
Evaluation, Participatory Design, User Experience, Human Factors
in Cybersecurity
ACM Reference Format:
Tiziana Catarci, Massimo Amendola, Francesca Bertacchini, Eleonora
Bilotta, Marco Bracalenti, Paolo Buono, Antonello Cocco, Maria Francesca
Costabile, Giuseppe Desolda, Francesco Di Nocera, Stefano Federici, Gian-
carlo Gaudino, Rosa Lanzilotti, Andrea Marrella, Maria Laura Mele, Pietro
S. Pantano, Isabella Poggi, and Laura Tarantino. 2018. Digital Interaction:
Where Are We Going?. In AVI ’18: 2018 International Conference on Advanced
Visual Interfaces, AVI ’18, May 29-June 1, 2018, Castiglione della Pescaia, Italy.
ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3206505.3206606
INTRODUCTION
ECONA is an interuniversity center drawing on the expertise of
teaching and research staff from eight Italian universities. It
is a permanent organization - open to all academic contributions
- set up to provide a focus for the study of cognitive processing.
ECONA involves a network of academics within a variety of disci-
plines such as Psychology, Computer Science, Engineering, Maths,
Physics, Biology, Medicine, Economics and Architecture. Its main
goal is to promote multidisciplinary projects in several fields of research, including human-computer interaction, the psychology of cognitive processes, machine learning, natural language processing, psychophysiology and neuropsychology, and the modeling of mental processes.
In the framework of the AVI 2018 Conference, ECONA has organized a thematic workshop on “Digital Interaction: where are we going?”, which accepts contributions from ECONA members only but is open to the public. The various talks aim to provide answers from different perspectives.
A first work surveys many years of research in visual interfaces, showing that they are still an indispensable component even in richer interaction environments. Another author then stresses the importance of being more inclusive by studying interaction modalities and systems for groups of people with special needs, also tailoring standardized tools and techniques to address their needs. A third contribution emphasizes the importance of enriching the interaction with virtual agents (a virtual tourist guide in particular) by reproducing the behaviour of a human guide in terms of both dialogue and gestures, thus offering visitors not only information but also an emotional experience. The distance between humans and machines is also investigated in a study that aims at designing objects whose physical characteristics best fit the intended usage and satisfy the users’ goals. The study compares users’ shape choices with those automatically selected by ML algorithms, highlighting the many overlaps. Finally, the still existing need to concentrate on users when dealing with everyday software is discussed in two talks. The first one makes a step forward towards more usable Public Administration software by providing practitioners with a semi-automatic advanced tool for assessing the usability of web sites and services. The last talk highlights the underestimated role of the human being in the cybersecurity chain, together with possible solutions to increase human awareness of cyber risks and to reduce human errors without sacrificing system usability.
VISUAL INTERFACES WILL STILL PLAY A KEY
ROLE IN FUTURE INTERFACES
Authors: Paolo Buono, Maria Francesca Costabile.
The AVI conference series was started in 1992 by Tiziana Catarci, Maria Francesca Costabile and Stefano Levialdi, with the objective of bringing together people interested in any type of visual interface. After 26 years, the AVI conference is still going very well, attracting leading researchers from all over the world. The current great variety of ICT technology is creating new possibilities for user interfaces. Thus AVI has broadened the topics it covers, while keeping its main focus on the conception, design, implementation and evaluation of novel visual interfaces. AVI’s success is a further indication that visual interfaces are still able to create valuable user experiences.
The ongoing advances in sensor and display technologies, CPUs, GPUs, and wireless networks are a continuous source of innovation, with novel devices ranging from very large displays to small wearables such as smart watches or augmented reality glasses. All these new devices push researchers to envision new interaction possibilities. The proliferation of sensor technology stimulates not only the human senses of vision and hearing, but also touch, smell, and taste, challenging researchers to master these senses in creating novel multisensory interfaces. For instance, a recent project discusses how the use of ultrasound mid-air haptic stimulation allowed users to communicate their emotional status. However, researchers working in Virtual Reality (VR) point out that, while VR systems are now able to create highly visually convincing experiences and also perform quite well with the sense of hearing, current technology is still not able to stimulate our other senses with the same resolution.
We are now more than three decades past the widespread adoption of visual user interfaces with mouse and keyboard. There are more smartphones in the world than desktop PCs, and interaction by touch and multi-touch is very popular. One successful example is large interactive displays and whiteboards that rely on pen or touch input and can now be found in many meeting rooms and classrooms, or even in public spaces. Over time, the size and resolution of such displays have increased, and they have also become far more affordable and widespread, so that the current focus is on the development of more complex applications that meet users’ needs in various situations. With the goal of making interaction more “natural”, new modalities and interaction languages are studied to improve the interaction with these new systems, possibly not mediated by devices such as mouse and keyboard. Performing gestures (or body movements in general) to communicate with the system is one example. Indeed, advances in computer vision permit real-time body, hand, and finger tracking, making it possible to recognize human motions from a distance. In previous work by one of the authors, sets of gestures are presented as a type of visual language, suggesting that researchers capitalize on what visual language research has produced over more than thirty years.
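As a concrete illustration of treating gestures as a small visual vocabulary, the following minimal Python sketch classifies a tracked fingertip trajectory (for instance, produced by a real-time hand tracker) into simple swipe gestures based on its dominant displacement. It is a hedged, illustrative example; the function name, threshold and gesture set are assumptions, not part of the cited work.

from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) fingertip position in normalized screen coordinates

def classify_swipe(trajectory: List[Point], min_move: float = 0.15) -> str:
    """Classify a fingertip trajectory into a tiny swipe 'vocabulary' (illustrative only)."""
    if len(trajectory) < 2:
        return "none"
    dx = trajectory[-1][0] - trajectory[0][0]
    dy = trajectory[-1][1] - trajectory[0][1]
    if max(abs(dx), abs(dy)) < min_move:
        return "none"  # movement too small to count as a gesture
    if abs(dx) >= abs(dy):
        return "swipe-right" if dx > 0 else "swipe-left"
    return "swipe-down" if dy > 0 else "swipe-up"

# Example: a mostly horizontal, left-to-right movement
print(classify_swipe([(0.1, 0.5), (0.3, 0.52), (0.6, 0.5)]))  # prints "swipe-right"

Richer vocabularies would add temporal features and statistical recognizers, but the basic structure of mapping tracked motion to discrete visual-language tokens stays the same.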
A workshop at AVI 2018 considers how multimodal interaction can offer many potential benefits for data visualization. So far, existing visualization techniques have mostly explored a single input modality, such as mouse, touch, pen, or speech. We agree that it is worth exploiting the strengths of different interaction modalities when analyzing large amounts of data, but proper visual representations have the great potential of enabling people to quickly grasp the content that such data convey, thereby speeding up the decision-making process.
Having analyzed various new types of interaction possibilities,
this position paper states that visual interfaces will still play a
major role in future user interfaces, even if, in some cases, they
might be complemented with other interaction modalities within a
multimodal interface.
DIGITAL INTERACTION: WHAT IS THE
QUESTION? A POSITION ABSTRACT
Author: Laura Tarantino.
Speaking about digital interaction, I would like to provocatively turn the proposed question “where are we going” into “where do we want to go”, which I find appropriate given both the maturity of the discipline and the specific context, i.e., a conference in a field that is multidisciplinary in its very nature and regards users as the central aspect of the scientific discourse. Studies focused on successful technological innovation (e.g., [7, 18]) underline the necessity of putting emphasis on three legs of human-centered product development, namely user experience, marketing, and technology: an idea
must be desirable (user’s point of view), viable (business point of view), and feasible (technology point of view). I argue that, while it is in a company’s interest to ensure the balance of the three supporting legs, the scientific community may, or rather should, also foster unbalanced research, e.g., research focusing on specific user groups that deserve community attention and high-quality research.
This is, for example, the case of ICT-enhanced treatment for people with Autism Spectrum Disorder (see, e.g., [4, 11, 20] for surveys), which may be regarded as paradigmatic of application domains in which advanced ICT solutions are regarded as highly promising but are still in their infancy. This may imply the need to downsize some experiment parameters considered standard elsewhere, e.g., in terms of the technological maturity of results (in many cases still at the proof-of-concept stage) and the size of the user groups involved in the evaluation (for example, out of the 38 ICT-based studies surveyed in [4], 20 involved a sample with a size below 10 or not even specified, and only 3 were evaluated with more than 30 persons). While these limitations, along with the special nature of the user population, might suggest to someone that preliminary results are not of interest to the scientific community, it is reasonable to ask ourselves whether it is, on the contrary, exactly the special nature of the user population, and its need and right for mature studies, that should lead the scientific community to favour studies of this kind in the mainstream of HCI research.
Where do we want to go? Which are our objectives and ethical
issues as scientists?
THE MULTIMODAL COMMUNICATION OF
ART COMMENTATORS. FROM ANALYSIS TO
SIMULATION IN VIRTUAL GUIDES
Author: Isabella Poggi.
In the context of the National Project CHROME (Cultural Heritage Resources Orienting Multimodal Experiences), the research unit of Roma Tre is collaborating with the Principal Coordinator Franco Cutugno and the research unit of Napoli on the construction of Virtual Tourist Guides. The Virtual Guide to be implemented, Maya, is intended to guide human tourists in a virtual tour of the Chartreuses of Campania, Italy, namely those of S. Martino in Naples, S. Giacomo in Capri, and the Chartreuse of Padula.
While the architect partners in Naples work on a 3D reconstruction of the three Chartreuses based on images captured by drones, and the engineers build the Virtual Guide, the task of Roma Tre is the analysis of the multimodal communicative behavior of human “Art Commentators” (ACs): tourist guides, art history experts and other professionals whose work is to illustrate Cultural Heritage resources. The analysis of ACs’ multimodal behavior implies, on the one side, finding out the recurrent structure of their verbal discourse and, on the other, analysing the bodily communicative behavior that accompanies the single parts or aspects of that structure. On the former side, a corpus of ten YouTube videos has been collected in which ten ACs illustrate artworks by ancient or contemporary artists in the same TV format. In each video the discourse of the AC is analysed in terms of its hierarchy of goals, and a general script is singled out of the goals typically pursued by Art Commentators in presenting an artwork: the “textual goals” of providing information about the work, the author, and their cultural-historical milieu serve the “emotional goals” of triggering emotions in users, hence bringing about their cultural or spiritual elevation; “textual goals” are pursued, in their turn, through the “modal goals” of soliciting attention, triggering curiosity and interest, and facilitating comprehension through lexical definition, vivid illustration, explanations and belief connections.
On the side of bodily communicative behavior, an annotation scheme is tuned to describe and classify the literal and indirect meanings conveyed by the AC’s gestures, facial expressions, posture, and gaze. These are finally connected to the specific nodes of the AC’s discourse: for each body behavior of the AC it is assessed whether it contributes to the textual goals of illustrating the artwork (e.g., a deictic gesture pointing at a part of a painting), to the goal of triggering emotions in the user (e.g., a facial expression of enthusiasm or admiration), or, finally, to that of reconnecting the delivered information to the user’s previous beliefs (e.g., an allusive gaze).
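The annotation scheme described above can be pictured as records linking each annotated body signal to the discourse node it accompanies and to the kind of goal it serves. The following minimal Python sketch only illustrates that structure; the class and field names are assumptions, not the project’s actual coding scheme.

from dataclasses import dataclass
from enum import Enum

class GoalType(Enum):
    TEXTUAL = "textual"      # providing information about the artwork
    EMOTIONAL = "emotional"  # triggering emotions in the user
    MODAL = "modal"          # soliciting attention, curiosity, comprehension

@dataclass
class BodyBehaviorAnnotation:
    """One annotated body signal, linked to a node of the commentator's discourse."""
    modality: str        # e.g., "gesture", "facial expression", "gaze", "posture"
    meaning: str         # literal or indirect meaning conveyed by the signal
    discourse_node: str  # identifier of the discourse node it accompanies
    goal: GoalType       # which kind of goal the behavior contributes to

# Example: a deictic gesture pointing at a part of the painting
example = BodyBehaviorAnnotation(
    modality="gesture",
    meaning="points at a detail of the painting",
    discourse_node="work-description-3",
    goal=GoalType.TEXTUAL,
)
print(example.goal.value)  # prints "textual"

Aggregating such records per commentator is what makes it possible to contrast their preferences for textual, emotional or modal goals, as discussed next.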
The description and classification of the verbal and bodily communication of all ACs in the corpus makes it possible to distinguish their different styles as commentators, both in terms of their preference for textual, emotional or modal goals, and in terms of their use of recurrent verbal or bodily signals. This is finally aimed at reproducing such different styles in the Virtual Guide.
WHAT IS THE FINEST FORM FOR REALIZING
THIS PURPOSE? AESTHETICAL-FUNCTIONAL
VALUES AS FITNESS FUNCTION
Authors: Francesca Bertacchini, Pietro S. Pantano, Eleonora Bilotta.
Besides allowing robots to interact with humans in social environments [5], or enabling visual analysis based on advanced chaotic algorithms [2, 3], one of the main innovations of recent scientific and technological advancements is the digitization and digitalization of physical objects [24], a process that strongly influences contemporary industrial production. The possibility of parametrizing an object also allows the related modification of its function, by exploring the parameter space of its geometrical configuration. This option, which many CAD systems now offer, has opened new, unpredictable possibilities for generating digital 3D shapes of objects to be used in smart manufacturing [26]. In this technological scenario, many generative algorithms make it possible to produce thousands of imitative digital objects that vary slightly. However, the problem arises when, in a design process, we want to select the digital objects that best fit our purpose in order to satisfy the needs of classes of users.
We have implemented some computational systems that produce this wealth of digital object shapes. We intend to use these objects for smart manufacturing, allowing 3D printing to physically create new and interesting items for daily use. Which is the best digital object to choose, given thousands of digital shapes? Sims, in his famous Computer Graphics experiments [21], described a method for procedurally generating 3D virtual creatures, using connected graphs, L-systems, and neural networks to generate both morphologies and control behavior. He then used aesthetic and functional fitness functions to choose, among a huge number of evolved creatures, the ones that best fit his purposes. We used a selection procedure similar to Sims’,
employing an empirical fitness that uses both the congruence and the appeal of the digital structure to select among thousands of possible shapes. The results must be congruent with the function that the physical object has to carry out in the physical environment.
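The selection step can be sketched as a weighted ranking over the generated shapes. The Python fragment below is only an illustration of such an empirical aesthetic-functional fitness; the weights, scoring functions and shape representation are assumptions, not the implemented system.

from typing import Callable, List, Sequence

def select_best_shapes(
    shapes: Sequence[dict],
    congruence: Callable[[dict], float],  # fit between shape and intended function, in [0, 1]
    appeal: Callable[[dict], float],      # aesthetic appeal of the shape, in [0, 1]
    w_congruence: float = 0.6,
    w_appeal: float = 0.4,
    top_k: int = 10,
) -> List[dict]:
    """Rank thousands of generated shapes by a weighted aesthetic-functional fitness."""
    def fitness(shape: dict) -> float:
        return w_congruence * congruence(shape) + w_appeal * appeal(shape)
    return sorted(shapes, key=fitness, reverse=True)[:top_k]

# Toy usage: shapes are dictionaries of geometric parameters; the scorers are placeholders.
candidates = [{"curvature": c / 1000, "volume": 1.0 + c / 2000} for c in range(1000)]
best = select_best_shapes(
    candidates,
    congruence=lambda s: 1.0 - abs(s["volume"] - 1.2),  # closer to a target volume is better
    appeal=lambda s: s["curvature"],                    # stand-in for a human or learned appeal score
)
print(len(best))  # prints 10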
To achieve this goal, we created an experimental situation in which users chose the digital shape they preferred, according to some specified functions. The results were collected and categorized. In order to compare these results with the automatic choice made by a computational system able to evaluate visual objects, we trained a machine learning system, with different functions, to analyse the same data used with the human subjects. Unexpectedly, the categories the ML system arrived at overlap to a considerable extent with the considered choices made by humans. The study of the visual elements shared between artificial systems and humans opens new and interesting perspectives on artificial intelligence, which is moving increasingly closer to human intelligence.
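The overlap between human and machine categorizations mentioned above can be quantified with a standard agreement measure such as Cohen’s kappa. The snippet below is a generic illustration with invented labels, not the analysis actually performed in the study; it assumes scikit-learn is available.

from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical category labels assigned to the same ten shapes by humans and by the ML system
human_choice = ["cup", "cup", "vase", "bowl", "vase", "cup", "bowl", "vase", "cup", "bowl"]
ml_choice = ["cup", "vase", "vase", "bowl", "vase", "cup", "bowl", "cup", "cup", "bowl"]

# Chance-corrected agreement between the two categorizations
print("Cohen's kappa:", cohen_kappa_score(human_choice, ml_choice))

# Where the two categorizations overlap and where they diverge
print(confusion_matrix(human_choice, ml_choice, labels=["cup", "vase", "bowl"]))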
UTASSISTANT: A NEW SEMI-AUTOMATIC
USABILITY EVALUATION TOOL FOR ITALIAN
PUBLIC ADMINISTRATIONS
Authors: Stefano Federici, Maria Laura Mele, Rosa Lanzilotti,
Giuseppe Desolda, Marco Bracalenti, Giancarlo Gaudino, Antonello
Cocco, Massimo Amendola.
Since 2013, the Department of Public Function of the Italian Ministry for Simplification and Public Administration (PA) has been developing two usability evaluation protocols, designed for desktop solutions (eGLU 2.1) [8] and for mobile platforms (eGLU-Mobile) [9], respectively.
The current work presents a usability evaluation Web platform called UTAssistant (“Usability Tool Assistant”) [10], a semi-automatic digital tool developed to support PA web service practitioners in designing usability assessment tests, from the initial setup through the data analysis process, by following the principles and recommendations of both eGLU 2.1 and eGLU-M. UTAssistant aims to provide the Italian PA with an easy-to-use tool for assessing the usability of PA websites and services; there is no need to install any software on evaluators’ devices, as required by existing tools for usability testing (e.g., Morae, https://www.techsmith.com/morae.html [25]).
The fact that UTAssistant is a Web platform represents an important contribution to PA usability assessment, since remote participation fosters wider adoption of these tools and, consequently, of usability testing techniques. UTAssistant supports evaluators (e.g., Web managers of PA websites) in carrying out usability assessment procedures in a step-by-step and semi-automatic way. The procedures in UTAssistant follow the protocols of eGLU 2.1 and eGLU-M, and the experimental methodology used to evaluate the usability of UTAssistant itself also follows the eGLU principles and recommendations, integrating them with new bio-behavioral methods for assessing user interaction. This methodology combines various methods and techniques borrowed from standard usability evaluation procedures and psychophysiological investigation methods based on bio-behavioral measures [6].
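As a rough picture of the task-based data such a protocol-driven test produces, the sketch below computes two basic metrics, task success rate and mean completion time, from one recorded session. It is an illustrative assumption about the kind of aggregation a tool of this type performs, not UTAssistant’s actual data model.

from dataclasses import dataclass
from statistics import mean
from typing import List

@dataclass
class TaskResult:
    """Outcome of one task performed by one participant during a moderated test."""
    task_id: str
    completed: bool
    seconds: float

def summarize_session(results: List[TaskResult]) -> dict:
    """Aggregate a session into the basic metrics a usability report typically starts from."""
    success_rate = sum(r.completed for r in results) / len(results)
    mean_time_on_success = mean(r.seconds for r in results if r.completed)
    return {"success_rate": success_rate, "mean_time_on_success_s": mean_time_on_success}

session = [
    TaskResult("find-office-hours", True, 48.0),
    TaskResult("download-form", True, 95.5),
    TaskResult("book-appointment", False, 180.0),
]
print(summarize_session(session))  # success_rate is about 0.67, mean_time_on_success_s = 71.75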
The new methodology aims to assess usability tools by combining the following: (i) eye-tracking, facial recognition, and electroencephalography measurements [13]; (ii) standard usability evaluation processes, which are compliant with international usability guidelines [6]; (iii) heuristic usability investigations by UX experts [14-17]; and (iv) remote online usability evaluation with highly representative numbers of end-users recruited through Web-based recruitment platforms. The experimental methodology uses bio-behavioral variables that are mostly hidden from users, thus overcoming the most common issues that can occur in traditional assessment methodologies, such as the users’ tendency to answer questions in a way that is affected by the presumed expectations of the evaluator, i.e., the social desirability bias. UTAssistant, re-engineered according to the user-centered evaluation process and complying with ISO/IEC 25010 [1] and eGLU 3.0, forms the content of the eGLU-box service pack.
UNDERSTANDING HUMAN FACTORS IN
CYBERSECURITY
Authors: Tiziana Catarci, Francesco Di Nocera, Andrea Marrella.
The cybersecurity field investigates solutions for protecting systems, networks, software and data from unauthorized access or attacks aimed at exploitation [22]. The field is growing in importance due to the increasing presence of connected devices (such as PCs, smartphones, tablets, etc.). The huge amount of data and events produced by the use of those devices dramatically affects the reliability of systems and of the information they exchange.
Cybersecurity is evolving quickly, and several technological solutions for the design of safer systems are currently being developed. However, technological advances alone can only mitigate the possibility of security breaches; they cannot completely solve all the challenges faced in cybersecurity. To date, the problem is that research in this field tends to neglect that the weakest link in the cybersecurity chain is the human being and her/his limited awareness of the multitude of risks deriving from the interaction with any (modern) technological environment [19].
It is no coincidence that most current attacks target uninformed or misinformed people [12]. The right use of security tools relies on awareness of their usefulness. Therefore, if users do not understand or are not aware of the security risks, they are more likely to behave incorrectly. Moreover, users may be aware of a risk but may not know what the correct behaviour is. When users feel overwhelmed by the system’s demands, they may dismiss the system itself [23].
Here we discuss and tackle this issue by presenting our ongoing research on understanding human factors in cybersecurity. Specifically, we are investigating user-centred foundations and solutions that are specifically tailored to increase cyber-awareness and reduce human errors. Our research lies at the intersection of Human-Computer Interaction, Behavior Analysis and Cybersecurity, and is aimed at realizing a “science of usable solutions for cybersecurity” that could eventually lead to finding the right balance between usability and security.
REFERENCES
[1] ISO/IEC 25010:2011. 2011. Systems and Software Engineering - Systems and Software Quality Requirements and Evaluation (SQuaRE) - System and Software Quality Models. Retrieved April 24, 2018 from https://www.iso.org/standard/35733.html
[2] Marjan Abdechiri, Karim Faez, Hamidreza Amindavar, and Eleonora Bilotta. 2017. The chaotic dynamics of high-dimensional systems. Nonlinear Dynamics 87, 4 (2017).
[3] Marjan Abdechiri, Karim Faez, Hamidreza Amindavar, and Eleonora Bilotta. 2017. Chaotic Target Representation for Robust Object Tracking. Image Commun. 54, C (2017), 23-35.
[4] Nuria Aresti-Bartolome and Begonya Garcia-Zapirain. 2014. Technologies as support tools for persons with autistic spectrum disorder: a systematic review. International Journal of Environmental Research and Public Health 11, 8 (2014).
[5] Francesca Bertacchini, Eleonora Bilotta, and Pietro Pantano. 2017. Shopping with a robotic companion. Computers in Human Behavior 77 (2017).
[6] Simone Borsci, Masaaki Kurosu, Stefano Federici, and Maria Laura Mele. 2013. Computer Systems Experiences of Users with and Without Disabilities: An Evaluation Guide for Professionals (1st ed.). CRC Press, Inc.
[7] Tim Brown. 2009. Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation. Harper Business.
[8] Dipartimento della Funzione Pubblica. 2015. Il Protocollo eGLU 2.1: Come Realizzare Test Di Usabilità Semplificati Per I Siti Web E I Servizi Online Delle PA. Retrieved April 24, 2018 from http://www.funzionepubblica.gov.it/sites/funzionepubblica.gov.it/files/Protocollo_eGLU_2_1_19082015_DEF_2.pdf
[9] Dipartimento della Funzione Pubblica. 2015. Il Protocollo eGLU-M: Come Realizzare Test Di Usabilità Semplificati Per I Siti Web E I Servizi Online Delle PA. Retrieved April 24, 2018 from http://www.funzionepubblica.gov.it/sites/funzionepubblica.gov.it/files/Protocollo_eGLU_2_1_19082015_DEF_2.pdf
[10] Giuseppe Desolda, Giancarlo Gaudino, Rosa Lanzilotti, Stefano Federici, and Antonello Cocco. 2017. UTAssistant: A Web Platform Supporting Usability Testing in Italian Public Administrations. In Proceedings of the Doctoral Consortium, Posters and Demos at CHItaly 2017, co-located with the 12th Biannual Conference of the Italian SIGCHI Chapter (CHItaly 2017), Cagliari, Italy, September 18-20, 2017.
[11] Ouriel Grynszpan, Patrice L. Weiss, Fernando Perez-Diaz, and Eynat Gal. 2014. Innovative technology-based interventions for autism spectrum disorders: a meta-analysis. Autism 18, 4 (2014).
[12] Lee Hadlington. 2017. Human factors in cybersecurity; examining the link between Internet addiction, impulsivity, attitudes towards cybersecurity, and risky cybersecurity behaviours. Heliyon 3, 7 (2017).
[13] Maria Laura Mele and Stefano Federici. 2012. A psychotechnological review on eye-tracking systems: towards user experience. Disability and Rehabilitation: Assistive Technology 7, 4 (2012).
[14] Rolf Molich and Jakob Nielsen. 1990. Improving a Human-computer Dialogue. Commun. ACM 33, 3 (1990).
[15] Jakob Nielsen. 1994. Enhancing the Explanatory Power of Usability Heuristics. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’94). ACM.
[16] Jakob Nielsen and Robert L. Mack (Eds.). 1994. Usability Inspection Methods. John Wiley & Sons, Inc.
[17] Jakob Nielsen and Rolf Molich. 1990. Heuristic Evaluation of User Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’90). ACM.
[18] Donald A. Norman. 1998. The invisible computer: why good products can fail, the personal computer is so complex, and information appliances are the solution. MIT Press.
[19] Kent L. Norman. 2017. Cyberpsychology: An introduction to human-computer interaction. Cambridge University Press.
[20] Sarah Parsons and Sue Cobb. 2011. State-of-the-art of virtual reality technologies for children on the autism spectrum. European Journal of Special Needs Education 26, 3 (2011).
[21] Karl Sims. 1994. Evolving virtual creatures. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques. ACM.
[22] Peter W. Singer and Allan Friedman. 2014. Cybersecurity: What everyone needs to know. Oxford University Press.
[23] Jeremiah D. Still. 2016. Cybersecurity needs you! Interactions 23, 3 (2016).
[24] Fiona Sussan and Zoltan J. Acs. 2017. The digital entrepreneurial ecosystem. Small Business Economics 49, 1 (2017).
[25] Craig Tomlin. 2018. 14 Usability Testing Tools Matrix and Comprehensive Reviews. Retrieved April 24, 2018 from http://www.usefulusability.com/14-usability-testing-tools-matrix-and-comprehensive-reviews/
[26] Pai Zheng, Zhiqian Sang, Ray Y. Zhong, Yongkui Liu, Chao Liu, Khamdi Mubarok, Shiqiang Yu, Xun Xu, and others. 2018. Smart manufacturing systems for Industry 4.0: Conceptual framework, scenarios, and future perspectives. Frontiers of Mechanical Engineering (2018).