User Experience Evaluation of WhatsOnWeb: A
Sonificated Visual Web Search Clustering Engine
Maria Laura Mele1, Stefano Federici2, Simone Borsci3, Giuseppe Liotta4
1 ECoNA—Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial Systems, Sapienza University of Rome,
IT, Department of Psychology, Via dei Marsi, 78 - 00185 Rome (RM), Italy
marialaura.mele@uniroma1.it
2 Department of Human and Education Sciences, University of Perugia
Piazza G. Ermini 1, Perugia (PG) 06123, Italy
stefano.federici@unipg.it
3 Department of Human and Education Sciences, University of Perugia
Piazza G. Ermini 1, Perugia (PG) 06123, Italy
simone.borsci@gmail.com
4 Department of Electrical and Information Engineering, University of Perugia
Via Duranti 93, Perugia (PG) 06123, Italy
liotta@diei.unipg.it
Abstract: The aim of this study was to present a usability
evaluation conducted under the User Experience (UX)
perspective of a sonificated search engine called WhatsOnWeb,
an accessible application based on sophisticated graph
visualization algorithms which conveys datasets using
graph-drawing methods based on semantically clustered data.
Starting from evidence for an amodal system processing
spatial representations, the differences between blind and
sighted users’ interactions whilst surfing WoW were analysed by
following the Partial Concurrent Thinking Aloud protocol. Our
results demonstrate that the user’s ability to perform spatial
exploration tasks guided by either visual or acoustic cues seems
to be functionally equivalent.
Keywords: user experience, usability, accessibility, information
visualization, sonification.
I. Introduction
Traditionally, the literature on spatial cognition has given
considerable attention to spatial representation guided by
visual exploration alone, giving less consideration to the
analysis of mental representations guided by other sensory
inputs. However, a growing number of authors endorse the
amodal hypothesis [1] of spatial representation, which is
supported by analyses of how different sensory channels
contribute to spatial mapping [2]. Visual, auditory, haptic,
and kinesthetic sensory information seems to be encoded into
the same spatial mental image independently of the nature of
the input source [3, 4], in line with many neuroimaging
studies showing that multisensory inputs converge in common
brain regions [5, 6].
In agreement with the amodal hypothesis, many recent studies
analysing how blind people process spatial auditory inputs
show that their ability to perform spatial exploration guided
by acoustic cues is functionally equivalent to the visually
guided exploration of sighted people [7]; moreover, it has
been shown that blind people performing spatial exploration
tasks seem to process spatial auditory inputs more efficiently
than sighted people [8, 9]. Recently, Delogu et al. [10]
carried out an experimental analysis on the exploration of
georeferenced information by using the software iSonic [11],
and pointed out that the sonification process, integrated with
haptic exploration, allows the transmission of geographical
spatial information to blind people. Spatial information
processing seems to be guided by strategies related to two
different frames of reference: the egocentric frame and the
allocentric frame. The spatial orientation of blind people
relies on strategies based on bodily reference points
rather than on the allocentric strategies used by sighted people
in mental rotation and scanning tasks [12]. Therefore, the
nature of sound seems to be able to communicate the
complexity of static or dynamic data representation, keeping
inner relations unchanged [13].
This work aimed to introduce a user experience (UX)
evaluation [14, 15] of WhatsOnWeb (WoW), an accessible
Web and desktop sonificated visual Web search clustering
engine recently implemented at the Department of Computer
Engineering (DIEI) of the University of Perugia.
WhatsOnWeb is an application based on sophisticated graph
visualization algorithms [16] which convey datasets using
graph-drawing methods based on semantically clustered data
[17]. Unlike the most common search engines (e.g. Google,
Yahoo!) which give a top-down, flat representation (i.e.,
search engine results pages, SERPs) [18, 19], WoW returns
a visuo-spatial data output providing a whole information
representation within a single browseable page. In this way,
WoW overcomes the efficiency limitation of a top-down
representation by introducing alternative ways to convey
spatial information [19]. Therefore, the WoW graphic output
organization increases users' chances of finding useful
information (i.e. it increases access to knowledge on the
Web).
In this work, we analysed the differences between totally
blind and sighted users’ interactions while surfing the WoW
search engine in order to compare the visual layouts of WoW
with the sonificated ones. This evaluation was conducted by
following two procedures, a heuristic evaluation and a
usability evaluation with end-users, in order to demonstrate
both qualitatively and quantitatively that there are no
significant functional differences between the interactions of
blind and sighted users [20, 21]. In this way, we wanted to
confirm that sonification methods offer an effective tool for
designing human-computer interfaces which are able to
overcome the digital divide that arises from the visuocentric
modality in which content is commonly conveyed.
II. The Visual WhatsOnWeb: An Accessible
Web Search Clustering Engine
The visual WhatsOnWeb system prototype [16] was
designed in 2007 by the DIEI and redesigned following the
User Centered Design approach in compliance with
accessibility and usability principles. In particular, the
redesign of WoW was conducted in accordance with the
Web Content Accessibility Guidelines (WCAG) v1.0 and v2.0
proposed by the World Wide Web Consortium (W3C), with
national and international accessibility rules, i.e. Section 508
of the US Rehabilitation Act and the Italian Stanca Act (Law
n. 4 of January 9th, 2004), and with ISO 13407 “Human-centred
design processes for interactive systems” [22].
The reengineering of the pre-existing code of WhatsOnWeb
was carried out by decoupling the algorithm in compliance
with the Java Foundation Classes and the guidelines provided
by Sun and IBM [23, 24], supported by the specific extensions
of the Java accessibility architecture. In order to allow
device-independent interaction with the visual Web search
engine, an architecture was implemented that supports
keyboard-controlled navigation in at least two interaction
conditions, following the characteristics and navigational
constraints of the graph. Moreover, a composite architecture
was produced to provide a vocalization function: this grants
users access to the information system even when a screen
reader has not previously been installed on the platform,
since they can choose to use the built-in vocalizer instead.
The graph of the information structure is independent of each
layout, i.e. of the spatial representation of the data, and is
structured into different levels of navigation. Navigation uses
different kinds of vertices: the cluster nodes, which represent
semantic sets of results and can be expanded and collapsed in
order to analyse the requested query in depth, down to the leaf
nodes, which represent the results of the search (Figure 1).
Navigation is possible in two directions, from vertex m to the
following vertex m + 1 or vice versa; moreover, when the focus
is on the last vertex of the list, navigation automatically wraps
around to the first vertex. At an expanded vertex, a sub-list of
results becomes available on the graph. Navigation can also be
carried out by stepping through the expanded sub-nodes one by
one, and then continuing from the following cluster node.
Figure 1. Radial Layout - exhaustive expansion of a cluster
node
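The following sketch restates this navigation scheme in code. It is only an illustration under assumed names (GraphNode and GraphNavigator are not part of the published system): cluster nodes can be expanded towards their leaf nodes, and moving past the last vertex wraps around to the first one.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (assumed names, not the WhatsOnWeb source) of the
// two-level, wrap-around keyboard navigation described above.
class GraphNode {
    final String label;
    final boolean isCluster;                // cluster node vs. leaf node
    final List<GraphNode> children = new ArrayList<>();
    boolean expanded = false;

    GraphNode(String label, boolean isCluster) {
        this.label = label;
        this.isCluster = isCluster;
    }
}

class GraphNavigator {
    private final List<GraphNode> vertices; // vertices of the current level
    private int focus = 0;                  // index of the focused vertex

    GraphNavigator(List<GraphNode> vertices) { this.vertices = vertices; }

    /** Move from vertex m to vertex m + 1, wrapping from the last to the first. */
    GraphNode next() {
        focus = (focus + 1) % vertices.size();
        return vertices.get(focus);
    }

    /** Move backwards, wrapping from the first vertex to the last one. */
    GraphNode previous() {
        focus = (focus - 1 + vertices.size()) % vertices.size();
        return vertices.get(focus);
    }

    /** Expand the focused cluster so its sub-list of results can be stepped through. */
    List<GraphNode> expand() {
        GraphNode current = vertices.get(focus);
        if (current.isCluster) current.expanded = true;
        return current.children;
    }
}
```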
The visual WoW prototype is composed of four kinds of
layouts that can be chosen by users: the TreeMap layout
(Figure 2.A), the Radial layout (Figure 2.B), the Layered
layout (Figure 2.C), and the Orthogonal layout (Figure 2.D).
In a previous study, the effectiveness and efficiency of each
kind of layout was evaluated through a navigation task and a
satisfaction questionnaire [16].
Figure 2. WhatsOnWeb layouts: a) TreeMap; b) Radial; c)
Layered; d) Orthogonal
The results of the navigation task showed that the TreeMap
was the best layout, allowing participants to find about 50%
of the relevant results for the assigned topics, whereas this
percentage was between 33% and 37% for the other layouts.
Moreover, 56% of subjects judged the TreeMap representation
as the best layout on the satisfaction questionnaire. Using
this evidence, a new layout called the Spiral TreeMap
(Figure 3) was implemented in order to provide a more
effective and efficient spatial representation of data. The new
layout was designed so that a spiral navigation of the
information is possible: the node with the highest rank on the
web and the greatest number of results is set in the centre of
the screen, whereas the other, less relevant, clusters/leaves are
gradually set around it in a spiral shape. The usability of this
new layout was subsequently evaluated on the visual
sonificated version of WhatsOnWeb.
Figure 3. Spiral TreeMap layout.
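A possible way to read the Spiral TreeMap construction is sketched below: the highest-ranked cluster sits at the centre and the remaining clusters are placed, in rank order, along a spiral that grows outwards. The spacing and angular step are assumed values; the published layout algorithm [16] is not reproduced here.

```java
import java.awt.geom.Point2D;
import java.util.List;

// Illustrative sketch only (assumed parameters, not the published algorithm):
// the highest-ranked cluster is placed at the centre of the screen and the
// remaining clusters are arranged around it along an Archimedean spiral.
class SpiralTreeMapSketch {

    static Point2D.Double[] layout(List<String> clustersByRank,
                                   double centreX, double centreY) {
        double spacing = 60.0;            // radial growth per turn (assumed value)
        double angleStep = Math.PI / 3.0; // angular step between nodes (assumed value)
        Point2D.Double[] positions = new Point2D.Double[clustersByRank.size()];
        for (int i = 0; i < clustersByRank.size(); i++) {
            double angle = i * angleStep;
            double radius = spacing * angle / (2 * Math.PI); // grows with rank index
            positions[i] = new Point2D.Double(centreX + radius * Math.cos(angle),
                                              centreY + radius * Math.sin(angle));
        }
        return positions; // index 0 (highest rank) ends up exactly at the centre
    }
}
```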
III. The User Experience Design (UXD)
Process of the Sonification of WhatsOnWeb
A. The UX sonification of WhatsOnWeb
Over the last twenty years, Information Representation
research has focused on alternative ways to transmit spatial
information via non-visual sensory channels: the challenge
is to convey the spatial information contained in a visual
representation while keeping its inner relations unchanged.
A widely adopted method for transforming
visual spatial representation is the sonification approach, i.e.,
“the transformation of data relations into perceived relations
in an acoustic signal for the purposes of facilitating
communication or interpretation” [25].
The literature on sonification pays particular attention to the
implementation of aids able to locate spatial information
about environments by means of acoustical signals: an
electronic travel aid (ETA) is “a device that emits energy
waves to detect the environment within a certain range or
distance, processes reflected information, and furnishes the
user with certain information in an intelligible and useful
manner” [26]. Unlike the ETA field, abstract data
sonification seems to be a more complex challenge due to the
difficulties of granting a functionally equivalent transmission
of the spatial relations whilst keeping the features emerging
from the user’s dynamic interaction unchanged. In fact, in
many systems [27, 28, 29, 30], information is mainly
converted into natural sounds and presented to users in a static
and non-interactive way (e.g. an audio recording): in this
way, users can obtain the information but they cannot
interact with the system. Moreover, there is a lack of studies
assessing the accessibility and usability of sonification
devices by blind users [12].
Interacting with WhatsOnWeb allows the manipulation of
abstract data, that is, information which is not correlated with
any physically obvious space. In WhatsOnWeb, indexed data
is organized by semantic correlations, resulting in abstract
information; therefore, as a theoretical background for the
sonification of the indexed abstract information, we adopted
the sonification framework proposed in 2007 by Zhao,
Plaisant, and Shneiderman [11], the Action by Design
Component (ADC) sonification model, in order to allow users
dynamic navigation of the interaction environment.
The design of the graph sonification model of WhatsOnWeb
was carried out by implementing and testing three types of
one-to-one combinations (Table 1) between visual features
and different features of sound. In particular, we created three
sonification layouts, the PanAndPitch, VolumeAndPitch, and
BlinkAndPitch Sonification layouts, by differently combining
the tone, pitch, volume, blinking and grid reference of sound
with the fundamental spatial graphical features of WoW; that
is to say, the x and y axes, the web ranking of each indexed
cluster or single datum, the level of navigation, and the type
of vertex (cluster node, leaf node).
Layout           x Axis     y Axis   Ranking    Level
PanAndPitch      Panning    Pitch    Volume     Timbre
VolumeAndPitch   Volume     Pitch    Blinking   Timbre
BlinkAndPitch    Blinking   Pitch    Volume     Timbre

Table 1. Combinations of visual features (x axis, y axis, web ranking, navigation level) and sound features in the three sonification layouts.
We created the first layout, the PanAndPitch Sonification, by
using panning to represent the x axis of the Cartesian plane
and the pitch of sound for the y axis; moreover, we used the
volume to represent the ranking of information, the timbre to
show the level of navigation and the double timbre to
describe the leaf node. Unlike the first layout, the second
one, the VolumeAndPitch Sonification, uses the sound
volume level in order to represent the x axis by considering
the Euclidean distance coding for a node compared to the
origin of axes, whereas panning was used to strengthen the
node detection as absolute information; furthermore, the
node detection on the y axis is transmitted by using the pitch
of sound. Finally, the third layout, the BlinkAndPitch
Sonification, uses the frequency of the sound blinking
together with panning to convey spatial relations through the
independent mapping of the x axis and, as in the previous
layouts, it uses the note pitch for representing nodes on the y
axis.
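The sketch below expresses the PanAndPitch mapping of Table 1 in code, as an illustration only: panning encodes the x axis, pitch the y axis, volume the web ranking, and timbre the navigation level, with a double timbre marking leaf nodes. Class names, value ranges and the normalisation of the inputs are assumptions, not the published implementation.

```java
// Simplified sketch (assumed class names and value ranges) of the PanAndPitch
// mapping of Table 1; it is not the original WhatsOnWeb code.
class PanAndPitchMapping {

    /** x, y and rank are assumed to be normalised to [0, 1] before mapping. */
    static SoundEvent map(double x, double y, double rank, int level, boolean leaf) {
        double pan = -1.0 + 2.0 * x;                  // left (-1) to right (+1)
        double pitchHz = 220.0 + y * (880.0 - 220.0); // low to high pitch (assumed range)
        double volume = 0.2 + 0.8 * rank;             // higher rank, louder node
        int timbre = level;                           // one instrument per navigation level
        boolean doubleTimbre = leaf;                  // leaf nodes get a double timbre
        return new SoundEvent(pan, pitchHz, volume, timbre, doubleTimbre);
    }
}

class SoundEvent {
    final double pan, pitchHz, volume;
    final int timbre;
    final boolean doubleTimbre;

    SoundEvent(double pan, double pitchHz, double volume, int timbre, boolean doubleTimbre) {
        this.pan = pan;
        this.pitchHz = pitchHz;
        this.volume = volume;
        this.timbre = timbre;
        this.doubleTimbre = doubleTimbre;
    }
}
```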
Each sonification layout combines sound and visual events
and each is able to describe both global and particular
browsing data information: once the user searches for a
query by selecting the search button, first, a global
representation of the information is displayed by means of
the temporization technique [31], which allows mapping of
the information from a non-temporal domain such as the
visual one to a temporal domain such as the acoustic one.
In this way, the temporization describes the role of each
cluster within the whole information representation and allows
users to form a first mental overview of the information. After
the first automatic preview, the navigation of each graphic
node is translated into a complex tone representing the
corresponding paraverbal information, with a latency of less
than 100 ms in order to avoid overloading short-term
memory [32].
The orientation of the user's position within the navigation
space is facilitated by a reiterable feedback function
which provides the overall preview; moreover, a persistent
signal indicating the user’s current position is also provided.
Finally, information identification and the memorization of
each cluster node are strengthened by verbal feedback voiced
by the integrated synthesizer.
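The temporized preview can be pictured as in the sketch below, which simply plays one short tone per cluster in sequence, reusing the hypothetical PanAndPitchMapping and SoundEvent helpers sketched earlier; the tone duration and the console output stand in for a real synthesiser and are assumptions of this illustration.

```java
import java.util.List;

// Illustrative sketch (assumed names and values) of the temporized overview:
// the clusters of the result graph are rendered one after another as short
// tones, turning the spatial (non-temporal) layout into a temporal, acoustic one.
class TemporizedPreview {
    private static final long TONE_MS = 90; // assumed duration of each preview tone

    static void play(List<double[]> clusters) throws InterruptedException {
        // each entry holds {x, y, rank, level}; x, y and rank in [0, 1], level an integer
        for (double[] c : clusters) {
            SoundEvent tone = PanAndPitchMapping.map(c[0], c[1], c[2], (int) c[3], false);
            playTone(tone);          // hand the event to the synthesiser
            Thread.sleep(TONE_MS);   // temporization: one cluster after another
        }
    }

    private static void playTone(SoundEvent tone) {
        // placeholder: a real system would drive a MIDI or audio synthesiser here
        System.out.printf("pan=%.2f pitch=%.0fHz vol=%.2f timbre=%d%n",
                tone.pan, tone.pitchHz, tone.volume, tone.timbre);
    }
}
```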
B. The evaluation process of the sonificated WhatsOnWeb
The UX evaluation of the reengineered and sonificated
WhatsOnWeb (WoW) application was conducted by
following two experimental procedures. First, an expert
usability evaluation was performed for each sonification
layout in order to design a final layout to use in the second
evaluation process with end users. Then, we investigated the
quality and the satisfaction of users’ interactions with both
the visual and acoustic sonificated displays of the WoW
search engine.
1) The first experimental procedure analysed the usability of
the three sonification layouts of WoW (the PanAndPitch,
VolumeAndPitch, and BlinkAndPitch Sonification layouts) for
each of the graphic layouts (Radial, Layered and Spiral
TreeMap). This evaluation was conducted by three UX experts
with more than five years of experience, who worked within a
user scenario and applied an adaptation of Nielsen’s heuristic
list [33]. In this way, the issues of each layout with medium
and high levels of severity were identified: this evaluation
phase allowed us to select the best combination of acoustic
and visual features and to unify each of them in a single
layout that we called PanAndPitchBlinking sonification
layout. This new layout was able to convey spatial
information through the Cartesian plane by using the panning
technique to represent the position of data on the x axis and
the pitch of the note to represent the position on the y axis.
Moreover, it used sound blinking to represent the rank order
of each node.
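For completeness, the merged mapping can be summarised as in the short sketch below (assumed names and value ranges, again only an illustration): panning for the x axis, pitch for the y axis, and the blinking rate of the tone for the rank order of a node.

```java
// Sketch of the merged PanAndPitchBlinking mapping selected after the heuristic
// evaluation; names and numeric ranges are assumptions of this illustration.
class PanAndPitchBlinkingMapping {
    static double pan(double x)            { return -1.0 + 2.0 * x; }    // x in [0, 1]
    static double pitchHz(double y)        { return 220.0 + 660.0 * y; } // y in [0, 1]
    static double blinkRateHz(double rank) { return 1.0 + 7.0 * rank; }  // rank in [0, 1]
}
```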
2) The second experimental procedure investigated the UX
quality with two groups of participants, blind and sighted
users, involved in a usability evaluation using the
Partial Concurrent Thinking Aloud [34, 35] protocol and the
System Usability Scale questionnaire [36, 37]. After a
description of the task and a preliminary exploration of the
layout lasting at least 3 minutes, four totally blind users and
four sighted users (mean age 28, equally distributed by sex)
were asked to navigate the WoW search engine by following
a particular scenario consisting of an exhaustive search for a
given query by means of keyboard navigation: both blind
and sighted users navigated each of the three graphic layouts,
Radial, Layered and Spiral TreeMap, by means of either the
visual display or the PanAndPitchBlinking sonification layout.
During navigation, we used the
Partial Concurrent Thinking Aloud (PCTA) technique to
identify usability problems found by the user during
interactions with the interface. The PCTA is a qualitative
usability evaluation technique composed of a phase in which
the participants indicate each problem they find during the
interaction, i.e. the concurrent protocol, and a phase in which
the participants are asked to observe their recorded
performance and verbalize their action “aloud”, i.e. the
retrospective protocol [34, 35]. The PCTA is a relatively new
verbal evaluation protocol that avoids the problems that arise
when purely concurrent or retrospective verbal protocols are
used with blind users [34, 35]. Once the users reached the
requested query,
they were interviewed about their graphic layout preferences
and finally they were asked to complete the System Usability
Scale (SUS) survey.
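The SUS scores discussed below follow the standard SUS scoring scheme [36, 37], which the paper does not spell out; as a reminder, it can be computed as in this small sketch (responses on a 1 to 5 scale, odd items positively worded, even items negatively worded).

```java
// Standard SUS scoring [36, 37], shown only as a worked reminder; the class
// name is arbitrary and this is not part of the evaluation software.
class SusScore {
    static double score(int[] responses) {            // exactly 10 item responses
        double sum = 0;
        for (int i = 0; i < 10; i++) {
            sum += (i % 2 == 0) ? responses[i] - 1     // items 1, 3, 5, 7, 9
                                : 5 - responses[i];    // items 2, 4, 6, 8, 10
        }
        return sum * 2.5;                              // final score on a 0-100 scale
    }
}
```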
Each problem found during the PCTA protocols was
matched with Nielsen’s heuristic list as used in the first
experimental procedure: the subjects found 19 problems, 9
related to visual performance and 11 related to auditory
performance. The statistical analysis carried out by SPSS 18
on the task completion times for each layout showed no
significant differences between the two groups and between
the different kinds of layout (Layered layout: F(1,6) = 4.524,
p = ns; Spiral TreeMap layout: F(1,6) = 0.097, p = ns), except
for the Radial layout (F(1,6) = 13.690, p < 0.05). The analysis
of the SUS scores showed no significant differences between
the two groups of participants (F(1,6) = 0.2729, p = ns).
Therefore, since these results highlight similar levels of
efficacy, efficiency, and satisfaction for the two groups with
both information presentation modalities, performance in the
sonificated and visual modalities seems to be
homogeneous [29].
IV. Conclusions
Although most Web search engines (e.g. Google, Yahoo!) are
marked as “accessible”, accessibility alone is not enough:
there is a strong need to implement search engines that are
both accessible and usable [20, 21].
In fact, many authors have highlighted the digital gap that exists
between blind people interacting with the Web by using
screen readers, and sighted people [38]. In particular, the
exploration of SERPs by the most common search engines
seems to be more difficult when accessed by blind people
using assistive technologies. In 2004, Ivory et al. highlighted
the fact that blind users took twice as long as sighted
participants to explore search results, and three times as long
to explore web pages [39]. Users with visual disabilities
cannot access all paraverbal information concerning “not
only just the access to text but also to graphics, tables and
figures” [40].
In this work, we introduced a visual sonificated Web search
engine called WhatsOnWeb, which seems to allow blind and
sighted users easier manipulation and findability of
information by returning a geometrical spatial representation
of the indexed Web data. In order to emulate and facilitate
the cognitive mental information processing which organizes
human knowledge through semantic categorization [41],
WoW clusters information in semantic nodes, making it
easier for all users to find and elaborate on information
conveyed by Information and Communication Technologies.
The results of our evaluation show a global functional
homogeneity between sighted and blind users' experiences of
WoW, suggesting that a system which grants accessibility
and usability considerably reduces the digital divide.
Moreover, WhatsOnWeb is designed to provide a
device-independent, extensible architecture which routes
events through two interaction states. In this way, the
reduction in the number of events necessary for searching for
a query allows navigation through control systems and/or
communication systems, such as the Brain Computer
Interface (BCI), eye-trackers, tongue controllers and
speech/sound interfaces.
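The two-state interaction mentioned above can be illustrated with a minimal input abstraction, sketched below under assumed names (it reuses the hypothetical GraphNavigator from the earlier sketch and is not part of the published architecture): once every search action is reduced to "move focus" and "select", the same navigation can be driven by a keyboard, a BCI, an eye-tracker or any other low-bandwidth controller.

```java
// Purely illustrative sketch of a two-event interaction model; the interface
// and adapter names are assumptions, not part of the published system.
interface TwoStateInput {
    void onMove();    // advance the focus to the next vertex
    void onSelect();  // expand the focused cluster or open the focused result
}

class KeyboardAdapter implements TwoStateInput {
    private final GraphNavigator navigator; // hypothetical navigator sketched earlier

    KeyboardAdapter(GraphNavigator navigator) { this.navigator = navigator; }

    @Override public void onMove()   { navigator.next(); }
    @Override public void onSelect() { navigator.expand(); }
}
```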
References
[1] D. J. Bryant, “Representing Space in Language and
Perception,” Mind and Language, 12(3-4), 239-264,
1997. doi:10.1111/j.1468-0017.1997.tb00073.x
[2] S. Millar, “Understanding and representing space:
theory and evidence from studies with blind and sighted
children,” New York, NY, US: Oxford University
Press, 1994.
[3] R. S. Jackendoff, “Consciousness and the computational
mind,” Cambridge, MA, US: MIT Press, 1987.
[4] G. A. Miller, and P. N. Johnson-Laird, “Language and
perception,” Cambridge, MA, US: Harvard University
Press, 1976.
[5] A. Amedi, K. Kriegstein, N. Atteveldt, M. Beauchamp,
and M. Naumer, “Functional imaging of human
crossmodal identification and object recognition,”
Experimental Brain Research, 166(3-4), 559-571, 2005.
doi:10.1007/s00221-005-2396-5
[6] J. Driver, and T. Noesselt, “Multisensory interplay
reveals crossmodal influences on ‘sensory-specific’
brain regions, neural responses, and judgments,”
Neuron, 57(1), 11-23, 2008.
doi:10.1016/j.neuron.2007.12.013
[7] D. J. Bryant, “A spatial representation system in
humans,” Psycoloquy, 3(16), 1992. Retrieved from
http://www.cogsci.ecs.soton.ac.uk/cgi/psyc/psummary?
3.16
[8] M. N. Avraamides, J. M. Loomis, R. L. Klatzky, and R.
G. Golledge, “Functional Equivalence of Spatial
Representations Derived From Vision and Language:
Evidence From Allocentric Judgments,” Journal of
Experimental Psychology: Learning, Memory, and
Cognition, 30(4), 801-814, 2004.
doi:10.1037/0278-7393.30.4.804
[9] M. de Vega, M. Cocude, M. Denis, M. J. Rodrigo, and
H. D. Zimmer, “The interface between language and
visuo-spatial representations,” in M. Denis, R. H.
Logie, C. Cornoldi, M. de Vega and J. EngelKamp
(Eds.), Imagery, language, and visuo-spatial thinking,
pp. 109-136, Hove, UK: Psychology Press, 2001.
[10] F. Delogu, M. Palmiero, S. Federici, H. Zhao, C.
Plaisant, and M. Olivetti Belardinelli, “Non-visual
exploration of geographic maps: does sonification
help?,” Disability & Rehabilitation: Assistive
Technology, 5(3), 164-174, 2010.
doi:10.3109/17483100903100277
[11] H. Zhao, B. Shneiderman, and C. Plaisant, “Listening to
Choropleth Maps: Interactive Sonification of
Geo-referenced Data for Users with Vision
Impairment,” in J. Lazar (Ed.), Universal Usability:
Designing Computer Interfaces for Diverse User
Populations, pp. 141-174, West Sussex, UK: Wiley and
Sons, 2007.
[12] M. Olivetti Belardinelli, S. Federici, F. Delogu, and M.
Palmiero, “Sonification of Spatial Information:
Audio-tactile Exploration Strategies by Normal and
Blind Subjects,” in C. Stephanidis (Ed.), Universal
Access in HCI, Part II, HCII 2009, LNCS 5615, pp.
557-563, Berlin Heidelberg, DE: Springer-Verlag,
2009. doi:10.1007/978-3-642-02710-9_62
[13] G. Kramer, “An Introduction to Auditory Display,” in
G. Kramer (Ed.), Auditory Display: Sonification,
Audification, And Auditory Interfaces (Proceedings
Volume 18, Santa Fe Institute Studies in the Sci), pp.
1-78, Reading, MA, US: Addison-Wesley, 1994.
[14] D. A. Norman, “The Invisible Computer. Why Good
Products Can Fail, the Personal Computer is So
Complex, and Information Appliances are the
Solution,” Cambridge, MA, US: MIT Press, 1998.
[15] P. C. Wright, J. McCarthy, and L. Meekison, “Making
sense of experience,” in M. A. Blythe, K. Overbeeke,
A. F. Monk and P. C. Wright (Eds.), Funology: from
usability to enjoyment, pp. 43-53, Norwell, MA, US:
Kluwer Academic Publishers, 2004.
[16] E. Di Giacomo, W. Didimo, L Grilli, and G. Liotta,
“Graph Visualization Techniques for Web Clustering
Engines,” IEEE Transactions on Visualization and
Computer Graphics, 13(2), 294-304, 2007.
doi:10.1109/TVCG.2007.40
[17] A. Rugo, M. L. Mele, G. Liotta, F. Trotta, E. Di
Giacomo, S. Borsci, and S. Federici, “A Visual
Sonificated Web Search Clustering Engine,” Cognitive
Processing, 10(Suppl 2), 286-289, 2009.
doi:10.1007/s10339-009-0317-4
[18] S. Borsci, S. Federici, M. L. Mele, and G. Stamerra,
“Global Rank: improving a qualitative and inclusive
level of web accessibility,” in Conference Proceedings -
Lancaster University, Lancaster University, UK,
September 2-4, 2008.
[19] S. Federici, S. Borsci, M. L. Mele, and G. Stamerra,
“Web Popularity: An Illusory Perception of a
Qualitative Order in Information,” Universal Access in
the Information Society, 2010.
doi:10.1007/s10209-009-0179-7
[20] M. L. Mele, S. Borsci, A. Rugo, S. Federici, G. Liotta,
F. Trotta, and E. Di Giacomo, “An Accessible Web
Searching: An On-going Research Project,” in P. L.
Emiliani, L. Burzagli, A. Como, F. Gabbanini and
A.-L. Salminen (Eds.), Assistive Technology from
Adapted Equipment to Inclusive Environments
AAATE 2009 (25 ed.) Vol. 25, pp. 854, Florence, IT:
IOS Press, 2009. doi:10.3233/978-1-60750-042-1-854
[21] M. L. Mele, S. Federici, S. Borsci, and G. Liotta,
“Beyond a Visuocentric Way of a Visual Web Search
Clustering Engine: The Sonification of WhatsOnWeb,”
in K. Miesenberger, J. Klaus, W. Zagler and A.
Karshmer (Eds.), Computers Helping People with
Special Needs, pp. 351-357, Berlin, DE: Springer, Vol.
1, 2010. doi:10.1007/978-3-642-14097-6_56
[22] International Standards Organization (ISO), “ISO
13407: Human-centred design processes for interactive
systems,” 1999. Retrieved from
http://www.iso.org/iso/iso_catalogue/catalogue_tc/catal
ogue_detail.htm?csnumber=21197
[23] B. Feigenbaum, and M. Squillace, “Accessibility
validation with RAVEN,” in Proceedings of the 2006
International Workshop on Software quality, Shanghai,
CN, 2006.
doi:10.1145/1137702.1137709
[24] IBM, “Rule-based Accessibility Validation Environment
(RAVEn) On Accessibility,” Retrieved from
http://www-03.ibm.com/able/resources/raven.html
[25] G. Kramer, B. Walker, T. Bonebright, P. Cook, J.
Flowers, N. Miner, and J. Neuhoff, “Sonification
report: Status of the field and research agenda,” Santa
Fe, NM: National Science Foundation by members of
the International Community for Auditory Display,
1997.
[26] L. W. Farmer, and D. L. Smith, “Adaptive technology,”
in B. B. Blasch, W. R. Wiener and R. Welsh (Eds.),
Foundations of orientation and mobility ( 2nd ed.) pp.
231-259, New York, NY, US: American Foundation for
the Blind Press, 1998.
[27] S. A. Brewster, “Using nonspeech sounds to provide
navigation cues,” ACM Transactions on
Computer-Human Interaction (TOCHI), 5(3), 224-259,
1998. doi:10.1145/292834.292839
[28] D. Lunney, R. C. Morrison, M. M. Cetera, R. V.
Hartness, R. T. Mills, A. D. Salt, and D.C. Sowell, “A
Microcomputer-Based Laboratory Aid for Visually
Impaired Students,” IEEE Micro, 3(4), 19-31, 1983.
doi:10.1109/MM.1983.291134
[29] D. L. Mansur, M. M. Blattner, and K. I. Joy, “Sound
graphs: A numerical data analysis method for the blind,”
Journal of Medical Systems, 9(3), 163-174, 1985.
doi:10.1007/BF00996201
[30] R. Ramloll, B. Stephen, W. Yu, and B. Riedel, “Using
non-speech sounds to improve access to 2D tabular
numerical information for visually impaired users,” in
A. Blandford, J. Vanderdonckt and P. D. Gray (Eds.),
People and computers XV: Interactions without
frontiers - Joint Proceedings of HCI 2001 and IHM
2001, pp. 515-530, Berlin, DE: Springer, 2001.
[31] S. Saue, “A model for interaction in exploratory
sonification displays,” in International Conference on
Auditory Display (ICAD), International Community for
Auditory Display, Atlanta, GA, US, 2000.
http://www.icad.org/websiteV2.0/Conferences/ICAD20
00/ICAD2000.html
[32] R. C. Atkinson, and R. M. Shiffrin, “The control of
short-term memory,” Scientific American, 225(2),
82-90, 1971.
[33] J. Nielsen, “Enhancing the explanatory power of
usability heuristics,” in Proceedings of the SIGCHI
conference on Human factors in computing systems:
celebrating interdependence, Boston, MA, US, 1994.
doi:10.1145/191666.191729
[34] S. Federici, S. Borsci, and M. L. Mele, “Usability
evaluation with screen reader users: A video
presentation of the PCTA’s experimental setting and
rules,” Cognitive Processing, 11(3), 285288, 2010.
doi:10.1007/s10339-010-0365-9
[35] S. Federici, S. Borsci, and G. Stamerra, “Web usability
evaluation with screen reader users: Implementation of
the Partial Concurrent Thinking Aloud technique,”
Cognitive Processing, 11(3), 263-272, 2010.
doi:10.1007/s10339-009-0347-y
[36] S. Borsci, S. Federici, and M. Lauriola, “On the
Dimensionality of the System Usability Scale (SUS): A
Test of Alternative Measurement Models,” Cognitive
Processing, 10(3), 193-197, 2009.
doi:10.1007/s10339-009-0268-9
[37] J. Brooke, “SUS: A quick and dirty usability scale,” in
P. W. Jordan, B. Thomas, B. A. Weerdmeester and I. L.
McClelland (Eds.), Usability evaluation in industry, pp.
189-194, London, UK: Taylor & Francis, 1996.
[38] K. P. Coyne, and J. Nielsen, “Beyond ALT text: Making
the web easy to use for users with disabilities,”
Fremont, CA, US: Nielsen Norman Group, 2001.
[39] M. Y. Ivory, S. Yu, and K. Gronemyer, “Search result
exploration: a preliminary study of blind and sighted
users’ decision making and performance,” in CHI ’04
extended abstracts on Human factors in computing
systems, Vienna, AT, April 24-29, 2004.
doi:10.1145/985921.986088
[40] C. Jay, R. Stevens, M. Glencross, A. Chalmers, and C.
Yang, “How people use presentation to search for a
link: Expanding the understanding of accessibility on
the Web,” Universal Access in the Information Society,
6(3), 307-320, 2007. doi:10.1007/s10209-007-0089-5
[41] J. R. Anderson, “The architecture of cognition,”
Cambridge, MA, US: Harvard University Press, 1993.
Author Biographies
Maria Laura Mele is a PhD student in cognitive psychology
at the Interuniversity Centre for Research on Cognitive
Processing in Natural and Artificial Systems (ECoNA) of
the Sapienza University of Rome. Her research topics are
User eXperience, accessibility, usability, user-centred
design, assistive technologies and eye-tracking
methodology. She is a member of the CognitiveLab research
team of the University of Perugia (www.cognitivelab.it).
Stefano Federici, PhD, is currently Associate Professor
of General Psychology at the University of Perugia. He is
a member of the editorial boards of Disability and
Rehabilitation: Assistive Technology and Cognitive
Processing: International Quarterly of Cognitive Science,
and of the Scientific Committee of the International
Conference on Space Cognition (ICSC). He is the coordinator
of the CognitiveLab research team at the University of
Perugia (www.cognitivelab.it). He has more than 100
international and national publications on cognitive
psychology, psychotechnology, disability and usability.
Simone Borsci, PhD, is a temporary research fellow in
General Psychology at the University of Perugia. He
obtained a PhD (2010) in Cognitive Psychology at the
Sapienza University of Rome. He is a member of the
Interuniversity Centre for Research on Cognitive
Processing in Natural and Artificial Systems (ECoNA)
and of the CognitiveLab of the University of Perugia
(www.cognitivelab.it). He has 20 international
and national publications on psychotechnologies, Web accessibility and
usability, and User Experience evaluation.
Giuseppe Liotta received a Ph.D. in Computer Science
from the University of Rome La Sapienza in 1995 and
is currently a professor in the Department of Computer
Engineering at the University of Perugia. His current
research interests include Information Visualization,
Graph Drawing, and Computational Geometry. On these
topics he has published several papers and given invited
lectures worldwide. He has served on and chaired program
committees of international symposia and is editor and
managing editor of international journals. His research
has been funded by the Italian National Research
Council, the Italian Ministry of Research and
Education, the EU, and several industrial sponsors.
He is a steering committee member of the International
Symposium on Graph Drawing and a member of the
IEEE and ACM.