Nicholas A Giudice
  • Ph.D.
  • Professor (Full) at University of Maine

About

101 Publications
48,758 Reads
2,770 Citations
Current institution
University of Maine
Current position
  • Professor (Full)
Additional affiliations
September 2008 - present
University of Maine
Position
  • Professor (Full)
Education
September 1998 - May 2004
University of Minnesota
Field of study
  • Cognitive and Brain Sciences
September 1993 - May 1997
Providence College
Field of study
  • Psychology and Philosophy

Publications (101)
Preprint
Full-text available
While guide dogs offer essential mobility assistance, their high cost, limited availability, and care requirements make them inaccessible to most blind or low vision (BLV) individuals. Recent advances in quadruped robots provide a scalable solution for mobility assistance, but many current designs fail to meet real-world needs due to a lack of unde...
Article
Full-text available
The lack of accessible information conveyed by descriptions of art images presents significant barriers for people with blindness and low vision (BLV) to engage with visual artwork. Most museums are not able to easily provide accessible image descriptions for BLV visitors to build a mental representation of artwork due to the vastness of collections, l...
Article
Introduction: Informational graphics and data representations (e.g., charts and figures) are critical for accessing educational content. Novel technologies, such as the multimodal touchscreen, which displays audio, haptic, and visual information, are promising platforms for providing diverse means of accessing digital content. This work evaluated educat...
Article
Full-text available
Multimodal learning systems have been found to be effective in studies investigating cognitive theory of multimedia learning. Yet this research is rarely put into practice in Science, Technology, Engineering, and Math (STEM) learning environments, which are dominated by visual graphics. Introducing multimodal learning systems into STEM settings and...
Article
Full-text available
Graphical representations are ubiquitous in the learning and teaching of science, technology, engineering, and mathematics (STEM). However, these materials are often not accessible to the over 547,000 students in the United States with blindness and significant visual impairment, creating barriers to pursuing STEM educational and career pathways. F...
Article
Full-text available
We introduce SIM (acronym for “Semantic Interior Mapology”), a web app that allows anyone to quickly trace the floor plan of a building, generating a vectorized representation that can be automatically converted into a tactile map at the desired scale. The design of SIM was informed by a prior focus group with seven blind participants. Maps generat...
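The SIM abstract describes converting a traced, vectorized floor plan into a tactile map at a desired scale. As a rough illustration of only the scale-conversion step (the tracing and export pipeline itself is not detailed in the abstract, and the function names and units below are assumptions), a short Python sketch:

```python
def scale_floor_plan(polygons, metres_per_unit, target_scale):
    """Scale vectorized floor-plan polygons (lists of (x, y) points in map
    units) to drawing coordinates for a tactile print at 1:target_scale.

    Illustrative only: a real tactile export would also need line widths
    and symbol sizes chosen for tactile legibility.
    """
    factor = metres_per_unit / target_scale * 1000.0  # metres -> millimetres on paper
    return [[(x * factor, y * factor) for (x, y) in poly] for poly in polygons]

# Example: a 10 m x 6 m room traced in metres, printed at 1:100 (-> 100 mm x 60 mm)
room = [[(0, 0), (10, 0), (10, 6), (0, 6)]]
print(scale_floor_plan(room, metres_per_unit=1.0, target_scale=100))
```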
Conference Paper
Full-text available
Mid-air ultrasonic feedback is a new form of haptic stimulation supporting mid-air, touch-free user interfaces. Functional implementation of ultrasonic haptic (UH) interfaces depends upon the ability to accurately distinguish between the intensity, shape, orientation, and movement of a signal. This user study (N = 15) investigates the ability to non...
Conference Paper
Full-text available
Ultrasonic haptic (UH) feedback employs mid-air ultrasound waves detectable by the palm of the hand. This interface demonstrates a novel opportunity to utilize non-visual input and output (I/O) functionalities in interactive applications, such as vehicle controls that allow the user to keep their eyes on the road. However, more work is needed to ev...
Article
Full-text available
Navigation systems have become increasingly available and more complex over the past few decades as maps have changed from largely static visual and paper-based representations to interactive and multimodal computerized systems. In this introductory article to the Special Issue on Human-computer Interaction, Geographic Information, and Navigation,...
Preprint
Navigation systems have become increasingly available and more complex over the past few decades as maps have changed from largely static visual and paper-based representations to interactive and multimodal computerized systems. In this introductory article to the Special Issue on Human-computer Interaction, Geographic Information, and Navigation,...
Article
Full-text available
The ubiquity of multimodal smart devices affords new opportunities for eyes-free applications for conveying graphical information to both sighted and visually impaired users. Using previously established haptic design guidelines for generic rendering of graphical content on touchscreen interfaces, the current study evaluates the learning and mental...
Conference Paper
Full-text available
When students with blindness and visual impairment (BVI) are confronted with inaccessible visual graphics in the geometry classroom, additional instructional supports are often provided through verbal descriptions of images, tactile and haptic representations, and/or kinetic movement. This preliminary study examined the language used by instruction...
Article
A significant number of individuals in the United States report a disability that limits their ability to travel, including many people who are blind or visually impaired (BVI). The implications of restricted transportation result in negative impacts related to economic security, physical and mental health, and overall quality of life. Fully autono...
Article
Full-text available
Background: The purpose of this study was to develop an Augmented Reality (AR) app on heart failure for remote training of nursing students and compare it against a recorded video lecture. We conducted a quasi-experimental study using a pretest-posttest design with junior nursing students. Methods: The experimental group used the self-paced app; the con...
Article
Full-text available
This article first reviews the pros and cons of current accessible indoor navigation systems and then describes a study using commercial smart devices to navigate routes through a complex building. Our interest was in comparing performance when using real-time narrative descriptions (system-aided condition) vs. a memory-based condition where the sa...
Article
Full-text available
With content rapidly moving to the electronic space, access to graphics for individuals with visual impairments is a growing concern. Recent research has demonstrated the potential for representing basic graphical content on touchscreens using vibrations and sounds, yet few guidelines or processes exist to guide the design of multimodal, touchscree...
Article
Full-text available
Vibration plays a significant role in the way users interact with touchscreens. For many users, vibration affords tactile alerts and other enhancements. For eyes-free users and users with visual impairments, vibration can also serve a more primary role in the user interface, such as indicating streets on maps, conveying information about graphs, or...
Article
Full-text available
This paper explores the viability of new touchscreen-based haptic/vibrotactile interactions as a primary modality for perceiving visual graphical elements in eyes-free situations. For touchscreen-based haptic information extraction to be both accurate and meaningful, the onscreen graphical elements should be schematized and downsampled to: (1) maxi...
Article
Full-text available
This article starts by discussing the state of the art in accessible interactive maps for use by blind and visually impaired (BVI) people. It then describes a behavioral experiment investigating the efficacy of a new type of low-cost, touchscreen-based multimodal interface, called a vibro-audio map (VAM), for supporting environmental learning, cogn...
Article
Full-text available
Introduction: This article describes an evaluation of MagNav, a speech-based, infrastructure-free indoor navigation system. The research was conducted in the Mall of America, the largest shopping mall in the United States, to empirically investigate the impact of memory load on route-guidance performance. Method: Twelve participants who are blind an...
Conference Paper
Full-text available
Indoor navigation and exploration of museum environments present unique challenges for visitors who are blind or have significant vision impairments (BVI). Like other indoor spaces, museums represent dynamic indoor environments that require both guided and self-tour experiences to allow for BVI visitor independence. In order to fully...
Article
Full-text available
Touchscreen-based, multimodal graphics represent an area of increasing research in digital access for individuals with blindness or visual impairments; yet, little empirical research on the effects of screen size on graphical exploration exists. This work probes if and when more screen area is necessary in supporting a pattern-matching task. Purpo...
Conference Paper
Full-text available
Touchscreen-based smart devices, such as smartphones and tablets, offer great promise for providing blind and visually-impaired (BVI) users with a means for accessing graphics non-visually. However, they also offer novel challenges as they were primarily developed for use as a visual interface. This paper studies key usability parameters governing...
Chapter
The overarching goal of our research program is to address the long-standing issue of non-visual graphical accessibility for blind and visually-impaired (BVI) people through development of a robust, low-cost solution. This paper contributes to our research agenda aimed at studying key usability parameters governing accurate rendering and perception...
Chapter
Full-text available
This chapter considers what it means to learn and navigate the world with limited or no vision. It investigates limitations of blindness research, discusses traditional theories of blind spatial abilities, and provides an alternative perspective of many of the oft-cited issues and challenges underlying spatial cognition of blind people. Several pro...
Conference Paper
Full-text available
The overarching goal of our research program is to address the longstanding issue of non-visual graphical accessibility for blind and visually impaired (BVI) people through development of a robust, low-cost solution. This paper contributes to our research agenda aimed at studying key usability parameters governing accurate rendering and perception...
Chapter
For individuals with significant vision impairment, due to natural aging processes or early vision loss, descriptions of indoor scenes require a high level of precision in spatial information to convey accurate object relations and allow for the formation of effective mental models. This paper briefly describes a single experiment conducted within...
Conference Paper
Full-text available
A critical component of effective navigation is the ability to form and maintain accurate cognitive maps. Proper cognitive map maintenance can become difficult for older adults as many of the constituent memory structures exhibit degradation with age. The present study employed a novel testing paradigm where younger adult participants (20 to 40 yea...
Article
Full-text available
The present study investigated cognitive map development in multi-level built environments. Three experiments were conducted in complex virtual buildings to examine the effects of five between-floor structural factors that may impede the accuracy of humans’ ability to build multi-level cognitive maps. Results from Experiments 1 and 2 revealed that...
Article
Full-text available
When walking without vision, people mentally keep track of the directions and distances of previously viewed objects, a process called spatial updating. The current experiment indicates that while people across a large age range are able to update multiple targets in memory without perceptual support, aging negatively affects accuracy, precision, a...
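The spatial-updating task described above can be illustrated geometrically: after the observer moves, the remembered egocentric distance and direction to each target change in a lawful way. The sketch below computes those updated egocentric coordinates under assumed conventions; it is an illustration of the task geometry, not the study's procedure or analysis code.

```python
import math

def updated_egocentric(target_xy, observer_xy, observer_heading_rad):
    """Return (distance, bearing) of a remembered target relative to the
    observer's current position and facing direction.

    Bearing is in radians: 0 = straight ahead, positive = to the left.
    """
    dx = target_xy[0] - observer_xy[0]
    dy = target_xy[1] - observer_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - observer_heading_rad
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
    return distance, bearing

# Example: a target seen 3 m straight ahead; the observer then walks 2 m
# forward and turns 90 degrees right, so the target is now 1 m to the left.
print(updated_egocentric((3.0, 0.0), (2.0, 0.0), -math.pi / 2))
```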
Article
Full-text available
Background/Study Context: Aging research addressing spatial learning, representation, and action is almost exclusively based on vision as the input source. Much less is known about how spatial abilities from nonvisual inputs, particularly from haptic information, may change during life-span spatial development. This research studied whether learnin...
Article
Full-text available
Touchscreen devices, such as smartphones and tablets, represent a modern solution for providing graphical access to people with blindness and visual impairment (BVI). However, a significant problem with these solutions is their limited screen real estate, which necessitates panning or zooming operations for accessing large-format graphical material...
Conference Paper
Full-text available
In order to provide accurate automated scene description and navigation directions for indoor space, human beings need intelligent systems to provide an effective cognitive model. Information provided by the structure and use of spatial prepositions is critical to the development of accurate and effective cognitive models. Unfortunately, the use an...
Conference Paper
Full-text available
The limited screen real estate of touchscreen devices necessitates the use of zooming operations for accessing graphical information such as maps. While these operations are intuitive for sighted individuals, they are difficult to perform for blind and visually-impaired (BVI) people using non-visual sensing with touchscreen-based interfaces. We add...
Conference Paper
Full-text available
The aging process is associated with changes to many tasks of daily life for older adults, e.g. driving. This is particularly challenging in rural areas where public transportation is often non-existent. The current study explored how age affects driving ability through use of an immersive virtual reality driving simulator. Participants were requir...
Conference Paper
Full-text available
People often become disoriented and frustrated when navigating complex, multi-level buildings. We argue that the principal reason underlying these challenges is insufficient access to the requisite information needed for developing an accurate mental representation, called a multi-level cognitive map. We postulate that increasing access to global l...
Article
Full-text available
Four different platforms were compared in a task of exploring an angular stimulus and reporting its value. The angle was explored visually, tangibly as raised fine-grit sandpaper, or on a touch-screen with a frictional or vibratory signal. All platforms produced highly accurate angle judgments. Differences were found, however, in exploration time,...
Article
People who are blind or visually impaired face difficulties using a growing array of everyday appliances because they are equipped with inaccessible electronic displays. We report developments on our "Display Reader" smartphone app, which uses computer vision to help a user acquire a usable image of a display and have the contents read aloud, to ad...
Conference Paper
Full-text available
People who are blind or visually impaired face difficulties using a growing array of everyday appliances because they are equipped with inaccessible electronic displays. We report developments on our "Display Reader" smartphone app, which uses computer vision to help a user acquire a usable image of a display and have the contents read aloud, to...
Article
Full-text available
This paper discusses issues of importance to designers of media for visually impaired users. The paper considers the influence of human factors on effectiveness of presentation as well as the strengths and weaknesses of tactile, vibrotactile, static pins, haptic, force feedback, and multimodal methods of rendering maps, graphs and models. The autho...
Article
Full-text available
Presents a summary of the articles in this issue that focus on haptic assistive technology for people who are visually impaired.
Article
Full-text available
Many developers wish to capitalize on touch-screen technology for developing aids for the blind, particularly by incorporating vibrotactile stimulation to convey patterns on their surfaces, which otherwise are featureless. Our belief is that they will need to take into account basic research on haptic perception in designing these graphics interfac...
Article
Full-text available
This paper summarizes the implementation, evaluation, and usability of non-visual panning operations for accessing graphics rendered on touch screen devices. Four novel non-visual panning techniques were implemented and experimentally evaluated on our experimental prototype, called a Vibro-Audio Interface (VAI), which provides completely non-visual...
Conference Paper
Full-text available
The goal of this study was to investigate how the immersion level of virtual environments (HMD vs. desktop) and rotation method (physical vs. imagined) affects wayfinding performance in multi-story virtual buildings and the development of multi-level cognitive maps. Twelve participants learned multi-level virtual buildings using three VE conditions...
Conference Paper
Full-text available
It is known that people have problems when wayfinding in multi-level buildings. We propose that this challenge is largely due to development of inaccurate multi-level cognitive maps of the 3D building structure. We argue that better visualization of the layered structure of the building could facilitate multi-level cognitive map development and sig...
Article
Full-text available
Humans' spatial representations enable navigation and reaching to targets above the ground plane, even without direct perceptual support. Such abilities are inconsistent with an impoverished representation of the third dimension. Features that differentiate humans from most terrestrial animals, including raised eye height and arms dedicated to mani...
Article
Full-text available
Location sharing in indoor environments is limited by the sparse availability of indoor positioning and lack of geographical building data. Recently, several solutions have begun to implement digital maps for use in indoor space. The map design is often a variant of floor-plan maps. Whereas massive databases and GIS exist for outdoor use, the major...
Article
Full-text available
Indoor navigation technology is needed to support seamless mobility for the visually impaired. This paper describes the construction and evaluation of an inertial dead reckoning navigation system that provides real-time auditory guidance along mapped routes. Inertial dead reckoning is a navigation technique coupling step counting together with head...
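The abstract sketches the core of inertial dead reckoning: each detected step advances the position estimate along the current heading. A minimal illustration of that update rule follows, assuming a fixed average step length; step detection, sensor fusion, and drift correction used by the actual system are not shown, and the names here are illustrative.

```python
import math

STEP_LENGTH_M = 0.7  # assumed average stride; a real system calibrates per user

def update_position(x, y, heading_rad, step_length=STEP_LENGTH_M):
    """Advance the (x, y) estimate by one detected step along the heading
    (radians, 0 = east, counterclockwise positive)."""
    x += step_length * math.cos(heading_rad)
    y += step_length * math.sin(heading_rad)
    return x, y

# Example: three detected steps while heading roughly north-east
pos = (0.0, 0.0)
for heading in (0.8, 0.8, 0.75):
    pos = update_position(*pos, heading)
print(pos)
```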
Conference Paper
Full-text available
This paper evaluates an inexpensive and intuitive approach for providing non-visual access to graphic material, called a vibro-audio interface. The system works by allowing users to freely explore graphical information on the touchscreen of a commercially available tablet and synchronously triggering vibration patterns and auditory information when...
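The vibro-audio interface, as described, triggers vibration and speech whenever the user's finger passes over an onscreen graphical element. A minimal hit-testing sketch of that interaction loop is given below; the element representation and feedback callbacks are assumptions, not the published implementation.

```python
from dataclasses import dataclass

@dataclass
class Element:
    """Axis-aligned bounding box of a rendered line or region on the screen."""
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def on_touch_move(x, y, elements, vibrate, speak):
    """Trigger feedback when the touch point lies on any graphical element.

    vibrate and speak are placeholder callbacks standing in for the
    device's vibration motor and text-to-speech output.
    """
    for el in elements:
        if el.contains(x, y):
            vibrate()        # short vibration pulse while on the element
            speak(el.name)   # announce the element's label
            return el
    return None
```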
Article
Full-text available
Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using exten...
Conference Paper
Full-text available
There is growing interest in improving indoor navigation using 3D spatial visualizations rendered on mobile devices. However, the level of information conveyed by these visualization interfaces in order to best support indoor spatial learning has been poorly studied. This experiment investigates how learning of multi-level virtual buildings assiste...
Article
Full-text available
Indoor navigation technology is needed to support seamless mobility for the visually impaired. This paper describes the construction and evaluation of a navigation system that infers the user's location using only magnetic sensing. It is well known that the environments within steel frame structures are subject to significant magnetic distortion...
Article
Full-text available
Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with shor...
Article
Full-text available
This research uses a novel integration paradigm to investigate whether target locations read in from long-term memory (LTM) differ from perceptually encoded inputs in spatial working-memory (SWM) with respect to systematic spatial error and/or noise, and whether SWM can simultaneously encompass both of these sources. Our results provide evidence fo...
Chapter
Full-text available
losing vision is a significant decrement in performance of actions that rely on the spatial resolution and wide field of view that vision provides, particularly under tight temporal constraints (see Chapters 2 and 4). Returning a tennis serve or driving in city traffic are examples. Nonetheless, the ability of many blind people to perform tasks tha...
Article
Full-text available
Several studies have verified that multi-level floors are an obstacle for indoor wayfinding (e.g., navigators show greater angular error when making inter-level pointing judgments and experience more disorientation when way-finding between floors). Previous literature has also suggested that a multi-level cognitive map could be a set of vertically...
Article
Full-text available
This paper proposes an interface that uses automatically-generated Natural Language (NL) descriptions to describe indoor scenes based on photos taken of that scene from smartphones or other portable camera-equipped mobile devices. The goal is to develop a non-visual interface based on spatio-linguistic descriptions which could assist blind people i...
Article
Full-text available
In two experiments, we investigated whether reference frames acquired through touch could influence memories for locations learned through vision. Participants learned two objects through touch, and haptic egocentric (Experiment 1) and environmental (Experiment 2) cues encouraged selection of a specific reference frame. Participants later learned e...
Article
Full-text available
In many nonhuman species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in hum...
Article
Full-text available
Accurate processing of nonvisual stimuli is fundamental to humans with visual impairments. In this population, moving sounds activate an occipito-temporal region thought to encompass the equivalent of monkey area MT+, but it remains unclear whether the signal carries information beyond the mere presence of motion. To address this important question...
Article
Full-text available
This research examined whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In 3 experiments, participants learned 4-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the m...
Conference Paper
Full-text available
There is a long history of providing assistive technology to blind persons. In the spatial domain, most of this effort has focused, however, on low-level mobility cues (e.g., avoiding obstacles) and has been developed from the third-person engineering perspective. We argue that improving independence and navigation abilities without vision requires...
Conference Paper
Full-text available
As indoor spaces become more complex, and information technology develops, there is a growing use of devices that help users with a variety of tasks in indoor space. Outdoor spatial informatics is well developed, with GIS at their core. Indoor spatial informatics is less well developed, and there is currently a lack of integration between outdoor a...
Article
Full-text available
We report on three experiments that investigate the efficacy of a new type of interface called a virtual verbal display (VVD) for nonvisual learning and navigation of virtual environments (VEs). Although verbal information has been studied for route-guidance, little is known about the use of context-sensitive, speech-based displays (e.g., the VVD)...
Article
Full-text available
Evidence for amodal representations after bimodal learning: Integration of haptic-visual layouts into a common spatial image. Spatial Cognition & Computation, 9(4), 287-304. Abstract: Participants learned circular layouts of six objects presented haptically or visually, then indicated the direction from a start target to an end target of the same o...
Article
Full-text available
Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs, and lighting) for acquiring cognitive maps of novel indoor layouts. In this study we asked whether visual impairment and age affect reliance on non-geometric visual information for layout learn...
Conference Paper
Full-text available
We investigate verbal learning and cognitive map development of simulated layouts using a non-visual interface called a virtual verbal display (VVD). Previous studies have questioned the efficacy of VVDs in supporting cognitive mapping (Giudice, Bakdash, Legge, & Roy, in revision). Two factors of interface fidelity are investigated which could acco...
Article
Full-text available
Indoor navigation technology is needed to support seamless mobility for the visually impaired. A small portable personal navigation device that provides current position, useful contextual wayfinding information about the indoor environment and directions to a destination would greatly improve access and independence for people with low vision. Thi...
Chapter
Full-text available
Sections: Introduction; Factors Influencing Blind Navigation; Technology to Augment Blind Navigation; Review of Selected Navigational Technologies
Conference Paper
Full-text available
Blindfolded participants were guided along routes from two display modes: spatial language ("left," "right," or "straight") or spatialized audio (where the perceived sound location indicates the target direction). Half of the route guidance trials were run concurrently with a secondary vibrotactile N-back task. To assess cognitive map development,...
Article
Full-text available
This work investigates whether large-scale indoor layouts can be learned and navigated non-visually, using verbal descriptions of layout geometry that are updated, e.g. contingent on a participant's location in a building. In previous research, verbal information has been used to facilitate route following, not to support free exploration and wayfi...
Article
Full-text available
We report a vibrotactile version of the common n-back task used to study working memory. Subjects wore vibrotactile stimulators on three fingers of one hand, and they responded by pressing a button with the other hand whenever the current finger matched the one stimulated n items back. Experiment 1 showed a steep decline in performance as n increas...
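The vibrotactile n-back procedure reduces to a simple comparison between the currently stimulated finger and the finger stimulated n items earlier. The sketch below generates a stimulus sequence and the corresponding correct responses under assumed parameters; stimulus delivery hardware and timing are not modeled.

```python
import random
from collections import deque

def run_nback_trials(n, num_trials=30, fingers=("index", "middle", "ring")):
    """Generate a vibrotactile n-back stimulus sequence and the correct responses.

    A trial is a 'match' when the current finger equals the finger stimulated
    n items back; the first n trials have no defined match.
    """
    history = deque(maxlen=n)
    trials = []
    for _ in range(num_trials):
        finger = random.choice(fingers)
        is_match = len(history) == n and history[0] == finger
        trials.append((finger, is_match))
        history.append(finger)
    return trials

# Example: a 2-back sequence; participant button presses would be scored
# against the is_match flag for each trial.
for finger, is_match in run_nback_trials(n=2, num_trials=10):
    print(finger, "match" if is_match else "-")
```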
Article
Full-text available
A vibrotactile N-back task was used to generate cognitive load while participants were guided along virtual paths without vision. As participants stepped in place, they moved along a virtual path of linear segments. Information was provided en route about the direction of the next turning point, by spatial language ("left," "right," or "straight")...
Poster
Full-text available
Learning Virtual Building Layouts: The Effects of Age on the Use of Geometric and Nongeometric Visual Information
Article
When people visually inspect a map, the map's orientation at learning is known to be privileged in memory (Orientation Specificity). That is, judgments aligned with the map are reliably more accurate than those which are mis-aligned (Alignment Effect). The present studies use an alignment paradigm with visual and haptic map learning to investigate...
Article
This study examines the effect of age on the use of visual information when learning target locations in novel buildings. We define two types of visual information available for indoor navigation: 1) geometric cues conveying information about layout geometry, specifically, the network of corridors, and 2) nongeometric cues that are distinct from ge...
Conference Paper
Full-text available
Mobility challenges and independent travel are major concerns for blind and visually impaired pedestrians [1][2]. Navigation and wayfinding in unfamiliar indoor environments are particularly challenging because blind pedestrians do not have ready access to building maps, signs and other orienting devices. The development of assistive technologies t...
Article
Purpose: Are verbal descriptions as effective as visual input for learning spatial layouts? We asked whether people can learn building layouts through exploration of computer-based virtual displays that use synthetic speech to describe layout geometry. If learning with these verbal displays transfers to efficient navigation in real buildings, such d...
Article
The cognitive representation underlying human spatial navigation is often dichotomized into "route knowledge" and "survey knowledge." Motivated by concepts from studies of animal navigation, we propose a different form of underlying representation for human navigation. "Maplets" are small pieces of maps whose configural information can be encoded f...
Article
Purpose: We are interested in how well geometric information about the layout of a building can be conveyed by spatial language. Can people explore and learn building layouts nonvisually using verbal descriptions? Does learning strategy or navigation performance differ as a function of the amount of verbal information provided? In this study, we co...
