Project

The UniDescription Project: Examining Audio Description Through Empirical and Experimental Measures

Goal: This UniD project (beta testing at: http://www.unidescription.org/) has been developed to help people create more audio description and to be a robust resource for those interested in this topic, including "best practices" guidelines, updated scholarly research, and a forum for related thoughts and discussions.

Updates: 0
Recommendations: 0
Followers: 15
Reads: 91

Project log

Brett Oppegaard
added 3 research items
Purpose: This study compares value expressions of intervention designers and participants in a hackathon-like event to research relationships between values and gamification techniques. Our research identifies and analyzes value expressions during a large-scale intervention at national parks for social inclusion of people who are blind or have low vision. Researchers and organizations can use our model to create common-ground opportunities within values-sensitive gamified designs. Method: We collected qualitative and quantitative data via multiple methods and from different perspectives to strengthen validity and better determine what stakeholders wanted from the gamified experience. For our methods (a pre-survey, a list of intervention activities, and a post-survey), we analyzed discourse and coded for values; then we compared data across sets to evaluate values and their alignment/misalignment among intervention designers and participants. Results: Without clear and focused attention to values, designers and participants can experience underlying, unintended, and unnecessary friction. Conclusion: Of the many ways to conceptualize and perform a socially just intervention, this research illustrates the worth of explicitly identifying values on the front end of the design intervention process and actively designing those values into the organizational aspects of the intervention. A design model like ours serves as a subtextual glue to keep people working together. The model also undergirds these complementary value systems, as they interact and combine to contribute to a cause.
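As a rough illustration of the cross-set comparison described above, the following Python sketch labels each value code as aligned or misaligned based on the gap in relative emphasis between designers and participants; the codes, counts, and threshold are invented for illustration and are not data from the study.

# Hypothetical sketch of comparing coded value expressions across stakeholder groups.
# All codes, counts, and the threshold are invented; they are not study data.
from collections import Counter

designer_codes = Counter({"inclusion": 12, "efficiency": 8, "fun": 6})
participant_codes = Counter({"inclusion": 11, "recognition": 7, "fun": 8})

def normalize(counts):
    # Convert raw code counts to proportions so groups of different sizes compare fairly.
    total = sum(counts.values())
    return {code: n / total for code, n in counts.items()}

def compare(group_a, group_b, threshold=0.10):
    # Label each value code as aligned or misaligned based on the gap in relative emphasis.
    a, b = normalize(group_a), normalize(group_b)
    labels = {}
    for code in sorted(set(a) | set(b)):
        gap = abs(a.get(code, 0.0) - b.get(code, 0.0))
        labels[code] = "aligned" if gap <= threshold else "misaligned"
    return labels

print(compare(designer_codes, participant_codes))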
Introduction: American media-accessibility pioneers in the 1970s and 1980s not only sparked interest in the academic study of audio description, they also originated many practical techniques, protocols, theoretical perspectives, guidelines, and standards that persist in the fabric of this type of work decades later. In this study, we located and analyzed source documents for two oft-mentioned innovators—Gregory Frazier and Margaret Pfanstiehl—to shine light on their individual perspectives through a historiography of their foundational writings and associated media. Method: This analysis was conducted on publicly available source documents, such as Frazier’s landmark thesis, and also included a trove of Pfanstiehl’s personal correspondence, as a way to establish particular points of theoretical and historical interest. Results: We found that despite the prominent place of Frazier and Pfanstiehl in audio description lore, neither actually published much writing about what they did and why they did it. Some of what they wrote has been selectively repeated, but other parts have been forgotten. In that respect, this research method could be used to more precisely trace and identify where particular practices emerged, under which theoretical perspectives, and with what complications. It also can help to show how these ideas were documented and tested during their emergence and domestication, as a way to gauge procedural rigor as well as the validity of related findings. Discussion: Audio description scholarship needs theoretical anchors, but it also needs systematic testing of assumptions inherent in those theoretics, which this study helps to identify. Implications for Practitioners: Audio describers invariably will encounter the moment when an assertion of “this is the way we do it” collides with the curiosity of “why?” To promote best practices, the field has to understand where practices came from, how they developed, and, as Frazier recommended, put those ideas to “objective” tests.
Gender gets socially constructed in many visual ways, but people who are blind or who have low vision want to know the gender of those around them, too, as well as other salient positionality details. As with age, race, fashion, etc., a person’s appearance can provide a lot of information about them and their character. Audio description, as a form of audiovisual translation, is a way to make that appearance accessible to those who cannot see it. Yet empirical research about audio description of gender – a complicated and highly contested arena of public discourse – is underdeveloped. This study addresses that issue through a Grounded Theory approach, constructivist in nature, that both generated self-descriptions of portrait images and piloted a model way to analyze them. This process prompted 179 new self-descriptions written during three hackathon-like events over multiple years, illuminating compositional gender-construction strategies as well as fertile paths for audio description research.
Brett Oppegaard
added a research item
The first time I tried to test audio description in our research team’s prototype mobile app, I couldn’t figure out how to get the app to work in my phone’s VoiceOver-like mode. I then spent about a half-hour frantically just trying to get out of that accessibility setting, which seemed to have turned my device into an unusable brick. I eventually found a way back from this dark and mysterious audio-oriented interface, with the help of internet searches and guides, but I did not return from the experience unchanged. The type of frustration I struggled with — for just a moment — is an everyday, all-the-time, and enduring part of life for people who are blind or visually impaired. Only there is no simple online hack for it. When roles are reversed, and a blind person tries to explore a sight-oriented environment such as a museum, an exhibition hall, or a visitor center, through its ocularcentric interfaces, the media ecosystem can turn hostile quickly and in surprising ways, too. All types of media (videos, photographs, illustrations, timelines, charts, tables, maps, etc.) require significant audible augmentation to speak to this audience. Best practices for doing such work are scarce. Not surprisingly, audio versions of visual media often aren’t readily available. What can be done about that? This paper, aligned with the theme of this publication, argues for an inclusive design approach.
Brett Oppegaard
added 6 research items
Mobility and location-awareness are pervasive and foundational elements of contemporary communication systems, and a descriptive term to synthesize them, "locative media", has gained widespread use throughout mobile media and communication research. That label of "locative media", though, usually gets defined ad hoc and used in many different ways to express a variety of related ideas. Locative features of digital media increasingly have changed from visible location-driven aspects of user interfaces, such as check-in features and location badges, toward more inconspicuous ways of relating to location through automated backend processes. In turn, locative features, whether in journalism or other formats and content types, are now increasingly algorithmic and hidden "under the hood", so to speak. Part of the problem with existing classifications or typologies in this field is that they do not take into account this practical shift and the rapid development of locative media in many new directions, intertwining ubiquitous digital integration with heterogeneous content distinctions and divergences. Existing definitions and typologies tend to be based on dated practices of use and initial versions of applications that have changed significantly since inception. To illustrate, this article identifies three emerging areas within digital journalism and mobile media practice that call for further research into the locative dimensions of journalism: the situational turn in news consumption research, platform-specific vis-à-vis platform-agnostic mobile news production, and personalised news.
Purpose: Technical communicators concerned with such issues as media accessibility, disability rights, and universal design could explore fertile scholarly ground by investigating Audio Description more through applied research methods. This article illustrates such potential through the explication of a transmodal-translation process conducted on National Park Service brochures, including interpretation and transformation of their maps into acoustic forms. Method: Our mixed-methods approach included feedback from diverse blind, visually impaired, and sighted stakeholders, including administrators, media designers, and representative park-site users. These insights were then tested through field work and complemented by multiple interviews and focus groups. During this process, we developed digital tools—including open-source software and free mobile apps—for iterative testing and sharing of ideas. Results: Besides generating thematic and diverse insights about this topic, our study also established, developed, and refined a set of best-practices guidelines based on research in the field, informed by gathered empirical evidence. These guidelines are intended to support subject-matter experts at public attractions, regardless of discipline, in the creation of better, more accessible maps through Audio Description. Conclusion: How could a person possibly transform a complex, fully visual, and printed-on-paper map into useful acoustic media for blind and visually impaired visitors? After consulting the scattered, related literature, we oriented our efforts toward the multi-faceted technical communication practice of localization. We then dedicated our project resources to real-world interventions through both the application and the development of audio-description strategies and digital-media-delivery systems as a practical and universal approach to these related translation and localization problems. Keywords: maps, audio description, blindness and visual impairment, mobile apps, best practices guidelines.
Proximity has helped practitioners and scholars to determine newsworthiness for generations. Emerging mobile technologies, though, with contextual-awareness capabilities, have been complicating many of the related issues and expanding the realm of journalistic content—as well as conceptualizations of timeliness—through growing digital tethers to place and use of that material in place. Those evolving complexities include the increasing possibilities for journalists to make connections to contemporary audiences through the customization of content based on matters of user location. In turn, where an audience member is located when media is delivered can matter greatly. Geolocation metadata has become ubiquitous and media delivery systems can sort that data to customize user experiences based on place. In terms of such tailoring, mobile devices allow novel kinds of personalized connections to journalism, prompted by a geographical nearness to physical stimuli. In response, this study examines the potential of proximity for impact on key factors of engagement, through the involvement, social facilitation, and satisfaction of users. This conceptualization of mobile journalism shows that media designers now not only can know precisely where their particular audience is but also adapt their messages to the situation as a way to generate more engaging experiences.
Philipp Jordan
added a research item
Audio description, a form of trans-modal media translation, gives people who are blind or visually impaired access to visually oriented, socio-cultural, and historical public discourse. Although audio description has gained more prominence in media policy and research lately, it rarely has been studied empirically. This paper presents quantitative and qualitative survey data on its challenges and opportunities, through the analysis of responses from 483 participants in a national sample, 334 of whom were blind. Our results give insight into audio description use in broadcast TV, streaming services, physical media such as DVDs, and movie theaters. We further identify a multiplicity of barriers and hindrances that prevent broader adoption and proliferation of audio description. In our discussion, we present a possible answer to these problems: the UniDescription Project, a media ecosystem for the creation, curation, and dissemination of audio description for multiple media platforms.
Philipp Jordan
added a research item
This PPT highlights excerpts of my work as a Research Assistant on the Unidescription (UniD) project. The UniD project is an ongoing, longitudinal research initiative with the goal of increasing the recognition, creation, usage, and dissemination of audio description for the general public and visually impaired audiences alike. More broadly, the UniD project encompasses HCI/ICT design and evaluation for special target audiences and is located in the larger digital humanities realm. The project aims to identify audio description "best practices" and heuristics. We also create web tools and smartphone apps to create and disseminate audio description. The main focus of the project is research on the translation of static media into audio description. We apply a variety of HCI methods, ranging from field research to design sprints, from the application of gamification to the involvement of multiple stakeholders in the design and evaluation process. So far, we have mostly worked with the National Park Service (NPS) and created 50 audio descriptions of NPS brochures, which previously were accessible only to sighted visitors. We are also exploring the audio description of the various UHM campus maps, such as the art, plant, and general campus building and services maps. In the talk, I will give an overview of the project, demo the UniD webtool, showcase the apps, and describe our design sprints, which we call "Descriptathons". You can learn more about the background of the UniD project here: https://www.unidescription.org
Brett Oppegaard
added a research item
"Gamification" research has evolved and grown dramatically in recent years, gaining popularity across disciplines. While such efforts have generated headway in many respects, and in various directions, from conceptual understandings to user studies, the field could benefit from more work focused upon use in research methodologies at the nexus of practice and theory. This paper, in turn, reflects upon such an experiment aimed at the design and application of gamification techniques within a typical technical-communication context. In this case, subject matter experts within the National Park Service were being asked to improve accessibility of their site brochures by audio describing them. During this training, they were given an overview of audio description, as a process, as well as introduced to a prototype web tool and then asked to use that tool to create the description for their site brochure. Unlike previous training exercises with other parks in this project, though, this group also was organized by sites into a tournament bracket, in which pairs of parks competed against each other in exercises designed to create comparable audio description. The winner of each round, as determined by an independent panel of judges, advanced to the next round, spurred by the promise of fun Hawaiian-themed prizes at the end. This gamification strategy appeared to generate more data, and more research-focused data, than the previous training exercises we have offered, per user. It also apparently engaged many, in various evident ways. But it also seemed to disenfranchise some as well, who dropped out of this voluntary training, creating a mix of results, which will be outlined in this paper.
Brett Oppegaard
added 4 research items
This paper describes a National Park Service (NPS) and University of Hawaii research project that is developing a mobile application for audio describing NPS print brochures for blind and visually impaired park users. The project has the potential to expand access to cultural and aesthetic material for blind and visually impaired people.
Unigrid" design specifications created by Massimo Vignelli have provided the standards for the layout of paper brochures at U.S. National Park Service sites for more than three decades. These brochures offer visitors a familiar analog presentation of visual information, blending text, photographs, maps, and illustrations. These materials, however, are not accessible to people who are blind, have low vision, or a print disability. The National Park Service for decades has been challenged -- by requirements and principle -- to offer alternate formats that provide equivalent experiences and information of these print materials. In other words, people who are blind or visually impaired should have access to a "brochure" experience, too. This exploratory study, funded by the National Park Service, takes a new approach to this long-term problem by conducting a content analysis of current Unigrid brochures to determine their fundamental components, found in practice. This components-based approach is intended to provide clear pathways for cross-modal translation of the printed material into audio-described media, which then, can be efficiently distributed via mobile apps, as an extension of these original components.
“Unigrid” design specifications created by Massimo Vignelli have provided the standards for the layout of paper brochures at U.S. National Park Service sites for more than three decades. These brochures offer visitors a familiar analog presentation of visual information, blending text, photographs, maps, and illustrations. These materials, however, are not accessible to people who are blind, have low vision, or a print disability. The National Park Service for decades has been challenged – by requirements and principle – to offer alternate formats that provide equivalent experiences and information of these print materials. In other words, people who are blind or visually impaired should have access to a “brochure” experience, too. This exploratory study, funded by the National Park Service, takes a new approach to this long-term problem by conducting a content analysis of current Unigrid brochures to determine their fundamental components, found in practice. This components-based approach is intended to provide clear pathways for cross-modal translation of the printed material into audio-described media, which then, can be efficiently distributed via mobile apps, as an extension of these original components.
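As a rough illustration of the components-based approach described above, the following Python sketch models a brochure as a list of components, each paired with an audio description; the component types and fields are illustrative assumptions, not the categories produced by the content analysis.

# Hypothetical sketch of a components-based brochure model as described above.
# Component kinds and fields are illustrative assumptions, not the study's actual categories.
from dataclasses import dataclass, field

@dataclass
class BrochureComponent:
    # One visual element of a Unigrid brochure, paired with its audio description.
    kind: str                    # e.g., "text", "photograph", "map", "illustration"
    label: str                   # short identifier used in the brochure layout
    audio_description: str = ""  # cross-modal translation of the visual content

@dataclass
class UnigridBrochure:
    # A brochure decomposed into components, ready for audio delivery via a mobile app.
    park_name: str
    components: list = field(default_factory=list)

    def undescribed(self):
        # Return components still lacking an audio description.
        return [c for c in self.components if not c.audio_description]

brochure = UnigridBrochure("Example Park", [
    BrochureComponent("map", "park overview map"),
    BrochureComponent("photograph", "cover photo", "A ranger stands at an overlook at sunrise."),
])
print([c.label for c in brochure.undescribed()])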
Brett Oppegaard
added a project goal
This UniD project (beta testing at: http://www.unidescription.org/) has been developed to help people create more audio description and to be a robust resource for those interested in this topic, including "best practices" guidelines, updated scholarly research, and a forum for related thoughts and discussions.