Accessibility of High-Fidelity Prototyping Tools
School of Information, Rochester Institute of Technology, NY, USA, email@example.com
Garreth W. Tigwell
School of Information, Rochester Institute of Technology, NY, USA, firstname.lastname@example.org
School of Information, Rochester Institute of Technology, NY, USA, email@example.com
High-fidelity prototyping tools are used by software designers and developers to iron out interface details without full
implementation. However, the lack of visual accessibility in these tools creates a barrier for designers who may use screen readers,
such as those who are vision impaired. We assessed conformance of four prototyping tools (Sketch, Adobe XD, Balsamiq, UXPin)
with accessibility guidelines, using two screen readers (Narrator and VoiceOver), focusing our analysis on GUI element accessibility
and critical workflows used to create prototypes. We found that none of the tools was fully accessible: 45.9% of GUI elements met accessibility criteria, while 34.2% partially supported accessibility and 19.9% did not support accessibility. Accessibility issues stymied efforts
to create prototypes using screen readers. Though no screen reader-tool pairs were completely accessible, the most accessible pairs
were VoiceOver-Sketch, VoiceOver-Balsamiq, and Narrator-Balsamiq. We recommend prioritizing improved accessibility for input
and control instruction, alternative text, focus order, canvas element properties, and keyboard operations.
CCS CONCEPTS: • Human-centered computing → Accessibility → Accessibility systems and tools; • Human-centered computing → Accessibility → Empirical studies in accessibility
Additional Keywords and Phrases: High-Fidelity Prototyping, Prototyping Tools
1 INTRODUCTION
Desktop software applications, such as Axure, Sketch, InVision, and Adobe XD, empower user interface and user
experience (UI/UX) designers to rapidly create high-fidelity prototypes. Effort has been made to improve web and
mobile accessibility for screen reader users [12,13,16,17,21,31], and to improve accessibility in the design process
[11,25,37,44]. Blind and low vision individuals are successful in artistic and creative endeavors [2,3,10,15,41], yet the
accessibility and usability of high-fidelity prototyping tools for screen reader users remains undetermined, limiting
vision impaired professionals in the technology design industry. We explored the accessibility of prototyping tools and
their prototypes by assessing how well both work with screen readers. Specifically, we attempted to use prototyping
tools with screen readers to see which parts of the prototyping tools were accessible by screen reader, and which parts
were not. We focused on the following research questions: (1) How compatible are common/popular mainstream
prototyping tools with screen reader software? (2) What accessibility issues arise when using prototyping tools with
screen readers, specifically, what part of the prototyping tool is in/compatible with screen readers, and what
prototyping tools are recommended for screen reader users? We assessed tool conformance to accessibility guidelines,
and examined workflows for prototype creation. We present the results of our investigation, showing that prototyping
tools are largely not accessible via screen readers, with the most accessible dyads being VoiceOver-Sketch, VoiceOver-Balsamiq, and Narrator-Balsamiq. Our findings comprise an empirical contribution, providing data about how well
prototyping tools can be used with screen readers, describing the severity of problems that prevent accessibility, and
offering suggestions for improving specific application accessibility.
2 RELATED WORK
Much work has focused on how to improve the human-centered design process to create accessible solutions, a key
aspect of which is including users with disabilities in the technology design process to ensure accessibility persists to
the final product [11,25,35,37,44]. Few studies focus on the disabled user as the designer, centering accessibility in the design process [4,6]. Although some recent work has begun to investigate how to support blind and vision impaired developers editing the design of web pages, our work takes a different approach by focusing on accessibility at an
earlier point in the design process: what has been done in accessibility and design broadly, where is accessibility
defined and enforced, and what do current tools and techniques offer to enable blind and low vision designers to carry
out their work.
2.1 Accessibility in User-Centered and Interaction Design
As the quintessential human-focused approach, User-Centered Design (UCD) centers the user experience in design considerations, yet relies on designers to explicitly name disabled users as part of the target user group [38,39]. Similarly,
Value Sensitive Design implores designers to consider a constellation of value systems, including disability, in
technology design solutions. In contrast, User-Sensitive Inclusive Design exhorts designers to be deliberate in including people with disabilities, starting by assuming that some portion of users will be disabled, and then encouraging designers to approach and engage users with disabilities. Finally, Design for Social Accessibility
borrows a combination of approaches by encouraging designers to consider users with and without disabilities
throughout the design process [36,37]. These approaches motivate and scaffold designers' efforts to be inclusive, such as having people with disabilities participate in the design process up front, thus improving accessibility in the final design. By contrast, to our knowledge, few design approaches assume the designer is a person with a disability [4,6]. Indeed, while there are designers who identify as having a disability [27,41], not much is known about how accessible the design process is for people with disabilities. Since much of the design process is visual,
including many of the tools used in interaction design, we focus specifically on how accessible tools are for people
with vision impairments.
Many digital tools are available to help interaction designers derive usable and aesthetically appealing designs.
Recently, Potluri et al. proposed the use of AI to support blind or low vision creators designing user interfaces. They reported that participants often left design aesthetic decisions to somebody else, suggesting where in the process AI assistance may provide support. In addition, though participants demonstrated confidence when asked to sketch the interface layout of mobile apps, beyond an assessment of lo-fi sketching and tactile materials, there was no evaluation of the accessibility of current industry-standard digital prototyping tools for blind or low vision users. High-fidelity prototyping tools help to bring the design to users for advanced user testing and feedback.
We consider how a vision impaired designer might use a screen reader and high-fidelity prototyping tool to create a prototype.
2.2 Prototyping Tools and Screen Readers
Prototyping is an integral part of the UCD process, as prototypes instantiate user interfaces for testing with users
to elicit feedback and refine designs. In human-computer interaction (HCI), prototypes range from low-fidelity to interactive high-fidelity manifestations of design ideas, involving mixed levels of fidelity on different dimensions. Low-fidelity prototypes are low-resource strategies used to obtain user feedback in the early stages of the design process, and help designers to evaluate alternatives and assess the current design to improve usability [19,34,42]. High-fidelity prototypes are used at later stages of the design process, employing visual and interactive details that look and behave like a real product, making them suitable for usability testing. Many high-fidelity prototyping tools,
such as Sketch, InVision, and Adobe XD, allow designers to create click-through prototypes to demonstrate minimum
interaction. However, for blind or low vision user interface and user experience (UI/UX) practitioners, it is unclear if
such prototyping tools are usable or even operable with screen readers. For software to be operable it must be accessible
to the point that a user can interact with all functions to complete tasks, whereas questions about its usability will look
deeper into the overall experience tied to measurements such as intuitiveness, understandability, navigation and speed
to complete tasks. Due to little research in this area, our work is initially focused on the operability of prototyping
tools with screen readers. Prior work found some high-fidelity prototypes may be inaccessible for user testing with
blind or low vision participants, supporting the notion that accessibility issues in the final product can often be traced back to the tools that designers and developers used during their creation. However, it is yet undetermined
how such tools may be inaccessible for blind or low vision creators, and the extent that the tools themselves result in
an inaccessible high-fidelity prototype.
Screen readers convert digital content into synthesized speech, empowering blind and low vision computer users to
operate computing devices with the keyboard or touch gestures. The first screen reader for GUI systems, the IBM
Screen Reader/2, was launched in 1992, and screen readers continue to be popular tools that provide access to computers for blind and low vision users. When a user selects an on-screen element (e.g., text or images), a screen reader describes the item, and may also communicate guidance for how to interact with the item. Screen readers exist at a variety of price points: the most popular desktop versions are JAWS (Job Access with Speech), Apple VoiceOver, and Microsoft Narrator, while the most popular web-focused versions include Non-Visual Desktop Access (NVDA). Of these, VoiceOver, Narrator, and NVDA are low cost and more readily available.
2.3 Accessibility Guidelines and Evaluation
Web accessibility is covered by a plethora of guiding principles and tools, whereas few guidelines exist for desktop application accessibility. Thus, although the accessibility of desktop prototyping tools was the focus of our project, we turned to web accessibility best practices that may be applicable to our work.
Non-Web Accessibility Guidelines
There are no accessibility guidelines for desktop software alone. Both the Web Content Accessibility Guidelines (WCAG) 2.1 and the Revised Section 508 standards (specifically applicable to federal agencies in the U.S.) mainly address web content accessibility. Other platform-specific guidelines, such as Google Accessibility and Microsoft Accessibility, provide detailed accessibility requirements for their specific platform framework. The World Wide Web Consortium (W3C) provides guidance on applying WCAG 2.0 to non-web information and communication technologies, and the IBM Accessibility Checklist Version 7.1 consolidates the Revised Section 508 standards and WCAG 2.1 Level A and Level AA Success Criteria (SC) into relevant checkpoints for non-web software and assistive technology use. Although WCAG was originally created to be "technology-neutral," embedded in its construction is the presumption of "user-agent" technology, i.e., a browser [14:3]. Thus, some reconciliation is necessary to appropriately interpret WCAG and similar accessibility guidelines for desktop applications.
Accessibility Evaluation in Application and Software
People with vision impairments often use assistive technologies, such as screen readers, to access digital content. In this regard, accessibility is a combination of cohesive software compliance with accessibility guidelines and usability with assistive technologies. Despite the lack of non-web accessibility guidelines, other types of accessibility assessment have been conducted on non-web platforms; e.g., a prevalent issue among Android apps is that unlabelled buttons pose a problem for screen readers. The shift to mobile-first computing and the ease of creating automated evaluations of websites and apps may have resulted in a lack of large-scale desktop accessibility assessments. Instead, software companies evaluate the accessibility conformance of their own software. For example, Adobe specialists used screen readers and screen magnifiers to test the accessibility conformance of Adobe Photoshop CC against WCAG 2.0, Section 508, and EN 301 549, finding that conformance varied, and assessing conformance levels as: supports, partially supports, does not support (accessibility), not applicable, and not evaluated. Similarly, the IBM Mobile Accessibility Checker (MAC) evaluates accessibility conformance in mobile applications with the IBM Accessibility Checklist. This strategy has been determined to be costly, leading researchers to develop the Inaccessible Element Rate (IAER) as a lower cost alternative.
Research focused on screen reader accessibility and usability has emphasized critical areas for consideration: clarity of the user interface and interactive elements, logical navigation order of contents and elements, and quick and easy identification of elements or content.
Many UX prototyping tools are non-web applications. Meanwhile, most common accessibility evaluation tools are web-based, or are proprietary and brand-specific. We investigate the accessibility of desktop prototyping tools through the use of screen readers, focusing on clarity of interactive elements, ease of manipulating controls and widgets, and logical navigation order.
3 METHOD
Because no standard industry guideline existed for evaluating accessibility for non-web software, we adapted guidelines based on the IBM Accessibility Checklist to assess compatibility with screen readers. We assessed four
prototyping tools (Sketch, Adobe XD, Balsamiq, and UXPin) individually with two screen readers (VoiceOver and
Narrator) for accessibility. We selected these prototyping tools for their popularity, and we selected non-web-based
screen readers for their popularity and cost. For each tool, we assessed the accessibility of software-wide elements such
as Buttons, Textview, Image, and so on. Then we examined critical workflows based on designed tasks (i.e., tasks required to create a prototype). Finally, we conducted a cursory check on the prototype to determine if it was operable
with the screen reader.
The first author, who identifies as sighted, conducted the evaluations using the screen readers. Our approach examines functional incompatibilities between screen readers and components of prototyping tools, not necessarily usable accessibility. For example, our study identifies missing Labels for controls, but we do not draw conclusions about
accessibility and usability effects for downstream tasks that use controls. We acknowledge that evaluation is best
conducted by an experienced screen reader user, and as such, our approach gives us limited ability to conduct full
accessibility and usability evaluations. We caution that a sighted evaluator limits the scope of our findings. Sighted
users who do not use screen readers as their primary technical intermediary are unlikely to reach the same level of
skill as an expert screen reader user in navigation and functional expediency. An expert will have knowledge of specific
strategies used to address accessibility issues that sighted users lack. To this end, our study is a step toward documenting the technical compatibility between screen readers and prototyping tools, as framed by our first research question.
3.1 Prototyping Tools and Screen Readers
The first author led the prototyping tools evaluation, drawing on four years’ experience with prototyping tools,
including proficiency using Sketch (v63.1), Figma (version dated February 27, 2020), Adobe XD (version dated February
10, 2020), and InVision (version dated February 20, 2020). In preparation for the study, the first author learned to use
screen readers (VoiceOver (version for iOS 13), Narrator (version dated November 2019), and NVDA (v2019.3.1)),
prioritizing popular, low-cost, and easy-to-access tools. Though popular, JAWS and TalkBack (on Android) were excluded from initial consideration due to cost and mobile-only availability, respectively. As we narrowed the
prototyping tools we would examine, we excluded NVDA because it is mainly used for web accessibility and had poor
compatibility with desktop software.
The first author engaged the appropriate tutorials for each screen reader1, then practiced with the screen readers for two hours
a day for two weeks (totaling 28 hours across all the screen readers). Specifically, the first author (1) followed the
tutorials to learn basic and essential commands, such as completing VoiceOver’s Quick Start Training, repeating
training as needed to master recall without needing to reference command lists; (2) read documentation on screen
1 VoiceOver: https://help.apple.com/voiceover/mac/10.15/
reader websites and followed online YouTube tutorials as appropriate for additional support; (3) focused on learning
and mastering cycle-through and reverse-cycle-through elements using Tab and Shift+Tab, including interacting with
content areas by using VO-Shift-Down Arrow and VO-Shift-Up Arrow; (4) practiced the necessary steps to fluently
operate each prototyping tool before beginning analysis, proceeding with tool assessment only once they had
eliminated user performance issues.
We inspected four prototyping tools with macOS and three with Windows (Table 1), using the integrated screen readers for each operating system, i.e., VoiceOver and Narrator, respectively. We chose four prototyping tools (Sketch, Adobe XD, Balsamiq, and UXPin), drawing on popular tools in the 2019 Prototyping Tool ranking list (https://uxtools.co/tools/prototyping/), prioritizing popularity, operating system, price, and features. Adobe XD, UXPin,
and Balsamiq were available on both Windows and macOS, and could be evaluated by both VoiceOver and Narrator.
Although Sketch was not available on the Windows platform, its ranking as #1 on the prototyping tools list (and its
relatively low price point compared with other high ranking tools) compelled us to include it in our assessment.
Table 1. Screen readers and prototyping tools assessed
  macOS (VoiceOver): Sketch, Adobe XD, Balsamiq, UXPin
  Windows (Narrator): Adobe XD, Balsamiq, UXPin*
*Windows does not support Sketch
Terms of Prototyping Tools
To maintain consistency across prototyping tools, we categorized four parts of tool anatomy: Canvas, Layer, Element,
Element Parameters (Figure 1):
1. Canvas: The main area of the prototyping tool. Prototypes are created in this area.
2. Layer: This area contains the list of elements created in the Canvas.
3. Element: Elements could be shapes, text, lines, and so on. The prototype is a combination of different elements.
4. Element Parameters: This area lists parameters related to a selected element, such as position, width, color,
and so on.
Figure 1. Typical user interface of a prototyping tool.
3.2 Accessibility Guidelines for Assessment
We drew on the IBM Accessibility Checklist to assess the tools because it incorporates Section 508 and the Web Content Accessibility Guidelines (WCAG), and includes items from mainstream guidelines. We focused on how screen readers accessed and controlled prototyping tools (Table 2); e.g., we excluded WCAG Success Criteria (SC) 1.2 and 1.4 because of their focus on time-based media and on color and contrast, respectively, which are not relevant to screen readers. We also define a new accessibility conformance criterion called "Control Instruction" to address GUI elements with multiple operations.
Table 2. Accessibility Guideline Checklist*
Keyboard operation — SC 2.1.1 (Keyboard); SC 2.1.2 (No Keyboard Trap); SC 2.1.4 (Character Key Shortcuts): The application should be navigable and operable by using just the keyboard.
Focus — SC 2.4.3 (Focus Order); SC 2.4.7 (Focus Visible): The application should provide ways for users to determine where the current focus is.
Predictability — SC 3.2.1 (On Focus); SC 3.3.1 (Error Identification); SC 3.3.2 (Labels or Instructions); 502.3.5 (Modification of Values); 502.3.9 (Modification of Text): Content should be clear and operate in predictable ways.
Non-text content — SC 1.1.1 (Non-text Content); SC 2.4.2 (Page Titled); SC 2.5.3 (Label in Name): There should be alternate ways to understand the non-text content.
Navigation — SC 2.4.4 (Link Purpose (In Context)); SC 2.4.6 (Headings and Labels): The application should provide ways to help users navigate and find content.
Sensory characteristics — SC 1.3.3 (Sensory Characteristics): Effective sensory characteristics of components should be provided when components receive focus.
Authoring output — 504.2 (Content Creation or Editing): The output from the authoring tool must be accessible and verified against all checkpoints in the WCAG section of the IBM Accessibility Checklist.
Control Instruction — our own defined criterion: Elements must accurately convey all possible operations when they receive focus, for example, in cases where control instructions are provided in tooltips or with visual signifiers in the UI, such as images with drop shadow to indicate they can be pressed. Sighted users have access to these visual signifiers, and it is important to verify whether screen readers convey this information.
*Though our walkthrough was guided by the IBM Checklist, the main part of the checklist consists of WCAG criteria. Therefore, for simplicity, we list the specific WCAG mappings when possible (rather than the corresponding IBM Checklist item). We adopt the IBM Checklist's "Success Criteria (SC)" to denote each criterion.
We assessed conformance to the guidelines in Table 2 with the following ratings:
• Supported (S): The functionality of the prototyping tool meets all corresponding guidelines and can be used effectively with screen readers.
• Partially Supported (PS): For a single corresponding guideline, the functionality of the prototyping tool partially meets it and can still be used with screen readers; for multiple corresponding guidelines, the functionality of the prototyping tool meets at least one of them.
• Not Supported (NS): The functionality of the prototyping tool violates all corresponding guidelines and
cannot be used with screen readers.
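The rating rules above can be encoded as a small decision function. This is an illustrative sketch of the rules as stated, with hypothetical names; it is not tooling used in the study.

```python
def conformance_rating(guideline_results):
    """Map per-guideline outcomes for one tool function to S, PS, or NS.

    Each outcome is one of "meets", "partial", or "violates"; the rules
    follow the Supported / Partially Supported / Not Supported definitions.
    """
    if all(r == "meets" for r in guideline_results):
        return "S"   # meets all corresponding guidelines
    if any(r in ("meets", "partial") for r in guideline_results):
        return "PS"  # partially meets one, or meets at least one of several
    return "NS"      # violates all corresponding guidelines
```

For example, a function that fully satisfies one of its two mapped guidelines but violates the other would be rated PS under these rules.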
3.3 Assessment Method
For each prototyping tool, we first examined software-wide control elements such as Buttons, Textview, Dropdown,
etc., because limited access to these key controls would impact accessibility of the tool as a whole.
GUI Control Elements Testing
We assessed GUI control elements based on essential screen reader keyboard commands used to navigate operations,
executing the following commands:
• Cycle through all interactive GUI elements using Tab or screen reader hotkeys.
• Reverse cycle through the same GUI elements with Shift + Tab or screen reader hotkeys.
• Use Arrow Keys to move within composite GUI elements.
• Invoke elements in focus using Enter or Spacebar.
• Use Esc to exit from invoked UI elements.
We checked the following to assess accessibility:
• For a control element, ensure screen readers read its name.
• For a control element, ensure screen readers read its state (on/checked).
• For a control element, ensure screen readers read its control type.
• For a control element, ensure screen readers read its instruction text.
• For a table, ensure screen readers read its table name, table description, row and column headings.
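The per-element checks above can be recorded as a simple structure. The field and method names below are hypothetical, used only to illustrate how missing screen reader output could be logged per control element.

```python
from dataclasses import dataclass

@dataclass
class ControlElementCheck:
    """What a screen reader announced for one GUI control element."""
    name_read: bool          # element name announced
    state_read: bool         # state (on/checked) announced
    type_read: bool          # control type announced
    instruction_read: bool   # instruction text announced

    def missing(self):
        """Return the checks the screen reader failed, for issue logging."""
        labels = {"name_read": "name", "state_read": "state",
                  "type_read": "control type",
                  "instruction_read": "instruction"}
        return [labels[f] for f, ok in vars(self).items() if not ok]
```

A button announced only as "Button" with no state or instruction would then be logged as missing its state and instruction text.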
We adapted the inaccessible element rate (IAER) to measure the overall accessibility of GUI elements based upon prior work. The IAER is calculated as the percentage of GUI elements with accessibility issues relative to the total number of GUI elements that have potential accessibility impact in the application. For improved readability, we borrowed the notation used by Adobe in their Photoshop CC application evaluation2, where:
o R_S, R_PS, or R_NS = inaccessible element rate for Supported (S), Partially Supported (PS), or Not Supported (NS) in a prototyping tool;
o n_S, n_PS, or n_NS = actual number of elements rated S, PS, or NS in a prototyping tool;
o N = total number of elements that have potential accessibility impact in the app;
such that R_x = n_x / N for x in {S, PS, NS}.
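The IAER computation reduces to simple proportions. The sketch below uses made-up counts and hypothetical parameter names, purely for illustration; it is not our study data.

```python
def iaer_rates(n_s, n_ps, n_ns):
    """Compute the S, PS, and NS rates as percentages of the N elements
    with potential accessibility impact, where N = n_s + n_ps + n_ns."""
    total = n_s + n_ps + n_ns
    return {label: round(100.0 * count / total, 1)
            for label, count in (("S", n_s), ("PS", n_ps), ("NS", n_ns))}
```

For example, a tool with 46 Supported, 34 Partially Supported, and 20 Not Supported elements yields rates of 46.0%, 34.0%, and 20.0%.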
Task Testing: Assessing Critical Workflows
We assessed tasks that comprise the workflow used to create a prototype. Each task in the workflow was designed to exercise commonly used functionality of prototyping tools. The simple prototype consisted of two user
interfaces with an Application Bar and a Button (Figure 2). The two user interfaces were linked and selecting the
Button would cycle between them. The procedure for creating the prototype included commonly used items and
functions for interface design, such as Buttons and page switching.
2 Though we use a different notation scheme, we applied the same analysis for the IAER as used in prior work.
Figure 2 The basic prototype that was created without using a screen reader
To create the prototype, Sketch, Adobe XD, and UXPin had similar workflows; Balsamiq's workflow was slightly different. We clarify the steps involved for each tool in Tables 3 and 4.
For each step shown in Table 3, we (1) mapped the task to accessibility guideline(s), (2) completed the task with the
screen reader, and (3) determined the conformance level based on guideline recommendations.
Table 3. Task List for Sketch, Adobe XD, and UXPin*
1. Create the artboard of the first screen
1.1 Create an artboard. Set its size.
1.2 Name it "Screen 1".
2. Create a Layout
2.1 Create a Layout that has a column grid with 4 columns and 20 gutters (or margin).
3. Create Application Bar
3.1 Create a 360px*76px rectangle as an Application Bar.
3.2 Place it at the top of the Canvas.
3.3 Name it as “App Bar”.
3.4 Set the color to #000000.
3.5 Set the opacity to 54%.
3.6 Set the border width to 2.
4. Add Text to Application
4.1 Insert a Text field in the Canvas and type “Application”.
4.2 Place it in the center of the “App Bar”.
4.3 Group this text with the “App Bar” rectangle.
4.4 Name the group as “AppBar”.
5. Create a Button
5.1 Create a 238px*36px rectangle as a Button.
5.2 Place it at the bottom of the Canvas and in the center of the Canvas. Set the distance between the Application Bar and the button to 36px.
5.3 Name it as “button”.
5.4 Insert a text field in the Canvas and type “Next Page”.
5.5 Place it in the center of the “button”.
5.6 Group this text with the "button" rectangle.
5.7 Name the group as “button1”.
6. Create the second screen
6.1 Duplicate the Canvas we create in the previous task.
6.2 Rename the Canvas to “Screen 2”.
6.3 Edit the text of “Next Page” text field to “Last Page”.
6.4 Rename the “button 1” to “button 2”.
7. Create the interaction
7.1 Link Button 1 to Canvas "Screen 2".
7.2 Link Button 2 to Canvas "Screen 1".
8. Test the prototype
8.1 Preview the prototype.
8.2 Operate the prototype using screen readers.
*Conformance Level refers to supported (S), partially supported (PS), and not supported (NS)
Unlike the other tools, Balsamiq had only one sub-step in Step 3 (create Application Bar), Step 4 (add Text to Application Bar), and Step 5 (create a Button), and did not have Step 2 (create a Layout) (Table 4). Balsamiq provided hundreds of templates of user interface elements; thus, it took fewer steps to create the basic prototype compared with the other tools. For
each step shown in Table 4, we (1) mapped the task to accessibility guideline(s), (2) completed the task with the screen
reader, and (3) determined the conformance level based on guideline recommendations.
Table 4. Task List for Balsamiq
1. Create the artboard of the first screen
1.1 Create a Canvas or artboard. Add an Android phone template.
1.2 Name it "Screen 1".
2. Create a Layout (not applicable for Balsamiq)
3. Create Application Bar
3.1 Add an Application Bar template.
4. Add text to Application Bar
4.1 Edit a text field in the Application Bar and type "Application".
5. Create a Button
5.1 Add a Button template and name it "Next Page".
6. Create the second screen
6.1 Duplicate the Canvas created in the previous task.
6.2 Rename the Canvas to “Screen 2”.
6.3 Edit the text of “Next Page” in the Button to “Last Page”.
7. Create the interaction
7.1 Link Button 1 to Canvas "Screen 2".
7.2 Link Button 2 to Canvas "Screen 1".
8. Test the prototype
8.1 Preview the prototype.
8.2 Operate the prototype using screen readers.
For each of the workflow assessments, we evaluated the task completion rate of different screen readers and
prototyping tools, summarizing which guidelines were violated and why for each function. Throughout, we
documented information not read by screen readers and functions that could not be manipulated (see Appendix for
example tables of data collection items).
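The workflow assessment boils down to a completion tally per screen reader and tool dyad. The sketch below is a minimal illustration with made-up task outcomes, not our study data.

```python
def task_completion_rate(task_outcomes):
    """Fraction of workflow tasks completed with a given screen
    reader-tool dyad, where each outcome is True (completed) or
    False (blocked by an accessibility issue)."""
    return sum(task_outcomes) / len(task_outcomes)
```

For example, a dyad that completed three of four workflow tasks would score 0.75.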
4 RESULTS
The prototyping tools we assessed were largely incompatible with the screen readers used. Some GUI control elements
could be accessed via screen reader, but not the control functions, rendering inaccessible the ability to manipulate
prototyping elements. More specifically, prototyping tools worked best with screen readers when setting control
element parameters, and worst when moving and linking elements on the Canvas.
For screen reader and GUI element accessibility of different dyads, we found: (1) Sketch-VoiceOver had the best compatibility; (2) GUI elements of Adobe XD worked well with VoiceOver; (3) most functions of Adobe XD were inaccessible with Narrator; (4) functions and GUI elements of UXPin performed poorly with both screen readers used; (5) Balsamiq's functions were compatible with screen readers, but not its GUI elements (Figure 3).
Figure 3 Accessibility status of testing groups
4.1 Prototyping Tool Compatibility with Screen Readers
Across the seven prototyping tool and screen reader dyads tested, we examined 1272 GUI control elements and ranked the inaccessible element rate (IAER) of each dyad. The average IAER across dyads was 19.9% (NS), 34.2% (PS), and 45.9% (S); over half (54.1%) of the GUI control elements assessed had accessibility issues when used with a screen reader. GUI elements in Sketch and Adobe XD had better compatibility with screen readers than the other prototyping tools: 63% of GUI control elements were accessible with VoiceOver-Adobe XD and 60% were accessible with VoiceOver-Sketch. About one-third of GUI control elements were accessible for Narrator-UXPin (37%) and VoiceOver-UXPin (32%). At 32% accessible, Narrator-Adobe XD was rated as "not supported."
Our IAER analysis showed that Adobe XD and UXPin GUI control element accessibility with screen readers was either Supported or Not Supported, with little accessibility that was Partially Supported. In contrast, Balsamiq GUI control elements were largely Partially Supported, i.e., the elements were accessible to the screen readers but with a few issues (Figure 4).
Figure 4 Inaccessible element rate (IAER) for the test prototyping tools: (a) for Supported accessibility guidelines; (b) for Partially
Supported accessibility guidelines; (c) for Not Supported accessibility guidelines.
4.2 Assessment of Accessibility Issues by GUI Elements
We assessed the accessibility of individual widget categories (e.g., Button, Image, TextField), because such widgets accounted for the primary way the tools were used (Figure 5).
Figure 5 Conformance level percentage by individual widget GUI elements
All widget categories assessed were incompatible with screen readers to some degree: most widgets were either not recognized by screen readers or had limited functional capability. Our assessment indicated that, at best, cursory attention to accessibility issues enabled some widgets to be identified by screen readers, but they were largely inoperable. In Table 5, we list each GUI element assessed, the criteria it was assessed against, and specific issues found
across the tools.
Table 5. Control Element Issues per WCAG Success Criterion Assessed

Button
Description: Buttons are frequently used to control various functions; for example, a button can be used to activate drawing tools, to align elements, and so on. Screen readers should have access to the Button name and operation.
Issues Found: 4% of buttons had not-supported (NS) issues related to descriptions and focus, 54% had partially-supported (PS) issues related to descriptions, and 42% of buttons had no issues.
Example: Screen readers did not read the name of the "Add Rectangle" button, and only read it as "Button." In this way, screen reader users would not know what the button is for.

Image
Description: Image elements represent images, and images should have alternate ways to access and understand their content.
Issues Found: 6% of Image elements had NS issues and 46% had PS issues. 48% of Image elements assessed were compatible with screen readers.
Example: In the worst case, screen readers did not recognize the existence of the Image. For example, on an image for “Circle Button,” the button was clickable, but the screen reader did not state the name of the control or the Image description, and screen readers were unaware of the existence of this click control.

Group
Description: A Group element represents a group containing multiple GUI elements in the testing app. Screen readers should be able to access the name and operation of the group, and also the individual elements that comprise the group.
Issues Found: 91% of Group elements were not compatible with screen readers and caused accessibility issues related to content operation.
Example: A screen reader described a Group that consists of an Image, a Dropdown menu, and several Text elements but omitted details about the operable Dropdown menu. Users would not know this Group was operable.

Link
Description: A Link element, used mainly in web-based software, operates like a button. Similarly, screen readers should have access to the Link purpose and URL.
Issues Found: 52% of link elements assessed did not work with screen readers and caused accessibility issues related to description and content operation.
Example: Users select a Link to add shadow and blur to an element. NS and PS Link elements either did not allow users to access control of the link (i.e., could not click the link) or did not read the link description.

TextField
Description: TextField elements require users to input text or values. TextField elements must allow screen reader focus, and access to the TextField name and operation.
Issues Found: 30% of TextField elements had issues related to description and focus.
Example: TextField elements often did not receive focus, therefore rendering the controls invisible to screen readers. Figure 6 TextField that requires the user to input width, height, X position, and Y position.

DropDown
Description: A DropDown element is used to expand hidden items. Similarly to Group elements, screen readers should be able to access the name and operation of the DropDown, and also the individual elements within it.
Issues Found: All DropDown elements assessed did not support guidelines related to content operation and content description. Specifically, screen readers did not recognize elements in DropDown lists, instead describing only the currently selected item.
Example: Figure 7 Screen readers did not recognize the DropDown list items, only describing “Arial, Text Element,” rather than indicating that they were selected items from the list.

Checkbox
Description: Checkboxes are used to make non-binary choices. As noted above, Checkboxes should receive focus, allow access to the element name and operations, and be operable.
Issues Found: 40% of Checkbox issues were ranked as NS and 37% were ranked as PS. Such issues included lack of description and keyboard access.
Example: Figure 8 The Checkbox was inaccessible with the keyboard, such that if the user needed to add or remove the shadow using the Checkbox, they were unable to access the Checkbox and its operation via the screen reader.
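Many of the Button and Image issues in Table 5 reduce to controls lacking an accessible name. As an illustrative sketch (not code from any of the assessed tools), the Java Swing accessibility API shows how a desktop control can expose a name and description to assistive technology; the "Add Rectangle" label mirrors the example above:

```java
import javax.accessibility.AccessibleContext;
import javax.swing.JButton;

public class AccessibleNameDemo {
    public static void main(String[] args) {
        System.setProperty("java.awt.headless", "true");

        // An icon-only toolbar button: with no text and no accessible name,
        // a screen reader can announce it only by its role ("Button").
        JButton addRectangle = new JButton();

        // Exposing a name and description through the accessibility API lets
        // a screen reader announce "Add Rectangle, button" instead.
        AccessibleContext ctx = addRectangle.getAccessibleContext();
        ctx.setAccessibleName("Add Rectangle");
        ctx.setAccessibleDescription("Adds a rectangle shape to the canvas");

        System.out.println(ctx.getAccessibleName());
    }
}
```

On each platform, an accessibility bridge (e.g., the Java Access Bridge on Windows) relays these properties to screen readers such as Narrator.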
4.3 Assessment of Accessibility Issues by Accessibility Guideline
In addition to assessing how well screen readers could access widgets, we analyzed how well tools abided by accessibility
guidelines (Table 6). We found that most guidelines were Not Supported, indicating that accessibility was severely
affected. We assessed fewer guidelines as Partially Supported (Table 7) or Supported. Guidelines assessed as Not
Supported (NS) included: lack of input instruction (18.8%), missing alternative text (15.3%), lack of control instruction
(14.8%), improper focus order (12.9%), lack of visible focus indicator (9.8%), and unavailable keyboard operation (8.3%).
Altogether, 79.9% of items assessed ranked as NS.
Table 6. Accessibility Guideline Issues Assessed as Not Supported

Input Instruction
Description: We assessed input instruction against SC 3.3.2, checking if the GUI element provided instruction through a screen reader to let users know what value or text constituted input.
Issues Found: For 18.8% of the elements assessed, screen readers accessed values, but not what the value represented.
Example: Figure 9 In the VoiceOver-Sketch dyad, VoiceOver read a TextField of Y position as "360," omitting that 360 represents the value of the Y position.

Alternative Text
Description: We assessed alternative text for all non-text input GUI elements (e.g., ToggleButton). According to SC 1.1.1, a non-text GUI element must have associated text to specify its type and name.
Issues Found: We found 33% of the elements that should have correct alternative text had issues. Specifically, out of the 782 instances of alternative text, 264 had issues (rated as NS or PS); 518 elements were rated as S.
Example: Without alternative text, the non-text content is invisible to the screen reader, making it impossible for users to know the existence and purpose of the non-text GUI element. Elements rated NS or PS either did not have alternative text, or the alternative text did not receive focus, was incorrect, or was inoperable.

Control Instruction
Description: (We defined this criterion because these issues were not covered in other guidelines.) We assessed if control instruction was accessible by screen reader via keyboard interactions, including the element’s controllable aspects (e.g., accessing child elements).
Issues Found: Although some images or text elements were clickable or draggable, screen readers did not read label instructions.
Example: Figure 10 When focusing on "Date Picker" in Balsamiq, VoiceOver described it as "Date Picker, image" without providing the tooltip instruction "Drag-and-drop or double-click to add." Meanwhile, Narrator described both the alternative information and the Tooltip. Without control instruction, screen reader users may not be aware of different possible interactions for a specific control element.

Focus Order
Description: We assessed whether all elements received focus in the correct order. SC 2.4.3 states that improper focus order does not preserve meaning and operability.
Issues Found: 129 out of 131 elements assessed on focus order were rated as NS.
Example: For instance, when the user clicks a link to expand a list area, the focus should move from the link to the list. If the focus moves out of order, it might lead to a severe error where users may operate other elements rather than the list.

Focus Visible
Description: SC 2.4.7 requires that keyboard operable user interfaces indicate when an element receives focus. We assessed how well visible indicators provided additional feedback to users about where they are in the system.
Issues Found: Some screen readers would read elements in the test prototyping tools without showing the visible indicator; users may then lose focus and not know where the GUI element is.
Example: Figure 11 The additional white bounding box around “Square Grid” provides a visible focus indicator (UXPin on Mac) so that users know the focus is on that element.

Keyboard
Description: SC 2.1.1 requires that all functionality is operable through a keyboard interface. We assessed the operability of keyboard control elements. If an interactive element is inoperable by keyboard, it is inaccessible by screen reader.
Issues Found: Overall, most keyboard operations worked well except for combination operations, selecting/editing text items on the Canvas, or operating the Canvas.
Example: When keyboard operations did not work, the most common issue was the inability to operate Dropdown menus properly. Figure 12 Dropdown menus were often not operable by keyboard, making options under “Typography” inaccessible.

Label in Name
Description: SC 2.5.3 ensures that a component's visible Text Label is the same as the accessible name for the component. It can benefit low-vision users who use a screen reader. We assessed the presence and accuracy of Text Labels, including checking that the Label Name specifically labels components and is not the element’s alt text (1.1.1).
Issues Found: Few buttons in the prototyping tools had text labels. Of the 70 elements evaluated for text label name, 68 were rated as NS because screen readers did not read the text label. However, this assessment was only done for visible text labels, which means elements without text label names were not checked.
Example: Figure 13 The “align layer to center” button had the same visible Text Label as its alternative text. Low-vision screen reader users may not get any further clarity on which button received focus.
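The input-instruction failures under SC 3.3.2 typically arise when the visible label and the input field are separate, unassociated widgets. As a hedged sketch in Java Swing (our own illustration, not code from the assessed tools; the "Y" position field echoes the Figure 9 example), associating a label with its text field gives the field an accessible name, so assistive technology can report what the value represents:

```java
import javax.swing.JLabel;
import javax.swing.JTextField;

public class LabeledInputDemo {
    public static void main(String[] args) {
        System.setProperty("java.awt.headless", "true");

        // A field holding the Y position of a canvas element. On its own,
        // a screen reader may read only the value, e.g. "360".
        JTextField yPosition = new JTextField("360");

        // Associating the visible label with the field lets Swing derive the
        // field's accessible name from the label, so "Y" can be announced
        // along with the value.
        JLabel yLabel = new JLabel("Y");
        yLabel.setLabelFor(yPosition);

        System.out.println(yPosition.getAccessibleContext().getAccessibleName());
    }
}
```

Swing stores the association as a "labeledBy" relationship, which the field's AccessibleContext falls back to when no explicit accessible name is set.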
The leading causes of Partially Supported accessibility issues were missing alternative text and lack of input
instruction, accounting for 82.8% and 10.5%, respectively.
Table 7. Accessibility Guideline Issues Assessed as Partially Supported

Alternative Text
Description: We checked if elements had associated text to indicate their type and context of use.
Issues Found: Some elements of the test prototyping tools had incomplete alternative text.
Example: For an "Edit Button" in Narrator-Balsamiq, Narrator only described it as "Button," not "Edit, Button." Without the button name "Edit," the purpose of the button was not recognizable.

Input Instruction
Description: SC 3.3.2 requires input instruction for a given control element. We assessed if screen readers could read the TextElement when moving the focus from the text editing area to other elements.
Issues Found: Some prototyping tools provided instruction near the input area, but in an improper way.
Example: Figure 14 Input instruction "Width" and "Height" were provided as independent elements, not necessarily attached to the textbox.
Guidelines well supported included alternative text (37%), input instruction (31%), keyboard operation (9%), and
control instruction (9%).
4.4 Assessment of Accessibility Issues by Testing Task
In the final stage of assessment, we examined how well screen readers and prototyping tools could be used to create a
functioning prototype (section 3.3.2 described this procedure). We analyzed the accessibility of commonly used
operations needed to create a functional user interface. Task success or failure was determined by conformance to
accessibility guidelines via screen reader, specifically if the individual step could not be completed at all, or could only
be partially completed such that the full task was insufficiently completed.
We assessed 28 tasks with Sketch, AdobeXD and UXPin (Figure 15), and 18 tasks with Balsamiq (Figure 16).
Narrator-AdobeXD failed 75% of tasks, Narrator-UXPin failed 64%, VoiceOver-AdobeXD failed 60%, and VoiceOver-
UXPin, 60%. VoiceOver-Sketch completed 57% of the tasks, VoiceOver-Balsamiq completed 56%, and Narrator-
Balsamiq, 56%. Balsamiq was assessed along slightly different tasks because the tool had substantially different
mechanisms to complete the same tasks as the other tools (Table 4 lists separate Balsamiq tasks). We present the results
of Balsamiq separately (Figure 16).
Task Category 1: Create Elements of Prototypes
The first few tasks that comprise this category of tasks (Creating the Canvas, setting the size, and assigning a name)
were included to determine the accessibility of creating elements of the user interface (Figure 17) (Table 3 contains the
full set of tasks). Elements included shapes (e.g., rectangle and oval) and text. Placing such elements on the Canvas is
among the most basic functions used by a designer when creating their prototype. VoiceOver-Sketch and VoiceOver-
AdobeXD were partly successful, but with some challenges; e.g., after selecting the Rectangle Button, the drawing
board did not receive focus, and when creating the Rectangle element, the screen reader did not read the status and
operation of the GUI element. Thus, user instruction was not provided. Below we detail the task failures for Sketch,
AdobeXD and UXPin. Separately, we discuss Balsamiq due to a difference in procedure.
The screen reader and prototyping tools did not provide control instruction. Creating an element in the
prototyping tools requires a relatively complicated interaction, such as dragging the element’s border to change its
size, rather than discrete operations, such as pressing the space-bar to click a button. The interaction is visually
intuitive, but a control instruction is needed for the non-visual user interface to help screen reader users operate the
element. Visually-driven GUI actions could only be completed by mouse or trackpad, while some could be edited by
adjusting the parameters of the element. Ultimately, these visually-driven interactions were not operable by keyboard.
The process of creating elements did not receive focus. All the prototyping tools (Sketch, AdobeXD and UXPin)
violated accessibility guidelines relating to focus (i.e., SC 2.4.3 (Focus Order) and SC 2.4.7 (Focus Visible)). In these tasks,
elements created in the Canvas were not selectable with a keyboard, thus were invisible to screen readers. As a result,
screen readers could not relay information necessary to the process of creating an element, could not access the
necessary function for creation, and thus constituted a severe violation.
Figure 15 Task completion rate (excluding Balsamiq)
Figure 16 Balsamiq task completion rate
The process of creating elements had an improper focus order. A user usually starts creating an element by
clicking an add-shape Button (e.g., "Add a Rectangle"). After clicking the Button, the focus should move to the element
on the Canvas, but it stayed on the add-shape Button.
Lack of sensory characteristics. When manipulating an element, no screen reader-tool dyads, except VoiceOver-
UXPin, provided characteristics of the element, e.g., shape, size, and location. VoiceOver-UXPin provided slightly more
information: when selecting the element, VoiceOver read the element's shape (e.g., "Rectangle") or color (e.g.,
"#FFFFFF"). All screen reader-tool dyads violated SC 1.3.3 (Sensory Characteristics).
Key GUI elements were inoperable by keyboard in prototyping tools. In addition to Canvas-related
manipulation not being keyboard operable, Narrator-UXPin and VoiceOver-UXPin failed the first part of the first step
(Create an artboard. Set size) because a dropdown element used to select Canvas size was not operable by the keyboard.
It did not support SC 2.1.1 (Keyboard).
The key GUI element did not have an associated text to specify its type and name. A user typically begins
creating an element by clicking an add-shape Button (e.g., "Add a Rectangle"). Narrator-AdobeXD was unable to read
such a Button's name. It did not support SC 1.1.1 (Non-text Content).
Figure 17 Create elements in Canvas
In tests of Balsamiq for step 1, tasks 1.1, 3.1, and 5.1 were to create an Android Canvas, an Application Bar, and a
Button. Balsamiq had a collection of templates of common prototype elements (Figure 18) for users to assemble a
prototype quickly. The main issue was that the process of adding templates did not receive focus. Specifically, after
double-pressing the Application Bar, the chosen template did not receive focus.
Figure 18 Templates of elements in Balsamiq
Altogether, the Canvas is the most important part of prototyping tools. Every prototyping tool has a Canvas and
every element is created and its parameters (e.g., color, border, shadow) are manipulated within the Canvas. We
found tools’ Canvases were not accessible to screen readers or were inoperable by keyboard. Specifically, operations
on elements on the Canvas did not receive focus, and elements on the Canvas lacked sensory characteristics.
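One way to address the Canvas findings, sketched here with Java Swing as an assumption rather than a fix for any specific tool, is to make each canvas element a focusable component whose accessible name carries its sensory characteristics (shape, size, and position):

```java
import javax.accessibility.AccessibleContext;
import javax.accessibility.AccessibleRole;
import javax.swing.JComponent;

public class CanvasElementDemo {
    // A minimal stand-in for a shape drawn on a prototyping tool's Canvas.
    static class RectangleElement extends JComponent {
        RectangleElement(int x, int y, int width, int height) {
            setBounds(x, y, width, height);
            setFocusable(true); // keyboard focus makes the shape reachable
            // Describe shape, size, and location (SC 1.3.3) in the name.
            getAccessibleContext().setAccessibleName(String.format(
                "Rectangle, %d by %d pixels, at x %d, y %d",
                width, height, x, y));
        }

        // Plain JComponents expose no accessible context by default,
        // so provide one with an appropriate role.
        @Override
        public AccessibleContext getAccessibleContext() {
            if (accessibleContext == null) {
                accessibleContext = new AccessibleJComponent() {
                    @Override
                    public AccessibleRole getAccessibleRole() {
                        return AccessibleRole.CANVAS;
                    }
                };
            }
            return accessibleContext;
        }
    }

    public static void main(String[] args) {
        System.setProperty("java.awt.headless", "true");
        RectangleElement rect = new RectangleElement(20, 360, 100, 50);
        System.out.println(rect.getAccessibleContext().getAccessibleName());
    }
}
```

With the element focusable and its properties in the accessibility tree, a screen reader can both reach the shape and describe it, rather than leaving the Canvas a silent bitmap.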
Task Category 2: Element Movement
Tasks 3.2, 4.2, 5.2, and 5.5 comprised this category and were designed to test the accessibility of element manipulation.
Once elements are created on the prototype Canvas, users may need to move the element to different positions to
compose a group of elements, e.g., a Button is composed of a shape (Rectangle) and a Text ("Submit").
Except for VoiceOver-Sketch, all screen reader-tool dyads completed task 3.2 (placing a rectangle at the top of the
Canvas). In addition to rough positioning, screen readers could access and edit specific place values to manipulate
the X- and Y-positions or select the Align Button (e.g., Align Top). VoiceOver-Sketch failed because the text editing
area for the X and Y position GUI control elements did not have input instruction.
Element movement did not receive focus and lacked sensory characteristics. All screen reader-prototyping tool
dyads failed tasks 4.2, 5.2 and 5.5 (moving elements to a pixel-accurate position). For instance, task 5.2 is to move a
rectangle to a position where the distance between it and another rectangle is 36 pixels. The first issue was that the
element did not receive focus when selected, such that screen readers could not convey movement was happening.
Only elements in VoiceOver-UXPin received focus when moving them, and movement was described with a series of
numbers related to the change in X and Y positions that were difficult to discern. This information described only the
movement of elements and did not support SC 3.3.2 (Labels or Instructions) and SC 1.3.3 (Sensory Characteristics).
VoiceOver-Balsamiq and Narrator-Balsamiq failed for similar reasons related to insufficient information about element movement.
Task Category 3: Layer Operation
Tasks 3.3, 4.3, 5.4, 5.6, 5.7, 6.2, and 6.4 of this task category assessed common layer operations, such as renaming
elements, renaming the Canvas (page), grouping multiple elements, duplicating layers, moving layers, and changing
the layer order. Only VoiceOver-Sketch successfully completed the renaming and duplicating operation (supporting
SC 2.1.1 Keyboard operation), but the operation was inoperable by keyboard with the other dyads. VoiceOver-Balsamiq
had issues with focus on the Text Field on the Canvas (task 5.4), but otherwise successfully allowed grouping and
renaming layer operations. Narrator-Balsamiq failed the layer operation because it did not read the name of the Menu
Button that controls layer operation.
Task Category 4: Element Parameters Setting
Tasks 3.4, 3.5, and 3.6 of Step 4 assessed the accessibility of element parameter setting. Parameters of an element include
color, border, opacity, shadow, and so on. VoiceOver-Sketch failed due to the lack of input instruction, specifically,
VoiceOver did not read what element values changed. For example, when opacity is changed, it should describe the
change as "Opacity 100%," instead, the description only included "100%," thus not supporting SC 3.3.2 (Labels or
Instructions) and not indicating labels or instructions for user input. VoiceOver-UXPin and Narrator-UXPin completed
element parameter setting well without causing any accessibility issues. Although VoiceOver-AdobeXD and Narrator-
AdobeXD completed the tasks, we encountered focus order issues: after confirming the change of value by keyboard,
the focus moved to random places and did not stay in the value editing field. With Narrator-Balsamiq and VoiceOver-
Balsamiq, we were unable to change the color using the keyboard because there was no name description given (i.e.,
the name of the color button).
Task Category 5: Link Elements
Tasks 7.1 and 7.2 within this category included linking elements with elements. For example, in task 7.1, users link a
Button with another screen and then assign interaction to this Button so that when a user presses the Button, it triggers
an event that changes to the linked screen. Step 5 is essential to adding interaction to the prototype. VoiceOver-Sketch
was the only dyad to complete Step 5 tasks, although it had a focus order issue: after selecting "Add new link to layer,"
the focus should move to the expanded section, but it moved to "Width Value" (Figure 19). Thus, it did not support SC
2.4.7 (Focus Visible).
Figure 19 Focus order in Sketch
Other dyads failed the tasks in Step 5 because GUI elements were inoperable by keyboard: in AdobeXD, the "Add
New Interaction" Button was not accessible by keyboard; the Dropdown Menu of the Link section in UXPin was
inoperable by keyboard. Though these specific issues violate SC 2.1.1 (Keyboard), they also affected the other tasks
such that Step 5 failed. Similarly, with VoiceOver-Balsamiq and Narrator-Balsamiq, after selecting the element to add
a link, an incorrect focus order did not allow the Combo Box with target link to be manipulated.
Task Category 6: Export Prototyping
Tasks 8.1 and 8.2 comprise this category, assessing the accessibility of the prototypes created. All screen reader-
prototyping dyads successfully exported prototypes (Task 8.1). Screen reader-prototype dyads were unable to operate
the exported prototype by keyboard (Task 8.2), rendering the content unrecognizable by the screen readers. Narrator-
UXPin and VoiceOver-UXPin had limited operability to navigate exported prototypes, such as only accessing text elements.
In all, there was no perfect combination of screen reader-prototype dyads. In addition, we observed slight
differences in screen readers. Specifically, VoiceOver-AdobeXD and VoiceOver-UXPin had better accessibility than
Narrator-AdobeXD and Narrator-UXPin. VoiceOver-AdobeXD had higher task completion rates and lower IAER
scores. Meanwhile, Narrator-Balsamiq assessed better than VoiceOver-Balsamiq.
4.5 Assessment Notes: Unique Observations of Screen Reader and Tool Features
We note that a function of VoiceOver, called rotor, accelerated navigation across the hundreds of GUI elements
included in the prototyping tools. The rotor lists common elements like “headings,” “forms,” and “content chooser,”
and lets users navigate directly to the GUI element of choice (Figure 20). For example, after a user changes the color of
"Button 1," they might want to change the color of "Button 2" as well. A user needs to select "Button 2" first, which
requires moving focus from color changing area to layer area. This series of actions could take dozens of steps to
complete and be very tedious. However, with the assistance of rotor, the user can quickly locate "Button 2" in a few
steps. Narrator did not have such a feature.
Figure 20 Rotor of VoiceOver
Finally, Balsamiq operated differently, employing streamlined operations and resulting in fewer functions that
required screen reader access. For example, it might take seven steps to create a Button in other tools where each step
had potential to be inoperable by screen readers; by contrast, the same goal may take only one step in Balsamiq,
subsequently requiring only one screen reader operation. This difference ultimately made Balsamiq slightly more
accessible to use with screen readers than the other tools.
5 DISCUSSION
Notwithstanding small variations across screen readers (VoiceOver generally performing better) and prototyping tools
(Balsamiq’s task efficiency as a benefit), our analysis showed that prototyping tools are not effectively accessible to
screen readers, and thus would be difficult to access by a person with a vision impairment. The problems we identified
were severe enough to negatively impact accessibility, but even partially accessible functionality could precipitate
usability issues.
5.1 Compatibility, Usability, and Accessibility
Our analysis of accessibility issues between screen readers and desktop-based prototyping tools found some minimal
accessibility (e.g., 63% of GUI elements in AdobeXD and 60% of those in Sketch were accessible with VoiceOver), but this
may not be enough to allow screen reader users to effectively use the tools (e.g., moving elements on the Canvas was
inaccessible with screen readers). The inability to use screen readers to independently operate and navigate
prototyping tools propagated through to other aspects of accessibility and, we argue, would ultimately affect
usability. For example, without alternative text, a Button element was not accessible to screen readers, limiting access to
the Button’s functions.
Although VoiceOver-Sketch had the highest task completion rate and the lowest IAER score, in a practical sense,
Balsamiq was a more accessible choice with more efficient ways to execute tasks and overall better screen reader access
across all functions. Whereas other tools required accurate operations, Balsamiq provided hundreds of templates,
reducing tasks to fewer steps and fewer operations that required screen reader access. Overall, however, screen
readers were unable to accurately execute operations required of most prototyping tools. Ideally, screen readers could
access specific information about elements and their operational instructions [5,20,24]. Instead, elements in most of
the tools did not receive focus and were inoperable by the keyboard, resulting in operations that were invisible to
screen readers. Further, screen readers and prototyping tools did not provide detailed information, such as an element’s
X and Y positions, even if the element received focus, thus preventing element manipulation.
5.2 Recommendations to Improve Screen Reader and Prototyping Tool Accessibility
Although it can be expensive to improve conformance with accessibility guidelines, such as for screen reader
accessibility, recommendations based on our results can help narrow the focus of the accessibility issues for non-web
software. More specifically, accessibility issues that we uncovered could have downstream effects that—when
embedded in prototypes and designed solutions—are woven through subsequent technology development.
The first recommendation is to improve the basic accessibility of GUI elements. Overall, 55% of the GUI elements
had accessibility issues. Fixing basic GUI elements will improve how well screen readers access the tools. We suggest
prioritizing elements (Button, Image, Group, Link, TextField, and Canvas) and accessibility recommendations for:
lack of input instruction, missing alternative text, lack of control instruction, improper focus order, lack of sensory
characteristics, and unavailable keyboard operation.
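For the keyboard-operation recommendation, a hedged Swing sketch (our own illustration; the "delete selected shape" action is a hypothetical canvas operation) shows how a canvas component can bind discrete keyboard operations so functionality is not mouse-only (SC 2.1.1):

```java
import java.awt.event.ActionEvent;
import javax.swing.AbstractAction;
import javax.swing.Action;
import javax.swing.JComponent;
import javax.swing.JPanel;
import javax.swing.KeyStroke;

public class KeyboardCanvasDemo {
    public static void main(String[] args) {
        System.setProperty("java.awt.headless", "true");

        // A stand-in for a prototyping tool's Canvas.
        JPanel canvas = new JPanel();
        canvas.setFocusable(true);

        // Bind the Delete key to a discrete action, so the operation does not
        // depend on a mouse-only interaction such as dragging.
        Action deleteShape = new AbstractAction("deleteSelectedShape") {
            @Override
            public void actionPerformed(ActionEvent e) {
                System.out.println("Deleted selected shape");
            }
        };
        canvas.getInputMap(JComponent.WHEN_FOCUSED)
              .put(KeyStroke.getKeyStroke("DELETE"), "deleteSelectedShape");
        canvas.getActionMap().put("deleteSelectedShape", deleteShape);

        // Invoke the bound action directly; in a running UI, pressing Delete
        // with the canvas focused would trigger the same action.
        canvas.getActionMap().get("deleteSelectedShape")
              .actionPerformed(new ActionEvent(canvas, ActionEvent.ACTION_PERFORMED, "delete"));
    }
}
```

Because each bound action is a named, discrete operation, a screen reader user can trigger it from the keyboard without reproducing a visually driven drag gesture.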
The impact of inaccessible prototyping tools not only limits vision impaired designers, but also produces
inaccessible prototypes. Thus, the second recommendation is to make the prototype output of prototyping tools
accessible to screen readers. None of the prototyping tools exported prototypes that worked with screen readers.
Inaccessible prototypes may limit how designers can conduct usability testing with users who are blind and vision-
impaired. From this perspective, we may infer that inaccessible prototypes preclude accessible design.
The long tail of impact of inaccessible prototyping tools and their inaccessible prototypes bears a grim assessment
of the landscape of accessible design and how designers are supported, whether they are sighted or vision impaired, to
create accessible technology.
6 LIMITATIONS AND FUTURE WORK
Our testing was conducted by a sighted person using a screen reader. Even though the researcher made an effort to
learn to use screen readers and uncovered many accessibility issues, some may not have been identified due to lack of
skill. Our findings could not inform usable accessibility as it relates to how one navigates tools and works around
functional incompatibilities. Understanding technical accessibility likely correlates with usable accessibility toward
supporting end-to-end prototyping tasks, including downstream effects associated with inaccessible prototypes.
Therefore, it is important that follow-up studies engage blind and low vision screen reader users in evaluating such
tools. Our contribution of data on technical accessibility is useful in informing design of future studies that investigate
more nuanced research questions on usable accessibility with the involvement of blind and low vision individuals.
We centered our evaluation tasks around the design of a basic prototype rather than a complex one. We considered
a minimum viable prototype as a necessary first step toward understanding the accessibility of the fundamental
features of prototyping tools. Given our findings, we anticipate that more complex functions will be inaccessible,
considering the basic functions were unsatisfactory.
We limited our evaluation to VoiceOver and Narrator. NVDA was excluded because it did not perform well
with the desktop tools we assessed. We opted to exclude JAWS due to cost; however, we acknowledge that JAWS is a
very popular screen reader and worth further testing in future work.
7 CONCLUSION
This study showed that though some basic elements were accessible via screen reader, on the whole, prototyping tools
were not accessible via screen readers. Specifically, though some tools’ GUI elements were basically accessible to screen
readers, key functions and manipulations were not. We suggest that accessibility issues should be prioritized for
specific elements: Button, Image, Group, Link and TextField, Canvas, and related issues: lack of input instruction,
missing alternative text, lack of control instruction, improper focus order, lack of sensory characteristics and
unavailable keyboard operation. Widespread accessibility issues were found in commonly used functions. Since
prototyping tools require many accurate operations, they should add accessibility features to improve compatibility
with screen reader users. Our findings contribute a body of empirical data that shows what changes should be made
to improve accessibility for high fidelity prototyping tools with screen readers. Improving the accessibility of these
design tools also improves opportunities for blind and low vision designers to engage in UI/UX design work.
ACKNOWLEDGMENTS
Many thanks to the reviewers who provided valuable feedback to improve our work.
 Chadia Abras, Diane Maloney-Krichmar, and Jenny Preece. 2004. User-centered design.
Bainbridge, W. Encyclopedia of Human-Computer
Interaction. Thousand Oaks: Sage Publications
37, 4: 445–456.
 Dustin Adams, Sri Kurniawan, Cynthia Herrera, Veronica Kang, and Natalie Friedman. 2016. Blind Photographers and VizSnap: A Long-Term
Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility
, 201–208. Retrieved from
 Cynthia L. Bennett, Jane E, Martez E. Mott, Edward Cutrell, and Meredith Ringe l Morris. 2018. How Teens with Visual Impairments Take,
Edit, and Share Photos on Social Media.
Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
, Paper 76.
Retrieved from https://doi-org.ezproxy.rit.edu/10.1145/3173574.3173650
 Cynthia L. Bennett, Kristen Shinohara, Brianna Blaser, Andrew Davidson, and Kat M. Steele. 2016. Using a Design Workshop To Explore
Accessible Ideation. In
Proceedings of the ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’16)
 Yevgen Borodin, Jeffrey P. Bigham, Glenn Dausch, and I. V. Ramakrishnan. 2010. More than meets the eye: a survey of screen-reader
Proceedings of the 2010 International Cross Disciplinary Conference on Web Accessibility (W4A)
, Article 13. Retrieved
 Robin N. Brewer. 2018. Facilitating discussion and shared meaning: Rethinking co-design sessions with people with vision impairments. In
Proceedings of the 12th EAI International Conference on Pervasive Computing Technologies for Healthcare
 William Buxton. 2007.
Sketching user experiences: getting the design right and the right design
. Elsevier/Morgan Kaufmann, Amsterdam;
 AnneMarie Cooke. A History of Accessibility at IBM.
. Retrieved June 22, 2020 from https://www.afb.org/aw/5/2/14760
 Jan Derboven, Dries De Roeck, Mathijs Verstraete, David Geerts, Jan Schneider-Barnes, and Kris Luyten. 2010. Comparing user interaction
with low and high fidelity prototypes of tabletop surfaces.
Proceedings of the 6th Nordic Conference on Human-Compu ter Interaction:
, 148–157. Retrieved from https://doi.org/10.1145/1868914.1868935
 Priscilla Frank. 2016. 12 Blind And Partially Blind Photographers Changing The Way We See The World.
. Retrieved September 12,
2020 from https://www.huffpost.com/entry/blind-photographer-book_n_57d71a8ee4b0fbd4b7baf722?guccounter=1
 Batya Friedman, Peter Kahn, and Alan Borning. 2006.
Value Sensitive Design and Information Systems
. M.E. Sharpe, New York.
 Stephanie Hackett, Bambang Parmanto, and Xiaoming Zeng. 2003. Accessibility of Internet websites through time.
Proceedings of the 6th
international ACM SIGACCESS conference on Com puters and accessibility
, 32–39. Retrieved from https://doi.org/10.1145/1028630.1028638
Vicki L. Hanson and John T. Richards. 2013. Progress on Website Accessibility? ACM Trans. Web 7, Article 2.
 Shawn Lawton Henry (ed.). 2019. Web Content Accessibility Guidelines (WCAG). Retrieved June 22, 2020 from
Mary Hiland. Arts and Crafts After Vision Loss. VisionAware: For Independent Living with Vision Loss. Retrieved September 12, 2020 from
Shaun Kane. 2011. Usable Gestures for Blind People: Understanding Preference and Performance.
Shaun K. Kane, Jeffrey P. Bigham, and Jacob O. Wobbrock. 2008. Slide rule: making mobile touch screens accessible to blind people using multi-touch interaction techniques. In Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility.
Peter Korn, Loïc Martínez Normand, Mike Pluke, Andi Snow-Weaver, and Gregg Vanderheiden. 2013. Guidance on Applying WCAG 2.0 to Non-Web Information and Communications Technologies (WCAG2ICT). Retrieved June 22, 2020 from https://www.w3.org/TR/wcag2ict/
Chris Law, Julie Jacko, and Paula Edwards. 2005. Programmer-focused website accessibility evaluations. In Proceedings of the 7th International ACM SIGACCESS Conference on Computers and Accessibility, 20–27. https://doi.org/10.1145/1090785.1090792
Jonathan Lazar, Aaron Allen, Jason Kleinman, and Chris Malarkey. 2007. What Frustrates Screen Reader Users on the Web: A Study of 100 Blind Users. International Journal of Human–Computer Interaction 22, 3: 247–269. https://doi.org/10.1080/10447310709336964
Barbara Leporini, Maria Claudia Buzzi, and Marina Buzzi. 2012. Interacting with mobile devices via VoiceOver: usability and accessibility issues. In OzCHI ’12: Proceedings of the 24th Australian Computer-Human Interaction Conference.
Youn-Kyung Lim, Erik Stolterman, and Josh Tenenberg. 2008. The anatomy of prototypes: Prototypes as filters, prototypes as manifestations of design ideas. ACM Trans. Comput.-Hum. Interact. 15, 2: 1–27.
Michael McCurdy, Christopher Connors, Guy Pyrzak, Bob Kanefsky, and Alonso Vera. 2006. Breaking the fidelity barrier: an examination of our current characterization of prototypes and an example of a mixed-fidelity success. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1233–1242. https://doi.org/10.1145/1124772.1124959
Meredith Ringel Morris, Jazette Johnson, Cynthia L. Bennett, and Edward Cutrell. 2018. Rich Representations of Visual Content for Screen Reader Users. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), 1–11.
Alan Newell, P. Gregor, M. Morgan, Graham Pullin, and C. Macaulay. 2011. User-Sensitive Inclusive Design. Universal Access in the Information Society 10, 3: 235–243.
Jakob Nielsen. 1990. Paper versus computer implementations as mockup scenarios for heuristic evaluation. In Proceedings of the IFIP TC13 Third International Conference on Human-Computer Interaction.
Natalia Pérez Liebergesell, Peter-Willem Vermeersch, and Ann Heylighen. 2018. Designing from a Disabled Body: The Case of Architect Marta Bordas Eddy. Multimodal Technologies and Interaction 2, 1. https://doi.org/10.3390/mti2010004
Helen Petrie and Omar Kheir. 2007. The relationship between accessibility and usability of websites. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Venkatesh Potluri, Tadashi Grindeland, Jon E. Froehlich, and Jennifer Mankoff. 2019. AI Assisted UI Design for Blind and Low Vision. In ASSETS 2019 Workshop Proceedings: AI Fairness for People with Disabilities.
Venkatesh Potluri, Liang He, Christine Chen, Jon E. Froehlich, and Jennifer Mankoff. 2019. A Multi-Modal Approach for Blind and Visually Impaired Developers to Edit Webpage Designs. In The 21st International ACM SIGACCESS Conference on Computers and Accessibility, 614. https://doi.org/10.1145/3308561.3354626
André Rodrigues, Kyle Montague, Hugo Nicolau, and Tiago Guerreiro. 2015. Getting Smartphones to Talkback: Understanding the Smartphone Adoption Process of Blind Users. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility, 23–32. https://doi.org/10.1145/2700648.2809842
Anne Spencer Ross, Xiaoyi Zhang, James Fogarty, and Jacob O. Wobbrock. 2018. Examining Image-Based Button Labeling for Accessibility in Android Apps through Large-Scale Analysis. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility.
Anne Spencer Ross, Xiaoyi Zhang, James Fogarty, and Jacob O. Wobbrock. 2020. An Epidemiology-inspired Large-scale Analysis of Android App Accessibility Failures. ACM Trans. Access. Comput. 13, Article 4.
J. Rudd, K. Stern, and S. Isensee. 1996. Low vs. high-fidelity prototyping debate. Interactions 3, 1: 76–85.
Kristen Shinohara, Cynthia L. Bennett, and Jacob O. Wobbrock. 2016. How Designing for People With and Without Disabilities Shapes Student Design Thinking. In Proceedings of the ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’16).
Kristen Shinohara, Nayeri Jacobo, Wanda Pratt, and Jacob O. Wobbrock. 2020. Design for Social Accessibility Method Cards: Engaging Users and Reflecting on Social Scenarios for Accessible Design. ACM Trans. Access. Comput. 12, 4: Article 17.
Kristen Shinohara, Jacob O. Wobbrock, and Wanda Pratt. 2018. Incorporating Social Factors in Accessible Design. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility.
Ben Shneiderman. 2000. Universal usability. Communications of the ACM 43, 5: 84–91.
Ben Shneiderman and Catherine Plaisant. 2004. Designing the User Interface: Strategies for Effective Human-Computer Interaction. Pearson/Addison Wesley, Boston.
Derek Torsani. 2018. Being a Color Blind Designer. Retrieved September 12, 2020 from https://medium.com/@dmtors/being-a-color-
Robert A. Virzi, Jeffrey L. Sokolov, and Demetrios Karis. 1996. Usability problem identification using both low- and high-fidelity prototypes. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 236–243.
WebAIM. Screen Reader User Survey #8 Results. Retrieved June 17, 2020 from https://webaim.org/projects/screenreadersurvey8/
Jacob O. Wobbrock, Krzysztof Z. Gajos, Shaun K. Kane, and Gregg C. Vanderheiden. 2018. Ability-based design. Communications of the ACM 61, 6: 62–71.
Shunguo Yan and P. G. Ramachandran. 2019. The Current Status of Accessibility in Mobile Apps. ACM Transactions on Accessible Computing (TACCESS) 12, Article 3.
 2017. Adobe Accessibility Conformance Report: Adobe Photoshop CC. Retrieved June 22, 2020 from
2020. IBM Accessibility Checklist, version 7.1. Retrieved July 22, 2020 from
 Section 508. Retrieved from https://www.section508.gov/
Data collected for GUI Control Elements Testing, as described in Section 3.3.1, included: GUI element name, widget category, Success Criterion mapping, accessibility guideline mapping, and an explanation, as shown in the example in Table 8.
Table 8 Example Data Collection (GUI Element)
VoiceOver didn't read the composite UI element’s
Data collected for Task Testing: Assessing Critical Workflows, as described in Section 3.3.2, included: step, task name, task completion, defect description, conformance level, and accessibility guideline mapping, as shown in the example in Table 9.
Table 9 Example Data Collection (Workflow)
Create a Button
Creating a shape is
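For readers who wish to record a similar audit, the per-row schemas behind Tables 8 and 9 can be sketched as typed records. This is a hypothetical sketch: the class names, field names, and sample values below are our own and are not drawn from the study's data.

```python
from dataclasses import dataclass

@dataclass
class GUIElementRecord:
    """One row of GUI Control Elements Testing (cf. Table 8)."""
    element_name: str        # GUI element name
    widget_category: str     # widget category
    success_criterion: str   # Success Criterion mapping
    guideline: str           # accessibility guideline mapping
    explanation: str         # explanation of the observed behavior

@dataclass
class WorkflowTaskRecord:
    """One row of Task Testing: Assessing Critical Workflows (cf. Table 9)."""
    step: int                # step number within the workflow
    task_name: str           # e.g. "Create a Button"
    task_completed: bool     # task completion
    defect_description: str  # defect description
    conformance_level: str   # conformance level
    guideline: str           # accessibility guideline mapping

# Illustrative rows (values invented for this sketch)
gui_row = GUIElementRecord(
    element_name="Layers panel",
    widget_category="composite",
    success_criterion="1.1.1 Non-text Content",
    guideline="Provide text alternatives",
    explanation="Screen reader did not announce the element's name",
)
task_row = WorkflowTaskRecord(
    step=1,
    task_name="Create a Button",
    task_completed=False,
    defect_description="Shape tool could not be reached by keyboard",
    conformance_level="does not support",
    guideline="Keyboard accessible",
)
```

Typed records like these make it straightforward to aggregate results (e.g. counting rows per conformance level) when comparing tools and screen readers.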