Why Don’t People Read the Manual?
David G. Novick
Department of Computer Science
The University of Texas at El Paso
El Paso, TX 79968-0518
+1 915-747-5725
novick@utep.edu
Karen Ward
School of Engineering
The University of Portland
Portland, OR 97203-5798
+1 503-943-7436
Karen.Ward@acm.org
ABSTRACT
Few users of computer applications seek help from the
documentation. This paper reports the results of an empirical
study of why this is so and examines how, in real work, users
solve their usability problems. Based on in-depth interviews with
25 subjects representing a varied cross-section of users, we find
that users do avoid using both paper and online help systems. Few
users have paper manuals for the most heavily used applications,
but none complained about their lack. Online help is more likely
to be consulted than paper manuals, but users are equally likely to
report that they solve their problem by asking a colleague or
experimenting on their own. Users cite difficulties in navigating
the help systems, particularly difficulties in finding useful search
terms, and disappointment in the level of explanation found.
Categories and Subject Descriptors
H.5.2 [Information Interfaces and Presentation]: User
Interfaces – Evaluation/methodology, training, help, and
documentation.
General Terms
Documentation, Human Factors, Measurement
Keywords
Usability, problem-solving, manuals, online help
1. INTRODUCTION
The annual ACM SIGDOC conference is but one expression of the
enormous effort that systems developers—large and small alike—
put into providing documentation for their users. SIGDOC has 271
members, and the Society for Technical Communication has over
15,000 members. Many tens of thousands more people staff help
desks around the world. The cost of providing documentation and
help resources runs into the billions of dollars. The related costs of
avoiding the need for help by improving usability run to billions
more. The extent of this effort reflects both a perceived need of
users for help in solving the usability problems they encounter in
using computing systems and a commitment from developers to
help users overcome these problems. Because computing experts
know that the answers to most users’ questions are already
expressed in an application’s documentation, they sometimes
sarcastically tell users to “read the #$%* manual.” Yet users
famously shun the very documentation that would help them. Why?
How then do users actually solve the problems they encounter? Do
they, in fact, solve the problems? Or are users not achieving what
they set out to do?
To understand why people don’t read the manual or use online help,
we began to study how people who use computers in their work
actually encounter and solve usability problems. The study builds
on previous research on usability and documentation, extending to a
broader cross-section of knowledge workers and going much deeper
into the users’ usability problems and their solutions. In particular,
this paper explores these questions:
- In real work, how do users solve (or not solve) their usability problems?
- Do users avoid manuals and help systems? If so, why?
In answering these questions, we review related research,
particularly with respect to the causes and measurement of
frustration of users of computing systems; explain the study’s
methodology, including a characterization of the participants, a
description of the application domain and the task set, and a
presentation of the experimental design; present the study’s results;
and briefly discuss limitations and future work.
2. RELATED WORK
Research into the relationship between usability and documentation
grew out of research into understanding the nature of usability
problems. To set the foundations for study of the use (or non-use) of
documentation in responding to usability problems, we review how
recent research has explored the causes of and responses to usability
problems, and review the use of documentation in solving usability
problems.
2.1 Responses to Usability Problems
This study, focusing on use of documentation, developed out of
prior research into the usability problems that documentation was
intended to alleviate. The prior research showed repeatedly that
usability problems cause frustration in users. Early studies examined
users’ responses to these problems in terms of attitudinal or
emotional effects. More recent studies have looked at users’
responses in terms of the kinds of actions the users take to solve the
usability problems they encounter.
Usability problems tend to induce frustration in users. This has been
shown in tasks performed by college students and in tasks
associated with using commercial Web sites. Bessiere et al. [2]
obtained surveys from 108 college students who had worked on a
task for at least one hour. The subjects were primarily browsing the
Web, processing e-mail, and word-processing. The usability
problems they encountered resulted in high levels of frustration and
large loss of time. Hazlett [7], too, looked at the nature of users’
frustrations. In this study, users performed typical usability-testing
tasks based on commercial Web sites, and their emotional responses
were measured. The tasks were not intended to be representative of
tasks in working life. These studies demonstrated users’ attitudinal
and emotional responses to usability problems, but they did not look
at the ways in which users attempted to solve the problems that were
causing them to be frustrated. For example, Hazlett’s [7] study was
not designed to assess users’ solutions to usability problems, nor
were users provided with documentation because the tasks were
limited to browsing commercial Web sites.
Beyond attitudinal and emotional responses, users’ attempted
solutions to usability problems were assessed by Ceaparu et al. [6].
In this study, 59 college students spent an hour on a computer and
then reported their frustrating experiences. The participants were not
given a specific task but were asked to carry out tasks they did every
day. The study examined the frequency, cause and severity of
frustrating experiences, and time lost due to frustrating experiences.
The users’ solutions (or non-solutions) were compiled and
categorized. The study’s authors identified ten categories of
solutions, including these three of particular interest to researchers
in the field of design of communication:
- I consulted online help
- I asked someone for help
- I consulted a manual or a book
In many instances, subjects encountering difficulties ignored the
problem or rebooted.
While Ceaparu et al. were able to classify users’ actions in response
to frustrating usability problems, their results had experimental
design issues in common with the earlier studies. In particular, their
subjects were college students and their data reflected a “snapshot”
of their subjects’ use of computers rather than tracking changes in
use over time. Mendoza and Novick [9] addressed these issues by
studying the experiences over eight weeks of middle school teachers
as they learned a new application. This study found that the nature
of the subjects’ usability problems, the levels of their frustration,
and the kinds of actions they took to solve the problems all changed
over time. Even aggregating users’ actions over time, the
distribution of the actions of the middle school teachers differed
greatly from those of the college students. The teachers tended to
seek help from a colleague. In 12 percent of the episodes, the
teachers ignored the problem, abandoned the task, or rebooted the
computer.
In short, the state of the field is that we know that users routinely
encounter frustrating usability problems with computer applications,
that they try to solve these problems using a variety of sources and
techniques (the college students tended to figure things out without
help, and the teachers usually asked a colleague for help), and that
in some cases users simply give up.
2.2 Using Documentation to Solve Usability
Problems
One pattern of responses was consistent across the studies
conducted by Ceaparu et al. [6] and Mendoza and Novick [9]: users
rarely used documentation, either printed or online. Ceaparu et al.
reported that only about 4 percent of the usability-problem episodes
experienced by college students were resolved by using online help
and only 1 percent by consulting a manual or book. Similarly,
Mendoza and Novick found that middle school teachers solved
usability problems about 3 percent of the time with online
documentation and 0 percent with manuals. Neither study, though,
looked at why the subjects did not use the documentation developed
specifically to help them with their computing problems.
The reality of users’ experience is that they can find themselves
overwhelmed by the profusion of functions offered by typical
workplace applications. Baecker et al. [1] studied 53 users of
Microsoft Word and found that 27.5 percent of these users were
“overwhelmed by how much stuff there is,” that 58.5 percent had “a
hard time finding the functions I need unless I use them regularly,”
and that for 62.3 percent, “[w]ading through unfamiliar functions
can often be annoying/frustrating.” Most users want the application
to provide a rich set of functions [1], though, so the problem for
developers of software and its associated documentation is to
provide usable guidance for users who want high levels of
functionality but who are perplexed by it.
The reluctance of users to consult documentation is part of the
lore of the field of technical communication [see, e.g., 10]. This
assumption was challenged by Smart et al. [11, 12, 13],
who reported that more than 99 percent of 400 users responding
to a six-minute telephone survey used print or online
documentation. However, this finding involved a number of
factors that tend to reduce its impact. First, only 17 percent of
those surveyed used documentation in any form more than once
per week. Second, the users, selected from lists of people who had
purchased a particular word-processing program, were
overwhelmingly novice users of that application; only 17 percent
had used the application for more than six months. Third, the
survey did not distinguish between business and non-business use;
rates of and motivation for use of the application are likely to
differ as a function of the context of use. Fourth, the survey
looked at one particular application rather than at the totality of
use of the various applications that characterize typical use
patterns. And fifth, patterns of availability and use of
documentation appear to have changed significantly since the
survey was conducted and first reported in 1995. As we observe
later in this paper, many popular business applications no longer
come with printed manuals. And more recent studies [e.g., 6, 9]
indicate much lower levels of use of both printed and online
documentation.
Smart et al. [12, 13] also complemented their telephone survey
with interviews of 18 subjects, using Contextual Inquiry
methodology [3]. From their data, they developed a consolidated
sequence model showing users’ overall strategies for solving
problems encountered in an application. They found that users
typically do not turn to documentation as their preferred means of
solving problems with software. In the model, users first look for
information within the application’s own user interface, then
sought help from a colleague, and then went to the
documentation. The data and the model suggest that users actually
do avoid turning to the documentation if they can otherwise avoid
it. The findings of Ceaparu et al. [6] and Mendoza and Novick [9]
can be seen as consistent with those of Smart et al. in the sense
that virtually all users turn to documentation at some point. But
the three studies are also consistent in their implication that most
users prefer not to do so.
As part of their research, Smart et al. [12, 13] recorded users’
attitudes toward both printed and online documentation. Although
Smart et al. did not provide quantitative summary analyses, it
appears that their subjects’ attitudes were generally positive toward
printed documentation and more negative toward online
documentation, largely because of usability problems with the
online documentation.
Users’ reluctance to use documentation should have been alleviated
by the movement toward minimal manuals, as advanced by John
Carroll [4]. In some ways, the minimal-manual movement has
largely won the day. Documentation now often takes the form of
embedded help, and user-centered design reduces the need for even
that help. Applications now frequently come with a start-up card
rather than a full manual. Yet [cf., 6, 9] many applications have, in
effect, migrated their huge printed manual to a huge online manual,
and users continue to have frequent frustrating experiences with
computer applications. It may be true that the users find their
minimal manuals less frustrating than earlier documentation, but the
evidence appears to be that users now rarely use any documentation,
minimal or otherwise. The empirical foundations of minimal
manuals involved well-conducted protocol studies of actual
computer use. But these studies tended to focus on particular
applications rather than the users’ holistic work environment, and
more recent literature associated with the minimal-manual
movement [e.g., 5] primarily provided techniques for minimizing
manuals rather than empirical insights into the reasons for users’
shunning the big manual or avoiding the use of documentation
generally. Consequently, questions of the reasons underlying users’
preferences and behaviors remain open and salient. Beyond college
students and middle school teachers, what usability problems and
solutions characterize work life more generally? Is it true that
people really do not use documentation? If so, why? Have attitudes
toward print and online documentation changed?
3. METHODOLOGY
To study workers’ use of and attitudes toward documentation of
computer applications, we conducted a series of 25 interviews
over three months. As this study is exploratory, we sought deeper
interaction with a smaller number of subjects rather than broad
but shallow information from a large number of subjects. In this
section, we describe the participants, their work lives, and the
design of the study.
3.1 Participants
For this study, we recruited a varied cross-section of people who use
computers in their work lives. Initial participants included
acquaintances of the authors, and subsequent participants were
identified by asking each participant to recommend others. The
participants comprised 8 men and 17 women. Their average age was
44 years. In terms of education, two participants had some college,
ten had a bachelor’s degree, nine had a master’s degree, and four
had a Ph.D. Sixteen of the participants lived in El Paso, TX, and
nine lived in the metropolitan area of Portland, OR.
Of the 25 participants, 22 used Microsoft Windows as their
principal operating system at work, while two used OS X and one
used Unix.
3.2 Participants’ Work Lives
We sought participants from as wide a variety of work experiences
as possible; the only constant was that they routinely used
computers in their job. The participants included, among others,
business owners, a white-collar worker at an auto-supply company,
a foundation director, a restaurateur, and a musician. For breadth,
we included one college student and three college professors.
Among the 25 participants, we included four who could be
described as more technically sophisticated with respect to
computing and information systems; as the study was exploratory,
we planned to look for possible differences in the use of
documentation as a function of users’ relative technical
sophistication. In all, the participants can be seen as being
distributed within the six categories listed in Table 1.
Participant Occupation Category         Number
Management in education/non-profit           8
Professional/technical                       7
Human resources or academic advising         4
Business owner                               3
Administrative assistant                     2
Student                                      1
Table 1. Distribution of Occupations.
3.3 Interview Design
We used a straightforward interview approach in which we asked
participants about their experiences and attitudes toward the use of
various types of documentation. While interviews are often
inaccurate with respect to behavior, they offer the insight we seek
into the attitudes and perceptions that motivate users’ actions. Also,
this approach allowed us to explore a broad range of experiences
across many applications, something that would be difficult to
accomplish in a designed experiment. Our approach contrasts with
the participative evaluation [7] methodology used in [9], because we
wanted to be able to go more deeply into the participant’s usability
problems and solutions than would be permitted in surveys or self-
reports. This study was primarily exploratory, and so we sought the
flexibility to follow promising lines of inquiry when interacting with
the participants. There was no control group, as there was no
experimental manipulation.
Fifteen of the interviews were conducted in person at the
participant’s place of employment. The remaining ten interviews
were conducted by telephone.
The interviewers followed an outline-form interview guide, seeking
additional examples or going deeper into problems and solutions
where possible. The interviews covered the participants’ principal
software applications, their self-assessed proficiency with these
applications, problems they had encountered with the applications,
their self-assessed frustration with these problems, whether and how
they solved the problem, their self-assessed overall distribution of
problem-solving methods, when (if ever) they last used printed and
online documentation, the words they associated with printed and
online documentation, and the characteristics of good and bad
printed and online documentation. For the distribution of solution
methods, the participants were asked to distribute 100 percent across
five categories of solutions adapted from [6] and [9], as illustrated
in the brief sketch following the list:
- Asked someone else
- Used online documentation
- Used a printed manual
- Figured it out without documentation
- Gave up
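To make this aggregation concrete, the following minimal sketch (in Python) shows how such self-reported distributions might be recorded and summarized into the range, mean, standard deviation, and median later reported in Table 4. The sample rows, names, and code are illustrative assumptions, not the study’s actual data or analysis pipeline.

# Minimal sketch (illustrative only): aggregating self-reported
# distributions of solution methods like those summarized in Table 4.
# The two sample rows are hypothetical, not the study's data.
import statistics

METHODS = ["asked someone else", "used online documentation",
           "used a printed manual", "figured it out without documentation",
           "gave up"]

# One dict per participant; each allocates 100 percent across the methods.
reports = [
    dict(zip(METHODS, [40, 30, 0, 25, 5])),  # hypothetical participant 1
    dict(zip(METHODS, [10, 50, 5, 30, 5])),  # hypothetical participant 2
]

for row in reports:
    assert sum(row.values()) == 100  # each report must total 100 percent

for method in METHODS:
    values = [row[method] for row in reports]
    print(method, max(values),
          round(statistics.mean(values), 2),
          round(statistics.stdev(values), 2),
          statistics.median(values))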
The interviews concluded with a request for any other comments of
the participants with respect to the topics covered and a request for
names and contact information for further participants. The
interviews were typically completed in about 40 minutes. The
interviewers entered notes as the interview progressed, using the
interview guide as a template. The full set of interviews was then
compiled and analyzed.
4. RESULTS
We now turn to the results of the study, which include both
quantitative and qualitative aspects. The quantitative results reflect
participants’ reported patterns of use of applications and associated
documentation. The qualitative results reflect users’ attitudes toward
documentation and the reasons they gave for preferring to solve
usability problems in different ways.
4.1 Quantitative Results
Our quantitative analyses included: demographic information about
the participants; self-assessed proficiency levels, usability problem
episodes, and frustration levels associated with the applications;
average distributions of solution methods; length of time since last
use of documentation, categories of words indicating attitudes
toward printed and online documentation; and correlation of
proficiency and frustration.
The participants indicated that the applications they used most
frequently were applications from Microsoft Office. Of the 25
participants, 22 frequently used Word, 18 frequently used Excel, 12
frequently used Outlook, and 10 used PowerPoint. In addition, 10
participants indicated that they frequently used a database
application, 3 used a browser, and 14 used a variety of other
applications, including specialized commercial software. The low
numbers for Outlook and, especially, for browsers suggest that
users may no longer view Web browsers and e-mail environments
as “applications.” We note that the relatively low number of
problems with browsers suggests that for many users browsers are
highly usable.
Application   Proficiency   Episodes   Mean Frustration
Word               3.64         21          3.23
Excel              3.33         17          3.15
Outlook            3.29         11          3.44
PowerPoint         3.20          9          4.00
Databases          3.00          5          3.33
Browsers           3.33          3          2.00
Other              3.25         13          3.58
Table 2. Users’ most-frequently-used applications, mean self-reported proficiency, number of problem episodes reported, and mean frustration levels.
To see how the experiences of the broader set of computer users
represented by our participants compared with those reported in
earlier studies, we compiled mean levels of self-assessed
proficiency, total numbers of reported usability problem
episodes, and mean levels of reported frustration associated
with these episodes. As suggested numerically in Table 2 and
graphically in Figure 1, participants unsurprisingly assessed
themselves as moderately proficient in all the applications they
used frequently. The 25 participants reported 111 separate
episodes of usability problems with the applications, an average
of 4.44 episodes per participant. Mean levels of frustration over
the various applications ranged from 2.00 for browsers to 4.00
for PowerPoint, with an average standard deviation of 0.71. The
overall mean for frustration was 3.38. These results appear to be
consistent with those reported by Ceaparu et al. [6] and
Mendoza and Novick [9]. However, the results did not support a
correlation between mean self-assessed proficiency and mean
frustration. From prior research, we expected to find an inverse
relationship between perceived proficiency and frustration. From
our data, we found a weak inverse correlation, but the R-squared
was too low to have confidence in this relationship; an illustrative
computation over the Table 2 means follows Figure 1.
Figure 1. Participants’ reported number of usability problem episodes, mean level of frustration per application, and mean proficiency per application.
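As a concrete illustration of this check, the short sketch below computes the Pearson correlation and R-squared from the seven pairs of per-application means in Table 2. This is an approximation offered for illustration only; the study’s own correlation was presumably computed over individual participants rather than over application-level means.

# Illustrative sketch: correlation between mean self-assessed
# proficiency and mean frustration, using the per-application means
# from Table 2 (an approximation; the study presumably correlated
# over individual participants, not application-level means).
proficiency = [3.64, 3.33, 3.29, 3.20, 3.00, 3.33, 3.25]
frustration = [3.23, 3.15, 3.44, 4.00, 3.33, 2.00, 3.58]

n = len(proficiency)
mean_p = sum(proficiency) / n
mean_f = sum(frustration) / n
cov = sum((p - mean_p) * (f - mean_f)
          for p, f in zip(proficiency, frustration))
var_p = sum((p - mean_p) ** 2 for p in proficiency)
var_f = sum((f - mean_f) ** 2 for f in frustration)
r = cov / (var_p * var_f) ** 0.5
print(round(r, 2), round(r * r, 2))

On these application-level means the sketch yields an r of roughly -0.2 and an R-squared near 0.05, consistent with the weak inverse relationship described above.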
We now turn from participants’ problems with applications to
participants’ responses to those problems. A strong finding,
consistent with [6] and [9], is that study participants reported that
they tend not to use printed documentation. As indicated in Table
3, the mean time since the users last used printed documentation
was over 61 months. The median time was 24 months.
Participants’ use of online documentation was more current: the
mean time since last use of online documentation was 1.64 months,
and the median time was less than half a month. For both printed
and online documentation, the median values may be more
representative of users’ experiences than the mean.
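To see why the median can be the better summary here, consider a small hypothetical example in which a few very stale reports pull the mean far above the typical value. The numbers below are chosen so that their mean (about 61 months) and median (24 months) echo the printed-documentation row of Table 3, but they are invented for illustration, not the study’s data.

# Illustrative sketch: a few very large "months since last use" values
# skew the mean upward, while the median tracks the typical respondent.
# These values are hypothetical, not the study's data.
import statistics

months_since_printed = [3, 6, 12, 24, 24, 36, 120, 264]
print(statistics.mean(months_since_printed))    # 61.125, pulled up by outliers
print(statistics.median(months_since_printed))  # 24.0, the typical report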
As can be seen in Figure 2, almost all of the participants had
recently used online documentation; the mean reflects a few
participants who tended not to have used online documentation
for periods of 5 to 11 months. Conversely, only a few participants
had recently used printed documentation; most had not used
printed documentation in over a year, and some had not used
printed manuals in over 10 years.
         Printed   Online
Mean       61.37     1.64
Median        24        0
Table 3. Mean and median number of months since last use of printed and online documentation.
Figure 2. Distribution of months since last use of online and printed documentation.
Figure 3. Histogram of months since last use of online and printed documentation, on an exponential scale.
The distribution of months since last use of printed and online
documentation presented in Figure 2 is summarized in Figure 3
using an exponential time scale. The graphs suggest that there are
a small number of users who regularly consult printed
documentation and a much larger cluster of users who do so
rarely. Conversely, most participants reported that they consult
online documentation frequently; a smaller cluster do so only
once every year or so. With respect to printed documentation,
these findings are consistent with both the popular wisdom and
the results reported in [6] and [9]. However, the cause of this
pattern may not be participants’ aversion to printed manuals but
rather that software manufacturers are increasingly less likely to
provide a printed manual. Participants reported, for example, that
they did not have printed manuals for their three most frequently
used applications. With respect to online documentation, the results are
again consistent with those of the earlier studies, in that reported
rates of use of online documentation are higher than those for
printed materials. However, rates of use of online materials
appear to be somewhat higher than previously reported. This may
be attributable to the differences among the studies with respect to
subject populations. Perhaps computer science students and
middle-school teachers are less likely than other computer users
to consult online documentation.
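For concreteness, the exponential binning used in Figure 3 can be sketched as follows. The bin edges (0, 1, 2, 4, ..., 256 months) are read off the figure’s axis; the reported values in the sample lists are hypothetical, not the study’s data.

# Illustrative sketch: bucketing months-since-last-use onto the
# exponential scale of Figure 3. Bin edges follow the figure's axis
# (0, 1, 2, 4, ..., 256 months); the sample values are hypothetical.
from collections import Counter

BIN_EDGES = [0, 1, 2, 4, 8, 16, 32, 64, 128, 256]

def bin_label(months):
    # Return the largest bin edge that does not exceed the reported value.
    label = BIN_EDGES[0]
    for edge in BIN_EDGES:
        if months >= edge:
            label = edge
    return label

online = [0, 0, 0.5, 1, 2, 5, 11]     # hypothetical reports, in months
printed = [3, 24, 24, 60, 120, 250]   # hypothetical reports, in months

print(Counter(bin_label(m) for m in online))
print(Counter(bin_label(m) for m in printed))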
These relative patterns of use of documentation are confirmed by
participants’ reported distributions of methods with which they
responded to usability problems. As indicated in Figure 4,
participants reported that they are least likely to use a printed
manual (mean=3.16 percent) and are about equally likely to ask
someone else, use online help, or solve the problem without help.
In some cases, users give up (mean=10.48 percent). Table 4
presents detailed numerical results.
Figure 4. Range and mean of reported solution methods.
Solution Method        Max    Mean   Std Dev   Median
Asked Other             90   28.98     24.14       25
Used Online Help        75   27.94     19.61       25
Used Printed Manual     20    3.16      6.16        0
Solved Without Help     95   29.36     19.47       25
Gave Up                 50   10.48     12.36        5
Table 4. Range and mean of reported solution methods (values in percent).
For each method at least one participant reported zero use, so the
range of use is zero to the maximum value. Again, the
participants’ reported use of printed documentation appears to be
consistent with that of prior studies, but their use of online
documentation appears to be higher than previously reported.
Users may not be reading the manual—perhaps because they do
not have one—but more participants than indicated in previous
studies reported that they are using online help systems, about 28
percent rather than 4 percent. These data, though, indicate users’
eventual solution methods rather than the methods they first tried.
4.2 Qualitative Results
Several of the questions asked were qualitative in nature. The
number of subjects limits the conclusions that can be drawn from
these data; several patterns can be seen in the responses, however.
In this section we discuss these observations.
One of our goals in this study was to assess attitudes underlying
or associated with reluctance to use documentation. Accordingly,
we asked 19 of the participants to offer adjectives or short phrases
describing their associations with paper manuals and with online
help facilities. Overall, the participants’ word associations
indicated strongly negative attitudes toward printed manuals and
largely positive attitudes toward online documentation.
Participants offered a total of 9 positive and 61 negative words
and phrases about printed documentation, and offered a total of 40
positive and 22 negative words and phrases about online
documentation. Several patterns in the responses were evident.
Participants suggested that manuals are
- physically hard to handle (58 percent of the 19 participants offered descriptions such as “bulky” and “cumbersome”)
- hard to navigate (37 percent)
- too basic to be useful (37 percent)
- hard to understand (26 percent)
- unstylish (26 percent characterized manuals as “unstylish,” “boring,” or “antiquated”)
- out of date with respect to the software (21 percent)
Still, 32 percent of the 19 participants identified at least one
positive attribute of paper manuals, observing that manuals can be
helpful, handy, and informative, and that one can write in them or
place bookmarks. Sixteen percent said that
documentation is easier to read in paper form than on the screen.
Attitudes toward application online help facilities were more
positive. Participants said that online documentation is
- convenient (58 percent)
- helpful (26 percent)
- searchable (21 percent)
- easy to use (21 percent)
But participants also offered negative associations: like paper
manuals, online help can be hard to navigate (32 percent) and
may be too basic to be useful (16 percent).
The descriptions offered by the participants in this study accord
well with the findings of earlier studies [6, 9, 13] that users do not
view a paper manual as the source of choice.
The most common negative descriptions characterized a paper
manual as cumbersome, suggesting that users might not be using
them because they find it impractical to keep them physically
nearby and convenient. One participant mentioned that her
organization would be throwing out “tons of manuals” that they
no longer need. The perception that paper manuals are
cumbersome may be reinforced by the success of the movement
toward minimal manuals. We note that one quarter of the
participants reported viewing manuals as unstylish, and one fifth
characterized manuals as being out of date with respect to the
current software release, suggesting that printed manuals
(correctly) are associated with older versions of software.
The other negative perceptions are more troubling: both manuals
and online help facilities are viewed as hard to navigate (37
percent for paper manuals, 32 percent for online help) and
offering an inappropriate level of explanation (a total of 63
percent for manuals, 16 percent for online). While these
perceptions are stronger for manuals, they suggest that online help
is not completely successful in solving those problems.
These comments and the anecdotes offered by the study
participants suggest several specific factors that may be reducing
the usability, utility and attractiveness of documentation. These
factors, discussed in the following sections, include navigation,
search terms, level of explanation, screen real-estate, and
uncertain boundaries among applications, network, and operating
system.
4.2.1 Navigation
As applications and their associated help facilities become more
complex, navigation becomes an increasing problem.
Participants in our study repeatedly reported knowing how to
accomplish a task but not remembering where to find the
functionality, echoing the observations of Baecker et al. [1]. This
is an intrinsic limitation of graphical user interfaces: it can be hard
to find something that you cannot see. The solution of adding ever
more tool bars to keep functionality visible may be reaching a
limit, though; the clutter of tool-bar options may be making it
difficult to see the desired functionality even when it is clearly
displayed. When asked to describe a recent problem they had
encountered, for example, 28 percent of the 25 participants
recalled incidents involving problems turning off or hiding change
tracking in Word. In current editions of Word the relevant tool bar
is displayed by default when change tracking is enabled, and the
needed controls are visible. Despite this, participants reported
searching extensively, consulting help, and asking colleagues.
So when users have trouble navigating the application, they turn
to help—only to have trouble navigating that as well. Even when
intending to go to Microsoft’s Internet site, for example, one
subject reported that she uses Google to locate the page within the
Microsoft site to avoid having to navigate the site directly.
Another participant expressed frustration at getting into a
reference loop when using help. A third described finding the
information she needed as being like “searching for buried
treasure,” and a fourth complained of a lack of cues to know
whether the path will lead to the desired information. One person
mentioned an interesting twist on the help navigation problem: he
had difficulty locating the help functionality. He wanted the local
help documentation, but he kept accessing the Internet-based
documentation instead.
4.2.2 Search Terms
Participants reported feeling frustrated or having difficulty
locating information because the terms and keywords used in the
help facilities failed to match their own vocabulary; they didn't
know the right word to use in searching. Some participants
indicated that they compensated for this terminology mismatch by
scanning an index or table of contents instead of using the search
facility. They find it easier to locate the appropriate section of the
documentation and then browse for the specific information that
they need.
4.2.3 Level of Explanation
Participants mentioned problems with the level of explanation
being offered; as might be expected, however, they did not agree
as to whether the explanations were too basic or too technical.
One participant described one help facility as covering “stuff so
basic you wonder ‘does anyone really need to read that?’” Others
spoke of explanations being hard to understand, of assuming that
the user knows the jargon or symbols of the application, and of not
being written for the casual user. Subjects’ preference for asking
another person or turning to unofficial Internet-based sources may
be in part an attempt to find help offered at an appropriate level of
complexity, expressed in correspondingly appropriate terms.
4.2.4 Screen Real Estate
Although most participants preferred accessing help online, we
saw some indications of limitations of on-screen presentation of
information. Five participants (20 percent) reported using the
online help facility to locate documentation so that they could
print it for use in solving the problem. Limited screen real estate
and window-manager behavior may be part of the problem: these
participants complained that help windows hide (or are hidden by)
application windows, making it impossible to see both at once.
4.2.5 Uncertain Boundaries
Four participants (16 percent) appeared to be uncertain as to the
boundaries among applications, network, and operating system.
For example, one participant reported looking in the application
help to determine why her mail reader had lost connection to the
mail server. Another described difficulties sharing a spreadsheet
across applications. This confusion is understandable in that
applications are converging on common GUI conventions and
more closely integrating what were once distinct applications.
Ironically, to the extent that users have less to learn about
individual user interfaces, it becomes much less clear to them
where a problem is occurring and thus in which application’s
documentation they should look for answers. Likewise, no single application’s
documentation is likely to address incompatibilities or
interactions with other applications. Thus, a vexing class of
problems “falls through the cracks” and is not addressed by any
official documentation—and this may not be apparent to users
when they are dealing with such a problem.
4.2.6 Summary
The qualitative results suggest some systematic reasons why users
report that they do not turn first to documentation when
encountering problems. Whether presented on paper or online,
users find documentation difficult to navigate. It may not be clear
to users which application’s documentation—if any—holds the
solution to the problem that they are seeing. If they do manage to
locate an answer, it likely is written at the wrong level of detail or
expertise. And even if the answer is useful, a lack of screen real
estate makes it challenging to both read and follow the
instructions at the same time.
5. CONCLUSION
The interviews conducted in this study indicate that computer
users at work report that they generally do not use printed
documentation. On average, more participants reported that they
abandoned a task than used printed documentation. The median
proportion of the times that participants reported solving problems
with computer applications by using printed manuals was 0
percent. The most likely reasons for not using printed manuals are
their perceived unavailability, bulkiness, difficulty of navigation,
inappropriate level of detail or expertise relative to the user, and
being out of date, either in their content or just in the “dated”
quality of being a printed document.
In these interviews, computer users in work contexts reported
using online documentation more frequently than suggested by
previous studies. However, the median proportion of the times
that participants reported that they solved problems with
computer applications by using online help was only 25 percent.
Users believed that they were equally likely to solve a problem by
asking someone else or by finding a solution without help. While
they report using online documentation more frequently than
printed documentation, the reasons that users do not like online
help are similar to some of the reasons that they do not like
printed manuals, primarily that the documentation is hard to
navigate and that it is pitched at the wrong level of detail or
expertise. Additionally, online help and the user’s application
compete for screen area, which makes it hard for users to view
both at the same time.
5.1 Discussion
Users may not be using paper manuals because paper manuals are
not available. No participant in our study complained about a lack
of paper documentation, however, which suggests that minimal
manuals have been well accepted by these application users. In
fact, when asked to describe the characteristics of a good manual,
three participants indicated that they could not conceive of such a
thing, and two more talked about having the “paper” manual on
CD-ROM. These responses may indicate that the distinction
between printed manuals and online help lies not in paper vs.
electrons but in the organization of the content. Indeed, users’
reluctance to use online help may reflect the extent to which
online help is simply an electronic equivalent of traditional paper
manuals. And given the participants’ concern about the effects of
terminology mismatch on search, it becomes increasingly
apparent that simply adding a search function to this kind of
online manual does not create a useful help system.
Users may perceive the time cost of figuring something out via
documentation as comparable to the time cost of solving the
problem by trial and error. The one college student in this study
preferred to solve problems by trial and error, which is consistent
with the observations of Ceaparu et al. [6]. Most of the
participants in this study, however, were busy professionals who
may not have felt that they had time to spend in exploring either
the application interface or the documentation to find an answer.
In light of these disincentives, users may be reasonable in
preferring other sources of help. It may be faster to locate a
colleague than to navigate the “convenient” online help. The
colleague will not insist that you use the correct terms or
correctly identify the source of a problem, and most people will
adjust their explanations to the level of the questioner—or
better yet, just fix the problem themselves. Asking a
knowledgeable human being is the gold standard for a help
facility. Thus patterns of solution preferences may reflect
availability of preferred alternatives to documentation. The
middle-school teachers in [9] had many colleagues working on
exactly the same problem, so numerous relative experts with
current knowledge of relevant solutions were available to them.
5.2 Limitations and Future Work
When we designed and executed this study, we asked the
participants to tell us about their use of “online” help. The
online help category had been used by Ceaparu et al. [6] and
again used by Mendoza and Novick [9]. In our interviews, we
learned that subjects actually use three different kinds of online
help: the help provided with the application, help available from
the publisher via the Web, and help available from unofficial
sources such as online forums and newsgroups, usually located
via a search engine. While these three kinds of online help
remain aggregated in this study’s quantitative analysis,
qualitative analysis of the participants’ accounts of their use
suggests that participants in fact consider these to be distinct
sources of information. For example, one participant
commented that he did not want to be referred to a “chat room
with 100 postings” because he doesn’t have time to wade
through them.
Another category in which subcategories might be clarified is
“asked someone else.” In Ceaparu et al. [6], the reported data
did not indicate the kind of person whom the subject asked for
help. Similarly, in Mendoza and Novick [9] this category was
not subdivided because in virtually all cases the subject asked a
colleague. In retrospect, this apparent uniformity may have
resulted in part because the trainer was also one of the subjects’
colleagues. In the present study, the quantitative data remain
aggregated in the category of “asked someone else,” but
analysis of participants’ responses suggests that the “someone
else” might be either a colleague or a professional
staffing a help desk. This difference is likely to be important,
because (a) our data suggested that the availability of a help
desk varied hugely among the participants, and (b) we can
distinguish between informal social networks and formal
commercial relationships.
With these insights born of experience with the present study,
we have started a second study that will distinguish among
kinds of online help and among ways in which people “asked
someone else.” The study will also include protocol analyses of
interview subjects, observing the participants at work. We
expect the protocol analyses will enable us to assess the validity
of participants’ accounts of their use of documentation and will
help us model (cf., [13]) the processes through which the
participants seek to solve the usability problems they encounter.
6. ACKNOWLEDGMENTS
This work was supported by National Science Foundation
Award No. 0080940 and an endowment from SBC. Nigel Ward
provided a useful critique of this paper. We thank the
anonymous reviewers for their helpful comments. We extend a
special thanks to the participants, who generously gave us their
time.
7. REFERENCES
[1] Baecker, R., Booth, K., Jovicic, S., McGrenere, J., and
Moore, G. (2000). Reducing the gap between what users
know and what they need to know. Proceedings of the ACM
2000 International Conference on Intelligent User
Interfaces, 17-23.
[2] Bessiere, K., Ceaparu, I., Lazar, J., Robinson, J., and
Shneiderman, B. (2003). Social and psychological
influences on computer user frustration, CS Technical
Report 4410, Department of Computer Science, University
of Maryland.
[3] Beyer, H., and Holtzblatt, K. (1996). Contextual design:
Defining customer-centered systems. San Francisco:
Morgan-Kaufmann.
[4] Carroll, J. (1990). The Nurnberg funnel: Designing
minimalist instruction for practical computer skill.
Cambridge, MA: MIT Press.
[5] Carroll, J. (Ed.) (1998). Minimalism beyond the Nurnberg
funnel. Cambridge, MA: MIT Press.
[6] Ceaparu, I., Lazar, J., Bessiere, K., Robinson, J., and
Shneiderman, B. (2004). Determining causes and severity
of end-user frustration, International Journal of Human-
Computer Interaction, 17(3), 333-356.
[7] Hazlett, R. (2003). Measurement of user frustration: a
biologic approach, Conference on Human Factors in
Computing Systems (CHI 2003), April, 2003, Fort
Lauderdale, FL, 734-735.
[8] Hilbert, D. (1998). A survey of computer-aided techniques
for extracting usability information from user interface
events, Technical Report UCI-ICS-98-13, Department of
Information and Computer Science, University of
California at Irvine, March, 1998.
[9] Mendoza, V., and Novick, D. (2005). Usability over time,
Proceedings of SIGDOC 2005, Coventry, UK, September,
2005, 151-158.
[10] Rettig, M. (1991). Nobody reads documentation,
Communications of the ACM, 34(7), July, 1991, 19-24.
[11] Smart, K., De Tienne, K., and Whiting, M. (1995).
Documentation design decisions: Accounting for customer
preferences, Proceedings of SIGDOC 95, September-
October, 1995, Savannah, GA, 155-156.
[12] Smart, K., De Tienne, K., and Whiting, M. (1998).
Customers' use of documentation: The enduring legacy of
print, Proceedings of SIGDOC 98, September, 1998,
Quebec, Canada, 23-28.
[13] Smart, K., Whiting, M., and De Tienne, K. (2001).
Assessing the need for printed and online documentation: A
study of customer preference and use, Journal of Business
Communication 38(3), 285-314.