The Importance Of Text Width And White Space
for Online Documentation
Tim Comber
ACKNOWLEDGEMENTS
I would like to express my appreciation of the time and effort
expended by John Maltby in supervising this research. His help
and guidance have enabled me to keep to schedule and focused
my research effort. Thanks are also due to Jamie Walton for his
advice on statistical procedures.
ABSTRACT
This study investigates the importance of text width and passive
white space on comprehension, speed of reading and user
satisfaction for text displayed on the monitor of a personal
computer. Thirteen subjects were tested with 8 different text
widths and white space or no white space, 16 conditions in all.
The results showed no relationships between the different text
widths and white space for reading speed and comprehension
but a significant relationship for satisfaction was found. The
results suggest that individual differences in reading abilities
are more important to reading speed and comprehension than
text formats. However for maximum user satisfaction, text
should have margins and be between 3 and 5 inches in width.
Guidelines based on studies of print media may not be entirely
applicable to computer displays. Future studies should
investigate longer text passages and interactive help.
CONTENTS
Acknowledgements 2
Abstract 3
Contents 4
List of Tables and Illustrations 6
1. Introduction 7
1.1. Background 7
1.2. Hypothesis and Objectives 8
1.3. Justification 9
1.4. Methodology 10
1.5. Definitions 11
1.6. Limitations and Key Assumptions 12
1.7. Outline of Report 13
2. Literature Review 14
2.1. Psychological Processes 14
2.2. Theory 16
2.3. Reading from the Screen 17
2.4. Learning 20
2.5. User Manuals 21
2.6. Online Help 22
2.7. Presentation Of Text 24
2.8. Line Width 29
2.9. White Space 29
3. Methodology 32
3.1. Subjects 32
3.2. Design 32
3.3. Data Collection 33
3.4. Method of Collection 34
3.5. Treatment of Data 39
4. Analysis of Data 41
4.1. Hypotheses and Null Hypotheses 41
4.2. Results and Discussion 42
5. Summary and Conclusions 48
5.1. Introduction 48
5.2. Findings About Hypotheses 48
5.3. Conclusions About the Research Problem 51
5.4. Implications 51
5.5. Limitations 52
5.6. Further Research 52
Bibliography 54
Appendix A - Experiment for Subject 1 57
Appendix B - Results 77
Raw Data 77
Comments by Subjects 83
Appendix C - Consent Form 85
Appendix D - Objectvision Decision Trees 87
LIST OF TABLES AND ILLUSTRATIONS
Table 1: Mean reading time (in seconds) and comprehension (percent correct)
as a function of presentation mode and presentation order (Belmore 1985) 18
Figure 1: Taxonomy of online documentation (Shirk 1988) 22
Table 2: Comparison of VT100 and SVGA video displays 26
Figure 2: 2 inch no white space condition 33
Figure 3: 5 inch white space condition 33
Figure 4: Structure chart for the Objectvision program 35
Table 3: Order of presentation of text screens for all subjects 38
Table 4: Experimental condition associated with each screen 39
Table 5: Order of Text Conditions for Each Subject 39
Table 6: Means and standard deviations for width and white space for reading
times, answer times, satisfaction and comprehension 42
Figure 5: Comparison of average reading times 43
Table 7: Summary of Analysis of reading times 43
Figure 6: Comparison of average answer times 44
Table 8: Summary of analysis of answer times 44
Figure 7: Comparison of average comprehension scores 45
Table 9: Summary of analysis of comprehension scores 45
Figure 8: Comparison of average satisfaction rating 46
Table 10: Summary of analysis of satisfaction rating 46
Table 11: Comparison of results reported in literature for width 50
Table 12: Comparison of results reported in literature for white space 50
1. INTRODUCTION
1.1. Background
"Human-computer interaction is a discipline concerned with the design,
evaluation and implementation of interactive computing systems for human
use and with the study of major phenomena surrounding them" (Hewett 1992).
The presentation of text on computer screens is one aspect of this interaction
that requires more investigation.
The Association for Computing Machinery predicts that computers will be
used for images, voice, sounds, video, text, and formatted data, all of which
can be exchanged via communication links. Electronic and print media will
continue to be cross-assimilated. New display technologies are expected to
enable very large displays (Hewett 1992).
The computer screen has become an important intermediary between the user
and electronically stored information (Rubens and Krull 1988). Online
information can take many forms:
•advertising and exhortations to pay for the software
(particularly shareware)
•canned demonstrations
•company documents
•data entry and retrieval prompts
•electronic journals
•electronic mail
•electronic newsgroups such as are found on Internet and
Compuserve
•entire books
•error and system messages
•full graphics
•full text
•latest changes to the software
•multimedia encyclopedias and other reference material
•online help
•support for interactive tasks
•user manual
•tutorials
•whole text databases such as ABI/Inform and Computer Select.
People who work with computer displays must extract information from the
screen according to the demands of the task. Each screen of information must
be presented in a manner that helps the reader to process the information
(Tullis 1983).
Many users do not consult the paper manuals but instead attempt to operate
software by exploration. Grill (in Brockman 1990) found that users said they
read the manual thoroughly, but after an average of nine minutes, it was put
aside in favour of trial and error.
There has been much research on the presentation and readability of text on
computer screens (Mills and Weldon 1987). There are a number of factors that
have the potential to affect the ease with which people can extract information
from the screen. Such factors as fonts, brightness, page size, windows, and
number of characters in the display have been investigated.
One area that has received some attention but has not been resolved is the
width of text on the screen. One viewpoint (Mills and Weldon 1987) argues
that more information and quicker retrieval are possible with greater density of
text on the screen. On the other hand, studies with written text (Brockman
1990) have shown that the wider the text the more effort required by the
reader to find the beginning of the next line. This slows reading speed and
reduces comprehension.
Too narrow a text width also has problems. Reading speed was reduced when
text width was reduced from about 2 inches to 1.67 inches (Brockman 1990).
Graphical interfaces do allow the user to narrow the window to any width but
this may not be satisfactory if width and white space are important because:
•The user may not realise that the width of text has an effect on
reading.
•Narrowing cramps the screen, increasing the density of text and
reducing the amount of "white space". Well designed text
screens should have at least 50% white space (Brockman 1990).
1.2. Hypothesis and Objectives
People are increasingly relying on computer screens to provide textual
information. Guidelines for design of online documentation base
recommendations on research with print media and with earlier studies using
what is now outdated equipment. It is necessary to update research and to
investigate the effect text width and white space have on users’ reading
performance.
From the literature search for this thesis, empirical research on white space
with computer displays seems to be non-existent. In early computer
experiments, white space was given little consideration because the screens
could only display a limited amount of text. Modern screens are capable of
displays that more closely approach print media.
Expectations of previous research are that text width and passive white space
are important to reading speed, comprehension and satisfaction. The purpose
of this study is to test this expectation for computer monitors
by investigating if there is a relationship between the dependent variables
•reading speed
•comprehension
•satisfaction
and the independent variables
•width of text
•amount of white space.
1.3. Justification
Current guidelines (Phillips and Crock 1992) frequently suggest that the width
of text should be between 40 and 80 characters. This recommendation is based
on research conducted in the 1980s using VT100 or similar terminals, e.g.
(Duchnicky and Kolers 1983; Cherry, Fischer, Fryer and Steckham 1989).
Since this research was done the technology has changed dramatically. Rubens
(1986) was able to suggest that a typical industrial-grade monitor had a screen
9 inches by 6 inches capable of displaying 80 characters to a line and 24 lines
deep. These early terminals were incapable of displaying the different fonts
and point sizes of modern terminals, and were also narrower and of much
lower resolution. As a consequence the old guidelines are no longer valid.
If line width is proved important then it needs to be expressed as a standard
measure, e.g. picas, inches or centimetres. Line width expressed as number of
characters has no relevance to graphical interfaces.
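The translation from a character-based guideline to a physical measure depends entirely on the font in use. The sketch below illustrates the point; the 0.1-inch average character width is an assumption for a roughly 12-point monospaced font, not a figure from this study.

```python
# Convert a line-width guideline expressed in characters into a physical
# width, given an assumed average character width for the font in use.
# The 0.1-inch default is an illustrative assumption only.

def chars_to_inches(n_chars, avg_char_width_in=0.1):
    """Physical line width, in inches, of a line of n_chars characters."""
    return n_chars * avg_char_width_in

# Under that assumption, the 40-80 character guideline translates to:
low = chars_to_inches(40)    # 4.0 inches
high = chars_to_inches(80)   # 8.0 inches
```

A proportional font would need a measured average character width, which is exactly why a character count alone says little about physical line width on a graphical display.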
A number of references, e.g. (Brockman 1990; Rubinstein 1988; Horton 1990),
state an opinion that white space is important. One explanation is that an area
of white space is needed to reduce visual distraction. This is felt to be less
important with online documentation because of the border or frame of the
monitor. However another viewpoint is that white space is important to:
•frame sections of text
•break large amounts of text into meaningful chunks
•draw the reader’s attention to text items (Horton 1990).
Horton believes that passive white space is not necessary for online documents
because the reader:
•already has the monitor frame to separate the text from visual
distractions,
•does not need a margin to hold the document while turning
pages.
However he does not offer any evidence to support his claim.
1.4. Methodology
1.4.1. Description
Thirteen undergraduates of the University of New England, Northern Rivers
volunteered for an experiment on text presentation. The experiment was
expected to take between half an hour and one hour.
The independent variables, width and white space, were arranged in a repeated
measures, two factors design. The width of the text was varied in 1 inch
increments from 2 inches to 9 inches and the text passages had either passive
white space or no passive white space. The text passages of approximately 150
words were taken from the "help" files of Microsoft Word 2.0.
The software automatically recorded the time to read each passage of text and
the time to answer each set of questions. After reading each text passage, each
subject rated readability on a scale from very good to very poor. This gave the
satisfaction rating for each screen condition. The rating question was followed
by three questions based on the passage just read which gave a comprehension
score for each passage.
1.4.2. Statistical Processes
The design of the experiment was based on a treatments-by-treatments-by-
subjects, or repeated-measures, two-factor design (Bruning and Kintz 1987).
This is a form of ANOVA that allows for the means to be compared for each
condition and for the interaction between the conditions. The calculations were
carried out using a spreadsheet and the application of the appropriate formulae.
The analysis was checked by analysing the summary data using a two factor
without replication ANOVA provided by Microsoft Excel 4.0 as part of its
analysis tools.
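The spreadsheet calculation can be reconstructed in a few lines. This is an illustrative sketch, not the thesis's own worksheet: it computes the F ratio for one factor (say, width) in a two-factor repeated-measures design, using the factor-by-subjects interaction as the error term, which is the standard treatments-by-treatments-by-subjects analysis.

```python
import numpy as np

def rm_anova_factor_a(data):
    """F ratio for factor A in a two-factor repeated-measures design.

    data[s, i, j] is the score for subject s at level i of factor A
    (e.g. width) and level j of factor B (e.g. white space)."""
    n, a, b = data.shape
    grand = data.mean()
    a_means = data.mean(axis=(0, 2))      # mean at each level of A
    s_means = data.mean(axis=(1, 2))      # mean for each subject
    as_means = data.mean(axis=2)          # subject-by-A cell means
    ss_a = n * b * ((a_means - grand) ** 2).sum()
    # The A-by-subjects interaction is the error term for factor A.
    ss_as = b * ((as_means - s_means[:, None]
                  - a_means[None, :] + grand) ** 2).sum()
    ms_a = ss_a / (a - 1)
    ms_as = ss_as / ((a - 1) * (n - 1))
    return ms_a / ms_as
```

For this study the array would be 13 subjects by 8 widths by 2 white-space levels; the same computation with the roles of the two factors swapped gives the F ratio for white space.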
The four main assumptions underlying the use of these statistical procedures
are:
1. a normally distributed population
2. independent observations
3. measurements on interval or ratio scale
4. homogeneity of variance
If these assumptions are not valid there is increased probability of making a
Type I error i.e. accepting results as significant when they are not (Wright and
Fowler 1986). It is reasonable to expect that the population of readers is
normal. Subjects were assigned randomly to conditions and the conditions
were randomised. The time and comprehension measurements are on a ratio
scale though the satisfaction rating is only ordinal. As there are no equivalent
non-parametric tests to apply to this experimental design the ANOVA is most
suitable.
1.4.3. Justification for the Methodology
Experiment or research design enables the experimenter to answer the research
question and exclude rival hypotheses and extraneous variables that could also
explain the cause-effect relationship, (Huck, Cormier and Bounds 1974).
This research used the experimental method for a variety of reasons:
•it provided evidence for the importance of the independent
variables
•it enabled control of font size, terminal type, lighting
conditions, computer type, operating system and interface
•it excluded rival hypotheses particularly fatigue, learning and
differences in the text.
The two factor design allows for the investigation of possible interaction
effects between variations in width and the presence or absence of white space.
1.5. Definitions
ANOVA stands for analysis of variance, a statistical technique to determine if
significant differences of means occur between two or more groups (Zikmund
1991).
Descenders are the parts of a character that fall below the body of the letter;
the part that descends below the baseline.
Leading is additional space inserted between each line of text.
Legibility is described by Tinker (1969) as designating the "effects of
typographical factors on the ease and efficiency of perception in reading." He
goes on to define legibility as those factors that together affect ease, accuracy
and speed of reading. Another similar definition is given by Gribbons (1988),
who states that legibility is the "speed, accuracy and ease of visually receiving
and comprehending meaningful continuous text."
Pica is the unit of measurement for line width, approximately 1/6 inch. A four
inch line is 24 picas wide.
Readability refers to the ease with which the meaning of text can be
comprehended and is measured by means of reading comprehension and
reading speed (Mills and Weldon 1987).
Set solid is text without leading.
Size of type is measured in points where the point is approximately 1/72 inch.
Speed of reading is the time it takes the reader to read a passage of text. In this
experiment it is the time from when the text screen is displayed to the time the
left mouse button is clicked.
Text width is defined as the distance from the left border to the right border of
a block of text where the text is left justified and right-ragged. This can be
arrived at by measuring the block of text directly or by the following formula:
W = SW - 2SB - 2F - 2BW - LM - RM
where:
W = Text Width
SW = Screen Width
SB = Screen Border Width
F = Window Frame Width
BW = Background Width
LM = Left Margin
RM = Right Margin
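The formula translates directly into code. The sketch below assumes all arguments are given in the same physical unit (inches here); the example values are purely illustrative.

```python
# Direct translation of the text-width formula above:
# W = SW - 2SB - 2F - 2BW - LM - RM, all in the same unit.

def text_width(screen_width, screen_border, frame, background,
               left_margin, right_margin):
    return (screen_width - 2 * screen_border - 2 * frame
            - 2 * background - left_margin - right_margin)

# Illustrative values: a 10-inch screen with 0.25-inch border, frame
# and background strips and 0.5-inch margins on either side.
w = text_width(10, 0.25, 0.25, 0.25, 0.5, 0.5)   # 7.5 inches
```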
White space is the blank space surrounding text and objects. Active white
space is used to organize information by separating chunks of information
(Horton 1990). Passive white space, outside margins, is used to isolate the text
from external distractions. This experiment investigates the effect of passive
white space.
x-height is the height of the letter "x" and is used as a comparison measure for
different fonts.
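The typographic units defined above can be checked with a few lines of arithmetic:

```python
# The typographic units defined above, expressed in inches.
POINT_IN = 1 / 72    # one point is approximately 1/72 inch
PICA_IN = 1 / 6      # one pica is approximately 1/6 inch (12 points)

# The four-inch line from the pica definition above, in picas:
line_picas = 4 / PICA_IN    # 24 picas
```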
1.6. Limitations and Key Assumptions
Guidelines for online documentation recommend left justified text and ragged
right margins, e.g. (Phillips and Crock 1992; Horton 1990). This formatting is
also supported by research (Trollip and Sales 1986). Therefore this experiment
uses the same format. However this means that there are two different ways to
measure line length:
•an average for the block of text
•the length of the longest line in the block of text.
The second alternative, the length of the longest line in the block of text, was used
because:
•it is easier to calculate the line width and standardize it for
different text passages
•it would be impossible to develop passages of text containing
exactly the right words to fit the line width
•fully justifying the text would make the lines the same length
but would be contrary to recommendations and would result in
more blank space between words
•using single lines of text would not simulate real reading
conditions for users of online documentation.
Text was chosen from Microsoft help screens to provide a more realistic
simulation. To test narrow text conditions and to be able to compare these with
wider conditions it was necessary to use small text passages otherwise
scrolling would have been necessary to enable the reader to view all the text.
This would have affected reading times.
It is assumed that university students represent all computer users in their
ability to read and comprehend on-screen information. Font type, font size and
leading were kept the same across all conditions as each of these may have
affected the results. Subjects were all tested on the same brand and model of
computers.
1.7. Outline of Report
This report begins with an overview of text presentation and its place in the
field of human-computer interaction. The research problem is defined as
investigating the importance of text width and white space for readers using
computer screens. The results indicate individual differences in reading
abilities and preferences are more important to reading text on computer
screens than text width and white space. Definitions of key terms are provided
in this chapter and a brief summary of the methodology is given. The
limitations and assumptions involved are listed.
The presentation of text is researched by many different disciplines including
psychologists, typographers, technical writers and information technologists.
The history of research in this area does not follow a logical or sequential
development. Therefore the literature review is organised under subject
headings.
The methodology chapter describes the experimental design, the recruitment of
subjects, data collection and treatment. The data is analysed using a repeated-
measures, two-factor design or 2 by 8 ANOVA. Graphs of the means provide
further information about the results.
The final chapter discusses the failure to reject the null hypotheses for reading
speed and comprehension and the importance of the significant result for
satisfaction. Further research is indicated to investigate the effect of text width
for long text passages, interactive help screens and the use of active white
space.
2. LITERATURE REVIEW
2.1. Psychological Processes
2.1.1. Introduction
Reading is described by Noordman (1988) as an activity of information
processing that transforms patterns and symbols into an understanding of the
text. The same cognitive processes are involved when a user reads online
documentation. The reader transforms patterns of light and shadow into
meaning.
The ability of the reader to read is affected by the presentation of the text. The
study of eye movements, speed of reading, and comprehension provides an
explanation of why reading is not a simple or straightforward task.
2.1.2. Eye Movements
Javal (Singer and Ruddell 1985) discovered that when reading the eyes move
in jumps that he named saccades. Each saccade lasts about 20 ms. The eyes
then fixate on the text for about 240 ms. Finding the beginning of the next line
takes about 40 ms.
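The timings above imply a rough upper bound on reading speed. Assuming, as a simplification, that one word is perceived per fixation (an assumption for this sketch, not a claim from Javal), the arithmetic is:

```python
# Rough reading-speed estimate from the saccade and fixation timings above,
# assuming (as a simplification) one word perceived per fixation.
SACCADE_MS = 20     # duration of one saccade
FIXATION_MS = 240   # duration of one fixation

ms_per_word = FIXATION_MS + SACCADE_MS   # 260 ms per word
words_per_minute = 60_000 / ms_per_word  # roughly 230 words per minute
```

The result is close to typical adult reading speeds, which suggests the fixation cycle, not raw visual perception, sets the pace of ordinary reading.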
The fixation period is the only time that the reader can perceive print. As
explained by Noordman (1988) only during these stops is information
extracted from text though the fixations are often inaccurate, particularly for
the first fixation on a new line. Two hundred and ten ms of this period are
available for stabilisation and processing of the information; the other 30 ms is
required for seeing, (Singer and Ruddell 1985).
The retina is the area of the eye on which the lens focuses images, senses light
and begins the analysis of the image (Rubinstein 1988). The eye is able to "see"
over a wide angle, but detail is obtained only over a narrow region (about 2
degrees across) at the centre of the retina, called the fovea, which is
responsible for detailed focusing (Card, Moran and
Newell 1983). At a reading distance of 35 cm the eye can only see, in detail, a
circle about 10 mm in diameter. This circle can only contain some letters and
maybe a word or two. If the eye remains steady then it cannot see. To see the
eyeball must be moved by muscles to bring different areas of interest into
focus. Peripheral vision is used for orientation.
Detailed vision is also dependent on contrast. Lateral inhibition enables the
retina to detect edges. Circular areas of the fovea react most when the middle
receives more light. This effect is not dependent on absolute levels of light but
instead requires contrast.
The speed of reading is limited by the ability of the visual system to handle
information. Speed can also be affected by screen displays that flicker or vary
in intensity. The critical fusion frequency (CFF) is the number of flickers per
second that will seem to be stable for 50% of all people. People differ greatly
in the level of CFF. This effect may explain some of the poor reading results
from computer screens reported early in the literature.
Finding the start of a new line is a perceptual problem according to van Nes
(1988). The movement to the start of the next line may not be directed quite
precisely enough if the angle between the required direction of motion and the
direction of the lines is small. These small "eye return angles" occur when the
lines are long or the distance between lines is small.
If the point of focus is more than about 30 degrees from the centre of vision,
the head moves, reducing the angular distance. At the normal reading distance from a
computer screen, about eighteen inches, the eye scans four and a half inches
(Brockman 1990).
2.1.3. Limitations on Reading Speed
The reading process could go faster as a word can be seen in about 50 ms. We
could read about four times faster than we do. There are two explanations
(Noordman 1988):
•The bottleneck is the processing of the higher-order information
and not the perception of the visual information. Different
methods of text presentation may facilitate the higher-order
processes in text understanding.
•Slowness of reading is due to inefficient perceptual processes.
Speeding up the eye movements during reading could speed up
the process. Eye fixations are quite often inaccurate particularly
for the first fixation on a new line.
Other than training the user, there may be ways of presenting text to speed the
reading process. Noordman (1988) reports on the rapid serial visual
presentation (RSVP) technique. However this method does not appear to be
accepted except for displays that have very little space available e.g.
advertising banners, cash registers.
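The RSVP technique can be sketched in a few lines: words appear one at a time at a fixed position, so no saccades or return sweeps are needed. The terminal-based display, the function name and the 250 ms per word rate are illustrative assumptions, not details from Noordman's report.

```python
import time

def rsvp(text, ms_per_word=250):
    """Show each word of text in turn at a fixed screen position,
    overwriting the previous word (a minimal RSVP sketch)."""
    for word in text.split():
        # carriage return rewrites the same line; padding clears leftovers
        print("\r" + word.ljust(20), end="", flush=True)
        time.sleep(ms_per_word / 1000)
    print()
```

At 250 ms per word this presents text at about 240 words per minute without any eye movement, which is precisely the appeal, and the limitation, of the technique on space-constrained displays.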
2.1.4. Comprehension
Adult readers perceive whole words after each saccade. Readers associate
words into verbal relationships across sentences. This is called chunking.
Comprehension is like solving a problem in mathematics. Text can follow a
formula such as question and answer or problem and solution to show the
relationship between ideas. The reader then can relate the text to their own
requirements.
Comprehension and speed of reading are related but separate functions of
reading (Singer and Ruddell 1985). Where readers can identify words
automatically most of the effort can be devoted to comprehension.
The ability of a reader to comprehend text is affected by the interaction
between external information and memory (Klix, Krause, Hagendorf,
Schindler and Wandke 1989). Text comprehension is mostly interpretation.
Fatt (1991) found that word frequency, vocabulary load, and sentence
complexity influence the readability of biology texts. Text-related variables
such as sentence complexity and vocabulary load were examined in three
secondary school textbooks. Content and non-content words, technical and
non-technical words, rare and frequent words, and word repetitions were
considered.
Mean sentence length and number of syllables may not capture the causes of
textual difficulty in human and social biology texts. Sentence
length was not found to be an adequate measure of syntactic complexity.
Students judged the language as easy when there was low repetition of
technical content words and rare words, when the percentage of technical
words was low, or when the percentage of non-technical words was high.
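Surface measures of the kind Fatt examined are straightforward to compute. The sketch below is illustrative only: the tiny stop-list stands in for real word-frequency data, and the sentence splitter is deliberately naive.

```python
import re

# A stand-in for real frequency data: an illustrative list of very
# common words. Anything not on it is treated as "rare" here.
COMMON = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that"}

def surface_measures(text):
    """Return (mean sentence length in words, share of 'rare' words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    mean_len = len(words) / len(sentences)
    rare_share = sum(w not in COMMON for w in words) / len(words)
    return mean_len, rare_share
```

Fatt's finding is precisely that such surface counts, sentence length in particular, can fail to track the difficulty students actually report, which cautions against relying on them alone.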
2.1.5. Conclusion
Psychological processes show that the average reader does have limitations in
reading speed and the ability to read text. The computer screen needs to
optimize conditions to maximise the benefits that a computer can provide in
delivering information. However these cognitive studies have been done with
slides or print media and may not be applicable to computer displays.
2.2. Theory
Elkerton (1988) offers a task-analytic approach based on the GOMS model of
human-computer interaction. This theoretical approach allows online aiding
dialogues to be specified using the goals, operators, methods, and selection
rules of the computer interface. A fully specified GOMS model provides an
opportunity for usability problems to be identified analytically so that aiding
dialogues can be implemented effectively based on quantitative predictions of
performance time, learning rate, and user memory load. The theory can be
used to predict improvements in assistance and instructional dialogues without
extensive user testing.
The objective of an aiding dialogue is to improve current and long-term user
performance with the computer interface. Users are faced with the problem of
learning additional interface procedures while still trying to complete their
current task efficiently. This approach develops a principled method for the
development of online aiding where each component of the GOMS model
provides the framework for the aiding dialogue.
Elkerton believes that the research and development of aiding interfaces can be
approached in this task-analytic way. The task-analytic approach permits a
quantitative assessment of user interface problems that can be solved with
online aiding and, moreover, provides a method for predicting any improvements
in usability as a result of the aiding dialogues. However, this approach does
not offer solutions
for addressing the individual differences of users.
The GOMS model does not offer much insight into the design of text formats
except for the principle that minimising the number of actions improves the
interface.
2.3. Reading from the Screen
2.3.1. Introduction
As the amount of information in electronic form increases, reading and writing
with computers have become increasingly important tasks.
The task of reading can be broken into different categories depending on the
task of the reader. Computers are not usually used for leisure reading at
present; however, there are two major types of reading that are important:
1. Reading text to solve future problems i.e. learning
2. Reading text to solve a present problem
Each of these types of reading involves different reading techniques due to the
different reading goals. The first involves careful reading and re-reading to
gain an understanding of the topic. The second usually involves scanning to
find keywords and sections of text relevant to the task in hand. This in turn
means that width and white space need to be investigated for both kinds of
reading.
The first type of reading, reading to learn, is the sort of reading tested for in the
experiment. Usually the sort of text associated with learning is long such as
online journals, multimedia encyclopedias, etc. The experiment could be
repeated with much longer text passages (at least a number of pages) to test if
differences in width and white space are important to this kind of text. It
would not be practical to test many widths as the experiment would take too
long. A test design using white space or no white space at 5-inch and 9-inch
widths would determine if width and white space are important to longer text
passages.
An example of the second type is reading help files. This is generally
undertaken to solve a problem raised by some other task. It involves scanning
the text looking for key words and looking for information appropriate to the
problem in hand.
2.3.2. Reading Computer Presented Text
Belmore (1985) compared reading time and comprehension between paper and
computer presented text to provide more information about the differences
between the two methods of display. She suggests that the difference in
reading time may be because the subjects who read computer text were
unfamiliar with the technology.
The subjects were shown eight short passages. The paper version had each
passage typed on a single sheet of paper. The computer version used an Apple
II Plus 48K microcomputer that displayed 24 lines per screen and 40
characters per line. No passage required more than one screen.
Half the twenty subjects experienced four computer-displayed passages first
then the paper; the remainder had paper first. After each passage the subjects
were tested for comprehension by answering questions from a sheet.
Belmore found that the subjects took significantly longer to read the computer
presented text and scored significantly lower for comprehension. However, it
was found that with the paper condition first the difference was reduced for
comprehension and reading speed (see Table 1).
                        Presentation Mode
                   Computer   Paper   Overall Mean
Reading Time
   C-P Order          103       79        91
   P-C Order           85       89        87
   Overall Mean        94       84        89
Comprehension
   C-P Order           16       58        37
   P-C Order           52       69        61
   Overall Mean        34       64        49
Table 1: Mean reading time (in seconds) and comprehension (percent correct) as a function
of presentation mode and presentation order (Belmore 1985).
The researcher felt that the most reasonable explanation for the poorer
performance of the computer-first condition was that the subjects had no prior
experience with computers and so were distracted by the technology. However
when the subjects were familiar with the task they were more able to cope with
the video screen.
This research highlights one of the problems with research with computer
terminals - changes in display technology. The computer used in this
experiment was only capable of displaying 40 characters per line.
2.3.3. Variations in Reading Performance
Hansen and Haas (1988) present a framework of factors within which
variations among users’ results can be explained.
Quality and quantity of reading and writing depend upon page size, legibility,
responsiveness and tangibility. The four primary and three secondary factors
that the authors use to explain the influences that affect people using
computers to read and write are:
Primary Factors
•page size is the amount of text visible at one time
•legibility is the ease with which letters and words can be
recognised correctly
•responsiveness is the speed of system response to a user’s
action and has two components: the speed with which the
system begins to respond and the speed with which it completes
its response
•tangibility describes the extent to which the state of the system
appears to the user to be visible and modifiable by physical
apparatus.
Secondary Factors
•sense of directness is the user’s degree of feeling that the
changes on the screen are a direct result of the user’s actions
•sense of engagement is the feeling that the system is holding
an interesting, and even fascinating, conversation with the user
•sense of text is the user’s grasp of the structural and semantic
arrangement of the text: the absolute and relative location of
each topic and the amount of space devoted to each.
The sense of text can be tested by assessing the spatial recall of subjects.
Spatial recall is the ability to remember the page and line of specific items.
The experimenters studied how spatial recall is affected by viewing a text on
the computer screen and on paper. They hypothesised that spatial recall is
different viewing text on a computer screen than reading from print.
Ten subjects were used in the experiment, five performing the task on paper
and five on the personal computer. A text of a thousand words, equivalent to
nine pages or screens, was given. The text screen was the same size as the
paper page and was kept constant. The subjects were asked to place eight
sentences from that text in the correct locations on a blank page or screen after
viewing the text.
The results showed that readers can recall the location of information more
accurately from paper than from a personal computer.
In the following experiment Hansen and Haas (1988) theorised that it would
be easier to retrieve information to answer questions from paper than from a
computer screen. Subjects were asked to find answers to questions from an
1,800 word text. There were three conditions:
•paper
•advanced workstation
•personal computer (green monochrome monitor) used as a
terminal to a mainframe.
There were significant differences between the personal computer and the
other two conditions (workstation and paper), which did not differ from one
another. The personal computer took twice as long to retrieve information.
Most of the primary factors differed between the two computer conditions
leading to better performance from the workstation. The reasons the authors
give for the significant difference between the two computer conditions were
that the workstation had:
•twice the size page
•serifed font and bold headings
•faster response rate
•scrollbar for moving through the document.
Hansen and Haas followed this experiment with one to isolate two of the
factors from their previous experiment, page size and tangibility, and
investigate the effect on subjects’ performance. The experiment compared
performance for large and small windows and scrollbar and function keys. The
task tested the ability of subjects to read critically to determine the correct
arrangement of disordered text.
Results for mean time to complete the task showed paper was better than large
windows which in turn were better than small windows. It was found that the
method of moving through the text, scrollbar or function keys, made no
significant difference.
Every experiment that Hansen and Haas performed showed that paper
was superior for reading to any computer condition. The workstation results
were closer to those of paper than those of the personal computer.
2.4. Learning
The effects of screen size (12 inch or 15 inch) and text layout (well structured
or badly structured) on the learning of text when the text was displayed on a
personal computer were studied by de Bruijn, de Mul and van Oostendorp
(1992). The authors hypothesised that the study of text on a large screen would
have better learning performance than when a small screen is used; or, if no
difference in learning performance, that subjects need shorter learning times or
need to invest less cognitive effort in the learning process. Secondly they
suggested that a well-structured text would reduce learning time and cognitive
effort.
Fifty-six university students were used in the experiment. A summary and a
multiple-choice test were used to measure the amount of information
remembered. Efficacy of learning was determined by learning time and by
cognitive effort, measured by the performance on a secondary task.
Neither screen size nor text layout had a significant influence on the required
cognitive effort or on the degree of learning. There was a significant effect of
screen size on time to learn: subjects using a 15 inch screen required less
learning time than subjects using a 12 inch screen. Learning performance was
the same across both conditions. The subjects favoured a well-structured text
more than an ill-structured text but this did not result in better learning, shorter
reading time or less cognitive effort.
The authors suggested that more efficient integration processes in constructing
the semantic representation are responsible for this reduction in learning time.
Even the length of words can affect the user’s perception of text. Campos and
Gonzalez (1992) gave subjects a list of pairs of words with one concrete and
one abstract noun in each pair. They found that long nouns are rated as more
abstract, show less vividness of imagery, and have less meaningfulness than
short nouns, even when the meaning of the words is controlled.
2.5. User Manuals
User manuals are an important text resource for computer users. Wright (1988)
discusses the importance of design, recommending that manuals should make
information easy to find, easy to understand, and sufficient to undertake the
task. Readers need to be able to find the information they are looking for and
to understand it once found. Design features that make information easy to
find are:
•consistency
•signposting
•arrangement.
Information is easy to understand when it is:
•simple
•concrete
•natural.
Information is sufficient to complete the task when it is:
•complete
•accurate
•exclusive.
Wright points out that people operating a computer system need the
documentation on their screen because this involves a smaller disruption of the
ongoing task than does turning to find a page in a printed manual. He also
points out the benefits of online documentation:
•documentation can be displayed dynamically
•it can be easier to update online documentation
•online documentation can be made context sensitive.
Online computer documentation is defined by Shirk (1988) as "documentation
written specifically for access only by means of a computer terminal". Her
taxonomy is shown in Figure 1. It is seen that presentation of text becomes
more important as the documentation increases in complexity.
Figure 1: Taxonomy of online documentation (Shirk 1988). The taxonomy orders
documentation types by increasing writing complexity and progressive
self-containment: system messages, error messages, help facilities, online
reference guides, software and hardware tutorials, and computer-based
training (CBT).
Online documentation is utilised when:
•the paper manual is located away from the user
•the user hopes for a quick fix for the problem
•the paper manual is inadequate or does not provide an answer
•there is no paper manual e.g. shareware
•someone else has the manual
•the software is pirated (a potential purchaser may pirate the
software to evaluate it).
2.6. Online Help
Online help is a prominent element of many interfaces for computer systems.
At first, the text of online help was the same as the hard copy and was
displayed on screen when the user required help. This overwrote the screen the
user had been using. Currently online help systems provide a variety of types
of help text.
Computer user documentation needs to be presented to the user in a way that
facilitates the completion of tasks or the achievement of goals (Helander
1988). It should enable the user to understand and use the software. Online
help is not just for novices (Horton 1988); experienced users also need it.
Cherry, Fischer, Fryer, and Steckham (1989)1 investigated the effects of the
format used to display online help on user performance and attitudes.
The researchers hypothesised that the performance of application programmers
on editing tasks would be best with windowed help and worst with full screen
help. They also felt that attitudes would be most positive toward windowed
help and least positive toward full screen help.
They found that there were no significant performance or attitude advantages
for full screen, split screen or windowed help. Instead they observed that the
quality of help text is an important factor in determining the effectiveness of
an online help system. Comments from participants indicated that:
•there was too much information in help
•the task help unnecessarily repeated the field descriptions
•the paragraph format of the help text made it difficult to find
specific information
•subjects could not readily find the information they needed.
In their experiment the help was written in a narrative style with headings
followed by one or two paragraphs. The subjects were experienced
programmers with IBM PC experience. The computers used were IBM PC
XTs. The users spent more time with the split screen and windowed help
because they had to scroll through a greater number of help panels and they
were able to enter data while the screens were open.
The experimenters do not describe the format of their text apart from saying
that it had headings and paragraphs. It is possible that the different conditions
cancelled out the effects of different variables such as white space and text
width. Another possible problem is that the monitors may have been low
quality monochrome or CGA. It is possible that higher quality displays would
improve the performance of windowed or split screen text.
1 This article was first published as part of the 1988 Conference Proceedings of the
Society for Technical Communication under a different title (Cherry, J. M., B. M.
Fryer, and M. J. Steckham 1988).
2.7. Presentation Of Text
Presentation of text is important because when searching text the eyes skim
over the page guided by such text attributes as characteristic initials and word
lengths. Conspicuous symbols, words, or entire fragments of text can draw the
eyes (van Nes 1988).
One approach to screen design is to examine many software applications to see
if there is a consistent design. A common design geometry indicates that the
primary viewing area should be from columns 10 to 60 and from row 4
through 21 (Rubens and Krull 1988).
Kolers, Duchnicky and Ferguson (1981) used measurements of eye
movements to assess the readability of CRT displays. A computer controlled
the display of text on a television monitor while a television camera recorded
eye movements as the subject read the text. The experiment aimed to duplicate
the appearance of text as it would appear on a home television. The characters
were 7 by 9 dot matrix with descenders and were displayed as light characters
on a dark background. The text was displayed at 40 and 80 characters per line
(which resulted in lines of 35 or 70 characters of the same physical length),
single and double spacing, and five scrolling rates.
Twenty 300 word passages were used; each passage was followed by a set of
ten questions. The questions were designed to test whether the subject had
read the screen as half of the questions could not be answered from the
passage.
Twenty texts and twenty conditions were presented to twenty subjects each in
a different random order.
The analysis used the following performance measures:
1. total number of fixations to read a passage of text
2. number of fixations per line
3. number of words per fixation
4. rate of fixating, including the duration of the fixation and the
time the eye took to move from one fixation point to the next
5. fixation duration
6. total time.
The condition with 70 characters per line resulted in almost double the
fixations per line but a smaller total number of fixations, a larger number of
words per fixation, longer fixation duration, and shorter total time. There was
no significant difference in comprehension.
In their discussion, the authors suggested that because smaller, tightly packed
characters require less work for the eyes, lines should be 80 characters rather
than 40 characters. They also indicate that if character density is too great, line
finding would be difficult.
This research is useful in that it shows that eye movements can be used to
determine the legibility of text. It provides evidence that the format of text
affects eye fixations and thus reading speed and efficiency. Their
recommendation for 80 character lines appears to be the basis of present
guidelines e.g. (Phillips and Crock 1992).
However, the differences between the text shown to their subjects and that
viewed on more recent terminals are quite dramatic. Their experiment used
light text on a dark background, which also runs counter to normal computer
display practice and to recommendations (Phillips and Crock 1992).
Another point to consider is that when they refer to 40 and 80 character
lines they mean a different font size, not a different line length.
Duchnicky and Kolers (1983) did further research on the readability of text as
a function of window size. They used three different line lengths, two different
character densities and five different window heights. Each of these variables
significantly affected the speed of reading.
Subjects were presented text passages in different formats and allowed to
control the scroll rate of the passage interactively. A format that used a higher
scrolling rate was considered to improve readability. The experiment used
30cm black and white VT100 display terminals. The display characters were 7
× 9 dot matrix characters with 2-dot descenders. The 40 character full-width
lines had 2.1 characters per cm and the 80 character lines had 4.2 characters per cm.
This corresponds to a maximum line length of approximately 7⋅5 inches or 19
cm. The letter ’M’ was 3 mm high for both conditions, and 2 mm or 3 mm wide
respectively.
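The maximum line length quoted above follows directly from the character densities; a minimal sketch of the check (the 2.54 cm-per-inch conversion is standard):

```python
# Check the reported VT100 line lengths from the character densities:
# characters per line divided by characters per cm gives length in cm.
def line_length_cm(chars_per_line, chars_per_cm):
    return chars_per_line / chars_per_cm

cm_80 = line_length_cm(80, 4.2)  # 80-character lines at 4.2 chars/cm
cm_40 = line_length_cm(40, 2.1)  # 40-character lines at 2.1 chars/cm
print(round(cm_80, 1), round(cm_40, 1), round(cm_80 / 2.54, 1))
# → 19.0 19.0 7.5
```

Both formats give the same maximum physical line length of about 19 cm (7.5 inches), consistent with the figures in the text.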
The text width was divided into two formats, 80 and 40 characters per line,
and three conditions:
•full width: 78 characters and 39 characters
•two thirds: 52 characters and 26 characters
•one third: 26 characters and 15 characters.
The largest possible screen was 78 characters by 20 lines and the smallest was
one line of 15 characters.
Lines that were either the full width of the screen or two-thirds of the screen
showed a mean reading time 25% faster than lines of one-third screen width.
When the text was displayed at 80 characters per line, reading speed was 30%
faster than for 40 characters per line. Text in four line windows was read as
well as text in twenty line windows. Text in one or two line windows resulted
in a 9% slower reading rate. Comprehension was found not to vary as a
function of window size.
Kearney (1988), in discussing the difference between print and online help,
stated that online information had the following features:
•typically a single font
•only high-end monitors able to display graphics
•lines rather than pages
•24 lines by 80 columns
•typically black and white or few colours
•non-portable terminals.
Improvements in computer technology have negated all of these observations.
An entry-level IBM PC computer uses an SVGA screen and future
improvements in screen size and resolution can be expected. A consequence of
this is that a modern computer can display significantly wider texts with a
much greater range of font size as well as font types, as shown by the data in
Table 2.
Monitor   Number of dots   Descender   Characters per line   ’M’ height × width (mm)   Max. width of line
VT100     7 × 9            2 dots      78                    3 × 2                     18⋅7 cm
SVGA      18 × 24          12 dots     93                    5 × 4                     25⋅5 cm
Table 2: Comparison of VT100 and SVGA video displays
The letter ’x’ has 18 × 24 dots and the descender for the letter ’y’ has 12 dots on
a typical VGA screen displaying MS Serif 12 point font. The letter ’M’ is 5 mm
high and 4 mm wide. With the initial letter capitalised and the rest lower case
the line length is 93 characters. The screen can be a maximum of 10 inches or
25⋅5 cm and with borders and window frames about 9⋅5 inches or 24 cm.
Horton (1990) points out that resolution sets the smallest legible font size.
According to Horton low cost monitors have a resolution of 50 dots per inch
whereas expensive monitors have 100 dots per inch.
The situation is more complex than this. It is possible for a user to vary the
font size and font type when using a graphics interface such as Windows 3. It is
possible that the greater widths now possible will affect the user more than the
limited width of the VT100’s. The increased screen resolution (see Table 2)
means that the user can read text more comfortably and differences may
become apparent that were not visible then.
Another problem with this research is that its prime concern was with the
readability of scrolled text. The subjects used a knob to control the rate of
screen scrolling. Most current display terminals do not have this facility. It
could be made available with software but if it has been done it is very
uncommon. In a typical windows system the user can either page through the
display or use a scroll bar that moves the text. The scroll bar requires constant
user interaction to keep the text moving. Duchnicky and Kolers (1983)
suggested that the upward movement of the scrolling text made longer lines
easier to read. This effect may not occur where the text is not scrolled for the
user. Hansen and Haas (1988) found that the method of scrolling did not affect
the mean time to complete a task where the subjects could scroll either by
scroll bar or function keys.
Duchnicky and Kolers (1983) report that the two-thirds and full-screen widths
produced equal reading times that were 25% faster than the one-third screen
condition. It is not valid to extrapolate from this research and say that it does
not matter how wide the screen is. A text width that ranges between 12⋅5 cm
and 18⋅7 cm may, for computer screens, be reasonable for the reader whereas a
line of 25⋅5 cm may not.
Tullis (1981) tested different display formats for a computerised telephone line
testing system. He tested four different display conditions:
•Narrative used whole words and phrases and electrical
measurements to show line testing results. A single sentence on
the first line provided a summary of the test results.
•Structured displayed key information in a frame at the top of
the screen. The results of the test were organised into logical
categories. Electrical measurements were presented in tables.
Data descriptions were reduced or deleted.
•Black-and-white graphics used the same format as the
structured condition with a schematic of the telephone line.
Various patterns and shapes indicated different aspects of the
telephone circuit.
•Colour graphics used the structured format also but replaced
shading with colour coding.
Each of the eight subjects was trained and tested in the ability to interpret
the testing results. Questions were asked about the displayed information.
The experiment showed that accuracy did not significantly vary with format.
However format did influence response time. Subjects graded the different
formats on a scale from excellent to poor. Format was significant for these
gradings. Colour graphics was preferred most and the narrative least. Overall
the two graphic formats had shorter response times, fewer training exercises,
and the subjective assessment of quality was higher. With additional practice
the structured format resulted in response times equal to the graphic formats.
Tullis recommends that structured information have the following features:
•key information displayed in a prominent position
•data should be logically chunked and each chunk kept separate
•a fixed, tabular format is the best presentation method
•the information should be concise.
Tullis (1983) estimates that, for the Bell System computer system (Automated
Repair Service Bureau, ARSB), an additional 55 person-years would be
needed to extract information if the time required for each screen was
increased by one second. His 1981 study shows that by changing the screen
format from a narrative to a structured format, savings in reading time of 3⋅3
seconds per screen are possible. This is a saving of 79 person-years.
Guidelines for screen presentation often recommend that screen design should
minimise the complexity of presentation or maximise the visual predictability.
A general technique for measuring the complexity of text presentation was
described by Tullis (1983).
Rectangles are drawn around every distinct item on the page. The rectangles
do not overlap. Each of these rectangles is an event. These events are used to
provide measures of system order and distribution order. System order is a
count of the unique widths and heights of each rectangle on the page
(Bonsiepe 1968). Tullis argues that the concept of distribution order is more
appropriate for computer displays and uses the formula:

C = −N Σ (n = 1 to m) pₙ log₂ pₙ

where:
C = complexity of the system expressed in bits
N = total number of events (widths or heights)
m = number of event classes (the number of unique widths or heights)
pₙ = probability of occurrence of the nth event class (based on the frequency
of events within that class)
This formula is based on the work of Shannon and Weaver (1949) and
provides a figure for the complexity of the page that can be used to compare
different text presentations. Tullis applied this theory to his 1981 research on
layout with the following results:
Narrative Format:
22 horizontal distances in 6 unique classes = 41 bits
22 vertical distances in 20 unique classes = 93 bits
overall complexity = 134 bits
Structured Format:
18 horizontal distances in 7 unique classes = 41 bits
18 vertical distances in 8 unique classes = 93 bits
overall complexity = 96 bits
If this theory is valid it should also apply to full text displays; a wider screen
display is less complex because there are fewer lines of text on the screen for a
given number of words.
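The distribution-order measure can be expressed as a short program. This is a sketch only; the rectangle widths used below are hypothetical values for illustration, not data from Tullis’s displays:

```python
from collections import Counter
from math import log2

def distribution_complexity(events):
    """Tullis's distribution-order complexity in bits:
    C = -N * sum over event classes of p_n * log2(p_n),
    where N is the total number of events (e.g. rectangle widths
    or heights) and p_n is the relative frequency of the nth class."""
    n_total = len(events)
    counts = Counter(events)
    return -n_total * sum(
        (c / n_total) * log2(c / n_total) for c in counts.values()
    )

# Hypothetical example: 8 rectangle widths falling into 3 unique classes.
widths = [10, 10, 10, 10, 25, 25, 40, 40]
print(distribution_complexity(widths))  # → 12.0
```

A display whose rectangles all share a single width scores 0 bits; more unique widths, more even class frequencies, or simply more events raise the complexity, which is why a wider display with fewer lines of text scores lower for a given number of words.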
2.8. Line Width
One book of guidelines (Document Design Project 1981) suggests for printed
documents that too much or too little information on a line can make it harder
to read. The best line length is between 50 and 70 characters. This line length
is not so short that the reader’s eyes must keep jumping from line to line and
not so long that the eyes become tired. The length of the line is affected by the
font as well as by the number of characters.
Horton (1990) states that longer lines are tiring to read, make it harder to
find the start of a new line, and require more saccades per line.
Most guidelines refer to the work of Tinker and Paterson (1969) when
recommending line width. Their influential research on the legibility of print
was carried out between 1940 and 1969 when Tinker’s book "The Legibility of
Print" was published. These experiments dealt with printed material and were
intended to provide guidelines for newspaper and book publishers.
The first investigation involved determining just what line widths were in use.
They surveyed 1,500 journals and books. The most common line width for
journals was near 24 picas or about 4 inches and for books around 21 picas or
3⋅5 inches.
In an experiment using 10 point type set solid and with 435 readers, Tinker
and Paterson found that very short lines of 9 picas (about 1⋅5 inches) and very
long lines of 43 picas (about 7⋅2 inches) were read significantly more slowly
than widths in between. The 43 pica and 9 pica lines were also judged least
legible by 224 readers.
In another experiment, the eye movements of the subjects were analysed. The
fixation frequency, pause duration, and perception time were significantly
larger for both 9 pica and 43 pica lines compared to 19 pica lines. However,
they also found that the 43 pica lines had an increased regression frequency of
56⋅7%. The major problem the subjects had was in finding the beginning of
each new line. They hypothesised that this upset the usual reading process and
hindered the re-establishment of efficient eye movements for each line.
Their next two studies looked at 12 point type, set solid and with 2 point
leading respectively. The first study ranged from 17 to 45 picas. Line widths
from 17 to 37 picas were found to be equally legible but line lengths of 41 and
45 picas significantly slowed reading speed. The second experiment also found
that the longer lines of 41 picas slowed reading speed as did short lines of 9
picas.
2.9. White Space
White space, the blank areas on the screen, is considered a desirable feature.
Gribbons (1988) states that using white space appropriately affects the
legibility and attractiveness of text. Horton (1990) classifies white space into
active and passive. Active white space serves to organise information whereas
passive white space distinguishes information from its surroundings.
Margins are passive white space necessary for people reading paper
documents. Margins enable the text to stand out from the other objects in the
reader’s view and allow the reader to turn pages without obscuring text. Horton
claims that margins are not required for screen displays because the window
border and screen bezel separate the text from the surrounds. This is also
supported by Kearney (1988) who unequivocally states that right and left
margins should be eliminated. However, neither author offers any
experimental evidence to support their assertions.
Horton (1990) does qualify this observation; information should not be packed
too tightly where the user will scan the text. He advocates that text for
continuous reading should be condensed based on the work of Tullis (1981)
but does not provide any indication of the factor by which it should be
condensed. He does believe that active white space is important to separate
different parts of the text and highlight relationships. This observation
conforms with Tullis’s (1981) use of structure to make displays more readable.
Structuring text necessarily creates more white space than unstructured text.
Rubens (1986) provides another recommendation for the placement of text. He
suggests a ten character margin. Surprisingly, Horton advocates this layout
even though he previously stated that margins were unnecessary for screen
displays.
Smith and McCombs (1971), in their paper based research, predicted that
readers would prefer a story with extensive white space to a story with less
white space. They used four versions of the same story:
•version 1 - low white space, longer words and sentences
•version 2 - low white space, shorter words and sentences
•version 3 - large amount of white space, longer words and
sentences
•version 4 - large amount of white space, shorter words and
sentences.
The extra white space was made by breaking the text into more paragraphs and
using more open punctuation. The hypothesis predicted that readers would
prefer version 1 the least and version 4 the most.
The text was set in fully justified 2 inch columns, printed and transferred to
slides in side-by-side pairs in all possible combinations. Twenty-four subjects
were used. The experiment was divided into three phases. During the first
phase, subjects were allowed one second to view the text and then were
required to choose which column of text would be preferable to read. For the
second phase, the subjects read the same slides for six seconds and were asked
to choose based on the appearance and reading ease of the text. They were not
told that it was the same text.
The third phase involved each subject reading one of the four versions of the
story, rating it on a five point scale for dislike/like, difficult/easy, and
boring/interesting. They were then required to answer a ten-question quiz
testing comprehension of the material.
The results were arrayed on an interval scale of preference using Thurstone’s
paired-comparisons technique. This technique involves the assignment of scale
values to statements and asking subjects to respond to the statements
(Zikmund 1991). The data supported the theory; reader preference improved
for the shorter words and sentences and for more white space. The results of
the comprehension tests showed no differences between conditions.
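Thurstone’s technique can be sketched as a simplified Case V computation. The preference proportions below are invented for illustration; they are not Smith and McCombs’s data:

```python
from statistics import NormalDist

def thurstone_scale(pref):
    """Simplified Thurstone Case V scaling. pref[i][j] is the proportion
    of judges who preferred item j over item i. Each proportion is
    converted to a z-score with the inverse normal CDF, and an item's
    scale value is the mean z-score of its column (diagonal excluded)."""
    inv = NormalDist().inv_cdf
    k = len(pref)
    return [
        sum(inv(pref[i][j]) for i in range(k) if i != j) / (k - 1)
        for j in range(k)
    ]

# Invented proportions for three versions of a story (e.g. version 3
# preferred over version 1 by 90% of judges).
p = [
    [0.5, 0.7, 0.9],
    [0.3, 0.5, 0.8],
    [0.1, 0.2, 0.5],
]
scores = thurstone_scale(p)
print([round(s, 2) for s in scores])  # version 3 scores highest
```

The resulting scale values order the items on an interval scale of preference, which is how the version-by-version comparisons above were arrayed.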
This research provides evidence that active white space is important to reading
preferences and supports Horton’s (1990) contention that active white space
helps to organize information. Unfortunately the researchers did not indicate
the margins that were used nor did they collect any data on reading speed.
3. METHODOLOGY
3.1. Subjects
Thirteen undergraduates of the University of New England, Northern Rivers
volunteered to sit for the experiment. No compulsion in any form was applied
to the subjects apart from appeals to their altruism. Sessions were conducted at
times to suit the subjects. The subjects were told that the experiment would
take between half and one hour.
All subjects had some experience with the operating environment and with the
subject matter (Microsoft Word) of the text passages.
Subjects were presented with the following instructions:
•This is not a test of how fast you can go or how well you do.
Please act as if you were genuinely consulting help files.
•Take care using the mouse. Do not double click. If you do make
a mistake, call me and I will see what I can do.
•After every set of questions there is a screen called next - if you
require a rest, stop at this point.
•You will be asked to rate the screen of text - do not base your
assessment on the font used.
3.2. Design
The independent variables, width and white space, were arranged in a repeated
measures, two-factor design. The width of the text was varied in 1 inch
increments from 2 inches to 9 inches and the text passages had either passive
white space or no passive white space. Thus there were 16 different text
passages to be read by each subject. See Figure 2 for an example of the 2 inch
no white space condition and Figure 3 for an example of the 5 inch white space
condition.
Figure 2: 2 inch no white space condition
Figure 3: 5 inch white space condition
The text passages were taken from the "help" files of Microsoft Word 2.0.
Each passage was edited to fit the 2 inch wide block so that there were
approximately 150 words in each passage.
The text passages were presented in random order to balance out learning and
fatigue effects. The text was varied for each subject so that no two subjects had
the same text for any condition to balance any effects from differences in text.
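The condition schedule described above can be sketched as follows. The seeding scheme is a hypothetical reconstruction, but crossing the 8 widths with the 2 white-space levels reproduces the 16 conditions:

```python
import random
from itertools import product

def subject_schedule(seed):
    """Cross the 8 text widths (2 to 9 inches) with the two white-space
    levels to obtain the 16 conditions, then shuffle them independently
    for each subject to balance learning and fatigue effects."""
    conditions = list(product(range(2, 10), ("white space", "no white space")))
    random.Random(seed).shuffle(conditions)
    return conditions

schedule = subject_schedule(seed=7)
print(len(schedule))  # → 16
```

Because each subject’s shuffle uses a different seed, no two subjects see the conditions in the same order, which is the balancing the design relies on.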
3.3. Data Collection
The software automatically recorded the time to read each passage of text and
the time to answer each set of questions for each subject.
After reading each text passage, each subject was required to rate readability
on a scale from very good to very poor. This was done by left clicking the
mouse button to select the appropriate option. This gave the "satisfaction"
rating for each screen condition.
The rating question was followed by three questions based on the passage just
read. The subject was presented with four or five possible answers including a
"Do not know" option. Again the mouse was used to register the chosen
option. The score for the three questions was totalled and the result stored.
This gave a "comprehension" score for each passage.
At the end of the experiment subjects were given the opportunity to record any
comments they wished about the experiment. These were stored in a separate
file.
3.4. Method of Collection
Introduction
A laboratory experiment was chosen to test the hypotheses and to collect data.
An experiment has the advantages of being able to control conditions such as
font whilst changing the variables of interest. The software enabled the easy
and accurate collection of subject responses and measurement of reading and
answering times.
A test instrument was developed to run on IBM PC compatible computers
using Windows 3.1 and VGA terminals. Subjects were asked to read a short
passage of text and then answer some brief questions about the passage.
Design of Test Instrument
The software was constructed using Objectvision. Objectvision is an object
oriented, graphics-based, front end program designed to build simple
Windows database applications. Building a program in Objectvision consists
of:
•creating forms
•applying rules
•reading from files
•writing to files.
Compared to normal programming environments such as COBOL, C, and
Pascal, Objectvision has a number of peculiarities. The important ones to this
project are:
•Individual data items cannot be stored or read; only a tuple or
row of data.
•Every data field name, every button name and every form name
must be distinct as most objects are global. Thus if a button is
named "Button A", another button "Button A" cannot be created.
If "Button A" is copied to another form then it will not only be
identical but any changes to the actions for the copy will also
change the original. Different types of objects, however, can
share the same name.
•Objectvision dispenses with iteration. Figure 4 appears to show
iteration but this is a simplification. The apparent iteration is
achieved by a series of conditional statements. This can be seen
by referring to Appendix D where the form "Next Screen"
controls the order of text screens.
•Objectvision applications are not compiled and there is no
series of instructions as there is for standard computer
languages.
•If there is a valid value in a data field that is a key to an indexed
database file then Objectvision can provide a read link to that
database file. For this program, this means that a valid subject
id number will mean that the rest of any tuple will also be
displayed in any relevant fields. Thus the width codes and
screen order numbers are read from database files as soon as a
valid subject id is entered.
An overview of the structure of the program can be seen in Figure 4. The
structure of the program is very simple.
[Figure 4 shows the structure of the Objectvision program as a chart. Main:
1. initialise variables; display the "Initial Screen"; display the
"Instructions" form; then the Body loop, repeated while counter < 17:
display the "Next Screen" form, display the text screen and its question
screen, calculate TimeRead and TimeAnswer, assign the Rating, calculate the
Comprehension score, increment the counter and write the data tuple; finally
display the "Comments" form, write the comment tuple and display the
"Conclusion" form.]
Figure 4: Structure chart for the Objectvision program
When the "Initial Screen" form is opened all data fields are cleared. This is
necessary as the program was designed to be used repeatedly. Unless cleared,
the data fields would contain and display the previously entered values.
Literals are assigned to the variables so that the data file contains the names of
the variables, making importing of the values easier. Entering a valid subject
id number (hard coded for the first sixteen subjects) automatically locates and
reads into the program the correct tuple in the database files for the order of
screens and the order of conditions.
When the subject is ready, the "Begin" button is clicked. A check is made to
determine if a valid subject number was entered. The program was designed to
handle up to fifty subjects with the first 16 subjects to be assigned to different
runs. Thus the first 16 versions of the program had the subject id hard coded.
When a valid subject id is entered the "Initial Screen" is closed and the
"Instructions" form is opened.
On closing the "Initial Screen" the form counter is initialised to 0, a code is
assigned preparatory to displaying the next screen, and a starting time for the
experiment is assigned.
Opening the "Instructions" form assigns a starting time for reading the form.
This data was not used in the analysis. Clicking the mouse anywhere inside the
form initialises the form counter, closes the form and opens the "Next Screen"
form. When the form is closed the code for the introduction screen and the
time taken to read it are assigned.
On opening the "Next Screen" form, the form counter is incremented. Clicking
the button labelled "Next Screen" on the form selects the next form to appear
depending on the value of the counter. This also closes the form and assigns
the appropriate code for the text condition. When the form is closed the tuple
is written to the data file.
The next screen the subject sees is the first text passage. The screens are
labelled 1 to 16. This makes it possible to use a database file to control the
order of screens. Instead of Objectvision being asked to open a form with a
literal as is normally done, the open action is asked to open a variable, Form 1
in this case. The form labelled "1" has text about the template features of Word
for Windows. Opening the form assigns the time on the system clock to the
variable TimeStart. Clicking anywhere in the window closes the form and
selects the form "Template". In Appendix D, this is the page labelled "Text
Item - Strip 1 of 1". Closing the form assigns the present time to TimeEnd.
Objectvision expresses time as a decimal. The formula:
+86400*(TimeEnd-TimeStart)
converts the time to seconds. This result is assigned to SpeedRead1 initially
and then to TimeRead. (During program development, each data item was
assigned to a separate variable to aid debugging; as this may prove useful in
the future it was left intact.)
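Since Objectvision represents clock readings as fractions of a day, the conversion amounts to the following. This is a minimal Python sketch of the formula above; the rounding to whole seconds reflects the treatment of data described in section 3.5.

```python
# Objectvision expresses time as a decimal fraction of a day, so an
# interval multiplied by 86400 (seconds per day) gives seconds.
SECONDS_PER_DAY = 86400

def elapsed_seconds(time_start, time_end):
    """Convert a fractional-day interval to whole seconds."""
    return round(SECONDS_PER_DAY * (time_end - time_start))

# Example: a reading that starts at 0.5 (noon) and ends 62 seconds later.
start = 0.5
end = start + 62 / SECONDS_PER_DAY
print(elapsed_seconds(start, end))  # 62
```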
The form labelled with the description of the text, "Template" in this case, has
the rating and comprehension questions. Opening the form assigns the time to
TimeStart. Clicking the button "End Template" activates a simple decision tree
which will not let the user proceed if all the questions have not been answered.
If the form has been completed then the code:
@IF(@LEFT(Template 1,1)="1",@ASSIGN(Template,1),@ASSIGN(Template,0))
@IF(@LEFT(Template 2,1)="3",@ASSIGN(Template,Template+1),0)
@IF(@LEFT(Template 3,1)="1",@ASSIGN(Template,Template+1),0)
strips the numbers from the selected answers and adds the number to the
variable "Template" if the answer was correct. The same logic is followed for
each different text screen. When the form is closed the time is assigned to
TimeEnd and the time to answer the questions, TimeAnswer is calculated as
for TimeRead. The comprehension score, "Template", is assigned to score and
the satisfaction rating, R_Template, is assigned to ScreenRate.
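The scoring logic of the @IF/@LEFT rules, which strips the leading digit from each selected answer and compares it with the answer key, can be sketched in Python. The answer strings below are hypothetical illustrations, not the actual question content; only the key codes "1", "3", "1" come from the rules quoted above.

```python
def comprehension_score(answers, key):
    """Score one question screen the way the @IF/@LEFT rules do: take the
    first character of each selected answer and add 1 for every match."""
    return sum(1 for chosen, correct in zip(answers, key)
               if chosen[:1] == correct)

# Hypothetical selections for one question screen.
selected = ["1. Use a template", "2. Open a new file", "1. Save as normal"]
print(comprehension_score(selected, ["1", "3", "1"]))  # 2
```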
The program then opens the "Next Screen" as before where the data in the
variables TimeRead, TimeAnswer, Score and ScreenRate are written to the
data file.
When the counter reaches 17, all the text passages have been viewed and the
penultimate screen "Comments" is opened. This screen allows the subject to
enter any comments about the experiment. When closed, the comments and
the time the experiment took to complete are saved to the comments file. The
completion time was not used in the analysis. The final screen thanks the
subject.
Appendix D provides more detail of the program design. Most objects within
Objectvision can have actions assigned to them and along with the actions
decision trees can be constructed. Appendix D shows a subset of the actions
and decisions used in this program. This output is generated by Objectvision
and unfortunately does not list all the statements associated with the action.
Not all of the decision trees have been printed as essentially the same rules are
followed each pass of the loop.
Description of Software
The mouse is used to select answers and to continue with the next screen,
allowing for direct comparisons between the times for each screen as the
subject did not have to hunt for different keys. The only typing the subject is
required to do is to enter comments at the end of the experiment.
The first screen ("INITIAL SCREEN") is used to verify the subject
identification number.
The second screen ("INSTRUCTIONS") provides a standard set of instructions
for each subject and gives a standard reading time for each subject.
The third screen ("NEXT SCREEN") appears at this point and after each set of
questions. It provides three functions:
•it allows the user a chance to rest so that fatigue is less of a
factor
•it provides a more reasonable simulation of real life usage of
online documentation
•it increments a counter that controls the order of screens.
The sixteen text screens, detailed in Appendix A together with associated
question screens, appear in random order for each subject. Table 3 shows the
order of presentation of text screens for the thirteen subjects. The column
heading "Id" refers to the unique identifier for each subject. The column
headings "1", "2" etc. refer to the order in which the screens are displayed, i.e. 1
is displayed first and 16 is displayed last. The numbers in the cells refer to the
form number. Thus Subject 1 viewed Form 1 first, followed by Form 11 and
so on, finishing with Form 10. Each passage of text is associated with a form
number throughout the experiment, e.g. Form 1 has the passage describing
templates. Numbers are used to allow Objectvision to display screens in
different orders. This information is stored in a database table and accessed by
Objectvision.
Order of Screens - 1 to 16
Id  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
 1  1 11  3  7  9 14 12 16  5 13 15  4  2  8  6 10
 2 13  6  5  8 11 14 16  3 15  1  2  7 12  9 10  4
 3 14 13 10 16  6  7  2  3 15 12 11  8  9  1  4  5
 4 12 15  2 10  9  4  5  1  8  6 14 11  3  7 16 13
 5 15 11 13  3 10  7  1  4 16  6 14  2  9  5  8 12
 6  7  5  8  3  9 13 11 10 12  6  1 15  4 16  2 14
 7  4  5  1  2  8 11 13  6 15 14  9  7 16 12  3 10
 8 10  8  7  1 12  5 14 16 15 11  4  3  2  6 13  9
 9  4  1 11  7  6 10 13  9 14 12  8 16  3  5  2 15
10  3  6 11 14 15  9 10  7  5  1  8  4 16 13  2 12
11 14 16 13 10  2 11  4  7  5 12  3  9  1  6 15  8
12 10 13  6  2 15 14  1  5 12 16  3  8  7 11  4  9
13 14 11  7  1  4  8  6 12 13 10 16  5  2  3  9 15
Table 3: Order of presentation of text screens for all subjects
For each of the subjects, text width and white space are systematically
assigned to a different screen number. The effects of differences in text are
then evenly distributed. Each experimental condition is assigned a code:
w = White Space
n = No White Space
2 = 2 inch .... 9 = 9 inch
Thus "w2" is the condition where the text is in a 2 inch wide block, a 1 inch
margin on the left and the remainder of the window white space. Similarly
"n4" is the condition where the text is in a 4 inch wide block and the window
frame is as close to the text as possible i.e. no margins (see Appendix A for the
complete program for Subject 1). Table 4 in conjunction with Table 3 shows
the order of the assignment of conditions to screens. As before, "Id" is the
subject number and the "F" stands for Form. Thus Subject 1 sees Form 1 first.
Form 1 is a "White Space" condition and the text is 2 inches wide. The next
screen is Form 11. Form 11 is a "No White Space" condition and the text is 4
inches wide.
Form Number
ID F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13 F14 F15 F16
1 w2 n4 w4 w8 n2 n7 n5 n9 w6 n6 n8 w5 w3 w9 w7 n3
2 n7 w8 w7 n2 n5 n8 w2 w5 n9 w3 w4 w9 n6 n3 n4 w6
3 n9 n8 n5 w3 w9 n2 w5 w6 w2 n7 n6 n3 n4 w4 w7 w8
4 n8 w3 w6 n6 n5 w8 w9 w5 n4 n2 w2 n7 w7 n3 w4 n9
5 w4 n8 w2 w8 n7 n4 w6 w9 w5 n3 w3 w7 n6 n2 n5 n9
6 n5 n3 n6 w9 n7 w3 n9 n8 w2 n4 w7 w5 n2 w6 w8 w4
7 n3 n4 w8 w9 n7 w2 w4 n5 w6 w5 n8 n6 w7 w3 n2 n9
8 w2 n8 n7 w9 w4 n5 w6 w8 w7 w3 n4 n3 n2 n6 w5 n9
9 n5 n2 w4 n8 n7 w3 w6 w2 w7 w5 n9 w9 n4 n6 n3 w8
10 n5 n8 w5 w8 w9 w3 w4 n9 n7 n3 w2 n6 n2 w7 n4 w6
11 w9 n3 w8 w5 n5 w6 n7 w2 n8 w7 n6 w4 n4 n9 n2 w3
12 w6 w9 w2 n6 n3 n2 n5 n9 w8 n4 n7 w4 w3 w7 n8 w5
13 n3 w8 w4 n6 n9 w5 w3 w9 n2 w7 n5 w2 n7 n8 w6 n4
Table 4: Experimental condition associated with each screen
Thus the order of presentation of the text for each subject is given in Table 5.
As can be seen from this table, no subject viewed the same order of
presentation.
Screens
ID F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13 F14 F15 F16
1 w2 n8 w4 n5 w6 w9 w5 n3 n2 w3 w7 w8 n4 n9 n7 n6
2 n6 n8 n5 w5 w4 n3 w6 w7 n4 n7 w8 w2 w9 n9 w3 n2
3 w4 n4 n7 w8 n2 w5 n8 n5 w7 n3 n6 w6 w2 n9 w3 w9
4 n7 w4 w3 n2 n4 n6 n5 n8 w5 w8 n3 w2 w6 w9 n9 w7
5 n5 w3 n6 w2 n3 w6 w4 w8 n9 n4 n2 n8 w5 n7 w9 w7
6 n9 n7 n8 n6 w2 n2 w7 n4 w5 w3 n5 w8 w9 w4 n3 w6
7 w9 n7 n3 n4 n5 n8 w7 w2 n2 w3 w6 w4 n9 n6 w8 w5
8 w3 w8 w6 w2 n3 w4 n6 n9 w5 n4 w9 n7 n8 n5 n2 w7
9 n8 n5 n9 w6 w3 w5 n4 w7 n6 w9 w2 w8 w4 n7 n2 n3
10 w5 w3 w2 w7 n4 n7 n3 w4 w9 n5 n9 w8 w6 n2 n8 n6
11 n9 w3 n4 w7 n3 n6 w5 n7 n5 w4 w8 n8 w9 w6 n2 w2
12 n4 w3 n2 w9 n8 w7 w6 n3 w4 w5 w2 n9 n5 n7 n6 w8
13 n8 n5 w3 n3 n6 w9 w5 w2 n7 w7 n4 n9 w8 w4 n2 w6
Table 5: Order of Text Conditions for Each Subject
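The rows of Table 5 are simply Table 3's screen orders mapped through Table 4's form-to-condition assignment. A sketch for Subject 1, using the first rows of those two tables:

```python
# Subject 1: screen order (Table 3) and form-to-condition map (Table 4).
order = [1, 11, 3, 7, 9, 14, 12, 16, 5, 13, 15, 4, 2, 8, 6, 10]
form_condition = {1: "w2", 2: "n4", 3: "w4", 4: "w8", 5: "n2", 6: "n7",
                  7: "n5", 8: "n9", 9: "w6", 10: "n6", 11: "n8", 12: "w5",
                  13: "w3", 14: "w9", 15: "w7", 16: "n3"}

# Mapping the order through the assignment reproduces row 1 of Table 5.
presented = [form_condition[form] for form in order]
print(presented[:4])  # ['w2', 'n8', 'w4', 'n5']
```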
3.5. Treatment of Data
The times for reading and answering questions were calculated in Objectvision
as decimal numbers. The results of the time calculations were converted to
seconds and rounded to whole numbers.
The data from each subject was saved to a text file. Each text file was loaded
into a database program, Microsoft Access, and imported into a single table.
The query features of the database were used to re-order the data into a form
suitable for analysis. The data was then exported into the Microsoft Excel
spreadsheet program for statistical analysis.
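The conversion and re-ordering steps can be sketched in Python rather than Access and Excel. The field names follow the variables written by the Objectvision program, but the file contents here are hypothetical stand-ins.

```python
import csv
import io

# A stand-in for one subject's text file of results.
subject_file = io.StringIO(
    "Subject,Condition,TimeRead,TimeAnswer,Score,ScreenRate\n"
    "1,w2,62.31,34.85,2,2\n"
    "1,n8,59.62,35.15,3,1\n"
)

rows = []
for record in csv.DictReader(subject_file):
    # Times arrive as decimal numbers and are rounded to whole seconds.
    record["TimeRead"] = round(float(record["TimeRead"]))
    record["TimeAnswer"] = round(float(record["TimeAnswer"]))
    rows.append(record)

print([r["TimeRead"] for r in rows])  # [62, 60]
```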
4. ANALYSIS OF DATA
4.1. Hypotheses and Null Hypotheses
Expectations of previous research are that text width and passive white space
are important to reading speed, comprehension and satisfaction. On the basis
of the literature, it would be reasonable to expect that:
1. Comprehension of text increases as the width of text increases
until an optimum point is reached, at 5 inches. Thereafter
comprehension decreases.
2. Speed of reading increases as the width of text decreases from 9
inches to 2 inches.
3. Readers will experience greatest satisfaction with text that is 5
inches wide and less satisfaction with text that is wider or
narrower.
4. Text without passive white space (removing margins) is more
difficult to comprehend than the equivalent text with margins.
5. Text without passive white space is slower to read than text
with white space.
6. Text without passive white space is less satisfying to read than
text with white space.
This study determines if there is a relationship between the dependent
variables
•reading speed
•comprehension
•satisfaction
and the independent variables
•width of text
•amount of white space.
The hypotheses and null hypotheses tested are:
1. Ho: The means for width of text and reading speed are equal.
2. Ho: The means for width of text and time required to answer
questions are equal.
3. Ho: The means for width of text and comprehension are equal.
4. Ho: The means for width of text and satisfaction with the
readability of the text are equal.
5. Ho: The means for passive white space and reading speed are
equal.
6. Ho: The means for passive white space and time required to
answer questions are equal.
7. Ho: The means for passive white space and comprehension are
equal.
8. Ho: The means for passive white space and satisfaction with the
readability of the text are equal.
4.2. Results and Discussion
The means and standard deviations for the dependent variables are presented
in Table 6. Time Read is the time it took for the subject to read the text
passage. Time Answer is included as the time it takes a person to answer
questions is another aspect of comprehension. Comprehension is the score
obtained for questions about the text passage. Satisfaction is the legibility
rating given to the text passage just read. This table provides an overview of
the results obtained from the experiment.
MEANS AND STANDARD DEVIATIONS FOR DEPENDENT VARIABLES
Time Read Time Answer Comprehension Satisfaction
Width M SD M SD M SD M SD
NO WHITE SPACE
2 Inch 62.31 29.72 34.85 16.52 2.46 1.13 1.92 0.95
3 Inch 64.31 26.31 36.69 24.14 3.00 1.29 1.77 1.17
4 Inch 68.85 28.98 38.15 19.54 3.00 1.08 1.77 0.93
5 Inch 66.69 23.87 54.38 46.74 3.15 1.21 1.69 1.03
6 Inch 64.08 24.14 32.85 19.65 2.77 0.83 1.92 0.86
7 Inch 68.62 24.53 36.08 20.02 2.38 0.96 1.92 1.04
8 Inch 64.62 29.68 42.69 26.15 3.23 1.30 1.69 0.85
9 Inch 59.62 34.03 35.15 19.51 2.46 1.20 1.77 1.01
WHITE SPACE
2 Inch 69.00 23.36 37.38 21.26 2.62 1.33 1.92 1.12
3 Inch 69.54 25.00 36.08 26.95 3.38 1.19 1.85 0.99
4 Inch 60.54 18.76 38.00 21.46 3.31 0.63 1.85 0.99
5 Inch 66.00 20.42 37.38 18.49 3.46 1.20 1.69 0.85
6 Inch 58.69 16.97 38.15 15.21 3.23 1.01 1.62 1.04
7 Inch 57.08 17.39 30.54 14.89 3.15 0.90 1.92 1.12
8 Inch 65.69 24.75 33.31 11.98 2.92 0.76 1.85 0.90
9 Inch 62.15 17.60 33.00 11.99 3.08 0.95 1.85 0.80
Table 6: Means and standard deviations for width and white space for reading times, answer
times, satisfaction and comprehension
4.2.1. Time to Read Text
The expectation that text width and passive white space are important to
reading speed was not confirmed. The average times that subjects took to read
the text passages are shown in Figure 5. The hypothesised curve was not
realised. The narrowest screens were not the quickest to read. White space did
not necessarily improve reading speed. The 9 inch no white space condition
proved almost as quick to read as the 8 inch white space condition.
[Line chart: mean reading time in seconds (55.00 to 70.00) plotted against
text width, with one line for the No White Space condition and one for the
White Space condition.]
Figure 5: Comparison of average reading times
The summary for the repeated measures two factor design on reading times is
shown in Table 7. The results show that there is no significant relationship at
the 20% level between widths, white space or interaction between width and
white space for reading times.
Time to Read
Source SS df ms F p
Total 118879.50 207
Subjects 51513.64 12
White 87.62 1 87.62 0.17 n.s.
Width 936.73 7 133.82 0.41 n.s.
White x Width 1936.34 7 276.62 0.75 n.s.
Error white 6155.07 12 512.92
Error width 27355.60 84 325.67
Error white x width 30894.47 84 367.79
Table 7: Summary of Analysis of reading times
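The F ratios in Table 7 can be recovered from the reported sums of squares and degrees of freedom, which serves as a check on the table: each F is the effect's mean square over the mean square of its own error term.

```python
def f_ratio(ss_effect, df_effect, ss_error, df_error):
    """F = mean square of the effect / mean square of its error term."""
    return (ss_effect / df_effect) / (ss_error / df_error)

# Values from Table 7 (time to read).
print(round(f_ratio(87.62, 1, 6155.07, 12), 2))     # 0.17  white space
print(round(f_ratio(936.73, 7, 27355.60, 84), 2))   # 0.41  width
print(round(f_ratio(1936.34, 7, 30894.47, 84), 2))  # 0.75  interaction
```

The same arithmetic reproduces the F values reported in Tables 8 to 10.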
A possible explanation of the non-significant results for reading time and
width can be obtained from cognitive studies of reading which have shown
that finding the start of the next line in a normal passage of text can take about
40 ms (Singer and Ruddell 1985). The text passages as used here varied
between 7 and 30 lines. This means that the cost in reading time between the
widest and narrowest conditions would only be about 1 second. The narrowest
screens did not give results uniformly one second longer than the widest
screens; it can therefore be inferred that differences in the text and in reading
ability were more important to reading time than the cost of finding the next
line.
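The one-second estimate follows directly from the 40 ms figure and the range of line counts; the line counts below are the approximate extremes quoted above.

```python
# Singer and Ruddell's figure: locating the start of the next
# line costs roughly 40 ms per line return.
return_cost = 0.040   # seconds per line return
lines_narrow = 30     # approximate lines in the narrowest (2 inch) passages
lines_wide = 7        # approximate lines in the widest (9 inch) passages

# Extra line-return cost incurred by the narrowest condition.
extra_seconds = (lines_narrow - lines_wide) * return_cost
print(round(extra_seconds, 2))  # 0.92
```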
4.2.2. Answer Times
The answer times also failed to show the expected relationship for
comprehension as can be seen from Figure 6. The 7 inch white space condition
was the quickest to answer but the 6 inch no white space was almost as quick.
Text without white space was not always more difficult to answer.
[Line chart: mean answer time in seconds (25.00 to 55.00) plotted against
text width, with one line for the No White Space condition and one for the
White Space condition.]
Figure 6: Comparison of average answer times
The repeated measures two factor design was used to test for a relationship
between the variables. The summary of the analysis is shown in Table 8. There
is no significant relationship at the 20% level between widths, white space or
interaction between width and white space for answer times.
TIME TO ANSWER
Source SS df ms F p
Total 101603.10 207
Subjects 42057.67 12
White 592.31 1 592.31 2.45 n.s.
Width 2767.92 7 395.42 1.18 n.s.
White x Width 2315.80 7 330.83 1.21 n.s.
Error white 2907.00 12 242.25
Error width 28032.02 84 333.71
Error white x width 22930.38 84 272.98
Table 8: Summary of analysis of answer times
4.2.3. Comprehension and Satisfaction
The results for comprehension also show no relationships. The graph in Figure
7 does not show a regular relationship between width and comprehension.
White space does not reliably improve comprehension.
[Line chart: mean comprehension score plotted against text width, with one
line for the No White Space condition and one for the White Space
condition.]
Figure 7: Comparison of average comprehension scores
The repeated measures two factor design was also used to test this
relationship. The summary of the analysis is shown in Table 9. There is no
significant relationship at the 20% level between widths, white space or
interaction between width and white space for comprehension scores.
COMPREHENSION
Source SS df ms F p
Total 187.69 207
Subjects 14.63 12
White 0.00 1 0.00 0.00 n.s.
Width 1.11 7 0.16 0.18 n.s.
White x Width 0.88 7 0.13 0.12 n.s.
Error white 11.68 12 0.97
Error width 73.45 84 0.87
Error white x width 85.93 84 1.02
Table 9: Summary of analysis of comprehension scores
Figure 8 shows that the satisfaction rating appears to improve slightly as text
width increases to 5 inches and then to drop, with the exception of the 7 inch
no white space condition. There does seem to be a consistent preference for
white space over no white space. The graph supports the hypothesis that the
greatest satisfaction will be with text that is 5 inches wide, except for the
anomalous 7 and 8 inch no white space conditions.
[Line chart: mean satisfaction rating plotted against text width, with one
line for the No White Space condition and one for the White Space
condition.]
Figure 8: Comparison of average satisfaction rating
Table 10 shows the results of the analysis using the repeated measures two
factor design. There is no interaction effect between width and white space.
There is a significant relationship at the 5% level between text width and
satisfaction and between white space and satisfaction.
SATISFACTION
Source SS df ms F p
Total 246.88 207
Subjects 95.32 12
White 5.89 1 5.89 3.62 <.05
Width 12.38 7 1.77 2.49 <.05
White x Width 4.76 7 0.68 1.16 n.s.
Error white 19.55 12 1.63
Error width 59.68 84 0.71
Error white x width 49.30 84 0.59
Table 10: Summary of analysis of satisfaction rating
4.2.4. Comments by Subjects
The following comments from subjects showed that some did have strong
feelings about text presentation:
•Subject 1 "When the text was listed across the page in a lengthy
form, my eyes had to move more then what was comfortable to
read off the screen."
•Subject 9 "The smaller boxes with a few lines were hard to
read, also hard to read were the long narrow boxes. The more
squarer the box the easier it was to read."
•Subject 10 "The text which I found best to read were the ones
in narrow vertical columns and a large box around the outside
of the screen. The one which I found hard to read were long
horizontal ones with a box encasing the text tightly."
•Subject 13 "The narrow texts are a real pain in the butt, so are
the wider ones."
These attitudes were reflected in the graphs and analysis and provide
confirmation of the significance of the statistical results for satisfaction. The
remaining subjects did not feel strongly enough to record comments about text
presentation.
5. SUMMARY AND CONCLUSIONS
5.1. Introduction
The results do not support the hypotheses that text width and passive white
space are important to reading speed and comprehension where text passages
are short and displayed on a computer monitor. However, satisfaction
increases for text as it approaches 5 inches and is greatest for the white space
condition.
The effects of fatigue, learning, and differences in text were controlled for by
varying the order of presentation and assignment of text to different
conditions. The computers and monitors used were identical and ran identical
software. The subjects viewed the text under very similar conditions.
Therefore it appears that individual differences in reading abilities are more
important to reading speed and comprehension than text width and white
space. The presentation does affect the attitude of the user to the legibility of
the text.
5.2. Findings About Hypotheses
Width Of Text And Reading Speed
Kolers, Duchnicky and Ferguson (1981) reported improved reading speed for
80 character lines against 40 character lines. However, when they refer to 40
and 80 character lines they mean a different size of font not different length of
line. They examined actual line width in another experiment (Duchnicky and
Kolers 1983), where they found that full and two-thirds width lines were read
25% faster than one-third width lines. Their results may have been significant
because the combination of low screen resolution and narrow width may have
made the text in the one-third condition very difficult to read.
Tinker and Paterson’s work (1969) tested line length for print media and found
that wide and narrow lines were slower to read. It seemed reasonable to expect
that reading a modern computer display would be comparable with reading
print and show the same differences. One possible explanation is books can be
held by the reader at any angle whereas a computer monitor is usually vertical.
This may affect reading times. Another explanation is that, despite
improvements in screen display technology, a computer monitor still cannot be
compared directly to print media.
Width Of Text And Comprehension
This experiment failed to prove a relationship between width and
comprehension. This result disagrees with Hansen and Haas (1988) who listed
the increase in page size as leading to better learning performance from the
workstation. They did not test for the effect of different page size on the one
computer. The other factors in their list could have caused the difference in
results.
The experiment agrees with de Bruijn, de Mul and van Oostendorp (1992)
where comprehension was found to be not affected by different screen sizes.
They did find that learning time was affected by screen size. However an
examination of the details of the experiment shows that both monitors
displayed 80 characters per line so that the experiment investigated different
numbers of lines on the page not page size. They also used a number of pages
of text.
Kolers, Duchnicky and Ferguson (1981) also found that comprehension was
not affected by differences in text presentation as did Cherry, Fischer, Fryer,
and Steckham (1989).
Width Of Text And Satisfaction With The Readability Of The Text
The significant result for width of text and satisfaction was not supported by
the experiment of Cherry, Fischer, Fryer, and Steckham (1989). They found no
significant attitude advantages for full screen, split screen or windowed help.
Participants’ comments indicated that the quality of the help text was an
important factor in determining the effectiveness of an online help system for
their experiment. It appears from the authors’ comments that there were many
features of the help text that the subjects were not happy about. These
problems may have led to the difference in results.
Tinker and Paterson (1969) also reported that subjects judged medium length
lines as being more legible.
Passive White Space And Reading Speed, Comprehension And Satisfaction
The literature search for this thesis showed that white space has had very little
investigation. Often it is included in studies as part of some other variable
without being specifically mentioned. For example in the study of de Bruijn,
de Mul and van Oostendorp (1992), well-structured text was described as:
•having paragraph headings in the left-hand margin
•reasons, features etc numbered
•left hand margin a maximum of 28 characters
•a text line of 52 characters
•paragraphs separated by a blank line.
Thus there are two types of white space included in this study: active white
space between paragraphs and passive white space in the left-hand margin.
Their results agreed that white space had no effect on comprehension but a
significant effect on satisfaction.
Tullis (1981) did find a significant difference in response times and
satisfaction when using structured text. This provides partial corroboration of
the findings that white space is more legible than no white space. However
structured text includes both active and passive white space. Tullis’ results may
have been significant because of increased spacing between items or only
because of margins. This highlights the need to be precise when describing
text formatting.
Smith and McCombs (1971) also found that the presence or absence of white
space had no effect on comprehension but a significant effect on satisfaction.
The most likely reasons for the lack of interest in white space are:
•difficult to measure
•often expressed as something else e.g. leading, margins
•often expressed as part of something else e.g. well-structured
text
•difficult to define
•inferred from some other variable e.g. measurements of text
density imply corresponding white space
•until recently white space was not readily achievable on a
normal computer monitor.
Summary
Table 11 and Table 12 summarise the results reported in the literature for
research on text width and white space compared with the results for this
study. The column headings for the dependent variables show the variable
name and result. For each study either the variable was not reported, a similar
result was found (agree) or a different result (disagree) was reported.
Comparison of Research on Width
Study Reading Speed
no effect
Comprehension
no effect
Satisfaction
significant effect
This research agree agree agree
Cherry (1989) not reported agree disagree
de Bruijn (1992) not reported agree not reported
Hansen (1988) not reported disagree not reported
Kolers (1981) disagree agree not reported
Kolers (1983) disagree agree not reported
Tinker (1969) disagree not reported agree
Table 11: Comparison of results reported in literature for width
Comparison of Research on White Space
Study Reading Speed
no effect
Comprehension
no effect
Satisfaction
significant effect
This research agree agree agree
de Bruijn (1992) not reported agree agree
Smith (1971) not reported agree agree
Tullis (1981) not reported not reported agree
Table 12: Comparison of results reported in literature for white space
5.3. Conclusions About the Research Problem
In this study the results failed to prove a relationship between the dependent
variables reading speed and comprehension and the independent variables
width of text and amount of white space. There is evidence of a relationship
between satisfaction with the legibility of the text and width and white space.
The graph of the means indicates that the middle widths are preferred to the
extremes and that white space is generally preferred to no white space.
Three conclusions can be drawn:
1. Guidelines that specify text width for short text passages
displayed on a computer screen cannot categorically state that a
given width is ideal.
2. For readers of short text passages displayed on a computer
screen, white space matters only where user satisfaction matters.
3. It appears that individual differences in reading abilities and
preferences are more important to reading speed and
comprehension for text on computer screens than text width
and white space.
Guidelines that attempt to specify a text width need to indicate that different
widths do not seem to affect reading speed or comprehension, although they
do seem to affect user attitudes. Thus, if screen space is critical, such as on
portable computers or cash registers, text width can be changed without
detriment to reading performance. At the other extreme, where user satisfaction
can be critical, e.g. multimedia displays, information kiosks and program
tutorials, the text should be of medium width and should have margins. It is
suggested that where user preferences are important, text should have margins
and be neither too wide nor too narrow but that, if the needs or capabilities of
the system preclude formatted text, there will be no detriment to reading
performance.
5.4. Implications
It was predicted, on the basis of prior research with computers and paper, that
text width and white space would be important to readers of computer-
displayed text.
This research indicates that guidelines specifying an exact width for text on
computer displays may be erroneous. It appears that text can be varied between
2 inches and 9 inches in width with no detriment to reading performance.
However, there is evidence that width is important to the user, and where other
considerations do not apply, text should be formatted to a 5 inch width.
Similarly, passive white space in the form of margins does not seem to be
necessary for reading performance. This means that, at least for short text
passages, the text can fill the window or indeed the entire screen, allowing
more text to be displayed at one time and reducing the need for scrolling.
Again, if there are no other
restrictions, then text should be formatted with margins. Where screen space is
critical, margins may not be necessary, but the user will be less satisfied with
the appearance of the screen.
Messages to the user can be displayed in a window designed to fit the
program. For instance, if a wide but short message window is necessary,
there will be little detriment to reading performance. Likewise, text can be
displayed in a narrow column if needed. This does not imply that "help" text
can be treated in the same way (see section 5.6. Further Research).
5.5. Limitations
The subjects were assumed to reflect the population of computer users.
However, it is possible that more or less experienced computer users could
produce different results. The small sample size must also be considered a
limiting factor.
There exists the possibility that text width could be more important where the
text is displayed on top of the task, such as when a user consults a help
program. In this experiment there was no background task to be performed.
It is also possible that longer text passages, such as those found in electronic
journals or online databases, may still exhibit differences with wide or narrow
text. This type of text involves different reading strategies, i.e. the reader does
not scan the text looking for key words.
This experiment only investigated passive white space in the form of margins
at the sides and bottom of the text. A top margin was not used because it
would have reduced the amount of text in the narrowest conditions to a trivial
length. Active white space was not investigated for the same reason.
5.6. Further Research
Initially, this research could be repeated using the same experimental tool but
with a larger sample and with different user types, novice vs. experienced for
example. A survey could easily be incorporated into the software to distinguish
between different user types, or potential subjects could be classified into
user types prior to the experiment.
Active white space still needs to be investigated. This type of white space can
be created by increasing leading and by giving headings and key words more
white space around them. Active white space is claimed to make text more
legible; it may impart more meaning to text and make it easier for a reader
to scan when looking for keywords. Since subjects expressed higher
satisfaction with passive white space, it could be expected that they would
also prefer more active white space.
Text width and white space may be more important where the text is used to
solve a problem rather than to provide information for unknown questions.
There needs to be further research to determine if text width, active white
space and passive white space affect the performance of a subject using
interactive help.
In conclusion, research is still needed into the effects of differences in
passive white space and width where the text is long and where the text is part
of an interactive help function. Research is also needed into the effects of
active white space to determine whether it is important to readers. Active white
space needs to be investigated for short text passages, long text passages and
interactive help.
BIBLIOGRAPHY
Belmore, S. M. (1985). “Reading computer presented text.” Bulletin of the
Psychonomic Science 23: 12.
Bonsiepe, G. A. (1968). “A method of quantifying order in typographic
design.” Journal of Typographic Research 2: 203.
Brockman, R. J. (1990). Writing Better Computer User Documentation, From
Paper to Hypertext. New York, John Wiley & Sons.
Bruning, J. L. and B. L. Kintz (1987). Computational Handbook of Statistics.
Glenville, Illinois, Scott, Foresman and Company.
Campos, A. and M. Gonzalez (1992). “Word Length: Relation to Other Values
of Words When Meaning is Controlled.” Perceptual and Motor Skills (74):
380.
Cherry, J. M., B.M. Fryer, and M.J. Steckham (1988). “Do formats for
presenting online help affect user performance and attitudes.” Proceedings of
the 35th International Technical Communication Conference, Philadelphia,
Society for Technical Communication.
Cherry, J. M., M. J. Fischer, B.M. Fryer, and M.J. Steckham (1989). “Modes
of presentation for on-line help: full screen, split screen and windowed
formats.” Behaviour & Information Technology 8(6): 405.
de Bruijn, D., S. de Mul and H. van Oostendorp (1992). “The Influence of
Screen Size and Text Layout on the Study of Text.” Behaviour and
Information Technology 11(2): 71.
Duchnicky, R. L. and P. A. Kolers (1983). “Readability of text scrolled on
visual display terminals as a function of window size.” Human Factors (25):
683.
Elkerton, J. (1988). Online Aiding for Human-Computer Interfaces. Handbook
of Human-Computer Interaction. Amsterdam, North-Holland. 345.
Fatt, J. (1991). “Text-related Variables in Textbook Readability.” Research
Papers in Education 6(3): 225.
Gribbons, W. M. (1988). White Space Allocation: Implications for Document
Design. Proceedings of the 35th International Technical Communication
Conference, Philadelphia, Society for Technical Communication.
Hansen, W. J. and C. Haas (1988). “Reading and Writing with Computers: A
Framework for Explaining Differences in Performance.” Communications of
the ACM 31(9): 1080.
Hewett, T. T. (1992). Curricula for Human-Computer Interaction. New York,
The Association for Computing Machinery.
Horton, W. (1988). Myths of Online Documentation. Proceedings of the 35th
International Technical Communication Conference, Philadelphia, Society for
Technical Communication.
Horton, W. (1990). Designing and Writing Online Documentation: Help Files
to Hypertext. New York, John Wiley & Sons.
Huck, S. W., W. H. Cormier and W.G. Bounds (1974). Reading Statistics and
Research. New York, Harper & Row.
Kearney, M. P. (1988). Using a Word Processor to Format Text for Online
Display. Proceedings of the 35th International Technical Communication
Conference, Philadelphia, Society for Technical Communication.
Klix, F., B. Krause, H. Hagendorf, R. Schindler and H. Wandke (1989).
Psychological problems concerning the lay-out of human-computer
interaction: A challenge to research in cognitive psychology. Man-Computer
Interaction Research. Amsterdam, North-Holland.
Kolers, P. A., R. L. Duchnicky and D. C. Ferguson (1981). “Eye movement
measurement of readability of CRT displays.” Human Factors 23: 517.
Noordman, L. G. M. (1988). Visual Presentation of Text: The Process of
Reading from a Psycholinguistic Perspective. Human-Computer Interaction
Psychonomic Aspects. New York, Springer-Verlag.
Phillips, J. and M. Crock (1992). Interactive Screen Design Principles.
ASCILITE 92.
Document Design Project2 (1981). Guidelines for Document Designers.
Washington, DC, American Institutes for Research.
Rubens, P. (1986). “Online information, traditional page design, and reader
expectation.” IEEE Transactions on Professional Communication PC-29(4):
75.
Rubens, P. and R. Krull (1988). Designing Online Information. Text, ConText,
and HyperText. Cambridge, MA, The MIT Press.
Rubinstein, R. (1988). Digital Typography: An Introduction to Type and
Composition for Computer System Design. Reading, MA, Addison-Wesley.
Shannon, C. E. and W. Weaver (1949). The Mathematical Theory of
Communication. Urbana, IL, The University of Illinois Press.
2 No author was given. The book appears to be a product of a team effort.
Shirk, H. N. (1988). Technical Writers as Computer Scientists: The
Challenges of Online Documentation. Text, ConText, and HyperText.
Cambridge, MA, The MIT Press.
Singer, H. and R. Ruddell (1985). Theoretical Models and Processes of
Reading. Newark, Delaware, International Reading Association.
Smith, J. M. and M. E. McCombs (1971). “Research in brief: The graphics of
prose.” Visible Language (5): 365.
Tinker, M. A. (1969). Legibility of Print. Ames, Ia, Iowa State University.
Trollip, S. and G. Sales (1986). “Readability of Computer-Generated Fill
Justified Text.” Human Factors 28(2): 159.
Tullis, T. S. (1981). “An evaluation of alphanumeric, graphic, and color
information displays.” Human Factors 23(5): 541.
Tullis, T. S. (1983). “The formatting of alphanumeric displays: a review and
analysis.” Human Factors 25(6): 557.
van Nes, F. L. (1988). The Legibility of Visual Display Texts. Human-
Computer Interaction Psychonomic Aspects. New York, Springer-Verlag.
Wright, G. and C. Fowler (1986). Investigative Design and Statistics.
Middlesex, England, Penguin Books.
Wright, P. (1988). Issues of Content and Presentation in Document Design.
Handbook of Human-Computer Interaction. Amsterdam, North-Holland. 629.
Zikmund, W.G. (1991). Business Research Methods. Orlando, Florida, The
Dryden Press.
APPENDIX A - EXPERIMENT FOR SUBJECT 1
Form Number
ID F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13 F14 F15 F16
1 w2 n4 w4 w8 n2 n7 n5 n9 w6 n6 n8 w5 w3 w9 w7 n3
APPENDIX B - RESULTS
Raw Data
Subject Number   Screen Code   Time Read   Time Answer   Screen Rating   Comprehension Score
1 n2 61 36 2 1
1 n3 39 42 3 0
1 n4 81 26 5 2
1 n5 60 22 2 2
1 n6 73 27 4 0
1 n7 51 19 4 3
1 n8 41 41 4 1
1 n9 50 45 5 3
1 w2 45 76 3 0
1 w3 56 24 4 3
1 w4 59 24 3 0
1 w5 71 19 5 2
1 w6 77 25 4 1
1 w7 49 33 5 1
1 w8 53 23 5 2
1 w9 61 34 2 0
Subject Number   Screen Code   Time Read   Time Answer   Screen Rating   Comprehension Score
2 n2 39 15 4 3
2 n3 49 27 5 1
2 n4 49 21 4 2
2 n5 44 33 3 1
2 n6 44 46 2 2
2 n7 41 24 4 1
2 n8 57 25 3 1
2 n9 46 25 5 2
2 w2 58 19 4 2
2 w3 46 23 2 2
2 w4 56 27 4 3
2 w5 72 36 3 3
2 w6 46 29 4 1
2 w7 45 18 5 2
2 w8 44 17 4 1
2 w9 47 35 2 1
Subject Number   Screen Code   Time Read   Time Answer   Screen Rating   Comprehension Score
3 n2 83 27 3 3
3 n3 98 50 3 3
3 n4 101 49 3 2
3 n5 95 32 4 3
3 n6 60 44 3 1
3 n7 94 72 4 2
3 n8 90 58 2 3
3 n9 73 36 4 2
3 w2 95 51 3 2
3 w3 73 29 4 2
3 w4 164 51 2 2
3 w5 91 31 2 3
3 w6 54 27 3 3
3 w7 103 27 2 3
3 w8 106 110 4 0
3 w9 100 39 3 2
Subject Number   Screen Code   Time Read   Time Answer   Screen Rating   Comprehension Score
4 n2 103 23 2 3
4 n3 116 16 2 2
4 n4 37 38 1 2
4 n5 68 24 2 2
4 n6 95 23 2 2
4 n7 81 31 1 2
4 n8 103 25 2 1
4 n9 72 17 3 0
4 w2 85 27 2 3
4 w3 65 29 2 2
4 w4 76 14 2 3
4 w5 81 23 2 3
4 w6 42 27 3 2
4 w7 46 22 3 1
4 w8 60 26 2 3
4 w9 84 14 1 3
Subject Number   Screen Code   Time Read   Time Answer   Screen Rating   Comprehension Score
5 n2 84 29 3 2
5 n3 112 41 2 2
5 n4 74 29 4 3
5 n5 64 56 3 3
5 n6 63 58 4 3
5 n7 141 44 2 1
5 n8 70 39 3 3
5 n9 56 56 3 0
5 w2 112 40 4 2
5 w3 120 98 4 0
5 w4 80 56 3 2
5 w5 67 39 4 2
5 w6 139 36 3 1
5 w7 73 53 3 3
5 w8 68 44 4 2
5 w9 75 40 3 3
Subject Number   Screen Code   Time Read   Time Answer   Screen Rating   Comprehension Score
6 n2 88 76 2 2
6 n3 40 58 2 1
6 n4 90 98 2 2
6 n5 91 51 2 0
6 n6 78 42 2 2
6 n7 108 103 2 0
6 n8 113 93 2 2
6 n9 75 76 1 2
6 w2 85 95 1 2
6 w3 93 54 3 3
6 w4 51 57 2 1
6 w5 76 60 2 2
6 w6 61 68 2 2
6 w7 68 86 1 2
6 w8 58 42 3 2
6 w9 64 45 1 2
Subject Number   Screen Code   Time Read   Time Answer   Screen Rating   Comprehension Score
7 n2 53 24 3 2
7 n3 42 33 2 3
7 n4 61 19 2 1
7 n5 61 27 2 2
7 n6 46 36 2 2
7 n7 73 37 2 2
7 n8 73 20 1 3
7 n9 55 25 1 0
7 w2 36 17 2 3
7 w3 48 19 2 2
7 w4 40 19 2 1
7 w5 32 31 1 1
7 w6 22 30 2 2
7 w7 36 37 3 2
7 w8 39 21 3 2
7 w9 34 40 2 2
Subject Number   Screen Code   Time Read   Time Answer   Screen Rating   Comprehension Score
8 n2 53 34 2 0
8 n3 90 36 3 3
8 n4 56 24 2 2
8 n5 86 20 4 3
8 n6 56 47 3 0
8 n7 83 30 4 3
8 n8 42 34 4 2
8 n9 35 34 3 0
8 w2 71 34 3 2
8 w3 81 68 2 2
8 w4 89 51 3 0
8 w5 42 21 3 2
8 w6 51 22 3 3
8 w7 46 31 3 2
8 w8 44 39 3 1
8 w9 44 27 3 2
Subject Number   Screen Code   Time Read   Time Answer   Screen Rating   Comprehension Score
9 n2 103 71 3 2
9 n3 66 31 3 3
9 n4 63 35 4 1
9 n5 54 44 1 1
9 n6 72 27 4 2
9 n7 61 29 3 1
9 n8 117 109 4 2
9 n9 64 85 3 2
9 w2 55 38 2 3
9 w3 78 22 4 1
9 w4 51 37 5 2
9 w5 66 26 3 3
9 w6 63 41 4 1
9 w7 61 39 1 0
9 w8 39 58 3 0
9 w9 95 41 4 2
Subject Number   Screen Code   Time Read   Time Answer   Screen Rating   Comprehension Score
10 n2 49 69 4 1
10 n3 79 52 4 3
10 n4 79 41 4 2
10 n5 62 29 4 1
10 n6 48 49 5 2
10 n7 60 36 5 2
10 n8 83 92 4 1
10 n9 54 38 5 3
10 w2 67 82 4 2
10 w3 93 44 3 3
10 w4 57 23 4 3
10 w5 61 181 4 0
10 w6 34 80 4 0
10 w7 81 54 4 1
10 w8 58 25 3 2
10 w9 83 29 3 0
Subject Number   Screen Code   Time Read   Time Answer   Screen Rating   Comprehension Score
11 n2 31 15 2 3
11 n3 40 24 4 1
11 n4 54 26 3 2
11 n5 39 20 3 2
11 n6 41 20 4 3
11 n7 37 14 4 3
11 n8 28 17 4 2
11 n9 29 25 3 2
11 w2 27 20 3 3
11 w3 25 23 1 0
11 w4 47 19 3 3
11 w5 31 19 2 2
11 w6 31 15 1 2
11 w7 84 26 4 2
11 w8 25 21 2 3
11 w9 48 24 1 1
Subject Number   Screen Code   Time Read   Time Answer   Screen Rating   Comprehension Score
12 n2 55 21 3 2
12 n3 47 27 4 0
12 n4 37 64 2 0
12 n5 113 18 4 0
12 n6 33 17 5 2
12 n7 31 20 4 3
12 n8 32 22 4 2
12 n9 39 26 3 2
12 w2 34 29 4 3
12 w3 42 27 3 1
12 w4 43 30 4 2
12 w5 41 34 4 0
12 w6 64 32 3 1
12 w7 41 22 3 2
12 w8 41 29 5 2
12 w9 60 23 3 2
Subject Number   Screen Code   Time Read   Time Answer   Screen Rating   Comprehension Score
13 n2 74 32 4 1
13 n3 57 32 2 1
13 n4 52 21 3 1
13 n5 69 18 3 3
13 n6 68 17 1 2
13 n7 64 30 1 2
13 n8 58 37 2 2
13 n9 87 20 1 1
13 w2 82 24 2 2
13 w3 78 23 4 2
13 w4 56 28 2 3
13 w5 60 30 5 2
13 w6 60 17 3 3
13 w7 75 18 3 3
13 w8 69 28 1 2
13 w9 65 33 4 2
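As an illustrative sketch only (not part of the original analysis), the raw data above can be aggregated programmatically. The sketch assumes each row holds (subject number, screen code, time read, time answer, screen rating, comprehension score) and that a screen code such as "w5" combines the white-space condition ("w" = margins, "n" = no margins) with the text width in inches; only a few rows from Subject 1 are transcribed for brevity.

```python
# Sketch: mean screen rating per (white space, width) condition.
# Row format assumed: (subject, screen_code, time_read, time_answer,
# screen_rating, comprehension_score).
from collections import defaultdict

rows = [
    # A few rows transcribed from Subject 1 above; the full data set
    # would include all 13 subjects x 16 conditions.
    (1, "n2", 61, 36, 2, 1),
    (1, "n5", 60, 22, 2, 2),
    (1, "w2", 45, 76, 3, 0),
    (1, "w5", 71, 19, 5, 2),
]

ratings_by_condition = defaultdict(list)
for subject, code, t_read, t_answer, rating, score in rows:
    # 'w'/'n' prefix = margins or not; remaining digits = width in inches.
    white_space, width = code[0] == "w", int(code[1:])
    ratings_by_condition[(white_space, width)].append(rating)

for (white_space, width), ratings in sorted(ratings_by_condition.items()):
    mean = sum(ratings) / len(ratings)
    label = "margins" if white_space else "no margins"
    print(f"width {width} in, {label}: mean rating {mean:.1f}")
```

The same grouping applied to the full data set would reproduce the per-condition means discussed in the results; the statistical tests themselves would of course require the complete data.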
Comments by Subjects
Subject 1 "When the text was listed across the page in a lengthy form, my
eyes had to move more then what was comfortable to read off the screen."
Subject 2 "The explanations that i was given were easy to understand and
to follow. some of the text that i was given however proved to be a bit
straining upon the eyes for you had to look close to understand the words.
There were no problems that I could see."
Subject 4 "The explanations outside the test texts were clear. The text
itself was hard to read because of the size of the characters, and the long
sentences. The punctuation was not very good. Readability was not good. I
generally had to read everything twice before it sank in, but that could have
been due to some noise around me, making concentration difficult."
Subject 6 "I assume the ""explanations"" were the textual descriptions
contained in each window being assessed. My answer would be not always.
The lack of formatting of text made it difficult to read."
Subject 7 "I think it would have been easier to read if the text had been
displayed with line and a half spacing between each line. The really wide
columns are hard to read. White space around the text makes it easier to read
too. Explanations were clear and I didn’t have any problems. Some of the
results may not be based on my reading of the text as I have some knowledge
of some Word functions already, and not others."
Subject 8 "Explanations were clear. Text was fairly easy to read. Didn’t
encounter any real problems."
Subject 9 "Some explanations were a bit hard to understand, those that
used command refferences i.e.. press Ctrl C, V, then drag this to somewhere
etc. Maybe you could have told on which item to pull down from the menu
bar at the top of the screen. Overall it was well presented and easy to
understand and use. The explainations were sufficient for a unfamilliar user to
get a hold on what to do and the different functions available for use. The
smaller boxes with a few lines were hard to read, also hard to read were the
long narrow boxes. The more squarer the box the easier it was to read."
Subject 10 "The explanations were clear, and I think are good for the topic
being discussed. The text which I found best to read were the ones in narrow
vertical columns and a large box around the outside of the screen. The one
which I found hard to read were long horizontal ones with a box encasing the
text tightly. I had no real problems except in remembering all of the details of
the text but if I was in need of hearing about the topic being discussed I would
have paid closer attention. I think if the line spacing were a bit geater the text
would also have been easier to read. Overall I think the content of the material
was very good. I even learnt something about Word which I didn’t know
before."
Subject 12 "There may be a bias in question 1 : I did not at first realise that
the four menu selections where related to the four questions on the left hand
side. Also at first it was not clear that I should read the text in order to answer
the questions, I thought I was just going to be asked about style, layout, font,
etc."
Subject 13 "The explanations to perform the experiment were clear. The
text was horrible but that was consistent so did not cause any problems. I
found that for the screens I found difficult to read I read the text two or three
times because I was unable to remember what it said. For those that were easy
to read I only read them once. Consequently I found that I could remember
more about those paragraphs that were hard to read than thse that wwere easy
to read. Some were easy to answere the questions regardless of how hard the
paragraph was to read because of my basic to good knowledge of word. I did
not find any of the different formats excellent to read. I personnally find those
menus that have some space between the top of the window and the beginning
of the text easier to read that those used here. I found myself not understanding
or reading the first line. The narrow texts are a real pain in the butt, so are the
wider ones. Although the wider ones are easier to comprehend than the narrow
ones."
APPENDIX C - CONSENT FORM
The University Of New England - Northern Rivers
Name of Project: Presentation of Online Documentation
You are invited to participate in a study of the presentation of text on computer
screens. We hope to learn more information about the interaction between
people and computers in the area of text presentation.
If you decide to participate, we will ask you to run a computer program. The
program will present you with instructions, different screens of text, ask
questions about the text and finally ask for some information about your
attitudes and feelings. The total time for the experiment is not expected to
exceed one hour.
Any information that is obtained in connection with this study and that can be
identified with you will remain confidential and will be disclosed only with
your permission.
If you decide to participate, you are free to withdraw your consent and to
discontinue participation at any time without prejudice.
If you have any questions, we expect you to ask us. If you have any additional
questions later, Dr John Maltby (203724) or Tim Comber (203119) will be
happy to answer them.
You will be given a copy of this form to keep.
Consent
I have read the information above, and agree to participate in this study. I
am over the age of 18 years.
Name of Subject: ...............................................................
Signature of Subject:
......................................................................................Date: ................
Independent Witness: ................................................................
Signature of Witness:
......................................................................................Date: ................
Signature of Researcher:
......................................................................................Date: ..............
APPENDIX D - OBJECTVISION DECISION TREES