Hindawi Publishing Corporation
Advances in Human-Computer Interaction
Volume 2011, Article ID 202701, 8 pages
doi:10.1155/2011/202701
Research Article
Working towards Usable Forms on the World Wide Web:
Optimizing Date Entry Input Fields
Javier A. Bargas-Avila, Olivia Brenzikofer, Alexandre N. Tuch,
Sandra P. Roth, and Klaus Opwis
Department of Psychology, Center for Cognitive Psychology and Methodology, University of Basel, 4055 Basel, Switzerland
Correspondence should be addressed to Javier A. Bargas-Avila, javier.bargas@unibas.ch
Received 15 February 2010; Revised 13 May 2011; Accepted 7 June 2011
Academic Editor: Armando Bennet Barreto
Copyright © 2011 Javier A. Bargas-Avila et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
When an interactive form on the World Wide Web requires users to fill in exact dates, this can be implemented in several ways. This
paper discusses an empirical online study with n=172 participants which compared six different versions of input fields
for date entries. The results revealed that using a drop-down menu is best when format errors must be avoided, whereas using
only one input field and placing the format requirements left of or inside the text box led to faster completion times and higher user
satisfaction.
1. Introduction
Most websites use interactive online forms as the main
contact point between users and the company. The design of
these forms can be a crucial factor for the success of online
transactions. Users do not visit a website with the intention
or goal of filling in a form. Focusing on the example of
online shopping: once users have chosen the items that
they wish to buy, they want to complete their shopping as
quickly, easily, and safely as possible. In this context, a form
may often be perceived as a hurdle. If forms are difficult to
use, this may even lead to customers aborting the transaction,
resulting in loss of profit [1]. A successful revision and rede-
sign of a suboptimal online form may result in an increased
completion rate in the range of 10%–40% [1]. The eBay User
Experience and Design Group reported that a redesign of
the eBay registration form made a significant contribution
to eBay’s business and user success [2].
A growing body of research and guidelines have been
published on how to make online forms more usable (e.g.,
[1,3,4]). Some of these have been empirically tested; others
instead have been derived from experience and best practice
of usability experts. Although the knowledge in this field is
increasing, there are still many open questions when it comes
to designing an online form.
2. Theoretical Background
In the last decade, many aspects of online forms have been
explored. There are several aspects of usable form inter-
action: (1) form content, (2) form layout, (3) input types,
(4) error handling, and (5) form submission. The following
section provides a brief summary of the most important re-
sults within these areas. This study will explore aspects within
the area of “input types”.
Form Content. There are many different aspects to consider
when designing web forms. One of the basic guidelines of
user-centered design is to map the natural environment,
which is already familiar to the user, as closely as possible
to the virtual one [5]. If users are familiar with a concept in
real life, it is probable that they will also understand this con-
cept if it is applied to the online environment. In the case
of web forms, this may, for example, be achieved by using
a layout analogous to paper forms. Beaumont et al. [4] state
that users’ preferred input types for providing answers online
are textboxes. A demonstration by Nielsen [6] showed that
providing a separate drop-down menu for entering the street
type (e.g., road, street, and avenue) caused people to turn
back to the previous field because they were used to entering
the street type into the textbox for the address. Miller and
Jarrett [7] recommend not using too many different input
types in one form as this can cause confusion, and Beaumont
et al. [4] suggest keeping an intuitive order of the questions,
for example, first ask for the name, then the address and, at
the end, for the telephone number.
To keep forms simple and fast, Beaumont et al. [4]
recommend asking only those questions that really need to be
answered, for example, the shipping address in the case of
an online shop. Other “nice-to-know” questions only an-
noy users and require more time to fill in the form. On the
other hand, these questions may provide insight into the
user population and may be helpful for marketing purpos-
es. In this case, users must be enabled to distinguish between
required and optional fields [8,9]. Nowadays, this is often
realized through the use of asterisks. Pauwels et al. [10]
examined whether highlighting required fields by color coding
leads to faster completion time compared to an asterisk
next to required fields. Participants were faster, made fewer
errors, and were more satisfied when the required fields were
highlighted in color. Tullis and Pons [11] found that people
were fastest at filling in required fields when the required and
optional fields were separated from each other.
Form Layout. Penzo [12] examined the position of labels
relative to the input field in a study using eye-tracking. He
compared left-, right- and top-aligned labels and came to the
conclusion that with left-aligned labels people needed nearly
twice as long to complete the form as with right-aligned
labels. Additionally, the number of fixations needed with
right-aligned labels was halved. The fastest performance,
however, was reached with top-aligned labels, which required
only one fixation to capture both the label and the input
field at the same time. As a result of this study, Wroblewski
[1] recommends using left-aligned labels for unfamiliar data
where one wants users to slow down and consider their an-
swers. On the other hand, if the designer wants users to
complete the form as quickly as possible, top-aligned labels
are recommended. Another advantage of top-aligned labels
is that label length does not influence placement of the
input fields. Based on an eye-tracking study, Das et al. [13]
recommend right-aligned labels in the context of forms with
multiple columns. Finally, Jarrett [14] emphasizes that the
question of label placement is secondary, as long as users
know what to fill in the fields, are willing to
reveal the information, and the validations do not
prevent them from entering the answers of their choice. The
literature review shows that there is little consensus when it
comes to label placement.
In terms of form layouts, Robinson [15] recommends
that a form should not be divided into more than one col-
umn. A row should only be used to answer one question.
Concerning the length of input fields, Wroblewski [1]
recommends matching the length of the field to the length of the
expected answer. This provides a clue or affordance to users
as to what kind of answer is expected from them. Christian
et al. [16] examined date entry with two separate text
fields for month and year. Participants gave more answers in
the expected format (two characters for the month and four
for the year) if the field for the month was half the size of
the one for the year. In another study by Couper et al. [17],
people gave more incorrect answers if the size of the input
field did not fit the length of the expected input.
Input Types. Another question in web form design relates to
which input type should be used. As mentioned, Beaumont
et al. [4] recommend using textboxes as often as possible.
However, if the number of possible answers has to be restrict-
ed, radio buttons, checkboxes, or drop-down menus can be
used [8]. These input types are also recommended to avoid
errors, prevent users from entering unavailable options, and
simplify the decision process. Radio buttons and drop-down
menus are used for choosing only one option (single choice);
with checkboxes, users can select as many options as they
like. Concerning the use of drop-down menus and radio but-
tons, Miller and Jarrett [7] see the advantage of radio buttons
in the fact that all options are visible at once whereas the
advantage of drop-down menus lies in the saving of screen
real estate. With the help of the Keystroke-Level Model [18],
it can be theoretically calculated that interaction with a drop-
down menu takes longer than interaction with radio buttons,
mainly because of an additional point and click (PK) needed
to open the drop-down menu. In an empirical study, Healey
[19] found that on the single-question level, radio buttons
were faster to choose from than drop-down menus, but the
use of drop-down menus instead of radio buttons did not
affect the overall time to fill in the whole questionnaire.
Hogg and Masztal [20] could not find any differences in
the time needed to select answers between radio buttons
and drop-down menus. Heerwegh and Loosveldt [21] found
that people needed significantly more time to select options
from drop-down menus than from radio buttons, but these
findings could not be replicated in a second study. Concern-
ing the drop-out rate, no differences between radio buttons
and drop-down menus could be found [19–21]. According
to Miller and Jarrett [7], radio buttons should be used
when two to four options are available; with more than four
options they recommend using drop-down menus. When
drop-down menus are used, Beaumont et al. [4] suggest
arranging the options in an order with which the user is
already familiar (e.g., for weekdays, the sequence Monday,
Tuesday, etc.). Where there is no intuitive sequence, an
alphabetical order should be considered. If users are required
to indicate multiple options, Bargas-Avila et al. [22] show
that checkboxes (instead of list boxes) enhance usability and
user satisfaction—at least when a smaller number of options
are provided.
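The Keystroke-Level Model comparison mentioned earlier can be illustrated with a short calculation. This is only a rough sketch using the commonly cited KLM operator times; the operator sequences assumed for the two widgets are simplifications, not taken from this study:

```python
# Keystroke-Level Model (KLM) sketch: why selecting from a drop-down
# menu is predicted to take longer than clicking a radio button.
# Operator times (in seconds) are the commonly cited KLM defaults;
# the assumed operator sequences are simplifications.
P = 1.1   # P: point at a target with the mouse
K = 0.2   # K: press a key or mouse button
M = 1.35  # M: mental preparation

# Radio button: think, point at the option, click it (M P K).
radio_button = M + P + K

# Drop-down: think, point at the menu, click to open it,
# then point at the option and click it (M P K P K).
drop_down = M + P + K + P + K

print(f"radio button: {radio_button:.2f} s")
print(f"drop-down:    {drop_down:.2f} s")
print(f"extra point-and-click: {drop_down - radio_button:.2f} s")
```

The extra P and K operators for opening the menu account for the predicted difference, matching the "additional point and click (PK)" argument in the text.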
A frequent issue concerning data input is the design of
date entries. With date entries, it is important that they are
entered in the expected format to avoid confusion between
month and day. There are many different ways of designing
input fields for date entries and many possibilities for how
they have to be completed. Christian et al. [16] examined
date entries where the month and year field consisted of
two separate text boxes. Their study revealed that 92.9%–
95.8% of respondents provided their answer in the correct format
when symbols (MM and YYYY) were used to state the restrictions.
Positioning the date instructions to the right of the year
field led to fewer correct answers. Linderman and Fried [8]
suggest using drop-down menus to ensure that no invalid
dates are entered. There are other ways of designing date
entries and their format requirements, for example, placing
the requirements inside the answer boxes or using a single
text box. Currently no studies are known to the authors that
compare these different versions. Concerning the formatting
of other answers, accepting entries in every format is recom-
mended, as long as this does not cause ambiguity [8]. This
prevents users from having to figure out which format is re-
quired and avoids unnecessary error messages.
Error Handling. It is important to guide users through forms
as quickly and as error-free as possible. Errors should be
avoided from the start by explaining restrictions in advance.
Often, errors cannot be avoided; in this case, it is important
to help users to recover from them as quickly and easily as
possible. To ensure usable error messages on the web, Nielsen
[23] and Linderman and Fried [8] state that an error message
must be written in a familiar language and clearly state what
the error is and how it can be corrected. Nielsen [23] also
advises never deleting the completed fields after an error has
occurred, as this can be very frustrating for users. Bargas-
Avila et al. [24] compared six different ways of presenting an
error message, including inline validation, pop-up windows,
and embedded error messages. People made fewer consecu-
tive errors when error messages appeared embedded in the
form next to the corresponding input fields or one by one
in a pop-up window. This was only the case if the error
messages showed up at the end after clicking the send button.
If the error messages appeared at the moment the erroneous
field was left (inline validation), the participants made
significantly more errors completing the form. They simply
ignored or, in the case of pop-up windows, even clicked away
the appearing error messages without reading them.
Form Submission. At the end of the fill-in process, the form
has to be submitted. This is usually realized through a but-
ton with an action label. Linderman and Fried [8] suggest
disabling the submit button as soon as it has been clicked to
avoid repeated submissions due to long loading times. Some
web forms also offer a reset or cancel button in addition to
the submit button. Many experts recommend eliminating
such buttons as they can be clicked by accident and do not
provide any real additional value [1,8,15]. After a successful
transaction, the company should confirm the receipt of the
user's data by e-mail [1,8].
3. Goal of the Study
In date entry on the World Wide Web, there can be ambigu-
ities, especially in a global context. In most European coun-
tries, dates are written in the format day/month/year, where-
as in the US the day follows the month. In an online environ-
ment, where a date is filled into an input field, it is therefore
important to tell users in which format the date has to be
entered to avoid confusion. Prior studies have shown that
providing format restrictions to users in advance lowers error
rates and increases user satisfaction [25].
There are many ways of stating how the date entry has
to be formatted. A study by Christian et al. [16] examined
date entry using two separate input fields for month and year.
They placed above the corresponding input fields the words
“Month” and “Year” or the characters “MM” and “YYYY”,
respectively. They found that people provided more correct
answers using characters. Another factor that decreased
errors was when the year field was twice as long as the month
field. Their study also showed that the way the question was
asked, namely, “when” versus “what month and year”, had
no influence on the number of correct answers. Also, the
position of the symbols—left, above, or right of the corre-
sponding input fields—did not change the percentage of cor-
rect answers. Using a half-size month box, separating the
month box from the year box, and grouping a symbolic in-
struction with the corresponding input field led to a rate
between 92.9% and 95.8% of participants reporting the date
in the desired format.
There are more possibilities for designing input fields
for date entries with formatting requirements, for example,
using one field and placing the requirements next to or inside
the field. There are even ways where no formatting require-
ments need to be stated, namely, drop-down menus and pop-
up calendars. Because no studies are known that have tested
the performance of these possibilities, an online experiment
was conducted to shed more light on these options.
4. Methods
4.1. Error Categorization. When entering dates in forms,
there are two different types of errors that can occur.
(1) Wrong format. The user enters the correct date (e.g.,
his/her birthday), but chooses a wrong format. This
can happen for instance when the required format is
month-day-year, and the user enters first the day and
then the month and year, or when two-digit numbers
are enforced, but the user enters the date using single
digits. Format errors usually stem from insufficient
communication of the applied format restrictions or
from users overlooking the instructions.
(2) Wrong date. The user enters the wrong date. This usu-
ally happens if the wrong keys are pressed on the key-
board or the wrong entries are selected in a menu or
calendar widget.
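These two error classes can be operationalized with a small validator. The sketch below assumes the dd.mm.yyyy convention used in this study; the round-trip check is needed because Python's strptime alone would also accept single-digit entries:

```python
from datetime import datetime

def classify_entry(entered: str, expected: str) -> str:
    """Classify a date entry against a required dd.mm.yyyy format.

    Returns 'ok', 'wrong format' (entry does not match dd.mm.yyyy,
    e.g. month-first order or single-digit day), or 'wrong date'
    (format is valid but the date differs from the requested one).
    """
    try:
        parsed = datetime.strptime(entered, "%d.%m.%Y")
    except ValueError:
        return "wrong format"
    # strptime tolerates single-digit fields, so round-trip the value
    # to enforce two-digit day/month and four-digit year as well.
    if parsed.strftime("%d.%m.%Y") != entered:
        return "wrong format"
    if parsed != datetime.strptime(expected, "%d.%m.%Y"):
        return "wrong date"
    return "ok"

print(classify_entry("17.05.1957", "17.05.1957"))  # ok
print(classify_entry("05.17.1957", "17.05.1957"))  # wrong format
print(classify_entry("18.05.1957", "17.05.1957"))  # wrong date
```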
4.2. Design. This study was conducted as an online experiment,
where six different date entry designs were compared using a
one-way related design.
We chose four designs that required users to enter the
dates using variations of entry field(s), one where the dates
were entered with drop-down menus, and one design that pro-
vided a calendar widget. The six designs used as independent
variables are illustrated and explained in Figure 1.
As dependent variables, the following metrics were as-
sessed.
(i) Wrong format: date entries in a wrong format (see
Section 4.1 for a definition).
Version 1 (separate). Three separate input fields are used for day, month, and year, and symbols above the
corresponding input field state how many digits have to be entered. The year field is twice the size of the month
and day fields.
Version 2 (drop-down). Three separate drop-down menus are used for day, month and year. The menu for the
year includes dates from 1900 to 2007.
Version 3 (left). Only one input field is used. The formatting requirements are stated in form of symbols to the
left of the input field.
Version 4 (inside, permanent). The formatting requirement is placed inside the input field. It stays visible when
the user clicks inside the field and has to be overwritten.
Version 5 (inside). The formatting requirement is placed inside the input field. It disappears when the user clicks
into the input field.
Version 6 (calendar). A calendar pops up when the icon to the right of the input field is clicked.
Figure 1: Design versions used as independent variables: input types for date entries (translated by the authors).
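Version 2 rules out format errors by construction, because users can only select from pre-built, valid values. A sketch of generating its option lists (the 1900–2007 year range follows Figure 1; the dependent-menu refinement at the end is an assumption, not part of the tested design):

```python
import calendar

# Option lists for version 2 (drop-down): every selectable value is
# already well formed, which is why this design produced zero format
# errors. Year range 1900-2007 as described in Figure 1.
days = [f"{d:02d}" for d in range(1, 32)]
months = [f"{m:02d}" for m in range(1, 13)]
years = [str(y) for y in range(1900, 2008)]

# A dependent menu could go further and also rule out impossible
# dates (e.g. 31 February) by clipping the day list to the month:
def days_for(year: int, month: int) -> list[str]:
    last_day = calendar.monthrange(year, month)[1]
    return [f"{d:02d}" for d in range(1, last_day + 1)]

print(len(days), len(months), len(years))   # 31 12 108
print(days_for(2000, 2)[-1])                # 29 (2000 was a leap year)
```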
(ii) Completion time: time needed to fill in the dates.
(iii) Wrong dat e: dates that did not correspond to the ones
required (see Section 4.1 for a definition).
(iv) User satisfaction: a satisfaction questionnaire meas-
ured whether entering the date was perceived as being
comfortable and efficient.
4.3. Participants. A total of n=172 subjects participated
in the study. Fifty-three of the participants were male, 113
were female, and six did not specify their gender. The mean
age was 30.33 years (SD =12.07), with the youngest person
being 15 and the oldest 68 years old. All participants were
recruited through the University recruitment database and
contacted by e-mail. In this database, people interested in
participating in user studies can leave their contact address.
As an incentive, they had the chance of winning an iPod Shuffle
or one out of 10 USB memory sticks.
4.4. Procedure. The study was conducted online. The authors
selected five arbitrary dates that were used for the experi-
ment. Each of these five dates was presented to each partic-
ipant in all six design versions. This led to a total of 30
cycles for each participant (5 dates × 6 designs). All cycles
were presented to participants in random order to counter
learning effects.
First, the participants received a short introduction and
instruction. They were told that they would see 30 tasks,
in which a date would be presented, and that they would need
to copy this date as quickly as possible with the provided input mechanism.
Figure 2: Example of how a task was presented to users (translated by the authors).
After acknowledging this instruction, the study started. The
presentation on each screen consisted of a date to fill in and
the corresponding input field. Figure 2 shows an example for
version 1 (separate). The task was to fill in the date presented
in the input field as quickly as possible and to click on the
button to proceed to the next screen. In line with the stated
format requirements, dates had to be entered using two digits
for the day and the month and four digits for the year.
At the end, a post-test questionnaire appeared where each
design version was presented again on the same screen, with
the questions "Filling in the date was comfortable" and "I
could fill in the date quickly and efficiently" placed under
each version. Participants had to answer these questions for
each design on a six-point Likert scale (scale: 1 = does not
apply; 6 = applies). Finally, participants were asked for their
age and gender and thanked for their participation.
5. Results
All data were checked for outliers (difference larger than
three standard deviations), normal distribution, and linear-
ity. For a better fit to these criteria, response time was log-
transformed. To analyze the differences between the six ver-
sions, the mean for the dependent variables of the five dates
was calculated for each input type. In version 6 (calendar),
where there was the possibility of choosing the date via the
pop-up calendar, only data where the calendar was used
were included into the calculation (n=126; it was possible
to enter the date without using the calendar by typing
directly into the date field). The outlier analysis revealed that
one subject took an abnormally long time to answer ver-
sion 2 (drop-down). Therefore, this subject was excluded
from further analyses. An alpha level of .05 was used for all
statistical tests.
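The screening procedure described above (log transform, then a three-standard-deviation cutoff) can be sketched with standard-library tools; the response times below are fabricated placeholder values for illustration only, not the study's data:

```python
import math
import statistics

def outliers_after_log(times_s: list[float]) -> list[float]:
    """Return response times lying more than three standard deviations
    from the mean after a log transform, mirroring the screening step."""
    logs = [math.log(t) for t in times_s]
    mu = statistics.mean(logs)
    sd = statistics.stdev(logs)
    return [t for t, lt in zip(times_s, logs) if abs(lt - mu) > 3 * sd]

# Placeholder response times in seconds (illustrative only); one
# participant's extreme value stands out after the transform.
sample = [3.4, 3.7, 3.5, 4.1, 3.6, 3.8, 3.9, 3.5, 3.6, 3.7,
          3.4, 3.8, 3.6, 3.5, 3.9, 4.0, 3.7, 3.6, 3.8, 240.0]
print(outliers_after_log(sample))  # [240.0]
```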
5.1. Errors: Wrong Format. The number of format errors was
not normally distributed and the assumption of variance
homogeneity was violated. Therefore, to test whether the six
design versions differed in the number of answers that were
given in a wrong format (see Section 4.1), a nonparametric
Friedman ANOVA was used. Results indicate that there were
significant differences between the six versions,
χ²r(5) = 168.864, p < .001. As shown in Table 1, version 2 (drop-
down) and version 6 (calendar) performed significantly
better than the other four versions. They both had zero
entries in a wrong format, because these versions make it
impossible to enter a date in a wrong format. There was no
difference between the other four versions; they all led to the
same number of incorrect date formats.
5.2. Response Time. Response time data were not normally
distributed and therefore log-transformed. To test whether
there were differences between the six versions in the time
needed to fill in the dates, a one-way ANOVA for related
samples was conducted. Because the sphericity assumption
was violated, degrees of freedom were adjusted using Green-
house-Geisser. Again, for version 6 (calendar) only data
where the calendar was used were included in the analysis.
The global analysis revealed significant differences in the log-
transformed mean response time, F(2.72, 339.49) =
112.07, p < .001, with versions 2 (drop-down) and 6 (calen-
dar) requiring more time to be filled in than the other four,
F(1, 125) = 290.90, p < .001. The fastest performance was
reached with versions 3 (left) and 5 (inside), which were both
faster than versions 1 (separate) and 4 (inside, permanent),
F(1, 125) = 72.88, p < .001. Mean values for the time needed
to fill in the dates are shown in Table 2.
5.3. Errors: Wrong Dates. Data did not meet assumptions of
distribution and sphericity. Therefore, again a nonparamet-
ric Friedman ANOVA was applied to compare the number
of wrong dates entered. Table 1 shows the mean values for
all groups. The six versions differed significantly, χ²r(5, N =
126) = 76.63, p = .001. Post hoc analysis revealed that in
version 6 (calendar) more dates that differed from the ones
required (wrong dates, see Section 4.1) were entered than in
the other five versions (see Table 1).
5.4. Satisfaction, Efficiency, and Conformance Ratings. Con-
cerning the question whether the action of entering the
dates was perceived as being comfortable, the global analysis
revealed significant differences between the six versions,
χ²r(5, N = 121) = 69.622, p < .001, with version 4 (inside,
permanent) being perceived as significantly less comfortable
than each of the other versions. There was no significant dif-
ference between the other five versions regarding perceived
comfortableness (see Table 3). Significant differences were
found between the six versions for the perceived efficiency
of entering the dates, χ²r(5, N = 116) = 78.016, p < .001.
Again, version 4 (inside, permanent) was perceived as being
less efficient than the other versions. Version 2 (drop-down)
was perceived as being less efficient than versions 3 (left) and
5 (inside). In addition, version 3 (left) was rated significantly
more efficient than the calendar (see Table 4).
Table 1: Statistical parameters for number of errors: entries in a wrong format and wrong date.

Version                          Wrong format         Wrong date
                                 M (%)     SD         M (%)    SD
Version 1 (separate)             18.5      30.5       3.0      8.1
Version 2 (drop-down)            0         0          2.1      7.6
Version 3 (left)                 22.3      33.5       1.6      5.6
Version 4 (inside, permanent)    22.3      32.8       1.7      5.7
Version 5 (inside)               25.7      32.5       1.9      7.5
Version 6 (calendar)             0         0          18.3     29.7
Table 2: Statistics for time needed to fill in the dates.

Version                          M (s)     SD
Version 1 (separate)             3.68      2.69
Version 2 (drop-down)            6.94      2.88
Version 3 (left)                 3.61      2.82
Version 4 (inside, permanent)    4.15      2.49
Version 5 (inside)               3.41      2.32
Version 6 (calendar)             5.03      5.40
6. Discussion
6.1. Discussion of the Findings. The results show that no for-
matting errors occurred with the drop-down and calendar
entry options. However, this benefit came at the cost of
longer input times. Concerning the other four versions, none
of them outperformed another regarding the mean number
of registered formatting errors. There is a trade-off between the
number of formatting errors and completion time. If it is
crucial that the requested date is entered in the correct format
(e.g., the date for a flight when buying an online ticket),
either the calendar or the drop-down version can be used at
the expense of more time. If, on the other hand, completion
time is a more important factor than correctly formatted
answers (e.g., optional fields like one’s date of birth), one
of the other versions can be considered. When analyzing
completion time, versions 3 (left) and 5 (inside) performed
best and received the highest satisfaction ratings. Therefore,
when formatting errors are secondary, these two versions can
be recommended. In version 5 (inside), where the format
advice disappears as soon as the field is activated, users
might accidentally delete the format requirement using the
tab key. This is a legitimate argument, which could not be
tested in this study as there was only one input field per
screen, and tabbing was not possible or necessary. Version 1
(separate), which has been proposed by Beaumont et al. [4]
and Christian et al. [16], did not lead to fewer formatting
errors and took more time to be filled in than versions 3
(left) and 5 (inside). Version 4 (inside, permanent) was rated
as being significantly less comfortable and efficient than the
other versions. As it does not seem to prevent formatting
errors any better than other versions, using this type of date
entry field should be avoided. Table 4 gives a performance
overview for the tested versions.
Concerning the difference between the drop-down and cal-
endar versions, using the drop-down menu to ensure that
fewer errors occur might have some advantages. With the
calendar interface, more incorrect dates were registered (see
Section 5.3). With calendars, it is usually possible to enter
the date manually in the field without using the calendar.
Therefore, it may happen that people either do not see the
calendar icon or decide not to use it. In this study, 28% of
participants never used the calendar and only 25% used it
for all five dates that they had to enter. If a designer wants
to force the use of the calendar, it has to be implemented
in such a way that the calendar automatically pops up
when clicking into or activating the corresponding input
field. Also, the calendar is the only interface element that
requires the usage of a mouse: in situations where a mouse
is unavailable (e.g., mobile applications) or the form is used
by a handicapped user population, this version might bear
some serious disadvantages compared to regular drop-down
menus. A calendar that can be activated by the user can be
very helpful for date entries where it is important to know
the exact weekday of the date. This type of calendar is, for
example, commonly used on travel websites to book flights
and hotels. However, in cases where the required date lies far
in the future or past, it is often not practical to use a calendar
to enter a date, because it will require many clicks to arrive at
the desired timeframe.
The reported findings should be tested in a complete
form to see whether these rules also apply in a “natural
environment”. The interaction with the labels should be
taken into account, especially in version 3 (left), where the
label and the formatting advice are placed to the left of the
input field. In this case, it should also be tested as to whether
the formatting advice should be associated with the label or
the input field.
6.2. Limitations. This study helps to clear up an important
question regarding the design of usable forms: how should
date entry input fields be presented to reduce errors and
processing time, and to increase user satisfaction? At the
same time, it must be stressed that the presented findings
have to be regarded as a first step. The study was conducted
in a rigorous laboratory setting. First, all chosen tasks were
artificial: participants did not have to enter real dates
retrieved from their memory. They simply had to copy
dates from an instruction into a user interface; it remains
to be seen if these results scale to more realistic settings.
Table 3: Subjective evaluation of the six versions.

Version                          Comfortableness      Speed/Efficiency
                                 M        SD          M        SD
Version 1 (separate)             3.75     1.71        3.80     1.71
Version 2 (drop-down)            3.65     1.85        3.30     1.74
Version 3 (left)                 4.04     1.44        4.23     1.44
Version 4 (inside, permanent)    2.66     1.38        2.72     1.39
Version 5 (inside)               3.95     1.39        4.10     1.50
Version 6 (calendar)             3.70     1.82        3.35     1.73

6-point Likert scales, 1 = does not apply; 6 = applies.
Table 4: Performance overview of the six versions.

Version                          Avoid wrong format   Avoid wrong date   Time efficiency   User satisfaction
Version 1 (separate)             0                    +                  0                 0
Version 2 (drop-down)            +                    +                  −                 0
Version 3 (left)                 0                    +                  +                 +
Version 4 (inside, permanent)    0                    +                  0                 −
Version 5 (inside)               0                    +                  +                 +
Version 6 (calendar)             +                    −                  −                 0
Second, the tasks were very repetitive, something that is
usually not found in online forms. Third, the interactions
were not embedded in a realistic setting, like for instance a
shopping or registration process. The ecological validity of
the presented findings is therefore low. Fourth, we instructed
the participants to enter the dates as quickly as possible. This
performance-oriented setting may differ from a more relaxed,
natural task setting.
To overcome these limitations, future studies may vary
interface elements within a real setting using real tasks.
Another question is whether these results can be generalized
beyond web forms to, for example, forms in mobile applications.
6.3. Future Outlook. There are many open questions
concerning usable forms that remain to be answered. In
recent years, new developments such as Web 2.0 have led to
new ways of implementing interactions on the Internet. Users,
for example, are becoming increasingly accustomed to receiving
immediate feedback in web forms through the use of AJAX
technology. It remains to be seen whether these technologies
can be used to enhance date selection in online forms.
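As a minimal sketch of what such immediate feedback could check on the client side (the dd.mm.yyyy format and the function name are assumptions for illustration, not the interface tested in this study), a validator might distinguish format errors from impossible calendar dates:

```typescript
// Illustrative sketch: immediate client-side checking of a typed date,
// distinguishing the two error classes examined in this study
// (wrong format vs. wrong, i.e. impossible, date).

function checkDateEntry(value: string): "ok" | "format error" | "invalid date" {
  // Format check: the entry must match dd.mm.yyyy exactly.
  const match = /^(\d{2})\.(\d{2})\.(\d{4})$/.exec(value);
  if (match === null) return "format error";
  const day = Number(match[1]);
  const month = Number(match[2]);
  const year = Number(match[3]);
  // Calendar check: JavaScript's Date rolls impossible dates over
  // (e.g. 31.02.2011 becomes 03.03.2011), so a round-trip comparison
  // detects dates that do not exist.
  const candidate = new Date(year, month - 1, day);
  const valid =
    candidate.getFullYear() === year &&
    candidate.getMonth() === month - 1 &&
    candidate.getDate() === day;
  return valid ? "ok" : "invalid date";
}
```

Wired to a field's input event, such a check could report both error classes as the user types; a drop-down or calendar widget, by contrast, rules out format errors by construction, which mirrors the pattern in Table 4.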
There is a growing body of empirical research and best
practice recommendations by usability experts to achieve
usable forms on the World Wide Web. Most studies—like the
one presented here—choose to explore one specific aspect
or interface element to find the optimal solution. Usually
this is done in a laboratory situation, using abstract forms
with artificial tasks. In the near future, this knowledge needs
to be consolidated in practical guidelines. These guidelines
must be empirically tested using real forms in realistic user
situations, to see whether they really lead to better usability,
manifested in faster form-completion time, fewer errors,
higher user satisfaction, and reduced dropout rate.
References
[1] L. Wroblewski, Web Form Design: Filling in the Blanks, Rosenfeld
Media, 2008.
[2] J. Herman, “A process for creating the business case for
user experience projects,” in Proceedings of the Conference on
Human Factors in Computing Systems, pp. 1413–1416, ACM,
New York, NY, USA, 2004.
[3] J. Bargas-Avila, O. Brenzikofer, S. Roth, A. Tuch, S. Orsini, and
K. Opwis, “Simple but crucial user interfaces in the world wide
web: introducing 20 guidelines for usable web form design,” in
User Interfaces (INTECH ’10), R. Matrai, Ed., pp. 1–10, 2010.
[4] A. Beaumont, J. James, J. Stephens, and C. Ullman, Usable
Forms for the Web, Glasshaus, Birmingham, UK, 2002.
[5] J. Garrett, The Elements of User Experience, New Riders,
Indianapolis, Ind, USA, 2002.
[6] J. Nielsen, “Drop-down menus: use sparingly,” 2000, http://
www.useit.com/alertbox/20001112.html.
[7] S. Miller and C. Jarrett, “Should I use a drop-down? Four
steps for choosing form elements on the web,” 2001, http://
www.formsthatwork.com/files/Articles/dropdown.pdf.
[8] M. Linderman and J. Fried, Defensive Design for the Web: How
to Improve Error Messages, Help, Forms, and Other Crisis Points,
New Riders Publishing, Thousand Oaks, Calif, USA, 2004.
[9] T. Wilhelm and C. Rehmann, “Nutzergerechte Formulargestaltung,”
2006, http://www.eresult.de/studien_artikel/forschungsbeitraege/formulargestaltung.html.
[10] S. L. Pauwels, C. Hübscher, S. Leuthold, J. A. Bargas-Avila,
and K. Opwis, “Error prevention in online forms: use color
instead of asterisks to mark required-fields,” Interacting with
Computers, vol. 21, no. 4, pp. 257–262, 2009.
[11] T. Tullis and A. Pons, “Designating required vs. optional input
fields,” in Proceedings of the Conference on Human Factors in
Computing Systems, pp. 259–260, ACM, New York, NY, USA,
1997.
[12] M. Penzo, “Label placement in forms,” 2006, http://www.
uxmatters.com/MT/archives/000107.php.
[13] S. Das, T. McEwan, and D. Douglas, “Using eye-tracking to
evaluate label alignment in online forms,” in Proceedings of
the 5th Nordic Conference on Human-Computer Interaction:
Building Bridges, pp. 451–454, ACM, 2008.
[14] C. Jarrett, “Label placement in forms: what’s best?” in Proceedings
of the 22nd British HCI Group Annual Conference on People
and Computers: Culture, Creativity, Interaction - Volume 2, pp.
229–230, British Computer Society, 2008.
[15] D. Robinson, “Better web forms,” 2003, http://www.7nights
.com/dkrprod/gwt_four.php.
[16] L. M. Christian, D. A. Dillman, and J. D. Smyth, “Helping
respondents get it right the first time: the influence of
words, symbols, and graphics in web surveys,” Public Opinion
Quarterly, vol. 71, no. 1, pp. 113–125, 2007.
[17] M. P. Couper, M. W. Traugott, and M. J. Lamias, “Web survey
design and administration,” Public Opinion Quarterly, vol. 65,
no. 2, pp. 230–253, 2001.
[18] S. K. Card, T. P. Moran, and A. Newell, “The keystroke-level
model for user performance time with interactive systems,”
Communications of the ACM, vol. 23, no. 7, pp. 396–410, 1980.
[19] B. Healey, “Drop downs and scroll mice: the effect of response
option format and input mechanism employed on data quality
in web surveys,” Social Science Computer Review, vol. 25, no. 1,
pp. 111–128, 2007.
[20] A. Hogg and J. J. Masztal, “Drop-down, radio buttons, or
fill-in-the-blank? Effects of attribute rating scale type on web
survey responses,” in Proceedings of the ESOMAR Congress—
Marketing Transformation (ESOMAR ’01), Rome, Italy, 2001.
[21] D. Heerwegh and G. Loosveldt, “An evaluation of the effect
of response formats on data quality in web surveys,” Social
Science Computer Review, vol. 20, no. 4, pp. 471–484, 2002.
[22] J. A. Bargas-Avila, O. Brenzikofer, A. N. Tuch, S. P. Roth, and K.
Opwis, “Working towards usable forms on the world wide web:
optimizing multiple selection interface elements,” Advances in
Human-Computer Interaction, vol. 2011, Article ID 347171, 6
pages, 2011.
[23] J. Nielsen, “Error message guidelines,” 2001, http://www.useit
.com/alertbox/20010624.html.
[24] J. A. Bargas-Avila, G. Oberholzer, P. Schmutz, M. de Vito, and
K. Opwis, “Usable error message presentation in the world
wide web: do not show errors right away,” Interacting with
Computers, vol. 19, no. 3, pp. 330–341, 2007.
[25] J. A. Bargas-Avila, S. Orsini, H. Piosczyk, D. Urwyler, and K.
Opwis, “Enhancing online forms: use format specifications for
fields with format restrictions to help respondents,” Interacting
with Computers, vol. 23, no. 1, pp. 33–39, 2011.