Standards for Internet-Based
Experimenting
Ulf-Dietrich Reips
University of Zürich, Switzerland
Abstract. This article summarizes expertise gleaned from the first years of Internet-based experimental research and
presents recommendations on: (1) ideal circumstances for conducting a study on the Internet; (2) what precautions have to
be undertaken in Web experimental design; (3) which techniques have proven useful in Web experimenting; (4) which
frequent errors and misconceptions need to be avoided; and (5) what should be reported. Procedures and solutions for
typical challenges in Web experimenting are discussed. Topics covered include randomization, recruitment of samples,
generalizability, dropout, experimental control, identity checks, multiple submissions, configuration errors, control of moti-
vational confounding, and pre-testing. Several techniques are explained, including “warm-up,” “high hurdle,” password
methods, “multiple site entry,” randomization, and the use of incentives. The article concludes by proposing sixteen stan-
dards for Internet-based experimenting.
Key words: Internet-based experimenting, Web experiment, standards, experiment method, psychological experiment,
online research, Internet research, Internet science, methodology
Introduction
We are in the midst of an Internet revolution in ex-
perimental research. Beginning in the mid-nineties
of the last century, using the world-wide network for
experimental research became the method of choice
for a small number of pioneers. Their early work was
conducted soon after the invention of forms on Web
pages established user-server interaction (Musch &
Reips, 2000). This medium holds the promise to
achieve further methodological and procedural ad-
vantages for the experimental method and a pre-
viously unseen ease of data collection for scientists
and students.
Several terms are used synonymously for Internet-based experiments: Web experiment, on(-)line experiment, Web-based experiment, World Wide Web (WWW) experiment, and Internet experiment. Here the term Web experiment will be used most often, because historically this term was used first and experiments delivered via the Web are clearly the most accessible and popular, as experiments using Internet services other than the Web (such as e-mail, ICQ, Telnet, Gopher, FTP, etc.) are rarely conducted. Many of the issues discussed in this article apply to experiments conducted via these services as well, and even to nonexperimental Internet-based methods, such as Web surveying or nonreactive data collection (for examples see Reips & Bosnjak, 2001).

I would like to thank Tom Buchanan, William C. Schmidt, Jochen Musch, Kevin O'Neil, and an anonymous reviewer for very helpful comments on earlier versions of this article. Prof. K. C. Klauer was the action editor on this paper.

DOI: 10.1027//1618-3169.49.4.243
2002 Hogrefe & Huber Publishers. Experimental Psychology 2002; Vol. 49(4): 243-256
Web experiments may be used to validate results
from field research and from laboratory experiments
(see Krantz & Dalal, 2000; Pohl, Bender, & Lach-
mann, 2002, this volume; Reips, 1997; Reips,
Morger, & Meier, 2001), or they may be used for
new investigations that could only be feasibly accom-
plished in this medium. For instance, in 2000,
Klauer, Musch, and Naumer published an article on
belief bias in Psychological Review that contains an
example of a study that reasonably could only be
conducted as an Internet-based experiment, because
several thousand participants were needed to obtain
accurate estimates of model parameters. Similarly,
Birnbaum (2001) recruited experts in decision mak-
ing via a decision making researchers’ e-mail list and
sent these experts to a Web page where many of them
contradicted their own theories in a Web experiment
on choices between gambles. Experiments such as
these would prove impractical and burdensome if de-
livered in another medium.
In addition to the benefit of increased access to
participants, Internet-based experimenting always in-
cludes the possibility to use the programmed experi-
mental materials in a traditional laboratory setting
(Reips, 1997). In contrast, a laboratory experiment,
even if built with Internet software technologies, can-
not simply be turned into a Web experiment by con-
necting it to the Internet. Successful and appropriate
use of the Web medium requires careful crafting and
demands methodological, procedural, technical, and
ethical considerations to be taken into account!
While laboratory experiments can be built directly
with Internet software technologies, it seems wise to
conceptualize experiments as Web experiments
whenever possible, given their many advantages.
Sheer numbers, reduced cost, and accessibility of
specific participants are only a few of the Internet-
specific properties in Web experimenting that create
an environment that has been greeted with great en-
thusiasm by experimental psychologists.
When and When not to Conduct an
Experiment on the Internet?
Before standards for Internet-based experimenting
can be established, a few words should be devoted to
the question of criteria that should be used in decid-
ing the mode an experiment is best conducted in.
Implementing an Experiment: A General
Principle
Because many laboratory experiments are conducted
on computers anyway, nothing is lost when an ex-
periment is designed Web-ready: It can always also
be used in the laboratory. In distributed Web experi-
menting, local collaborators recruit and assist partici-
pants who all log onto the same Internet-based ex-
periment (Reips, 1999).
Solving Long-Standing Issues in
Experimental Research
The experimental method has a long and successful
tradition in psychological research. Nevertheless, the
method has been criticized, particularly in the late
1960s and early 1970s (e.g., Chapanis, 1970; Orne,
1962; Rosenthal, 1966; Rosenthal & Fode, 1973; Ro-
senthal & Rosnow, 1969; Smart, 1966). This criti-
cism is aimed in part at the validity of the method
and in part at improper aspects of its realization; for
instance experimenter effects, volunteer bias, and
low power. A solution for many of these problems
could be the implementation of experiments as Web
experiments. In total, about eighteen advantages
counter seven disadvantages of Web experimenting
(Reips, 2000, see Table 1).
Why Experimenters Relish Internet-Based
Experimenting
Speed, low cost, external validity, experimenting
around the clock, a high degree of automation of the
experiment (low maintenance, limited experimenter
effects), and a wider sample are reasons why the In-
ternet may be the setting of choice for an experiment.
Seventy percent of those who have conducted a Web experiment say they will certainly use this method again (the other 30% say maybe). This result from a survey
of many of the early pioneers in Web experimenting
conducted by Musch and Reips (2000) is indirect
evidence that learning and using the methods of In-
ternet-based experimenting is certainly worthwhile.
Surveyed Web experimenters rated “large number of
participants” and “high statistical power” as the two
most important factors why they made the decision
to conduct a Web experiment.
The survey conducted by Musch and Reips is in
itself a good example that Internet-based research
may be the method of choice if a special subpopula-
tion is to be reached. Access to specific groups can
be achieved through Internet newsgroups (Hewson,
Laurent, & Vogel, 1996; Schmidt, 1997), Web site
guestbooks, chat forums, or topic-related mailing
lists. Eichstaedt (2002, Experiment 1) recruited per-
sons using either Macintosh or Windows operating
systems for his Web experiment via newsgroups de-
voted to the discussion of issues related to these op-
erating systems. The participants performed a Java-
based tachistoscopic word recognition task that in-
cluded words typically used in ads for these com-
puter systems. Word recognition was faster for words
pertaining to a participant’s computer system. Some
target groups may be easier to study via Internet,
because persons belonging to this group will only
reveal critical information under the protection of an-
onymity, for example drug dealers (Coomber, 1997),
or Ecstasy users (Rodgers, Buchanan, Scholey, Hef-
fernan, Ling, & Parrott, 2001).
When Not to Conduct an Experiment on
the Internet
Obviously, Web experiments are not the most suitable method for all research projects. For instance, whenever physiological parameters of participants are to be measured directly, specialized hardware is required, or when a tightly controlled setting is important, then laboratory experiment administration is still required.

Table 1. Web Experiments: Advantages, Disadvantages, and Solutions (Adapted from Reips, 2000)

Advantages of Web Experiments:
(1) Ease of access to a large number of demographically and culturally diverse participants (for an example of a study conducted in three languages with 440 women from more than nine countries see in this volume Bohner, Danner, Siebler, & Samson, 2002);
(2) . . . as well as to rare and specific participant populations (Schmidt, 1997).
(3) Better generalizability of findings to the general population (Horswill & Coster, 2001; Reips, 1995).
(4) Generalizability of findings to more settings and situations (because of high external validity, e.g., Laugwitz, 2001).
(5) Avoidance of time constraints.
(6) Avoidance of organizational problems, such as scheduling difficulties, as thousands of participants may participate simultaneously.
(7) Highly voluntary participation.
(8) Ease of acquisition of just the optimal number of participants for achieving high statistical power while being able to draw meaningful conclusions from the experiment.
(9) Detectability of motivational confounding.
(10) Reduction of experimenter effects.
(11) Reduction of demand characteristics.
(12) Cost savings of laboratory space, personnel hours, equipment, administration.
(13) Greater openness of the research process (increases replicability).
(14) Access to the number of nonparticipants.
(15) Ease of comparing results with results from a locally tested sample.
(16) Greater external validity through greater technical variance.
(17) Ease of access for participants (bringing the experiment to the participant instead of the opposite).
(18) Public control of ethical standards.

Disadvantages with Solutions:
(1) Possible multiple submissions: can be avoided or controlled by collecting personal identification items, by checking internal consistency as well as date and time consistency of answers (Schmidt, 1997), and by using techniques such as sub-sampling, participant pools, or handing out passwords (Reips, 1999, 2000, 2002b). There is evidence that multiple submissions are rare in Web experiments (Reips, 1997).
(2) Generally, experimental control may be an issue in some experimental designs, but is less of an issue when using between-subjects designs with random distribution of participants to experimental conditions.
(3) Self-selection can be controlled by using the multiple site entry technique.
(4) Dropout is always an issue in Web experiments. However, dropout can be turned into a detection device for motivational confounding. Also, dropout can be reduced by implementing a number of measures, such as promising immediate feedback, giving financial incentives, and by personalization (Frick, Bächtiger, & Reips, 2001).
(5) The reduced or absent interaction with participants during a Web experiment creates problems if instructions are misunderstood. Possible solutions are pretests of the materials and providing the participants with the opportunity for giving feedback.
(6) The comparative basis for the Web experiment method is relatively low. This will change.
(7) External validity of Web experiments may be limited by their dependence on computers and networks. Also, many studies cannot be done on the Web. However, where comparable, results from Web and laboratory studies are often identical (Krantz & Dalal, 2000).

A further basic limitation lies in Web experiments' dependency on computers and networks, having psychological, technical, and methodological implications. Psychologically, participants at computers
will likely be subject to self-actualization and other
influences in computer-mediated communication
(e.g., Bargh, McKenna, & Fitzsimons, 2002; Joinson,
2001). Technically, more variance is introduced in
the data when collected on the Internet than in the
laboratory, because of varying network connection
speed, varying computer speed, multiple software
running in parallel, etc. (Reips, 1997, 2000).
On first view one may think that the Internet al-
lows for easy setup of intercultural studies, and it is
certainly possible to reach people born into a wide
range of cultures. However, it is a widespread misun-
derstanding that Internet-based cultural research
would somehow render unnecessary the use of many
techniques that have been developed by cross-cul-
tural psychologists (such as translation/back-translation, use of the studied cultures' languages,
and extensive communication and pretests with peo-
ple from the cultures that are examined). Issues of
access, self-selection, and sampling need to be re-
solved. In many cultures, English-speaking computer
users are certainly not representative of the general
population. Nevertheless, these people may be very
useful in bridging between cultures, for instance, in
cooperative studies based on distributed Web experi-
menting.
Finally, for ethical reasons, many experiments
cannot be conducted that require an immediate de-
briefing and adjunctive procedures through direct
contact whenever a participant terminates participa-
tion.
In the following section we will turn to central
issues and resulting proposals for standards that
specifically apply to Internet-based experimenting.
Checks and Solutions for
Methodological Challenges in Web
Studies
The following section contains methodological and
technical procedures that will reduce or alleviate is-
sues that are rooted within the very nature of In-
ternet-based studies.
Web Experiment Implementation
Data collection techniques on the Internet can be po-
larized into server-side and client-side processing.
Server-side methods (a Web server, often in combi-
nation with a database application, serves Web pages
that can be dynamically created depending on a us-
er’s input, Schmidt, 2000) are less prone to platform-
dependent issues, because dynamic procedures are
performed on the server so that they are not subject
to technical variance. Client-side methods use the
processing power of the participants’ computers.
Therefore, time measurements do not contain error
from network traffic and problems with server avail-
ability are less likely. Such measurements do rely on
the user’s computer configuration however (Schmidt,
2000). Combinations of server-side and client-side
processing methods are possible; they can be used to
estimate technical error variance by comparison of
measurements.
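The comparison of paired server-side and client-side measurements can be sketched as follows; the function and the sample values are hypothetical illustrations, not a method prescribed by the article.

```javascript
// Sketch: estimating technical error variance by comparing paired
// client-side and server-side response-time measurements (in ms).
// In practice the client-side values might come from JavaScript
// timestamps and the server-side values from the intervals between
// successive page requests logged by the server.
function technicalError(clientRTs, serverRTs) {
  if (clientRTs.length !== serverRTs.length) {
    throw new Error("need paired measurements");
  }
  // Server-side times include network transmission, so each difference
  // reflects the technical overhead added on top of the true response.
  const diffs = clientRTs.map((c, i) => serverRTs[i] - c);
  const mean = diffs.reduce((a, b) => a + b, 0) / diffs.length;
  const variance =
    diffs.reduce((a, d) => a + (d - mean) ** 2, 0) / (diffs.length - 1);
  return { meanDifference: mean, variance: variance };
}

// The spread (variance) of the differences indicates how much error
// the network and server introduce into the timing data.
console.log(technicalError([512, 640, 498], [842, 1010, 760]));
```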
General experimental techniques apply to Web
experimentation as well. For example, randomized
distribution of participants to experimental condi-
tions is a measure against confounding and helps
avoid order effects. Consequently, in every Web
experiment at least one randomization technique
should be used. In order of reliability, these are
roughly: (1) CGI or other server-side solutions, (2)
client-side Java, (3) Javascript, and (4) “the birthday
technique” (participants pick their experimental con-
ditions by mouse-clicking on their birthday or birth-
day month; Birnbaum, 2000; Reips, 2002b).
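For illustration, the client-side JavaScript variant (technique 3 above) can be sketched as follows; the condition names are hypothetical, and the random source is made injectable purely so the assignment logic can be checked.

```javascript
// Sketch: client-side random assignment of a participant to one of
// several between-subjects conditions.
const conditions = ["condition-a", "condition-b"];

function assignCondition(rand = Math.random) {
  // A uniform draw over the condition list; across many participants
  // this approximates an even distribution across conditions.
  return conditions[Math.floor(rand() * conditions.length)];
}

console.log(assignCondition());
```

A server-side (CGI) implementation would apply the same draw before the page is delivered, which keeps the assignment independent of the participant's browser configuration and is why it ranks as the most reliable option.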
Generating an Experiment
For several years, Web experimenters created their experimental materials and procedures “by hand.” With enough knowledge about HTML and the services and structures available on the Internet, conducting a Web experiment was only moderately complicated. However, many researchers hesitate before
acquiring new technical skills. Fortunately, several
recently developed applications considerably ease the
development of a Web experiment. For within-subjects designs, Birnbaum (2000) developed FactorWiz,[1] a Web page that creates Web pages with items combined according to previously defined factorial designs. WEXTOR,[2] by Reips and Neuhaus (2002), a Web-based tool for generating and visualizing experimental designs and procedures, guides the user through a ten-step program of designing an experiment that may include between-subjects, within-subjects, and quasi-experimental (natural) factors. WEXTOR automatically creates the experiments in such a way that certain methodological requirements of Internet-based experimentation are met (for example, nonobvious file naming [Reips, 2002a] is implemented for experimental materials and conditions, and a session ID is generated that helps identify submissions by the same participant).
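The session-ID idea can be sketched as follows; the ID format and the duplicate check are illustrative assumptions, not WEXTOR's actual implementation.

```javascript
// Sketch: generating a per-participant session ID and using it to flag
// repeated submissions in a collected data set.
function makeSessionId() {
  // Timestamp plus a random suffix; hypothetical format for illustration.
  return Date.now().toString(36) + "-" + Math.random().toString(36).slice(2, 8);
}

function findDuplicates(submissions) {
  // submissions: array of { sessionId, ... } records. The first record
  // per ID is kept; later records with the same ID are flagged as
  // possible repeat submissions by the same participant.
  const seen = new Set();
  const duplicates = [];
  for (const s of submissions) {
    if (seen.has(s.sessionId)) duplicates.push(s);
    else seen.add(s.sessionId);
  }
  return duplicates;
}
```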
Recruitment
A Web experiment can be announced as part of the collection of Web studies by the American Psychological Society.[3] This Web site is maintained by John Krantz. A virtual laboratory for Web experiments is the Web Experimental Psychology Lab.[4] This Web site is visited by about 4500 potential participants a month (Reips, 2001). Internet-based experiments should always be linked to the web experiment list,[5] a Web site that is intended to serve as an archive of links and descriptions of as many experiments conducted on the Internet as possible. Other ways of recruiting participants that may be combined with linking to experiment Web sites are the use of online panels, newsgroups, search engines, banners, and e-mail lists. Also, participants for Web experiments can be recruited offline (e.g., Bamert, 2002; Eichstaedt, 2002, this volume; Pohl et al., 2002; Reips et al., 2001; Ruppertsberg, Givaty, Van Veen, & Bülthoff, 2001).

[1] http://psych.fullerton.edu/mbirnbaum/programs/factorWiz.htm
[2] http://www.genpsylab.unizh.ch/wextor/index.html
Generalizability
Self-Selection
In many areas of Psychology self-selection is not
considered much of a problem in research because
theory testing is the underlying model of epistemol-
ogy and people are not considered to vary much on
the essential criteria, for example, in research on cog-
nition and perception. However, at least in more socially oriented research, self-selection may interfere with the aim of the study at hand and limit its generalizability. The presence and impact of self-selec-
tion in an Internet-based study can be tested by using
the multiple site entry technique (Reips, 2000,
2002b). Via log file analysis it is possible to deter-
mine a Web experiment’s degree of appeal for parti-
cipation for each of the samples associated with re-
ferring Web sites.
The multiple site entry technique can be used in
any Internet-based study (for a recent example of
longitudinal trauma survey research implementing
this technique see Hiskey & Troop, 2002). Several
links to the study are placed on Web sites, in discus-
sion groups, or other Internet forums that are likely
to attract different types of participants. Placing iden-
tifying information in the published URLs and ana-
lyzing different referrer information in the HTTP
protocol can be used to identify these sources (see
Schmidt, 2000). Later the data sets that were col-
lected are compared for differences in relative degree
of appeal (measured via dropout), demographic data,
central results, and data quality, as a function of referring location. Consequently, an estimate of biasing potential through self-selection can be calculated (Reips, 2000). If self-selection is not found to be a problem, results from psychological experiments should be the same no matter where participants are recruited, and should therefore show high generalizability.

[3] http://psych.hanover.edu/APS/exponnet.html
[4] http://www.genpsy.unizh.ch/Ulf/Lab/WebExpPsyLab.html
[5] http://www.genpsy.unizh.ch/Ulf/Lab/webexplist.html
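The multiple site entry technique can be sketched as follows; the query-parameter name "so" and the record format are arbitrary choices for illustration, not part of the published technique.

```javascript
// Sketch: identifying each participant's referring source from an
// entry URL and comparing dropout (non-completion) rates per source.
function sourceFromUrl(url) {
  // Each referring site publishes a link with its own identifier,
  // e.g. http://example.org/exp?so=newsgroup
  return new URL(url).searchParams.get("so") || "unknown";
}

function dropoutBySource(records) {
  // records: array of { url, completed } entries from the log files.
  const stats = {};
  for (const r of records) {
    const s = sourceFromUrl(r.url);
    stats[s] = stats[s] || { n: 0, dropped: 0 };
    stats[s].n += 1;
    if (!r.completed) stats[s].dropped += 1;
  }
  for (const s of Object.keys(stats)) {
    // Relative degree of appeal per referring site, measured via dropout.
    stats[s].dropoutRate = stats[s].dropped / stats[s].n;
  }
  return stats;
}
```

Comparing these per-source statistics (together with demographics and central results) is what allows the biasing potential of self-selection to be estimated.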
Dependence on Technology
Limited generalizability of results from Internet-
based research may also arise due to dependence on
computers and networking technology. Web-based
experimenting can be seen as a form of computer-
mediated communication. Hence, differences found
in comparisons between behaviors in computer-me-
diated situations and face-to-face situations (see, for
example, Buchanan, 2002; Kiesler & Sproull, 1986;
Postmes, Spears, Sakhel, & DeGroot, 2001) need to
be taken into account when results from online re-
search are interpreted.
Advantages
Apart from the challenges to generalizability men-
tioned above, Internet-based experimenting has three
major advantages in this respect:
(1) Increased generalizability through nonlocal sam-
ples with a wider distribution of demographic
characteristics (for a comparison of data from
several Web experiments see Krantz & Dalal,
2000, and Reips, 2001).
(2) “Ecological” validity: “the experiment comes to
the participant, not vice versa” (Reips, 1995,
1997). Participants in Web experiments often re-
main in familiar settings (e.g., at their computer
at home or at work) while they take part in an
Internet-based experiment. Thus, any effects can-
not be attributed to being in an unfamiliar setting.
(3) The high degree of voluntariness: because there
are fewer constraints on the decisions to partici-
pate and to continue participation, the behaviors
observed in Internet-based experiments may be
more authentic and therefore can be generalized
to a larger set of situations (Reips, 1997, 2000).
A high degree of voluntariness is a trade-off for po-
tential effects of self-selection. Voluntariness refers
to the voluntary motivational nature of a person’s
participation, during the initial decision to participate
and during the course of the experiment session. It
is influenced by external factors, for example, the
setting, the experimenter, and institutional regula-
tions. If participants in an experiment subjectively
feel that they are participating entirely voluntarily as
opposed to feeling somewhat coerced into participat-
ing (as is often the case in the physical laboratory),
then they are more likely to initiate and complete an
experiment, and to comply with the instructions.
The participant’s impression of voluntariness re-
duces effects of psychological reactance (actions
following an unpleasant feeling brought about by a
perceived threat to one’s freedom, such as being
“made” to do something by an experimenter); effects
such as careless responding, deliberately false an-
swers, and ceasing participation. Therefore, the drop-
out rate in Web experiments can serve as an indicator
of the reactance potential of an experiment. Gen-
erally, participant voluntariness is larger in Web ex-
periments than in laboratory experiments (Reips,
1997, 2000).
Although Web experiments can introduce a reli-
ance on technology, their improved “ecological” va-
lidity may be seen as a counterweight to this factor
when contemplating arguments for and against one’s choice of method.
Dropout: Avoiding it, Reducing its Impact,
Using it
Dropout curves (dropout progress), or at least drop-
out rates (attrition rates, or the opposite: return rates)
should be reported for all studies conducted on the
Internet, separately for all between-subjects experi-
mental conditions. Dropout may pose similar prob-
lems as self-selection. However, dropout concerns
the end of participation (a comprehensive discussion
of other forms of nonresponse is found in Bosnjak,
2001) instead of a decision to initiate participation.
If participants end their participation selectively, then
the explanatory power of the experiment is severely
compromised, especially if dropout varies systemati-
cally with levels of the independent variable(s) or
with certain combinations of levels. For example, if
the impact of two different tasks is to be measured,
one of which is more boring, then participants are
more likely to drop from that condition. In this case
motivational confounding is present (Reips, 2000,
2002a, 2002b).
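The detection of motivational confounding via differential dropout can be sketched as follows; the record format and the 10-percentage-point threshold are arbitrary illustrations, not a criterion recommended by the article.

```javascript
// Sketch: per-condition dropout rates as a detection device for
// motivational confounding in a between-subjects Web experiment.
function conditionDropout(participants) {
  // participants: array of { condition, lastPage, totalPages } records.
  const byCond = {};
  for (const p of participants) {
    byCond[p.condition] = byCond[p.condition] || { n: 0, dropped: 0 };
    byCond[p.condition].n += 1;
    // Anyone who did not reach the final page counts as a dropout.
    if (p.lastPage < p.totalPages) byCond[p.condition].dropped += 1;
  }
  const rates = {};
  for (const c of Object.keys(byCond)) {
    rates[c] = byCond[c].dropped / byCond[c].n;
  }
  return rates;
}

function suspectConfounding(rates, threshold = 0.1) {
  // A marked difference in dropout between conditions (e.g., the more
  // boring task losing more participants) suggests confounding.
  const values = Object.values(rates);
  return Math.max(...values) - Math.min(...values) > threshold;
}
```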
Two types of evasive measures can be taken to
counter the problem of dropout: (1) reducing drop-
out, and (2) reducing the negative impact of dropout.
Measures to be Taken in Reducing Dropout
Dropout is reduced by all measures that form a moti-
vating counterweight to the factors promoting drop-
out. For instance, some dropout reducing measures
include the announcement of chances for financial
incentives and questions about personal information
at the beginning of the Web experiment (Frick et al.,
2001; Musch & Reips, 2000; but see O’Neil & Pen-
rod, 2001). It is potentially biasing to use scripts that
do not allow participants to leave any items unan-
swered. If the average time needed to fill in an ex-
periment questionnaire is longer than a few minutes,
then data quality may suffer from psychological reac-
tance or even anger by those participants who do not
wish to answer all questions, because some partici-
pants will fill in answers randomly and others will
drop out.
If all or most of the items in a study are placed
on a single Web page then meaningful dropout rates
cannot be reported: we do not know at which item
people decide to drop out. In the terminology of
Bosnjak (2001), such a design is not suited to distin-
guish unit nonresponders from item nonresponders,
item nonresponding dropouts, answering dropouts, lurking dropouts, lurkers, and complete responders.
They all appear as either dropouts or complete re-
sponders, and it becomes impossible to measure a
decision not to participate separately from a decision
to terminate participation. Consequently, a “one-
item-one-screen” design or at least a multipage de-
sign (with a parcel of items on each page) is recom-
mended. Layout, design, loading time, appeal, and
functionality of the experimental materials play a
role as well (Reips, 2000, 2002b; Wenzel, 2001).
Sophisticated technologies and software used in
Internet research (e.g., Javascript, Flash, multimedia
elements such as various audio or streaming video
formats as part of Web experiments, or experiments
written in Authorware or as Java applets) may con-
tribute to dropout (Schmidt, 2000). Potential partici-
pants unable or unwilling to run such software will
be excluded from the start, and others will be
dropped, if resident software on their computer in-
teracts negatively with these technologies
(Schwarz & Reips, 2001). Findings by Buchanan and
Reips (2001) indicate that samples obtained using
such technologies will inevitably incorporate biases:
People using certain technologies differ psychologi-
cally from those who don’t. Buchanan and Reips
showed that there are personality differences between
different groups of respondents: Mac users scored
higher on Openness, and people in the Javascript-
enabled condition had lower average education levels
than those who could or did not have Javascript ena-
bled in their browsers. Consequently, whenever pos-
sible, Internet studies should be conducted using
only the most basic and widely available technology.
Measures to be Taken in Reducing the Negative
Impact of Dropout
Sometimes, there might be no alternative but to avoid the negative impact of dropout by forgoing self-selected Internet participation and choosing to conduct the study with a precommitted offline sample. If the absence of dropout is crucial, a dropout-oriented experimental design may help. Three simple techniques can be used to reduce dropout: high hurdle, seriousness check, and warm-up. All three of these techniques have been repeatedly used in the Web Experimental Psychology Lab (e.g., Musch & Klauer, 2002; Reips, 2000, 2001; Reips et al., 2001).

Table 2. Checklist for the High-Hurdle Technique

Seriousness: Tell participants participation is serious, and that science needs good data.
Personalization: Ask for an e-mail address and/or phone number, and other personal data.
Impression of control: Tell participants that their identity can be traced (for instance, via their computer’s IP address).
Patience (loading time): Use image files to successively reduce the loading time of Web pages.
Patience (long texts): Place most text on the first page of the Web experiment, and reduce the amount page by page.
Duration: Give an estimate of how long participation in the Web experiment will take.
Privacy: Prepare the participants for any sensitive aspects of your experiment (e.g., “you will be asked about your financial situation”).
Preconditions: Name software requirements (and provide hyperlinks for immediate download).
Technical pretests: Perform tests for compatibility of Java, JavaScript, and other technologies, if applicable.
Rewards: Indicate extra rewards for full compliance.
In the high-hurdle technique, motivationally ad-
verse factors are announced or concentrated as close
to the beginning of the Web experiment as possible.
On the following pages, the concentration and im-
pact of these factors should be reduced continuously.
As a result, the highest likelihood for dropout result-
ing from these factors will be at the beginning of
the Web experiment. The checklist in Table 2 shows
factors to be bundled and measures to be taken in
the high-hurdle technique (also see Reips, 2000).
A second precaution that can be taken to reduce
dropout is asking for the degree of seriousness of a
participant’s involvement (Musch & Klauer, 2002) or
for a probability estimate that one will complete the
whole experiment. If it is pre-determined that only
data sets from persons with a motivation for serious
participation will be analyzed, then the resulting
dropout rate is usually much lower.
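The seriousness check amounts to a simple filter over the collected data sets. The sketch below uses invented records and illustrative field names (`serious`, `completed`); the point is only that restricting analysis to participants who declared serious participation typically yields a much lower dropout rate:

```javascript
// Hypothetical per-participant records: the answer to a seriousness item
// asked at the start, and whether the final page was reached.
const records = [
  { id: 1, serious: true,  completed: true  },
  { id: 2, serious: false, completed: false },
  { id: 3, serious: true,  completed: true  },
  { id: 4, serious: false, completed: true  },
];

// Dropout rate = share of participants who did not reach the final page.
const dropoutRate = group =>
  group.filter(r => !r.completed).length / group.length;

// Restrict analysis to serious participants, as pre-determined.
const seriousOnly = records.filter(r => r.serious);

console.log(dropoutRate(records));     // 0.25 in this invented sample
console.log(dropoutRate(seriousOnly)); // 0 among "serious" participants
```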
The warm-up technique is based on the observa-
tion that most dropout will take place at the begin-
ning of an online study, forming a “natural dropout
curve” (Reips, 2002b). A main reason for the initial
dropout is the short orientation period many partici-
pants show before making a final decision on their
participation (orientation often takes place even after
clicking on a submit button confirming “informed
consent,” because informed consent forms tend to be
filled with abstract "legalese" while the study materials are concrete [6]). To keep dropout low during the
experimental phase, as defined by the occurrence of
the experimental manipulation, it is wise to place its
beginning several Web pages deep into the study. The
warm-up period can be used for practice trials, pilo-
ting of similar materials or buildup of behavioral
routines, and assurance that participants are comply-
ing with instructions. Figure 1 shows the resulting
dropout during warm-up phase and experimental
phase after implementation of the warm-up tech-
nique in the experiment by Reips et al. (2001).
[Figure 1 here. X-axis (Web page): Start, Instr 1, Instr 2, Instr 3, Instr 4, Item 1, Item 12, Last Item; y-axis: 0 to 100; warm-up phase and experimental phase marked.]
Figure 1. Percentage of remaining participants as a
function of progress into the Web experiment by
Reips et al. (2001). The implementation of the warm-
up technique resulted in very low dropout after intro-
duction of the experimental manipulation (beginning
with “Item 1”).
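Dropout curves of the kind shown in Figure 1 can be computed from a log of the last page each participant requested. A minimal sketch with invented data (page names follow the figure):

```javascript
// Ordered page sequence of the experiment and, per participant, the last
// page reached (invented data).
const pages = ["Start", "Instr 1", "Instr 2", "Item 1", "Last Item"];
const lastPageReached = ["Start", "Item 1", "Last Item", "Last Item"];

// Percentage of participants still present on each page.
const remainingPercent = pages.map((page, idx) => {
  const remaining = lastPageReached.filter(
    last => pages.indexOf(last) >= idx
  ).length;
  return Math.round((100 * remaining) / lastPageReached.length);
});

console.log(remainingPercent); // [100, 75, 75, 75, 50]
```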
6
At Central European research institutions, informed
consent is often provided in the form of a brief statement
that the person will become a participant in a study by
going further, accompanied by some (again: brief) infor-
mation about the research institution, the study and the
researchers. Many researchers feel that participants are per
39038$ $$$2 Hogrefe & Huber Publishers ÐEXPPSY 49/04/2002 Ð3. Bel. Ð30-09-02 16:33:22 ÐRev 16.04x
250 Ulf-Dietrich Reips
Dropout may also be used as a dependent variable
(Frick et al., 2001; Reips, 2002a), for example in
usability research.
Control
Experimental Setting
It can be seen as a major problem in Web experi-
menting that the experimenter has limited control of
the experimental setting
7
. For example, in Web-based
perception experiments it is more difficult than in
the laboratory to guarantee that stimuli are perceived
as intended or even assess how they were presented
(Krantz, 2000, 2001). While we cannot control this
principal property of Web experimentation, we can
certainly use technical measures to collect informa-
tion about the computer-bound aspects of the setting
at the participant’s end of the line. Via HTTP proto-
col, Javascript, and Java, the following information
can be accessed: (1) type and version of Web
browser, (2) type and version of operating system,
(3) screen width and height, (4) screen resolution,
(5) color depth of monitor setting, (6) accuracy of
the computer’s timing response (Eichstaedt, 2001),
and (7) loading times.
Javascript, Java, and plug-in based procedures
may inherently reduce experimental control, because
these programming languages and routines often in-
teract with other software on the client computer.
They will lead to an increase in loading times of Web
pages, and increase the likelihood of inaccessibility
of the Web experiment. A study by Schwarz and
Reips (2001) showed that an otherwise identical copy
of a Web experiment had a 13 % higher dropout rate
if Javascript was used to implement client-side rou-
tines for certain experimental procedures. Therefore,
the cons of using Javascript and Java carefully need
to be considered against the value of gathering the
above-listed information. These methods can, how-
ever, produce timing capabilities unavailable to any
other methods.
se knowledgeable about their basic rights and need not be
coerced into reading pages of legalese texts. This seems
particularly true in Internet-based studies, where the end
of participation is always only one mouse click away. Also,
lengthy “informed consent forms” may have a biasing im-
pact on the resulting data.
7
I prefer the view that reduced control of situational
variables allows for wider generalizability of results, if an
effect is found.
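The client-side setting information listed above (browser, operating system, screen dimensions, color depth) can be read with a few lines of JavaScript. This is only a sketch: the `navigator` and `screen` objects exist in browsers, so each property is read defensively for other environments.

```javascript
// Collect computer-bound setting information at the participant's end.
// In a Web experiment this object would be submitted along with the
// response data for later analysis.
function collectClientInfo() {
  const nav = typeof navigator !== "undefined" ? navigator : {};
  const scr = typeof screen !== "undefined" ? screen : {};
  return {
    userAgent: nav.userAgent || "unknown", // browser type and version
    platform: nav.platform || "unknown",   // operating-system hint
    screenWidth: scr.width ?? null,
    screenHeight: scr.height ?? null,
    colorDepth: scr.colorDepth ?? null,    // color depth of monitor setting
  };
}
```

Loading times and timing accuracy, items (6) and (7) in the list, require timestamping on both server and client and are not covered by this sketch.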
Identity
Multiple submissions and reduced quality of data
(missing responses, etc.) create potential problems in
Internet-based research, because a participant's identity can hardly be determined beyond doubt. Several scenarios are conceivable:
- The same person uses the same computer (IP address) to participate repeatedly.
- The same person uses different computers to participate repeatedly.
- Different persons use the same computer to participate.
- Multiple datasets are submitted from the same computer, but are assigned different IP addresses.
- Multiple datasets are submitted from different computers, but some Web pages or page elements are assigned the same IP addresses by proxy servers.
There are many indications that the rate of repeated
participations (below 3% in most studies) is not a
threat to the reliability of Internet-based research
(Krantz & Dalal, 2000; Musch & Reips, 2000; Reips,
1997). However, adjunctive measures should be
taken and reported. Data quality is better in Web-based studies if identifying information is asked for at the study's onset (Frick et al., 2001). Many attempts at reducing the number of multiple submissions and increasing data quality concentrate on uncovering the identities of participants or computers. Also, in light of the high numbers of participants that can be recruited on the Internet, certain data sets can easily be excluded following clear, predetermined criteria.
In order to solve the identity problem and resulting issues with data quality, the techniques listed in Table 3 may be used in Web experimenting. These techniques can be grouped into techniques of avoidance and techniques of control of multiple submissions.
Users will return to a Web experiment for re-
peated participation only when they are motivated
by entertainment or other attractive features of the
experiment. Multiple submissions are likely in Web
experiments that are designed in game style (e.g.,
Ruppertsberg et al., 2001), with highly surprising or entertaining outcomes (for example, the "magic" experiment in the Web Experimental Psychology Lab, available at http://www.genpsylab.unizh.ch/88497/magic.htm), or with high incentives.
Quality of Data
Some of the techniques that were discussed earlier
in this article are suited to maximize or improve data
quality in Web experimentation. For example, Frick
Table 3. Avoidance and Control of Multiple Submissions

Avoidance of Multiple Submissions:
(1) Informing participants on the experiment's start page that multiple submissions are unwanted and detrimental to the study's purpose.
(2) Redirecting the second and all further requests for the experiment's start page coming from the same IP address (this technique likely excludes a number of people using Internet providers with dynamic IP addressing).
(3) Implementation of password-dependent access. Every participant is given a personal password that can only be used once (e.g., Schmidt, 2000). Of course, the password scheme should not be too obvious (e.g., do not use "hgz3," "hgz4," "hgz5," etc. as passwords).
(4) Limiting participation to members of a controlled group (participant pool or online panel).

Control of Multiple Submissions:
(1) Collecting information that allows personal identification.
(2) Asking participants on the experiment's start page whether they have participated before.
(3) Continuous session IDs created through dynamic hidden variables or JavaScript, in combination with analysis of HTTP information (e.g., browser type and operating system).
(4) Use of the sub-sampling technique (Reips, 2000, 2002b): for a limited random sample from all data sets, every measure is taken to verify the participants' verifiable responses (for example, age, sex, occupation) and identity, resulting in an estimate for the total percentage of wrong answers and multiple submissions.
(5) Limiting participation to members of a controlled group.
(6) Controlling for internal consistency of answers.
(7) Controlling for consistency of response times.
(8) Placement of identifiers (cookies) on the hard disks of participants' computers; however, this technique is of limited reliability and will exclude a large portion of participants.
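Control technique (3) from Table 3, combining session information with HTTP header analysis, can be sketched server-side as follows. The field names and the fingerprint rule are illustrative, not a fixed recipe:

```javascript
// Build a coarse fingerprint from IP address and HTTP information and flag
// later submissions that share it as possible multiple submissions.
function fingerprint(s) {
  return [s.ip, s.browser, s.os].join("|");
}

function flagPossibleDuplicates(submissions) {
  const seen = new Set();
  return submissions.map(s => {
    const key = fingerprint(s);
    const possibleDuplicate = seen.has(key); // true if fingerprint seen before
    seen.add(key);
    return { ...s, possibleDuplicate };
  });
}
```

Flagged data sets should not be deleted automatically; as noted above, proxy servers and shared computers can produce the same fingerprint for different persons, so flagged cases are candidates for inspection against predetermined criteria.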
et al. (2001) found that participants who requested
the Web page with questions about their age, gender,
nationality, and e-mail address substantially more
often completed responses when these personal data
were collected at the beginning (95.8%) rather than
at the end of an experiment (88.2%). The warm-up technique ensures that data collected in the experimental phase come from the most highly committed participants. Providing information about incentives for participation certainly helps, but may not be legal in some jurisdictions (for example, in Germany) if the incentive is in the form of sweepstakes and entry is dependent on complete participation.
Missing responses can be reduced by making partici-
pants aware that they might have accidentally
skipped the corresponding items or even by repeat-
edly presenting these items. Consistency checks help
in detecting inappropriate or possibly erroneous an-
swers, for example participants claiming to be 15
years old and having a Ph.D. Quality of data may also be ensured by excluding data sets that do not meet certain criteria (e.g., more than x missing entries, overly long response times).
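A consistency check of the kind just described can be a small set of rules applied to each data set. The rule below (a 15-year-old claiming a Ph.D.) mirrors the example in the text; the threshold and field names are illustrative:

```javascript
// Flag data sets with internally inconsistent answers for inspection.
function isConsistent(record) {
  // Implausible combination from the text: very young age plus a Ph.D.
  if (record.degree === "PhD" && record.age < 20) return false;
  return true;
}

console.log(isConsistent({ age: 15, degree: "PhD" })); // false
console.log(isConsistent({ age: 35, degree: "PhD" })); // true
```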
In general, incomplete data sets are more frequent
in Internet-based research than in offline research
(unless the experimenter forces participants into a
decision between complete responding or dropout).
Fortunately, there is evidence that complete data sets
collected online are mostly accurate: Voracek,
Stieger, and Gindl (2001) were able to control re-
sponses for biological sex via university records in
an online study conducted at the University of Vi-
enna. In contradiction to the frequently heard “online
gender swapping” myth, they found that the rate of
seemingly false responses was below 3 %. Further-
more, the data pattern of the participants suspected
of false responding was in accordance with their re-
ported biological sex, leading to the conclusion that
many of these cases were opposite-sex persons using
a friend’s or roommate’s university e-mail account.
Misunderstandings and Limited Ways of
Interaction
In laboratory experiments it is often necessary to ex-
plain some of the procedures and materials to the
participants in an interactive communication. In the
laboratory, experimenters are able to answer questions directly and verify through dialogue and reading of nonverbal behavior that the instructions were
understood. In Web experiments such interaction is
difficult to achieve, and mostly not desired, because
it would counter some of the method’s advantages,
for instance its low expenditure and the ability to
collect data around the clock.
Pretesting of the materials and collecting feed-
back by providing communication opportunities for
participants are the most practical ways of avoiding
misunderstandings. They should be used with care,
in order to identify problems early on.
Errors and Frequent Misconceptions in Internet-Based Experimenting
Apart from preventing misunderstandings on the part of participants, careful pretesting and monitoring of an Internet-based experiment will help in detecting any of a number of frequently made configuration errors, and possibly misconceptions on the part of the experimenter.
Configuration Errors and Technical
Limitations
Configuration Errors
There are five frequently observed configuration er-
rors in Internet-based experimenting (Reips, 2002a)
that can easily be avoided, if awareness of their pres-
ence is raised and precautions are taken. These errors
are:
- Allowing external access to unprotected directories (configuration error I). In most cases, placing a file named "index.htm" or "index.html" into each directory will solve the problem.
- Public display of confidential participant data through the URL; data may be written to a third-party Web server (configuration error II). To avoid this error, do not collect data with the GET method [8] on any Web page that is two nodes away from a Web page with links to external sources.
- File and folder names and/or field names reveal the experiment's structure (configuration error III). Use a mixture of logical and random letters and numbers to avoid this error.
- Ignorance of the technical variance present on the Internet (configuration error IV). Differences in Web browsers, versions of Web browsers, net connections, hardware components, etc. need to be considered in the implementation.
- Biased results from improper use of form elements. For example, if no neutral answer (e.g., "Choose here") is preselected (configuration error V), there may be errors of omission. One might erroneously believe that many seniors participated in an experiment until one realizes that the preset answer in the pull-down menu for age was "69 or older". Biasing may also result from order effects or from the type of form element used (i.e., pull-down menus, select multiples in lists, etc.).

[8] The GET method is a request method used in the WWW transmission protocol. The two most often used methods for transmitting form data are GET and POST. Information from a form using the GET method is appended onto the end of the action address being requested; for example, in http://www.genpsylab.unizh.ch?webexp=yes, the answer "yes" in an item "webexp" was appended to the URL of the Web page that a user's action (pressing a submit button, etc.) leads to.
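Footnote 8's point can be seen directly by parsing such a URL: with GET, the answer travels inside the address itself, which browsers also pass along in the Referer header when a participant follows an external link. The URL mirrors the footnote's example:

```javascript
// With the GET method, form data are appended to the requested address.
// Anyone who sees the URL (server logs, Referer headers) sees the answer.
const url = new URL("http://www.genpsylab.unizh.ch?webexp=yes");
console.log(url.searchParams.get("webexp")); // "yes"
// With POST, the answer would travel in the request body instead,
// keeping it out of the URL and out of Referer headers.
```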
Technical Limitations
There are several technical and methodological
sources of error and bias that need to be considered.
Security holes in operating systems and Web server
applications (see Reips, 2002a; Securityfocus.com,
n.d.) may allow viruses to shut down the experiment
or corrupt data or even let intruders download confi-
dential participant information. While there are signif-
icant differences in vulnerability of operating systems,
as monitored permanently by Securityfocus.com, any
operating system with a Web server collecting data
from human participants over the Internet needs to
be maintained with the highest degree of responsibility. Another issue of a researcher's responsibility is a frequently observed breach of the basic principle of record keeping in research: collecting only aggregated data. Raw logfiles with full information need to be stored for reanalysis by other researchers and for meta-analyses.
Records on incomplete data sets (dropout figures)
should be collected as well! As mentioned, some-
times dynamic IP addressing makes it more difficult
to identify coherent data sets. In any event, drawing
conclusions about real behavior from Internet data is
sometimes difficult, because log data may be ambig-
uous. For instance, long response times can be
caused by the participant’s behavior and also by a
slow computer or a slow network connection. High
dropout rates caused by technical problems (e.g.,
when using Javascript, Schwarz & Reips, 2001) may
in some cases bias results. As mentioned, personality
types correlate with the use of certain technologies
(Buchanan & Reips, 2001), leading to biases in cases
of systematic technical incompatibilities.
Finally, due to the public accessibility of many
Web experiments, researchers need to be aware of
one more likely source of bias: participation of experts (colleagues). In an Internet-based experiment on a causal learning phenomenon in cognition (Reips, 1997), 15% of participants indicated on an "insider" control item that they were working in or studying cognitive psychology. Providing participants with such an item allows them to look at the experiment without having to fear potential corruption of a colleague's data, and thus avoids this bias.
Misconceptions
There are two misconceptions about Internet-based experimenting that carry dangerous implications and might blind researchers to errors.
Misconception 1: Experimenting on the Web is
Just Like Experimenting in the Lab
The Internet is not a laboratory. Internet participants may end their commitment at any time they no longer have interest. Even though lab participants can terminate participation at any time as well, they are less likely to do so, given that they would have to face another human and encounter a potentially embarrassing situation in doing so (e.g., Bamert, 2002; Reips, 1997, 2000).
Web experiments are usually online around the
clock, and usually they reach a much larger number
of people than laboratory experiments. Therefore,
overlooking even small factors may have wide conse-
quences. Web experiments are dependent on the
quality of networks and are subject to great variance
in local settings of participants. Last but not least,
experimenting on the Web is much more public than
working in the laboratory. In addition to a wider pub-
lic, one’s colleagues will be able to inform them-
selves in a much more direct way about one’s work.
Misconception 2: Experimenting on the Web is
Completely Different from Experimenting in the
Lab
Even though Web experiments are always dependent on networks and computers, most laboratory experiments are conducted on computers as well.
Often a standardized user interface is used to dis-
play the experimental materials on the screen. Web
browsers are highly standardized user interfaces. Ex-
perimental materials made with the help of tools like
WEXTOR (Reips & Neuhaus, 2002) can be used in
both laboratory and Internet-based experiments. Creating the materials is quite easy with basic knowledge of experimental design and handling of HTML.
Fundamental ideas and methodological procedures
are the same in Web and physical lab, and similar
results have been produced in studies conducted in
both settings (for a summary see Krantz and Dalal,
2000).
As mentioned earlier, the combination of labora-
tory and Internet-based experimenting in distributed
Web experimenting shows that Web and lab can be
integrated in creative ways to arrive at new variations
of the experimental method.
Dynamics of Data Collection
Internet-based experimenting creates its own dy-
namics. Once an experiment is online and linked to
a variety of Web sites, it will be present on the Internet for a long time, even if the site itself is removed (which it shouldn't be; it should be replaced by information about the experiment instead). Some
search engines will cache the experiment’s contents,
and so do some proxy servers, even if anticaching
meta tags are used in the Web pages.
Internet-based laboratories often have large num-
bers of visitors and are linked extensively (Reips,
2001). As a consequence, they may even create pres-
sure to offer new Web experiments within short
periods of time to satisfy "the crowd's desires." Meeting these desires creates the risk of producing superfluous data, an issue that needs to be discussed by the community of Internet scientists. The flood of data bears the danger of losing the sense for best care and attention towards data and participants.
Would this loss of reasonable diligence be a simple
“more is worth less” phenomenon that could result
in long-term attitude changes in researchers? Or
would it reflect an interaction of the realm of possi-
bilities of techno-media power with limited educa-
tion in Internet-based experimenting? In any case,
the scientific process in psychology is changing pro-
foundly.
Summary: Sixteen Standards for
Internet-Based Experimenting
In this article, a number of important issues in In-
ternet-based experimenting were discussed. As a
consequence, several routines and standards for In-
ternet-based experimenting were proposed.
When reporting Web experiments, the implemen-
tation of and specifics about the mentioned tech-
niques should be included. The following list of rec-
ommendations summarizes most of what needs to be
remembered and may be used as a standards checklist when conducting an Internet-based experiment
and reporting its results.
Standard 1: Consider using a Web-based software
tool to create your experimental materials. Such tools
automatically implement standard procedures for
Web experiments that can guard against many prob-
lems. Examples are WEXTOR and FactorWiz (see
footnotes 1 and 2 for URLs). If you use FactorWiz, make sure to protect your participants' data by changing the default procedure of storing them in a publicly accessible data file. Also be aware that FactorWiz creates only one-page Web experiments, so you will not be able to measure dropout in a meaningful way.
Standard 2: Pretest your experiment for clarity of in-
structions and availability on different platforms.
Standard 3: Make a decision whether the advantages
of non-HTML scripting languages and plug-ins out-
weigh their disadvantages.
Standard 4: Check your Web experiment for config-
uration errors (I-V; Reips, 2002a).
Standard 5: Consider linking your Web experiment
to several Internet sites and services (multiple site
entry technique) to determine effects of self-selec-
tion and estimate generalizability.
Standard 6: Run your experiment both online and
offline, for comparison.
Standard 7: If dropout is to be avoided, use the
warm-up technique.
Standard 8: Use dropout to determine whether there
is motivational confounding.
Standard 9: Use the high-hurdle technique, incentive
information, and requests for personal information to
influence time and degree of dropout.
Standard 10: Ask filter questions (seriousness of
participation, expert status, language skills, etc.) at
the beginning of the experiment to encourage serious
and complete responses.
Standard 11: Check for obvious naming of files,
conditions, and, if applicable, passwords.
Standard 12: Consider avoiding multiple submis-
sions by exclusively using participant pools and pass-
word techniques.
Standard 13: Perform consistency checks.
Standard 14: Keep experiment log and other data
files for later analyses by members from the scien-
tific community.
Standard 15: Report and analyze dropout curves or
at least dropout rates for experimental conditions
separately for between-subjects factors.
Standard 16: The experimental materials should be
kept available on the Internet, as they will often give
a much better impression of what was done than any
verbal description could convey.
Conclusion
Internet-based experimenting is fast becoming a
standard method and therefore it is a method that
needs standards. Many established experimentalists
as well as students are currently making their first
attempts in using the new method. So far, in many
universities there is no curriculum that teaches Internet-based experimenting. Still, only a few people
have both the technical and the methodological ex-
perience to give advice to those who would like to
commence with the venture of conducting Web ex-
periments. Without established standards the likeli-
hood is high for making grave errors that would re-
sult in loss or reduced quality of data, in biased re-
sults, or in breach of ethical practices. Consequently,
in the present paper an attempt was made to collect
and discuss what has been learned in Internet-based
experimenting in order to make recommendations,
warn about errors, and introduce useful techniques.
As a result, a set of standards for Internet-based ex-
perimenting could be defined that hopefully will
serve as a guide for future experiments on the In-
ternet.
Those who have done Web experiments keep conducting them. Many of those who haven't will do so soon. And many of those who would never conduct
an experiment on the Internet will be confronted with
the methodology as reviewers and readers of publica-
tions or as teachers of students who ask for conve-
nient and proper ways of collecting experimental
data in that international communication network
that allows us to investigate the psychology of the
many distant human beings out there. Hopefully,
with the guidance of standards and examples from
the present special issue of Experimental Psychology,
Internet-based experimenting will come one step
closer to being established and used as an equivalent
tool for scientists.
References
Bamert, T. (2002). Integration von Wahrscheinlichkeiten:
Verarbeitung von zwei Wahrscheinlichkeitsinformatio-
nen [Integration of probabilities: Processing two pieces
of probability information]. Unpublished master’s the-
sis, University of Zurich, Switzerland.
Bargh, J. A., McKenna, K. Y. A., & Fitzsimons, G. M. (2002). Can you see the real me? Activation and expression of the "true self" on the Internet. Journal of Social Issues, 58, 33–48.
Birnbaum, M. H. (2000). SurveyWiz and FactorWiz: JavaScript Web pages that make HTML forms for research on the Internet. Behavior Research Methods, Instruments, and Computers, 32, 339–346.
Birnbaum, M. H. (2001). A Web-based program of research on decision making. In U.-D. Reips & M. Bosnjak (Eds.), Dimensions of Internet Science (pp. 23–55). Lengerich, Germany: Pabst Science.
Bohner, G., Danner, U. N., Siebler, F., & Samson, G. B. (2002). Rape myth acceptance and judgments of vulnerability to sexual assault: An Internet experiment. Experimental Psychology, 49(4), 257–269.
Bosnjak, M. (2001). Participation in non-restricted Web surveys: A typology and explanatory model for item non-response. In U.-D. Reips & M. Bosnjak (Eds.), Dimensions of Internet Science (pp. 193–208). Lengerich, Germany: Pabst Science.
Buchanan, T. (2002). Online assessment: Desirable or dangerous? Professional Psychology: Research and Practice, 33, 148–154.
Buchanan, T., & Reips, U.-D. (2001, October 10). Platform-dependent biases in Online Research: Do Mac users really think different? In K. J. Jonas, P. Breuer, B. Schauenburg, & M. Boos (Eds.), Perspectives on Internet Research: Concepts and Methods. Retrieved December 27, 2001, from http://server3.uni-psych.gwdg.de/gor/contrib/buchanan-tom
Chapanis, A. (1970). The relevance of laboratory studies to practical situations. In D. P. Schultz (Ed.), The science of psychology: Critical reflections. New York: Appleton Century Crofts.
Coomber, R. (1997, June 30). Using the Internet for survey research. Sociological Research Online, 2. Retrieved June 16, 2002, from http://www.socresonline.org.uk/2/2/2.html
Eichstaedt, J. (2001). Reaction time measurement by JAVA-applets implementing Internet-based experiments. Behavior Research Methods, Instruments, and Computers, 33, 179–186.
Eichstaedt, J. (2002). Measuring differences in preactivation on the Internet: The content category superiority effect. Experimental Psychology, 49(4), 283–291.
Frick, A., Bächtiger, M. T., & Reips, U.-D. (2001). Financial incentives, personal information, and dropout in online studies. In U.-D. Reips & M. Bosnjak (Eds.), Dimensions of Internet Science (pp. 209–219). Lengerich, Germany: Pabst Science.
Hewson, M., Laurent, D., & Vogel, C. M. (1996). Proper methodologies for psychological and sociological studies conducted via the Internet. Behavior Research Methods, Instruments, and Computers, 28, 186–191.
Hiskey, S., & Troop, N. A. (2002). Online longitudinal survey research: Viability and participation. Social Science Computer Review, 20(3), 250–259.
Horswill, M. S., & Coster, M. E. (2001). User-controlled photographic animations, photograph-based questions, and questionnaires: Three instruments for measuring drivers' risk-taking behavior on the Internet. Behavior Research Methods, Instruments, and Computers, 33, 46–58.
Joinson, A. (2001). Self-disclosure in computer-mediated communication: The role of self-awareness and visual anonymity. European Journal of Social Psychology, 31, 177–192.
Kiesler, S., & Sproull, L. S. (1986). Response effects in the electronic survey. Public Opinion Quarterly, 50, 402–413.
Klauer, K. C., Musch, J., & Naumer, B. (2000). On belief
bias in syllogistic reasoning. Psychological Review,
107, 852Ð884.
Krantz, J. H. (2000). Tell me, what did you see? The stimu-
lus on computers. Behavior Research Methods, Instru-
ments, and Computers, 32, 221Ð229.
Krantz, J. H. (2001). Stimulus delivery on the Web: What
can be presented when calibration isn’t possible. In
U.-D. Reips & M. Bosnjak (Eds.), Dimensions of Inter-
net Science (pp. 113Ð130). Lengerich, Germany: Pabst
Science.
Krantz, J. H., & Dalal, R. S. (2000). Validity of Web-based
psychological research. In M. H. Birnbaum (Ed.), Psy-
chological experiments on the Internet (pp. 35Ð60).
San Diego, CA: Academic Press.
Laugwitz, B. (2001). A Web experiment on color harmony
principles applied to computer user interface design. In
U.-D. Reips & M. Bosnjak (Eds.), Dimensions of In-
ternet Science (pp. 131Ð145). Lengerich, Germany:
Pabst Science.
Musch, J., & Klauer, K. C. (2002). Psychological experi-
menting on the World Wide Web: Investigating content
effects in syllogistic reasoning. In B. Batinic, U.-D.
Reips, & M. Bosnjak (Eds.), Online Social Sciences
(pp. 181Ð212). Göttingen, Germany: Hogrefe.
Musch, J., & Reips, U.-D. (2000). A brief history of Web
experimenting. In M. H. Birnbaum (Ed.), Psychologi-
cal experiments on the Internet (pp. 61Ð88). San
Diego, CA: Academic Press.
O’Neil, K. M., & Penrod, S. D. (2001). Methodological
variables in Web-based research that may affect results:
Sample type, monetary incentives, and personal infor-
mation. Behavior Research Methods, Instruments, and
Computers, 33, 226Ð233.
Orne, M. T. (1962). On the social psychology of the psy-
chological experiment: With particular reference to de-
mand characteristics and their implications. American
Psychologist, 17, 776Ð783.
Pohl, R. F., Bender, M., & Lachmann, G. (2002). Hindsight
bias around the world. Experimental Psychology, 49
(4), 270Ð282.
Postmes, T., Spears, R., Sakhel, K., & DeGroot, D. (2001).
Social influence in computer-mediated communication:
The effect of anonymity on group behavior. Personality
and Social Psychology Bulletin, 27, 1243Ð1254.
Reips, U.-D. (1995). The Web experiment method. Re-
trieved January 6, 2002, from http://www.genpsy.u-
nizh.ch/Ulf/Lab/WWWExpMethod.html
Reips, U.-D. (1997). Das psychologische Experimentieren im Internet [Psychological experimenting on the Internet]. In B. Batinic (Ed.), Internet für Psychologen (pp. 245–265). Göttingen, Germany: Hogrefe.
Reips, U.-D. (1999). Online research with children. In U.-D. Reips, B. Batinic, W. Bandilla, M. Bosnjak, L. Gräf, K. Moser, & A. Werner (Eds.), Current Internet science – trends, techniques, results. Aktuelle Online-Forschung – Trends, Techniken, Ergebnisse. Zürich: Online Press. Retrieved April 7, 2002, from http://dgof.de/tband99/
Reips, U.-D. (2000). The Web experiment method: Advantages, disadvantages, and solutions. In M. H. Birnbaum (Ed.), Psychological experiments on the Internet (pp. 89–114). San Diego, CA: Academic Press.
Reips, U.-D. (2001). The Web Experimental Psychology Lab: Five years of data collection on the Internet. Behavior Research Methods, Instruments, and Computers, 33, 201–211.
Reips, U.-D. (2002a). Internet-based psychological experimenting: Five dos and five don’ts. Social Science Computer Review, 20(3), 241–249.
Reips, U.-D. (2002b). Theory and techniques of conducting Web experiments. In B. Batinic, U.-D. Reips, & M. Bosnjak (Eds.), Online Social Sciences (pp. 229–250). Seattle: Hogrefe & Huber.
Reips, U.-D., & Bosnjak, M. (Eds.). (2001). Dimensions of Internet Science. Lengerich, Germany: Pabst Science.
Reips, U.-D., Morger, V., & Meier, B. (2001). “Fünfe gerade sein lassen”: Listenkontexteffekte beim Kategorisieren [“Letting five be equal”: List context effects in categorization]. Unpublished manuscript. Retrieved April 7, 2002, from http://www.psychologie.unizh.ch/genpsy/reips/papers/re_mo_me2001.pdf
Reips, U.-D., & Neuhaus, C. (2002). WEXTOR: A Web-based tool for generating and visualizing experimental designs and procedures. Behavior Research Methods, Instruments, and Computers, 34, 234–240.
Rodgers, J., Buchanan, T., Scholey, A. B., Heffernan, T. M., Ling, J., & Parrott, A. (2001). Differential effects of Ecstasy and cannabis on self-reports of memory ability: A Web-based study. Human Psychopharmacology: Clinical and Experimental, 16, 619–625.
Rosenthal, R. (1966). Experimenter effects in behavioral
research. New York: Appleton-Century-Crofts.
Rosenthal, R., & Fode, K. L. (1963). The effect of experimenter bias on the performance of the albino rat. Behavioral Science, 8, 183–189.
Rosenthal, R., & Rosnow, R. L. (1969). Artifact in behav-
ioral research. New York: Academic Press.
Ruppertsberg, A. I., Givaty, G., Van Veen, H. A. H. C., & Bülthoff, H. (2001). Games as research tools for visual perception over the Internet. In U.-D. Reips & M. Bosnjak (Eds.), Dimensions of Internet Science (pp. 147–158). Lengerich, Germany: Pabst Science.
Schmidt, W. C. (1997). World-Wide Web survey research: Benefits, potential problems, and solutions. Behavior Research Methods, Instruments, and Computers, 29, 274–279.
Experimental Psychology 2002; Vol. 49(4): 243–256 © 2002 Hogrefe & Huber Publishers
Schmidt, W. C. (2000). The server-side of psychology Web experiments. In M. H. Birnbaum (Ed.), Psychological experiments on the Internet (pp. 285–310). San Diego, CA: Academic Press.
Schwarz, S., & Reips, U.-D. (2001). CGI versus JavaScript: A Web experiment on the reversed hindsight bias. In U.-D. Reips & M. Bosnjak (Eds.), Dimensions of Internet Science (pp. 75–90). Lengerich, Germany: Pabst Science.
Securityfocus.com. (n.d.). BUGTRAQ Vulnerability Database Statistics. Retrieved April 7, 2002, from http://www.securityfocus.com
Smart, R. (1966). Subject selection bias in psychological research. Canadian Psychologist, 7a, 115–121.
Voracek, M., Stieger, S., & Gindl, A. (2001). Online replication of Evolutionary Psychology evidence: Sex differences in sexual jealousy in imagined scenarios of mate’s sexual versus emotional infidelity. In U.-D. Reips & M. Bosnjak (Eds.), Dimensions of Internet Science (pp. 91–112). Lengerich, Germany: Pabst Science.
Wenzel, O. (2001). Webdesign, Informationssuche und Flow: Nutzerverhalten auf unterschiedlich strukturierten Websites [Web design, search for information, and flow: User behavior on differently structured Web sites]. Lohmar, Germany: Eul.
Ulf-Dietrich Reips
Experimental and Developmental Psychology
University of Zürich
Attenhoferstr. 9
CH-8032 Zürich
Switzerland
Tel.: +41 1 6342930
Fax: +41 1 6344929
E-mail: ureips@genpsy.unizh.ch