How to implement real-time interaction between
participants in online surveys: A practical guide to
SMARTRIQS
Andras Molnar a
a Department of Social and Decision Sciences, Carnegie Mellon University
Abstract SMARTRIQS is an open-source add-on to the popular survey platform Qualtrics, offering researchers the ability to design online surveys that feature real-time interaction between participants, including live text chat, without requiring researchers to learn any programming language or install any software. Using SMARTRIQS does not incur any additional costs to researchers who have an institutional Qualtrics account. This paper not only provides an overview of SMARTRIQS and its potential applications but also walks readers through the step-by-step instructions for setting up a particular study (Dictator Game with chat). These instructions start from the very basics, assuming no prior expertise in online experimentation, and are accessible to everyone, even those who are less, or not at all, familiar with Qualtrics.
Keywords: chat, real-time interaction, survey design, tutorial. Tools: Qualtrics.
Correspondence: andrasm@andrew.cmu.edu
doi: 10.20982/tqmp.16.4.p334
Acting Editor: Denis Cousineau (Université d'Ottawa)
Reviewers: One anonymous reviewer
Introduction
Online experimentation is becoming increasingly popular in the social sciences (Arechar, Gächter, & Molleman, 2018; Gosling & Mason, 2015), largely due to the massive technological improvements in survey software (e.g., Qualtrics, 2020; SurveyMonkey, 2020), and the advance of large-scale online participant panels (Chandler, Rosenzweig, Moss, Robinson, & Litman, 2019) and crowdsourcing platforms such as Amazon Mechanical Turk (Bohannon, 2016; Mason & Suri, 2012; Paolacci & Chandler, 2014), Prolific, or CrowdFlower (Peer, Brandimarte, Samat, & Acquisti, 2017). These tools allow researchers to collect and analyze data with unprecedented scale, speed, and efficiency, which makes them superior to conventional lab studies (i.e., pen-and-paper experiments or computerized interactions implemented on local networks, e.g., via z-Tree; Fischbacher, 2007) in many ways.
While lab studies typically provide researchers more experimental control over the procedures and have lower attrition rates than online experiments (Arechar et al., 2018; Zhou & Fishbach, 2016), they have several major limitations (see "Offline RTI" in Table 1). Offline studies are usually more expensive (both in terms of participant compensation and administrative costs), require a physical lab and equipment, and have considerably smaller and less heterogeneous subject pools (Berinsky, Huber, & Lenz, 2012; Henrich, Heine, & Norenzayan, 2010). Also, since the number of participants who can concurrently participate in lab studies is limited, data collection takes significantly longer.
Online studies can overcome most of the limitations of conventional lab studies, but they also pose unique methodological challenges. Most importantly, implementing real-time interaction and communication in an online experiment is notoriously difficult, whereas participants can seamlessly interact with each other in a conventional lab environment. Setting up such a study takes a considerable amount of time and requires researchers either to have advanced programming skills or to pay for third-party services and products. These challenges prevent many social scientists from utilizing online methods in their research. Researchers who wish to study social interaction, group dynamics, communication, or any other form of interpersonal decision-making often have to resort to simpler but inferior alternatives, which only allow for non-real-time interaction or simulated interaction between participants (see "Online non-RTI" in Table 1).
Table 1  A comparison of methods for studying social interaction between participants

(1) Offline RTI*
  - Lab studies (e.g., z-Tree, pen-and-paper studies)
    Main advantages: High experimental control; low attrition rates
    Limitations: Requires lab & equipment; expensive; time-consuming; limited subject pool

(2) Online non-RTI*
  - Wave recruitment
    Main advantages: Does not require programming
    Limitations: Potential selection confound; slow; manual matching
  - Strategy method
    Main advantages: Does not require programming; more observations per subject
    Limitations: Might affect behavior; manual matching
  - Deception
    Main advantages: Does not require programming; easy setup
    Limitations: Subject pool contamination; potential IRB issues

(3) Online RTI*
  - Experimental platforms (e.g., oTree)
    Main advantages: Highly customizable; repeated, complex interaction
    Limitations: Requires programming
  - Third-party services (e.g., iDecisionGames)
    Main advantages: Highly customizable; advanced features (e.g., video chat)
    Limitations: Expensive; closed source; limited experimental control
  - Specialized applications (e.g., Chatplat)
    Main advantages: Does not require programming; easy setup; platform-independent
    Limitations: Limited to one type of interaction
  - SMARTRIQS
    Main advantages: Does not require programming; easy setup; highly customizable
    Limitations: None of the above (see Table 2)

Note. *RTI indicates true (non-deceptive) real-time interaction between participants.
Simple but flawed alternatives to real-time interaction in online studies
There are three simple methods that allow researchers to study social interactions in online experiments without requiring participants to interact in real-time: wave recruitment, the strategy method, and deception. While all of these alternatives help researchers overcome the limitations of conventional lab studies, they also introduce new limitations, which, depending on the research objectives and the standards of the researcher's discipline, can render these methods unsuitable for conducting online experiments.
Wave recruitment
Wave recruitment refers to the practice of recruiting a group of participants, recording their decisions, and displaying these decisions to a second group of participants, who are recruited in subsequent sessions. Although this alternative does not require any programming skills and is relatively easy to set up, researchers have to match participants and transmit their responses manually, which is not only very labor-intensive and error-prone but also makes complex interactions (e.g., multiple rounds, large groups) extremely challenging, if not impossible, to implement.
Strategy method
When using the strategy method, participants make conditional decisions for each possible action of other participants. As with the wave recruitment method, participants are matched post hoc, and decisions are implemented only after the experiment. While the strategy method allows researchers to collect more data per participant, it is also very labor-intensive and error-prone, since it requires the experimenter to manually match participants and determine outcomes. More importantly, participants might behave differently when their choices and preferences are elicited using the strategy method, as opposed to a direct-response method, which might affect the validity of the experiment. For instance, Casari and Cason (2009) found that people showed significantly lower levels of trustworthiness when using the strategy method, compared to the direct-response method, which suggests that the strategy method should be used with caution when studying trust-related behaviors.
Deception
Deception is arguably the most problematic alternative to implementing real-time interaction in online studies. When using deception, participants are explicitly told or led to believe that they are interacting with other people, while they are actually interacting with the computer. This not only violates the research standards of several disciplines (e.g., experimental economics and experimental finance), but also conflicts with most Institutional Review Board (IRB) policies, which explicitly specify that deception can only be used when there are no reasonably effective alternative methods available to achieve the goals of the research. The practice of deceiving subjects out of convenience, because it is difficult (but not impossible) to implement real-time interaction online, thus violates IRB guidelines. Furthermore, the excessive use of participant deception can contaminate subject pools (i.e., erode the credibility of experimental instructions and alter participants' behavior over time), especially in large-scale crowdsourcing platforms such as MTurk or Prolific (for a detailed discussion and a review of empirical evidence on the effects of deception, see Hertwig & Ortmann, 2008).
Existing methods that allow for real-time interaction in online studies
The existing solutions that allow for conducting interactive studies online can be classified into three broad categories: standalone experimental platforms, closed-source third-party services, and specialized applications (see "Online RTI" in Table 1).
Standalone experimental platforms
The first category comprises standalone experimental platforms such as ConG (Pettit, Friedman, Kephart, & Oprea, 2014), LIONESS Lab (Giamattei, Molleman, Seyed Yahosseini, & Gächter, 2019), MWERT (Hawkins, 2015), nodeGame (Balietti, 2017), oTree (Chen, Schonger, & Wickens, 2016), Psynteract (Henninger, Kieslich, & Hilbig, 2017), and SoPHIE (Hendriks, 2012). These platforms offer researchers great flexibility in study design and are typically freely accessible and open-source. However, all of them require users to have at least some expertise in a programming language (e.g., Python, JavaScript, PHP), a limitation that prevents many social scientists from studying behavior in online settings unless they are willing to invest a substantial amount of time in learning a programming language.
Third-party services
Another alternative is to outsource programming tasks to a third party or to hire a professional programmer; however, doing so can be prohibitively expensive, especially for smaller labs or junior researchers who have limited funds. Furthermore, individual programmers usually lack experience with behavioral experimental research and need a lot of guidance and consultation when designing and editing studies. By contrast, there are private companies that specialize in designing and conducting online experiments (e.g., iDecisionGames, https://idecisiongames.com). While these services offer great flexibility in experimental design and allow researchers to use advanced features such as live video chat between participants, researchers have no direct control over the design process and have to communicate every minor edit to the third-party company, which can slow down the design and testing phase substantially. Furthermore, since these are for-profit companies, their products are closed-source, which forces researchers to keep paying for services, even if they just want to run an exact replication of an experiment conducted by another lab. Since replicability and transparent research practices are increasingly important in the social sciences (see, e.g., Open Science Collaboration, 2015), this is a rather serious limitation.
Specialized applications
Finally, there are specialized applications, such as Chatplat (https://www.chatplat.com), that address the limitations of both generic experimental platforms and third-party services. These applications are free, do not involve any programming, and can be set up relatively easily. However, these advantages come at the expense of flexibility: Specialized applications are designed for a specific type of interaction, and their applicability is limited to a narrow range of experiments. For example, Chatplat allows participants to chat with each other in online surveys, but does not allow for setting up other types of studies (e.g., studies that require transmitting decisions).
A new tool for studying real-time interaction in online studies: SMARTRIQS
The previous sections highlight not only the demand for, but also the limitations of, methods that allow researchers to implement real-time interaction between participants in online studies. These methods are either too difficult to use (i.e., require programming), too expensive, or have limited applications. SMARTRIQS is a first-of-its-kind solution that addresses all of these challenges and allows researchers to conduct interactive experiments without the hassle of programming or paying for extra services. Table 2 summarizes the main features and limitations of SMARTRIQS.
Table 2  Features and limitations of SMARTRIQS

- Cost and access: Free* and open-source. Limitation: Requires a Qualtrics account*.
- Implementation: Default or custom (private) server.
- Ease of use: Requires no programming; requires no installation.
- Integration with Qualtrics: All data saved in Qualtrics.
- Participant matching: Fixed groups. Limitation: No re-matching.
- Group size: 2-8 participants per group; unlimited number of groups. Limitation: Max. 8 participants per group.
- Randomization: Random or custom assignment to roles and conditions.
- Type of interaction: One-shot or repeated; synchronous or sequential.
- Types of data that can be transmitted: Any data type that is supported in Qualtrics (numeric, text, scale). Limitation: Transmission of personal data may be prohibited (consult IRB).
- Supported communication: Highly customizable text chat (e.g., group or private chat; multiple stages; custom format). Limitation: No audio or video chat.
- Advanced features: Waiting rooms; built-in math operations (e.g., sum, rank, min, max); dropout management; bots and default responses; free survey templates and demos; data collection monitor.

Note. *SMARTRIQS requires the researcher to have an institutional subscription to Qualtrics, but there is no additional fee associated with using SMARTRIQS.
The conceptual framework and the architecture of SMARTRIQS were introduced in Molnar (2019). In essence, SMARTRIQS is a collection of generic scripts that can be added to any Qualtrics survey, allowing the researcher to match survey respondents in real-time and transmit responses between them. However, researchers do not have to edit any of these scripts when creating their own studies; instead, the design and customization process is done entirely within the Qualtrics Survey Editor, which has a very intuitive and user-friendly graphical interface.
While Molnar (2019) provided a comprehensive conceptual overview, it addressed a specialist audience (researchers in experimental finance) and did not provide any instructions on how to design interactive studies. The current paper, by contrast, focuses on the practical applications of SMARTRIQS and provides researchers step-by-step instructions and hands-on guidance. The next section lists and discusses various applications of SMARTRIQS across several disciplines of the social sciences, while the rest of the paper walks the reader through a detailed tutorial. This guide is designed to help researchers become familiar with SMARTRIQS and to set up and customize interactive studies effortlessly, without requiring them to have any prior experience with online experimentation.
Applications of SMARTRIQS
SMARTRIQS has great potential to become a fundamental tool for social scientists, as it offers researchers the ability to run interactive studies online with unprecedented ease and efficiency, without having to learn any programming language, install any software, or pay for third-party services. Social scientists will find many potential uses for SMARTRIQS (see Table 3).
First and foremost, social scientists will be able to run interactive experiments online, which will allow them to recruit participants from larger and more diverse samples than in conventional lab experiments. They can also easily convert their existing (non-interactive) Qualtrics surveys into interactive ones. Being able to match participants in real-time also makes many instances of deception unnecessary and allows for studying human behavior in more naturalistic contexts.
Table 3  Potential applications of SMARTRIQS across disciplines

- Cognitive Science & Linguistics: Coordination & joint action; text- and sentiment analysis in real-time communication
- Economics: Allocation & reciprocity (e.g., Dictator Game, Ultimatum Game); collective decision-making (e.g., Public Goods Game); hierarchical beliefs & expectations (e.g., p-Beauty Contest)
- Finance: Simple market interactions (e.g., multiple buyers and sellers); investment decisions (e.g., Trust/Investment Game); auctions (e.g., blind, Vickrey)
- Management & Organizational Research: Group processes (e.g., collaboration, competition); work & principal-agent problems (effort allocation & provision); negotiation and strategic interactions
- Marketing & Consumer Research: Buyer-seller interactions (e.g., endowment effect studies); consumer focus group studies
- Political Science: Persuasion & spread of beliefs; political focus group studies; voting behavior
- Social Psychology: Moral behavior (e.g., dishonesty, cheating, punishment); identity & group dynamics (e.g., minimal group paradigms); compliance with, and enforcement of, social norms; impression management & social signaling
- Miscellaneous applications: Educational applications (in-class experiments); field applications (field studies with participant interaction); lab studies (computerized interaction in conventional labs)
Economists and researchers studying game theory can easily set up various classic economic games (e.g., Dictator Game, Public Goods Game), as well as novel, custom experimental designs. Researchers interested in experimental finance can conduct simple market experiments, and will be able to simulate investment decisions and auctions online.
There are plenty of potential applications in managerial and organizational contexts as well: For example, researchers can study various group processes such as collaboration and competition, task allocation, and effort provision within groups. The advanced chat feature allows scientists to study various aspects of communication: persuasion, negotiation, impression management, or even the linguistic properties of conversations, which are relevant not only in organizational research but also in political science or consumer research.
SMARTRIQS allows social psychologists to study a wide range of phenomena in online contexts, including but not limited to moral behavior, group dynamics, social norms, and impression management strategies. Psychologists interested in personality and individual differences can supplement real-time interactions with various inventories to study how personality traits and attitudes correlate with social behavior.
Importantly, the applications of SMARTRIQS are not limited to experiments conducted on Amazon Mechanical Turk or similar crowdsourcing platforms. Since SMARTRIQS has minimal technical requirements and does not require participants to install any software (any device with Internet access and a browser that supports HTML5 and JavaScript suffices), researchers can recruit participants in virtually any context: in conventional computer labs, in classrooms, or even in the field. This makes it possible to conduct studies with real-time interaction in places where it would have been challenging before (e.g., developing countries, remote locations, events). SMARTRIQS can also be used for educational purposes: Teachers can set up simple interactive experiments, then have their students complete these studies in class. With the built-in Results-Reports function of Qualtrics, teachers can even display the results to students in real-time.
SMARTRIQS can also be combined with other useful tools developed for Qualtrics. For example, researchers who study dynamic cognitive processes can supplement SMARTRIQS surveys with a tool that captures mouse cursor trajectories (Mathur & Reichling, 2019). Similarly, researchers who want to measure how much time participants spend on-screen versus off-screen (a crucial metric of participant attention and engagement) can supplement SMARTRIQS with TaskMaster, a simple tool that tracks participants' activity (Permut, Fisher, & Oppenheimer, 2019).
Finally, since SMARTRIQS is open-source and provides
a standardized generic framework for interactive studies,
it will not only serve as a useful experimental tool but also
contribute to, and propagate, open research practices.
Step-by-Step Tutorial
This section provides a comprehensive guide to using SMARTRIQS and walks the reader through the step-by-step instructions for setting up a particular study: the Dictator Game. In this classic dyadic interaction, participants are randomly assigned to either the role of the "Allocator" or the "Recipient". At the beginning of the experiment the Allocator is endowed with an amount (e.g., 100 tokens). Then, she can transfer any amount out of her endowment to the Recipient. The Recipient does not make any decision; she simply receives whatever amount is transferred to her. Variations of this experiment are widely used to measure social preferences and attitudes towards others (Forsythe, Horowitz, Savin, & Sefton, 1994). To demonstrate the live chat feature of SMARTRIQS, this section also covers how to allow chat between participants.
Prerequisites
Researchers need the following before they start designing
interactive studies with SMARTRIQS:
1. An institutional subscription to the Qualtrics Survey Software. Free or trial accounts are not supported.
This tutorial does not assume any prior experience with Qualtrics. However, having some familiarity with certain Qualtrics concepts (e.g., Survey Flow, Embedded Data, Branch Logic, Piped Text) is recommended, as it will make it easier to follow this guide and to understand how SMARTRIQS works.
For those who are new to Qualtrics or wish to refresh their knowledge, there are plenty of free tutorials available online. For example, Dare McNamara has two excellent video tutorials: Beginner Qualtrics Training (10 minutes) and Advanced Qualtrics Training (17 minutes).
2. A SMARTRIQS researcher ID, which can be obtained by submitting the registration form. Researchers have to provide their full name and email address, and accept the Data Policy, in order to receive their unique researcher ID. Researchers may also provide additional information about their affiliation, status, and field of study, but these are all optional and not required for obtaining a researcher ID.
3. One or more SMARTRIQS survey templates ("Qualtrics Survey Format" files, or QSF for short). The purpose of these templates is to offer researchers a solid starting point when designing new studies, so that they don't have to set up everything from scratch. Importantly, all of these templates contain the generic scripts that allow interaction between participants. While researchers can also add these scripts (source code: https://github.com/andras-molnar/smartriqs) manually to any blank Qualtrics survey, it is strongly recommended to start with one of the SMARTRIQS templates, which already have all the necessary components. The survey templates (QSF files) can be downloaded from the OSF repository.
Importing survey templates (QSF files) to Qualtrics
Throughout this tutorial we will use the "Generic Interactive Survey Template" (GIST). However, researchers can start with any template that works best for them. The GIST has every SMARTRIQS feature (including chat), and using it is recommended for custom studies that are very different from the other templates. First, download the Generic_Interactive_Survey_Template_GIST.qsf file from the OSF repository. Then, log in to Qualtrics, click on "Create new project" (top right corner of the main page), then select "Survey" under "Create your own" and click on "From a File / Choose file". In the pop-up window, select the QSF file you wish to import (Generic_Interactive_Survey_Template_GIST.qsf in this tutorial) and click "Open". Finally, rename the project and click on "Get Started". For more information about how to import QSF files to Qualtrics, visit the corresponding Qualtrics support page.
Matching participants: the MATCH block
Before participants can chat or interact with each other, they have to be matched first. In SMARTRIQS surveys this happens in the MATCH block.
SMARTRIQS offers a lot of flexibility in customizing interactions (e.g., group size, number of stages, participant roles), and most of these settings must be set before the MATCH block. To set up a MATCH block, open the imported survey and then open the Survey Flow (top left of the main page). At the top of the Survey Flow there are two green panels titled "Required parameters" and "Optional parameters" (see Figure 1). Note that blank Qualtrics surveys do not have these panels; only the SMARTRIQS survey templates contain them. When creating an interactive survey from scratch (instead of importing an existing SMARTRIQS template), the researcher has to add these panels manually.
The two panels contain the Embedded Data fields (olive green), which are essentially the variables and parameters used in Qualtrics surveys. SMARTRIQS studies use Embedded Data for two purposes: (1) as parameters that define the characteristics of the interaction, and (2) as variables that store participants' responses. Throughout the subsequent sections of this paper, boldface text indicates Embedded Data fields. After scrolling down in the Survey Flow, there will be several grey panels and additional green panels with more Embedded Data. Each grey panel represents a Question Block. Blocks are either built-in SMARTRIQS blocks that are responsible for various types of interaction (MATCH, CHAT, SEND, GET, and COMPLETE) or standard Qualtrics blocks that contain regular survey items (e.g., instructions, decisions, questionnaires). The latter can be added and edited manually in the Survey Editor. The first panel of Embedded Data contains the required parameters for matching participants. These parameters must be set before the MATCH block:
- researcherID: Enter the SMARTRIQS researcher ID here (obtained by submitting the registration form).
- studyID: Enter the name of the study here. The name can be any combination of alphanumeric characters (0-9, A-Z, a-z), dashes (-), or underscores (_), up to a length of 256 characters. For example: Dictator_Game_Demo-1. Note that names are case-sensitive.
- groupID: This is the field that identifies groups within a study. Leave this field blank; it will be automatically populated by SMARTRIQS.
- participantID: Keep the default value in this field ($e://Field/ResponseID) in order to assign the built-in Qualtrics response ID to participants (recommended). This guarantees that everyone will receive a random, unique, and anonymous ID.
- groupSize: Determine how many participants should be in each group. SMARTRIQS supports group interaction with up to 8 participants per group. Set this to 2 for dyadic interactions.
- numStages: Determine the number of decisions to be transmitted across participants. This usually corresponds to experimental stages. If participants take turns or make multiple decisions, there should be multiple stages. In the Dictator Game example, we set this field to 1, since there is only one decision that needs to be transmitted (the Allocator's transfer).
- roles: Determine the set of available roles within each group, where the roles are separated by commas. For example: Allocator,Recipient. Note that there is no space between the comma and the roles. Since SMARTRIQS uses these roles to identify participants within groups, it is important that each participant have a unique role within their group, even if their tasks are identical (e.g., simple group chat). Also note that the number of roles must be the same as the value of the groupSize parameter (2 in this example). Roles are case-sensitive: Make sure to use the correct cases when setting up role-specific blocks in the Survey Flow (see the section "Setting up role-dependent questions and blocks: Branch Logic").
- participantRole: Determine the role of the participant. To assign roles randomly within groups, enter random here. To learn more about custom role assignment methods, visit https://smartriqs.com/randomization.
- timeOutLog: SMARTRIQS saves error logs in this field, for example, when participants drop out of the study. Leave this field blank; it will be automatically populated.
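For readers who find it helpful to see the whole configuration at a glance, the snippet below restates the required parameters for the Dictator Game as a plain JavaScript object. This is only a schematic summary: In practice, these values are entered graphically as Embedded Data in the Survey Flow, not typed as code, and YOUR_RESEARCHER_ID is a placeholder for the ID obtained via registration.

```javascript
// Schematic summary of the required Embedded Data fields set before the
// MATCH block (Dictator Game). In Qualtrics these are entered in the
// Survey Flow, not written as code; this object merely mirrors that setup.
const matchParameters = {
  researcherID: "YOUR_RESEARCHER_ID",       // placeholder; obtained via registration
  studyID: "Dictator_Game_Demo-1",          // case-sensitive; 0-9, A-Z, a-z, -, _
  groupID: "",                              // leave blank; populated by SMARTRIQS
  participantID: "$e://Field/ResponseID",   // built-in random, unique Qualtrics ID
  groupSize: 2,                             // dyadic interaction (max. 8)
  numStages: 1,                             // one transmitted decision (the transfer)
  roles: "Allocator,Recipient",             // unique roles; no space after the comma
  participantRole: "random",                // random role assignment within groups
  timeOutLog: "",                           // leave blank; error logs saved here
};
console.log(matchParameters);
```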
The second panel contains optional parameters, which allow researchers to customize the non-essential features of the MATCH block (e.g., messages displayed before and during matching). In this tutorial we leave these fields blank and let SMARTRIQS apply the default values. To learn more about optional parameters, visit https://smartriqs.com/matching/#matchBlock.
Real-time communication between participants: the CHAT block
One of the most advanced features of SMARTRIQS is its built-in chat interface, which allows two or more participants to chat in real-time. The chat feature is flexible and customizable: Researchers can set up chat with or without a time limit; allow chat between the whole group or just some participants within the group; have multiple stages of chat within the study (e.g., interrupt the chat with a task, then continue the chat after participants finish the task); or customize the design of the chat window (e.g., time stamps, window size, chat instructions).
SMARTRIQS saves the content of the CHAT block, including participants' roles, messages, and time stamps (if set), to an Embedded Data field (the chat log) that can be accessed by downloading the collected data.
Figure 1  Sample screenshot of the setup of required and optional parameters before the MATCH block (Dictator Game). Note that both the "Required parameters" and "Optional parameters" panels must be placed before (above) the MATCH block in the Survey Flow.
The CHAT block has two required parameters in the Survey Flow: an Embedded Data field that stores the actual chat log (chatLog in this example), and chatName, which specifies the name of the chat log into which the chat will be saved. First, we set up the actual chat log. The name of this field can be any combination of alphanumeric characters (0-9, A-Z, a-z), dashes (-), or underscores (_), up to 32 characters in length. In this example we keep the default name of this field (chatLog). The value of this field should always be set to text. Then, we define the chatName parameter. The value of this parameter should be the name of the chat log field above, which is chatLog in this example (Figure 2).
The CHAT block has six optional parameters in the Survey Flow, which allow researchers to apply time limits and customize the chat design. In this example we keep the default design but implement a time limit of two minutes by setting the chatDuration parameter to 120 (seconds). This means that the chat will end after two minutes. By default (if this field is left blank), there is no time limit, and participants can chat for as long as they wish. We also set the allowExitChat parameter to no, which indicates that participants are not allowed to exit the chat before the time is up (i.e., they must chat for two minutes). By default (if this field is left blank), participants can exit at any time.
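In compact form, the CHAT block setup used in this tutorial can be summarized as follows. As before, this is a schematic restatement of the Embedded Data fields, not something that is typed into Qualtrics.

```javascript
// Schematic summary of the CHAT block fields used in this tutorial.
const chatParameters = {
  chatLog: "text",      // field that stores the chat log; its value is set to text
  chatName: "chatLog",  // must be the (case-sensitive) name of the field above
  // Optional parameters (all other optional fields left blank -> defaults apply):
  chatDuration: 120,    // time limit in seconds; blank = no time limit
  allowExitChat: "no",  // participants must chat until the time is up
};
console.log(chatParameters);
```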
To learn more about time limits, custom designs, and more advanced chat features (i.e., multiple stages, interrupted chat, private and group chat), visit https://smartriqs.com/chat. Researchers who do not want to include any chat in their studies should delete the CHAT block from the Survey Flow, along with the Embedded Data fields above the CHAT block.
Figure 2  Sample screenshot of the setup of the CHAT block. The chat log is saved in the first Embedded Data field (chatLog). The value of the second field (chatName) should always be the case-sensitive name of the chat log defined above. The third green panel contains the optional parameters for the CHAT block. Embedded Data fields should always be placed before (above) the CHAT block in the Survey Flow.
Adding instructions and recording responses: Question Blocks
At this point, participants can already chat with each other. If the study does not require any other response to be transmitted, the SEND and GET blocks should be deleted from the Survey Flow. In that case, the survey is ready for testing and data collection. However, if the study requires responses to be transmitted (e.g., the Allocator's transfer), researchers should add new Question Block(s). In this example, we add a block that allows the Allocator to make a decision (see Figure 3):
1. Click on "Save Flow" to close the Survey Flow, then scroll down to the bottom of the CHAT block in the Survey Editor and click on "Add Block". This adds a blank Question Block to the survey.
2. Rename the new block by clicking on its name (e.g., "Allocator's Transfer").
3. Click on "Create a New Question".
4. Click on the green button labeled "Multiple choice" on the right side of the screen (below "Change Question Type") and select "Slider".
5. Change "Choices" to 1.
6. Check "Force Response" to prevent participants from proceeding without making a decision. It is best practice to always force responses in interactive studies. To learn more about forced responses and other types of response validation, visit the corresponding Qualtrics support page.
7. Label the question (e.g., "Allocator's Transfer") and the choice ("Your transfer"), and add some instructions by editing the question text ("You have been assigned to the role of the Allocator. Please indicate below how many tokens you want to transfer to the Recipient").
Next, we repeat the above procedure for the Recipient. In the Dictator Game the Recipient does not make any decisions; she simply receives the Allocator's transfer. However, we still need to add another Question Block, in which we inform the Recipient about the transfer. Add a new Question Block and rename it to "Recipient's Payment". Create a new question, and change the question type to "Descriptive Text". Label the question (e.g., "Recipient's Payment") and add the following text: "The Allocator has decided to transfer you ${e://Field/transfer} tokens" (see Figure 4).
The expression following the $ sign is Qualtrics Piped Text, which represents dynamic text that depends on a variable or a previous response. When participants take the survey, they see the actual value of this variable (e.g., 50) instead of the expression.
Figure 3  Adding a new block and question in the Qualtrics Survey Editor. This example shows the setup of a question that records the Allocator's transfer in the Dictator Game (slider scale from 0 to 100). To change the default value (0), drag the slider to the desired position.
In this example, the Piped Text depends on an Embedded Data field called transfer, which has not been defined yet. We will add this field to the Survey Flow when we set up the GET block.
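To make the mechanism concrete, the sketch below mimics what Qualtrics does internally when it renders Piped Text. It is purely illustrative: the substitution is performed by Qualtrics itself, and renderPipedText is a hypothetical helper, not part of any Qualtrics or SMARTRIQS API.

```javascript
// Illustrative only: Qualtrics performs this substitution internally.
const embeddedData = { transfer: 50 }; // value saved by the GET block (hypothetical)

function renderPipedText(text, fields) {
  // Replace each ${e://Field/NAME} expression with the field's current value.
  return text.replace(/\$\{e:\/\/Field\/(\w+)\}/g, (_, name) => fields[name]);
}

const questionText =
  "The Allocator has decided to transfer you ${e://Field/transfer} tokens.";
console.log(renderPipedText(questionText, embeddedData));
// -> "The Allocator has decided to transfer you 50 tokens."
```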
Setting up role-dependent questions and blocks:
Branch Logic
In most studies participants are presented different instructions and make different decisions, depending on their roles. In the Dictator Game the Allocator transfers an amount, while the Recipient does not make any decision. This means that we should not display every block to everyone. Instead, we should present the "Allocator's Transfer" block to Allocators only, and the "Recipient's Payment" block to Recipients only. To achieve this, we utilize the Qualtrics feature called Branch Logic, which allows for displaying only certain blocks to participants, conditioned on some criteria.
Open the Survey Flow, and scroll down to the section with the two new blocks created in the previous section. Click on "Add Below" on the panel of the CHAT block and select "Branch". Then click on "Add a Condition", and select "Embedded Data" from the drop-down menu. Enter participantRole in the first empty field, and enter Allocator in the second empty field. Note that these are case-sensitive. Then click on "Move" on the panel of the "Allocator's Transfer" block, and while holding the left mouse button, drag this block under the branch. Repeat this process for the Recipient: Add a new branch below the CHAT block, add a condition, and select "Embedded Data". Enter participantRole in the first empty field and Recipient in the second empty field. Finally, drag the "Recipient's Payment" block under this new branch (see Figure 5).
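The logic configured above can be summarized in pseudocode. The sketch below only mirrors what the Survey Flow does; the branching itself is set up graphically, and showBlock is a hypothetical stand-in for "display this question block".

```javascript
// Illustrative mirror of the Survey Flow branching; not Qualtrics syntax.
const participantRole = "Allocator"; // set by SMARTRIQS during matching

// Hypothetical helper that stands in for displaying a question block.
function showBlock(name) {
  console.log(`Displaying block: ${name}`);
}

if (participantRole === "Allocator") {
  showBlock("Allocator's Transfer"); // slider question (the transfer decision)
} else if (participantRole === "Recipient") {
  showBlock("Recipient's Payment"); // descriptive text with the piped transfer
}
```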
The Dictator Game is an example with a single decision and "one-shot" interaction, so we do not have to add more questions. However, one of SMARTRIQS' greatest strengths is the ability to facilitate multi-stage interactions. For example, researchers might want participants to make consecutive decisions. For practical examples, see the demo studies with more complex Branch Logic (e.g., Trust Game, Ultimatum Game, or Third-Party Punishment) at https://smartriqs.com/demos. The corresponding QSF templates can be downloaded from the OSF repository.
Figure 4  Sample survey question (Recipient's Payment).
Figure 5  Using Branch Logic to conditionally display blocks to participants. In this example we use the participantRole variable as the condition. Depending on the value of this variable (i.e., the participant's role), we display either only the transfer block or only the payment block.
Transmitting responses across surveys: the SEND and
GET blocks
So far we have added all the necessary questions to the survey; however, these are still individual responses, which need to be transmitted across participants. SMARTRIQS uses two blocks to achieve this: the SEND block submits responses to the SMARTRIQS server, while the GET block retrieves responses from the server. In the Dictator Game, we want to send the Allocator's transfer to the server, so that the Recipient can retrieve it.
Sending data: the SEND block
Open the Survey Flow and scroll down to the SEND block. There are two green panels above the SEND block: The top panel has one Embedded Data field (response), while the bottom one has two fields (sendData and sendStage). All three are required parameters, and they should always be set before (above) the SEND block. There are no optional parameters for the SEND block. The green panels and the SEND block should always be placed after (below) the question block that contains the response to be transmitted. In this example, the response to be transmitted is in the "Allocator's Transfer" block, so move the two green panels and the SEND block under the Allocator's branch and below the "Allocator's Transfer" block (Figure 6).
The next step is setting up the values of the three required Embedded Data fields. By default, the name of the first field is response, but it is best practice to change this to something more informative and specific (max. 32 alphanumeric characters: 0-9, A-Z, a-z, -, _). Not changing the name can not only make data analysis difficult but also lead to errors in the Survey Flow. In this example, we rename this field to transfer. The value of this field should always be a Piped Text that refers to a particular response provided by the participant. This response can be of any type: multiple choice, scale, open-ended text, etc. In this example, the Piped Text should refer to the Allocator's decision.
Figure 6  Sample screenshot of the setup of the SEND block (Dictator Game).
Click on "Set a Value Now", then click on the blue arrow, and select "Insert Piped Text", then "Survey Question". The pop-up menu will show all the eligible questions and other variables that can be inserted as Piped Text. Find the question that contains the response to be transmitted (in this example: "Allocator's Transfer"). When this question is highlighted, click on the response to be transmitted ("Your Transfer", see Figure 7).
Note that "Your Transfer" is the manually set label of the response, not the response itself. It is worth reiterating that the use of intuitive and informative labels is best practice. In this instance, it will make it easier to insert the correct response using Piped Text.
The second field (sendData) indicates which previously defined Embedded Data field should be sent to the server. The value of this field should always be the name of another Embedded Data field; it should not refer directly to a question response. In this example, change the value of sendData to transfer. This indicates that the SEND block will access the value stored in the transfer field and send that value to the SMARTRIQS server. The third field (sendStage) specifies which experimental stage the selected response corresponds to. The value of this field should be a positive integer that is less than or equal to the numStages parameter, which was defined before the MATCH block. Since the Dictator Game has only one stage (numStages = 1), set sendStage = 1.
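Putting the three fields together, the SEND block setup for the Dictator Game can be summarized schematically as follows. The Piped Text value is shown as a placeholder, since the actual expression is inserted via the "Set a Value Now" menu rather than typed by hand.

```javascript
// Schematic summary of the fields set before the SEND block (Dictator Game).
const sendParameters = {
  transfer: "${q://.../...}", // placeholder Piped Text pointing at "Your Transfer";
                              // inserted via "Set a Value Now" -> "Insert Piped Text"
  sendData: "transfer",       // name of the Embedded Data field sent to the server
  sendStage: 1,               // positive integer <= numStages (here 1)
};
console.log(sendParameters);
```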
Retrieving data: the GET block
The function of the GET block is to retrieve responses from the SMARTRIQS server. A response cannot be retrieved if it was not previously transmitted to the server via a SEND block. In this example, the GET block retrieves the Allocator's transfer from the server and saves this response to the Recipient's survey. Qualtrics can then display the response to the Recipient. Open the Survey Flow and scroll down to the GET block. There are two green panels above the GET block: the top one has three required fields of Embedded Data (getData, getStage, and saveData), while the bottom one has nine optional fields. The green panels should always be placed before (above) the GET block. Always place the GET block before (above) the question block(s) in which the retrieved response is displayed. Move the panels and the GET block under the Recipient branch, but above the "Recipient's Payment" block (Figure 8).
The getData field specifies which response should be retrieved from the server. The value of this field should be the role of the participant whose response is retrieved (e.g., Allocator). The getStage field specifies which stage the response is retrieved from. This should be a positive integer that is less than or equal to the numStages parameter. Since the Allocator's response is transmitted in Stage 1, set getStage = 1. The saveData field specifies the name of the Embedded Data field into which the retrieved response is saved. This Embedded Data field should be added manually to the Survey Flow. To do this, click on "Add a New Field", name the new field, and leave its value blank. Then use the name of this new field as the value for saveData. For example, create a new field named transfer, and set the value of saveData to transfer.
The GET block has nine optional parameters. The defaultData parameter specifies a default (computer-generated) response, which is applied if the original (human) response cannot be retrieved. This occurs in two cases:
1. If the other "participant" is a bot.
Figure 7 Setting up a SEND block: Inserting a response as Piped Text.
2. If the other participant has "timed out", that is, if he or she has not submitted a response before the maximum waiting time expired (see maxWaitTime).
Note that to avoid any deception, participants in SMARTRIQS are always informed in real-time whether they have been matched with other participants or with bots. This is a built-in feature that cannot be customized. By default, there are no bots or default responses in SMARTRIQS surveys. To learn more about using bots and default responses, visit https://smartriqs.com/bots.
The maxWaitTime parameter specifies the maximum waiting time (in seconds). Participants will wait in the GET block until the response is successfully retrieved or until they reach this time limit. If they reach the time limit before retrieving a response, the survey is either terminated (the default setting) or a default (computer-generated) response is applied (if using bots or default responses is allowed). If this optional parameter is left blank, participants will wait up to 3 minutes for each response. The getWaitText parameter customizes the message displayed to participants while they are waiting in the GET block, while the other six optional parameters allow researchers to perform mathematical operations on retrieved responses (for example, to calculate the average or the maximum of retrieved responses). In the Dictator Game example there is no need for these optional settings or mathematical operations, so leave these fields blank. To learn more about optional parameters and operations, visit https://smartriqs.com/sending-retrieving/#getBlock.
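For reference, the GET block setup on the Recipient's branch can be summarized schematically as follows; blank optional fields indicate that the defaults described above apply.

```javascript
// Schematic summary of the fields set before the GET block (Recipient branch).
const getParameters = {
  // Required:
  getData: "Allocator", // role whose response is retrieved
  getStage: 1,          // stage the response was sent in (<= numStages)
  saveData: "transfer", // manually added Embedded Data field that stores the result
  // Optional (left blank in this tutorial -> defaults apply):
  defaultData: "",      // computer-generated fallback response (bots/timeouts)
  maxWaitTime: "",      // blank = wait up to 3 minutes per response
  getWaitText: "",      // custom message shown while waiting
};
console.log(getParameters);
```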
Concluding the study: the COMPLETE block
The COMPLETE block concludes the survey and indicates that the participant has completed the study. It also records issues in the timeOutLog field (e.g., participant dropouts and response timeouts). If a participant has not experienced any issue during the study, the timeOutLog field will take the following value: "OK - no issues". Otherwise, the field will contain a brief description of the issue encountered (e.g., "Allocator timed out in stage 1").
The COMPLETE block does not require any parameters and cannot be customized, so there is no need to define any Embedded Data fields before this block. Do not put any blocks after the COMPLETE block; otherwise SMARTRIQS could incorrectly indicate that a participant has completed the study, even if that participant actually dropped out at some point after the COMPLETE block. The only exception is the End of Survey Element, which should always be placed after the COMPLETE block. This element is optional: Researchers can include it to display a custom end-of-survey message or to redirect participants to another page (Figure 9).
Launching studies and monitoring data collection
To launch the study, save and exit the Survey Flow, then click on the green "Publish" button (top right), and click "Publish" again. Qualtrics will generate a public URL ("Anonymous Survey Link"), which gives access to the survey. If the study has already been published, the URL can be accessed under "Distributions", then "Anonymous Link". Thoroughly test any survey and ensure that all features work as expected before distributing the study link to participants. To test the Dictator Game, open the link in two tabs (or on two devices), and complete the study in both roles. If everything has been set up properly, the study should conclude without any issues, and the data should be available under the "Data & Analysis" menu in Qualtrics. Otherwise, an error message will be displayed, describing the issue.
Figure 8  Sample screenshot of the setup of the GET block: required parameters (top green panel) and optional parameters (bottom green panel). These panels and the GET block should always be placed before (above) the block in which the retrieved response is displayed ("Recipient's Payment" in this example).
To optimize participant experience and to minimize the risk of technical issues, please read the best practices and other useful tips for data collection at https://smartriqs.com/best-practices.
SMARTRIQS has a "progress monitor" website that allows researchers to monitor data collection and participant activity. In addition to displaying the status of each group, the progress monitor also shows the activity and responses of individual participants in real-time (see Figure 10).
To access the progress monitor, go to https://server.smartriqs.com/php/monitor.html and enter the researcherID and studyID.
Advanced settings and features
SMARTRIQS also allows researchers to run more complex interactive studies, including but not limited to: multiple conditions within studies, group interaction with up to 8 people per group, multiple stages, or repeated interaction between participants. This section briefly introduces some of the most important advanced settings of SMARTRIQS. More information, along with practical examples, can be found at https://smartriqs.com/getting-started.
Figure 9  The COMPLETE block and an optional End of Survey Element. The COMPLETE block should always be the last block in SMARTRIQS surveys, unless there is a custom End of Survey Element. In that case (as in this example), the End of Survey Element should be placed after the COMPLETE block.
Figure 10  Sample screenshot of the SMARTRIQS progress monitor. Each row represents a group. Column 1 shows the group ID, column 2 shows the condition (if set), and column 3 shows the group status. Columns 4-6 show the Allocator's ID, the Allocator's time of last activity, and the Allocator's stage 1 decision (transfer to the Recipient). Columns 7-8 show the Recipient's ID and the Recipient's last activity. Column 9 is blank since Recipients do not submit any decision in the Dictator Game.
Multiple conditions
Researchers can set up multiple conditions within a survey by filling in the optional parameters conditions and participantCondition before the MATCH block. SMARTRIQS will then assign groups to conditions based on these specifications (see https://smartriqs.com/randomization). Once participants are assigned to conditions, researchers can decide what should happen in each condition by using Branch Logic in the Survey Flow. For example, imagine a modified Dictator Game with three possible stake levels: low (10 tokens), medium (100 tokens), and high (1000 tokens), indicating that the Allocator would allocate either 10, 100, or 1000 tokens, depending on the condition. To achieve this, set conditions = low,medium,high (note that there is no space between the commas and the names of the conditions) and participantCondition = random. Then, use either Branch Logic in the Survey Flow or Display Logic in the Survey Editor to determine which version of the Dictator Game is displayed to which participant. To learn more about how to implement studies such as the above example, see the "Dictator Game, 3 conditions" survey at https://smartriqs.com/demos and download the corresponding template from the OSF repository.
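Schematically, the condition setup for this modified Dictator Game looks as follows; the assigned value and the endowment lookup at the end are hypothetical illustrations of how the condition could then drive the displayed stakes.

```javascript
// Schematic summary of the optional condition parameters (set before MATCH).
const conditionParameters = {
  conditions: "low,medium,high",  // no spaces between condition names
  participantCondition: "random", // random assignment of groups to conditions
};

// Hypothetical illustration of how the condition could select the endowment:
const endowments = { low: 10, medium: 100, high: 1000 }; // tokens
const assigned = "medium"; // in practice, set by SMARTRIQS at matching
console.log(`The Allocator receives ${endowments[assigned]} tokens.`);
```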
Larger groups (3-8 participants per group)
SMARTRIQS supports group interactions with up to 8 participants per group. Researchers can assign: (a) the same role with an identical task to everyone (e.g., auction, collective decision), (b) unique roles with different tasks to each participant (e.g., negotiation), or (c) any combination of the above. Note that the names of the roles should always be unique to each participant within a group, even if their tasks are identical. To increase the group size, modify the value of the groupSize field to the desired number, and then add this many roles to the roles field, separating them by commas. For example, in a study with groups of four, where participants are assigned to the roles of Blue, Red, Green, and Yellow, set groupSize = 4 and roles = Blue,Red,Green,Yellow (note that there is no space between role titles).
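The same four-person setup, summarized schematically (the final check simply mirrors the rule that the number of roles must equal groupSize):

```javascript
// Schematic summary of a four-person group setup.
const groupParameters = {
  groupSize: 4,                   // 2-8 participants per group
  roles: "Blue,Red,Green,Yellow", // one unique role per participant; no spaces
};

// Sanity check mirroring the rule that the number of roles must equal groupSize.
const roleList = groupParameters.roles.split(",");
console.assert(roleList.length === groupParameters.groupSize,
  "The number of roles must match groupSize");
```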
When having groups of 3-8 participants, it is also possible to set up private and group chats. While private chats are between selected participants only (excluding at least one participant), group chats include everyone in the group. Researchers can customize which participants chat with whom, when, and for how long. As with dyadic chat, it is possible to interrupt private and group chats with unrelated tasks. In later stages, participants can also join private chats from which they were previously excluded. To learn more about how to set up private and group chats, visit https://smartriqs.com/chat. For demos, see the "Communication" section at https://smartriqs.com/demos. The "Group Interaction" section showcases surveys with larger groups. The corresponding templates can be downloaded from the OSF repository.
Turn-taking and multiple stages
In many studies, participants take turns or have to respond to their partner's choices. For example, the Ultimatum Game (Güth, Schmittberger, & Schwarze, 1982) has two consecutive stages. The first stage is identical to the Dictator Game: The Allocator decides how to split a sum of money between herself and the Recipient. In the second stage, however, the Recipient can decide whether to accept or to reject the Allocator's offer. Rejecting the offer leaves both empty-handed.
To implement the above, duplicate the Dictator Game survey created in the previous section, then rename the studyID. Note that each study should have a unique studyID. Change numStages to 2, indicating that there are two responses to be transmitted. Then, add a new question block in which the Recipient reacts to the offer, and add a new SEND block in the Survey Flow below this new question block. Finally, pipe the Recipient's response into a new Embedded Data field reaction, then set sendData = reaction and sendStage = 2 (see Figure A1 in the Appendix).
Next, go to the Allocator's branch, and add a new Embedded Data field named reaction under the SEND block. Also set getData = Recipient, getStage = 2, and saveData = reaction, to indicate that the Recipient's reaction should be retrieved and saved into the field reaction. Then insert a new GET block under these fields. Finally, use either Branch Logic in the Survey Flow or Display Logic in the Survey Editor to display the final payoffs, conditioned on whether the offer was accepted or rejected (see Figure A2 in the Appendix).
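The parameter changes for this two-stage Ultimatum Game can be summarized as follows. This is again a schematic restatement of the Survey Flow setup described above; the studyID shown and the Piped Text placeholder are hypothetical.

```javascript
// Schematic summary of the two-stage Ultimatum Game parameters.
// Changes before the MATCH block (hypothetical studyID):
const matchChanges = { studyID: "Ultimatum_Game_Demo-1", numStages: 2 };

// Recipient branch: send the reaction in stage 2.
const recipientSend = {
  reaction: "${q://.../...}", // placeholder Piped Text for the accept/reject question
  sendData: "reaction",
  sendStage: 2,
};

// Allocator branch: retrieve the Recipient's stage-2 reaction.
const allocatorGet = {
  getData: "Recipient",
  getStage: 2,
  saveData: "reaction", // field added manually under the SEND block
};
console.log(matchChanges, recipientSend, allocatorGet);
```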
The "Trust Game" and "Third-Party Punishment" demos at https://smartriqs.com/demos also rely on sequential interaction and multiple stages. These demos, along with the Ultimatum Game demo, were designed to help researchers learn how to implement sequential interactions and multiple stages. The corresponding survey templates are also available in the OSF repository.
Simultaneous responses and waiting rooms
In some cases researchers might want to ensure that participants start certain stages of an experiment at the exact same time (e.g., group chat, effort task). Also, if participants have to read long instructions before a task, it is likely that some of them will spend considerably more time reading the instructions than others, which introduces asynchrony between participants. To ensure that participants start stages simultaneously, researchers can set up "waiting rooms" in their studies. When participants enter a waiting room, they cannot proceed to the next stage before everyone else in their group joins them in the waiting room.
To set up a waiting room, insert a SEND block and a
GET block before the task that participants should start
simultaneously. Set the following parameters before the
SEND block: participantStatus = ready, sendData = participantStatus, and sendStage = 1 (this must be a number that is not used in any other SEND block). Importantly, each waiting room counts as a separate stage, distinct from other decision stages, which means that researchers should not use its stage number when sending or retrieving other responses. For example, if there is a waiting room in the Dictator Game (before the Allocator makes a decision), then the waiting room is Stage 1 and the Allocator's transfer is Stage 2. Or, if there is a waiting room in the Ultimatum Game (before the Allocator's initial offer), then the waiting room is Stage 1, the Allocator's offer is Stage 2, and the Recipient's response is Stage 3.
Next, set the following parameters before the GET
block: getData = Allocator,Recipient (the roles in the group), getStage = 1 (this number should match the one defined above), and saveData = null,null.¹ Finally, set a custom message that participants will see in the waiting room. This will reduce participants' concerns about their status in the study. Setting getWaitText = "The task will start soon. Please wait for the other participant.", for example, will display this message while participants wait to start (see Figure A3 in the Appendix).
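Putting these parameters together, a waiting room placed before the Allocator's decision in the two-player Dictator Game would look roughly like this in the Survey Flow (a sketch; both roles pass through these elements before branching into their respective tasks):

    Set Embedded Data:
        participantStatus = ready
        sendData = participantStatus
        sendStage = 1
    Show Block: SEND
    Set Embedded Data:
        getData = Allocator,Recipient
        getStage = 1
        saveData = null,null
        getWaitText = The task will start soon. Please wait for the other participant.
    Show Block: GET
    (the Allocator's transfer is then sent as Stage 2)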
Learn more about setting up waiting rooms by reviewing the "Effort Competition with Waiting Room" demo at https://smartriqs.com/demos and downloading the corresponding survey template from the OSF repository.
Transmitting sensitive and personal data: custom private servers
The SMARTRIQS Data Policy prohibits researchers from submitting any personal or sensitive data to the default SMARTRIQS server. Since the server is hosted on Amazon Web Services, which constitutes a third party, participants' confidentiality cannot be guaranteed. Researchers are allowed to submit any anonymous data (e.g., participant IDs, decisions, roles, chat logs, or open-ended text), as long as these do not contain any personal identifiers or personal addresses. In addition, some funding agencies might forbid research involving non-private servers due to potential data privacy issues. Researchers who wish to use SMARTRIQS to submit personal data or other identifiers should set up their own SMARTRIQS server on a secure, private web server. Finally, a custom server also allows researchers to freely modify the server-side scripts, either to change the built-in settings of SMARTRIQS (e.g., maximum group size) or to add new features (e.g., video chat). A step-by-step guide to setting up a custom SMARTRIQS server is available at https://smartriqs.com/custom-server.
¹ Note that here the "retrieved responses" are not actual responses that we want to save. They are simply status indicators that determine whether the participant can proceed. As such, in this case we do not have to refer to any other Embedded Data when setting up the saveData parameter. Use null instead, and separate the values by commas (no space in between).
Author's note
I thank Ankita Sastry for her help in developing SMAR-
TRIQS and Peggy He, Jinny Hwang, Denise Lin, and Som-
mer Schneller for their assistance in testing SMARTRIQS.
I also thank Denis Cousineau, Russell Golman, Nik Gur-
ney, Einav Hart, Mark Hurlstone, Daisung Jang, Melis Kir-
gil, Jonathan Lee, Stephanie Permut, Eric VanEpps, Chao
Wang, Simon van der Zandt, and an anonymous reviewer
for their valuable feedback and suggestions.
References
Arechar, A. A., Gächter, S., & Molleman, L. (2018). Conducting interactive experiments online. Experimental Economics, 21(1), 99–131. doi:10.1007/s10683-017-9527-2
Balietti, S. (2017). nodeGame: Real-time, synchronous, online experiments in the browser. Behavior Research Methods, 49(5), 1696–1715. doi:10.3758/s13428-016-0824-z
Berinsky, A. J., Huber, G. A., & Lenz, G. S. (2012). Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk. Political Analysis, 20(3), 351–368. doi:10.1093/pan/mpr057
Bohannon, J. (2016). Mechanical Turk upends social sciences. Science, 352(6291), 1263–1264. doi:10.1126/science.352.6291.1263
Casari, M., & Cason, T. N. (2009). The strategy method lowers measured trustworthy behavior. Economics Letters, 103(3), 157–159. doi:10.1016/j.econlet.2009.03.012
Chandler, J., Rosenzweig, C., Moss, A. J., Robinson, J., & Litman, L. (2019). Online panels in social science research: Expanding sampling methods beyond Mechanical Turk. Behavior Research Methods, 51(5), 2022–2038. doi:10.3758/s13428-019-01273-7
Chen, D. L., Schonger, M., & Wickens, C. (2016). oTree: An open-source platform for laboratory, online, and field experiments. Journal of Behavioral and Experimental Finance, 9, 88–97. doi:10.1016/j.jbef.2015.12.001
Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2), 171–178. doi:10.1007/s10683-006-9159-4
Forsythe, R., Horowitz, J. L., Savin, N., & Sefton, M. (1994). Fairness in Simple Bargaining Experiments. Games and Economic Behavior, 6(3), 347–369. doi:10.1006/game.1994.1021
Giamattei, M., Molleman, L., Seyed Yahosseini, K., & Gächter, S. (2019). LIONESS Lab: A free web-based platform for conducting interactive experiments online. SSRN Electronic Journal. doi:10.2139/ssrn.3329384
Gosling, S. D., & Mason, W. (2015). Internet Research in Psychology. Annual Review of Psychology, 66(1), 877–902. doi:10.1146/annurev-psych-010814-015321
Güth, W., Schmittberger, R., & Schwarze, B. (1982). An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization, 3(4), 367–388. doi:10.1016/0167-2681(82)90011-7
Hawkins, R. X. D. (2015). Conducting real-time multiplayer experiments on the web. Behavior Research Methods, 47(4), 966–976. doi:10.3758/s13428-014-0515-6
Hendriks, A. (2012). SoPHIE - Software Platform for Human Interaction Experiments. University of Osnabrueck.
Henninger, F., Kieslich, P. J., & Hilbig, B. E. (2017). Psynteract: A flexible, cross-platform, open framework for interactive experiments. Behavior Research Methods, 49(5), 1605–1614. doi:10.3758/s13428-016-0801-6
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61–83. doi:10.1017/S0140525X0999152X
Hertwig, R., & Ortmann, A. (2008). Deception in Experiments: Revisiting the Arguments in Its Defense. Ethics & Behavior, 18(1), 59–92. doi:10.1080/10508420701712990
Mason, W., & Suri, S. (2012). Conducting behavioral research on Amazon's Mechanical Turk. Behavior Research Methods, 44(1), 1–23. doi:10.3758/s13428-011-0124-6
Mathur, M. B., & Reichling, D. B. (2019). Open-source software for mouse-tracking in Qualtrics to measure category competition. Behavior Research Methods. doi:10.3758/s13428-019-01258-6
Molnar, A. (2019). SMARTRIQS: A Simple Method Allowing Real-Time Respondent Interaction in Qualtrics Surveys. Journal of Behavioral and Experimental Finance, 22, 161–169. doi:10.1016/j.jbef.2019.03.005
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. doi:10.1126/science.aac4716
Paolacci, G., & Chandler, J. (2014). Inside the Turk. Current Directions in Psychological Science, 23(3), 184–188. doi:10.1177/0963721414531598
Peer, E., Brandimarte, L., Samat, S., & Acquisti, A. (2017). Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology, 70, 153–163. doi:10.1016/j.jesp.2017.01.006
Permut, S., Fisher, M., & Oppenheimer, D. M. (2019). TaskMaster: A Tool for Determining When Subjects Are on Task. Advances in Methods and Practices in Psychological Science, 2(2), 188–196. doi:10.1177/2515245919838479
Pettit, J., Friedman, D., Kephart, C., & Oprea, R. (2014). Software for continuous game experiments. Experimental Economics, 17(4), 631–648. doi:10.1007/s10683-013-9387-3
Qualtrics. (2020). Provo, Utah, USA: Qualtrics. Retrieved from https://www.qualtrics.com
SurveyMonkey. (2020). San Mateo, California, USA: SurveyMonkey Inc. Retrieved from https://www.surveymonkey.com
Zhou, H., & Fishbach, A. (2016). The pitfall of experimenting on the web: How unattended selective attrition leads to surprising (yet false) research conclusions. Journal of Personality and Social Psychology, 111(4), 493–504. doi:10.1037/pspa0000056
Appendix
This appendix provides useful links and additional screen captures.
Useful Links
1. Live demos: https://smartriqs.com/demos
2. SMARTRIQS registration form: LINK
3. Survey templates (QSF les): https://osf.io/cgejr
4. Data collection progress monitor: https://server.smartriqs.com/php/monitor.html
5. Data Policy & Data Submission Policy Agreement: https://smartriqs.com/data-policy
6. Source code (JavaScript and PHP): https://github.com/andras-molnar/smartriqs
Figure A1 Sample setup of the Recipient's branch in an Ultimatum Game. First, the Allocator's offer is retrieved via a GET block, then the Recipient's reaction is transmitted via a SEND block. The new panels below the Recipient's Reaction block were added by clicking on the "Add a New Element Here" button, and then selecting either "Embedded Data" or "Block".
Figure A2 Sample setup of the Allocator's branch in an Ultimatum Game. First, the Allocator's offer is transmitted via a SEND block, then the Recipient's reaction is retrieved via a GET block. The branches below the GET block use the retrieved response to display either the "Accepted" or the "Rejected" blocks.
Figure A3 Sample setup of a waiting room. Here the waiting room is inserted after the instructions, but before the Allocator's branch. Both participants have to read the instructions first and then proceed to the waiting room. Note that a waiting room counts as a separate stage, so in this example the Allocator's transfer would already be Stage 2.
Citation
Molnar, A. (2020). How to implement real-time interaction between participants in online surveys: A practical guide to SMARTRIQS. The Quantitative Methods for Psychology, 16(4), 334–354. doi:10.20982/tqmp.16.4.p334
Copyright © 2020, Molnar. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use,
distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in
this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with
these terms.
Received: 15/02/2020 Accepted: 21/04/2020