Expyriment: A Python library for cognitive
and neuroscientific experiments
Florian Krause & Oliver Lindemann
© Psychonomic Society, Inc. 2013
Abstract Expyriment is an open-source and platform-
independent lightweight Python library for designing and
conducting timing-critical behavioral and neuroimaging exper-
iments. The major goal is to provide a well-structured Python
library for script-based experiment development, with a high
priority being the readability of the resulting program code.
Expyriment has been tested extensively under Linux and
Windows and is an all-in-one solution, as it handles stimulus
presentation, the recording of input/output events, communica-
tion with other devices, and the collection and preprocessing of
data. Furthermore, it offers a hierarchical design structure,
which allows for an intuitive transition from the experimental
design to a running program. It is therefore also suited for
students, as well as for experimental psychologists and neuro-
scientists with little programming experience.
Keywords Software · Programming library · Python · Experimental design · Stimulus presentation
Introduction
In all disciplines of modern experimental psychology and
cognitive neuroscience, most empirical work in the laboratory
is based on the use of standard personal computers, and the
acquisition of data requires very high accuracy in the timing of
the presentation of the experimental stimulus materials, as well
as the recording of the participant’s responses. To facilitate the
creation of time-accurate experimental procedures, researchers
need specialized software tools that implement the necessary
procedures and provide high-level abstraction. This software
can generally be classified into two categories: experiment
builders and programming libraries.
Experiment builders are full applications with a graphical user
interface, through which the user creates experimental settings by
arranging readymade pieces on a visual sketchpad (in a point-
and-click manner). This category includes a variety of commer-
cial (see Stahl, 2006, for a review) as well as open-source (e.g.,
Mathôt, Schreij, & Theeuwes, 2012) applications. Although
experiment builders provide an intuitive way for nonprogram-
mers to rapidly create experiments, they suffer from the disad-
vantage that the use of a graphical interface results in a lack of
flexibility and control. First, the scope of experimental settings
that can be created is limited by the application itself, and second,
crucial aspects of the resulting experiments will be controlled by
the experiment builder software and not by the user. For many
experimenters, these limitations represent a serious problem, and
sometimes they even make it impossible to implement the re-
quired experimental protocols. It is therefore often necessary to
develop custom software and to base the implementation of an
experiment on classical programming or scripting languages.
For this purpose, specific programming libraries for experi-
ments have been developed. These provide collections of data
structures and routines, adding specific functionality to existing
programming languages. A popular example for this category is
the open-source Psychophysics Toolbox (Brainard, 1997) for
the commercial MATLAB programming language (MathWorks
Inc., Natick, MA). Using a programming library specializing in
experimental settings and having the ability to combine it with
other libraries ensures flexibility and puts the user in control.
In the present article, we present Expyriment (pronounced
), a library written in and for the Python
F. Krause
Donders Institute for Brain, Cognition and Behaviour,
Radboud University, Nijmegen, The Netherlands
O. Lindemann
Division of Cognitive Science, University of Potsdam,
Potsdam, Germany
F. Krause (*)
P.O. Box 9104, 6500 HE Nijmegen, The Netherlands
e-mail: f.krause@donders.ru.nl
Behav Res
DOI 10.3758/s13428-013-0390-6
programming language (Van Rossum & Drake, 2011), aimed at
designing and conducting behavioral, as well as neuroimaging
experiments. Python has also been used in related projects that
have aimed to offer solutions for the control of behavioral
experiments or the timing-critical presentation of stimuli, such
as PsychoPy (Peirce, 2007) or VisionEgg (Straw, 2008). We
specifically chose to develop for the Python programming
language because we believe that it is especially suited for
scientific computing and is ideal for the development of
script-based experiments, for a number of reasons: First, Python
is open-source software and is freely available to researchers as
well as to students. Second, being platform independent,
Python runs on Windows, Linux, and Mac OS. Third, Python
has the reputation of being one of the easiest programming
languages to learn (even for nonprogrammers), mainly due to
the clear and simple syntax, making it a very popular tool for
researchers worldwide (Bassi, 2007). Furthermore, the easy
syntax can result in very readable programming code (an
important concern for a scientific community in which algo-
rithms and routines will be shared amongst researchers).
Fourth, Python is an interpreted language, allowing for rapid
write–test cycles during development and immediate testing of
short snippets of code (via the interactive interpreter). Fifth, in
contrast to other open-source programming languages com-
monly used in the scientific community, such as R (R
Development Core Team, 2012), Python is not only applicable
for particular tasks (e.g., data analysis), but aims to be an all-
purpose programming language. As a final, and maybe the
most important, argument for choosing a particular open-
source software for scientific purposes (cf. Halchenko &
Hanke, 2012), Python has a large and active community, which
ensures the continuous maintenance and development of the
language and offers extensive documentation and support. Due
to the broad scope of applications and the community-driven
approach, programmers are not restricted to the large standard
library with which Python is natively equipped, since a huge
number of free third-party libraries are nowadays available to
extend Python’s functionality. For instance, NumPy (Oliphant,
2006) and SciPy (Jones, Oliphant, Peterson, et al., 2001) are
libraries specialized in scientific computing and data handling.
Importantly, Expyriment builds on these advantages and
represents a cross-platform library that has been developed to
run under all common desktop operating systems, such as
Windows, Linux, and Mac OS. Additional experimental sup-
port for Android devices exists and is under active development.
Expyriment has been intensively tested under Windows XP and
7, as well as under Linux distributions based on Debian Linux.¹
Expyriment is free software and released under the open-source
GNU General Public License (Free Software Foundation, 2007).
The development of Expyriment was guided by several
motivational principles:
1. Expyriment is a programming library and is not meant to
be a full application. It does not rely on a graphical user
interface and does not implement an internal control loop
to automatically handle events in the background. Instead,
we have focused on providing a lightweight Python
library entirely written in Python itself, with only a few
dependencies on other Python libraries (see below), in
order to give the user full control over timing-critical
event handling.
2. Expyriment has a strong focus on experimental design.
That is, in contrast to other existing Python libraries—for
instance, PsychoPy (Peirce, 2007) or VisionEgg (Straw,
2008)—an Expyriment program is not necessarily centered
around stimulus presentation (i.e., the creation and manip-
ulation of experimental designs is an integral part of the
library itself, and Expyriment can even be used solely for
this purpose). Expyriment offers a structure for experimen-
tal designs that is independent of the presentation software
actually used and that can be exported as pure text files. By
building a hierarchical relation between the constituents of
an experiment, the transition from thinking about the con-
ceptual design to its technical implementation is facilitated
to a great extent. Expyriment therefore can also be helpful
for researchers who want to use other software for the
presentation of stimuli and experimental control.
3. Expyriment follows a modular approach. This means that
parts of the library can be used independently from the
rest of the library and in conjunction with other Python
libraries (see, e.g., Point 2).
4. Expyriment is easily extensible and offers a built-in plugin
system. This plugin system allows for the unified use and
sharing of user-made experimental stimuli, devices, or
design elements. More detailed information regarding
the development of Expyriment plugins can be found in
the Expyriment online documentation.
5. Expyriment strongly emphasizes the readability of the
code. Since researchers share experiments with other
researchers, colleagues, and students, an experiment’s
source code should be as readable and easily understand-
able as possible. On top of the easy syntax offered by the
Python programming language, Expyriment is strictly
object-oriented, which further increases clarity.
Expyriment shares the goal of code readability with most
other Python-based experiment control platforms, but it
extends this readability principle to the specification of the
experimental design. The resulting programming code
should most directly resemble the structure of the study,
and should in the first place facilitate researchers’ thinking
about the experimental design. Expyriment therefore al-
lows the hierarchical formalization of abstract designs in
¹ www.debian.org. For the use of GNU/Linux in the field of neuroscience, see Hanke and Halchenko (2011).
Python code, independent of the programming of the
stimulus presentation and experimental control.
6. Like most Python libraries, Expyriment also aims to be
cross-platform. Although most commercial software sup-
ports Windows, and sometimes Mac OS, Linux is often
left out of the picture or is an afterthought. Expyriment
was especially developed with Linux users in mind, and
each version of Expyriment is tested intensively on
Debian Linux² and on the Debian-based Linux distribu-
tion Ubuntu.³
Expyriment is freely available for download from www.
expyriment.org. The website further provides links to the
online documentation, a tutorial, example experiments, and
the Expyriment newsletter and mailing list. The version of
Expyriment described in the present article is 0.6. In the
following sections, we will first provide a description of the
structure and functionality of the library. Afterward, we will
showcase the use of Expyriment with an example experiment.
Finally, we will provide technical information on the
implementation and empirical results on timing accuracy.
Overview
Structure
The Expyriment library follows a strict modular and object-
oriented structure. It is organized into five packages, focusing
on different areas of an experimental setting. After importing
Expyriment in a Python script or interactive session, the
following subpackages become available: design, stimuli,
io, misc, and control. Each package consists of classes,
methods, or further modules. Figure 1 provides a graphical
overview of the library.
In Expyriment, all subpackages have a defaults module that
contains variables describing the default values for most classes
and certain global settings within that package. That is, for each
class, the default values of optional arguments that can be spec-
ified during instantiation are set via variables in the defaults
module. Importantly, these default values can be overwritten by
the user in order to make specific global settings for each
experiment.
The design, stimuli,andio packages allow for the integra-
tion of user-written plugins (called extras) in order to extend
Expyriment with, for example, specific randomization proce-
dures, more advanced stimuli, or custom-made response de-
vices. The role and functionality of each package can be
summarized as follows.
expyriment.design The design package provides classes de-
scribing experimental structures. The main role of this pack-
age lies in the specification of experimental designs. This is
done by building a hierarchical relation between an experi-
ment, the experimental blocks, and the experimental trials,
and by specifying between- as well as within-subjects factors.
On the basis of this specification, the design package is further
capable of randomizing or permuting the experimental blocks
or trials within the defined experiment. As a simple example,
consider that you want to create an experiment with two
blocks, each containing 60 randomized trials of three different
conditions A, B, C:
² www.debian.org
³ www.ubuntu.org
Fig. 1 An overview of the structure of the Expyriment library.
Expyriment consists of five packages (red): design, stimuli, io, misc,
and control. Each package provides various classes (green), functions
(white), and further modules and default settings (yellow). A plugin
system (violet) allows for user-written extra functionality
Importantly, designs can be exported to text files [by call-
ing exp.save_design('design-demo.csv') in
the example above]. When specified without using any
Expyriment presentation feature, such as stimulus objects,⁴
the design package can be used in combination with other
Python libraries or programming environments for stimulus
presentation that either do not offer experimental design
structures (e.g., Vision Egg) or that rely mainly on importing
externally created designs (e.g., PsychoPy, using its
ImportConditions() function). Using an interactive
Python session, Expyriment can therefore also help researchers
to formalize and review complex designs or randomizations.
expyriment.stimuli The stimuli package provides classes for a
variety of experimental visual as well as auditory stimuli. The role
of this package thus lies in the definition and creation of stimuli.
Once a stimulus is created, it can be integrated into the hierarchi-
cal design structure, completing the definition of the experiment.
Importantly, stimuli can be automatically presented on the display
and do not rely on a separate external presenter—for example,
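A minimal sketch of a self-contained presentation (the choice of stimulus and the timing are illustrative only):

```python
from expyriment import control, stimuli

exp = control.initialize()   # creates exp.screen, exp.clock, ...
cross = stimuli.FixCross()
cross.preload()              # move the stimulus into (video) memory
cross.present()              # draws the stimulus and updates the display itself
exp.clock.wait(1000)         # keep it on screen for about a second
control.end()
```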
To create more complex visual scenes, visual stimuli can
also be plotted on other stimuli. The resulting combined
stimulus can then be preloaded and/or presented as a whole.
This always ensures precise timing, even for complex stimuli:
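For instance (a sketch; the canvas size and the two rectangles are arbitrary placeholders):

```python
from expyriment import control, stimuli

exp = control.initialize()
scene = stimuli.Canvas(size=(400, 400))   # container stimulus
stimuli.Rectangle(size=(40, 40), position=(-100, 0)).plot(scene)
stimuli.Rectangle(size=(40, 40), position=(100, 0)).plot(scene)
scene.preload()   # the combined scene is preloaded as one stimulus
scene.present()   # ...and appears on the screen in a single update
control.end()
```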
Another way to achieve the simultaneous presentation of
several stimuli is to consecutively present them without clear-
ing the display contents and only to update the display after
the presentation of the last stimulus:
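A sketch of this second method, using the clear and update flags of present():

```python
from expyriment import control, stimuli

exp = control.initialize()
left = stimuli.Rectangle(size=(40, 40), position=(-100, 0))
right = stimuli.Rectangle(size=(40, 40), position=(100, 0))
left.present(update=False)   # clear and draw, but do not update the display yet
right.present(clear=False)   # draw and update: both stimuli appear at once
control.end()
```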
Although the first method is often preferable, since it allows
for reducing visual scenes into a single object, the second method
⁴ Note that each Expyriment stimulus can be integrated into the design
hierarchy as well, using the trial.add_stimulus() method.
can be useful when a screen has to be built up by incrementally
adding stimuli—for instance, on the basis of user input.
Updating the screen can also be done explicitly by calling the
update() method of the screen object (see the expyriment.
control section below). If you want to maximize the speed with
which your stimulus appears on the screen, in some cases this
might require updating only that part of the screen on which the
stimulus appears. This is possible by calling the
update_stimuli() method of the screen object and spec-
ifying a list of stimuli to update as an argument. Note, however, that
this will only work as expected when OpenGL mode is switched
off (see the Software evaluation and timing section below).
expyriment.io The io package provides classes for handling
input and output. Its main role is to facilitate communication
with external devices (e.g., keyboard, button box, etc.), as well as
to handle log files. Most io classes can also be used indepen-
dently from the other Expyriment packages—for example,
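For instance, a serial port might be polled without any other Expyriment machinery (a sketch; the device name and baud rate are assumptions, and the calls will fail without the corresponding hardware):

```python
from expyriment import io

# open a serial connection independently of any running experiment
port = io.SerialPort("/dev/ttyS0", baudrate=115200)
byte = port.poll()   # read one byte from the device, if one is available
port.close()
```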
expyriment.misc The main role of the misc package is to
provide additional functionality that goes beyond the scope of
the other packages—for instance, the functionality to
preprocess the acquired data files for use in further
statistical software (see the Additional features section below).
This functionality does not depend on Expyriment-specific
data and thus can easily be integrated into existing Python
code—for example,
expyriment.control The task of the control package is to
control the implementation of a previously defined experiment.
It provides the three important functions to initialize, start, and
end an experiment and facilitates interaction with the display,
clock, and keyboard, as well as with data and event files, by
automatically integrating them into the current experiment
definition. With a complete hierarchically structured defini-
tion of an experiment, conducting the experiment consists of
nothing more than sequentially iterating over the hierarchical
structure.
The control package plays a central role for the processing
of experiment code, since it is involved in the building of the
scaffolding of each Expyriment program. A typical script
comprises three central commands, which can be described
as the crucial landmarks for the flow of the experimental
control program:
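In outline, the three landmarks frame every Expyriment script like this (a sketch; the experiment name is a placeholder, and the design and trial loop are elided):

```python
from expyriment import control, design

exp = design.Experiment(name="My Experiment")

control.initialize(exp)   # Landmark 1: screen, keyboard, clock, event file
# ... create the design hierarchy and the stimuli here ...

control.start()           # Landmark 2: asks for a subject ID, opens the data file
# ... iterate over exp.blocks and their trials here ...

control.end()             # Landmark 3: closes the screen, saves unwritten log files
```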
Landmark 1 initializes an experiment. If no
design.Experiment object is given as a parameter, a
new experiment is automatically created and returned by this
function. Experiment initialization will create a screen object
representing the computer’s display (available as
exp.screen), a keyboard input device object to receive
input from the keyboard (available as exp.keyboard), an
event log file object that automatically logs stimulus presen-
tation times and device communication in the background
(available as exp.events), and an experimental clock
object providing timing functionality (available as
exp.clock). After this landmark, the experimental design
hierarchy and the experimental stimuli can be created.
Landmark 2 starts the currently initialized experiment.
This will ask for a subject ID on the display (afterward
available as exp.subject) and create a data file
object that can be used to log experimental variables
on a trial-by-trial basis (available as exp.data). After
this landmark, the experiment can be conducted by
iterating over the hierarchical design (see below).
Landmark 3 ends an experiment, which will close the
screen and save all unwritten log files to disk.
The control package also includes the Expyriment
test suite (see the Additional features section below),
as well as some global settings concerning the execution
of the experiment (e.g., display, audio, and event-
logging settings, set via the defaults module).
Additional features
Test suite When using software to control the implemen-
tation of a scientific experiment, it is absolutely neces-
sary to guarantee proper functioning. The Expyriment
test suite is a visually guided tool for testing several
aspects of Expyriment on a specific system. The
tests include timing accuracy of visual stimulus presen-
tation, audio playback functionality, mouse functionality,
and serial port functionality/use. Eventually, all test
results can be saved as a protocol, t ogether with various
information about the system that Expyriment is
running on. The test suite can be started by calling
expyriment.control.run_test_suite(). We
strongly recommend always using the test suite before testing
participants, in order to guarantee proper functioning.
Develop mode During the development of an experiment, it
can be convenient to change some of the default settings in the
control package—such as starting the experiment in a small
window (instead of occupying the full screen), suppressing the
startup and ending messages, automatically creating succes-
sive subject IDs (without asking for one), and switching off
time stamps for output files. To conveniently activate all of
these common settings at once, the control package allows for
switching into a dedicated development mode by calling
expyriment.control.set_develop_mode(True)
before initializing an experiment.
Command line interface Expyriment comes with a command
line interface that allows for starting the test suite, a
graphical tool to browse the application programming
interface reference, as well as making specific settings
(e.g., running in develop mode) for a single execution
of an Expyriment script, without manipulating the script
file. A full description of the available options can be
obtained by typing python -m expyriment.cli -h
from a command line.
Data preprocessing In most cases, the data acquired by
Expyriment needs to be processed further before a statistical
analysis can be performed. This processing entails aggrega-
tion of the dependent variables over all factor-level combina-
tions of the experimental design. Expyriment provides an
easy, but flexible way to automatize this process with the data
preprocessing module included in the misc package
(expyriment.misc.data_preprocessing). Further
information can be found in the Expyriment online documen-
tation. The Expyriment website also provides an R script for
conveniently reading the Expyriment data files of several
subjects into a single R data frame.
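As a library-independent illustration of what such aggregation entails, the following sketch averages a dependent variable (response time) over factor-level combinations; the trial records are invented:

```python
# Aggregating a dependent variable (RT, in ms) over the cells of a
# factorial design, as a data-preprocessing step before analysis.
from collections import defaultdict
from statistics import mean

# hypothetical trial-by-trial records, as logged in a data file
trials = [
    {"Color": "red",   "Position": "left",  "RT": 420},
    {"Color": "red",   "Position": "left",  "RT": 460},
    {"Color": "red",   "Position": "right", "RT": 510},
    {"Color": "green", "Position": "left",  "RT": 430},
    {"Color": "green", "Position": "right", "RT": 480},
    {"Color": "green", "Position": "right", "RT": 500},
]

# group RTs by factor-level combination, then average each cell
cells = defaultdict(list)
for row in trials:
    cells[(row["Color"], row["Position"])].append(row["RT"])
means = {cell: mean(rts) for cell, rts in cells.items()}
```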
Use
To showcase the use of Expyriment, we will design and
implement a simple behavioral experiment for assessing
a Simon effect (Hommel, 1993; further examples can be
found at the Expyriment website). In two experimental
tasks, participants have to respond to a rectangle on the
screen, according to its color (red or green), by pressing
the left or the right arrow key on the computer’s key-
board. Additionally, the position of the rectangles can
be either left or right. Each trial will start with the
presentation of a fixation cross for 500 ms, followed
by the rectangle that will remain on the display until a
response is given. Between trials, a blank screen is
shown for 3,000 ms. Each block will contain 128 trials
in random order. The two tasks will differ only in the
mapping of responses (i.e., which button to press for
which color), which will be shown to the participant as
a brief instruction at the beginning of each block. The
order of tasks will be counterbalanced over participants.
The experiment has a 2 × 2 × 2 × 2 factorial design,
with the within-subjects factors Color (red, green),
Position (left, right), and Task (left = green, left = red), as well
as the between-subjects factor Task Order (left = green first,
left = red first).
Example program
The resulting programming code is shown in Listing 1 in the
Appendix. After importing all of the five Expyriment pack-
ages (line 1), we start creating the experimental design. First,
we create an experiment object exp that will be the root of the
design hierarchy (line 5). We then utilize the control package
in order to initialize the just-created experiment (line 6). Next,
we build the design hierarchy by iterating over all levels of all
factors in a nested fashion (lines 9–24). For both of the two
tasks (left = green, left = red), we create a block object b and set
the block factor Task to the corresponding level (lines 10–11).
Within each task, we then create a trial object t for all
combinations of location (left, right) and color (red,
green) and set the trial factors Position and Color to
the corresponding levels (lines 12–17). Note that in both
cases, we actually loop over lists with two items: the
factor-level name and additional concrete values to be
used when creating the stimuli (lines 12–13). Having
defined a trial, we now create the target stimulus (a
rectangle object r) with a position and color defined
in the loop (lines 18–19). The stimulus is added to the
trial that we just created (line 20), and the list of all
stimuli of this trial (including exactly one stimulus now)
is now accessible as t.stimuli. We next add 32
copies of the trial to the block (line 21), which will
now be accessible as b.trials. To complete the hier-
archy, we shuffle all trials in the block (line 22), and
eventually also add each block to the experiment (line 23),
such that they are available as exp.blocks. The experi-
ment itself gets a between-subjects factor Task Order, with the
levels “left = green first” and “left = red first” (line 24).
After having created the experimental design, we define
and preload two global stimuli: a blank screen object
blankscreen and a fixation cross object fixcross
(lines 27–30).
Eventually, we start the experiment by using the control
package (line 33). We define what we want to log by
naming the variables of interest, which will be the first
entry of each column in the data file (line 34). Since we
want the order of the two tasks (which we defined as
blocks) in our design to be counterbalanced across partic-
ipants, we now swap the blocks if necessary, depending
on the between-subjects factor that is coupled to the
subject ID that was assigned when the experiment was
started (lines 35–36). Having a full definition of the
experimental design in one single object (exp) now al-
lows us to simply iterate over this structure (lines 39–52):
For each block in our experiment, we create and present a
text screen object with simple task instructions, as defined
by the block factor Task (line 40), and wait for a (any!)
buttonpress response by the participant (line 41), after
which we present a blank screen (line 42). For each trial
within that block, we first wait for 3,000 ms (the intertrial
interval). During this time, we preload all stimuli within
that trial (in our case, only one, the target stimulus) into
memory to prepare them for a timing-accurate presentation
(line 44). Note how this mechanism works: Preloading of
the stimuli will return the time that it took to finish this
operation, which in turn we subtract from the total waiting
time (the 3,000 ms). We now present the fixation cross to
the screen (line 45) and wait for 500 ms (line 46). Then,
the target stimulus (the rectangle) is presented (line 47),
and we wait for the participant to respond with either the
left or the right arrow key (lines 48–49). The pressed key
and the response time are returned. Next, we present the
blank screen (line 50) and add our variables of interest to
the data file (line 51). This is a fast operation, since at
this point in time the data will not yet be written to the
hard disk, but will remain in memory until the experiment
is ended. At last, we unload the stimuli of the trial again
(in this case, the target stimulus only), in order to free
memory (line 52). To finish the experiment, we utilize the
control package (line 55) and present a goodbye message
to the participant.
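The block swap in lines 35–36 amounts to deriving the task order from the subject ID; a library-independent sketch of such a counterbalancing rule (the helper name and the odd/even convention are invented for illustration):

```python
def task_order(subject_id):
    """Counterbalance the between-subjects factor Task Order:
    odd subject IDs run 'left = green' first, even IDs 'left = red' first."""
    if subject_id % 2 == 1:
        return ["left = green", "left = red"]
    return ["left = red", "left = green"]
```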
Event logging and data output
After the experiment has been ended correctly, two files will
be created: an event log file, with the ending '.xpe' in a
directory called 'events', containing an automatic history
of the experimental events (e.g., stimulus presentations and
device communication), and a data file, with the ending
'.xpd' in a directory called 'data', containing what
was manually saved in line 51 on a trial-by-trial basis.
Both directories are located in the same place as the
script holding the experimental code. All event logs and
the data files are named according to the experiment
name, followed by the subject ID and, if not otherwise
specified, a time stamp.
By default, the automatic event logging will contain a
detailed description of the experimental design (including a
full listing of all trials) as well as stimulus presentations and
the expected input/output (I/O) events (i.e., when explicitly
waiting or checking for keyboard or button box responses). It
is furthermore possible to activate extended event logging,
which will include even more detailed information—for
instance, screen operations (updating and clearing) or
the full stream of I/O events polled from the serial port.
In total, three levels of event logging can be set via the
defaults of the control package before initializing an
experiment:
Implementation
Expyriment is based on Python 2 (≥2.6) and several Python
libraries that implement low-level routines for timing-critical
communication with hardware components. PyOpenGL
(≥3.0; PyOpenGL, 2012) is used to present visual stimuli onto
the display in a timing-accurate manner. Pygame (≥1.9;
Shinners, 2012) is used for auditory stimulus presentation
and visual stimulus creation, as well as for interacting with
the computer’s keyboard, mouse, and game port. PySerial
(≥2.5) and PyParallel (≥0.2; PySerial, 2012) can be used to
interact with a serial and a parallel port, respectively.
Additionally, NumPy (≥1.6; Oliphant, 2006) is needed in
order to use the built-in data preprocessing functionality.
More detailed information about the installation procedure
can be found at the Expyriment website.
Software evaluation and timing
We empirically tested the timing accuracy of the visual and
auditory stimulus presentation, as well as serial port commu-
nication. All tests were performed under Windows XP SP3
(Microsoft Corp., Albuquerque, NM) installed on an HP
DC7900CMT personal computer with 4 GB internal working
memory (Hewlett Packard Co., Palo Alto, CA), equipped with
a Core2Duo processor E8400 (Intel Corp., Santa Clara, CA), a
Samsung SyncMaster 2233RZ display operating at 60 Hz
(Samsung Electronics Co., Ltd., Suwon, South Korea), a
Quadro NVS 290 video card (Nvidia, Santa Clara, CA), a
Soundblaster Audigy sound card (Creative Technology Ltd.,
Jurong East, Singapore), and a UART 16550A compatible
serial port (Intel Corp., Santa Clara, CA).
Visual stimulus presentation
For a precise timing of the presentation and duration of visual
stimuli, it is necessary to synchronize the stimulus on- and offsets
with the video hardware. If this is not done, the reported presen-
tation onsets and durations can be inaccurate in the range of
several milliseconds. On video cards implementing the OpenGL
specification in version 2.0 or higher with the addition of the
'GL_ARB_texture_non_power_of_two' extension
(in our experience, newer Nvidia and ATI cards work
well, whereas we experienced several problems with Intel
cards; more detailed information on hardware compatibility
can be found at the Expyriment website), Expyriment
supports the following mechanism to guarantee maximal
visual timing accuracy: If the video card allows, visual
stimulus presentation is synchronized to the refresh rate of
the display (i.e., the vertical retrace). This will make sure
that drawing to the display will always begin in the top
left corner. Importantly, code execution will be blocked
until this synchronization has actually occurred. This has
the important implication that the time at which
Expyriment reports a stimulus being presented is actually
the time that the stimulus is being drawn onto the display.
By definition, presentation durations will thus always be
exact multiples of one screen refresh.
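For instance, at the 60-Hz refresh rate of the test display described below, a nominal 500-ms presentation is quantized to a whole number of retraces:

```python
# presentation durations are multiples of one screen refresh
refresh_ms = 1000 / 60               # one retrace at 60 Hz, ~16.7 ms
frames = round(500 / refresh_ms)     # nearest whole number of refreshes
actual_ms = frames * refresh_ms      # duration actually achievable
```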
It should be noted that the mechanism described
above can be switched off by calling
expyriment.control.defaults.open_gl = False
before the experiment initialization. This will switch
off OpenGL mode and use the Pygame library to
present visual stimuli. Importantly, this results in pre-
sentations that are not synchronized with the refresh
rate of the screen, which thus increases the uncertainty
of when exactly a visual stimulus has been displayed.
However, unsynchronized presentation can be useful in
some cases, for instance in paradigms in which the
main focus lies on rapidly changing visual scenes, so
that screen updates should occur without any delay
(e.g., moving objects or studies with eye-contingent
displays). Furthermore, Expyriment will automatically switch
to using Pygame when running in window mode (by calling
expyriment.defaults.control.window
_
mode =True
before experiment initialization, or by working in develop
mode).
To test the timing accuracy of visual stimulus presentation in OpenGL mode, we repeatedly presented alternating preloaded black and white blank screens on the display as quickly as possible.
upper left corner were measured using an optical sensor
(photocell) connected to a Tektronix MSO 2012 oscillo-
scope (Tektronix, Beaverton, OR). Before each stimulus
presentation, a marker was sent to the oscilloscope via
the serial port. The results revealed that the onset and
offset of the white blank screen were aligned to the
markers sent via the serial port, showing that the time
that Expyriment reported the stimulus as being presented
corresponded correctly to the time that the video card
began drawing onto the display (Fig. 2a). Furthermore,
the spacing between the onsets of successive stimulus
presentations corresponded to about 17 ms, showing that
Expyriment is capable of presenting one preloaded stim-
ulus each screen refresh (Fig. 2b).
Importantly, whereas these test results provide an em-
pirical basis for the timing accuracy of visual stimulus
presentation, they are also specific to our particular configuration of system components (video hardware and driver). It is thus worth mentioning that any system's visual stimulus presentation performance should be tested by using the integrated Expyriment test suite. Users can thus get a clear picture of whether or not their specific systems are capable of accurate stimulus presentation.
Behav Res
Auditory stimulus presentation
When presenting auditory stimuli, timing accuracy is affected
by two phenomena: (1) The actual point in time at which the
audio stream is played back by the system can be delayed by
several milliseconds, depending on the audio hardware and
driver used, and (2) the amount of this delay might vary
between presentations. Although a static delay does not necessarily pose a problem for most experimental settings, since
all experimental conditions will be subject to the same time
lag, a large variability in this delay would be problematic, as
it would introduce differences between the experimental
conditions.
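For intuition, the contribution of the audio buffer itself to this delay is simple arithmetic (a back-of-the-envelope sketch, not an Expyriment routine; the figures in the comment refer to the settings used in our test below):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """Minimum playback latency imposed by the audio buffer alone."""
    return 1000.0 * buffer_samples / sample_rate_hz

# A buffer of 128 samples at 44,100 Hz accounts for roughly 2.9 ms; the
# remainder of any observed delay stems from the audio hardware and driver.
```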
We tested the timing accuracy of auditory stimuli by
presenting a sine wave of 440 Hz. The audio system was
set to play back with a sample rate of 44,100 Hz and a
bit depth of 16. The buffer size was set to 128 samples.
Audio performance was measured by a Tektronix MSO
2012 oscilloscope (Tektronix, Beaverton, OR) connected
to the output of the sound card. Before each stimulus
presentation, a marker was sent to the oscilloscope via
the serial port. We measured a minimal latency of 15 ms (Fig. 3a) and a maximal latency of 20 ms (Fig. 3b). These results show that the audio playback delay was relatively stable, varying by no more than 5 ms.
Serial port communication
Another important aspect of experimental timing concerns
measuring response times from a participant. Because
response times are usually reported in milliseconds, a
participant’s response should be measurable with a preci-
sion of up to 1 ms. We therefore strongly discourage the
use of a computer’s keyboard for this task, as both PS/2
and USB keyboards are not built for timing accuracy in
this range. Rather, we advise using response devices
connected to the serial port. The serial port can also be
utilized, for instance, to receive triggers from a magnetic
resonance imaging scanner or to send markers to an elec-
troencephalography system. In Expyriment, the serial port
can be accessed (“polled”) directly in order to allow for
maximal performance.
We tested the timing accuracy of serial port communica-
tion by repeatedly sending a single byte to a custom-made
Fig. 2 Timing accuracy of visual stimulus presentation: a A single presentation of a white blank screen. b Consecutive presentations of black and white blank screens. Yellow lines indicate markers sent via the serial port before and after presenting a stimulus, and cyan lines the display response, measured with an optical sensor
Fig. 3 Timing accuracy of auditory stimulus presentation: a Minimal
measured delay of audio playback. b Maximal measured delay of audio
playback. Yellow lines indicate markers sent via the serial port before and
after presenting a stimulus, and cyan lines the audio response, measured
directly from the line output of the audio interface
loop-back device (a device that immediately sends back
anything it receives) and recorded the time between sending
out the byte and receiving it again. We conducted this test
with baud rates of 115,200 and 19,200. Our results revealed
that after 1,000 repetitions, the maximal time between send-
ing and receiving was 0.28 ms with a baud rate of 115,200,
and 0.69 ms with a baud rate of 19,200, showing that both
sending and receiving serial port data were reliably possible
within 1 ms.
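These maxima are consistent with the theoretical transmission time of a single byte, which can be computed directly (a sketch assuming the common 8N1 framing of 1 start bit, 8 data bits, and 1 stop bit; not an Expyriment function):

```python
def byte_time_ms(baud_rate, bits_per_byte=10):
    """Time to transmit one byte, assuming 8N1 framing (10 bits per byte)."""
    return 1000.0 * bits_per_byte / baud_rate

# About 0.087 ms per byte at 115,200 baud and about 0.52 ms at 19,200 baud,
# which is why the measured round-trip times grow at the lower baud rate.
```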
Benchmark experiment
To test the performance of Expyriment (in default OpenGL
mode) in a more realistic setting that would be closer to actual
experimental paradigms (i.e., entailing a stimulus–response
loop), we developed a benchmark experiment with automated
responses to visual and auditory stimuli. The benchmark test
focused on the operating systems Windows and Linux, since the timing of visual presentations under OS X is less precisely controllable, due to the lack of a way to block code execution until the vertical retrace has
occurred. Although developing experiments is perfectly pos-
sible on OS X, we generally discourage the use of OS X for
testing participants.
Method
The experiment consisted of two parts, each containing
1,000 trials. In the first part, automated responses to a
white blank screen were recorded (cf. Mathôt et al.,
2012). Each trial started with the presentation of a black
blank screen for 100 ms, followed by the presentation
of a white blank screen. Responses were triggered by an
optical sensor attached to the upper left corner of the display, every time that the brightness exceeded a certain threshold. After the response was recorded, a new
trial started. In the second part, automated responses to
a single cycle of a beep tone with a frequency of
1,000 Hz were recorded by a custom-made response
device connected to the output of the a udio card, every
time that the sound level exceeded a certain threshold.
After the response was recorded, the next trial started
after a delay of 100 ms.
Results and conclusions
Table 1 lists the main results of the benchmark experiment. On
both Windows and Linux systems, the average response to a
visual stimulus presentation was reliably below 2 ms. The
observed di ffer ence in response times betw een Windows
and Linux might result from a difference in the points of time
at which OpenGL reports the vertical retrace (i.e., shortly
before the retrace on Linux, and shortly after the retrace on
Windows). Crucially, however, the measured timing accuracy
was very stable, as indicated by the low standard deviations.
The results show that Exp yriment is ve ry well-suited for
highly timing-critical visual presentations with millisecond
precision.
The average response to an auditory stimulus presentation
for both systems was below 20 ms and, most importantly, was
relatively stable. The difference in response times between the
two operating systems most probably resulted from differ-
ences in the audio hardware used. These results suggest that
Expyriment is well capable of handling paradigms that entail
auditory stimuli.
Differences from other Python libraries
Expyriment contributes to a pool of existing Python li-
braries with similar aims and functionality, such as
Table 1 Results of the benchmark experiment

            Visual stimulus presentation       Auditory stimulus presentation
System      Average RT (ms)    SD (ms)         Average RT (ms)    SD (ms)
Windows ᵃ   0.48               0.36            18.71              2.55
Linux ᵇ     1.69               0.30            12.46              1.48

Automated responses to visual and auditory stimulus presentation were recorded on Windows and Linux systems.
ᵃ Intel Core 2 Duo E8400, 4 GB RAM, Creative Sound Blaster Audigy, Nvidia Quadro NVS 290, Samsung SyncMaster 2233 (1,680 × 1,050, 60 Hz), Windows XP (SP3).
ᵇ Intel Core i5-2400, 4 GB RAM, Intel HD Audio, Nvidia GTX 650, Samsung SyncMaster 2233 (1,680 × 1,050, 60 Hz), Lubuntu Linux 12.10 (running LXDE with the Openbox window manager and no compositing)
PsychoPy (Peirce, 2007), OpenSesame (Mathôt et al.,
2012), or Vision Egg (Straw, 2008). Besides our belief
that any open-source contribution is a potentially welcome
addition, offering an alternative choice to the end user (cf.
Halchenko & Hanke, 2012), Expyriment differs from oth-
er libraries in some respects because of its different
approach.
First, Expyriment is meant to be strictly a programming
library and has no aims to become a graphical experiment
builder like OpenSesame (which, in fact, uses Expyriment as
the default internal back end for stimulus presentation) or a
hybrid solution like PsychoPy. Expyriment therefore provides
a relatively small and lightweight Python library for behav-
ioural and neuroimaging experiments, with only very few
software dependencies and no requirements to include any
system-specific compiled code. As a result, Expyriment en-
sures high performance together with a maximum in
portability.
A second important difference as compared with
previous Python-based experiment software is that
Expyriment can present stimuli without using OpenGL.
The user needs to be aware that this will likely result in increased variability in the timing of the stimulus presentations. However, a presentation mode without
OpenGL offers the interesting possibility of developing
experiments on computer systems that do not support
this type of graphics standard. Taken together,
Expyriment is especially appropriate for experiments
that are supposed to run on older or low-end systems
or on other types of computing hardware, such as tablet
PCs or low-cost embedded platforms such as the
Raspberry Pi.⁵ Furthermore, an Expyriment runtime for
Android is under development and is currently available
as a preview version. In any case, the Expyriment test
suite offers an easy way of checking the stimulus pre-
sentation and input recording accuracy on a specific
system.
Third, as mentioned above, Expyriment is also a tool
for the creation and manipulation of experimental de-
signs—even in the absence of any stimulus presentation
procedure or trial handling (as implemented, e.g., in
PsychoPy). Although this feature is useful (and has already been used) in isolation to teach the formalization of experimental designs to students, using it in
conjunction with the rest of the library provides an
intuitive way of transitioning the conceptualization of
an experimental design to its technical implementation.
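The underlying idea of such a design formalization, the full factorial crossing of factor levels into trial types, can be illustrated in plain Python (a generic sketch with hypothetical factor names, not the Expyriment API itself):

```python
from itertools import product

# Two hypothetical within-subjects factors, as in a Simon-type task
factors = {"Position": ["left", "right"],
           "Colour": ["red", "green"]}

# Every combination of factor levels yields one trial type
trials = [dict(zip(factors, levels)) for levels in product(*factors.values())]
```

This yields four trial types; Expyriment's design module builds on the same principle while additionally handling trial copies, shuffling, and between-subjects factors.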
Since Expyriment focuses on the accurate timing of
preloaded static stimuli, at this time it has shortcomings
in the real-time generation and presentation of complex
and dynamic visual stimuli. In these cases, we suggest
the use of, for i nsta nc e, Psyc hoPy, which provi des s ev-
eral excellent high-level routines for the development
of stimulus materials required, especially, for vision
research.
Summary
In the present article, we have presented Expyriment, a
lightweight Python library for designing and conducting
cognitive and neuroscientific experiments. By means
of an example experiment, we demonstrated how
Expyriment assists the researcher: first, by producing readable source code that can be easily shared with and understood by other researchers or students; and second, by being centered around the experimental design rather than stimulus presentation, which allows the researcher to conceptualize the experiment in a familiar form and makes for an easy transition from the conceptual level of an experiment to its concrete implementation in programming code, setting Expyriment apart from previous Python libraries. Both of these
aspects, together with the fact that Expyriment is freely
available and runs not only on Windows and Mac OS,
but also on Linux (of which many distributions are free
as well), make it a very suita ble tool for teaching.
Furthermore, we demonstrated that Expyriment is capa-
ble of delivering millisecond precisi on for presenting
visual stimuli and communicating with external devices.
Due to its modular approach, Expyriment can also be
used in conjunction with other Python libraries and can
be easily extended using a unified plugin system. Taken
together, Expyriment provides an easy, efficient, and
flexible way to design and conduct timing-critical be-
havioral and neuroimaging experiments for researchers
and students alike, independent of the choice of operat-
ing system.
Acknowledgments We thank Pascal de Water for great technical support, Sebastiaan Mathôt for his code evaluation and for choosing Expyriment as the default back end of OpenSesame 0.27, as well as dozens of students and colleagues for using old preliminary versions of Expyriment for their studies. Without their feedback, Expyriment would not have reached the level of a stable and reliable experiment programming library.
⁵ www.raspberrypi.org
Appendix
from expyriment import design, control, stimuli, io, misc


# Create and initialize an experiment
exp = design.Experiment("Simon Task")
control.initialize(exp)

# Create the design
for task in ["left=green", "left=red"]:
    b = design.Block()
    b.set_factor("Task", task)
    for pos in [["left", -300], ["right", 300]]:
        for col in [["red", misc.constants.C_RED],
                    ["green", misc.constants.C_GREEN]]:
            t = design.Trial()
            t.set_factor("Position", pos[0])
            t.set_factor("Colour", col[0])
            rect = stimuli.Rectangle(size=[50, 50], position=[pos[1], 0],
                                     colour=col[1])
            t.add_stimulus(rect)
            b.add_trial(t, copies=32)
    b.shuffle_trials()
    exp.add_block(b)
exp.add_bws_factor("TaskOrder", ["left=green first", "left=red first"])

# Create and preload global stimuli
blankscreen = stimuli.BlankScreen()
blankscreen.preload()
fixcross = stimuli.FixCross()
fixcross.preload()

# Start the experiment
control.start()
exp.data_variable_names = ["Position", "Key", "RT"]
if exp.get_permuted_bws_factor_condition("TaskOrder") == "left=red first":
    exp.swap_blocks(0, 1)

# Run trials
for block in exp.blocks:
    stimuli.TextScreen("Instructions", block.get_factor("Task")).present()
    exp.keyboard.wait()
    blankscreen.present()
    for trial in block.trials:
        exp.clock.wait(3000 - trial.preload_stimuli())
        fixcross.present()
        exp.clock.wait(500)
        trial.stimuli[0].present()  # Target stimulus
        key, rt = exp.keyboard.wait([misc.constants.K_LEFT,
                                     misc.constants.K_RIGHT])
        blankscreen.present()
        exp.data.add([trial.get_factor("Position"), key, rt])
        trial.unload_stimuli()

# End the experiment
control.end(goodbye_text="Thank you for participating!")

Listing 1 Programming code for a response time experiment to assess a spatial stimulus–response compatibility effect (the Simon effect; Hommel, 1993)
References
Bassi, S. (2007). A primer on Python for life science researchers. PLoS Computational Biology, 3, e199.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436. doi:10.1163/156856897X00357
Free Software Foundation. (2007). GNU General Public Licence. Retrieved from www.gnu.org/copyleft/gpl.html
Halchenko, Y. O., & Hanke, M. (2012). Open is not enough. Let's take the next step: An integrated, community-driven computing platform for neuroscience. Frontiers in Neuroinformatics, 6, 22. doi:10.3389/fninf.2012.00022
Hanke, M., & Halchenko, Y. O. (2011). Neuroscience runs on GNU/Linux. Frontiers in Neuroinformatics, 5, 8. doi:10.3389/fninf.2011.00008
Hommel, B. (1993). Inverting the Simon effect by intention: Determinants of direction and extent of effects of irrelevant spatial information. Psychological Research, 55, 270–279. doi:10.1007/BF00419687
Jones, E., Oliphant, T. E., Peterson, P., et al. (2001). SciPy: Open source scientific tools for Python [Computer software]. Retrieved from www.scipy.org
Mathôt, S., Schreij, D., & Theeuwes, J. (2012). OpenSesame: An open-source, graphical experiment builder for the social sciences. Behavior Research Methods, 44, 314–324. doi:10.3758/s13428-011-0168-7
Oliphant, T. E. (2006). Guide to NumPy. Trelgol Publishing.
Peirce, J. W. (2007). PsychoPy: Psychophysics software in Python. Journal of Neuroscience Methods, 162, 8–13.
PyOpenGL. (2012). [Computer software]. Retrieved from http://pyopengl.sourceforge.net
PySerial. (2012). [Computer software]. Retrieved from http://pyserial.sourceforge.net
R Development Core Team. (2012). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from www.R-project.org
Shinners, P. (2012). Pygame [Computer software]. Retrieved from www.pygame.org
Stahl, C. (2006). Software for generating psychological experiments. Experimental Psychology, 53, 218–232.
Straw, A. D. (2008). Vision Egg: An open-source library for realtime visual stimulus generation. Frontiers in Neuroinformatics, 2(4), 1–10. doi:10.3389/neuro.11.004.2008
Van Rossum, G., & Drake, F. L. (2011). Python language reference manual. Bristol, UK: Network Theory Ltd.