Innovative Methodology
CALL FOR PAPERS Active Sensing
Air-Track: a real-world floating environment for active sensing in head-fixed
mice
Mostafa A. Nashaat,1,2 Hatem Oraby,1 Robert N. S. Sachdev,1 York Winter,1 and Matthew E. Larkum1
1Neurocure Cluster of Excellence, Humboldt-Universität zu Berlin, Berlin, Germany; and 2Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
Submitted 1 February 2016; accepted in final form 1 July 2016
Nashaat MA, Oraby H, Sachdev RN, Winter Y, Larkum ME.
Air-Track: a real-world floating environment for active sensing in
head-fixed mice. J Neurophysiol 116: 1542–1553, 2016. First pub-
lished July 13, 2016; doi:10.1152/jn.00088.2016.—Natural behavior
occurs in multiple sensory and motor modalities and in particular is
dependent on sensory feedback that constantly adjusts behavior. To
investigate the underlying neuronal correlates of natural behavior, it is
useful to have access to state-of-the-art recording equipment (e.g.,
2-photon imaging, patch recordings, etc.) that frequently requires head
fixation. This limitation has been addressed with various approaches
such as virtual reality/air ball or treadmill systems. However, achiev-
ing multimodal realistic behavior in these systems can be challenging.
These systems are often also complex and expensive to implement.
Here we present “Air-Track,” an easy-to-build head-fixed behavioral
environment that requires only minimal computational processing.
The Air-Track is a lightweight physical maze floating on an air table
that has all the properties of the “real” world, including multiple
sensory modalities tightly coupled to motor actions. To test this
system, we trained mice in Go/No-Go and two-alternative forced
choice tasks in a plus maze. Mice chose lanes and discriminated
apertures or textures by moving the Air-Track back and forth and
rotating it around themselves. Mice rapidly adapted to moving the
track and used visual, auditory, and tactile cues to guide them in
performing the tasks. A custom-controlled camera system monitored
animal location and generated data that could be used to calculate
reaction times in the visual and somatosensory discrimination tasks.
We conclude that the Air-Track system is ideal for eliciting natural
behavior in concert with virtually any system for monitoring or
manipulating brain activity.
virtual reality; psychophysics; multisensory perception; head-fixed
behavior
NEW & NOTEWORTHY
This paper provides a description of a multimodal head-
fixed behavioral environment for rodents based on an
air-lifted platform design. The environment is compact,
cheap to build, and can be implemented on most recording
setups. The design allows for complex behavioral para-
digms to be combined with modern, sophisticated recording techniques such as two-photon imaging. This system
provides an alternative to virtual reality-based head-fixed
systems.
NATURAL BEHAVIORS ARE COMPLEX and occur in multiple sensory-
motor dimensions simultaneously. For example, a conversation
between people or even a simple stroll down a corridor engages
visual, auditory, tactile, kinesthetic, proprioceptive, and olfac-
tory senses. Accurate perception of these cues requires tight
and reliable coupling to the motor system (Kleinfeld et al.
2006; Zagha et al. 2013). Mismatches between actions and
sensory feedback profoundly disturb natural behavior (e.g.,
missing the “last step” while climbing stairs) (Keller et al.
2012). While natural behavior is usually performed during
multimodal sensation/perception, the behaviors typically stud-
ied in laboratory settings are impoverished and often con-
strained to a single sensory or motor modality (Crochet and
Petersen 2006). Advances in our understanding of the impor-
tance of natural feedback-controlled behavior have triggered a
gradual but fundamental shift toward multimodal multidimen-
sional behavioral approaches.
Rodents are an ideal choice for neuroscientists interested in
precise and invasive recording methodologies, since they are
relatively easy to train on simple tasks. Many of the experi-
mental methods, however, require the head of the animal to
remain stationary during the experiment. The early head-fixed
rodent behavioral approaches were therefore based on reduced
systems with only a single sensory modality, such as whisker
movement (Bermejo et al. 1996; Crochet and Petersen 2006;
Hentschke et al. 2006; Krupa et al. 2004; Sachdev et al. 2002;
Welsh et al. 1995). The shift towards more complex natural-
istic multimodal behavior began with awake mice head fixed
atop an air ball or a treadmill (Dombeck et al. 2009; Harvey et
al. 2009; Poort et al. 2015). Importantly, various studies have
reported that cortical responses are different when the animal is
engaged in multidimensional behavioral tasks (Dombeck et al.
2010; Harvey et al. 2012; Lenschow and Brecht 2015; Musall
et al. 2014; Poort et al. 2015; Sofroniew et al. 2015). Just the
act of walking on a treadmill changes visual and auditory
responses (McGinley et al. 2015; Niell and Stryker 2010;
Polack et al. 2013; Reimer et al. 2014; Saleem et al. 2013;
Schneider et al. 2014; Sofroniew et al. 2015).
One advantage of the air ball and treadmill methods is that
the movement of the mouse can be tracked and be used to
control a virtual environment (Harvey et al. 2009; Holscher
et al. 2005). For accurate correspondence between the mouse’s
movement and the virtual world, this environment is best
represented using visual information, usually in the form of 2D
monitors in the visual field of the head-fixed animal. More
recently, an equivalent somatosensory approach has been im-
plemented (Sofroniew et al. 2014). This seemingly simple
change in modality from visual to somatosensory presents a
major advantage in creating a more natural tactile representation than purely visual virtual reality. However, it brings various problems in accu-
rately matching the virtual representation to ordinary real-
world situations, such as the representation of corners and
optical/tactile flow. To be effective, virtual environments also
require sophisticated software for tracking the animal and
mapping these movements to the virtual world. This presents
further difficulties estimating the perceptual experience of the
animal, which is not necessarily intuitive for humans designing
the mapping interface. For instance, place tuning of hippocam-
pal neurons in rodents is different when they are in a virtual
world as opposed to a real world (Acharya et al. 2016; Aghajan
et al. 2015). In practice, the qualitative perceptual experience
of rodents in virtual reality systems is probably impossible to
match with actual real-world experiences and, in any case, it is
impossible to definitively demonstrate a correspondence be-
tween the subjective experience of the animal and the real
world.
Here, we present an alternative behavioral system as one
solution to the problems of virtual reality approaches while
retaining all the benefits of head-fixed experiments. Our system
uses a real physical environment that rests on a cushion of air
and moves around the animal’s body under the direct control of
the animal itself. The system, “Air-Track,” is based on the
airlifted flat platform described by Kislin and colleagues (Kis-
lin et al. 2014). We successfully used Air-Track to train
head-fixed mice to perform a spatial orientation and multi-
modal discrimination task within only two weeks.
METHODS
Experiments were performed with approval of the local state
authority in Berlin (LAGeSo) that is advised by the animal use ethics
committee.
Air-Track components. The Air-Track consisted of three essential
custom-made components: 1) an air table that provided the air cush-
ion; 2) a platform constructed of lightweight material for floating on
air that included the custom-designed maze with walls (Fig. 1, A and
B); and 3) a microcontroller system for tracking the position of the
platform and controlling reward delivery (Figs. 1C and 2).
The air table. Our solution used a transparent Plexiglas box, 20 × 24 × 3 cm, mounted on aluminum legs, forming a small table. The table had one intake port that was pressurized with air at 300 kPa (45 psi) and small (1 mm) holes, spaced 8 mm apart, providing jets of pressurized air. The working surface of the table was 16 × 20 cm.
The platform. The circular platform (15 cm in diameter) was 3D
printed. The base of the platform was 3 mm thick, and this was
sufficient to hold the platform floating steadily on a cushion of air. The
platform attached atop the base was shaped as a plus maze, with four
lanes, each 5 cm long, 3 cm wide, and 3.5 cm in height, and weighed
180 grams. In our design, the walls and terminal aperture of each lane could be modified: the walls were either smooth or etched with gratings of 2-mm spatial period, and the terminal aperture was either 1 or 2 cm wide. Just above the platform, and outside the rotary axis of the maze, a white light-emitting diode (LED) attached to a holder was used as a visual stimulus for choosing lanes. The LED was maintained in a constant state of either on or off based on the animal's location in the maze. Similarly, a linear actuator used for positioning the lick spouts was placed 3 cm from the nose of the animal. The linear actuator advanced the dual lick spouts, connected to capacitive sensors, to the animal. When the animal arrived at the reward location in the maze, it could either passively obtain a reward irrespective of spout choice or perform a two-alternative forced choice (2AFC) between the two lick spouts.

Fig. 1. Air-Track setup enables closed-loop monitoring of behavior. Schematic of the Air-Track design, side view (A) and top view (B). The plus maze sits atop the Plexiglas air table; shown are the light-emitting diode (LED) for indicating correct lanes (on the left), the actuator for positioning the lick spouts (in black and red, also on the left), and the head post attachment (in gray, on the right). C: schematic of the Pixy camera and the red/green color tag under the Air-Track platform for position and orientation tracking. The camera has a 20-ms sampling resolution and uses color information from the maze color tag to determine the position of the animal in the maze.
Monitoring system. We chose an off-the-shelf approach to video
tracking using a Pixy camera (CMUcam5 Image Sensor) placed below
the air table where it could detect the rotation and position of a
two-color mark glued to the bottom surface of the plus maze (Fig.
1C). This camera is unique in its ability to track colors because it
processes color information on board in real time and reports the motion of the colored objects in x, y, and tilt angle at 50 frames/s. Therefore, when the Air-Track platform was moved, the position and orientation of the mouse within the maze were updated every 20 ms. The output was streamed to an Arduino-Uno microcontroller that processed the Pixy camera input for animal location and also controlled the LED, the actuator that positioned the lick spouts, and the reward delivery solenoids (Fig. 2). As the mouse rotated the maze, the camera detected its position and set the LED status to on/off based on the trial offset. The LED status for each lane was defined by a range of angles (i.e., a quarter-circle sector per lane) to be compared with the real-time data from the camera. Within a given trial, the LED was turned off only while the mouse faced the rewarding lane (Figs. 1C and 3).

Fig. 2. Schematic of the Air-Track circuit. Top left: the Arduino-Uno is connected through digital outputs to a sound cue buzzer, the LED, and an H-bridge (L293D) that controls the linear reward actuator and two solenoids driven by a 12-V direct current (DC) power supply. The Pixy camera was connected to the Arduino via an in-circuit serial programming port. The lick detectors (MPR121 module) were connected to the Arduino via two analog and one digital inputs and powered with 3.3 V.
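To make the tracking logic concrete, the following Arduino sketch illustrates how the tag angle reported by the Pixy camera can be mapped to quarter-circle lane sectors and used to gate the lane LED. This is a minimal sketch for illustration, not the published control code (available at http://www.neuroairtrack.com/); the pin number and sector boundaries are assumptions.

```cpp
#include <SPI.h>
#include <Pixy.h>  // CMUcam5 Pixy Arduino library (SPI)

Pixy pixy;
const int LED_PIN = 4;   // assumed pin for the lane-cue LED
int rewardedLane = 0;    // lane assigned for the current trial

// Map the tilt angle of the two-color tag (degrees) to one of four
// quarter-circle sectors, i.e., one lane per 90-degree range.
int laneFromAngle(int angleDeg) {
  int a = ((angleDeg % 360) + 360) % 360;  // normalize to 0-359
  return a / 90;                           // lane index 0-3
}

void setup() {
  pinMode(LED_PIN, OUTPUT);
  pixy.init();
}

void loop() {
  uint16_t n = pixy.getBlocks();  // poll tracked color-code blocks
  if (n > 0) {
    int lane = laneFromAngle(pixy.blocks[0].angle);
    // LED is off only while the mouse faces the rewarded lane.
    digitalWrite(LED_PIN, lane == rewardedLane ? LOW : HIGH);
  }
  delay(20);  // ~50 Hz, matching the camera's 20-ms update interval
}
```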
Surgery. In preparation for head fixation, adult 45-day-old mice
(C57BL/6) were anesthetized with ketamine-xylazine (90:10 mg/kg).
A lightweight aluminum head post was attached using dental cement
or Rely-X (3M ESPE, London, Ontario, Canada) (Andermann et al.
2013). Animals were monitored during recovery and were given
antibiotic (enrofloxacin) and analgesics (buprenorphine and carprofen)
during the recovery period.
Fig. 3. Flow chart of behavioral decisions in the Air-Track system. The Pixy camera keeps track of the color codes underneath the platform to report mouse
position in relation to the platform. This information controls, accordingly, assigning a new lane, moving the linear reward actuator, the timing of visual and
auditory cues, and reward delivery.
Training paradigm. One week after surgery, mice were acclimatized
to the Air-Track in incremental 20- to 120-min-long sessions (Fig. 4).
They were placed in the plus maze, head-fixed, and periodically given 1
ml/day of sugar-water (10% condensed milk with original sucrose con-
centration 45%) as a reward (Schwarz et al. 2010).
In the first one to two sessions (days) of training, we found it
helpful if the experimenter was actively engaged with the platform
and the animal by gently nudging the platform, getting the animal
habituated to the motion of the maze. During this active exploration
phase, mice learned to rotate the maze and to orient in the lanes going
forward and backward. In these sessions, a reward was given auto-
matically when the animal had reached the end of any lane irrespec-
tive of any task parameters.
In the third to fifth sessions, mice were acclimatized enough to
propel the maze, and rotate it, without any need for experimenter
intervention. During this active visual training phase, mice learned to
choose lanes based on the presence or absence of a white LED/visual
stimulus and collect the reward from a single lick of the reward spout.
By the sixth to eighth session, the training paradigm shifted to a
two-choice task, where visual stimuli still determined the correct lane,
but the reward was only delivered from one of two lick spouts. During
this stage of passive tactile training, mice were given rewards auto-
matically, without initially licking, at one of the two lick spouts based
on the texture of the wall, so that they passively learned to associate each texture with one (left or right) of the two lick spouts.
From this point onward, mice were trained for four to six sessions
to discriminate either wall texture or aperture width. In this active
tactile training phase, the mouse still had to choose a correct lane and
actively discriminate either between two types of aperture width or
wall textures. To obtain a reward, mice had to initiate licking the
correct lick spout, and the decision was determined by the first lick.
Behavioral training. Animals were water restricted (body weight
stabilized at 85% of initial weight) and conditioned to orient within the
floating plus maze. The orientation of the lick spout was optimized for
each mouse, in each session, to ensure that the mouse was positioned at
an appropriate distance (3 cm) from the lick spout. Animals were
trained in near-total darkness, with a white light source directed under the
nontransparent black platform for the Pixy camera. The light was suffi-
cient for the Pixy camera beneath the platform to track the position but
was not visible above the platform [Supplemental Video 1 (Supplemental
data for this article is available on the journal website.)].
Each complete trial can be divided into four temporally distinct
stages. The trial began with the mouse at the center of the plus maze
(Fig. 5, Aand B). In this phase of the trial, the mouse had to pay
attention to the LED status to choose the correct lane. To find the
correct lane, mice rotated the maze, clockwise or counterclockwise.
Over days, mice developed an individual preference for the direction
of platform rotation (Supplemental Video 2).
The second stage was the choice of the correct lane by the mouse.
This choice was reflected in the speed of rotation and in the position-
ing of the head of the mouse with respect to the walls of the lane
(Supplemental Video 3).
The third stage was the entry into the lane. As the mouse moved
forward, its whiskers touched the lane walls and then the terminal aperture as it approached the end of the lane. At this stage,
the linear actuator moved the two lick spouts in front of the mouse.
The fourth stage was the decision to lick the left or right spout.
Animals were conditioned to lick right or left based on the type of tactile
cue presented. In the case of aperture width, mice were trained to lick
right on experiencing the wide aperture (2 cm) and left on experiencing the narrow aperture (1 cm). In the case of texture discrimination, mice were trained to lick right on experiencing the rough texture and left on experiencing the smooth texture (Supplemental Video 4). After licking the spout and obtaining a reward, mice moved backward in the lane, arriving at the center of the maze, where a new trial could begin.

Fig. 4. Training paradigm. Duration of the experimental phases: head fixation, habituation, and behavioral training over 2 wk. During active exploration (pale green, sessions 1 and 2), the experimenter supervised the mouse while it moved the plus maze and collected rewards at each lane. During active visual training (red, sessions 3 and 4), 6 mice were trained to discriminate a visual cue until performance reached 70%. During passive tactile training (violet, 1–3 sessions), mice were trained to navigate dark lanes and obtain rewards from a lick spout. Three mice were trained to obtain rewards corresponding to lane aperture width (1 session) in a two-choice task, whereas the other 3 mice obtained rewards based on lane texture (3 sessions) in a delayed two-choice task. Finally, training advanced to combined active visual and active tactile training (dark green, 4–6 days), with mice trained on a sequence of visual cue discrimination (Go/No-Go) followed by tactile discrimination of either aperture width or wall texture (2AFC) within the time line of one trial.
Bias correction. Mice performing the 2AFC discrimination task
typically developed a bias where they preferred one spout location
over the other. To eliminate such a bias, our behavioral control
software switched to a forced mode whenever mice licked the same
spout for five consecutive trials (Guo et al. 2014). In this forced mode,
the software dictated particular lane selection: only lanes with tactile
cues related to the animal’s nonpreferred spout were selected. The
mouse stayed in the forced mode until it licked the correct spout for
three consecutive trials. Once the animal showed that it correctly
licked the nonpreferred port for three trials in a row, the forced mode
was terminated, and the animal was switched back to randomized
uniform selection of lanes.
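The forced-mode rule can be expressed as a small state machine. The sketch below illustrates the logic described above, using the thresholds from the text (5 consecutive same-spout licks to enter forced mode, 3 consecutive correct licks to leave it); the function and variable names are hypothetical, not taken from the published code.

```cpp
enum Spout { LEFT = 0, RIGHT = 1 };

bool forcedMode = false;       // true while bias correction is active
int sameSpoutStreak = 0;       // consecutive licks on the same spout
int forcedCorrectStreak = 0;   // consecutive correct licks in forced mode
Spout lastSpout = LEFT;
Spout preferredSpout = LEFT;   // the spout the mouse is biased toward

// Called once per completed trial with the chosen spout and outcome.
void updateBiasCorrection(Spout chosen, bool correct) {
  if (!forcedMode) {
    sameSpoutStreak = (chosen == lastSpout) ? sameSpoutStreak + 1 : 1;
    lastSpout = chosen;
    if (sameSpoutStreak >= 5) {   // persistent bias detected
      forcedMode = true;
      preferredSpout = chosen;
      forcedCorrectStreak = 0;
    }
  } else {
    // In forced mode, only lanes whose tactile cue maps to the
    // nonpreferred spout are offered (lane selection not shown).
    if (correct && chosen != preferredSpout) {
      if (++forcedCorrectStreak >= 3) {  // bias broken
        forcedMode = false;              // back to randomized lanes
        sameSpoutStreak = 0;
      }
    } else {
      forcedCorrectStreak = 0;
    }
  }
}
```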
Fig. 5. Task design and metrics in the course
of the visual and somatosensory tasks. A and
B: schematic of task design for aperture
width and wall texture discrimination tasks,
respectively, showing that the mouse first
rotates the plus maze (1), and watches the
LED light cue, which turns off (2) when the
mouse reaches the correct lane and enters it
(3). The final step is the discrimination of the
aperture (green) or textures, with the deci-
sion reported at the end of the lane where the
mouse licks one of the two spouts (4). In the
case of texture discrimination, a 1.5-s delay is imposed between the time the mouse reaches the end of the lane and the time it can lick a spout. C: histo-
gram of intertrial intervals in 6 sessions of
active training for a single mouse. Because
the task was designed to be self-initiated,
there was no fixed intertrial interval. Mice
could wait 30 s or more before initiat-
ing a new trial. D: average time spent by a
single mouse in each lane from entering the
lane to making a decision. The mouse spent
2 s in each of the four lanes before obtaining a reward. E: time spent by a single mouse
rotating to reach the correct lane. Time spent
rotating increased with increasing the num-
ber of travelled lanes. Note that mice could traverse clockwise or counterclockwise, and, as they traversed additional lanes, they could take more than 7 s to find the correct lane.
F: average durations of behavioral events for
3 mice. Trials are sorted according to the no.
of lanes mice rotated around themselves
[blue, one lane (n = 1,550 trials); red, two lanes (n = 361 trials); green, three lanes (n = 350 trials); and purple, four lanes (n = 20 trials)]. The time points picked for these
analyses are 1) rotation time, which in-
creased with the number of lanes mice ro-
tated; 2) visual reaction time to enter a cor-
rect lane, which started when the light turned
off and ended when the mouse crossed the
lane boundary; 3) time spent inside the lane,
which began when the mouse entered the
lane and ended when the mouse licked the
reward spout; 4) tactile reaction time to lick
the right or left spout; and 5) intertrial inter-
val between trials determined by the moti-
vation of the mouse. When mice spent more
time rotating past additional lanes, they
spent less time before starting a new trial.
Setup electronics and software design. The setup consisted of: 1) an
Arduino-Uno microcontroller (www.arduino.cc) controlled and col-
lected data from the Air-Track system; 2) a Pixy (CMUcam5) camera
designed by Carnegie Mellon University and Charmed Labs
(Charmed Labs, Austin, TX) was used to track the platform location
and orientation; 3) a 50-mm linear actuator with position feedback
(model L16 50N 35:1 12V; Firgelli Technologies) was used to ad-
vance the lick spouts to the mouse; 4) a capacitive touch sensor
module (MPR121; Freescale Semiconductors) was used to detect
licking; 5) an active buzzer module (KY-012; KEYES DIY) was used
for false alarm cues, in the case of incorrect licks; 6) two solenoid
pinch valves were used to release sugar water from reward tubes
(2-way normally closed pinch valves; Bio-Chem Fluidics); and 7) data
from the Arduino were sent via a serial connection to a Python
application running on a personal computer for logging and analysis
(Fig. 2).
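To indicate how these components might be tied together on the microcontroller, a pin-mapping sketch is shown below. All pin assignments are hypothetical placeholders for illustration; the actual assignments are in the published code (http://www.neuroairtrack.com/).

```cpp
// Hypothetical pin map for the components listed above.
const int LED_PIN      = 4;   // lane-cue LED
const int BUZZER_PIN   = 5;   // KY-012 active buzzer (false-alarm cue)
const int ACTUATOR_FWD = 6;   // H-bridge (L293D) inputs driving the
const int ACTUATOR_REV = 7;   //   Firgelli L16 linear actuator
const int SOLENOID_L   = 8;   // left reward pinch valve
const int SOLENOID_R   = 9;   // right reward pinch valve
const int LICK_L       = A0;  // lick-detector lines, read as analog
const int LICK_R       = A1;  //   inputs per the wiring described above
const int ACTUATOR_POS = A2;  // actuator position-feedback line

void setup() {
  pinMode(LED_PIN, OUTPUT);
  pinMode(BUZZER_PIN, OUTPUT);
  pinMode(ACTUATOR_FWD, OUTPUT);
  pinMode(ACTUATOR_REV, OUTPUT);
  pinMode(SOLENOID_L, OUTPUT);
  pinMode(SOLENOID_R, OUTPUT);
  Serial.begin(115200);  // event stream to the Python logging application
}

void loop() {
  // Main control loop: poll the camera, update LED/actuator, log events.
}
```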
Arduino code configurations. 1) Each of the four lanes of the maze
was defined by the angular orientation as determined from the angle
of the rectangular red and green color label glued underneath the
animal platform; 2) the beginning and the end of lanes were defined
by two coordinates that were fixed and used to determine whether the
head of the mouse was entering the lane or had reached the end of the
lane; and 3) by associating particular platform orientation coordinates with a particular capacitive sensor and reward solenoid, the right and left lick spouts were defined in the code to assign a reward spout for each lane (Fig. 3).
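A sketch of how configuration items 2 and 3 might look in code is given below; the coordinate thresholds, direction conventions, and structure are illustrative assumptions (in Pixy image coordinates), not the published values.

```cpp
enum MousePosition { CENTER, IN_LANE, END_OF_LANE };

// Two fixed coordinates define the lane boundary and the lane end
// (assumed values in Pixy image pixels; calibrated per setup).
const int LANE_BOUNDARY_Y = 120;
const int LANE_END_Y      = 180;

// Classify the head position from the tracked tag coordinate.
MousePosition classifyPosition(int tagY) {
  if (tagY < LANE_BOUNDARY_Y) return CENTER;   // mouse at the maze center
  if (tagY < LANE_END_Y)      return IN_LANE;  // crossed the lane boundary
  return END_OF_LANE;                          // advance the lick spouts here
}

// Item 3: each lane is paired with the sensor/solenoid that rewards it.
struct LaneConfig {
  int correctLickChannel;  // capacitive channel counted as correct
  int rewardSolenoidPin;   // valve opened on a correct lick
};
LaneConfig lanes[4];       // filled in for the four lanes of the plus maze
```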
Behavioral measures from Arduino output and data analysis. An
Arduino microcontroller polled inputs from the Pixy camera and
licking sensors while controlling solenoid valves, reward actuator,
LED, and piezo buzzer. The solenoid opening time was set to 150 ms,
which generated a 10-μl reward. The decision time window was set to 3 s
from the time the reward actuator was maximally extended until its
retraction if no lick event occurred. The maximum travel distance of
the motor was 5 cm (adding flexibility to the hardware design). The
maximum travel distance used in the experiment was 3 cm and took
1.5 s on average (reward access time, the time from the onset of
licking the correct sensor until the retraction of the lick spouts was
2.5 s; false-alarm cue, a buzzing sound was delivered for 1 s after the
mouse licked the wrong spout). Using values collected from the Pixy
camera about platform movement, we set a fixed value in our code to
report the status of the animal’s location relative to the platform (i.e.,
lane range, lane boundary, and end of the lane). Data were collected
about the mouse’s location, lane choice, and licking decision from the
Arduino serial port. The following definitions were used: a trial, started when the mouse exited a lane (crossing a lane boundary) and ended when the mouse licked a spout or withdrew from the lane;
intertrial interval, time between the end of one trial when the mouse
reported a decision by licking a spout and the time to move out of the
lane, i.e., crossing the lane boundary; correct visual trial, mouse chose
to enter the dark lane (LED off). This counted as a correct visual trial
whether or not the animal performed the somatosensory task; false
visual trial, mouse chose to enter the lit lane (LED on), even if it
withdrew instantly afterward; correct somatosensory trial, mouse in
correct lane (LED off), licked the correct (rewarding) spout deter-
mined by lane aperture width or texture; wrong somatosensory trial,
mouse in correct lane (LED off), licked the wrong (nonrewarding)
spout; miss somatosensory trial, mouse in correct lane (LED off), did
not lick inside the decision time window (3 s); training session
termination, when animals exhibited signs of satiation, stopped per-
forming, or exhibited a sharp decrease in behavioral performance over
several trials (excluded from data analysis).
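The timing parameters above translate directly into control code. The following sketch illustrates the reward and decision-window logic under stated assumptions: readLick() is a hypothetical stand-in for polling the capacitive sensors, the pins and threshold are placeholders, and pin modes are assumed to be configured in setup().

```cpp
const unsigned long VALVE_OPEN_MS = 150;   // 150-ms opening, ~10-μl reward
const unsigned long DECISION_MS   = 3000;  // window after spout extension
const unsigned long BUZZ_MS       = 1000;  // false-alarm cue duration

const int LICK_L = A0, LICK_R = A1;        // assumed lick-sensor lines
const int LICK_THRESHOLD = 512;            // assumed touch threshold

// Poll the lick lines: -1 = no lick, 0 = left, 1 = right.
int readLick() {
  if (analogRead(LICK_L) > LICK_THRESHOLD) return 0;
  if (analogRead(LICK_R) > LICK_THRESHOLD) return 1;
  return -1;
}

void deliverReward(int solenoidPin) {
  digitalWrite(solenoidPin, HIGH);
  delay(VALVE_OPEN_MS);                    // valve open for 150 ms
  digitalWrite(solenoidPin, LOW);
}

// Returns 1 for a correct trial, 0 for a wrong lick, -1 for a miss.
int runDecisionWindow(int correctLick, int rewardSolenoidPin, int buzzerPin) {
  unsigned long start = millis();
  while (millis() - start < DECISION_MS) {
    int lick = readLick();
    if (lick < 0) continue;
    if (lick == correctLick) {
      deliverReward(rewardSolenoidPin);    // correct somatosensory trial
      return 1;
    }
    digitalWrite(buzzerPin, HIGH);         // 1-s false-alarm buzz
    delay(BUZZ_MS);
    digitalWrite(buzzerPin, LOW);
    return 0;                              // wrong somatosensory trial
  }
  return -1;                               // miss: no lick within 3 s
}
```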
RESULTS
We designed and built an Air-Track system with a flat
platform that floated on air, equivalent to most air-ball systems
(Fig. 1; see METHODS). A physical maze was 3D printed on a
platform (Fig. 1, A and B). The position of the floating platform
was tracked with a video tracking system (Fig. 1C) and
logged via an Arduino-Uno microcontroller (Fig. 2) that also
controlled the reward and cues in a closed-loop system (Fig.
3). It took about 2 wk to complete the entire training para-
digm, training individual mice to reach 70% correct perfor-
mance in the visual Go/No-Go task and 70% correct perfor-
mance in a two-choice tactile task (either aperture width or
texture discrimination task) (Fig. 4).
For the purpose of testing the system, we chose a “plus”
maze with four lanes. We tested whether mice could navigate
the maze and perform visual and tactile discrimination in
Go/No-Go and 2AFC tasks. In this environment, mice were
trained to choose dark lanes (LED off) and to avoid lit lanes
(LED on). Once mice entered a lane, their whiskers invariably
touched the walls, following which the mice proceeded to the
end of the lane until they got a reward. Mice learned to
discriminate between different aperture widths (Fig. 5A) or
wall textures and decided between one of the two lick spouts
(Fig. 5).
To minimize stress for the animal, the task was self-initiated.
Trials began when the mouse entered the center of the plat-
form. Figure 5 shows typical sensory-motor behavior on the
Air-Track platform. Data from a single “typical” animal show
that the mean duration of the intertrial interval across 1,000 trials was 11.5 s (Fig. 5C). Mice typically spent
equal amounts of time in the different arms; thus, the data
indicate that mice have no preference for any specific arm (Fig.
5D). This presumably reflected the symmetrical design of the
plus maze. The average time spent in each lane was 2.2 s
(Fig. 5D). During 1,000 trials, with 250 trials in each lane, this mouse spent 2.2 ± 0.6 s (mean ± SD) in lane 1, 2.3 ± 1.2 s in lane 2, 2.2 ± 0.5 s in lane 3, and 2.1 ± 0.7 s in lane 4.
As a mouse rotated the maze, the different lanes moved past
it (Supplemental Videos 1–4). In each trial, the animal could move a single lane or multiple lanes past itself, and each additional lane rotated past required extra time (Fig. 5E). Note that while
there were only four lanes to choose from, animals sometimes
missed the correct lane, which added time to an individual trial.
This time interval varied considerably because the duration of
the various behavioral events within a trial (the time to pick a
correct lane, the time to make a correct choice to lick right or
left) could all vary (Fig. 5F, mean and SE for data from 3
animals). The mean durations for three mice sorted for the
number of lanes they rotated past (only 4 lanes are shown in
Fig. 5F) show that many temporal aspects of the behavior, for
example, the time taken to enter the correct lane, the time spent
traversing a lane, and the time to make a tactile decision and lick
the spout, were similar irrespective of whether the animal
traversed one or four lanes before picking a lane. However, the
duration of intertrial intervals showed an inverse relation with
the time the mice spent traversing lanes.
Having established that the animals could orient the Air-
Track maze in a stress-free manner, we extended the study to
introduce two different kinds of behavioral tasks that could in
principle be used to examine the correlation between neuronal
activity and behavior. We did not train animals to the highest
levels of performance, and have not yet measured brain
activity simultaneously, but within 2 wk of consecutive days of
training, animals reached threshold behavioral criteria in re-
markably few sessions and trials compared with other typical
head-fixed systems.
Six head-fixed mice were trained to perform in a Go/No-Go
visual discrimination task and achieved a criterion above 70%
correct choices. Three of them were subsequently advanced to
a second phase of training where they performed two-choice
aperture width discrimination tasks, whereas the other three
were advanced to perform a delayed two-choice texture dis-
crimination task. The criterion for successful performance in
this task was also 70%.
The goal of rotation was to find and choose the correct lane,
which was selected on the basis of a visual stimulus. After 3
days of training, mice (n = 6) learned to use this visual cue and
select the lane where an LED turned off with a mean perfor-
mance of 84.5% correct choices (Fig. 6A). Figure 6B shows
performance over 4 days and for up to 1,000 trials for three
mice. During the initial phase of training (day 1), animals were guided (supervised) manually to choose the correct lane (Fig. 6B). The aim of the supervised phase was to maintain the animals' motivation to perform more trials per session in this early phase of training. Mice reached 70% correct performance within 300–700 trials (Fig. 6B).

Fig. 6. Performance of mice in visual and tactile discrimination tasks in the Air-Track. A: performance in a Go/No-Go visual discrimination task and in 2-alternative forced choice (2AFC) tactile discrimination tasks. The average performance in the visual task (n = 6 mice) was 84.5% correct trials after 4 days of active visual training. The average performance in the aperture width tactile task was 91% (n = 3 mice), whereas average performance was 72.6% in the texture tactile task (n = 3 mice) after 4–6 days of active tactile training. B: performance of 3 mice in the Go/No-Go visual discrimination task within 4 days of training. Mouse 1 (M1, blue line) achieved a 91% success rate, mouse 2 (M2, orange line) achieved 83%, and mouse 3 (M3, brown line) achieved 85%. During the 1st day of training, the experimenter guided the animal to keep it motivated; afterward, the animals were unsupervised. C: performance of 3 mice during the 2-choice aperture width discrimination task. In 5 days, mouse 4 (M4, pale green line) achieved a 96% success rate, while mouse 5 (M5, violet line) achieved 78% and mouse 6 (M6, sky blue line) achieved 100%. D: performance of 3 mice during the delayed 2-choice texture discrimination task. M1 (blue line) achieved a 75% success rate on day 6, M2 (orange line) achieved 69% on day 4, and M3 (brown line) achieved 74% on day 5. E: mouse performance during active aperture width tactile discrimination over the time line of one session. Tactile performance reached 100% (data shown for M6) before dropping to chance level after the difference in aperture width between lanes was removed; reinstalling the aperture width difference restored correct behavioral performance. F: cross-modal performance of visual and tactile tasks during multiple sessions of training. Visual discrimination performance declined (data shown for M1) from 90 to 75% as mice learned the texture discrimination task. Over several days, performance increased to a 90% success rate in the visual discrimination task, while performance in the texture discrimination task reached 75% on day 6.
Next, mice were advanced to perform a more difficult task
that combined the visual task Go/No-Go with one of the tactile
2AFC tasks (Fig. 6A). During the aperture 2AFC task, the three
mice reached 70% correct choices by the 2nd day (200–300
trials) and maintained this accuracy for the next 3 days (Fig. 6,
A and C). In the delayed texture 2AFC task, two mice reached 70% correct choices within 4–6 days of training (400–1,000 trials)
(Fig. 6, A and D). To confirm that mice used the aperture cues in the two-choice task, we measured performance (data shown for mouse 6) with and without the aperture apparatus over the time line of one session (Fig. 6E). Tactile performance reached a high level, dropped to chance after the aperture apparatus was removed from all lanes, and recovered once the apparatus was reinstalled. Figure 6F shows cross-modal performance (data shown for mouse 1) across training sessions of the visual and tactile discrimination tasks. Visual performance declined initially after the introduction of the tactile discrimination task, before visual and tactile performance increased together by the end of training.
DISCUSSION
We present a novel behavioral system for use with head-
fixed mice. The Air-Track is flexible, low cost, easy to build,
and requires only a minimal computational control and data
acquisition system compared with common virtual reality sys-
tems. It is unique in providing a behaviorally rich environment
for active sensing that engages multiple sensory modalities
simultaneously. The proof-of-principle testing of mice in this
environment provided by this study shows that this system can
be used to quickly train animals for both simple and complex
tasks.
Air-Track exploits the principle of using air-lifted platforms
previously described by Kislin and colleagues (Kislin et al.
2014). Several features of the Air-Track system are novel,
introducing major engineering and design elements that make
it amenable for behavioral automation in active sensing exper-
iments. First, we used a camera-controlled real-time tracking of
2D position and rotation of the floating platform. Second, we
developed an automated closed-loop hardware/software con-
trol system based on an Arduino interface that provides an
interactive environment to control stimulus presentation and
reward delivery. Third, we designed a versatile reward delivery
system appropriate for different physical environment config-
urations. Fourth, we used 3D printing technology to construct
mazes for flexible design of novel environments. Using our
automated closed-loop system, we demonstrated that animals
could be easily trained on novel behavioral tasks with multi-
modal stimuli (visual, somatosensory, and auditory).
The Air-Track system is most directly comparable to virtual
reality systems that also attempt to achieve quasinatural be-
havior in rodents while head fixing the animal on an air-
cushioned ball (Harvey et al. 2009) (Fig. 7, A and B). Virtual
reality approaches have been extremely successful in revealing
precise information about brain activity during active sensing
tasks. Most studies to date have focused on visual-motor tasks,
typically running through a visually displayed virtual maze
(Kaneko and Stryker 2014; Keller et al. 2012; Saleem et al.
2013). For instance, such systems have been used to show how
ensembles of neurons are recruited in the visual system while
learning new environments (Poort et al. 2015) and how differ-
ent behavioral states such as arousal influence visual process-
ing and functional flexibility of the primary visual cortex
(Vinck et al. 2015). An alternative approach applied to the
vibrissal sensory motor system has been to mount a tactile
environment (a set of walls) that moves as the animal walks on
an air ball (Sofroniew et al. 2014). Generating these kinds of
complex environments is difficult to achieve in a traditional
head-fixed nonactive sensing approach.
In our behavioral design, we used three sensory discrimina-
tion tasks: visual Go/No-Go task, 2AFC aperture width dis-
crimination task, and a delayed 2AFC texture discrimination
task. In visual and aperture discrimination tasks, the mice
learned the task in few days, whereas in the texture discrimi-
nation task mice performance was uneven. A major cognitive
challenge in performing the texture discrimination task was
that it also had a working memory component. After contact
with the stimulus, i.e., wall texture, the mouse had to extend its
head outside the lane, losing contact with the stimulus for 1.5 s
until the decision could be made. This task therefore required
the retention of the texture in working memory, which signif-
icantly increased the difficulty of the task. Although the mice
passed the 70% criterion in the delayed 2AFC texture discrim-
ination task in a few days, their performance across sessions
was worse and uneven compared with the other two tasks.
The Air-Track provides a convenient “one-size-fits-all” so-
lution that extends virtual reality approaches to ultrarealistic
and multimodal behavioral regimes. Whereas virtual reality
approaches are best suited for visual stimulation, the Air-Track
system automatically includes multiple modalities (Fig. 7, C
and D), that is, because the maze is physically present, all of
the possible sensory information usually available to freely
moving animals (visual, auditory, tactile, and olfactory) is
available. We did not explicitly explore all modalities in this
study (concentrating on vision and somatosensation); however,
animals also had auditory (buzzer) (and in principle could have
had olfactory) cues coupled to their movements in the maze.
Another important difference between virtual reality ap-
proaches and the Air-Track is that virtual environments require
the mapping of the animal’s movements to the virtual world,
which can only be done via a computer model (Fig. 7E),
involving an additional step in the sensorimotor loop. With the
Air-Track, this step is unnecessary, since movements of the
mouse are automatically translated into movement of the phys-
ical maze (Fig. 7F). This has several advantages as follows: 1)
there is an accurate coupling of the animal’s movement with
the movement of the environment with no “glitches” (i.e.,
computational mapping errors), 2) there is little subjectivity in
determining this mapping, and 3) it requires zero computa-
tional effort or expensive equipment. On the other hand, a
disadvantage of our approach is that deliberate mismatches in
mapping are more difficult to produce compared with what can
be done in virtual reality systems that take the animal’s internal
model (mapping) of the virtual world into account (Keller et al.
2012; Saleem et al. 2013). It may still be possible with
Air-Track to introduce pseudomapping errors (i.e., violations of the animal's internal model) by changing the friction (air pressure) or
disturbing the movement of the platform.
Another disadvantage of the Air-Track system is that the mazes
are fixed and finite compared with virtual mazes that can be
infinite and/or freely changing. We used a 15-cm-diameter plat-
form, which provided enough space for several lanes, and this
may compromise the effectiveness of the Air-Track system for
experiments exploring the activity of place cells. However, larger
platforms that mice can move are likely to engage the activity of place cells and an animal's sense of place. Con-
sequently, it should be possible to use the Air-Track in multi-
modal spatial tasks (Gener et al. 2013; Griffin et al. 2012).
A practical limitation of the Air-Track is that the available
space and the inertial load of the platform for the animal limit
the maximum dimensions of the system. The inertial load in
our current design was extremely low (Supplemental Video 1)
such that movement appeared completely normal. In fact, we
found it necessary to introduce some artificial friction by
reducing air pressure below maximum to better approximate
the animal’s inertia in a natural freely moving situation.
The movement of the animals in the Air-Track system is
likely to be more realistic than on a ball or treadmill because
the platform is flat. In addition, the flat surface of the platform
can be used for different kinds of somatosensory cues, textures
on the floor of the maze, or directional auditory cues for a
Y-maze that have proved convenient in freely moving mazes
(Manita et al. 2015). Furthermore, with Air-Track we can deliver a rich, complete environment with different textures or aperture widths of the kind previously used with freely moving animals (Chen et al. 2015; Jadhav and Feldman 2010; Krupa et al. 2004; Prigg et al. 2002; von Heimendahl et al. 2007; Wolfe et al. 2008). Nonetheless, like most head-fixed preparations, it is necessary to keep the Air-Track system perfectly horizontal, which prevents some possible experiments, in particular tests of vestibular contributions; however, this is an intrinsic difficulty for all head-fixed systems.

Fig. 7. Air-Track provides a real-world experience for head-restrained rodents. A and B: a virtual reality platform, adapted with permission from Dombeck and colleagues (2010) (A), and an Air-Track platform (B). Air-Track provides a real maze, whereas virtual reality creates a virtual maze. In both systems, the walls and the shape of the track can be changed based on the experimental design. C and D: key difference between an Air-Track and a virtual reality setup. Air-Track offers more of the real world to head-fixed mice, since it contains a palpable world with multiple dimensions; air balls and virtual reality setups typically generate a visually rich but nontactile world. E and F: experimental mapping with virtual reality and Air-Track. Air-Track creates a real environment with walls, which allows the computational mapping between sensory input and motor output to be skipped. In addition, a virtual visual world can still be introduced to surround the Air-Track, as with air-floating track spheres.
The tracking system we chose was the “Pixy” camera
attached to an Arduino-Uno microcontroller, which provided
convenient off-the-shelf tracking of the coordinates of the
platform in real-time. In principle, more sophisticated and
high-speed tracking systems could also be used. We placed an
emphasis in our study on easy-to-obtain components with the
intention of making the system easily available to any labora-
tory. For this reason, the code is open source, and hardware/
software descriptions are available online at http://www.neuro-
airtrack.com/. Because the system is small and compact, it can
be introduced into practically any existing recording setup
(e.g., under most in vitro microscope systems) and therefore
can easily be moved from one setup to another within a
laboratory to maximize the recording strategies. The total
material cost of our entire system was in the range of €200–500 (approximately $300–600), depending on the materials
used and manufacturing costs (e.g., 3D printing). We made
several mazes, including the plus maze presented here and a Y-maze; other shapes could easily and flexibly be produced using common 3D printers. Given the simplicity of the
system and its open source availability, there should be no
barrier to its introduction into any neuroscience laboratory
interested in behavioral paradigms based on active sensing.
ACKNOWLEDGMENTS
We thank the Charité Workshop for technical assistance, especially Alex-
ander Schill and Christian Koenig. We also thank Katja Frei for initial
establishment of a two-choice task on harnessed mice with an air table hover
cage. We also thank members of the Larkum laboratory, and in particular
Christina Bocklisch, Guy Doron, Albert Gidon, Naoya Takahashi, Keisuke
Sehara, and Julie Siebt, for useful discussions about earlier versions of this
manuscript.
GRANTS
This work was supported by grants from the Einstein Foundation Berlin (Y.
Winter, M. E. Larkum, and M. A. Nashaat), and the DFG EXC 257, Neuro-
Cure Center for Excellence (Y. Winter, M. E. Larkum) and Marie Curie
Fellowship (R. N. S. Sachdev).
DISCLOSURES
No conflicts of interest, financial or otherwise, are declared by the authors.
AUTHOR CONTRIBUTIONS
M.A.N., Y.W., and M.E.L. conception and design of research; M.A.N.
performed experiments; M.A.N. and H.O. analyzed data; M.A.N., R.N.S.S.,
and M.E.L. interpreted results of experiments; M.A.N., H.O., R.N.S.S., and
M.E.L. prepared figures; M.A.N., H.O., R.N.S.S., and M.E.L. drafted manu-
script; M.A.N., H.O., R.N.S.S., Y.W., and M.E.L. edited and revised manu-
script; M.E.L. approved final version of manuscript.
REFERENCES
Acharya L, Aghajan ZM, Vuong C, Moore JJ, Mehta MR. Causal influence
of visual cues on hippocampal directional selectivity. Cell 164: 197–207,
2016.
Aghajan ZM, Acharya L, Moore JJ, Cushman JD, Vuong C, Mehta MR.
Impaired spatial selectivity and intact phase precession in two-dimensional
virtual reality. Nat Neurosci 18: 121–128, 2015.
Andermann ML, Gilfoy NB, Goldey GJ, Sachdev RN, Wolfel M, McCor-
mick DA, Reid RC, Levene MJ. Chronic cellular imaging of entire cortical
columns in awake mice using microprisms. Neuron 80: 900 –913, 2013.
Bermejo R, Harvey M, Gao P, Zeigler HP. Conditioned whisking in the rat.
Somatosens Mot Res 13: 225–233, 1996.
Chen JL, Margolis DJ, Stankov A, Sumanovski LT, Schneider BL,
Helmchen F. Pathway-specific reorganization of projection neurons in
somatosensory cortex during learning. Nat Neurosci 18: 1101–1108, 2015.
Crochet S, Petersen CC. Correlating whisker behavior with membrane
potential in barrel cortex of awake mice. Nat Neurosci 9: 608– 610, 2006.
Dombeck DA, Graziano MS, Tank DW. Functional clustering of neurons in
motor cortex determined by cellular resolution imaging in awake behaving
mice. J Neurosci 29: 13751–13760, 2009.
Dombeck DA, Harvey CD, Tian L, Looger LL, Tank DW. Functional
imaging of hippocampal place cells at cellular resolution during virtual
navigation. Nat Neurosci 13: 1433–1440, 2010.
Gener T, Perez-Mendez L, Sanchez-Vives MV. Tactile modulation of
hippocampal place fields. Hippocampus 23: 1453–1462, 2013.
Griffin AL, Owens CB, Peters GJ, Adelman PC, Cline KM. Spatial
representations in dorsal hippocampal neurons during a tactile-visual con-
ditional discrimination task. Hippocampus 22: 299 –308, 2012.
Guo ZV, Hires SA, Li N, O’Connor DH, Komiyama T, Ophir E, Huber D,
Bonardi C, Morandell K, Gutnisky D, Peron S, Xu NL, Cox J, Svoboda
K. Procedures for behavioral experiments in head-fixed mice. PLoS One 9:
e88678, 2014.
Harvey CD, Coen P, Tank DW. Choice-specific sequences in parietal cortex
during a virtual-navigation decision task. Nature 484: 62– 68, 2012.
Harvey CD, Collman F, Dombeck DA, Tank DW. Intracellular dynamics of
hippocampal place cells during virtual navigation. Nature 461: 941–946,
2009.
Hentschke H, Haiss F, Schwarz C. Central signals rapidly switch tactile
processing in rat barrel cortex during whisker movements. Cereb Cortex 16:
1142–1156, 2006.
Holscher C, Schnee A, Dahmen H, Setia L, Mallot HA. Rats are able to
navigate in virtual environments. J Exp Biol 208: 561–569, 2005.
Jadhav SP, Feldman DE. Texture coding in the whisker system. Curr Opin
Neurobiol 20: 313–318, 2010.
Kaneko M, Stryker MP. Sensory experience during locomotion promotes
recovery of function in adult visual cortex. Elife 3: e02798, 2014.
Keller GB, Bonhoeffer T, Hubener M. Sensorimotor mismatch signals in
primary visual cortex of the behaving mouse. Neuron 74: 809– 815, 2012.
Kislin M, Mugantseva E, Molotkov D, Kulesskaya N, Khirug S, Kirilkin
I, Pryazhnikov E, Kolikova J, Toptunov D, Yuryev M, Giniatullin R,
Voikar V, Rivera C, Rauvala H, Khiroug L. Flat-floored air-lifted
platform: a new method for combining behavior with microscopy or elec-
trophysiology on awake freely moving rodents. J Vis Exp e51869, 2014.
Kleinfeld D, Ahissar E, Diamond ME. Active sensation: insights from the
rodent vibrissa sensorimotor system. Curr Opin Neurobiol 16: 435– 444,
2006.
Krupa DJ, Wiest MC, Shuler MG, Laubach M, Nicolelis MA. Layer-
specific somatosensory cortical activation during active tactile discrimina-
tion. Science 304: 1989 –1992, 2004.
Lenschow C, Brecht M. Barrel cortex membrane potential dynamics in social
touch. Neuron 85: 718 –725, 2015.
Manita S, Suzuki T, Homma C, Matsumoto T, Odagawa M, Yamada K,
Ota K, Matsubara C, Inutsuka A, Sato M, Ohkura M, Yamanaka A,
Yanagawa Y, Nakai J, Hayashi Y, Larkum ME, Murayama M. A
top-down cortical circuit for accurate sensory perception. Neuron 86: 1304 –
1316, 2015.
McGinley MJ, David SV, McCormick DA. Cortical membrane potential
signature of optimal states for sensory signal detection. Neuron 87: 179 –
192, 2015.
Musall S, von der Behrens W, Mayrhofer JM, Weber B, Helmchen F,
Haiss F. Tactile frequency discrimination is enhanced by circumventing
neocortical adaptation. Nat Neurosci 17: 1567–1573, 2014.
Niell CM, Stryker MP. Modulation of visual responses by behavioral state in
mouse visual cortex. Neuron 65: 472– 479, 2010.
Polack PO, Friedman J, Golshani P. Cellular mechanisms of brain state-
dependent gain modulation in visual cortex. Nat Neurosci 16: 1331–1339,
2013.
Poort J, Khan AG, Pachitariu M, Nemri A, Orsolic I, Krupic J, Bauza M,
Sahani M, Keller GB, Mrsic-Flogel TD, Hofer SB. Learning enhances
sensory and multiple non-sensory representations in primary visual cortex.
Neuron 86: 1478 –1490, 2015.
Prigg T, Goldreich D, Carvell GE, Simons DJ. Texture discrimination and
unit recordings in the rat whisker/barrel system. Physiol Behav 77: 671– 675,
2002.
Reimer J, Froudarakis E, Cadwell CR, Yatsenko D, Denfield GH, Tolias
AS. Pupil fluctuations track fast switching of cortical states during quiet
wakefulness. Neuron 84: 355–362, 2014.
Sachdev RN, Sato T, Ebner FF. Divergent movement of adjacent whiskers.
J Neurophysiol 87: 1440 –1448, 2002.
Saleem AB, Ayaz A, Jeffery KJ, Harris KD, Carandini M. Integration of
visual motion and locomotion in mouse visual cortex. Nat Neurosci 16:
1864 –1869, 2013.
Schneider DM, Nelson A, Mooney R. A synaptic and circuit basis for
corollary discharge in the auditory cortex. Nature 513: 189 –194, 2014.
Schwarz C, Hentschke H, Butovas S, Haiss F, Stuttgen MC, Gerdjikov
TV, Bergner CG, Waiblinger C. The head-fixed behaving rat–procedures
and pitfalls. Somatosens Mot Res 27: 131–148, 2010.
Sofroniew NJ, Cohen JD, Lee AK, Svoboda K. Natural whisker-guided
behavior by head-fixed mice in tactile virtual reality. J Neurosci 34:
9537–9550, 2014.
Sofroniew NJ, Vlasov YA, Andrew Hires S, Freeman J, Svoboda K. Neural
coding in barrel cortex during whisker-guided locomotion. Elife 4: 2015.
Vinck M, Batista-Brito R, Knoblich U, Cardin JA. Arousal and locomotion
make distinct contributions to cortical activity patterns and visual encoding.
Neuron 86: 740 –754, 2015.
von Heimendahl M, Itskov PM, Arabzadeh E, Diamond ME. Neuronal
activity in rat barrel cortex underlying texture discrimination. PLoS Biol 5:
e305, 2007.
Welsh JP, Lang EJ, Sugihara I, Llinas R. Dynamic organization of motor
control within the olivocerebellar system. Nature 374: 453– 457, 1995.
Wolfe J, Hill DN, Pahlavan S, Drew PJ, Kleinfeld D, Feldman DE. Texture
coding in the rat whisker system: slip-stick versus differential resonance.
PLoS Biol 6: e215, 2008.
Zagha E, Casale AE, Sachdev RN, McGinley MJ, McCormick DA. Motor
cortex feedback influences sensory processing by modulating network state.
Neuron 79: 567–578, 2013.
... This powerful new development of whole environments around head fixed mice has focused almost exclusively on visual information. A second related development has been to use "real world" floating-platform environments that mice navigate while head fixed ( [9,25,26]). Floating platform approaches are well-suited to tactile and multi-modal behaviors. ...
... The skull was exposed, the fascia on the bone scraped off with a dental scraper and the skull was air dried. A light-weight aluminum head post was laid on the skull and RelyX (3M, Minnesota, US) cement was used to affix the head post to the skull ( [9,25,27]). Black jet acrylic (Ortho Jet, Lang Dental) was used as a second layer to cover the exposed bone and to enhance the cementing of the head post. ...
... The Air-Track platform utilizes a simple design with wide experimental potential [25]. In this report, we used a clear plexiglass air table mounted on aluminum legs (Fig 1A). ...
Article
Full-text available
The use of head fixation has become routine in systems neuroscience. However, whether behavior changes with head fixation, and whether animals can learn aspects of a task while freely moving and transfer this knowledge to the head-fixed condition, has not been examined in much detail. Here, we used a novel floating platform, the “Air-Track”, which simulates free movement in a real-world environment, to address the effect of head fixation, and developed methods to accelerate training of behavioral tasks for head-fixed mice. We trained mice in a Y-maze two-choice discrimination task. One group was trained while head fixed and compared to a separate group that was pre-trained while freely moving and then trained on the same task while head fixed. Pre-training significantly reduced the time needed to relearn the discrimination task while head fixed. Freely moving and head-fixed mice displayed similar behavioral patterns; however, head fixation significantly slowed movement speed. The speed of movement in head-fixed mice depended on the weight of the platform. We conclude that home-cage pre-training improves learning performance of head-fixed mice and that, while head fixation obviously limits some aspects of movement, the patterns of behavior observed in head-fixed and freely moving mice are similar.
... The predominant approach to circumventing the shortcomings of head-fixed behaviors is to place head-fixed rodents in a virtual reality (VR) environment 19,20. Animals are placed on a running wheel 21, disc 22,23, a floating omnidirectional ball treadmill 24, or a flat arena on an air cushion 25,26, and allowed to locomote. Despite the groundbreaking progress of these systems, current VR behaviors lack vestibular inputs, raising questions about whether rodents accept the VR environment as real or whether they merely learn to interact with it 20. ...
... B: Schematic of conventional head-fixed VR systems that allow 2D navigation. Mice are head-fixed, and the floor they locomote on, either a floating flat arena 25,26 or a spherical treadmill 45, rotates under them. C: Schematic of head-rotation systems. ...
... This axial constraint system is then combined with a separate system that allows the animal to locomote. Here we use a 2-D air maze 25,26 that is rotationally constrained (it can translate in x and y, but not rotate; see Fig. 9 for details). Through the combination of the 1-degree-of-freedom headpost and the 2-degrees-of-freedom air maze, animals are now free to translate and rotate their heads and bodies, completely removing the visual/motor/vestibular mismatch of current VR systems 20. ...
Preprint
Full-text available
Understanding how the biology of the brain gives rise to the computations that drive behavior requires high fidelity, large scale, and subcellular measurements of neural activity. 2-photon microscopy is the primary tool that satisfies these requirements, particularly for measurements during behavior. However, this technique requires rigid head-fixation, constraining the behavioral repertoire of experimental subjects. Increasingly, complex task paradigms are being used to investigate the neural substrates of complex behaviors, including navigation of complex environments, resolving uncertainty between multiple outcomes, integrating unreliable information over time, and/or building internal models of the world. In rodents, planning and decision making processes are often expressed via head and body motion. This produces a significant limitation for head-fixed two-photon imaging. We therefore developed a system that overcomes a major problem of head-fixation: the lack of rotational vestibular input. The system measures rotational strain exerted by mice on the head restraint, which consequently drives a motor, rotating the constraint system and dissipating the strain. This permits mice to rotate their heads in the azimuthal plane with negligible inertia and friction. This stable rotating head-fixation system allows mice to explore physical or virtual 2-D environments. To demonstrate the performance of our system, we conducted 2-photon GCaMP6f imaging in somas and dendrites of pyramidal neurons in mouse retrosplenial cortex. We show that the subcellular resolution of the system’s 2-photon imaging is comparable to that of conventional head-fixed experiments. Additionally, this system allows the attachment of heavy instrumentation to the animal, making it possible to extend the approach to large-scale electrophysiology experiments in the future. Our method enables the use of state-of-the-art imaging techniques while animals perform more complex and naturalistic behaviors than currently possible, with broad potential applications in systems neuroscience.
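The strain-cancelling head rotation described in this preprint is, at its core, a proportional servo loop: measure the torque the animal exerts on the restraint, command the motor to rotate in the same direction, and the strain dissipates. The sketch below illustrates that principle only; `read_strain`, `set_motor_velocity`, and all gain values are hypothetical placeholders, not the authors' hardware interface.

```python
import time

# Hypothetical hardware bindings -- placeholders, not the authors' API.
def read_strain() -> float:
    """Return the torque (arbitrary units) the mouse exerts on the headpost."""
    raise NotImplementedError

def set_motor_velocity(v: float) -> None:
    """Command azimuthal motor velocity (deg/s, signed)."""
    raise NotImplementedError

GAIN = 5.0       # proportional gain: deg/s of rotation per unit of strain (assumed)
DEADBAND = 0.02  # ignore sensor noise below this magnitude (assumed)
LOOP_DT = 0.002  # 500 Hz control loop (assumed)

def strain_cancelling_loop():
    """Rotate the restraint in the direction the mouse pushes,
    keeping the net torque on the headpost near zero."""
    while True:
        strain = read_strain()
        if abs(strain) < DEADBAND:
            set_motor_velocity(0.0)
        else:
            # A velocity proportional to the residual strain dissipates it.
            set_motor_velocity(GAIN * strain)
        time.sleep(LOOP_DT)
```

In practice the deadband and gain would be tuned so that the restraint feels frictionless to the mouse without the loop oscillating.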
... One solution to this problem is to use a floating real-world environment that moves around the animal when it moves. Such a system requires an arena that is not anchored to the recording setup and moves freely in response to animal movement 21. ...
Article
Full-text available
Head-fixation of mice enables high-resolution monitoring of neuronal activity coupled with precise control of environmental stimuli. Virtual reality can be used to emulate the visual experience of movement during head fixation, but a low inertia floating real-world environment (mobile homecage, MHC) has the potential to engage more sensory modalities and provide a richer experimental environment for complex behavioral tasks. However, it is not known whether mice react to this adapted environment in a similar manner to real environments, or whether the MHC can be used to implement validated, maze-based behavioral tasks. Here, we show that hippocampal place cell representations are intact in the MHC and that the system allows relatively long (20 min) whole-cell patch clamp recordings from dorsal CA1 pyramidal neurons, revealing sub-threshold membrane potential dynamics. Furthermore, mice learn the location of a liquid reward within an adapted T-maze guided by 2-dimensional spatial navigation cues and relearn the location when spatial contingencies are reversed. Bilateral infusions of scopolamine show that this learning is hippocampus-dependent and requires intact cholinergic signalling. Therefore, we characterize the MHC system as an experimental tool to study sub-threshold membrane potential dynamics that underpin complex navigation behaviors.
... A second option involves the use of an air-levitated platform for head-fixed mice to move on: this allows the animal to traverse a physical environment containing multisensory (visual, tactile, olfactory) cues (Kislin et al., 2014; Nashaat et al., 2016). Such systems provide a more realistic environment than VR while still allowing high trial counts. ...
Article
Full-text available
The use of head fixation in mice is increasingly common in research, its use having initially been restricted to the field of sensory neuroscience. Head restraint has often been combined with fluid control, rather than food restriction, to motivate behaviour, but this too is now in use for both restrained and non-restrained animals. Despite this, there is little guidance on how best to employ these techniques to optimise both scientific outcomes and animal welfare. This article summarises current practices and provides recommendations to improve animal wellbeing and data quality, based on a survey of the community, literature reviews, and the expert opinion and practical experience of an international working group convened by the UK’s National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs). Topics covered include head fixation surgery and post-operative care, habituation to restraint, and the use of fluid/food control to motivate performance. We also discuss some recent developments that may offer alternative ways to collect data from large numbers of behavioural trials without the need for restraint. The aim is to provide support for researchers at all levels, animal care staff, and ethics committees to refine procedures and practices in line with the refinement principle of the 3Rs.
... Recently, a 2D real-world system in which mice are head-fixed while navigating a track floating on air has been developed (Kislin et al., 2014; Nashaat et al., 2016). The system allows for sensory feedback, and head immobility permits intracellular recording and two-photon imaging. ...
Article
Full-text available
The hippocampal place cell system in rodents has provided a major paradigm for the scientific investigation of memory function and dysfunction. Place cells have been observed in area CA1 of the hippocampus of both freely moving animals, and of head-fixed animals navigating in virtual reality environments. However, spatial coding in virtual reality preparations has been observed to be impaired. Here we show that the use of a real-world environment system for head-fixed mice, consisting of an air-floating track with proximal cues, provides some advantages over virtual reality systems for the study of spatial memory. We imaged the hippocampus of head-fixed mice injected with the genetically encoded calcium indicator GCaMP6s while they navigated circularly constrained or open environments on the floating platform. We observed consistent place tuning in a substantial fraction of cells despite the absence of distal visual cues. Place fields remapped when animals entered a different environment. When animals re-entered the same environment, place fields typically remapped over a time period of multiple days, faster than in freely moving preparations, but comparable with virtual reality. Spatial information rates were within the range observed in freely moving mice. Manifold analysis indicated that spatial information could be extracted from a low-dimensional subspace of the neural population dynamics. This is the first demonstration of place cells in head-fixed mice navigating on an air-lifted real-world platform, validating its use for the study of brain circuits involved in memory and affected by neurodegenerative disorders.
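The "spatial information rates" referred to above are conventionally computed with the Skaggs information measure, I = Σᵢ pᵢ λᵢ log₂(λᵢ/λ̄). Below is a minimal sketch of that standard calculation from a binned occupancy map and firing-rate map; this is generic code, not this study's pipeline.

```python
import numpy as np

def spatial_information(occupancy: np.ndarray, rate_map: np.ndarray):
    """Skaggs spatial information of a place cell.

    occupancy : time spent in each spatial bin (s)
    rate_map  : mean firing rate in each bin (Hz)
    Returns (bits per second, bits per spike).
    """
    p = occupancy / occupancy.sum()       # occupancy probability per bin
    mean_rate = np.sum(p * rate_map)      # overall mean firing rate (Hz)
    valid = (rate_map > 0) & (p > 0)      # log2 is undefined at zero rate
    bits_per_sec = np.sum(
        p[valid] * rate_map[valid] * np.log2(rate_map[valid] / mean_rate)
    )
    return bits_per_sec, bits_per_sec / mean_rate

# Example: a cell firing mostly in one quadrant of a 10 x 10 arena.
occ = np.ones((10, 10))
rates = np.full((10, 10), 0.5)
rates[:5, :5] = 8.0
print(spatial_information(occ, rates))
```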
Chapter
The first line of defense for the central nervous system (CNS) against injury or disease is provided by microglia. Microglia were long believed to stay in a dormant/resting state, reacting only to injury or disease. This view changed dramatically with the development of modern imaging techniques that allowed the study of microglial behavior in the intact brain over time, to reveal the dynamic nature of their responses. Over the past two decades, in vivo imaging using multiphoton microscopy has revealed numerous new functions of microglia in the developing, adult, aged, injured, and diseased CNS. As the most dynamic cells in the brain, microglia continuously contact all structures and cell types, such as glial and vascular cells, neuronal cell bodies, axons, dendrites, and dendritic spines, and are believed to play a central role in sculpting neuronal networks throughout life. Following trauma, or in neurodegenerative or neuroinflammatory diseases, microglial responses range from protective to harmful, underscoring the need to better understand their diverse roles and states in different pathological conditions. In this chapter, we introduce multiphoton microscopy and discuss recent advances in structural and functional imaging technologies that have expanded our toolbox to study microglial states and behaviors in new ways and depths. We also discuss relevant mouse models available for in vivo imaging studies of microglia and review how such studies are constantly refining our understanding of the multifaceted role of microglia in the healthy and diseased CNS.
Chapter
Central synapses are typically ensheathed by nanoscopic protrusions of the astrocytic cell body known as perisynaptic astrocytic processes. While advances in in vivo laser-scanning microscopy and bioengineering have provided ample tools to measure intracellular activity of both neurons and astrocytes through fluorescent biosensors, there remain significant challenges when assessing specific responses at single synapses that incorporate activity from both astrocytic and neuronal compartments. In this chapter, we describe a simple strategy to label the astrocytic population present at axonal terminations using viral vectors, in combination with sparse labeling of the axonal projections themselves, such that individual axonal boutons en passant within the astrocytic arbor can be optically assessed using two-photon light microscopy. We discuss some ascending and descending pathways that are amenable to this strategy, describe the various protocols needed to set up such experiments, and consider the arousal state of the animal in order to account for constitutive or cryptic activity. With more recent advances in optogenetic actuators and genetically encoded indicators, including those for a wide variety of neurotransmitters, this strategy can be applied to more precisely investigate how astrocytes impact synaptic efficacy and plasticity at individual synapses in the central nervous system.
Article
Full-text available
Huntington’s disease (HD) is a fatal, hereditary neurodegenerative disorder that causes chorea, cognitive deficits, and psychiatric symptoms. It is characterized by accumulation of mutant Htt protein, which primarily impacts striatal medium-sized spiny neurons (MSNs), as well as cortical pyramidal neurons (CPNs), causing synapse loss and eventually cell death. Perturbed Ca²⁺ homeostasis is believed to play a major role in HD, as altered Ca²⁺ homeostasis often precedes striatal dysfunction and manifestation of HD symptoms. In addition, dysregulation of Ca²⁺ can cause morphological and functional changes in MSNs and CPNs. Therefore, Ca²⁺ imaging techniques have the potential of visualizing changes in Ca²⁺ dynamics and neuronal activity in HD animal models. This minireview focuses on studies using diverse Ca²⁺ imaging techniques, including two-photon microscopy, fiber photometry, and miniscopes, in combination with Ca²⁺ indicators to monitor activity of neurons in HD models as the disease progresses. We then discuss the future applications of Ca²⁺ imaging to visualize disease mechanisms and alterations associated with HD, as well as studies showing how, as a proof of concept, Ca²⁺ imaging using miniscopes in freely behaving animals can help elucidate the differential role of direct and indirect pathway MSNs in HD symptoms.
Article
Full-text available
Navigation through complex environments requires motor planning, motor preparation, and coordination between multiple sensory-motor modalities. For example, the stepping motion when we walk is coordinated with motion of the torso, arms, head, and eyes. In rodents, movement of the animal through the environment is coordinated with whisking. Even head-fixed mice navigating a plus maze position their whiskers asymmetrically, with the bilateral asymmetry signifying the upcoming turn direction. Here we report that, in addition to moving their whiskers, on every trial mice also move their eyes conjugately in the direction of the upcoming turn. Not only do mice move their eyes, but they coordinate saccadic eye movement with the asymmetric positioning of the whiskers. Our analysis shows that asymmetric positioning of whiskers predicted the turn direction that mice would make at an earlier stage than eye movement. Consistent with these results, our observations also revealed that whisker asymmetry increases before saccadic eye movement. Importantly, this work shows that when rodents plan for active behavior, their motor plans can involve both eye and whisker movement. We conclude that, when mice are engaged in and moving through complex real-world environments, their behavioral state can be read out in the movement of both their whiskers and eyes.
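A simple way to operationalize the predictive whisker asymmetry described above is a bilateral difference index thresholded before the turn. The sketch below is illustrative only; the sign convention, angle units, and threshold are assumptions, not the authors' analysis code.

```python
import numpy as np

def whisker_asymmetry(left_angle, right_angle):
    """Bilateral asymmetry index per video frame (deg).
    Assumed convention: positive values mean the whisker field is
    oriented further toward the right side."""
    return np.asarray(right_angle) - np.asarray(left_angle)

def predict_turn(left_angle, right_angle, threshold=5.0):
    """Predict the upcoming turn from the mean pre-turn asymmetry.
    Returns 'right', 'left', or 'undecided'."""
    idx = np.mean(whisker_asymmetry(left_angle, right_angle))
    if idx > threshold:
        return "right"
    if idx < -threshold:
        return "left"
    return "undecided"

# Example: right whiskers positioned ~10 deg further than the left.
print(predict_turn(left_angle=[20, 22, 21], right_angle=[31, 33, 30]))
```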
Article
Full-text available
A major frontier in neuroscience is to find neural correlates of perception, learning, decision making, and a variety of other types of behavior. In recent decades, modern devices have allowed simultaneous recordings of different operant responses and the electrical activity of large neuronal populations. However, the commercially available instruments for studying operant conditioning are expensive, and the design of low-cost chambers has emerged as an appealing alternative for resource-limited laboratories engaged in animal behavior research. In this article, we provide a full description of a platform that records operant behavior and synchronizes it with electrophysiological activity. The programming of this platform is open source, flexible, and adaptable to a wide range of operant conditioning tasks. We also show results of operant conditioning experiments with freely moving rats with simultaneous electrophysiological recordings.
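The core requirement of any such platform is mapping behavioral timestamps onto the electrophysiology clock, since the two acquisition systems drift relative to each other. Below is a minimal sketch of one common solution, assuming both systems log a shared TTL sync pulse; this is illustrative, not the article's code.

```python
import numpy as np

def align_events(event_times_behav, sync_behav, sync_ephys):
    """Map behavioral event timestamps into ephys time.

    event_times_behav : event times on the behavior clock (s)
    sync_behav, sync_ephys : arrival times of the same shared TTL
        pulses as seen by each system (s)
    Fits a linear clock model (offset + drift) and applies it.
    """
    # Least-squares fit: t_ephys ~ slope * t_behav + offset
    slope, offset = np.polyfit(sync_behav, sync_ephys, deg=1)
    return slope * np.asarray(event_times_behav) + offset

# Example: behavior clock drifts by 100 ppm and starts 2.5 s late.
sync_b = np.arange(0, 600, 10.0)
sync_e = 2.5 + sync_b * 1.0001
print(align_events([12.0, 250.0], sync_b, sync_e))
```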
Article
Full-text available
Animals seek out relevant information by moving through a dynamic world, but sensory systems are usually studied under highly constrained and passive conditions that may not probe important dimensions of the neural code. Here, we explored neural coding in the barrel cortex of head-fixed mice that tracked walls with their whiskers in tactile virtual reality. Optogenetic manipulations revealed that barrel cortex plays a role in wall-tracking. Closed-loop optogenetic control of layer 4 neurons can substitute for whisker-object contact to guide behavior resembling wall tracking. We measured neural activity using two-photon calcium imaging and extracellular recordings. Neurons were tuned to the distance between the animal snout and the contralateral wall, with monotonic, unimodal, and multimodal tuning curves. This rich representation of object location in the barrel cortex could not be predicted based on simple stimulus-response relationships involving individual whiskers and likely emerges within cortical circuits.
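Tuning curves like those reported above, whether monotonic, unimodal, or multimodal with respect to wall distance, reduce to a binned average of neural activity against snout-to-wall distance. A generic sketch of that computation follows; it is not the study's actual pipeline, and the bin count and signal type are assumptions.

```python
import numpy as np

def distance_tuning_curve(distance, activity, n_bins=20):
    """Mean neural activity as a function of wall distance.

    distance : snout-to-wall distance per frame (mm)
    activity : simultaneous neural signal per frame
               (e.g., deconvolved calcium or spike counts)
    Returns (bin_centers, mean_activity_per_bin).
    """
    distance = np.asarray(distance)
    activity = np.asarray(activity)
    edges = np.linspace(distance.min(), distance.max(), n_bins + 1)
    which = np.digitize(distance, edges[1:-1])   # bin index per frame
    curve = np.array([activity[which == b].mean() if np.any(which == b)
                      else np.nan for b in range(n_bins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, curve

# Example: a simulated unimodal cell peaking ~15 mm from the wall.
d = np.random.uniform(0, 40, 5000)
a = np.exp(-((d - 15.0) ** 2) / 50.0) + 0.1 * np.random.rand(d.size)
centers, curve = distance_tuning_curve(d, a)
print(centers[np.nanargmax(curve)])
```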
Article
Full-text available
We determined how learning modifies neural representations in primary visual cortex (V1) during acquisition of a visually guided behavioral task. We imaged the activity of the same layer 2/3 neuronal populations as mice learned to discriminate two visual patterns while running through a virtual corridor, where one pattern was rewarded. Improvements in behavioral performance were closely associated with increasingly distinguishable population-level representations of task-relevant stimuli, as a result of stabilization of existing and recruitment of new neurons selective for these stimuli. These effects correlated with the appearance of multiple task-dependent signals during learning: those that increased neuronal selectivity across the population when expert animals engaged in the task, and those reflecting anticipation or behavioral choices specifically in neuronal subsets preferring the rewarded stimulus. Therefore, learning engages diverse mechanisms that modify sensory and non-sensory representations in V1 to adjust its processing to task requirements and the behavioral relevance of visual stimuli.
Article
Full-text available
The impact of social stimuli on the membrane potential dynamics of barrel cortex neurons is unknown. We obtained in vivo whole-cell recordings in the barrel cortex of head-restrained rats while they interacted with conspecifics. Social touch was associated with a depolarization and large membrane potential fluctuations locked to the rat's whisking. Both depolarization and membrane potential fluctuations were already observed prior to contact and did not occur during free whisking. This anticipatory pre-contact depolarization was not seen in passive social touch in anesthetized animals. The membrane potential fluctuations locked to the rat's whisking observed in interactions with awake conspecifics were larger than those seen for whisking onto nonconspecific stimuli (stuffed rats, objects, and the experimenter's hand). Responses did not correlate with whisker movement parameters. We conclude that responses to social touch differ from conventional tactile responses in (1) amplitude, (2) locking to whisking, and (3) pre-contact membrane potential changes.
Article
Hippocampal neurons show selectivity with respect to visual cues in primates, including humans, but this has never been found in rodents. To address this long-standing discrepancy, we measured hippocampal activity from rodents during real-world random foraging. Surprisingly, ∼25% of neurons exhibited significant directional modulation with respect to visual cues. To dissociate the contributions of visual and vestibular cues, we made similar measurements in virtual reality, in which only visual cues were informative. Here, we found significant directional modulation despite the severe loss of vestibular information, challenging prevailing theories of directionality. Changes in the amount of angular information in visual cues induced corresponding changes in head-directional modulation at the neuronal and population levels. Thus, visual cues are sufficient for, and play a predictable, causal role in, generating directionally selective hippocampal responses. These results dissociate hippocampal directional and spatial selectivity and bridge the gap between primate and rodent studies.
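Directional modulation of this kind is commonly summarized by the mean resultant vector length of the tuning curve, a standard circular-statistics measure. The sketch below shows that generic computation for illustration; it is not taken from the paper.

```python
import numpy as np

def resultant_vector_length(angles, rates):
    """Mean resultant vector length of a directional tuning curve.

    angles : tuning-curve bin centers (radians)
    rates  : mean firing rate per bin
    Returns a value in [0, 1]: 0 = untuned, 1 = perfectly directional.
    """
    angles = np.asarray(angles)
    rates = np.asarray(rates)
    x = np.sum(rates * np.cos(angles)) / np.sum(rates)
    y = np.sum(rates * np.sin(angles)) / np.sum(rates)
    return np.hypot(x, y)

# Example: a cosine-tuned cell vs. a flat (untuned) cell.
th = np.linspace(0, 2 * np.pi, 36, endpoint=False)
print(resultant_vector_length(th, 1 + np.cos(th)))    # ~0.5
print(resultant_vector_length(th, np.ones_like(th)))  # ~0.0
```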
Article
To investigate synaptic events underlying sensory perception, we made whole-cell membrane potential recordings of barrel cortex neurons in awake mice while recording whisker-related behavior. During quiet periods, we recorded slow, large-amplitude membrane potential changes, which switched during whisking to small, fast fluctuations that were correlated with whisker position. Robust subthreshold responses were evoked by passive whisker stimulation during quiet behavior and by active whisker contact with an object.
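Locking of membrane potential to whisker position, as described above, is typically assessed with a lag-resolved correlation between the two traces. Below is a minimal, generic sketch of such an analysis; the normalization, lag window, and sampling rate are assumptions, not the study's methods.

```python
import numpy as np

def vm_whisker_correlation(vm, whisker_angle, fs, max_lag_s=0.1):
    """Correlation between membrane potential and whisker angle
    over a range of time lags.

    vm, whisker_angle : equal-length traces sampled at fs (Hz)
    max_lag_s         : half-width of the lag window (s)
    Returns (lags in seconds, correlation at each lag).
    """
    vm = (np.asarray(vm) - np.mean(vm)) / np.std(vm)
    wa = (np.asarray(whisker_angle) - np.mean(whisker_angle)) / np.std(whisker_angle)
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    # Circular shift; acceptable when the trace is much longer than the lags.
    r = np.array([np.mean(np.roll(wa, lag) * vm) for lag in lags])
    return lags / fs, r

# Example: Vm lags a 10 Hz whisking rhythm by 10 ms.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
wa = np.sin(2 * np.pi * 10 * t)
vm = np.sin(2 * np.pi * 10 * (t - 0.01)) + 0.2 * np.random.randn(t.size)
lags, r = vm_whisker_correlation(vm, wa, fs)
print(lags[np.argmax(r)])  # peak near +0.01 s
```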
Article
In the mammalian brain, sensory cortices exhibit plasticity during task learning, but how this alters information transferred between connected cortical areas remains unknown. We found that divergent subpopulations of cortico-cortical neurons in mouse whisker primary somatosensory cortex (S1) undergo functional changes reflecting learned behavior. We chronically imaged activity of S1 neurons projecting to secondary somatosensory (S2) or primary motor (M1) cortex in mice learning a texture discrimination task. Mice adopted an active whisking strategy that enhanced texture-related whisker kinematics, correlating with task performance. M1-projecting neurons reliably encoded basic kinematic features, and an additional subset of touch-related neurons was recruited that persisted past training. The number of S2-projecting touch neurons remained constant, but these neurons improved their discrimination of trial types through reorganization while developing activity patterns capable of discriminating the animal's decision. We propose that learning-related changes in S1 enhance sensory representations in a pathway-specific manner, providing downstream areas with task-relevant information for behavior.
Article
The neural correlates of optimal states for signal detection task performance are largely unknown. One hypothesis holds that optimal states exhibit tonically depolarized cortical neurons with enhanced spiking activity, such as occur during movement. We recorded membrane potentials of auditory cortical neurons in mice trained on a challenging tone-in-noise detection task while assessing arousal with simultaneous pupillometry and hippocampal recordings. Arousal measures accurately predicted multiple modes of membrane potential activity, including rhythmic slow oscillations at low arousal, stable hyperpolarization at intermediate arousal, and depolarization during phasic or tonic periods of hyper-arousal. Walking always occurred during hyper-arousal. Optimal signal detection behavior and sound-evoked responses, at both sub-threshold and spiking levels, occurred at intermediate arousal when pre-decision membrane potentials were stably hyperpolarized. These results reveal a cortical physiological signature of the classically observed inverted-U relationship between task performance and arousal, and show that optimal detection exhibits enhanced sensory-evoked responses and reduced background synaptic activity.
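The inverted-U relationship noted above can be summarized by regressing performance on an arousal index (e.g., pupil diameter) with a quadratic term; negative curvature with an interior peak is the signature. A generic sketch of that assumed analysis follows; it is not the study's code.

```python
import numpy as np

def inverted_u_fit(pupil, performance):
    """Quadratic fit of detection performance vs. pupil diameter.

    Returns (quadratic coefficient, arousal level at the peak).
    A negative coefficient with an interior peak indicates an
    inverted-U relationship."""
    a, b, c = np.polyfit(pupil, performance, deg=2)
    optimum = -b / (2 * a) if a < 0 else None
    return a, optimum

# Example: simulated performance peaking at intermediate arousal.
p = np.random.uniform(0, 1, 500)
perf = 0.9 - 2.0 * (p - 0.5) ** 2 + 0.05 * np.random.randn(p.size)
print(inverted_u_fit(p, perf))  # coefficient ~ -2, optimum ~ 0.5
```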
Article
A fundamental issue in cortical processing of sensory information is whether top-down control circuits from higher brain areas to primary sensory areas not only modulate but actively engage in perception. Here, we report the identification of a neural circuit for top-down control in the mouse somatosensory system. The circuit consisted of a long-range reciprocal projection between M2 secondary motor cortex and S1 primary somatosensory cortex. In vivo physiological recordings revealed that sensory stimulation induced sequential S1 to M2 followed by M2 to S1 neural activity. The top-down projection from M2 to S1 initiated dendritic spikes and persistent firing of S1 layer 5 (L5) neurons. Optogenetic inhibition of M2 input to S1 decreased L5 firing and the accurate perception of tactile surfaces. These findings demonstrate that recurrent input to sensory areas is essential for accurate perception and provide a physiological model for one type of top-down control circuit.
Article
Spontaneous and sensory-evoked cortical activity is highly state-dependent, yet relatively little is known about transitions between distinct waking states. Patterns of activity in mouse V1 differ dramatically between quiescence and locomotion, but this difference could be explained by either motor feedback or a change in arousal levels. We recorded single cells and local field potentials from area V1 in mice head-fixed on a running wheel and monitored pupil diameter to assay arousal. Using naturally occurring and induced state transitions, we dissociated arousal and locomotion effects in V1. Arousal suppressed spontaneous firing and strongly altered the temporal patterning of population activity. Moreover, heightened arousal increased the signal-to-noise ratio of visual responses and reduced noise correlations. In contrast, increased firing in anticipation of and during movement was attributable to locomotion effects. Our findings suggest complementary roles of arousal and locomotion in promoting functional flexibility in cortical circuits.