Innovative Methodology
CALL FOR PAPERS: Active Sensing

Air-Track: a real-world floating environment for active sensing in head-fixed mice

Mostafa A. Nashaat,1,2 Hatem Oraby,1 Robert N. S. Sachdev,1 York Winter,1 and Matthew E. Larkum1

1Neurocure Cluster of Excellence, Humboldt-Universität zu Berlin, Berlin, Germany; and 2Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
Submitted 1 February 2016; accepted in final form 1 July 2016
Nashaat MA, Oraby H, Sachdev RN, Winter Y, Larkum ME.
Air-Track: a real-world floating environment for active sensing in
head-fixed mice. J Neurophysiol 116: 1542–1553, 2016. First published July 13, 2016; doi:10.1152/jn.00088.2016.—Natural behavior
occurs in multiple sensory and motor modalities and in particular is
dependent on sensory feedback that constantly adjusts behavior. To
investigate the underlying neuronal correlates of natural behavior, it is
useful to have access to state-of-the-art recording equipment (e.g.,
2-photon imaging, patch recordings, etc.) that frequently requires head
fixation. This limitation has been addressed with various approaches
such as virtual reality/air ball or treadmill systems. However, achiev-
ing multimodal realistic behavior in these systems can be challenging.
These systems are often also complex and expensive to implement.
Here we present “Air-Track,” an easy-to-build head-fixed behavioral
environment that requires only minimal computational processing.
The Air-Track is a lightweight physical maze floating on an air table
that has all the properties of the “real” world, including multiple
sensory modalities tightly coupled to motor actions. To test this
system, we trained mice in Go/No-Go and two-alternative forced
choice tasks in a plus maze. Mice chose lanes and discriminated
apertures or textures by moving the Air-Track back and forth and
rotating it around themselves. Mice rapidly adapted to moving the
track and used visual, auditory, and tactile cues to guide them in
performing the tasks. A custom-controlled camera system monitored
animal location and generated data that could be used to calculate
reaction times in the visual and somatosensory discrimination tasks.
We conclude that the Air-Track system is ideal for eliciting natural
behavior in concert with virtually any system for monitoring or
manipulating brain activity.
virtual reality; psychophysics; multisensory perception; head-fixed
behavior
NEW & NOTEWORTHY
This paper provides a description of a multimodal head-
fixed behavioral environment for rodents based on an
air-lifted platform design. The environment is compact,
cheap to build, and can be implemented on most recording
setups. The design allows for complex behavioral para-
digms to be combined with modern, sophisticated recording techniques such as two-photon imaging. This system
provides an alternative to virtual reality-based head-fixed
systems.
NATURAL BEHAVIORS ARE COMPLEX and occur in multiple sensory-
motor dimensions simultaneously. For example, a conversation
between people or even a simple stroll down a corridor engages
visual, auditory, tactile, kinesthetic, proprioceptive, and olfac-
tory senses. Accurate perception of these cues requires tight
and reliable coupling to the motor system (Kleinfeld et al.
2006; Zagha et al. 2013). Mismatches between actions and
sensory feedback profoundly disturb natural behavior (e.g.,
missing the “last step” while climbing stairs) (Keller et al.
2012). While natural behavior is usually performed during
multimodal sensation/perception, the behaviors typically stud-
ied in laboratory settings are impoverished and often con-
strained to a single sensory or motor modality (Crochet and
Petersen 2006). Advances in our understanding of the importance of natural feedback-controlled behavior have triggered a gradual but fundamental shift toward multimodal, multidimensional behavioral approaches.
Rodents are an ideal choice for neuroscientists interested in
precise and invasive recording methodologies, since they are
relatively easy to train on simple tasks. Many of the experi-
mental methods, however, require the head of the animal to
remain stationary during the experiment. The early head-fixed
rodent behavioral approaches were therefore based on reduced
systems with only a single sensory modality, such as whisker
movement (Bermejo et al. 1996; Crochet and Petersen 2006;
Hentschke et al. 2006; Krupa et al. 2004; Sachdev et al. 2002;
Welsh et al. 1995). The shift towards more complex natural-
istic multimodal behavior began with awake mice head fixed
atop an air ball or a treadmill (Dombeck et al. 2009; Harvey et
al. 2009; Poort et al. 2015). Importantly, various studies have
reported that cortical responses are different when the animal is
engaged in multidimensional behavioral tasks (Dombeck et al.
2010; Harvey et al. 2012; Lenschow and Brecht 2015; Musall
et al. 2014; Poort et al. 2015; Sofroniew et al. 2015). Just the
act of walking on a treadmill changes visual and auditory
responses (McGinley et al. 2015; Niell and Stryker 2010;
Polack et al. 2013; Reimer et al. 2014; Saleem et al. 2013;
Schneider et al. 2014; Sofroniew et al. 2015).
One advantage of the air ball and treadmill methods is that the movement of the mouse can be tracked and used to control a virtual environment (Harvey et al. 2009; Holscher et al. 2005).

Address for reprint requests and other correspondence: M. E. Larkum, Humboldt Univ., Charitéplatz 1, Virchowweg 6, Berlin, 10117, Germany (e-mail: matthew.larkum@gmail.com).

For accurate correspondence between the mouse's
movement and the virtual world, this environment is best
represented using visual information usually in the form of 2D
monitors in the visual field of the head-fixed animal. More
recently, an equivalent somatosensory approach has been im-
plemented (Sofroniew et al. 2014). This seemingly simple change in modality from visual to somatosensory has a major advantage: it creates a more natural tactile representation than virtual reality. However, it brings various problems in accurately matching the virtual representation to ordinary real-world situations, such as the representation of corners and optical/tactile flow. To be effective, virtual environments also
require sophisticated software for tracking the animal and
mapping these movements to the virtual world. This presents
further difficulties in estimating the perceptual experience of the
animal, which is not necessarily intuitive for humans designing
the mapping interface. For instance, place tuning of hippocam-
pal neurons in rodents is different when they are in a virtual
world as opposed to a real world (Acharya et al. 2016; Aghajan
et al. 2015). In practice, the qualitative perceptual experience
of rodents in virtual reality systems is probably impossible to
match with actual real-world experiences and, in any case, it is
impossible to definitively demonstrate a correspondence be-
tween the subjective experience of the animal and the real
world.
Here, we present an alternative behavioral system as one
solution to the problems of virtual reality approaches while
retaining all the benefits of head-fixed experiments. Our system
uses a real physical environment that rests on a cushion of air
and moves around the animal’s body under the direct control of
the animal itself. The system, “Air-Track,” is based on the
airlifted flat platform described by Kislin and colleagues (Kis-
lin et al. 2014). We successfully used Air-Track to train
head-fixed mice to perform a spatial orientation and multi-
modal discrimination task within only two weeks.
METHODS
Experiments were performed with approval of the local state authority in Berlin (LAGeSo), which is advised by the animal use ethics committee.
Air-Track components. The Air-Track consisted of three essential custom-made components: 1) an air table that provided the air cushion; 2) a platform constructed of lightweight material for floating on air that included the custom-designed maze with walls (Fig. 1, A and B); and 3) a microcontroller system for tracking the position of the platform and controlling reward delivery (Figs. 1C and 2).
The air table. Our solution used a transparent Plexiglas box, 20 × 24 × 3 cm, mounted on aluminum legs, forming a small table. The table had one intake port that was pressurized with air at 300 kPa (~45 psi) and small (1-mm) holes, spaced 8 mm apart, providing jets of pressurized air. The working surface of the table was 16 × 20 cm.
The platform. The circular platform (15 cm in diameter) was 3D
printed. The base of the platform was 3 mm thick, and this was
sufficient to hold the platform floating steadily on a cushion of air. The
platform attached atop the base was shaped as a plus maze, with four
lanes, each 5 cm long, 3 cm wide, and 3.5 cm in height, and weighed
180 grams. In our design, the walls and terminal aperture of each lane could be modified: the textures were either smooth or etched with gratings of 2-mm spatial period, and the apertures were either 1 or 2 cm wide.

Fig. 1. Air-Track setup enables closed-loop monitoring of behavior. Schematic of the Air-Track design in side view (A) and top view (B). The plus maze sits atop the Plexiglas air table; the light-emitting diode (LED) indicating correct lanes and the actuator positioning the lick spouts (black and red) are on the left, and the head-post attachment is in gray on the right. C: schematic of the Pixy camera and the red/green color tag under the Air-Track platform used for position and orientation tracking. The camera has a 20-ms sampling resolution and uses color information from the maze tag to determine the position of the animal in the maze.

Just
above the platform, and outside the rotary axis of the maze, a white
light-emitting diode (LED) attached to a holder was used as a visual
stimulus for choosing lanes. The LED was held constantly on or off based on the animal's location relative to the maze. Similarly, a linear actuator used for positioning the lick spouts was placed ~3 cm from the nose of the animal. The linear actuator
advanced the dual lick spouts, connected to capacitive sensors, to the
animal. When the animal arrived at the reward location in the maze,
it could either passively obtain a reward irrespective of spout choice
or perform a two-alternative forced choice (2AFC) between the two
lick spouts.
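A minimal Arduino-style sketch of this lick-detection and reward logic is shown below, assuming the Adafruit MPR121 library for the capacitive sensors; the pin numbers and electrode assignments are hypothetical, and the 150-ms valve opening uses the value given later in this section (the authors' published code, linked in the DISCUSSION, is the authoritative implementation):

```cpp
// Sketch of first-lick detection and reward delivery; pins and electrode
// assignments are hypothetical.
#include <Wire.h>
#include <Adafruit_MPR121.h>

Adafruit_MPR121 cap = Adafruit_MPR121();

const int SOLENOID_LEFT  = 6;   // pinch-valve driver pins (hypothetical)
const int SOLENOID_RIGHT = 7;
const uint8_t PAD_LEFT  = 0;    // MPR121 electrodes wired to the two spouts
const uint8_t PAD_RIGHT = 1;

void setup() {
  pinMode(SOLENOID_LEFT, OUTPUT);
  pinMode(SOLENOID_RIGHT, OUTPUT);
  cap.begin(0x5A);              // default MPR121 I2C address
}

// Returns -1 (no lick), 0 (left), or 1 (right); the first touch wins.
int firstLick() {
  uint16_t touched = cap.touched();
  if (touched & (1 << PAD_LEFT))  return 0;
  if (touched & (1 << PAD_RIGHT)) return 1;
  return -1;
}

// Open one pinch valve briefly to release the sugar-water reward.
void deliverReward(int side) {
  int pin = (side == 0) ? SOLENOID_LEFT : SOLENOID_RIGHT;
  digitalWrite(pin, HIGH);
  delay(150);                   // 150-ms opening (value from METHODS)
  digitalWrite(pin, LOW);
}

void loop() {
  int lick = firstLick();
  if (lick >= 0) deliverReward(lick);  // passive mode: either spout rewards
}
```

In the 2AFC configuration, the same `firstLick()` result would instead be compared against the rewarded side before `deliverReward()` is called.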
Monitoring system. We chose an off-the-shelf approach to video
tracking using a Pixy camera (CMUcam5 Image Sensor) placed below
the air table where it could detect the rotation and position of a
two-color mark glued to the bottom surface of the plus maze (Fig.
1C). This camera is unique in its ability to track colors: it processes color information on board in real time and reports the motion of colored objects in x, y, and tilt angle at 50 frames/s.

Fig. 2. Schematic of the Air-Track circuit. Top left: an Arduino-Uno connected through digital outputs to a sound-cue buzzer, an LED, and an H-bridge (L293D) controlling the linear reward actuator and two solenoids driven by a 12-V direct current (DC) power supply. The Pixy camera was connected to the Arduino via an in-circuit serial programming port. The lick detectors (MPR121 module) were connected to the Arduino via two analog inputs and one digital input and powered with 3.3 V.
position and orientation of the mouse within the maze was updated
every 20 ms. The output was streamed to an Arduino-Uno microcon-
troller that processed Pixy camera inputs for animal location and also
controlled the LED, the actuator that positioned the lick spouts, and the
reward delivery solenoids (Fig. 2). As the mouse rotated the maze, the camera detected its position and set the LED to on or off based on the trial offset. The LED status for each lane was defined by a range of angles (i.e., a quarter-circle sector) compared against the real-time data from the camera. Within a given trial, the LED was turned off only while the mouse faced the rewarding lane (Figs. 1C and 3).
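The angle-to-lane mapping can be expressed compactly in Arduino code. The sketch below is an illustration assuming the standard Pixy (CMUcam5) Arduino library and one quarter-circle sector per lane; the LED pin and the angle convention are assumptions, not the published code:

```cpp
#include <SPI.h>
#include <Pixy.h>   // Charmed Labs CMUcam5 Arduino library

Pixy pixy;
const int LED_PIN = 8;    // lane-cue LED (hypothetical pin)
int rewardedLane = 0;     // lane 0..3, reassigned at the start of each trial

// Quarter-circle sectors: lane k spans [k*90 - 45, k*90 + 45) degrees.
int laneFromAngle(int angleDeg) {
  int a = ((angleDeg % 360) + 360) % 360;  // normalize to 0..359
  return ((a + 45) / 90) % 4;              // 0..3
}

void setup() {
  pinMode(LED_PIN, OUTPUT);
  pixy.init();
}

void loop() {
  uint16_t n = pixy.getBlocks();           // new data ~every 20 ms
  if (n > 0) {
    // Color codes report an orientation angle in addition to x and y.
    int lane = laneFromAngle(pixy.blocks[0].angle);
    // The LED is off only while the mouse faces the rewarded lane.
    digitalWrite(LED_PIN, lane == rewardedLane ? LOW : HIGH);
  }
}
```

With this scheme, reassigning `rewardedLane` at the start of each trial is all that is needed to retarget the visual cue.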
Surgery. In preparation for head fixation, adult 45-day-old mice
(C57BL/6) were anesthetized with ketamine-xylazine (90:10 mg/kg).
A lightweight aluminum head post was attached using dental cement
or Rely-X (3M ESPE, London, Ontario, Canada) (Andermann et al.
2013). Animals were monitored during recovery and were given
antibiotic (enrofloxacin) and analgesics (buprenorphine and carprofen)
during the recovery period.
Fig. 3. Flow chart of behavioral decisions in the Air-Track system. The Pixy camera tracks the color codes underneath the platform to report mouse position relative to the platform. This information in turn controls the assignment of a new lane, the movement of the linear reward actuator, the timing of visual and auditory cues, and reward delivery.
Training paradigm. One week after surgery, mice were acclimatized
to the Air-Track in incremental 20- to 120-min-long sessions (Fig. 4).
They were placed in the plus maze, head-fixed, and periodically given 1
ml/day of sugar-water (10% condensed milk with original sucrose con-
centration 45%) as a reward (Schwarz et al. 2010).
In the first one to two sessions (days) of training, we found it
helpful if the experimenter was actively engaged with the platform
and the animal by gently nudging the platform, getting the animal
habituated to the motion of the maze. During this active exploration
phase, mice learned to rotate the maze and to orient in the lanes going
forward and backward. In these sessions, a reward was given auto-
matically when the animal had reached the end of any lane irrespec-
tive of any task parameters.
In the third to fifth sessions, mice were acclimatized enough to
propel the maze, and rotate it, without any need for experimenter
intervention. During this active visual training phase, mice learned to
choose lanes based on the presence or absence of a white LED/visual
stimulus and collect the reward from a single lick of the reward spout.
By the sixth to eighth session, the training paradigm shifted to a
two-choice task, where visual stimuli still determined the correct lane,
but the reward was only delivered from one of two lick spouts. During
this stage of passive tactile training, mice were given rewards automatically, without initially licking, at one of the two lick spouts based on the texture of the wall, so that they passively learned to associate each texture with one (left or right) of the two lick spouts.
From this point onward, mice were trained for four to six sessions
to discriminate either wall texture or aperture width. In this active
tactile training phase, the mouse still had to choose a correct lane and
actively discriminate either between two types of aperture width or
wall textures. To obtain a reward, mice had to initiate licking at the correct lick spout, and the decision was determined by the first lick.
Behavioral training. Animals were water restricted (body weight stabilized at >85% of initial weight) and conditioned to orient within the
floating plus maze. The orientation of the lick spout was optimized for
each mouse, in each session, to ensure that the mouse was positioned at
an appropriate distance (~3 cm) from the lick spout. Animals were
trained in near-total darkness, with a white light source directed under the
nontransparent black platform for the Pixy camera. The light was suffi-
cient for the Pixy camera beneath the platform to track the position but
was not visible above the platform [Supplemental Video 1 (Supplemental
data for this article is available on the journal website.)].
Each complete trial can be divided into four temporally distinct
stages. The trial began with the mouse at the center of the plus maze
(Fig. 5, Aand B). In this phase of the trial, the mouse had to pay
attention to the LED status to choose the correct lane. To find the
correct lane, mice rotated the maze, clockwise or counterclockwise.
Over days, mice developed an individual preference for the direction
of platform rotation (Supplemental Video 2).
The second stage was the choice of the correct lane by the mouse.
This choice was reflected in the speed of rotation and in the position-
ing of the head of the mouse with respect to the walls of the lane
(Supplemental Video 3).
The third stage was the entry into the lane. As the mouse moved forward, its whiskers touched the lane walls and then the terminal aperture as it approached the end of the lane. At this stage, the linear actuator moved the two lick spouts in front of the mouse.
The fourth stage was the decision to lick the left or right spout.
Animals were conditioned to lick right or left based on the type of tactile
cue presented. In the case of aperture width, mice were trained to lick right on experiencing the wide aperture (2 cm) and left on experiencing the narrow aperture (1 cm). In the case of texture discrimination, mice were trained to lick right on experiencing a rough texture and left on experiencing a smooth texture (Supplemental Video 4). After licking the spout and obtaining a reward, mice moved backward in the lane, arriving at the center of the maze, where a new trial could begin.

Fig. 4. Training paradigm. Duration of experimental phases (head fixation, habituation, and behavioral training) over 2 wk. During active exploration (pale green, sessions 1 and 2), the experimenter supervised the mouse while it moved the plus maze and collected rewards at each lane. During active visual training (red, sessions 3 and 4), 6 mice were trained to discriminate a visual cue until performance reached >70%. During passive tactile training (violet, 1-3 sessions), mice were trained to navigate dark lanes and obtain rewards from a lick spout. Three mice were trained to obtain reward corresponding to lane aperture width (1 session) in a two-choice task, whereas the other 3 mice obtained reward based on lane texture (3 sessions) in a delayed two-choice task. Finally, training advanced to combined active visual and active tactile training (dark green, 4-6 days), with mice trained on a sequence of visual cue discrimination (Go/No-Go) followed by tactile discrimination of either aperture width or wall texture (2AFC) within the time line of one trial.
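The four trial stages described above map naturally onto a small state machine in the microcontroller's main loop. The following sketch is a schematic rendering with placeholder stubs standing in for the Pixy- and lick-sensor-derived tests; it illustrates the trial structure rather than reproducing the authors' code:

```cpp
// Four trial stages as a state machine; the predicates are placeholder
// stubs for tests on the Pixy-derived position and the lick sensors.
enum TrialStage { CENTER, LANE_CHOICE, IN_LANE, DECISION };
TrialStage stage = CENTER;

bool correctLaneFaced() { /* Pixy angle in rewarded sector */ return false; }
bool laneEntered()      { /* tag crossed the lane boundary */ return false; }
bool endOfLaneReached() { /* tag at the end-of-lane coordinate */ return false; }
int  firstLick()        { /* -1 none, 0 left, 1 right */ return -1; }

void stepTrial() {
  switch (stage) {
    case CENTER:       // stage 1: mouse rotates maze, watches the LED
      if (correctLaneFaced()) stage = LANE_CHOICE;
      break;
    case LANE_CHOICE:  // stage 2: mouse commits to a lane
      if (laneEntered()) stage = IN_LANE;
      break;
    case IN_LANE:      // stage 3: whiskers sample walls and aperture
      if (endOfLaneReached()) stage = DECISION;  // actuator advances spouts
      break;
    case DECISION:     // stage 4: first lick reports the choice
      if (firstLick() >= 0) stage = CENTER;      // reward or buzzer; new trial
      break;
  }
}

void setup() {}
void loop() { stepTrial(); }
```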
Bias correction. Mice performing the 2AFC discrimination task
typically developed a bias where they preferred one spout location
over the other. To eliminate such a bias, our behavioral control
software switched to a forced mode whenever mice licked the same
spout for five consecutive trials (Guo et al. 2014). In this forced mode, the software dictated lane selection: only lanes with tactile cues related to the animal's nonpreferred spout were presented. The mouse stayed in forced mode until it licked the correct spout for three consecutive trials, after which forced mode was terminated and the animal was switched back to randomized uniform selection of lanes.
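A minimal sketch of this bias-correction rule, using the 5-trial and 3-trial thresholds from the text; the lane-for-spout lookup is hypothetical:

```cpp
int lastSide = -1;               // -1 none, 0 left, 1 right
int sameSideStreak = 0;          // consecutive licks on one spout
int forcedCorrectStreak = 0;     // consecutive correct licks in forced mode
bool forcedMode = false;

// Hypothetical lookup: a lane whose tactile cue maps to the given spout.
int laneForSpout(int spout) { return (spout == 0) ? 0 : 2; }

// Call once per completed trial with the licked side and its correctness.
void updateBias(int side, bool correct) {
  if (!forcedMode) {
    sameSideStreak = (side == lastSide) ? sameSideStreak + 1 : 1;
    lastSide = side;
    if (sameSideStreak >= 5) {       // persistent spout bias detected
      forcedMode = true;
      forcedCorrectStreak = 0;
    }
  } else {
    forcedCorrectStreak = correct ? forcedCorrectStreak + 1 : 0;
    if (forcedCorrectStreak >= 3) {  // bias broken
      forcedMode = false;
      sameSideStreak = 0;
    }
  }
}

// Lane selection: forced mode restricts lanes to the nonpreferred spout.
int pickLane() {
  if (forcedMode) return laneForSpout(1 - lastSide);
  return random(4);                  // randomized uniform selection
}
```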
Fig. 5. Task design and metrics in the course of the visual and somatosensory tasks. A and B: schematic of task design for the aperture width and wall texture discrimination tasks, respectively, showing that the mouse first rotates the plus maze (1) and watches the LED light cue, which turns off (2) when the mouse reaches the correct lane and enters it (3). The final step is the discrimination of the aperture (green) or textures, with the decision reported at the end of the lane, where the mouse licks one of the two spouts (4). In the case of texture discrimination, a 1.5-s delay is imposed between the time the mouse reaches the end of the lane and the time it may lick a spout. C: histogram of intertrial intervals in 6 sessions of active training for a single mouse. Because the task was designed to be self-initiated, there was no fixed intertrial interval; mice could wait 30 s or more before initiating a new trial. D: average time spent by a single mouse in each lane from entering the lane to making a decision. The mouse spent ~2 s in each of the four lanes before obtaining reward. E: time spent by a single mouse rotating to reach the correct lane. Time spent rotating increased with the number of travelled lanes. Note that mice can traverse clockwise or counterclockwise and, as they traversed additional lanes, could take more than 7 s to find the correct lane. F: average durations of behavioral events for 3 mice. Trials are sorted according to the number of lanes mice rotated around themselves [blue, one lane (n = 1,550 trials); red, two lanes (n = 361 trials); green, three lanes (n = 350 trials); and purple, four lanes (n = 20 trials)]. The time points picked for these analyses are 1) rotation time, which increased with the number of lanes mice rotated; 2) visual reaction time to enter a correct lane, which started when the light turned off and ended when the mouse crossed the lane boundary; 3) time spent inside the lane, which began when the mouse entered the lane and ended when the mouse licked the reward spout; 4) tactile reaction time to lick the right or left spout; and 5) intertrial interval between trials, determined by the motivation of the mouse. When mice spent more time rotating past additional lanes, they spent less time before starting a new trial.
Setup electronics and software design. The setup consisted of 1) an Arduino-Uno microcontroller (www.arduino.cc) that controlled and collected data from the Air-Track system; 2) a Pixy (CMUcam5) camera, designed by Carnegie Mellon University and Charmed Labs (Austin, TX), used to track the platform location and orientation; 3) a 50-mm linear actuator with position feedback (model L16 50N 35:1 12V; Firgelli Technologies) used to advance the lick spouts to the mouse; 4) a capacitive touch sensor module (MPR121; Freescale Semiconductors) used to detect licking; 5) an active buzzer module (KY-012; KEYES DIY) used for false-alarm cues in the case of incorrect licks; 6) two solenoid pinch valves (2-way normally closed; Bio-Chem Fluidics) used to release sugar water from reward tubes; and 7) a serial connection that sent data from the Arduino to a Python application running on a personal computer for logging and analysis (Fig. 2).
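To make the parts list concrete, the following wiring-and-initialization sketch pulls the listed components together; all pin assignments are hypothetical, and the published hardware/software description at the project website is the authoritative reference:

```cpp
#include <Wire.h>
#include <SPI.h>
#include <Pixy.h>              // CMUcam5, connected via the ICSP/SPI header
#include <Adafruit_MPR121.h>   // capacitive lick sensors over I2C

Pixy pixy;
Adafruit_MPR121 licks;

const int BUZZER_PIN    = 4;   // KY-012 active buzzer for false alarms
const int LED_PIN       = 8;   // lane-cue LED
const int ACT_EXTEND    = 9;   // L293D H-bridge inputs: extend actuator
const int ACT_RETRACT   = 10;  //                        retract actuator
const int ACT_FEEDBACK  = A0;  // actuator position feedback (analog)
const int SOL_LEFT_PIN  = 6;   // solenoid pinch valves for sugar water
const int SOL_RIGHT_PIN = 7;

void setup() {
  Serial.begin(115200);        // stream events to the host-side logger
  pixy.init();
  licks.begin(0x5A);
  pinMode(BUZZER_PIN, OUTPUT);
  pinMode(LED_PIN, OUTPUT);
  pinMode(ACT_EXTEND, OUTPUT);
  pinMode(ACT_RETRACT, OUTPUT);
  pinMode(SOL_LEFT_PIN, OUTPUT);
  pinMode(SOL_RIGHT_PIN, OUTPUT);
}

void loop() {
  // Control logic (tracking, cues, reward) is omitted in this wiring sketch.
}
```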
Arduino code configurations. 1) Each of the four lanes of the maze was defined by its angular orientation, as determined from the angle of the rectangular red and green color label glued underneath the animal platform; 2) the beginning and end of each lane were defined by two fixed coordinates used to determine whether the head of the mouse was entering the lane or had reached its end; and 3) by associating particular platform orientations with a particular capacitive sensor and reward solenoid, the right and left lick spouts were defined in the code so that a reward spout was assigned for each lane (Fig. 3).
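Configuration 2 reduces to threshold tests on the tag's coordinate along the lane axis. A sketch, with hypothetical pixel values standing in for the two fixed coordinates:

```cpp
// Classify the mouse's position from the color tag's coordinate along
// the lane axis; the two thresholds are hypothetical pixel values.
const int LANE_BOUNDARY = 120;  // crossing this means entering a lane
const int LANE_END      = 60;   // reaching this means the end of the lane

enum Position { AT_CENTER, INSIDE_LANE, AT_LANE_END };

Position classify(int coord) {
  if (coord <= LANE_END)      return AT_LANE_END;  // advance lick spouts
  if (coord <= LANE_BOUNDARY) return INSIDE_LANE;  // trial in progress
  return AT_CENTER;                                // ready for a new trial
}
```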
Behavioral measures from Arduino output and data analysis. An
Arduino microcontroller polled inputs from the Pixy camera and
licking sensors while controlling solenoid valves, reward actuator,
LED, and piezo buzzer. The solenoid opening time was set to 150 ms, which generated a 10-µl reward. The decision time window was set to 3 s, from the time the reward actuator was maximally extended until its retraction if no lick event occurred. The maximum travel distance of the motor was 5 cm (adding flexibility to the hardware design); the travel distance used in the experiment was 3 cm and took 1.5 s on average. Reward access time, the time from the onset of licking the correct sensor until the retraction of the lick spouts, was 2.5 s. As a false-alarm cue, a buzzing sound was delivered for 1 s after the mouse licked the wrong spout. Using values collected from the Pixy
camera about platform movement, we set fixed values in our code to report the animal's location relative to the platform (i.e., lane range, lane boundary, and end of the lane). Data were collected
about the mouse’s location, lane choice, and licking decision from the
Arduino serial port. The following definitions were used: trial, began when the mouse exited a lane (crossing a lane boundary) and ended when the mouse licked a spout or withdrew from the lane; intertrial interval, the time between the end of one trial, when the mouse reported a decision by licking a spout, and the moment it moved out of the lane, i.e., crossed the lane boundary; correct visual trial, the mouse chose to enter the dark lane (LED off), counted whether or not the animal performed the somatosensory task; false visual trial, the mouse chose to enter the lit lane (LED on), even if it
withdrew instantly afterward; correct somatosensory trial, mouse in
correct lane (LED off), licked the correct (rewarding) spout deter-
mined by lane aperture width or texture; wrong somatosensory trial,
mouse in correct lane (LED off), licked the wrong (nonrewarding)
spout; miss somatosensory trial, mouse in correct lane (LED off), did
not lick inside the decision time window (3 s). A training session was terminated when the animal exhibited signs of satiation, stopped performing, or showed a sharp decrease in behavioral performance over several trials (these data were excluded from analysis).
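These definitions translate directly into an outcome classifier and a one-line-per-trial serial log for the Python application. The field names and message format below are assumptions for illustration:

```cpp
// Trial outcome classification and serial logging per the definitions above.
enum Outcome { FALSE_VISUAL, CORRECT_SOMATO, WRONG_SOMATO, MISS_SOMATO };

const unsigned long DECISION_WINDOW_MS = 3000;  // 3-s decision window

// A lick counts only if it occurs within 3 s of full spout extension.
bool withinWindow(unsigned long lickMs, unsigned long spoutsExtendedMs) {
  return (lickMs - spoutsExtendedMs) <= DECISION_WINDOW_MS;
}

Outcome classifyTrial(bool enteredDarkLane, bool lickedInWindow,
                      bool lickedCorrectSpout) {
  if (!enteredDarkLane) return FALSE_VISUAL;   // entered the lit lane
  if (!lickedInWindow)  return MISS_SOMATO;    // no lick within 3 s
  return lickedCorrectSpout ? CORRECT_SOMATO : WRONG_SOMATO;
}

// Emit one comma-separated line per trial for the host-side logger.
void logTrial(int lane, Outcome o, unsigned long startMs, unsigned long endMs) {
  Serial.print("TRIAL,");
  Serial.print(lane);    Serial.print(",");
  Serial.print((int)o);  Serial.print(",");
  Serial.print(startMs); Serial.print(",");
  Serial.println(endMs);
}
```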
RESULTS
We designed and built an Air-Track system with a flat
platform that floated on air, equivalent to most air-ball systems
(Fig. 1; see METHODS). A physical maze was 3D printed on a
platform (Fig. 1, Aand B). The position of the floating platform
was tracked with a video tracking system (Fig. 1C) and
logged via an Arduino-Uno microcontroller (Fig. 2) that also
controlled the reward and cues in a closed-loop system (Fig.
3). It took about 2 wk to complete the entire training para-
digm, training individual mice to reach >70% correct performance in the visual Go/No-Go task and >70% correct performance in a two-choice tactile task (either aperture width or texture discrimination) (Fig. 4).
For the purpose of testing the system, we chose a “plus”
maze with four lanes. We tested whether mice could navigate
the maze and perform visual and tactile discrimination in
Go/No-Go and 2AFC tasks. In this environment, mice were
trained to choose dark lanes (LED off) and to avoid lit lanes
(LED on). Once mice entered a lane, their whiskers invariably
touched the walls, following which the mice proceeded to the
end of the lane until they got a reward. Mice learned to
discriminate between different aperture widths (Fig. 5A) or
wall textures and decided between one of the two lick spouts
(Fig. 5).
To minimize stress for the animal, the task was self-initiated.
Trials began when the mouse entered the center of the plat-
form. Figure 5 shows typical sensory-motor behavior on the
Air-Track platform. Data from a single “typical” animal show
that the mean duration of the intertrial interval from >1,000 trials for one animal was 11.5 s (Fig. 5C). Mice typically spent equal amounts of time in the different arms; thus, the data indicate that mice have no preference for any specific arm (Fig. 5D). This presumably reflected the symmetrical design of the plus maze. The average time spent in each lane was ~2.2 s (Fig. 5D). During 1,000 trials with 250 trials in each lane, this mouse spent 2.2 ± 0.6 s (mean ± SD) in lane 1, 2.3 ± 1.2 s in lane 2, 2.2 ± 0.5 s in lane 3, and 2.1 ± 0.7 s in lane 4.
As a mouse rotated the maze, the different lanes moved past
it (Supplemental Videos 1–4). In each trial, the animal could move one lane or multiple lanes past itself, and each additional lane it rotated past required extra time (Fig. 5E). Note that while
there were only four lanes to choose from, animals sometimes
missed the correct lane, which added time to an individual trial.
This time interval varied considerably because the duration of
the various behavioral events within a trial (the time to pick a
correct lane, the time to make a correct choice to lick right or
left) could all vary (Fig. 5F, mean and SE for data from 3
animals). The mean durations for three mice sorted for the
number of lanes they rotated past (only 4 lanes are shown in
Fig. 5F) show that many temporal aspects of the behavior, for
example, the time taken to enter the correct lane, the time spent
traversing a lane, and the time to make a tactile decision and lick
the spout, were similar irrespective of whether the animal
traversed one or four lanes before picking a lane. However, the
duration of intertrial intervals showed an inverse relation with
the time the mice spent traversing lanes.
Having established that the animals could orient the Air-
Track maze in a stress-free manner, we extended the study to
introduce two different kinds of behavioral tasks that could in
principle be used to examine the correlation between neuronal
activity and behavior. We did not train animals to the highest
levels of performance and have not yet recorded brain activity simultaneously, but within 2 wk of daily training, animals reached threshold behavioral criteria in remarkably few sessions and trials compared with other typical
head-fixed systems.
Six head-fixed mice were trained to perform a Go/No-Go visual discrimination task and achieved a criterion above 70%
correct choices. Three of them were subsequently advanced to
a second phase of training where they performed two-choice
aperture width discrimination tasks, whereas the other three
were advanced to perform a delayed two-choice texture dis-
crimination task. The criterion for successful performance in
this task was also ⬎70%.
The goal of rotation was to find and choose the correct lane,
which was selected on the basis of a visual stimulus. After 3
days of training, mice (n = 6) learned to use this visual cue and
select the lane where an LED turned off with a mean perfor-
mance of 84.5% correct choices (Fig. 6A). Figure 6B shows performance over 4 days and for up to 1,000 trials for three mice. During the initial phase of training (day 1), animals were guided (supervised) manually to choose the correct lane (Fig. 6B). The aim of the supervised phase was to maintain the animals' motivation to perform more trials per session in this early phase of training. Mice reached 70% correct performance within 300–700 trials (Fig. 6B).

Fig. 6. Mouse performance in visual and tactile discrimination tasks in the Air-Track. A: performance in a Go/No-Go visual discrimination task and in two-alternative forced choice (2AFC) tactile discrimination tasks. The average performance in the visual task (n = 6 mice) was 84.5% correct trials after 4 days of active visual training. The average performance in the aperture width tactile task was 91% (n = 3 mice), whereas average performance was 72.6% in the texture tactile task (n = 3 mice) after 4-6 days of active tactile training. B: performance of 3 mice in the Go/No-Go visual discrimination task within 4 days of training. Mouse 1 (M1, blue line) achieved a 91% success rate, mouse 2 (M2, orange line) achieved 83%, and mouse 3 (M3, brown line) achieved 85%. During the 1st day of training, the experimenter guided the animal to keep it motivated; afterward, the animals were unsupervised. C: performance of 3 mice during the two-choice aperture width discrimination task. In 5 days, mouse 4 (M4, pale green line) achieved a 96% success rate, mouse 5 (M5, violet line) achieved 78%, and mouse 6 (M6, sky blue line) achieved 100%. D: performance of 3 mice during the delayed two-choice texture discrimination task. M1 (blue line) achieved a 75% success rate on day 6, M2 (orange line) achieved 69% on day 4, and M3 (brown line) achieved 74% on day 5. E: mouse performance during active aperture width tactile discrimination over the time line of one session. Tactile performance reaches 100% (data shown for M6) before dropping to chance level after the difference in aperture width between lanes is removed; reinstalling the aperture width differences restores correct behavioral performance. F: cross-modal performance on visual and tactile tasks during multiple sessions of training. Visual discrimination performance declined (data shown for M1) from 90 to 75% as the mouse learned the texture discrimination task. Over several days, performance rose above a 90% success rate in the visual discrimination task, while performance in the texture discrimination task reached 75% on day 6.
Next, mice were advanced to perform a more difficult task
that combined the Go/No-Go visual task with one of the tactile 2AFC tasks (Fig. 6A). During the aperture 2AFC task, the three mice reached 70% correct choices by the 2nd day (200–300 trials) and maintained this accuracy for the next 3 days (Fig. 6, A and C). In the delayed texture 2AFC task, two mice reached 70% correct choices within 4–6 days of training (400–1,000 trials) (Fig. 6, A and D). To prove that mice used the aperture
apparatus in the two-choice task, we measured performance (data shown for mouse 6) with and without the aperture apparatus over the time line of one session (Fig. 6E). Tactile performance reached a high level, dropped to chance level when the aperture apparatus was removed from all lanes, and recovered when the apparatus was reinstalled. Figure 6F shows cross-modal performance (data shown for mouse 1) across training sessions in which the mouse performed both visual and tactile discrimination tasks. Visual performance declined initially after the introduction of the tactile discrimination task before visual and tactile performance increased in parallel at the end of training.
DISCUSSION
We present a novel behavioral system for use with head-
fixed mice. The Air-Track is flexible, low cost, and easy to build and requires only minimal computational control and data acquisition compared with common virtual reality systems. It is unique in providing a behaviorally rich environment
for active sensing that engages multiple sensory modalities
simultaneously. The proof-of-principle testing presented here shows that the system can be used to quickly train animals on both simple and complex tasks.
Air-Track exploits the principle of using air-lifted platforms
previously described by Kislin and colleagues (Kislin et al.
2014). Several features of the Air-Track system are novel,
introducing major engineering and design elements that make it amenable to behavioral automation in active sensing experiments. First, we used camera-controlled real-time tracking of the 2D position and rotation of the floating platform. Second, we
developed an automated closed-loop hardware/software con-
trol system based on an Arduino interface that provides an
interactive environment to control stimulus presentation and
reward delivery. Third, we designed a versatile reward delivery
system appropriate for different physical environment config-
urations. Fourth, we used 3D printing technology to construct
mazes for flexible design of novel environments. Using our
automated closed-loop system, we demonstrated that animals
could be easily trained on novel behavioral tasks with multi-
modal stimuli (visual, somatosensory, and auditory).
The Air-Track system is most directly comparable to virtual
reality systems that also attempt to achieve quasinatural be-
havior in rodents while head fixing the animal on an air-
cushioned ball (Harvey et al. 2009) (Fig. 7, A and B). Virtual
reality approaches have been extremely successful in revealing
precise information about brain activity during active sensing
tasks. Most studies to date have focused on visual-motor tasks,
typically running through a visually displayed virtual maze
(Kaneko and Stryker 2014; Keller et al. 2012; Saleem et al.
2013). For instance, such systems have been used to show how
ensembles of neurons are recruited in the visual system while
learning new environments (Poort et al. 2015) and how differ-
ent behavioral states such as arousal influence visual process-
ing and functional flexibility of the primary visual cortex
(Vinck et al. 2015). An alternative approach applied to the
vibrissal sensory motor system has been to mount a tactile
environment (a set of walls) that moves as the animal walks on
an air ball (Sofroniew et al. 2014). Generating these kinds of
complex environments is difficult to achieve in a traditional
head-fixed nonactive sensing approach.
In our behavioral design, we used three sensory discrimina-
tion tasks: visual Go/No-Go task, 2AFC aperture width dis-
crimination task, and a delayed 2AFC texture discrimination
task. In the visual and aperture discrimination tasks, the mice learned the task in a few days, whereas in the texture discrimination task performance was uneven. A major cognitive
challenge in performing the texture discrimination task was
that it also had a working memory component. After contact
with the stimulus, i.e., wall texture, the mouse had to extend its head outside the lane, losing contact with the stimulus for 1.5 s before the decision could be made. This task therefore required
the retention of the texture in working memory, which signif-
icantly increased the difficulty of the task. Although the mice passed the 70% criterion in the delayed 2AFC texture discrimination task in a few days, their performance across sessions was worse and more uneven than in the other two tasks.
The Air-Track provides a convenient “one-size-fits-all” so-
lution that extends virtual reality approaches to ultrarealistic
and multimodal behavioral regimes. Whereas virtual reality
approaches are best suited for visual stimulation, the Air-Track
system automatically includes multiple modalities (Fig. 7, C and D): because the maze is physically present, all of the sensory information usually available to freely moving animals (visual, auditory, tactile, and olfactory) is available. We did not explicitly explore all modalities in this
study (concentrating on vision and somatosensation); however,
animals also had auditory (buzzer) (and in principle could have
had olfactory) cues coupled to their movements in the maze.
Another important difference between virtual reality ap-
proaches and the Air-Track is that virtual environments require
the mapping of the animal’s movements to the virtual world,
which can only be done via a computer model (Fig. 7E),
involving an additional step in the sensorimotor loop. With the
Air-Track, this step is unnecessary, since movements of the
mouse are automatically translated into movement of the phys-
ical maze (Fig. 7F). This has several advantages: 1)
there is an accurate coupling of the animal’s movement with
the movement of the environment with no “glitches” (i.e.,
computational mapping errors), 2) there is little subjectivity in
determining this mapping, and 3) it requires no computational effort or expensive equipment. On the other hand, a
disadvantage of our approach is that deliberate mismatches in
mapping are more difficult to produce compared with what can
be done in virtual reality systems that take the animal’s internal
model (mapping) of the virtual world into account (Keller et al.
2012; Saleem et al. 2013). It may still be possible with the Air-Track to introduce pseudomapping errors (i.e., violations of the animal's internal model) by changing the friction (air pressure) or disturbing the movement of the platform.
Another disadvantage of the Air-Track system is that the mazes
are fixed and finite compared with virtual mazes that can be
infinite and/or freely changing. We used a 15-cm-diameter plat-
form, which provided enough space for several lanes, and this
may compromise the effectiveness of the Air-Track system for
experiments exploring the activity of place cells. However, larger platforms that mice can still move are likely to engage place-cell activity and an animal's sense of place. Consequently, it should be possible to use the Air-Track in multimodal spatial tasks (Gener et al. 2013; Griffin et al. 2012).
A practical limitation of the Air-Track is that the available
space and the inertial load of the platform for the animal limit
the maximum dimensions of the system. The inertial load in
our current design was extremely low (Supplemental Video 1)
such that movement appeared completely normal. In fact, we
found it necessary to introduce some artificial friction by
reducing air pressure below maximum to better approximate
the animal’s inertia in a natural freely moving situation.
The movement of the animals in the Air-Track system is
likely to be more realistic than on a ball or treadmill because
the platform is flat. In addition, the flat surface of the platform
can be used for different kinds of somatosensory cues, such as textures on the floor of the maze, or for directional auditory cues in a Y-maze, which have proved convenient in freely moving mazes
(Manita et al. 2015).

Fig. 7. Air-Track provides a real-world experience for head-restrained rodents. A and B: a virtual reality platform, adapted with permission from Dombeck and colleagues (2010) (A), and an Air-Track platform (B). Air-Track provides a real maze, whereas virtual reality creates a virtual one. In both systems, the walls and the shape of the track can be changed based on the experimental design. C and D: key difference between an Air-Track and a virtual reality setup. Air-Track offers more of the real world to head-fixed mice, since it contains a palpable world with multiple dimensions; air balls and virtual realities typically generate a visually rich but nontactile world. E and F: experimental mapping with virtual reality and Air-Track. Air-Track creates a real environment with walls, which allows the mapping of sensory input to motor output to be skipped. In addition, a virtual visual world can still be introduced to surround the Air-Track, as with air-floating track spheres.

Furthermore, with the Air-Track we can
deliver a rich complete environment with different textures or
aperture widths that previously were typically used in freely
moving animals (Chen et al. 2015; Jadhav and Feldman 2010;
Krupa et al. 2004; Prigg et al. 2002; von Heimendahl et al.
2007; Wolfe et al. 2008). Nonetheless, the Air-Track system must be kept perfectly horizontal, which prevents some experiments, in particular tests of vestibular contributions; this, however, is an intrinsic difficulty of all head-fixed systems.
The tracking system we chose was the “Pixy” camera attached to an Arduino-Uno microcontroller, which provided convenient off-the-shelf tracking of the coordinates of the platform in real time. In principle, more sophisticated and
high-speed tracking systems could also be used. We placed an
emphasis in our study on easy-to-obtain components with the
intention of making the system easily available to any labora-
tory. For this reason, the code is open source, and hardware/
software descriptions are available online at http://www.neuro-
airtrack.com/. Because the system is small and compact, it can
be introduced into practically any existing recording setup
(e.g., under most in vitro microscope systems) and therefore
can easily be moved from one setup to another within a
laboratory to maximize the recording strategies. The total
material cost of our entire system was in the range of €200–500 (approximately $300–600), depending on the materials
used and manufacturing costs (e.g., 3D printing). We made several mazes, including the plus maze presented in this paper and a Y-maze; other shapes could easily be produced using common 3D printers. Given the simplicity of the
system and its open source availability, there should be no
barrier against its introduction into any neuroscience laboratory
interested in behavioral paradigms based on active sensing.
ACKNOWLEDGMENTS
We thank the Charité Workshop for technical assistance, especially Alex-
ander Schill and Christian Koenig. We also thank Katja Frei for initial
establishment of a two-choice task on harnessed mice with an air table hover
cage. We also thank members of the Larkum laboratory, and in particular
Christina Bocklisch, Guy Doron, Albert Gidon, Naoya Takahashi, Keisuke
Sehara, and Julie Siebt, for useful discussions about earlier versions of this
manuscript.
GRANTS
This work was supported by grants from the Einstein Foundation Berlin (Y.
Winter, M. E. Larkum, and M. A. Nashaat), and the DFG EXC 257, Neuro-
Cure Center for Excellence (Y. Winter, M. E. Larkum) and Marie Curie
Fellowship (R. N. S. Sachdev).
DISCLOSURES
No conflicts of interest, financial or otherwise, are declared by the authors.
AUTHOR CONTRIBUTIONS
M.A.N., Y.W., and M.E.L. conception and design of research; M.A.N.
performed experiments; M.A.N. and H.O. analyzed data; M.A.N., R.N.S.S.,
and M.E.L. interpreted results of experiments; M.A.N., H.O., R.N.S.S., and
M.E.L. prepared figures; M.A.N., H.O., R.N.S.S., and M.E.L. drafted manu-
script; M.A.N., H.O., R.N.S.S., Y.W., and M.E.L. edited and revised manu-
script; M.E.L. approved final version of manuscript.
REFERENCES
Acharya L, Aghajan ZM, Vuong C, Moore JJ, Mehta MR. Causal influence
of visual cues on hippocampal directional selectivity. Cell 164: 197–207,
2016.
Aghajan ZM, Acharya L, Moore JJ, Cushman JD, Vuong C, Mehta MR.
Impaired spatial selectivity and intact phase precession in two-dimensional
virtual reality. Nat Neurosci 18: 121–128, 2015.
Andermann ML, Gilfoy NB, Goldey GJ, Sachdev RN, Wolfel M, McCor-
mick DA, Reid RC, Levene MJ. Chronic cellular imaging of entire cortical
columns in awake mice using microprisms. Neuron 80: 900 –913, 2013.
Bermejo R, Harvey M, Gao P, Zeigler HP. Conditioned whisking in the rat.
Somatosens Mot Res 13: 225–233, 1996.
Chen JL, Margolis DJ, Stankov A, Sumanovski LT, Schneider BL,
Helmchen F. Pathway-specific reorganization of projection neurons in
somatosensory cortex during learning. Nat Neurosci 18: 1101–1108, 2015.
Crochet S, Petersen CC. Correlating whisker behavior with membrane
potential in barrel cortex of awake mice. Nat Neurosci 9: 608– 610, 2006.
Dombeck DA, Graziano MS, Tank DW. Functional clustering of neurons in
motor cortex determined by cellular resolution imaging in awake behaving
mice. J Neurosci 29: 13751–13760, 2009.
Dombeck DA, Harvey CD, Tian L, Looger LL, Tank DW. Functional
imaging of hippocampal place cells at cellular resolution during virtual
navigation. Nat Neurosci 13: 1433–1440, 2010.
Gener T, Perez-Mendez L, Sanchez-Vives MV. Tactile modulation of
hippocampal place fields. Hippocampus 23: 1453–1462, 2013.
Griffin AL, Owens CB, Peters GJ, Adelman PC, Cline KM. Spatial
representations in dorsal hippocampal neurons during a tactile-visual con-
ditional discrimination task. Hippocampus 22: 299 –308, 2012.
Guo ZV, Hires SA, Li N, O’Connor DH, Komiyama T, Ophir E, Huber D,
Bonardi C, Morandell K, Gutnisky D, Peron S, Xu NL, Cox J, Svoboda
K. Procedures for behavioral experiments in head-fixed mice. PLoS One 9:
e88678, 2014.
Harvey CD, Coen P, Tank DW. Choice-specific sequences in parietal cortex
during a virtual-navigation decision task. Nature 484: 62– 68, 2012.
Harvey CD, Collman F, Dombeck DA, Tank DW. Intracellular dynamics of
hippocampal place cells during virtual navigation. Nature 461: 941–946,
2009.
Hentschke H, Haiss F, Schwarz C. Central signals rapidly switch tactile
processing in rat barrel cortex during whisker movements. Cereb Cortex 16:
1142–1156, 2006.
Holscher C, Schnee A, Dahmen H, Setia L, Mallot HA. Rats are able to
navigate in virtual environments. J Exp Biol 208: 561–569, 2005.
Jadhav SP, Feldman DE. Texture coding in the whisker system. Curr Opin
Neurobiol 20: 313–318, 2010.
Kaneko M, Stryker MP. Sensory experience during locomotion promotes
recovery of function in adult visual cortex. Elife 3: e02798, 2014.
Keller GB, Bonhoeffer T, Hubener M. Sensorimotor mismatch signals in
primary visual cortex of the behaving mouse. Neuron 74: 809– 815, 2012.
Kislin M, Mugantseva E, Molotkov D, Kulesskaya N, Khirug S, Kirilkin
I, Pryazhnikov E, Kolikova J, Toptunov D, Yuryev M, Giniatullin R,
Voikar V, Rivera C, Rauvala H, Khiroug L. Flat-floored air-lifted
platform: a new method for combining behavior with microscopy or elec-
trophysiology on awake freely moving rodents. J Vis Exp e51869, 2014.
Kleinfeld D, Ahissar E, Diamond ME. Active sensation: insights from the
rodent vibrissa sensorimotor system. Curr Opin Neurobiol 16: 435– 444,
2006.
Krupa DJ, Wiest MC, Shuler MG, Laubach M, Nicolelis MA. Layer-
specific somatosensory cortical activation during active tactile discrimina-
tion. Science 304: 1989 –1992, 2004.
Lenschow C, Brecht M. Barrel cortex membrane potential dynamics in social
touch. Neuron 85: 718 –725, 2015.
Manita S, Suzuki T, Homma C, Matsumoto T, Odagawa M, Yamada K,
Ota K, Matsubara C, Inutsuka A, Sato M, Ohkura M, Yamanaka A,
Yanagawa Y, Nakai J, Hayashi Y, Larkum ME, Murayama M. A
top-down cortical circuit for accurate sensory perception. Neuron 86: 1304 –
1316, 2015.
McGinley MJ, David SV, McCormick DA. Cortical membrane potential
signature of optimal states for sensory signal detection. Neuron 87: 179 –
192, 2015.
Musall S, von der Behrens W, Mayrhofer JM, Weber B, Helmchen F,
Haiss F. Tactile frequency discrimination is enhanced by circumventing
neocortical adaptation. Nat Neurosci 17: 1567–1573, 2014.
Niell CM, Stryker MP. Modulation of visual responses by behavioral state in
mouse visual cortex. Neuron 65: 472– 479, 2010.
Polack PO, Friedman J, Golshani P. Cellular mechanisms of brain state-
dependent gain modulation in visual cortex. Nat Neurosci 16: 1331–1339,
2013.
Poort J, Khan AG, Pachitariu M, Nemri A, Orsolic I, Krupic J, Bauza M,
Sahani M, Keller GB, Mrsic-Flogel TD, Hofer SB. Learning enhances
sensory and multiple non-sensory representations in primary visual cortex.
Neuron 86: 1478 –1490, 2015.
Prigg T, Goldreich D, Carvell GE, Simons DJ. Texture discrimination and
unit recordings in the rat whisker/barrel system. Physiol Behav 77: 671– 675,
2002.
Reimer J, Froudarakis E, Cadwell CR, Yatsenko D, Denfield GH, Tolias
AS. Pupil fluctuations track fast switching of cortical states during quiet
wakefulness. Neuron 84: 355–362, 2014.
Sachdev RN, Sato T, Ebner FF. Divergent movement of adjacent whiskers.
J Neurophysiol 87: 1440 –1448, 2002.
Saleem AB, Ayaz A, Jeffery KJ, Harris KD, Carandini M. Integration of
visual motion and locomotion in mouse visual cortex. Nat Neurosci 16:
1864 –1869, 2013.
Schneider DM, Nelson A, Mooney R. A synaptic and circuit basis for
corollary discharge in the auditory cortex. Nature 513: 189 –194, 2014.
Schwarz C, Hentschke H, Butovas S, Haiss F, Stuttgen MC, Gerdjikov
TV, Bergner CG, Waiblinger C. The head-fixed behaving rat–procedures
and pitfalls. Somatosens Mot Res 27: 131–148, 2010.
Sofroniew NJ, Cohen JD, Lee AK, Svoboda K. Natural whisker-guided
behavior by head-fixed mice in tactile virtual reality. J Neurosci 34:
9537–9550, 2014.
Sofroniew NJ, Vlasov YA, Andrew Hires S, Freeman J, Svoboda K. Neural
coding in barrel cortex during whisker-guided locomotion. Elife 4: 2015.
Vinck M, Batista-Brito R, Knoblich U, Cardin JA. Arousal and locomotion
make distinct contributions to cortical activity patterns and visual encoding.
Neuron 86: 740 –754, 2015.
von Heimendahl M, Itskov PM, Arabzadeh E, Diamond ME. Neuronal
activity in rat barrel cortex underlying texture discrimination. PLoS Biol 5:
e305, 2007.
Welsh JP, Lang EJ, Sugihara I, Llinas R. Dynamic organization of motor
control within the olivocerebellar system. Nature 374: 453– 457, 1995.
Wolfe J, Hill DN, Pahlavan S, Drew PJ, Kleinfeld D, Feldman DE. Texture
coding in the rat whisker system: slip-stick versus differential resonance.
PLoS Biol 6: e215, 2008.
Zagha E, Casale AE, Sachdev RN, McGinley MJ, McCormick DA. Motor
cortex feedback influences sensory processing by modulating network state.
Neuron 79: 567–578, 2013.