NEUROSCIENCE
Assessing bimanual motor skills with
optical neuroimaging
Arun Nemani1, Meryem A. Yücel2, Uwe Kruger1, Denise W. Gee3, Clairice Cooper4,
Steven D. Schwaitzberg4, Suvranu De1*, Xavier Intes1*
Measuring motor skill proficiency is critical for the certification of highly skilled individuals in numerous fields.
However, conventional measures use subjective metrics that often cannot distinguish between expertise levels.
We present an advanced optical neuroimaging methodology that can objectively and successfully classify subjects
with different expertise levels associated with bimanual motor dexterity. The methodology was tested by assess-
ing laparoscopic surgery skills within the framework of the fundamentals of a laparoscopic surgery program, which
is a prerequisite for certification in general surgery. We demonstrate that optical-based metrics outperformed cur-
rent metrics for surgical certification in classifying subjects with varying surgical expertise. Moreover, we report
that optical neuroimaging allows for the successful classification of subjects during the acquisition of these skills.
INTRODUCTION
Motor skills that involve bimanual motor coordination are essential
for performing numerous tasks, ranging from simple daily activities
to complex motor actions performed by highly skilled individuals.
Hence, metrics to assess motor task performance are critical in nu-
merous fields, including neuropathology and neurological recovery,
surgical training and certification, and athletic performance (1–6).
In the vast majority of fields, however, current metrics are human-
administered and subjective and require significant personnel re-
sources and time. Thus, there is a critical need for more automated,
analytical, and objective evaluation methods (3, 7–10). From a neu-
roscience perspective, bimanual task assessment provides insights
into motor skill expertise, motor dysfunctions, interconnectivity
between brain regions, and higher cognitive and executive func-
tions, such as motor perception, motor action, and multitasking (6, 11). Therefore, incorporating the underlying neurological
responses in bimanual motor skill assessment is a logical step to-
ward providing robust, objective metrics, which ultimately may
lead to greatly improving our understanding of motor skill processes
and facilitating bimanual-based task certification.
Among all noninvasive functional brain imaging techniques,
functional near-infrared spectroscopy (fNIRS) offers the unique
ability to monitor and quantify fast functional brain activations
over numerous cortical areas without constraining and interfering
with bimanual task execution. Hence, fNIRS is a promising neuro-
imaging modality to study cortical brain activations, but to date,
only a very limited number of studies have been reported with re-
gard to assessing fine surgical motor skills (12). These exploratory
studies have reported differentiation in functional cortical activa-
tions between groups with varying surgical motor skills (12–16).
However, they suffer from recognized limitations (12), such as the
lack of signal specificity between scalp and cortical hemodynamics
(17, 18), the lack of multivariate statistical approaches that leverage
changes in functional brain activity across multiple brain regions,
and benchmarking against established metrics. Hence, they have
not affected current practice of professional bimanual skill profi-
ciency assessment. Here, we present an fNIRS-based optical neuro-
imaging methodology that overcomes all these shortcomings at once.
We concurrently measure functional activations in the prefrontal
cortex (PFC), the primary motor cortex (M1), and the supplemen-
tary motor area (SMA) to map the distributed brain functions asso-
ciated with motor task strategy, motor task planning, and fine motor
control in complex bimanual tasks (19–22). Moreover, we increase
the specificity of optical measurements to cortical tissue hemo-
dynamics by regressing signals from scalp tissues (17). Furthermore,
we leverage changes in intraregional activation and interregional
coupling of cerebral regions via multivariate statistical approaches
to classify subjects according to motor skill levels. Finally, we com-
pare our fNIRS-based approach with currently used metrics in sur-
gical certification by assessing bimanual motor tasks that are a part
of surgical training accreditation.
The performance of the reported optical neuroimaging meth-
odology enables the objective assessment of complex bimanual
motor skills as seen in laparoscopic surgery. Imaging distributed task-based functional responses demonstrated significant cortical
activation differences between subjects with varying surgical ex-
pertise. By leveraging connected cerebral regions correlated to fine
motor skills, we report increased specificity in discriminating sur-
gical motor skills via fNIRS-based metrics. We show that our ap-
proach is significantly more accurate than currently established
metrics used for certification in general surgery, as reported via
estimated misclassification errors (MCEs). These results demon-
strate that the combination of advanced fNIRS imaging with mul-
tivariate statistical approaches offers a practical and quantitative
method to assess complex bimanual tasks. Notably, the reported
optical neuroimaging methodology is well suited to provide quan-
titative and standardized metrics for bimanual skill–based profes-
sional certifications.
RESULTS
Surgical training task performance assessment
To demonstrate the potential of neuroimaging as an objective tool
to assess bimanual task expertise, we selected a challenging bimanual
1Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY
12180, USA. 2Department of Radiology, Harvard Medical School, Cambridge, MA
02138, USA. 3Department of Surgery, Massachusetts General Hospital, Boston, MA
02114, USA. 4University at Buffalo School of Medicine and Biomedical Sciences,
Buffalo, NY 14260, USA.
*Corresponding author. Email: des@rpi.edu (S.D.); intesx@rpi.edu (X.I.)
Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works. Distributed under a Creative Commons Attribution NonCommercial License 4.0 (CC BY-NC).
pattern cutting (PC) task, which is part of the fundamentals of lap-
aroscopic surgery (FLS) program. The American Board of Surgery
now requires demonstrating proficiency in the FLS for certification
in general surgery. For our study, we recruited a population with
varying laparoscopic surgical expertise as defined via the FLS pro-
gram and conventional professional nomenclature. The subjects
were either classified into established skill levels, such as Novice
surgeons (1st- to 3rd-year surgical residents) and Expert surgeons
(4th- to 5th-year residents and attending surgeons), or into trained medical students who are labeled as Skilled or Unskilled trainees (see table S1). The Control group consisted of medical students who underwent no training at all. Note that all groups were independent, that is, each subject belonged to only one group. Each subject
followed the official FLS PC task protocols. The experimental pro-
tocol followed by each cohort is provided in fig. S1. We recorded the FLS performance scores for all subjects and computed the cumulative sum control chart (CUSUM) for the population following the training protocol. We obtained the FLS scoring methodology with consent under a nondisclosure agreement from the FLS Committee. It is important to note that, in all cases, we acquired the FLS performance scores simultaneously with the neuroimaging data; this study is thus the first to report direct comparisons of neuroimaging metrics and FLS scores for validation.
Figure 1A shows a schematic of the surgical trainer along with the fNIRS setup that is used to measure real-time cortical activation, and Fig. 1B shows the probe placement schematic designed for this study for PFC, M1, and SMA measurements. A physical depiction of the setup is also provided in fig. S2. Figure 2A reports on the de-
scriptive statistics of the FLS performance score for the Novice and
Expert surgeons, where Experts significantly outperformed Novice
surgeons (P < 0.05). Similarly, the descriptive statistics of FLS per-
formance scores over the whole training period are provided for all
FLS task training subjects and untrained Control subjects in Fig. 2B.
Results indicate that there are no significant differences between the
untrained Control subjects and training subjects on day 1 or the
pretest (P > 0.05). However, the trained FLS students significantly
outperformed the untrained Control students on the final posttest,
which follows a 2-week break period after training (P < 0.05). To
provide insight at the subject level, Fig. 2C summarizes the CUSUM
scores for each of the subjects with respect to trials performed. Tri-
als with an FLS performance score higher than 63 are considered a "success," and 0.07 is subtracted from the respective CUSUM score (23, 24). Trials with a performance score lower than 63 are considered a "failure," and 0.93 is added to the CUSUM score (23, 24).
Results indicate that three trained subjects (FLS 2, FLS 3, and FLS 5) met the acceptable failure rate criterion of 0.05 (H0) and thus are considered "Skilled" henceforth. The remaining four trained subjects (FLS 1,
FLS 4, FLS 6, and FLS 7) are considered “Unskilled,” as they did
not meet the FLS criteria for successful completion of the training
program.
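For concreteness, the CUSUM bookkeeping described above can be sketched as follows. The pass mark (63) and the increments (-0.07 for a success, +0.93 for a failure) are those stated in the text (23, 24); the trial scores in the example are hypothetical, and the exact position of the H0 decision line is taken from refs. (23, 24) rather than given a numeric value here.

```python
# Minimal sketch of the CUSUM chart described above. Increments follow the
# text: a trial scoring above 63 ("success") lowers the CUSUM by 0.07, and a
# trial at or below 63 ("failure") raises it by 0.93. In the study, a trainee
# is labeled "Skilled" once the chart crosses the H0 line defined in (23, 24);
# the trial scores below are hypothetical placeholders.

PASS_SCORE = 63.0     # FLS pattern-cutting pass mark stated in the text
SUCCESS_STEP = -0.07  # applied when the trial score exceeds the pass mark
FAILURE_STEP = +0.93  # applied otherwise

def cusum_chart(fls_scores, start=0.0):
    """Return the running CUSUM values for a sequence of FLS trial scores."""
    value, chart = start, []
    for score in fls_scores:
        value += SUCCESS_STEP if score > PASS_SCORE else FAILURE_STEP
        chart.append(value)
    return chart

# Example with made-up scores: a trainee who starts by failing, then improves.
scores = [40, 55, 70, 72, 68, 81, 77, 90, 88, 92]
for trial, value in enumerate(cusum_chart(scores), start=1):
    print(f"trial {trial:2d}: CUSUM = {value:+.2f}")
```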
Optical neuroimaging assessment of established surgical
skill levels
To ascertain that our neuroimaging methodology can discrimi-
nate between established skill levels, we quantified the real-time
hemodynamic activation over the PFC, M1, and SMA cortical
regions while Novice and Expert surgeons performed the stan-
dardized FLS bimanual PC task (23, 25, 26), where typical hemo-
dynamic responses are shown in fig. S3. Figure 3A depicts the spa-
tial distribution of average changes in functional brain activation,
as reported by changes in oxyhemoglobin concentration (Δ[HbO2]), for all subjects in the surgical Novice and
Expert groups. Significant differences were observed in all of the PFC, SMA, left medial M1 (LMM1), and right lateral M1 regions, as depicted in Fig. 3B. More precisely, Novice surgeons have
significantly higher functional activation in the PFC regions (P <
0.05) and significantly lower functional activation in the LMM1 and
SMA regions when compared to Expert surgeons.
While motor skill discrimination as reported via significant dif-
ferences in the measurements from different cortical regions is typ-
ically central to neuroscience discovery studies, it does not provide
insights into the utility of the dataset to achieve robust classification
based on quantitative metrics, as required during certification (that is, to successfully pass a performance-based manual skills assessment). To quantify the performance accuracy of neuroimaging-
based classification of individuals in preset categories such as Novice
surgeons and Expert surgeons, we postcomputed MCEs associated
with current accredited FLS performance scores and with our neu-
roimaging method.
We used a multivariate statistical method, namely, linear dis-
criminant analysis (LDA), to estimate the MCEs associated with
the FLS- and fNIRS-based measurements. We also used quadratic
support vector machines (SVMs) to classify subject populations
to ensure that classification results are not dependent on classifi-
cation techniques. MCEs are defined as the probability that the
first population is classified into the second population (MCE12),
and the second population is classified into the first population
(MCE21). Perfect classification is indicated by MCE = 0%, and
complete misclassification is indicated by MCE = 100%. Figure 3C
reports on these two MCEs for FLS performance scores and all
combinations of fNIRS metrics for the classification of surgical
Experts and Novices. Results indicate that subject classification
is relatively poor when considering FLS performance scores only
(MCE12 = 61% and MCE21 = 53%). On the other hand, neuroimaging-
based quantities provide lower errors (except when using the SMA only). Specifi-
cally, the combination of PFC, LMM1, and SMA leads to the overall
lowest MCEs (MCE12 = 4.4% and MCE21 = 4.2%). In addition, we
provide the leave-one-out cross-validation results for the LDA
classification models used for this dataset, as seen in Fig. 3D. This
approach assesses the robustness of the LDA classification model,
where each sample is systematically not used to build the LDA
model and is treated independently. Results show that the combi-
nation of PFC, LMM1, and SMA leads to the most robust and best
performing datasets to build the classification model, as demon-
strated by the fact that 100% of the samples in the leave-one-out
cross-validation have MCEs < 5%. The specific distributions of the
classification results are shown in fig. S8 (A and B). Furthermore,
we determined weights for each cortical region and their respective contributions to the total LDA model to show how the different cortical regions relate to motor skill proficiency. The
weights for left lateral PFC (0.58), medial PFC (0.23), right lateral
PFC (0.29), LMM1 (−0.70), and SMA (0.14) contribute to the en-
tire discriminant function, with the norm of all the weights equal to
1.0. Three regions (left lateral PFC, right lateral PFC, and LMM1)
account for 96.18% of the discriminant function, indicating the
preponderance of these regions for a robust and accurate subject
classification.
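The following sketch illustrates one way such MCEs and the leave-one-out check can be computed, using scikit-learn's LinearDiscriminantAnalysis on synthetic region-averaged activation values; it is a plausible reading of the procedure under stated assumptions, not the authors' SPSS/MATLAB implementation.

```python
# Sketch of group-level MCE estimation with LDA. Each row is assumed to hold
# the task-averaged activation of one trial over the PFC, LMM1, and SMA; the
# data here are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
novice = rng.normal(loc=[0.8, 0.2, 0.1], scale=0.3, size=(20, 3))  # group 1
expert = rng.normal(loc=[0.3, 0.6, 0.4], scale=0.3, size=(20, 3))  # group 2
X = StandardScaler().fit_transform(np.vstack([novice, expert]))    # zero mean, unit variance
y = np.array([1] * len(novice) + [2] * len(expert))

def misclassification_errors(model, X, y):
    """MCE12: fraction of group 1 assigned to group 2; MCE21: the converse."""
    pred = model.predict(X)
    return np.mean(pred[y == 1] != 1), np.mean(pred[y == 2] != 2)

lda = LinearDiscriminantAnalysis().fit(X, y)
mce12, mce21 = misclassification_errors(lda, X, y)
print(f"MCE12 = {mce12:.1%}, MCE21 = {mce21:.1%}")

# Leave-one-out robustness check: refit with each sample removed and record
# the resulting MCEs, mirroring the cross-validation summarized in Fig. 3D.
loo_mces = []
for keep, _ in LeaveOneOut().split(X):
    model = LinearDiscriminantAnalysis().fit(X[keep], y[keep])
    loo_mces.append(misclassification_errors(model, X[keep], y[keep]))
below_5pct = np.mean([max(pair) < 0.05 for pair in loo_mces])
print(f"fraction of leave-one-out fits with MCEs < 5%: {below_5pct:.0%}")
```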
Optical neuroimaging assessment of surgical skill level
during training
Beyond determining skill levels of individuals compared to estab-
lished groups, one key challenge in bimanual skill assessment and in
laparoscopic surgery is the evaluation of bimanual motor skill ac-
quisition during training. We applied our neuroimaging methodol-
ogy to the FLS PC task over an 11-day training period for inexperi-
enced medical students. On the basis of the established FLS metrics
currently used in the field, we divided the enrolled medical student
population into Skilled and Unskilled trainees at the completion of
the training program, as previously shown in Fig. 2C. In addition, we recruited five medical students with no previous experience in laparoscopic surgery as the Control group that underwent no training.

Fig. 1. fNIRS probe placement design for PFC, M1, and SMA measurements. (A) Schematic depicting the FLS box simulator where trainees perform the bimanual dexterity task. A continuous-wave spectrometer is used to measure functional brain activation via raw fNIRS signals in real time. (B) Optode positions for coverage over the PFC, M1, and SMA. Red dots indicate infrared sources, blue dots indicate long separation detectors, and light blue dots indicate short separation detectors. The PFC has three sources (1 to 3), three short separation detectors (S1 to S3), and four long separation detectors (1 to 4). The M1 has four sources (4 to 7), four short separation detectors (S4 to S7), and 10 long separation detectors (5 to 14). The SMA has one source (8), one short separation detector (S8), and three long separation detectors (9, 15, and 16). Illustration by Nicolás Fernández.

Figure 4A shows a visual spatial map conveying the average
cortical activation of all Skilled trainees or Unskilled trainees while
performing the posttest (that is, the simulated certification exam). As with the Expert versus Novice surgeons shown in Fig. 3A, Skilled trainees
exhibit increased cortical activation in the LMM1 and SMA and de-
creased PFC activation when compared to Unskilled trainees upon
training completion and after a 2-week break.
To provide a more global view of the training outcome, we pres-
ent the descriptive statistics of functional activation between un-
trained Control students and all trained FLS students for pretest
(day 1) and posttest (final day after 2-week break period) with re-
spect to different brain regions in Fig. 4B. Results indicate that there
are no significant differences between the Control and all training
students (Skilled and Unskilled trainees) at the onset of the training
program (P > 0.05). However, at the completion of the training and
after a 2-week break period, both Skilled and Unskilled trainees ex-
hibit a significantly lower functional activation in the left lateral and
right lateral PFC compared to the untrained Control students (P <
0.05). Furthermore, trained FLS students have significantly higher
LMM1 and SMA activation than untrained Control students during
the posttest (P < 0.05). These results reinforce the findings of the
previous section regarding functional activation differences be-
tween Expert and Novice surgeons. To further stress the fact that
our neuroimaging modality enables us to provide a more granular
view of training outcomes, we computed the MCEs for the three pop-
ulations involved in this surgical training study (Control, Skilled,
and Unskilled trainees).
Figure 5 (A and B) reports on the MCEs for each potential com-
bination of medical student populations at different stages or end
points of training. We computed these MCEs using the combined
PFC, LMM1, and SMA brain functional optical measurements. The
longitudinal MCEs of pretest populations versus odd days of train-
ing indicate that at the onset of the training, the populations could
not be distinguished as reported by large intergroup MCEs, as
shown in Fig. 5A. However, after day 7, the Skilled trainee popula-
tion demonstrated a significantly different neuroimaging distributed
response compared to the first day of training, as demonstrated by
very low intragroup MCEs. Conversely, the Unskilled trainee pop-
ulation did not exhibit these marked trends. Even during the final
training day (day 11), we observed high intragroup MCEs for the
Unskilled trainee population (MCE12 = 24% and MCE21 = 47%). In
contrast, we perfectly classified Skilled trainees on the final training day against Skilled trainees on the pretest, with MCE12 = 0% and
MCE21 = 0%.
Similar results were observed when looking at the same intra-
group misclassifications between the pretest and posttest conditions,
as shown in Fig. 5B. Classification remains poor for Unskilled trainees when comparing this population between the pretest and the posttest, with MCE12 = 58% and MCE21 = 80%. Yet, we
successfully classified Skilled trainees during the pretest from
Skilled trainees during the posttest, with MCE12 = 10% and MCE21 =
11%. While the Unskilled and Skilled trainee intergroups were suc-
cessfully classified at the end of the training session compared to
the pretest, the two populations did exhibit some intragroup over-
lap in their associated probability density function during the posttest.
Fig. 2. Bimanual motor task performance scores. (A) FLS performance scores for Novice surgeons (green) and Expert surgeons (red), where Expert surgeons signifi-
cantly outperformed Novice surgeons. (B) FLS performance scores for all training subjects (black) with respect to days trained compared to untrained Control subjects
(orange). Two-sample t tests were used for statistical differentiation. n.s., not significant. Red + signs indicate outliers. *P < 0.05. (C) CUSUM scores for each trained subject with respect to trials. Crossing the H0 threshold indicates that the probability that a given trained subject is mislabeled as a "Skilled trainee" is less than 0.05, and that subject is subsequently labeled a "Skilled trainee." Results indicate that three trained subjects, FLS 2, FLS 3, and FLS 5, are labeled as "Skilled trainees." The remaining trained subjects that do not cross the H0 line are labeled "Unskilled trainees."
Fig. 3. Differentiation and classification of motor skill between Novice and Expert surgeons. (A) Brain region labels are shown for PFC, M1, and SMA regions. Average
functional activation for all subjects in the Novice and Expert surgeon groups is shown as spatial maps while subjects perform the FLS task. (B) Average changes in hemoglobin concentration (conc.) during the FLS task duration with respect to specific brain regions for Novice (green) and Expert (red) surgeons. Two-sample t tests were
used for statistical tests. *P < 0.05. (C) LDA classification results for FLS scores and all combinations of fNIRS metrics. (D) Leave-one-out cross-validation results show the
ratio of samples that are below MCE rates of 0.05 for FLS scores and all other combinations of fNIRS metrics.
Fig. 4. Differentiation of motor skill between Control, Skilled, and Unskilled trainees. (A) Spatial maps of average functional activation for all subjects in each respec-
tive group during the FLS training task on the posttest day. (B) Average changes in hemoglobin concentration during stimulus duration with respect to specific brain
regions for untrained Control subjects (orange) and all FLS training students (black). Two-sample t tests were used for statistical differentiation. *P < 0.05. Type I error is
defined as 0.05 for all cases.
Of importance, neither trainee population exhibited marked differences between the final training day and posttest measurements, as indicated by relatively high MCEs. Classification of Skilled
trainees and Control subjects during the posttest also yielded very
low MCEs, whereas classification of Unskilled trainees and Con-
trol subjects still yielded high MCEs, as shown in further detail in
fig. S8 (C and D). These cross-validated classification methods
show that cortical activation has significantly changed for Skilled
trainees during the posttest when compared to Skilled trainees on
the pretest or untrained Control subjects, whereas Unskilled trainees
do not exhibit such a marked trend.
Classification of subjects with varying surgical
expertise levels
For our neuroimaging-based approach for motor skill differentia-
tion to be formative, it is important to validate the classification
models across all subject populations, especially since the studies
associated with assessment of established skill levels and FLS train-
ing were performed independently in two different institutions.
The subject population represents the full spectrum of laparoscopic
surgical expertise, from Novices to certified attending surgeons, in-
cluding Skilled and Unskilled medical student trainees. Based on the number of procedures performed and the associated level of expertise (at the completion of the training protocol), the expected ordering of surgical skill at the group level, from more proficient to less proficient, is as follows: Expert surgeons, Skilled trainees, Unskilled trainees, Novice surgeons, and Control.
Figure 6 shows the cross-validated classification model results
comparing all subject population groups with varying expertise lev-
els. Each box corresponds to a single trial for each expertise group,
as shown via different-colored borders. Shaded regions within each
box indicate the MCE if that trial is removed from the classification
model. For example, the first trial in cross-validation results for
classifying Expert surgeons from Skilled trainees shows an MCE of
0%, as indicated by a white shade. However, the 29th sample in the
classification model, or the third trial in the Skilled trainee group,
shows an MCE of 89% when removed from the classification model.
The latter is an indication that the LDA classification model fails to
reliably classify Experts and Skilled trainees if the third sample in
the Skilled trainee group is removed.
First, we compare Expert surgeons with all other subject popula-
tions. Results indicate that Expert surgeons can be robustly classified against all other subject populations, except for Skilled trainees, where only 28 of 35 samples have MCEs less than 5%. Similarly, Skilled trainees can be successfully classified against Unskilled trainees, Novice surgeons, and Control subjects. Conversely, Unskilled trainees,
Novice surgeons, and Control subjects exhibit a poor intergroup
classification, as reported by multiple samples leading to high MCEs.

Fig. 5. Classification of motor skill between Control, Skilled, and Unskilled trainees. (A) Inter- and intragroup MCEs for each subject population (Control, Skilled, and Unskilled trainees) with respect to training days. MCE12 and MCE21 values significantly decrease below 5% when classifying pretest Skilled and Unskilled trainees on the final training day. Furthermore, MCEs are also low when classifying Skilled and Unskilled trainees on the final training day, along with Skilled trainees and untrained Control subjects. (B) MCEs are reported for each combination of training groups (Control, Skilled, and Unskilled trainees) with respect to pretest, posttest, and final training days. MCEs are substantially low when classifying Skilled trainees and Control subjects, along with inter-Skilled trainee group classification. Unskilled trainees, however, showed high MCEs even when compared to Unskilled trainees and Control subjects during the posttest. As a measure of skill retention, classification models were also applied for all subject groups from the final training day to the posttest.
Overall, these results indicate that the populations with high expertise levels (Expert surgeons and Skilled trainees) can be robustly classified against groups that have not yet attained the expertise levels required by the FLS training program. The "noncertified" groups, including Novice surgeons, Unskilled trainees, and Control subjects, however, cannot be robustly classified among themselves.
DISCUSSION
While there have been extensive efforts in the surgical community
to confirm training effectiveness and validation of the FLS program
(25–29), the surgical skill scoring component has received little at-
tention and has garnered criticisms, such as subjectivity in scoring,
inconsistencies in FLS score interpretations, and no correlation of
patient injury reduction due to FLS certification (26, 28, 30–34).
Despite the lack of rigorous evaluation of the FLS scoring method-
ology, the program has become the de facto evaluation method for
accreditation of skills required for general surgery (32). Given the
high-stakes nature of surgical assessment in the FLS program and
its implications for the training of future surgeons, there is a current gap
in the rigorous validation of FLS scores as a robust and objective
methodology (32). In this regard, previous studies have broached
the concept of noninvasive brain imaging as a means for objectively
assessing surgical skills (12, 13, 16, 35). However, they suffer from
methodological limitations that are now well recognized by the
fNIRS community, namely, the contamination of the recorded measurements by superficial tissues, such as the scalp, dura, or pia mater (17, 18). To highlight this point, results from the Expert and Novice
surgeon cohort in this study were reprocessed without the regres-
sion of superficial tissue data and are provided in fig. S9 (A to C).
These results demonstrate that previously reported fNIRS-based
metrics with the inclusion of superficial tissue responses can statis-
tically differentiate surgical novices and experts (12–15, 35) yet fail
to classify subjects on the basis of motor skill proficiency and per-
form as poorly as current surgical skill assessment metrics. In con-
trast, regressing shallow tissue hemodynamics from the optical
measurements significantly reduces the false omission rate, where a
surgical novice is mistakenly classified as an expert, to 0%, whereas
previous approaches still maintain false omission rates of 13 to 18%
(see table S4). While oxygenated hemoglobin is the primary metric
used in this study due to superior contrast-to-noise ratios (36), al-
ternative metrics, such as deoxygenated hemoglobin (Hb), tissue
oxygenation saturation (StO2), or total hemoglobin (HbT), may
provide further insights into optimal measurement sensitivity and
surgical skill assessment.
Beyond improving the robustness of optical measurements’ sen-
sitivity to cortical activations, this work is also the first to measure
functional activation in a multivariate fashion to determine critical
cortical regions that are correlated to surgical motor skill differenti-
ation and classification. More specifically, this is the first report of
measuring functional activations in the PFC, M1, and SMA cortical
regions that are putatively associated with motor task strategy, mo-
tor task planning, and fine motor control (2, 13, 14, 35, 37–42).
Since the PFC is associated with decision-making and motor strategy
development (2, 13, 14, 35, 37–42), it is expected that PFC activa-
tion decreases as motor skill proficiency increases, as seen in surgi-
cal experts. Similarly, higher executive functions such as fine motor
skill control are also correlated to increased activation in the M1
and SMA, as expected for surgical experts. Our results corroborate
these findings regarding activation changes in the PFC, M1, and
SMA as motor skill proficiency increases (2, 13, 14, 35, 37–42). Fur-
thermore, our results demonstrate that the inclusion of these corti-
cal regions significantly improves the utility of fNIRS in assessing
bimanual skills and can offer improved objective metrics over con-
ventional FLS-based metrics currently used for certification in gen-
eral surgery. Of importance, while using single regional readouts
leads to enhanced population differentiation, the combination of
the three abovementioned cortical regions provides excellent classi-
fication performance (for completeness, we also provide bivariate
classification results using SVMs in figs. S4 to S7). When combining
measurements from these three brain regions, optical neuroimag-
ing enables a remarkably robust classification of subjects based on
their proven surgical skill levels, including novice, intermediate,
and expert skill levels. More precisely, our methodology allows for
(i) highly accurate classification of subjects with well-defined bimanual skill levels, with better performance than currently used metrics; (ii) longitudinal assessment of the acquisition of surgical skills during the FLS training program; and (iii) robust classification of populations recruited from multiple institutions with varying skill levels.
On the practical side, it is important to note that even though our methods leverage the most recent technical developments in the field of
fNIRS, the instrumental and algorithmic platforms used herein are
readily available for wide dissemination and use in surgical training
facilities. Moreover, as more neuroscience-driven investigations
focus on mapping distributed brain function, the positioning of
the optodes (source or detector) on the subject scalp becomes in-
creasingly challenging with extended spatial coverage. One key
consideration is to ensure that effective coupling is minimally
affected by natural movements and not compromised by the sub-
ject’s hair. Hence, positioning of the optodes can be a lengthy process
that is not suitable for professional environments that are time-
constrained either by cost or throughput considerations. In this re-
gard, our study identifies that the PFC, SMA, and LMM1 regions
are sufficient for accurately assessing bimanual skill–based task execution.

Fig. 6. Cross-validation results for classification across all subjects with varying degrees of motor skill. Each box represents one trial per expertise group during the posttest, where the shaded regions indicate the MCE if that given trial is removed from the classification model. Cross-validation results are shown with the respective ratio of samples that are below an MCE of 5%: Expert surgeons versus Skilled trainees (28 of 35 samples), Expert surgeons versus Unskilled trainees (29 of 38), Expert versus Novice surgeons (43 of 43), Expert surgeons versus untrained Control subjects (34 of 38), Skilled trainees versus Unskilled trainees (15 of 21), Skilled trainees versus Novice surgeons (24 of 26), Skilled trainees versus untrained Control subjects (18 of 21), Unskilled trainees versus Novice surgeons (16 of 29), Unskilled trainees versus untrained Control subjects (11 of 24), and finally, Novice surgeons versus untrained Control subjects (9 of 29).

Thus, probe placement can be completed in a short amount
of time without any impact on task execution, both critical factors
for the acceptance of our surgical skill assessment methodology by
the surgical community.
Beyond bimanual skill assessment and objective classification of
individuals based on their skill levels, the work herein provides a
sound foundation to further investigate the neurophysiology un-
derlying bimanual skill acquisition and retention. Here, we deliber-
ately focused on reading the brain outputs as a means to provide
objective and quantitative measures of bimanual task execution
without delving into the mechanistic understanding of the under-
lying physiology and functional connectivity. However, current
neurophysiological knowledge supports the overall findings of our
studies, namely, increases in LMM1 and SMA activation and
significant decreases in PFC activation across all groups with in-
creasing motor task performance (2, 12–14, 35, 37–42). It is also
important to note that previous studies use motor tasks that are
deliberately designed to decrease variability in studying cortical
activation changes, such as finger tapping or simple visual or virtual-
based unimanual tasks. Conversely, the FLS task at hand is a com-
plex bimanual task that involves visuospatial coordination, varying
degrees of synchronicity among hands, motion frequency and
range, and exerted forces on the surgical tools for task completion.
Consequently, it is not feasible to ensure that each session repli-
cates the same conditions, and hence, the same cortical responses.
Moreover, the cortical activations and interactions associated with
task planning and execution are dynamic by nature, from expected
explicit control in the early stages of learning to more implicit or
automatic control in the later stages of motor learning. Thus,
mapping the cortical networks and their dynamical changes asso-
ciated with task execution and skill acquisition should be the
next step.
There is currently great interest in investigating dynamic func-
tional connectivity (DFC) in neuroscience. Typically, DFC studies
are conducted using functional magnetic resonance imaging, which
is not appropriate for protocols that cannot be performed in a supine position and/or that involve nonelicited task execution. Recent studies have demonstrated that
fNIRS is well positioned in these scenarios (43–45). We foresee that
implementing these approaches in the context of bimanual skill as-
sessment can lead to refined skill level assessment metrics and po-
tentially provide predictive models of skill acquisition. For instance,
composite cognitive metrics, possibly obtained by weighting re-
gional cortical measurements using the LDA weights for best clas-
sification between Skilled and Unskilled trainees, could be central
to developing a tailored surgical training program for optimal skill
acquisition and retention assessment (figs. S6 and S7). In particu-
lar, this approach may be expanded to robustly identify and predict
surgical candidates who may achieve faster learning curves for complex surgical skills and, by extension, achieve surgical skill mastery at a significantly faster rate than other surgical
trainees. Furthermore, these methodologies can be easily applied to
other fields, including brain-computer interfaces, robotics, stroke recovery, and rehabilitation therapy (46–48). In summary, we
believe that this noninvasive imaging approach for objective quan-
tification of complex bimanual motor skills will bring about a par-
adigm change in broad applications, such as surgical certification
and assessment, aviation training, and motor skill rehabilitation
and therapy.
MATERIALS AND METHODS
The study was approved by the Institutional Review Boards of Massachusetts General Hospital, the University at Buffalo, and Rensselaer Polytechnic Institute.
Hardware and equipment
We used a validated continuous-wave, 32-channel, near-infrared spectrometer for this study, which delivered infrared light at 690
and 830 nm (CW6 system, TechEn Inc.). The system used eight
long-distance and eight short-distance illumination fibers coupled
to 16 detectors. The long-distance channels comprised all the mea-
surements within a 30- to 40-mm distance between the source and
the detector, and the short-distance channels comprised all the measurements within an ~8-mm distance between the source and the detector. The short channels were limited to probing the superficial tissue layers, such as the skin, bone, dura, and pial surfaces, whereas the long channels probed both the superficial layers and the cortical surface.
The probe design was assessed using Monte Carlo simulations and
was characterized to have high sensitivity to functional changes in
the PFC, M1, and SMA. A schematic of the geometric arrangement
of probes is shown in Fig. 1B.
Participants and experimental design
Seventeen surgeons and 13 medical students participated in this
study. The minimum number of samples required for this study
was determined a priori using a power analysis for a two-sample t test comparing the means of two groups. On the
basis of an initial pilot study, a conservative effect size (d = 1.4) was
chosen for the prefrontal and motor cortices. Furthermore, with a
95% confidence interval and a minimum power of 0.80, it was de-
termined that a minimum of eight samples were required per group,
as calculated with the statistical software G*Power (49).
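For reference, the stated sample-size calculation (two-sample t test, effect size d = 1.4, α = 0.05, power = 0.80) can be checked outside of G*Power; the sketch below uses statsmodels, and the value returned can differ slightly from the reported eight per group depending on whether a one- or two-sided test is assumed.

```python
# Sketch of the a priori sample-size calculation described above, using
# statsmodels instead of G*Power. Parameters are those stated in the text;
# the exact n depends on the sidedness assumption.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alternative in ("two-sided", "larger"):
    n_per_group = analysis.solve_power(effect_size=1.4, alpha=0.05,
                                       power=0.80, ratio=1.0,
                                       alternative=alternative)
    print(f"{alternative:>9s}: n per group = {n_per_group:.1f}")
```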
The sample population comprised Novice surgeons (n = 9; 1st- to 3rd-year residents; mean age, 31 ± 2 years) and Expert surgeons (n = 8; 4th- and 5th-year residents and attending surgeons; mean age, 35 ± 6 years). Subject demographics are listed in table S1. We defined surgical novices and experts ad hoc on the basis of the number of laparoscopic procedures completed, according to the literature (15, 50).
To avoid any issues regarding hemisphere-specific activation, only
right-handed participants were selected. All participants were
instructed on how to perform the task with standardized verbal
instructions indicating the goal of the task and rules for task com-
pletion. The optical probes were positioned on the participant with
great care to avoid any hair between the source/detector and the scalp and to ensure robust coupling with the skin. The cap holding the fibers
on the participant as well as the fibers did not hinder the partici-
pant's movement during bimanual tasks. The cap is a standard electroencephalography EASYCAP (www.easycap.de) with marked anatomical landmarks for placement on the scalp and has been used in numerous fNIRS studies (16, 35, 51). The cap was
carefully placed on the scalp by aligning the CZ, FP1, and FP2 land-
marks on the head and the marked landmarks on the cap. Specific
anatomical location simulations (52) for each source and detector
channel are shown in table S2. The participants were asked to per-
form the FLS PC task using an FLS-certified simulator, where the
goal was to use laparoscopic tools to cut a marked piece of gauze as
quickly and as accurately as possible. The experiment for each par-
ticipant consisted of a block design of rest and stimulus period (cut-
ting task). The surgical cutting task was performed until completion
or stopped after 5 min. Then, a rest period of 1 min was observed.
The cycle of cutting task and rest periods was repeated five times
per participant. The following measurements were recorded simul-
taneously for each participant during each trial: total task time, light
intensity (raw NIRS data), and FLS-based performance scores for the PC task.
NIRS data processing
Data processing was completed using the open-source software
HOMER2 (53), which is implemented in MATLAB (MathWorks).
First, channels with signal quality outside of the range of 80 to 140 dB
were excluded. The remaining raw optical signals (intensity at 690
and 830 nm) were converted into optical density using the modified
Beer-Lambert law with a partial path-length factor of 6.4 (690 nm)
and 5.8 (830 nm) (54–56). Motion artifacts and systemic physiology
interference were corrected using recursive principal component
analysis and low-pass filters (53, 57, 58). The filtered optical density
data were used to derive the delta concentrations of oxyhemoglobin
and deoxyhemoglobin.
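A minimal sketch of this intensity-to-concentration step is given below for a single long channel, assuming the stated partial path-length factors and approximate extinction coefficients chosen only for illustration; HOMER2 performs the equivalent conversion with its own calibrated coefficient tables.

```python
# Sketch of the modified Beer-Lambert conversion described above: raw
# intensities -> optical density -> delta oxy-/deoxyhemoglobin, using the
# stated partial path-length factors (6.4 at 690 nm, 5.8 at 830 nm).
# Extinction coefficients are approximate literature values for illustration.
import numpy as np

PPF = {690: 6.4, 830: 5.8}              # partial path-length factors (text)
EXT = np.array([[0.276, 2.052],         # 690 nm: [HbO2, Hb], 1/(mM*cm), approx.
                [0.974, 0.693]])        # 830 nm: [HbO2, Hb], 1/(mM*cm), approx.
SD_DISTANCE_CM = 3.0                    # long-channel separation (~30 mm)

def optical_density(intensity):
    """OD change relative to the mean intensity, taken here as the baseline."""
    return -np.log10(intensity / intensity.mean())

def mbll(i690, i830):
    """Return time courses of delta[HbO2] and delta[Hb] in mM."""
    dod = np.column_stack([optical_density(i690) / (SD_DISTANCE_CM * PPF[690]),
                           optical_density(i830) / (SD_DISTANCE_CM * PPF[830])])
    conc, *_ = np.linalg.lstsq(EXT, dod.T, rcond=None)  # solve EXT @ c = dOD
    return conc[0], conc[1]

# Hypothetical raw intensities for one channel (arbitrary units).
t = np.arange(0, 60, 0.1)
i690 = 1.0 + 0.002 * np.sin(2 * np.pi * t / 30) + 0.0005 * np.random.randn(t.size)
i830 = 1.0 - 0.003 * np.sin(2 * np.pi * t / 30) + 0.0005 * np.random.randn(t.size)
d_hbo2, d_hb = mbll(i690, i830)
print(f"mean delta[HbO2] = {d_hbo2.mean():.2e} mM, mean delta[Hb] = {d_hb.mean():.2e} mM")
```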
The short-distance channels were regressed from the long-distance
channels to remove any interference originating from superficial
layers. This was achieved by regressing out the scalp and dura signals recorded by the short separation fibers, using a consecutive sequence of Gaussian basis functions fit via ordinary least squares, to estimate the hemodynamic response function (HRF) (17, 18, 59). The corre-
sponding source and detector pairs for each source were averaged
over the task start and end time for each trial and subject (as shown
in fig. S1). For example, if a subject performed five trials, then the
HRF would be group-averaged over the task stimulus for each trial,
ensuring that rest periods were not included in the group average. It
is important to note that the task stimulus period for a given trial of
each subject may vary because of motor skill proficiency, as seen
in varying task completion times (table S3). The result is a scalar
value for the change in oxyhemoglobin according to different brain
regions for all participants. Finally, scalar values for each of the
32 channels were grouped into eight distinct regions of interest ac-
cording to the anatomical structures as follows: left PFC (source 1,
detectors 1 and 2), medial PFC (source 2, detectors 2 and 3), right
PFC (source 3, detectors 3 and 4), left lateral M1 (source 4, detectors
5 to 8), LMM1 (source 5, detectors 8 to 10), right medial M1 (source
6, detectors 9 to 12), right lateral M1 (source 7, detectors 11 to 14),
and finally, SMA (source 8, detectors 9, 15, and 16).
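The following simplified sketch illustrates the superficial-signal removal and task-window averaging described above, using ordinary least squares on synthetic short- and long-channel signals; it is a stand-in for, and is much simpler than, the Gaussian-basis regression implemented in HOMER2 (17, 18, 59).

```python
# Simplified sketch of short-separation regression and task averaging: the
# short-channel (scalp-only) signal is regressed out of the long channel by
# ordinary least squares, and the residual is averaged over the task window
# to give one scalar activation value per channel. All signals are synthetic.
import numpy as np

rng = np.random.default_rng(1)
fs = 10.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 120, 1 / fs)               # 2-min recording
task = slice(100, 700)                      # task window: 10-70 s

scalp = 0.5 * np.sin(2 * np.pi * 0.1 * t) + 0.05 * rng.standard_normal(t.size)
cortex = np.where((t > 10) & (t < 70), 0.3, 0.0)       # boxcar "activation"
short_ch = scalp + 0.02 * rng.standard_normal(t.size)  # sees scalp only
long_ch = cortex + 0.8 * scalp + 0.02 * rng.standard_normal(t.size)

# OLS regression of the short channel (plus an intercept) out of the long one.
design = np.column_stack([short_ch, np.ones_like(short_ch)])
beta, *_ = np.linalg.lstsq(design, long_ch, rcond=None)
residual = long_ch - design @ beta

# Scalar activation: mean of the cleaned signal over the task window only,
# mirroring the per-trial, stimulus-only averaging described in the text.
print(f"estimated task-window activation (a.u.): {residual[task].mean():.3f}")
```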
Task performance metrics, statistical,
and classification methods
The FLS scores were determined using the standardized FLS scor-
ing metric formulation for the PC task based on time and error.
This formulation is intellectual property–protected and was ob-
tained with consent under a nondisclosure agreement from the
FLS Committee, and hence, its details cannot be reported in this
paper. Descriptive and inferential statistics were performed using
SPSS (IBM Inc.). Two-sample t tests were used to determine statis-
tically significant differences in functional activation between two
groups. All box plots display median values (red bar) along with
SDs. A confidence level of 95% was selected as the minimum re-
quired to reject the null hypothesis.
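As a minimal illustration of the group comparisons described above, the sketch below runs a two-sample t test on hypothetical region-averaged activation values at the stated significance level of 0.05.

```python
# Two-sample t test on hypothetical region-averaged activation values,
# mirroring the group comparisons described above (alpha = 0.05).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
novice_pfc = rng.normal(0.8, 0.3, size=9)   # synthetic PFC activations
expert_pfc = rng.normal(0.3, 0.3, size=8)
t_stat, p_value = stats.ttest_ind(novice_pfc, expert_pfc)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```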
LDA was used to classify the populations on the basis of their
FLS scores and functional brain activation metrics. Before the anal-
ysis, all recorded metrics were first normalized, that is, scaled to a sample mean of 0 and a variance of 1. LDA determines the optimal
vector v such that the projected metrics of two classes (for example,
Novice and Expert surgeons) in the v direction have the highest sep-
aration between the classes with the lowest variance for each class
(60). The resulting LDA scores were compared for each class, and the degree of separation was objectively quantified as MCEs.
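For completeness, the two-class Fisher discriminant underlying this procedure can be written in standard form (our notation, following ref. 60): the projection vector maximizes the between-class separation of the projected metrics relative to their within-class variance,

```latex
% Two-class Fisher linear discriminant (standard form; notation is ours).
\[
  v^{*} \;=\; \arg\max_{v}\;
  \frac{\bigl(v^{\top}\mu_{1} - v^{\top}\mu_{2}\bigr)^{2}}
       {v^{\top}\bigl(\Sigma_{1} + \Sigma_{2}\bigr)\,v}
  \;\propto\;
  \bigl(\Sigma_{1} + \Sigma_{2}\bigr)^{-1}\bigl(\mu_{1} - \mu_{2}\bigr),
\]
```

where μk and Σk denote the sample mean and covariance of class k (for example, Novice and Expert surgeons) in the normalized metric space.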
SUPPLEMENTARY MATERIALS
Supplementary material for this article is available at http://advances.sciencemag.org/cgi/
content/full/4/10/eaat3807/DC1
Table S1. Subject demographics and descriptive data.
Table S2. Theoretical Montreal Neurological Institute coordinates for each source detector
channel.
Table S3. FLS task trial completion times for Novices.
Table S4. Expert versus Novice classification results for fNIRS (with and without short
separation regression) and FLS metrics.
Fig. S1. Experimental protocol design.
Fig. S2. Subjects performing FLS PC task with fNIRS measurements.
Fig. S3. Group average HRFs with respect to cortical regions.
Fig. S4. Quadratic SVM classification of Expert and Novice surgeons.
Fig. S5. Weighted quadratic SVM classification of Expert and Novice surgeons.
Fig. S6. Quadratic SVM classification of Skilled versus Unskilled trainees.
Fig. S7. Weighted quadratic SVM classification of Skilled versus Unskilled trainees.
Fig. S8. Probability density functions for projected LDA classification models.
Fig. S9. Differentiation and classification of motor skill between Novice and Expert surgeons
without short separation regression.
REFERENCES AND NOTES
1. G. Wulf, C. Shea, R. Lewthwaite, Motor skill learning and performance: A review of
influential factors. Med. Educ. 44, 75–84 (2010).
2. S. Shimada, Modulation of motor area activity by the outcome for a player during
observation of a baseball game. PLOS ONE 4, e8034 (2009).
3. C. Bosecker, L. Dipietro, B. Volpe, H. I. Krebs, Kinematic robot-based evaluation scales and
clinical counterparts to measure upper limb motor performance in patients with chronic
stroke. Neurorehabil. Neural Repair 24, 62–69 (2010).
4. R. Aggarwal, K. Moorthy, A. Darzi, Laparoscopic skills training and assessment. Br. J. Surg.
91, 1549–1558 (2004).
5. S. P. Swinnen, Intermanual coordination: From behavioural principles to neural-network
interactions. Nat. Rev. Neurosci. 3, 348–359 (2002).
6. C. Maes, J. Gooijers, J.-J. Orban de Xivry, S. P. Swinnen, M. P. Boisgontier, Two hands, one
brain, and aging. Neurosci. Biobehav. Rev. 75, 234–256 (2017).
7. A. Darzi, S. Smith, N. Taffinder, Assessing operative skill: Needs to become more
objective. Br. Med. J. 318, 887–888 (1999).
8. K. R. Wanzel, S. J. Hamstra, D. J. Anastakis, E. D. Matsumoto, M. D. Cusimano, Effect of
visual-spatial ability on learning of spatially-complex surgical skills. Lancet 359, 230–231 (2002).
9. K. Moorthy, Y. Munz, Objective assessment of technical skills in surgery. Br. Med. J. 327,
1032–1037 (2003).
10. J. Shah, A. Darzi, Surgical skills assessment: An ongoing debate. BJU Int. 88, 655–660 (2001).
11. S. P. Swinnen, J. Gooijers, Bimanual coordination, in Brain Mapping (Elsevier Academic
Press, ed. 1, 2015), vol. 2, pp. 475–482; http://linkinghub.elsevier.com/retrieve/pii/
B9780123970251000300.
12. H. N. Modi, H. Singh, G. Z. Yang, A. Darzi, D. R. Leff, A decade of imaging surgeons’ brain
function (part I): Terminology, techniques, and clinical translation. Surgery 162,
1121–1130 (2017).
13. K. Ohuchida, H. Kenmotsu, A. Yamamoto, K. Sawada, T. Hayami, K. Morooka, S. Takasugi,
K. Konishi, S. Ieiri, K. Tanoue, Y. Iwamoto, M. Tanaka, M. Hashizume, The frontal cortex is
activated during learning of endoscopic procedures. Surg. Endosc. 23, 2296–2301 (2009).
14. D. R. Leff, F. Orihuela-Espina, C. E. Elwell, T. Athanasiou, D. T. Delpy, A. W. Darzi, G.-Z. Yang,
Assessment of the cerebral cortex during motor task behaviours in adults: A systematic
review of functional near infrared spectroscopy (fNIRS) studies. Neuroimage 54,
2922–2936 (2011).
15. J. Andreu-Perez, D. R. Leff, K. Shetty, A. Darzi, G.-Z. Yang, Disparity in frontal lobe
connectivity on a complex bimanual motor task aids in classification of operator skill
level. Brain Connect. 6, 375–388 (2016).
16. D. R. Leff, C. E. Elwell, F. Orihuela-Espina, L. Atallah, D. T. Delpy, A. W. Darzi, G. Z. Yang,
Changes in prefrontal cortical behaviour depend upon familiarity on a bimanual
co-ordination task: An fNIRS study. Neuroimage 39, 805–813 (2008).
17. L. Gagnon, R. J. Cooper, M. A. Yücel, K. L. Perdue, D. N. Greve, D. A. Boas, Short separation
channel location impacts the performance of short channel regression in NIRS.
Neuroimage 59, 2518–2528 (2012).
18. L. Gagnon, M. A. Yücel, D. A. Boas, R. J. Cooper, Further improvement in reducing
superficial contamination in NIRS using double short separation measurements.
Neuroimage 85, 127–135 (2014).
19. O. Hikosaka, K. Nakamura, K. Sakai, H. Nakahara, Central mechanisms of motor skill
learning. Curr. Opin. Neurobiol. 12, 217–222 (2002).
20. K. Nakamura, K. Sakai, O. Hikosaka, Neuronal activity in medial frontal cortex during
learning of sequential procedures. J. Neurophysiol. 80, 2671–2687 (1998).
21. D. M. Wolpert, Z. Ghahramani, J. R. Flanagan, Perspectives and problems in motor
learning. Trends Cogn. Sci. 5, 487–494 (2001).
22. D. M. Wolpert, J. Diedrichsen, J. R. Flanagan, Principles of sensorimotor learning.
Nat. Rev. Neurosci. 12, 739–751 (2011).
23. S. A. Fraser, D. R. Klassen, L. S. Feldman, G. A. Ghitulescu, D. Stanbridge, G. M. Fried,
Evaluating laparoscopic skills: Setting the pass/fail score for the MISTELS system. Surg.
Endosc. 17, 964–967 (2003).
24. S. A. Fraser, L. S. Feldman, D. Stanbridge, G. M. Fried, Characterizing the learning curve for
a basic laparoscopic drill. Surg. Endosc. 19, 1572–1578 (2005).
25. G. M. Fried, L. S. Feldman, M. C. Vassiliou, S. A. Fraser, D. Stanbridge, G. Ghitulescu,
C. G. Andrew, Proving the value of simulation in laparoscopic surgery. Ann. Surg. 240,
518–525 (2004).
26. M. C. Vassiliou, G. A. Ghitulescu, L. S. Feldman, D. Stanbridge, K. Leffondré, H. H. Sigman,
G. M. Fried, The MISTELS program to measure technical skill in laparoscopic surgery:
Evidence for reliability. Surg. Endosc. 20, 744–747 (2006).
27. N. J. Soper, G. M. Fried, The fundamentals of laparoscopic surgery: Its time has come.
Bull. Am. Coll. Surg. 93, 30–32 (2008).
28. J. H. Peters, G. M. Fried, L. L. Swanstrom, N. J. Soper, L. F. Sillin, B. Schirmer, K. Hoffman;
SAGES FLS Committee, Development and validation of a comprehensive program of
education and assessment of the basic fundamentals of laparoscopic surgery. Surgery
135, 21–27 (2004).
29. G. M. Fried, FLS assessment of competency using simulated laparoscopic tasks.
J. Gastrointest. Surg. 12, 210–212 (2008).
30. L. Zhang, G. Sankaranarayanan, V. S. Arikatla, W. Ahn, C. Grosdemouge, J. M. Rideout,
S. K. Epstein, S. De, S. D. Schwaitzberg, D. B. Jones, C. G. L. Cao, Characterizing the
learning curve of the VBLaST-PT© (Virtual Basic Laparoscopic Skill Trainer). Surg. Endosc.
27, 3603–3615 (2013).
31. A. Nemani, W. Ahn, C. Cooper, S. Schwaitzberg, S. De, Convergent validation and transfer
of learning studies of a virtual reality-based pattern cutting simulator. Surg. Endosc. 32,
1265–1272 (2018).
32. B. Zendejas, R. K. Ruparel, D. A. Cook, Validity evidence for the Fundamentals of
Laparoscopic Surgery (FLS) program as an assessment tool: A systematic review.
Surg. Endosc. 30, 512–520 (2016).
33. B. Zheng, H.-C. Hur, S. Johnson, L. L. Swanström, Validity of using fundamentals of
laparoscopic surgery (FLS) program to assess laparoscopic competence for
gynecologists. Surg. Endosc. 24, 152–160 (2010).
34. S. D. Schwaitzberg, D. J. Scott, D. B. Jones, S. K. McKinley, J. Castrillion, T. D. Hunter,
L. Michael Brunt, Threefold increased bile duct injury rate is associated with less surgeon
experience in an insurance claims database. Surg. Endosc. 28, 3068–3073 (2014).
35. D. R. C. James, F. Orihuela-Espina, D. R. Leff, M. H. Sodergren, T. Athanasiou, A. W. Darzi,
G. Z. Yang, The ergonomics of natural orifice translumenal endoscopic surgery (NOTES)
navigation in terms of performance, stress, and cognitive behavior. Surgery 149, 525–533 (2011).
36. G. Strangman, J. P. Culver, J. H. Thompson, D. A. Boas, A quantitative comparison of
simultaneous BOLD fMRI and NIRS recordings during functional brain activation.
Neuroimage 17, 719–731 (2002).
37. E. Watanabe, Y. Yamashita, A. Maki, Y. Ito, H. Koizumi, Non-invasive functional mapping
with multi-channel near infra-red spectroscopic topography in humans. Neurosci. Lett.
205, 41–44 (1996).
38. M. M. Plichta, M. J. Herrmann, A. C. Ehlis, C. G. Baehne, M. M. Richter, A. J. Fallgatter,
Event-related visual versus blocked motor task: Detection of specific cortical activation
patterns with functional near-infrared spectroscopy. Neuropsychobiology 53, 77–82 (2006).
39. U. Chaudhary, N. Birbaumer, A. Ramos-Murguialday, Brain–computer interfaces for
communication and rehabilitation. Nat. Rev. Neurol. 12, 513–525 (2016).
40. M. Mihara, N. Hattori, M. Hatakenaka, H. Yagura, T. Kawano, T. Hino, I. Miyai, Near-infrared
spectroscopy-mediated neurofeedback enhances efficacy of motor imagery-based
training in poststroke victims: A pilot study. Stroke 44, 1091–1098 (2013).
41. K. Yarrow, P. Brown, J. W. Krakauer, Inside the brain of an elite athlete: The neural
processes that support high achievement in sports. Nat. Rev. Neurosci. 10, 585–596 (2009).
42. T. Durduran, G. Yu, M. G. Burnett, J. A. Detre, J. H. Greenberg, J. Wang, C. Zhou, A. G. Yodh,
Diffuse optical measurement of blood flow, blood oxygenation, and metabolism in a
human brain during sensorimotor cortex activation. Opt. Lett. 29, 1766–1768 (2004).
43. L. Xu, B. Wang, G. Xu, W. Wang, Z. Liu, Z. Li, Functional connectivity analysis using fNIRS in
healthy subjects during prolonged simulated driving. Neurosci. Lett. 640, 21–28 (2017).
44. F. S. Racz, P. Mukli, Z. Nagy, A. Eke, Increased prefrontal cortex connectivity during
cognitive challenge assessed by fNIRS imaging. Biomed. Opt. Express 8, 3842–3855 (2017).
45. N. Arizono, Y. Ohmura, S. Yano, T. Kondo, Functional connectivity analysis of NIRS data
under rubber hand illusion to find a biomarker of sense of ownership. Neural Plast. 2016,
6726238 (2016).
46. D. J. Serrien, R. B. Ivry, S. P. Swinnen, Dynamics of hemispheric specialization
and integration in the context of motor control. Nat. Rev. Neurosci. 7, 160–166
(2006).
47. J. R. Flanagan, M. C. Bowman, R. S. Johansson, Control strategies in object manipulation
tasks. Curr. Opin. Neurobiol. 16, 650–659 (2006).
48. S. P. Swinnen, N. Wenderoth, Two hands, one brain: Cognitive neuroscience of bimanual
skill. Trends Cogn. Sci. 8, 18–25 (2004).
49. F. Faul, E. Erdfelder, A.-G. Lang, A. Buchner, G*Power 3: A flexible statistical power
analysis program for the social, behavioral, and biomedical sciences. Behav. Res. Methods
39, 175–191 (2007).
50. G. Sankaranarayanan, H. Lin, V. S. Arikatla, M. Mulcare, L. Zhang, A. Derevianko, R. Lim,
D. Fobert, C. Cao, S. D. Schwaitzberg, D. B. Jones, S. De, Preliminary face and construct
validation study of a virtual basic laparoscopic skill trainer. J. Laparoendosc. Adv. Surg.
Tech. 20, 153–157 (2010).
51. M. A. Yücel, C. M. Aasted, M. P. Petkov, D. Borsook, D. A. Boas, L. Becerra, Specificity of
hemodynamic brain responses to painful stimuli: A functional near-infrared spectroscopy
study. Sci. Rep. 5, 9469 (2015).
52. C. M. Aasted, M. A. Yücel, R. J. Cooper, J. Dubb, D. Tsuzuki, L. Becerra, M. P. Petkov,
D. Borsook, I. Dan, D. A. Boas, Anatomical guidance for functional near-infrared
spectroscopy: AtlasViewer tutorial. Neurophotonics 2, 020801 (2015).
53. T. J. Huppert, S. G. Diamond, M. A. Franceschini, D. A. Boas, HomER: A review of
time-series analysis methods for near-infrared spectroscopy of the brain. Appl. Opt. 48,
D280–D298 (2009).
54. M. Cope, D. T. Delpy, System for long-term measurement of cerebral blood and tissue
oxygenation on newborn infants by near infra-red transillumination. Med. Biol.
Eng. Comput. 26, 289–294 (1988).
55. D. T. Delpy, M. Cope, P. van der Zee, S. Arridge, S. Wray, J. Wyatt, Estimation of optical
pathlength through tissue from direct time of flight measurement. Phys. Med. Biol. 33,
1433–1442 (1988).
56. A. Duncan, J. H. Meek, M. Clemence, C. E. Elwell, P. Fallon, L. Tyszczuk, M. Cope,
D. T. Delpy, Measurement of cranial optical path length as a function of age using phase
resolved near infrared spectroscopy. Pediatr. Res. 39, 889–894 (1996).
57. M. A. Franceschini, D. K. Joseph, T. J. Huppert, S. G. Diamond, D. A. Boas, Diffuse optical
imaging of the whole head. J. Biomed. Opt. 11, 054007 (2006).
58. Y. Zhang, D. H. Brooks, M. A. Franceschini, D. A. Boas, Eigenvector-based spatial filtering
for reduction of physiological interference in diffuse optical imaging. J. Biomed. Opt. 10,
11014 (2005).
59. J. C. Ye, S. Tak, K. E. Jang, J. Jung, J. Jang, NIRS-SPM: Statistical parametric mapping for
near-infrared spectroscopy. Neuroimage 44, 428–447 (2009).
60. B. G. Tabachnick, L. S. Fidell, Using Multivariate Statistics (Allyn & Bacon, ed. 5, 2007).
Acknowledgments: We would like to acknowledge D. Boas and J. Dubb from the Martinos
Imaging Center for their significant support throughout this project. We would also like to
acknowledge A. “Buzz” DiMartino and his team at TechEn for their gracious support for the
fNIRS hardware components. Finally, we would like to thank N. Fernández for his contributions
to the illustrations. Funding: This work was supported by funding provided by NIH/National
Institute of Biomedical Imaging and Bioengineering grants 2R01EB005807, 5R01EB010037,
1R01EB009362, 1R01EB014305, and R01EB019443. Author contributions: A.N. conceived the
original idea. A.N., X.I., S.D., and M.A.Y. designed the research study. D.W.G., C.C., S.D.S., and
M.A.Y. contributed to study logistics, subject recruitment, and clinical expertise. A.N.
performed research studies. A.N., M.A.Y., and U.K. performed data processing and analyses of
results. A.N., M.A.Y., U.K., X.I., and S.D. interpreted the results. A.N. and X.I. wrote the
manuscript, and M.A.Y., U.K., and S.D. edited the manuscript. All authors discussed conclusions
and commented on the manuscript. Competing interests: The authors declare that they have
no competing interests. Data and materials availability: Data needed to evaluate the
conclusions in the paper are present in the paper and/or the Supplementary Materials.
Additional data related to this paper are available through a linked repository (https://github.com/arun-nemani/Surgical-skill-classification). Correspondence and requests for materials
should be addressed to X.I. Further information regarding data, figures, and other research
findings in this study may be requested from corresponding authors.
Submitted 20 February 2018
Accepted 29 August 2018
Published 3 October 2018
10.1126/sciadv.aat3807
Citation: A. Nemani, M. A. Yücel, U. Kruger, D. W. Gee, C. Cooper, S. D. Schwaitzberg, S. De,
X. Intes, Assessing bimanual motor skills with optical neuroimaging. Sci. Adv. 4, eaat3807 (2018).