Russell A. Poldrack’s research while affiliated with Stanford University and other places


Publications (575)


Unintended bias in the pursuit of collinearity solutions in fMRI analysis
  • Preprint

January 2025 · 4 Reads

Jeanette Alane Mumford · [...] · Russell A Poldrack

In task functional magnetic resonance imaging (fMRI), collinearity between task regressors in a design matrix may impact power. Researchers often optimize task designs by assessing collinearity between task regressors to minimize downstream effects. However, some methods intended to reduce collinearity during optimization and data analysis may fail, in some cases introducing unintended bias into the parameter estimates. These issues are relevant to all task-based fMRI studies; we describe them and illustrate them using Monetary Incentive Delay (MID) task fMRI data from the Adolescent Brain Cognitive Development (ABCD®) Study. Specifically, we show that omitting regressors for certain task components, using impulse regressors for extended activations, and ignoring response time adjustments can bias common contrast estimates. We present a "Saturated" model that includes regressors for all stimuli and response times, which minimizes bias in MID task simulations and validly estimates task-relevant whole-brain activity, offering greater flexibility in studying contrasts that might otherwise be avoided due to potential biases.
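As a concrete illustration of the kind of design-matrix diagnostic the abstract refers to, the sketch below computes variance inflation factors (VIFs) for the columns of a design matrix. This is a generic collinearity check, not the authors' code; the toy regressors are invented for the example.

```python
import numpy as np

def variance_inflation_factors(X):
    """Return one VIF per column of a design matrix X (time points x regressors).

    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing column j on the
    remaining columns (plus an intercept); large VIFs flag collinear regressors.
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    vifs = []
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)

# Toy design: two nearly identical regressors plus one independent regressor.
rng = np.random.default_rng(0)
a = rng.standard_normal(200)
design = np.column_stack([a, a + 0.05 * rng.standard_normal(200), rng.standard_normal(200)])
print(variance_inflation_factors(design))  # first two VIFs are large, third is near 1
```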


Figure captions: (1) number and proportion of responses to the question asking whether data management errors caused the retraction; (2) frequency and percentages of the RDM error types that led to the retraction (85 responses remained after excluding missing and irrelevant responses); (3) frequency and percentages of the causes that led to the retraction (93 responses remained after exclusions); (4) relationship between the most frequent causes and the RDM error types (42 responses, covering only the four most frequent causes); (5) stress that the retraction caused the authors, on a seven-point Likert-type scale (x-axis: scale; y-axis: number of responses).
Opening the black box of article retractions: exploring the causes and consequences of data management errors

December 2024 · 34 Reads · 1 Citation

The retraction of an article is probably the most severe outcome of a scientific project. While great emphasis has been placed on articles retracted due to scientific misconduct, studies show many retractions are due to honest errors. Unfortunately, in most cases, retraction notices do not provide sufficient information to determine the specific types and causes of these errors. In our study, we explored the research data management (RDM) errors that led to retractions from the authors’ perspectives. We collected responses from 97 researchers from a broad range of disciplines using a survey design. Our exploratory results suggest that just about any type of RDM error can lead to the retraction of a paper, and these errors can occur at any stage of the data management workflow. The most frequently occurring cause of an error was inattention. The retraction was an extremely stressful experience for the majority of our sample, and most surveyed researchers introduced changes to their data management workflow as a result. Based on our findings, we propose that researchers revise their data management workflows as a whole instead of focusing on certain aspects of the process, with particular emphasis on tasks vulnerable to human fallibility.


Generative dynamical models for classification of rsfMRI data

December 2024 · 3 Reads · Network Neuroscience

The growing availability of large-scale neuroimaging datasets and user-friendly machine learning tools has led to a recent surge in studies that use fMRI data to predict psychological or behavioral variables. Many such studies classify fMRI data on the basis of static features, but fewer try to leverage brain dynamics for classification. Here, we pilot a generative, dynamical approach for classifying resting-state fMRI (rsfMRI) data. By fitting separate hidden Markov models to the classes in our training data and assigning class labels to test data based on their likelihood under those models, we are able to take advantage of dynamical patterns in the data without confronting the statistical limitations of some other dynamical approaches. Moreover, we demonstrate that hidden Markov models are able to successfully perform within-subject classification on the MyConnectome dataset solely on the basis of transition probabilities among their hidden states. On the other hand, individual Human Connectome Project subjects cannot be identified on the basis of hidden state transition probabilities alone—although a vector autoregressive model does achieve high performance. These results demonstrate a dynamical classification approach for rsfMRI data that shows promising performance, particularly for within-subject classification, and has the potential to afford greater interpretability than other approaches.
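The classification scheme described here, fitting one hidden Markov model per class and labeling test data by which class's model assigns the higher likelihood, can be sketched with an off-the-shelf HMM library. This is a generic illustration on synthetic data, not the authors' pipeline; the use of hmmlearn and the chosen hyperparameters are assumptions.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumes the hmmlearn package is installed

def fit_class_hmms(segments_by_class, n_states=5, seed=0):
    """Fit one Gaussian HMM per class; each segment is a (time points x features) array."""
    models = {}
    for label, segments in segments_by_class.items():
        X = np.vstack(segments)                   # concatenate segments for fitting
        lengths = [len(seg) for seg in segments]  # tell hmmlearn where each segment ends
        hmm = GaussianHMM(n_components=n_states, covariance_type="diag", random_state=seed)
        models[label] = hmm.fit(X, lengths)
    return models

def classify(models, segment):
    """Assign the label whose HMM gives the segment the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(segment))

# Toy usage with synthetic two-class data (10 "parcels", 120 time points per run).
rng = np.random.default_rng(0)
train = {
    "A": [rng.standard_normal((120, 10)) for _ in range(4)],
    "B": [rng.standard_normal((120, 10)) + 0.5 for _ in range(4)],
}
models = fit_class_hmms(train)
print(classify(models, rng.standard_normal((120, 10)) + 0.5))  # expected: "B"
```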


Can I have your data? Recommendations and practical tips for sharing neuroimaging data upon a direct personal request

November 2024 · 7 Reads

Sharing neuroimaging data upon a direct personal request can be challenging both for researchers who request the data and for those who agree to share their data. Unlike sharing through repositories under standardized protocols and data use/sharing agreements, each party often needs to negotiate the terms of sharing and use of data case by case. This negotiation unfolds against a complex backdrop of ethical and regulatory requirements along with technical hurdles related to data transfer and management. These challenges can significantly delay the data sharing process, and if not properly addressed, lead to potential tensions and disputes between sharing parties. This study aims to help researchers navigate these challenges by examining what to consider during the process of data sharing and by offering recommendations and practical tips. We first divided the process of sharing data upon a direct personal request into six stages: requesting data, reviewing the applicability of and requirements under relevant laws and regulations, negotiating terms for sharing and use of data, preparing and transferring data, managing and analyzing data, and sharing the outcome of secondary analysis of data. For each stage, we identified factors to consider through a review of ethical principles for human subject research; individual institutions' and funding agencies' policies; and applicable regulations in the U.S. and E.U. We then provide practical insights from a large-scale ongoing neuroimaging data sharing project led by one of the authors as a case study. In this case study, PET/MRI data from a total of 782 subjects were collected through direct personal requests across seven sites in the USA, Canada, the UK, Denmark, Germany, and Austria. The case study also revealed that researchers should typically expect to spend an average of 8 months on data sharing efforts, with the timeline extending up to 24 months in some cases due to additional data requests or necessary corrections. The current state of data sharing via direct requests is far from ideal and presents significant challenges, particularly for early career scientists, who often have a limited time frame – typically two to three years – to work on a project. The best practices and practical tips offered in this study will help researchers streamline the process of sharing neuroimaging data while minimizing friction and frustrations.


Figure 4 | Standard deviation of the signal through time. This visualization shows the standard deviation of the BOLD signal in each voxel, plotted as sagittal and axial cross sections, with yellow representing the most extreme values. The eyes and arteries will typically be the brightest yellow, a result of physiological motion.
Figure 5 | View of the background of the voxel-wise average of the BOLD time series. Mosaic view of the average BOLD signal, with background enhancement.
Figure 6 | Voxel-wise average of the BOLD time series, zoomed in to cover just the brain. This visualization shows the average value of the BOLD signal in each voxel across the entire scan duration, plotted as sagittal and axial cross sections.
Quality assessment and control of unprocessed anatomical, functional, and diffusion MRI of the human brain using MRIQC

October 2024 · 45 Reads

Quality control of MRI data prior to preprocessing is fundamental, as substandard data are known to spuriously increase variability. Currently, no automated or manual method reliably identifies subpar images given pre-specified exclusion criteria. In this work, we propose a protocol describing how to carry out the visual assessment of T1-weighted, T2-weighted, functional, and diffusion MRI scans of the human brain with the visual reports generated by MRIQC. The protocol describes how to execute the software on all the images of the input dataset using typical research settings (i.e., a high-performance computing cluster). We then describe how to screen the visual reports generated by MRIQC to identify artifacts and potential quality issues, and how to annotate the latter with the "rating widget", a utility that enables rapid annotation and minimizes bookkeeping errors. Integrating proper quality control checks on the unprocessed data is fundamental to producing reliable statistical results and crucial to identifying faults in the scanning settings, preempting the acquisition of large datasets with persistent artifacts that should have been addressed as they emerged.
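Visual assessment is the focus of the protocol, but the group-level image quality metrics (IQMs) that MRIQC also writes out can help prioritize which reports to inspect first. The sketch below is a complement to, not a substitute for, the visual screening; the file path and column names such as fd_mean, tsnr, and bids_name are assumptions and should be checked against the group TSV your own MRIQC run produced.

```python
import pandas as pd

# Hypothetical location of MRIQC's group-level IQM table for functional runs.
iqms = pd.read_csv("derivatives/mriqc/group_bold.tsv", sep="\t")

def robust_z(series):
    """Median/MAD-based z-score, less sensitive to the very outliers we want to flag."""
    mad = (series - series.median()).abs().median()
    return (series - series.median()) / (1.4826 * mad)

# Flag runs with unusually high head motion or unusually low temporal SNR.
flagged = iqms[(robust_z(iqms["fd_mean"]) > 3) | (robust_z(iqms["tsnr"]) < -3)]
print(flagged["bids_name"].tolist())  # candidates to prioritize during visual review
```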


Figure 1. An example of a T1w image before and after defacing. Defacing is typically implemented by zeroing the voxels around the face. The background noise visualization is extracted from the MRIQC visual report and illustrates that eye spillover is one example of information that is useful for evaluating image quality but is removed by defacing.
Figure 2. Defacing biases human assessment of image quality, particularly when image quality is low.
Defacing biases visual quality assessments of structural MRI

October 2024 · 17 Reads

A critical requirement before sharing human neuroimaging data is removing facial features to protect individuals' privacy. However, this process not only redacts identifiable information about individuals but also removes non-identifiable information, introducing undesired variability into downstream analysis and interpretation. This registered report investigated the degree to which defacing altered the quality assessment of T1-weighted images of the human brain from the openly available IXI dataset. The effect of defacing on manual quality assessment was investigated in a single-site subset of the dataset (N=185). By comparing two linear mixed-effects models fit to four trained human raters' quality scores for the same set of images in two conditions, "nondefaced" (that is, preserving facial features) and "defaced", we determined that the raters' perception of quality was significantly influenced by defacing. In addition, we investigated these biases in automated quality assessments by applying repeated-measures multivariate ANOVA (rm-MANOVA) to the image quality metrics extracted with MRIQC on the full IXI dataset (N=581; three acquisition sites). This study found that defacing altered quality assessments by humans, while MRIQC's quality metrics were mostly insensitive to defacing.
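The comparison described in the abstract, mixed-effects models of raters' quality scores in the defaced and nondefaced conditions, can be illustrated with a small simulated example. This is a hedged sketch of the general approach only: the variable names, the simulated effect size, and the random-intercept structure are assumptions rather than the registered report's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Simulate a long-format table: one quality rating per (image, rater, condition).
rng = np.random.default_rng(0)
rows = []
for image in range(50):
    base = rng.normal(3.0, 0.5)  # image-specific "true" quality
    for rater in ["r1", "r2", "r3", "r4"]:
        for condition, shift in [("nondefaced", 0.0), ("defaced", -0.2)]:
            rows.append({"image": image, "rater": rater, "condition": condition,
                         "rating": base + shift + rng.normal(0, 0.3)})
data = pd.DataFrame(rows)

# Random intercept per image; fit by maximum likelihood so the likelihood-ratio
# test on the fixed effect of defacing is valid.
full = smf.mixedlm("rating ~ condition", data, groups=data["image"]).fit(reml=False)
null = smf.mixedlm("rating ~ 1", data, groups=data["image"]).fit(reml=False)
lr = 2 * (full.llf - null.llf)
print(f"LR = {lr:.2f}, p = {stats.chi2.sf(lr, df=1):.2g}")
```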


Predicting Task Activation Maps from Resting-State Functional Connectivity using Deep Learning

September 2024 · 59 Reads

Recent work has shown that deep learning is a powerful tool for predicting brain activation patterns evoked through various tasks using resting state features. We replicate and improve upon this recent work to introduce two models, BrainSERF and BrainSurfGCN, that perform at least as well as the state-of-the-art while greatly reducing memory and computational footprints. Our performance analysis observed that low predictability was associated with a possible lack of task engagement derived from behavioral performance. Furthermore, a deficiency in model performance was also observed for closely matched task contrasts, likely due to high individual variability confirmed by low test-retest reliability. Overall, we successfully replicate recently developed deep learning architecture and provide scalable models for further research.
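As context for what "predicting task activation from resting-state features" means in practice, the sketch below fits the simplest possible baseline: a ridge regression from per-vertex connectivity features to per-vertex task activation, evaluated by correlating predicted and observed maps in held-out subjects. The synthetic data, shapes, and linear model are illustrative assumptions and bear no relation to the BrainSERF or BrainSurfGCN architectures.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: per subject, resting-state features per vertex and a
# task activation value per vertex, generated from a shared linear mapping.
rng = np.random.default_rng(0)
n_subjects, n_vertices, n_features = 20, 1000, 50
true_weights = rng.standard_normal(n_features)
X = rng.standard_normal((n_subjects, n_vertices, n_features))
y = X @ true_weights + 0.5 * rng.standard_normal((n_subjects, n_vertices))

train_idx, test_idx = train_test_split(np.arange(n_subjects), test_size=5, random_state=0)
model = Ridge(alpha=1.0).fit(X[train_idx].reshape(-1, n_features), y[train_idx].ravel())

# Score each held-out subject by the correlation between predicted and observed maps.
for s in test_idx:
    r = np.corrcoef(model.predict(X[s]), y[s])[0, 1]
    print(f"subject {s}: r = {r:.2f}")
```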


Impact of analytic decisions on test–retest reliability of individual and group estimates in functional magnetic resonance imaging: A multiverse analysis using the monetary incentive delay task

September 2024 · 10 Reads · 1 Citation · Imaging Neuroscience

Empirical studies reporting low test–retest reliability of individual blood oxygen-level dependent (BOLD) signal estimates in functional magnetic resonance imaging (fMRI) data have resurrected interest among cognitive neuroscientists in methods that may improve reliability in fMRI. Over the last decade, several individual studies have reported that modeling decisions, such as smoothing, motion correction, and contrast selection, may improve estimates of test–retest reliability of BOLD signal estimates. However, it remains an empirical question whether certain analytic decisions consistently improve individual- and group-level reliability estimates in an fMRI task across multiple large, independent samples. This study used three independent samples (Ns: 60, 81, 119) that collected the same task (the Monetary Incentive Delay task) across two runs and two sessions to evaluate the effects of analytic decisions on individual (intraclass correlation coefficient [ICC(3,1)]) and group (Jaccard/Spearman rho) reliability estimates of BOLD activity in task fMRI data. The analytic decisions in this study vary across four categories: smoothing kernel (five options), motion correction (four options), task parameterization (three options), and task contrasts (four options), totaling 240 different pipeline permutations. Across all 240 pipelines, the median ICC estimates are consistently low, with a maximum median ICC estimate of .43–.55 across the three samples. The analytic decisions with the greatest impact on the median ICC and group similarity estimates are the Implicit Baseline contrast, the Cue Model parameterization, and a larger smoothing kernel. Using an Implicit Baseline in a contrast condition meaningfully increased group similarity and ICC estimates compared with using the Neutral cue. This effect was largest for the Cue Model parameterization; however, improvements in reliability came at the cost of interpretability. This study illustrates that estimates of reliability in the MID task are consistently low and variable in small samples, and that higher test–retest reliability may not always improve interpretability of the estimated BOLD signal.
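For readers unfamiliar with the reliability metric used throughout, the sketch below computes ICC(3,1) (Shrout and Fleiss two-way mixed model, consistency, single measurement) from a subjects-by-sessions matrix of estimates. It is a generic textbook implementation on simulated data, not the study's pipeline.

```python
import numpy as np

def icc_3_1(Y):
    """ICC(3,1) for a (n_subjects x k_sessions) matrix of parameter estimates."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ss_total = np.sum((Y - grand) ** 2)
    ss_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2)  # between-subject
    ss_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2)  # between-session
    ss_error = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# Toy example: 100 "subjects" measured at 2 sessions, with equal subject-level
# signal variance and session-level noise variance, so the expected ICC is ~0.5.
rng = np.random.default_rng(0)
subject_effect = rng.normal(0, 1, size=(100, 1))
sessions = subject_effect + rng.normal(0, 1, size=(100, 2))
print(round(icc_3_1(sessions), 2))
```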


Reporting checklists in neuroimaging: promoting transparency, replicability, and reproducibility

September 2024 · 98 Reads · 1 Citation · Neuropsychopharmacology: official publication of the American College of Neuropsychopharmacology

Neuroimaging plays a crucial role in understanding brain structure and function, but the lack of transparency, reproducibility, and reliability of findings is a significant obstacle for the field. To address these challenges, there are ongoing efforts to develop reporting checklists for neuroimaging studies to improve the reporting of fundamental aspects of study design and execution. In this review, we first define what we mean by a neuroimaging reporting checklist and then discuss how a reporting checklist can be developed and implemented. We consider the core values that should inform checklist design, including transparency, repeatability, data sharing, diversity, and supporting innovations. We then share experiences with currently available neuroimaging checklists. We review the motivation for creating checklists and whether checklists achieve their intended objectives, before proposing a development cycle for neuroimaging reporting checklists and describing each implementation step. We emphasize the importance of reporting checklists in enhancing the quality of data repositories and consortia, how they can support education and best practices, and how emerging computational methods, like artificial intelligence, can help checklist development and adherence. We also highlight the role that funding agencies and global collaborations can play in supporting the adoption of neuroimaging reporting checklists. We hope this review will encourage better adherence to available checklists and promote the development of new ones, and ultimately increase the quality, transparency, and reproducibility of neuroimaging research.


Fig. 2. : Pilot data, acquired in a 5-week-old participant, showing two alternative T2-weighted protocols evaluated during optimization on the Siemens platform. The vendor-matched protocol harmonized TR, TE, and echo train length (ETL) across all vendors. However, due to variations in how each vendor implements variable-flip-angle turbo spin echo, we could achieve equivalent or superior image contrast in a shorter time with vendor-specific choices of TR, TE, and ETL.
Fig. 5. : Axial slices of unprocessed diffusion-weighted images from an HBCD acquisition. Slices acquired with AP and PA phase-encoding directions are shown in the left and right columns, respectively. Gradient strengths in b (s/mm²) are shown per row, with the number of images collected at that b-value in parentheses. The AP and PA images shown at b>0 do not share the same gradient direction, as the gradient directions are split across the phase-encoding directions.
Fig. 9. : Summary of the fully automated MRS data processing workflow. The workflow includes automated data transfer and ingestion, integrates derivatives from the HBCD MRI analysis, performs the MRS analysis, and generates quantitative results and summary reports.
Fig. 10. : High-level schematic of the CBRAIN user interface, data management and processing components for HBCD study.
Quantifying Brain Development in the HEALthy Brain and Child Development (HBCD) Study: The Magnetic Resonance Imaging and Spectroscopy Protocol

September 2024 · 158 Reads · 6 Citations · Developmental Cognitive Neuroscience

The HEALthy Brain and Child Development (HBCD) Study, a multi-site prospective longitudinal cohort study, will examine human brain, cognitive, behavioral, social, and emotional development beginning prenatally and planned to continue through early childhood. The acquisition of multimodal magnetic resonance-based brain development data is central to the study's core protocol. However, application of Magnetic Resonance Imaging (MRI) methods in this population is complicated by technical challenges and the difficulties of imaging in early life. Overcoming these challenges requires an innovative and harmonized approach, combining age-appropriate acquisition protocols with specialized pediatric neuroimaging strategies. The HBCD MRI Working Group aimed to establish a core acquisition protocol for all 27 HBCD Study recruitment sites to measure brain structure, function, microstructure, and metabolites. Acquisition parameters of individual modalities have been matched across MRI scanner platforms for harmonized acquisitions, and state-of-the-art technologies are employed to enable faster and motion-robust imaging. Here, we provide an overview of the HBCD MRI protocol, including the design decisions for individual modalities and preliminary data. The result will be an unparalleled resource for examining early neurodevelopment, enabling the larger scientific community to assess normative trajectories from birth through childhood and to examine the genetic, biological, and environmental factors that help shape the developing brain.


Citations (70)


... The currently funded protocol includes visits 1 (prenatal) through 4 (9-15 months), as shown schematically in Fig. 1. For details about the protocol and each of the domains sampled (e.g., biospecimens, EEG, MRI, wearable sensors), please consult the additional papers in this Special Issue (Cioffredi et al., 2024a, 2024b; Dean et al., 2024; Fox et al., 2024; Nelson et al., 2024; Pini et al., 2024; Sullivan et al., 2024). This paper aims to describe the role of the SLCC in the design and implementation of this protocol. ...

Reference:

¿Dónde están? Hispanic/Latine inclusion, diversity and representation in the HEALthy Brain and Child Development Study (HBCD)
Quantifying Brain Development in the HEALthy Brain and Child Development (HBCD) Study: The Magnetic Resonance Imaging and Spectroscopy Protocol

Developmental Cognitive Neuroscience

... Whilst the IAPS is good for standardising images used within studies examining affect, further standardisation in emotional processing task design is necessary. A shift towards consistency through the utilisation of reporting checklists [80], akin to what is observed in the cue reactivity literature with the formation of the ENIGMA Addiction Cue Reactivity Initiative [81,82], would help improve reliability and replicability of results. A similar initiative should be developed within the field of emotional processing. ...

Reporting checklists in neuroimaging: promoting transparency, replicability, and reproducibility
  • Citing Article
  • September 2024

Neuropsychopharmacology: official publication of the American College of Neuropsychopharmacology

... Our analysis centred on pre-processing steps that follow an initial minimal processing pipeline, as we anticipated that these would produce the most significant differences in FC and brain-behaviour effect sizes. Recent findings suggest that decisions made during the minimal preprocessing phase, such as the choice of registration template or processing package, can also have a substantial impact on the resulting FC (Li et al., 2024). We used the same normalization template in all our analyses, but it is possible that such choices also influence the strength of BWAS effect sizes. ...

Moving beyond processing- and analysis-related variation in resting-state functional brain imaging

Nature Human Behaviour

... We obtained imaging and phenotypic data from the Stanford Science of Behavior Change project (https://scienceofbehaviorchange.org/projects/poldrack-marsch/) (Bissett et al. 2024). The sample consisted of 82 meditation-naïve participants (age M = 23.6, ...

Cognitive tasks, anatomical MRI, and functional MRI data evaluating the construct of self-regulation

Scientific Data

... The advancement of Generative Artificial Intelligence (Jovanovic & Campbell, 2022), most notably via Large Language Models (Zhou et al. 2023), has opened new avenues for data analysis across various fields (e.g., Shang & Huang, 2024; Linkon et al. 2024; Salah et al. 2023; DuPre & Poldrack, 2024). ...

The future of data analysis is now: Integrating generative AI in neuroimaging methods development

Imaging Neuroscience

... We provide a complement here, using tractometry, which allows for the evaluation of diffusion characteristics along the lengths of known tracts. Similar tractometry-based analysis results for a subset of HCP subjects have been published as part of larger data releases containing subjects from multiple datasets (Avesani et al., 2019; Lerma-Usabiaga et al., 2020; Hayashi et al., 2023). Here, we provide tractometry results for all subjects in HCP that have a complete dMRI acquisition. ...

Author Correction: brainlife.io: a decentralized and open-source cloud platform to support neuroscience research

Nature Methods

... Moreover, they may find it challenging to extract the relevant insights from literature aimed at a different audience with partly different needs. In contrast, literature explicitly directed toward applied ML users tends either to focus on general guidelines for ML-based predictive modeling, lacking detailed coverage of HP tuning (e.g., Kuhn and Johnson, 2013; Pfob et al., 2022; Lones, 2024; Kapoor et al., 2024; Poldrack et al., 2020; Collins et al., 2024a; Van Royen et al., 2023), or to address HP tuning only within specific research areas (e.g., Hosseini et al., 2020; Dunias et al., 2024). Additionally, much of the existing HP tuning literature does not consider preprocessing HPs. ...

REFORMS: Consensus-based Recommendations for Machine-learning-based Science
  • Citing Article
  • May 2024

Science Advances

... Alternatively, researchers might only share their data [9], and the preprocessing and model building could all happen within these infrastructures [10]. This fosters an open science environment where AI models can be tested and refined across different datasets to accelerate the development of clinically applicable tools. ...

brainlife.io: a decentralized and open-source cloud platform to support neuroscience research

Nature Methods

... However, researchers still navigate complex analysis choices. Multiverse analysis studies (Demidenko et al., 2024; Kristanto et al., 2024) explore the impact that these choices have on results. ...

Impact of analytic decisions on test-retest reliability of individual and group estimates in functional magnetic resonance imaging: a multiverse analysis using the monetary incentive delay task

... This study also indicated that the ED field lags behind in terms of the evidence needed to implement preventive strategies for individuals with sub-threshold ED symptoms, recommending multi-centric cohort studies to identify modifiable risk factors. Of note, a potential example of such a project has been recently implemented in the CHR-P field, with an ongoing large-scale observational cohort study collecting and analyzing multimodal data to improve prognostic precision [79]. If translated to the ED field, an analogous project collecting multimodal data like clinical, environmental, cognitive, and neuroimaging data could increase precision in determining risk factors and prognosis and selecting appropriate preventive treatments for eating pathology. ...

Accelerating Medicines Partnership® Schizophrenia (AMP® SCZ): Rationale and Study Design of the Largest Global Prospective Cohort Study of Clinical High Risk for Psychosis

Schizophrenia Bulletin