The Human Connectome Project: A data acquisition perspective

Department of Anatomy & Neurobiology, Washington University, St. Louis, MO, USA.
NeuroImage (Impact Factor: 6.13). 02/2012; 62(4):2222-31. DOI: 10.1016/j.neuroimage.2012.02.018
Source: PubMed

ABSTRACT: The Human Connectome Project (HCP) is an ambitious 5-year effort to characterize brain connectivity and function and their variability in healthy adults. This review summarizes the data acquisition plans being implemented by a consortium of HCP investigators who will study a population of 1200 subjects (twins and their non-twin siblings) using multiple imaging modalities along with extensive behavioral and genetic data. The imaging modalities will include diffusion imaging (dMRI), resting-state fMRI (R-fMRI), task-evoked fMRI (T-fMRI), T1- and T2-weighted MRI for structural and myelin mapping, plus combined magnetoencephalography and electroencephalography (MEG/EEG). Given the importance of obtaining the best possible data quality, we discuss the efforts underway during the first two years of the grant (Phase I) to refine and optimize many aspects of HCP data acquisition, including a new 7T scanner, a customized 3T scanner, and improved MR pulse sequences.

ABSTRACT: Recent years have seen neuroimaging data sets becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complicated to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by making code modular and reusable, and managing its execution with a processing engine that tracks what has been completed and what needs to be (re)done. Analysis is accelerated by optional parallel processing of independent tasks on cluster or cloud computing resources. A pipeline comprises a series of modules that each perform a specific task. The processing engine keeps track of the data, calculating a map of upstream and downstream dependencies for each module. Existing modules are available for many analysis tasks, such as SPM-based fMRI preprocessing, individual and group level statistics, voxel-based morphometry, tractography, and multi-voxel pattern analyses (MVPA). However, aa also allows for full customization, and encourages efficient management of code: new modules may be written with only a small code overhead. aa has been used by more than 50 researchers in hundreds of neuroimaging studies comprising thousands of subjects. It has been found to be robust, fast, and efficient for simple single-subject studies up to multimodal pipelines on hundreds of subjects. It is attractive to both novice and experienced users. aa can reduce the amount of time neuroimaging laboratories spend performing analyses and reduce errors, expanding the range of scientific questions it is practical to address.
Frontiers in Neuroinformatics 01/2015; 8:90. DOI: 10.3389/fninf.2014.00090
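
The dependency-tracking idea behind aa can be illustrated with a short sketch. The code below is not the aa API (aa itself is distributed as MATLAB code); it is a minimal Python illustration, with made-up module names and payloads, of how a processing engine can resolve each module's upstream dependencies and skip work that has already been completed.

```python
# A minimal sketch of the dependency-tracking idea, not the aa API itself:
# a pipeline of named modules, an engine that resolves each module's upstream
# dependencies, and a cache so completed work is never repeated.

class Module:
    def __init__(self, name, func, depends_on=()):
        self.name = name                    # unique module name, e.g. "realign"
        self.func = func                    # task: takes upstream outputs, returns its own
        self.depends_on = list(depends_on)  # names of upstream modules

class Pipeline:
    def __init__(self, modules):
        self.modules = {m.name: m for m in modules}
        self.done = {}                      # completed outputs, keyed by module name

    def run(self, target):
        """Run `target` after any unfinished upstream dependencies (depth-first)."""
        if target in self.done:
            return self.done[target]        # already completed: skip it
        module = self.modules[target]
        upstream = {dep: self.run(dep) for dep in module.depends_on}
        print(f"running {target}")
        self.done[target] = module.func(upstream)
        return self.done[target]

# Hypothetical three-stage fMRI pipeline; module names and payloads are placeholders.
pipe = Pipeline([
    Module("convert",    lambda up: {"nifti": "sub-01_bold.nii"}),
    Module("realign",    lambda up: {"realigned": up["convert"]["nifti"]}, ["convert"]),
    Module("firstlevel", lambda up: {"stats": up["realign"]["realigned"]}, ["realign"]),
])
pipe.run("firstlevel")   # runs convert, realign, firstlevel in dependency order
pipe.run("firstlevel")   # second call re-runs nothing: all stages are cached
```

A real engine of the kind the abstract describes would also parallelize independent branches and persist its completion map between sessions; both are omitted here for brevity.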
ABSTRACT: The ongoing 1000 brains study (1000BRAINS) is an epidemiological and neuroscientific investigation of structural and functional variability in the human brain during aging. The two recruitment sources are the 10-year follow-up cohort of the German Heinz Nixdorf Recall (HNR) Study, and the HNR MultiGeneration Study cohort, which comprises spouses and offspring of HNR subjects. The HNR is a longitudinal epidemiological investigation of cardiovascular risk factors, with a comprehensive collection of clinical, laboratory, socioeconomic, and environmental data from population-based subjects aged 45-75 years on inclusion. HNR subjects underwent detailed assessments in 2000, 2006, and 2011, and completed annual postal questionnaires on health status. 1000BRAINS accesses these HNR data and applies a separate protocol comprising: neuropsychological tests of attention, memory, executive functions and language; examination of motor skills; ratings of personality, life quality, mood and daily activities; analysis of laboratory and genetic data; and state-of-the-art magnetic resonance imaging (MRI, 3 Tesla) of the brain. The latter includes (i) 3D-T1- and 3D-T2-weighted scans for structural analyses and myelin mapping; (ii) three diffusion imaging sequences optimized for diffusion tensor imaging, high-angular resolution diffusion imaging for detailed fiber tracking and for diffusion kurtosis imaging; (iii) resting-state and task-based functional MRI; and (iv) fluid-attenuated inversion recovery and MR angiography for the detection of vascular lesions and the mapping of white matter lesions. The unique design of 1000BRAINS allows: (i) comprehensive investigation of various influences including genetics, environment and health status on variability in brain structure and function during aging; and (ii) identification of the impact of selected influencing factors on specific cognitive subsystems and their anatomical correlates.
Frontiers in Aging Neuroscience (Impact Factor: 2.84). 07/2014; 6:149. DOI: 10.3389/fnagi.2014.00149
ABSTRACT: The XNAT informatics platform is an open source data management tool used by biomedical imaging researchers around the world. An important feature of XNAT is its highly extensible architecture: users of XNAT can add new data types to the system to capture the imaging and phenotypic data generated in their studies. Until recently, XNAT has had limited capacity to broadcast the meaning of these data extensions to users, other XNAT installations, and other software. We have implemented a data dictionary service for XNAT, which is currently being used on ConnectomeDB, the Human Connectome Project (HCP) public data sharing website. The data dictionary service provides a framework to define key relationships between data elements and structures across the XNAT installation. This includes not just core data representing medical imaging data or subject or patient evaluations, but also taxonomical structures, security relationships, subject groups, and research protocols. The data dictionary allows users to define metadata for data structures and their properties, such as value types (e.g., textual, integers, floats) and valid value templates, ranges, or field lists. The service provides compatibility and integration with other research data management services by enabling easy migration of XNAT data to standards-based formats such as the Resource Description Framework (RDF), JavaScript Object Notation (JSON), and Extensible Markup Language (XML). It also facilitates the conversion of XNAT's native data schema into standard neuroimaging vocabularies and structures.
Frontiers in Neuroinformatics 07/2014; 8:65. DOI: 10.3389/fninf.2014.00065
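
As a rough illustration of what such a data dictionary entry can encode, the sketch below defines a hypothetical entry for a single data element, validates candidate values against its declared type and range, and exports it to JSON. The field names (element, valueType, validRange) are placeholders chosen for this example, not XNAT's actual schema; a comparable mapping could target RDF or XML serializations.

```python
import json

# Hypothetical data dictionary entry for one data element; field names are
# illustrative placeholders, not XNAT's native schema.
entry = {
    "element": "subject/age",              # the data element being described
    "valueType": "integer",                # e.g. textual, integer, float
    "validRange": {"min": 22, "max": 35},  # valid value range for this element
    "description": "Subject age at scan time, in years",
}

def validate(value, entry):
    """Check a candidate value against the entry's declared type and range."""
    if entry["valueType"] == "integer" and not isinstance(value, int):
        return False
    rng = entry.get("validRange")
    if rng is not None and not (rng["min"] <= value <= rng["max"]):
        return False
    return True

# Export to a standards-based format (JSON shown; RDF or XML would be analogous).
print(json.dumps(entry, indent=2))
print(validate(29, entry))   # True: integer within the declared range
print(validate(40, entry))   # False: outside the declared range
```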