Article · Literature Review

Why sharing matters for electrophysiological data analysis


Abstract

We present the case for the sharing of electrophysiological datasets and tools for their analysis. Some of the problems, both sociological and technical, associated with improving the sharing of data and analysis tools are discussed. The work that has been done to try to improve data and code sharing in the electrophysiology area is reviewed. The sharing aspects of the current large projects in brain research are considered. Copyright © 2015. Published by Elsevier Inc.


... An obligation to share IPD has been encouraged for some time by many stakeholders, including academic institutions, the pharmaceutical industry, health regulatory authorities, medicinal product pricing agencies, patient lobby groups, investigative journalists, and public media representatives. [18][19][20][21][22] Sharing data from clinical trials benefits patients by pointing to new research questions that can lead to new discoveries. It also allows clinical trial results to be included in meta-analyses. ... The final decision on how to deal with a submitted manuscript and the Data Sharing Statement rests with the editor of each journal. ...
Article
Full-text available
Sharing of deidentified/anonymised individual participant data is rapidly becoming the norm. The International Committee of Medical Journal Editors recently implemented requirements for data sharing as a condition for considering publication of clinical trial reports in member journals. These requirements are: 1. manuscripts that are based on results of a clinical trial submitted on or after July 1, 2018, must contain a Data Sharing Statement at the manuscript submission stage; and 2. interventional clinical trials that began enrolling participants on or after January 1, 2019, must include a Data Sharing Plan in the trial’s public registration record. The full effect of these data sharing requirements and their interaction with other legal provisions still need to be resolved, especially regarding protection of personal information of clinical trial participants and commercially confidential information for clinical trial sponsors. Nevertheless, sharing of deidentified individual participant data from clinical trials will continue to expand.
... For this reason, spike sorting has been and continues to be a central problem in computational neuroscience. Freely available spike sorting software and data sets, such as Wave clus and its associated simulated data set, are valuable resources that help the field of computational neuroscience move forward [27]. ...
Article
Wave clus is an unsupervised spike detection and sorting algorithm that has been used in dozens of experimental studies as a spike sorting tool. It is often used as a benchmark for comparing the performance of new spike sorting algorithms. For these reasons, the spike detection performance of Wave clus is important for both experimental and computational studies that involve spike sorting. Two measures of spike detection performance are the number of false positive detections (type I error) and the number of missed spikes (type II error). Here, a new spike detection algorithm is proposed that reduces the number of misses and false positives of Wave clus in a widely used simulated data set across the entire range of commonly used detection thresholds. The algorithm accepts a spike if its amplitude is larger than the amplitude of its two immediate neighbors, where an immediate neighbor is the nearest peak of the same polarity within ±1 refractory period. The simultaneous reduction that is achieved in the number of false positives and misses is important for experimental and computational studies that use Wave clus as a spike sorting tool or as a benchmark. A software patch that incorporates the algorithm into Wave clus as an optional spike detection algorithm is provided.
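The acceptance rule described in the abstract (a spike is accepted if its amplitude exceeds that of its two immediate same-polarity neighbours within ±1 refractory period) can be sketched in a few lines. This is an illustrative reimplementation, not the authors' patch; the peak finder and parameter names are assumptions.

```python
def find_peaks(signal, polarity=1):
    """Indices of local extrema of the given polarity (1 = maxima, -1 = minima)."""
    s = [polarity * x for x in signal]
    return [i for i in range(1, len(s) - 1) if s[i] > s[i - 1] and s[i] >= s[i + 1]]

def accept_spike(signal, peak, peaks, refractory=30, polarity=1):
    """Accept a candidate peak if its amplitude exceeds that of its two
    immediate neighbours: the nearest same-polarity peaks within +/- one
    refractory period (expressed here in samples)."""
    amp = polarity * signal[peak]
    left = [p for p in peaks if peak - refractory <= p < peak]
    right = [p for p in peaks if peak < p <= peak + refractory]
    # If a neighbour exists on a side, the candidate must be larger than it.
    for side in (left, right):
        if side:
            nearest = min(side, key=lambda p: abs(p - peak))
            if amp <= polarity * signal[nearest]:
                return False
    return True

# Tiny synthetic trace: only the largest of three nearby peaks survives.
signal = [0.0, 1.0, 0.0, 5.0, 0.0, 2.0, 0.0]
peaks = find_peaks(signal)
print([p for p in peaks if accept_spike(signal, p, peaks)])  # [3]
```

Because a spike suppresses its smaller neighbours on both sides, the rule simultaneously removes false positives (noise peaks riding next to a true spike) without raising the detection threshold, which is what reduces misses.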
... 20 Big electrophysiology data and various applications have imposed challenges on data transfer, storage, data standardization, visualization, statistical analysis, real-time computing, data mining, and multi-institution collaboration online or offline. 12,[21][22][23][24][25] Moreover, large electrophysiological datasets also need advanced data storage methods such as distributed file systems (DFSs) and NoSQL, computing architecture such as cloud computing, and online parallel data mining algorithms integrated in big data platforms. Hence, we believe that an online data sharing and analysis platform is an essential solution to cope with these challenges, which are illustrated in Figure 1. ...
Article
Full-text available
With the development of applications and high-throughput sensor technologies in medical fields, scientists and scientific professionals are facing a big challenge—how to manage and analyze the big electrophysiological datasets created by these sensor technologies. The challenge has several aspects: one is the size of the data (which is usually more than terabytes); the second is the format used to store the data (the data created are generally stored using different formats); the third is that most of these unstructured, semi-structured, or structured datasets are still distributed over many researchers’ own local computers in their labs, which are not open access, and so become isolated data islands. Thus, how to overcome the challenge and share/mine the scientific data has become an important research topic. The aim of this paper is to systematically review recently published research on web-based electrophysiological data platforms from the perspective of cloud computing and programming frameworks. Based on this review, we suggest that a conceptual scientific workflow (SWF)-based programming framework associated with an elastic cloud computing environment running big data tools (such as Hadoop and Spark) is a good choice for facilitating effective data mining and collaboration among scientists.
Article
Full-text available
Computational neuroscience is a powerful ally in our quest to understand the brain. Even the simplest model can shed light on the role of this or that structure and propose new hypotheses concerning the overall brain organization. However, any model in science is doomed to be proved wrong or incomplete and replaced by a more accurate one. In the meantime, for such replacement to happen, we have first to make sure that models are actually reproducible such that they can be tested, evaluated, criticized and ultimately modified, replaced or even rejected. This is where the shoe pinches. If we cannot reproduce a model in the first place, we're doomed to re-invent the wheel again and again, preventing us from building an incremental computational knowledge of the brain.
Article
Full-text available
Using silicon-based recording electrodes, we recorded neuronal activity of the dorsal hippocampus and dorsomedial entorhinal cortex from behaving rats. The entorhinal neurons were classified as principal neurons and interneurons based on monosynaptic interactions and wave-shapes. The hippocampal neurons were classified as principal neurons and interneurons based on monosynaptic interactions, wave-shapes and burstiness. The data set contains recordings from 7,736 neurons (6,100 classified as principal neurons, 1,132 as interneurons, and 504 cells that did not clearly fit into either category) obtained during 442 recording sessions from 11 rats (a total of 204.5 hours) while they were engaged in one of eight different behaviours/tasks. Both original and processed data (time stamps of spikes, spike waveforms, results of spike sorting and local field potentials) are included, along with metadata of behavioural markers. Community-driven data sharing may offer cross-validation of findings and refinement of interpretations, and may facilitate discoveries.
Article
Full-text available
During early development, neural circuits fire spontaneously, generating activity episodes with complex spatiotemporal patterns. Recordings of spontaneous activity have been made in many parts of the nervous system over the last 25 years, reporting developmental changes in activity patterns and the effects of various genetic perturbations. We present a curated repository of multielectrode array recordings of spontaneous activity in developing mouse and ferret retina. The data have been annotated with minimal metadata and converted into HDF5. This paper describes the structure of the data, along with examples of reproducible research using these data files. We also demonstrate how these data can be analysed in the CARMEN workflow system. This article is written as a literate programming document; all programs and data described here are freely available. We hope this repository will lead to novel analysis of spontaneous activity recorded in different laboratories, and we encourage published data to be added to it. The repository also serves as an example of how multielectrode array recordings can be stored for long-term reuse.
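The abstract describes recordings annotated with minimal metadata and stored in HDF5. The sketch below illustrates the general idea with a hypothetical layout expressed as nested Python dictionaries (mirroring HDF5 groups and datasets); the group and field names are invented for illustration and are not the repository's actual schema.

```python
import json

# Hypothetical layout mirroring an HDF5 file: one group per recording,
# holding per-electrode spike-time arrays plus a small metadata record.
repository = {
    "recording_0001": {
        "meta": {"species": "mouse", "age_days": 4, "array": "MEA-64"},
        "spikes": {  # electrode name -> spike times in seconds
            "ch01": [0.012, 0.431, 0.577],
            "ch02": [0.204, 0.399],
        },
    }
}

def total_spikes(rec):
    """Count spikes across all electrodes of one recording."""
    return sum(len(t) for t in rec["spikes"].values())

print(total_spikes(repository["recording_0001"]))  # 5
print(json.dumps(repository["recording_0001"]["meta"]))
```

Keeping metadata minimal but machine-readable is what makes such recordings reusable by laboratories that never saw the original acquisition setup.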
Article
Full-text available
Structured, efficient, and secure storage of experimental data and associated meta-information constitutes one of the most pressing technical challenges in modern neuroscience, particularly in electrophysiology. The German INCF Node aims to provide open-source solutions for this domain that support the scientific data management and analysis workflow, and thus facilitate future data access and reproducible research. G-Node provides a data management system, accessible through an application interface, that is based on a combination of standardized data representation and flexible data annotation to account for the variety of experimental paradigms in electrophysiology. The G-Node Python Library exposes these services to the Python environment, enabling researchers to organize and access their experimental data using their familiar tools while gaining the advantages that centralized storage entails. The library provides powerful query features, including data slicing and selection by metadata, as well as fine-grained permission control for collaboration and data sharing. Here we demonstrate key actions in working with experimental neuroscience data, such as building a metadata structure, organizing recorded data in datasets, annotating data, or selecting data regions of interest, that can be automated to a large degree using the library. Compliant with existing de-facto standards, the G-Node Python Library is compatible with many Python tools in the field of neurophysiology and thus enables seamless integration of data organization into the scientific data workflow.
Article
Full-text available
Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named "Neo," suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. We intend that Neo should become the standard basis for Python tools in neurophysiology.
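The container hierarchy the Neo abstract describes (a block of segments, each holding analog signals and spike trains, with no analysis code attached) can be illustrated with plain classes. This is a simplified sketch of the idea, not the actual Neo API: the real library attaches physical units and richer metadata to every object.

```python
class SpikeTrain:
    """Spike times (seconds) from one unit, with the recording stop time."""
    def __init__(self, times, t_stop):
        self.times = sorted(times)
        self.t_stop = t_stop

class AnalogSignal:
    """Regularly sampled signal with its sampling rate in Hz."""
    def __init__(self, samples, sampling_rate):
        self.samples = list(samples)
        self.sampling_rate = sampling_rate

class Segment:
    """One continuous recording period: holds signals and spike trains."""
    def __init__(self):
        self.analogsignals = []
        self.spiketrains = []

class Block:
    """Top-level container grouping the segments of one session."""
    def __init__(self, name):
        self.name = name
        self.segments = []

# Assemble a session: one segment with an LFP trace and one sorted unit.
block = Block("session-01")
seg = Segment()
seg.analogsignals.append(AnalogSignal([0.1, 0.2, 0.15], sampling_rate=1000.0))
seg.spiketrains.append(SpikeTrain([0.5, 1.2, 3.3], t_stop=10.0))
block.segments.append(seg)
```

Restricting the model to data representation, as the abstract argues, is a deliberate design choice: any analysis or visualization tool built on the same containers gains format conversion and interoperability for free.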
Article
Full-text available
An increasing number of publishers and funding agencies require public data archiving (PDA) in open-access databases. PDA has obvious group benefits for the scientific community, but many researchers are reluctant to share their data publicly because of real or perceived individual costs. Improving participation in PDA will require lowering costs and/or increasing benefits for primary data collectors. Small, simple changes can enhance existing measures to ensure that more scientific data are properly archived and made publicly available: (1) facilitate more flexible embargoes on archived data, (2) encourage communication between data generators and re-users, (3) disclose data re-use ethics, and (4) encourage increased recognition of publicly archived data.
Article
Full-text available
The CARMEN platform allows neuroscientists to share data, metadata, services and workflows, and to execute these services and workflows remotely via a Web portal. This paper describes how we implemented a service-based infrastructure into the CARMEN Virtual Laboratory. A Software as a Service framework was developed to allow generic new and legacy code to be deployed as services on a heterogeneous execution framework. Users can submit analysis code typically written in Matlab, Python, C/C++ and R as non-interactive standalone command-line applications and wrap them as services in a form suitable for deployment on the platform. The CARMEN Service Builder tool enables neuroscientists to quickly wrap their analysis software for deployment to the CARMEN platform, as a service without knowledge of the service framework or the CARMEN system. A metadata schema describes each service in terms of both system and user requirements. The search functionality allows services to be quickly discovered from the many services available. Within the platform, services may be combined into more complicated analyses using the workflow tool. CARMEN and the service infrastructure are targeted towards the neuroscience community; however, it is a generic platform, and can be targeted towards any discipline.
Article
Full-text available
Access to and sharing of data are essential for the conduct and advancement of science. This article argues that publicly funded research data should be openly available to the maximum extent possible. To seize upon advancements of cyberinfrastructure and the explosion of data in a range of scientific disciplines, this access to and sharing of publicly funded data must be advanced within an international framework, beyond technological solutions. The authors, members of an OECD Follow-up Group, present their research findings, based closely on their report to OECD, on key issues in data access, as well as operating principles and management aspects necessary to successful data access regimes.
Article
Full-text available
The CARMEN (Code, Analysis, Repository and Modelling for e-Neuroscience) system [1] provides a web based portal platform through which users can share and collaboratively exploit data, analysis code and expertise in neuroscience. The system has been developed in the UK and currently supports 200 neuroscientists working in a Virtual Environment with an initial focus on electrophysiology data. The proposal here is that the CARMEN system provides an excellent base from which to develop an ‘executable paper’ system. CARMEN has been built by York and Newcastle Universities and is based on over 10 years’ experience in the construction of eScience-based distributed technology. CARMEN started four years ago involving 20 scientific investigators (neuroscientists and computer scientists) at 11 UK Universities (www.CARMEN.org.uk). The project is supported for another 4 years at York and Newcastle, along with a sister project to take the underlying technology and pilot it as a UK platform for supporting the sharing of research outputs in a generic way. An entirely natural extension to the CARMEN system would be its alignment with a publications repository. The CARMEN system is operational on the domain https://portal.CARMEN.org.uk, where it is possible to request a login to try out the system.
Article
Full-text available
Developing retinal ganglion cells fire in correlated spontaneous bursts, resulting in propagating waves with robust spatiotemporal features preserved across development and species. Here we investigate the effects of homeostatic adaptation on the circuits controlling retinal waves. Mouse retinal waves were recorded in vitro for up to 35 h with a multielectrode array in presence of the GABA(A) antagonist bicuculline, allowing us to obtain a precise, time-resolved characterization of homeostatic effects in this preparation. Experiments were performed at P4-P6, when GABA(A) signaling is depolarizing in ganglion cells, and at P7-P10, when GABA(A) signaling is hyperpolarizing. At all ages, bicuculline initially increased the wave sizes and other activity metrics. At P5-P6, wave sizes decreased toward control levels within a few hours while firing remained strong, but this ability to compensate disappeared entirely from P7 onwards. This demonstrates that homeostatic control of spontaneous retinal activity maintains specific network dynamic properties in an age-dependent manner, and suggests that the underlying mechanism is linked to GABA(A) signaling.
Article
Full-text available
Scientific research in the 21st century is more data intensive and collaborative than in the past. It is important to study the data practices of researchers--data accessibility, discovery, re-use, preservation and, particularly, data sharing. Data sharing is a valuable part of the scientific method allowing for verification of results and extending research from prior results. A total of 1329 scientists participated in this survey exploring current data sharing practices and perceptions of the barriers and enablers of data sharing. Scientists do not make their data electronically available to others for various reasons, including insufficient time and lack of funding. Most respondents are satisfied with their current processes for the initial and short-term parts of the data or research lifecycle (collecting their research data; searching for, describing or cataloging, analyzing, and short-term storage of their data) but are not satisfied with long-term data preservation. Many organizations do not provide support to their researchers for data management both in the short- and long-term. If certain conditions are met (such as formal citation and sharing reprints) respondents agree they are willing to share their data. There are also significant differences and approaches in data management practices based on primary funding agency, subject discipline, age, work focus, and world region. Barriers to effective data sharing and preservation are deeply rooted in the practices and culture of the research process as well as the researchers themselves. New mandates for data management plans from NSF and other federal agencies and world-wide attention to the need to share and preserve data could lead to changes. Large scale programs, such as the NSF-sponsored DataNET (including projects like DataONE) will both bring attention and resources to the issue and make it easier for scientists to apply sound data management principles.
Article
Full-text available
To a large extent, progress in neuroscience has been driven by the study of single-cell responses averaged over several repetitions of stimuli or behaviours. However, the brain typically makes decisions based on single events by evaluating the activity of large neuronal populations. Therefore, to further understand how the brain processes information, it is important to shift from a single-neuron, multiple-trial framework to multiple-neuron, single-trial methodologies. Two related approaches--decoding and information theory--can be used to extract single-trial information from the activity of neuronal populations. Such population analysis can give us more information about how neurons encode stimulus features than traditional single-cell studies.
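The single-trial decoding approach described above can be made concrete with a minimal example: a nearest-centroid decoder that assigns one trial's population response to the stimulus whose average response it most resembles. This is the simplest possible decoder, chosen for illustration; the review discusses far richer decoding and information-theoretic methods, and the data here are synthetic.

```python
import math

def centroid(trials):
    """Mean population response (spike counts per neuron) across trials."""
    n = len(trials)
    return [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]

def decode(trial, centroids):
    """Assign a single trial to the stimulus with the nearest centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(trial, centroids[label]))

# Synthetic spike counts from 3 neurons, 2 stimuli, 3 training trials each.
train = {
    "A": [[10, 2, 1], [12, 3, 0], [11, 2, 2]],
    "B": [[2, 9, 8], [1, 11, 7], [3, 10, 9]],
}
centroids = {label: centroid(trials) for label, trials in train.items()}
print(decode([11, 1, 1], centroids))  # "A"
```

The key shift from the averaging tradition is visible in the call signature: `decode` sees one trial, never an average over repetitions.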
Article
Full-text available
The detection of neural spike activity is a technical challenge that is a prerequisite for studying many types of brain function. Measuring the activity of individual neurons accurately can be difficult due to large amounts of background noise and the difficulty in distinguishing the action potentials of one neuron from those of others in the local area. This article reviews algorithms and methods for detecting and classifying action potentials, a problem commonly referred to as spike sorting. The article first discusses the challenges of measuring neural activity and the basic issues of signal detection and classification. It reviews and illustrates algorithms and techniques that have been applied to many of the problems in spike sorting and discusses the advantages and limitations of each and the applicability of these methods for different types of experimental demands. The article is written both for the physiologist wanting to use simple methods that will improve experimental yield and minimize the selection biases of traditional techniques and for those who want to apply or extend more sophisticated algorithms to meet new experimental challenges.
Article
Developments in microfabrication technology have enabled the production of neural electrode arrays with hundreds of closely spaced recording sites, and electrodes with thousands of sites are under development. These probes in principle allow the simultaneous recording of very large numbers of neurons. However, use of this technology requires the development of techniques for decoding the spike times of the recorded neurons from the raw data captured from the probes. Here we present a set of tools to solve this problem, implemented in a suite of practical, user-friendly, open-source software. We validate these methods on data from the cortex, hippocampus and thalamus of rat, mouse, macaque and marmoset, demonstrating error rates as low as 5%.
Article
Studying the dynamics of neural activity via electrical recording relies on the ability to detect and sort neural spikes recorded from a number of neurons by the same electrode. We suggest wavelet packet decomposition (WPD) as a tool to analyze neural spikes and extract their main features. The unique quality of wavelet packets, adaptive coverage of both the time and frequency domains using a set of localized packets, facilitates the task. The best basis algorithm utilizing Shannon's information cost function and local discriminant basis (LDB) using mutual information are employed to select a few packets that are sufficient for both detection and sorting of spikes. The efficiency of the method is demonstrated on data recorded from in vitro 2D neural networks, placed on electrodes that read data from as many as five neurons. Comparison between our method and the widely used principal components method and a sorting technique based on the ordinary wavelet transform (WT) shows that our method is more efficient both in separating spikes from noise and in resolving overlapping spikes.
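A wavelet packet decomposition differs from the ordinary wavelet transform in that every band, detail as well as approximation, is split again at each level. The sketch below uses the Haar wavelet (the simplest choice, an assumption for illustration); the paper's best-basis selection via Shannon's cost function and LDB is not shown.

```python
def haar_split(x):
    """One Haar step: pairwise averages (low-pass) and differences (high-pass)."""
    s = 2 ** -0.5
    low = [s * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]
    high = [s * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]
    return low, high

def wavelet_packets(x, depth):
    """Full wavelet-packet tree: unlike the ordinary wavelet transform,
    every band is split again at each level, tiling the time-frequency
    plane with localized packets."""
    nodes = [x]
    for _ in range(depth):
        nxt = []
        for node in nodes:
            low, high = haar_split(node)
            nxt.extend([low, high])
        nodes = nxt
    return nodes  # 2**depth packets

# A slow half followed by a fast alternating half: the energy lands in
# different packets, which is what makes the packets useful features.
packets = wavelet_packets([1.0, 1.0, 1.0, 1.0, 1.0, -1.0, 1.0, -1.0], 2)
```

Feature extraction for sorting then reduces to picking the few packets whose coefficients best separate spike classes, which is where the cost-function-driven basis selection of the paper comes in.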
Article
We discuss spike detection for noisy neuronal data. Robust spike detection techniques are especially important for probes which have fixed electrode sites that cannot be independently manipulated to isolate signals from specific neurons. Low signal-to-noise ratio (SNR) and similarity of spectral characteristic between the target signal and background noise are obstacles to spike detection. We propose a new technique based on cumulative energy.
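The abstract gives only the core idea, detection based on cumulative energy, so the following is a generic sliding-window energy detector written to illustrate that idea; the window width, threshold rule, and median-based baseline are assumptions, not the authors' method.

```python
def window_energy(signal, width):
    """Energy of the signal in each sliding window of the given width."""
    sq = [x * x for x in signal]
    # A cumulative sum lets each window's energy be computed in O(1).
    csum = [0.0]
    for v in sq:
        csum.append(csum[-1] + v)
    return [csum[i + width] - csum[i] for i in range(len(signal) - width + 1)]

def detect(signal, width=3, factor=4.0):
    """Flag windows whose energy exceeds a multiple of the median energy.
    Energy integrates over the window, so a spike stands out even when its
    spectrum resembles the background noise sample-by-sample."""
    e = window_energy(signal, width)
    med = sorted(e)[len(e) // 2]
    return [i for i, v in enumerate(e) if v > factor * med]

# Low-amplitude noise with one biphasic spike in the middle.
signal = [0.1, -0.1] * 3 + [2.0, -1.5] + [0.1, -0.1] * 2
print(detect(signal))  # [4, 5, 6, 7]: the windows overlapping the spike
```

Using the median rather than the mean for the baseline keeps the threshold robust to the spikes themselves, which matters at the low SNR the abstract targets.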
Article
Simultaneous recording from large numbers of neurons is a prerequisite for understanding their cooperative behavior. Various recording techniques and spike separation methods are being used toward this goal. However, the error rates involved in spike separation have not yet been quantified. We studied the separation reliability of "tetrode" (4-wire electrode)-recorded spikes by monitoring simultaneously from the same cell intracellularly with a glass pipette and extracellularly with a tetrode. With manual spike sorting, we found a trade-off between Type I and Type II errors, with errors typically ranging from 0 to 30% depending on the amplitude and firing pattern of the cell, the similarity of the waveshapes of neighboring neurons, and the experience of the operator. Performance using only a single wire was markedly lower, indicating the advantages of multiple-site monitoring techniques over single-wire recordings. For tetrode recordings, error rates were increased by burst activity and during periods of cellular synchrony. The lowest possible separation error rates were estimated by a search for the best ellipsoidal cluster shape. Human operator performance was significantly below the estimated optimum. Investigation of error distributions indicated that suboptimal performance was caused by inability of the operators to mark cluster boundaries accurately in a high-dimensional feature space. We therefore hypothesized that automatic spike-sorting algorithms have the potential to significantly lower error rates. Implementation of a semi-automatic classification system confirms this suggestion, reducing errors close to the estimated optimum, in the range 0-8%.
Article
Spike-sorting techniques attempt to classify a series of noisy electrical waveforms according to the identity of the neurons that generated them. Existing techniques perform this classification ignoring several properties of actual neurons that can ultimately improve classification performance. In this study, we propose a more realistic spike train generation model. It incorporates both a description of "nontrivial" (i.e., non-Poisson) neuronal discharge statistics and a description of spike waveform dynamics (e.g., the events amplitude decays for short interspike intervals). We show that this spike train generation model is analogous to a one-dimensional Potts spin-glass model. We can therefore tailor to our particular case the computational methods that have been developed in fields where Potts models are extensively used, including statistical physics and image restoration. These methods are based on the construction of a Markov chain in the space of model parameters and spike train configurations, where a configuration is defined by specifying a neuron of origin for each spike. This Markov chain is built such that its unique stationary density is the posterior density of model parameters and configurations given the observed data. A Monte Carlo simulation of the Markov chain is then used to estimate the posterior density. We illustrate the way to build the transition matrix of the Markov chain with a simple, but realistic, model for data generation. We use simulated data to illustrate the performance of the method and to show that this approach can easily cope with neurons firing doublets of spikes and/or generating spikes with highly dynamic waveforms. The method cannot automatically find the "correct" number of neurons in the data. User input is required for this important problem and we illustrate how this can be done. We finally discuss further developments of the method.
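The Markov chain the abstract describes walks over configurations, assignments of a neuron of origin to each spike, and converges to the posterior over configurations. The sketch below is a drastic simplification for illustration: spikes are reduced to scalar amplitudes, the two neurons' Gaussian amplitude models are fixed rather than sampled, and the Potts-style interaction, ISI statistics, and waveform dynamics of the paper are all omitted.

```python
import math
import random

def log_lik(amp, mu, sigma=0.2):
    """Log-likelihood of a spike amplitude under one neuron's Gaussian model."""
    return -0.5 * ((amp - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def sample_labels(amps, mus, steps=2000, seed=1):
    """Metropolis sampling over configurations: a configuration assigns a
    neuron of origin to every spike, and the chain's stationary density is
    the posterior over configurations (model parameters held fixed here)."""
    rng = random.Random(seed)
    labels = [rng.randrange(len(mus)) for _ in amps]
    for _ in range(steps):
        i = rng.randrange(len(amps))       # pick a spike
        prop = rng.randrange(len(mus))     # propose a new neuron of origin
        delta = log_lik(amps[i], mus[prop]) - log_lik(amps[i], mus[labels[i]])
        if math.log(rng.random()) < delta:  # Metropolis acceptance rule
            labels[i] = prop
    return labels

# Two units with mean amplitudes 1.0 and 2.0; well-separated spikes.
amps = [0.9, 1.1, 2.05, 1.95, 1.0, 2.1]
labels = sample_labels(amps, mus=[1.0, 2.0])
```

As the abstract notes, nothing in this machinery chooses the number of neurons: `mus` is user input here, just as the neuron count is in the full method.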
Article
Multi-neuronal recording with a tetrode is a powerful technique to reveal neuronal interactions in local circuits. However, it is difficult to detect precise spike timings among closely neighboring neurons because the spike waveforms of individual neurons overlap on the electrode when more than two neurons fire simultaneously. In addition, the spike waveforms of single neurons, especially in the presence of complex spikes, are often non-stationary. These problems limit the ability of ordinary spike sorting to sort multi-neuronal activities recorded using tetrodes into their single-neuron components. Though sorting with independent component analysis (ICA) can solve these problems, it has one serious limitation: the number of separated neurons must be less than the number of electrodes. Using a combination of ICA and an efficient ordinary spike-sorting technique (k-means clustering), we developed an automatic procedure to solve the spike-overlap and non-stationarity problems with no limitation on the number of separated neurons. The results for the procedure applied to real multi-neuronal data demonstrated that some outliers, which ordinary spike-sorting methods would assign to distinct clusters, can be identified as overlapping spikes, and that there are functional connections between a putative pyramidal neuron and its putative dendrite. These findings suggest that the combination of ICA and k-means clustering can provide insights into the precise nature of functional circuits among neurons, i.e. cell assemblies.
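The k-means half of the proposed pipeline can be sketched as follows. The ICA stage, which would first unmix overlapping waveforms across the tetrode channels, is omitted, and the 2-D features and unit layout below are entirely synthetic; the sketch shows only how detected events are clustered into putative single neurons.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2-D spike features for three well-separated units (illustrative
# stand-in for features extracted after the ICA unmixing stage).
centers = np.array([[0.0, 0.0], [3.0, 0.5], [1.5, 3.0]])
feats = np.vstack([rng.normal(c, 0.4, size=(200, 2)) for c in centers])

def kmeans(x, k, iters=50, seed=0):
    r = np.random.default_rng(seed)
    # Farthest-point initialisation spreads the seeds across clusters.
    cent = [x[r.integers(len(x))]]
    for _ in range(k - 1):
        d = ((x[:, None] - np.array(cent)[None]) ** 2).sum(-1).min(axis=1)
        cent.append(x[np.argmax(d)])
    cent = np.array(cent)
    for _ in range(iters):
        # Assign each spike to its nearest centroid, then move each
        # centroid to the mean of its assigned spikes (Lloyd's steps).
        labels = np.argmin(((x[:, None] - cent[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                cent[j] = x[labels == j].mean(axis=0)
    return labels, cent

labels, cent = kmeans(feats, 3)
print("cluster sizes:", np.bincount(labels, minlength=3))
```

In the combined procedure it is the preceding ICA stage that resolves overlapping spikes, so that events which would appear as outliers to clustering alone can be attributed to simultaneous firing of separate neurons.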
[Marquardt, 2015] Marquardt, W. (2015). Human Brain Project Mediation Report. Technical report, Mediation of the Human Brain Project, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany.
[Mizuseki et al., 2014] Mizuseki, K., Diba, K., Pastalkova, E., Teeters, J., Sirota, A., and Buzsáki, G. (2014). Neurosharing: large-scale data sets (spike, LFP) recorded from the hippocampal-entorhinal system in behaving rats. F1000Research, 3:98.
[Mtetwa and Smith, 2006] Mtetwa, N. and Smith, L. S. (2006). Smoothing and thresholding in neuronal spike detection. Neurocomputing, 69(10-12):1366-1370.
[Tenopir et al., 2011] Tenopir, C., Allard, S., Douglass, K., Aydinoglu, A. U., Wu, L., Read, E., Manoff, M., and Frame, M. (2011). Data Sharing by Scientists: Practices and Perceptions. PLoS ONE, 6(6):e21101.
[Topalidou et al., 2015] Topalidou, M., Leblois, A., Boraud, T., and Rougier, N. P. (2015). A Long Journey into Reproducible Computational Neuroscience. Frontiers in Computational Neuroscience, 9(28):1-3.
[Roche et al., 2014] Roche, D. G., Lanfear, R., Binning, S. A., Haff, T. M., Schwanz, L. E., Cain, K. E., Kokko, H., Jennions, M. D., and Kruuk, L. E. B. (2014).
[Editorial, 2014] Editorial (2014). Code share. Nature, 514(7524):536.
[Editorial, 2015] Editorial (2015). Ctrl alt share. Scientific Data, 2:150004.