DATA SHARING AMONG TRIALISTS
What about trialists sharing other study materials?
Rik Crutzen, senior researcher1; Gjalt-Jorn Y Peters, assistant professor2; Charles Abraham, professor of behaviour change3
1 Maastricht University/CAPHRI, PO Box 616, 6200 MD Maastricht, Netherlands; 2 Open Universiteit, Postbus 2960, 6401 DL Heerlen, Netherlands; 3 University of Exeter Medical School, St Luke's Campus, Exeter EX1 2LU, UK
Rathi and colleagues provide insight into clinical trialists' opinions and experiences of sharing clinical trial data.1 We would like to raise two related matters.

Firstly, this study and the previous studies cited within it are limited to sharing data. We suggest that questionnaires, intervention manuals, analysis scripts (such as syntax files), output files, and other materials used to generate study reports should also be shared. Accurate replication requires these additional materials. This is essential for scientific progress because successful replication strengthens our evidence that a given theory, model, or assumption holds, whereas unsuccessful replication is our only means of disproving it.2 Publication of analysis scripts and output files is necessary because researchers make many important choices that are not disclosed in research articles.3
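As a minimal, hypothetical illustration (the data values and the cut-off are invented), a shared analysis script makes exactly the kind of undisclosed choices mentioned above visible to readers:

```python
# Hypothetical fragment of a shared analysis script. Two analytic
# choices that rarely appear in an article's methods section are
# explicit here: the outlier rule and the choice of summary statistic.
scores = [12.5, 9.8, 11.1, 31.0, 10.4]  # invented example data

# Choice 1: outlier exclusion rule (the threshold is an analyst decision)
OUTLIER_CUTOFF = 30.0
included = [s for s in scores if s < OUTLIER_CUTOFF]

# Choice 2: summary statistic (mean rather than, say, median)
mean_score = sum(included) / len(included)
```

A reader of the article alone would see only the final mean; a reader of the script also sees which observations were excluded and why.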
Secondly, the concerns about data sharing raised by Rathi and colleagues' respondents can mostly be refuted.4 For example, one concern was the ability to publish one's own research. When a researcher wants to publish several articles about one dataset, releasing the dataset before all articles are published can be risky, because others might conduct the analyses planned for later articles and publish them first. A solution is to publish not the entire dataset but only those variables described in already published articles. This is related to another identified concern: receiving sufficient academic or scientific recognition. In the rat race of "publish or perish," it might seem that publishing materials (such as data, questionnaires, and intervention manuals) will only help others. However, materials are published only after acceptance of the article, so any race has already been won, or lost.
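The partial-release solution described above can be sketched in a few lines (a hypothetical fragment; the variable names and values are invented):

```python
# Hypothetical sketch: share only the variables already described in
# published articles, holding back variables planned for later articles.
published_variables = ["participant_id", "age", "group", "outcome_score"]

full_record = {
    "participant_id": 1,
    "age": 34,
    "group": "control",
    "outcome_score": 12.5,
    "followup_score": 10.2,  # planned for a later article: not shared yet
}

# Build the shareable subset by filtering on the published variable list
shared_record = {k: v for k, v in full_record.items() if k in published_variables}
```

Once the later articles are published, the remaining variables can be added to the shared dataset in the same way.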
Competing interests: None declared.
1 Rathi V, Dzara K, Gross CP, Hrynaszkiewicz I, Joffe S, Krumholz HM, et al. Sharing of clinical trial data among trialists: a cross sectional survey. BMJ 2012;345:e7570 (20 November).
2 Ritchie SJ, Wiseman R, French CC. Failing the future: three unsuccessful attempts to replicate Bem's "retroactive facilitation of recall" effect. PLoS One 2012;7:e33423.
3 Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci 2011;22:1359-66.
4 Peters G-JY, Abraham C, Crutzen R. Full disclosure: doing behavioural science necessitates sharing. Eur Health Psychol 2012;14:77-84.
Cite this as: BMJ 2012;345:e8352
© BMJ Publishing Group Ltd 2012
rik.crutzen@maastrichtuniversity.nl
doi: 10.1136/bmj.e8352 (Published 10 December 2012)