This manuscript is a preprint and has not been peer-reviewed yet.
ARIADNE – a scientific navigator to find your
way through the resource labyrinth
Helena Hartmann1+*, Çağatay Gürsoy2,3,4+, Alexander Lischke5,6, Marie Mueckstein7,8,
Matthias F. J. Sperl9,10, Susanne Vogel6,11, Yu-Fang Yang12, Gordon B. Feld2,3,4, Alexandros
Kastrinogiannis13,14++, Alina Koppold13++
1 Clinical Neurosciences, Department for Neurology and Center for Translational and Behavioral
Neuroscience, University Hospital Essen, Germany
2 Department of Clinical Psychology, Central Institute of Mental Health, Medical Faculty Mannheim,
University of Heidelberg, Mannheim, Germany
3 Department of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty
Mannheim, University of Heidelberg, Mannheim, Germany
4 Department of Addiction Behavior and Addiction Medicine, Central Institute of Mental Health, Medical
Faculty Mannheim, University of Heidelberg, Mannheim, Germany
5 Institute of Clinical Psychology and Psychotherapy, Medical School Hamburg, Germany
6 Department of Psychology, Medical School Hamburg, Germany
7 Department of General and Neurocognitive Psychology, International Psychoanalytic University
Berlin, Germany
8 Department of Psychology, Universität Potsdam, Germany
9 Department of Clinical Psychology and Psychotherapy, University of Giessen, Germany
10 Center for Mind, Brain and Behavior, Universities of Marburg and Giessen (Research Campus
Central Hessen), Germany
11 ICAN Institute for Cognitive and Affective Neuroscience, Medical School Hamburg, Germany
12 Division of Experimental Psychology and Neuropsychology, Department of Education and
Psychology, Freie Universität Berlin, Berlin, Germany
13 Institute for Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg,
Germany
14 Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig,
Germany
+ Shared first author
++ Shared last author
* Corresponding author: Helena Hartmann, helena.hartmann@uk-essen.de
ORCID IDs
Helena Hartmann: https://orcid.org/0000-0002-1331-6683
Çağatay Gürsoy: https://orcid.org/0000-0001-9762-7747
Alexander Lischke: https://orcid.org/0000-0002-8322-2287
Marie Mueckstein: https://orcid.org/0000-0002-7113-0879
Matthias F. J. Sperl: https://orcid.org/0000-0002-5011-0780
Susanne Vogel: https://orcid.org/0000-0001-9717-5568
Yu-Fang Yang: https://orcid.org/0000-0001-9089-6020
Gordon B. Feld: https://orcid.org/0000-0002-1238-9493
Alexandros Kastrinogiannis: https://orcid.org/0000-0001-6248-7385
Alina Koppold: https://orcid.org/0000-0002-3164-3389
Abstract
Performing high-quality research is a challenging endeavor, especially for early
career researchers (ECRs). Most research is characterized by an experiential learning
approach, which can be time-consuming, error-prone, and frustrating. While most institutions
provide a selection of resources to help researchers with their research projects, these
resources are often expensive, spread out, hard to find, and difficult to compare with one
another in terms of reliability, validity, usability, and practicability. A comprehensive overview
of resources that are useful for early career researchers and their supervisors is missing. To
address this issue, we created ARIADNE – a living and interactive resource navigator that helps researchers search and use a dynamically updated database of resources. The open-access database covers a constantly growing list of resources that are useful for each step of a research project, ranging from planning and designing a study, through collecting and analyzing the data, to writing up and disseminating the findings. By introducing ARIADNE to
the research community, we provide 1) a step-by-step guide on how to perform a research
project, 2) an overview of resources that are useful at the different steps of such a project, and 3) a glossary of the most common terms surrounding the research cycle. By focusing on
open-access and open-source resources, we level the playing field for researchers from
underprivileged countries or institutions, thereby facilitating open, fair, and reproducible
research in the field of neuroscience.
Introduction
A comparison between research projects conducted two decades ago and those of
today reveals a marked increase in the demands placed upon early career researchers
(ECRs; Weissgerber, 2021). This can be attributed, in part, to factors such as the need for
larger sample sizes (Fan et al., 2014; Marx, 2013; Zook et al., 2017), the incorporation of
novel methods such as pre-registration or dissemination possibilities (Ross-Hellauer et al.,
2020; Tripathy et al., 2017), and the growing utilization of advanced computational and
statistical techniques (Bolt et al., 2021; Bzdok & Yeo, 2017) such as machine learning, as well
as the implementation of cutting-edge technologies such as virtual reality (Matthews, 2018).
All of these factors contribute to an increased time commitment required to successfully
undertake such research endeavors (Powell, 2016). Accordingly, the motivation and
eagerness many ECRs feel during the first year of their work are increasingly often
accompanied by feelings of being overwhelmed (Kismihók et al., 2022; Levecque et al.,
2017), as many project choices have to be made and a variety of skills need to be learned
that determine the long-term success of one’s first research project.
At this stage, most ECRs lack the necessary expertise and experience to make these
important decisions. Moreover, many common terms need to be understood and learned.
Learning this ‘language of science’ can be difficult for ECRs (Parsons et al., 2022; see also
Table 1). In addition, institutions and supervisors often provide researchers with a relatively
fixed array of available resources which are conventionally used, such as software
subscriptions or in-house software. These tools are often expensive and bound to the
institution itself (i.e., may become unavailable when the researcher changes institutions or
works from home). On top of that, limited (subscription-based) resources might not only
impede, but also prevent good scientific practice. Accordingly, many open access tools have
been proposed to facilitate life as an ECR. However, these resources are often spread out
and hard to find or to compare with each other in terms of reliability, validity, usability, and
practicality. Moreover, these resources can be difficult to learn, in particular if there is limited
support by supervisors. Taken together, these difficulties are time-consuming and create a
(potentially error-prone) resource-labyrinth, further exacerbating the uncertainty of how, and
with which tools, high-quality science can be achieved.
Table 1
Mini glossary of science-related terminology, sorted alphabetically. The first occurrence of
each term is highlighted in the text.
Term
Definition
References
Corresponding
author
The corresponding author is typically the researcher
who takes primary responsibility for communication
regarding the manuscript, during both pre-publication
and post-publication phases. This usually includes
communication with the publisher, the readers, and
handling requests for data-sharing. Note that different
journals may have different requirements for
corresponding authors.
Pain (2021)
Cover letter
A letter to the editor of a scientific journal that is
submitted together with a manuscript. It outlines the
importance of the study and summarizes key findings
and contributions to the field. Some journals explicitly
require such a letter, while others actively discourage
it.
Palminteri
(2023)
CRediT
Statement
A taxonomy of 14 roles that can be assumed when
being part of a research project. The statement can be
included at the end of a manuscript to transparently
report which author assumed which roles.
Brand et al.
(2015); Tay
(2021)
Data wrangling
/ munging
The process of transforming and mapping data from
one “raw” data form into another format with the intent
of making it more appropriate and valuable for a
variety of downstream purposes such as analytics.
Endel &
Piringer (2015);
Kandel et al.
(2011)
Early career
researcher
An individual who is early in their academic career.
Typically from graduate or PhD student to Postdoc
level, sometimes even young principal investigators
such as junior professors.
Bazeley (2003);
Laudel & Gläser
(2008)
First author
The first author is the person listed first in an author
list of a manuscript. In many fields, it is the person
who has done most of the hands-on work and who
has taken on a pivotal role in the research project.
Shared co-first authorship is possible when two (or
more) authors provided equal first-author-level
contributions.
Riesenberg
(1990)
Garden of Forking Paths / Researcher degrees of freedom
Metaphor for the many (analytic) decisions that researchers can take, leading to many possible outcomes. The multitude of possible decisions can give rise to questionable measurement practices such as p-hacking or hypothesizing after the results are known (HARKing).
Gelman & Loken (2013); Botvinik-Nezer et al. (2020)
h-factor
A controversially discussed metric proposed by Hirsch
(2005) to assess a researcher’s (or journal’s) scientific
output by combining publication quantity and impact
(i.e., citations). h is defined as the highest number of
papers of an individual (or journal) with at least h
citations (e.g., h = 3 means having 3 papers with at
least 3 citations each).
Hirsch (2005)
Impact factor
A metric used to evaluate the relative importance of a
scholarly journal in a particular field by measuring the
average number of citations received per article
published in that journal over a specific period of time.
It is calculated by dividing the total number of citations
a journal receives in a given year by the total number
of articles published by the journal in the preceding
two years. It is commonly used as a tool to assess the quality and significance of research and has become an influential factor in the academic publishing industry, although its use is controversially discussed.
Sharma et al.
(2014)
Ivory tower
A metaphor for academia, portraying scientists as a
group of closed-off individuals living in a tower and
discussing scientific progress only amongst
themselves, limiting the outreach of scientific results.
Bond &
Paterson
(2005); Lewis
(2018)
Lab book
Also known as a laboratory notebook, this is a scientific
record-keeping tool used by researchers, scientists,
and students to document their research project,
experiments, observations, data, and findings.
Schnell (2015);
Guerrero et al.
(2019)
Open-access
When scholarly content (such as software, data,
materials, or output) is published in a way that is freely
available to everybody.
Evans &
Reimer (2009)
Paywall
A digital barrier implemented by academic publishers
restricting access to scholarly content (e.g., articles) to
researchers or institutions that have paid for a
subscription (or a one-time access). These costs are
intended to cover processes associated with editing,
peer-reviewing, and formatting; however,
paradoxically, they limit dissemination and potentially
hinder scientific progress. Hence, some researchers
advocate for open access publishing models to
promote equity in knowledge distribution.
Barbour et al.
(2006); Day et
al. (2020)
Peer review
The act of giving feedback on a manuscript under consideration at a scientific journal. Typically, a minimum of two reviewers who are experts in the field are invited to comment on a manuscript. Subsequently, editors make a decision whether to accept or reject the submission, and authors can be asked to revise their work based on reviewers' comments.
Jana (2019)
Pilot study
A pilot study is a small-scale preliminary investigation
that is conducted before a larger research project or
study to test the feasibility of the research design,
methods, and instruments. The primary purpose of a
pilot study is to identify potential problems and areas
for improvement in the research protocol, which can
be rectified before conducting the actual study.
Arain et al.
(2010); In
(2017);
Thabane et al.
(2010)
Postprint
The accepted or published version of a manuscript in
a scientific journal. Postprints can often be shared on
public repositories to make them accessible to
everyone and circumvent the “paywall”. Note that journal-
specific policies (e.g., embargo periods) need to be
considered.
Harnad (2003)
Power analysis
A statistical method used in research to determine the
sample size needed for a study to achieve a desired
level of statistical power. Statistical power refers to the
ability of a study to detect a significant effect (or
difference) between groups or conditions when a true
effect (or difference) exists. Crucially, if a study is
underpowered (i.e., the sample size is too small),
researchers may not be able to detect significant
effects even if they are present. Conversely, if a study
is overpowered (i.e., the sample size is too large),
resources may be wasted and the study may be
unnecessarily expensive or time-consuming.
Kemal (2020)
Preprint
A version of a manuscript that has not yet been peer-
reviewed and published in a scientific journal, but is
uploaded to an open-access online repository, usually
at the time of submission to a journal. Since preprints
have not undergone the established scientific quality-control process (i.e., peer review), they usually include a brief note that the reported findings should
be examined with caution by practitioners, journalists,
and policymakers.
Hoy (2020);
Wingen et al.
(2022)
Rebuttal
A written response to a criticism made against a research manuscript or proposal. It aims to refute or dispute opposing arguments by presenting counter-evidence or alternative interpretations or theories. Thus, rebuttals are an important aspect of peer review processes, which allows for the improvement of scientific work through constructive feedback or critical discourse.
Palminteri (2023)
Registered Report
A type of scholarly article format that involves a two-stage peer review process. In this format, authors submit a detailed research proposal or protocol to a journal, which is then peer-reviewed before any data is collected. If the proposal is deemed to be methodologically sound and potentially impactful, the journal agrees in advance to publish the results of the study, regardless of the outcome.
Henderson & Chambers (2022)
Revise and
Resubmit
An outcome resulting from the submission of a
manuscript to a scientific journal. The manuscript is
rejected in its current form, but the authors are invited
to revise and resubmit their work after incorporating
feedback from reviewers.
Kornfield (2019)
Scooped
A slang term used when one’s research idea, study,
or result is being claimed by other researchers, e.g.,
through publishing first.
Laine (2017)
Senior author
The senior author is the lead person (e.g., classically
the principal investigator; PI), primarily associated with
funding, supervision, or major responsibility for the
project. Shared co-senior authorship is possible when
two (or more) authors provided equal senior-author-
level contributions.
Pain (2021)
Standard operating procedure (SOP)
Documents or materials describing study procedures or processes for the purpose of establishing and managing data quality and reproducibility.
Manghani (2011)
Type I error rate
Type I or alpha error rate in statistics refers to the probability of rejecting a null hypothesis when it is actually true. In other words, it is the likelihood of obtaining a statistically significant result by chance alone, without any true underlying effect.
Banerjee et al. (2009)
Type II error
rate
Type II or beta error rate in statistics refers to the probability of failing to reject (i.e., maintaining) the null hypothesis when the alternative hypothesis is actually true. Beta can be used in power analyses.
Hartgerink et al.
(2017)
Table 2
Checklist of relevant questions for each step of the research cycle.
Step
Questions
1) Project start
● What is the gap in the literature and the resulting research
question?
● Is funding available to conduct the project?
● What are the time plan and work packages of the project?
● Who is responsible for what in the project?
2) Study design
● What are the hypotheses and how can they be tested?
● Which independent variables (IVs) are manipulated?
● Which dependent variables (DVs) need to be measured?
● Is approval by an ethics/institutional review board (IRB)
needed?
● How large should the sample be?
3) Study implementation
● What measures are most fitting (tasks, questionnaires,
etc.)?
● What stimuli need to be created (e.g., pictures, videos,
text)?
● Which programming environment should be used?
4) Piloting
● Is the study feasible?
● Do all manipulations work as intended?
5) Data collection
● How can we make sure my data is safely stored,
accessible, and backed up?
● Is the data collected in a way that protects private
information?
6) Data validation
● How can we ensure the quality and accuracy of the data?
● How can we store the data reproducibly?
7) Data analysis
● What are specific analysis pipelines and programs that
can be used for specific types of data (e.g., EEG, (f)MRI,
behavior)?
● What open-source software is a good alternative to
proprietary products?
● Which tools allow complete replicability of an analysis
pipeline, independent of the specific operating system of a
user or continuous software updates?
● How are results visualized in a captivating, yet transparent
and maximally inclusive way?
8) Writing the manuscript
● What is the scope of the paper?
● What is the target audience and journal?
● How to write a convincing abstract?
● How to properly credit authors?
● How to find and cite sources correctly?
● How to structure a manuscript?
● Which frameworks make it convenient to write a
reproducible manuscript?
9) Publication
● Where to upload data, code, materials, and/or a preprint?
● Is the published data FAIR (“Findable, Accessible,
Interoperable, and Reusable”)?
● How to write a cover letter?
● How to write a rebuttal to reviewer comments?
10) Dissemination
● How to design a poster for a conference?
● How to prepare a scientific presentation?
● How to present your research to a lay audience?
Therefore, a comprehensive overview of curated resources that cover all parts of a
research project is warranted. To address this issue, we created ARIADNE – a living and
interactive resource navigator that helps to use and search a dynamically updated database
of resources (see also Figure 1 and exemplary resources marked with ➜ in the subsequent
text). We named our tool ARIADNE, as we aim to help ECRs navigate the 'labyrinth' of
research tools and resources, much like the mythological Ariadne helped Theseus navigate
the labyrinth (e.g., see here). Our tool spans the whole research
cycle, helps ECRs to identify and find relevant resources, and is available as a dynamic
interface for easier use and searchability. The open-access database covers a constantly
growing list of resources that are useful for each step of a research project, ranging from the
planning and designing of a study, through the collection and analysis of the data, to the writing up and dissemination of findings. In doing so, we put an emphasis on open and reproducible science practices, as these are becoming increasingly valued and even mandatory
(Kent et al., 2022). In the following, we divide the research cycle into 10 steps that determine
the quality and the success of research projects. We describe the challenges and choices to
be made in each step and provide curated resources from ARIADNE for each of them: 1)
project start, 2) study design, 3) study implementation, 4) piloting, 5) data collection, 6) data
validation, 7) data analysis, 8) writing, 9) publication, and 10) dissemination. We also
introduce key terms relevant in each step, ultimately aiming to facilitate training and
communication between experts and people starting out in the world of academia (see Table
1 and italicized words in the main text; for open science-related terms see Parsons et al.,
2022). Lastly, we provide a checklist with questions one might ask at each step of a research
project in Table 2.
Figure 1. Exemplary visualization of ARIADNE - the scientific resource navigator. Clicking on
nodes leads to deeper levels (black arrow keys), with the final level showing all associated
resources including descriptions and hyperlinked websites (e.g., Project Start → Literature →
10 ways to Open Access).
Step 1: Project start
Even before the start of a project, researchers already have to make a variety of
decisions. Most important is the formulation of an interesting research question. Critically, a
research gap or limitation of previous work should be derived from the literature (Pautasso,
2013). This requires a comprehensive and systematic literature search, using subject-
specific databases and search engines, which are listed in ARIADNE (e.g., ➜ PsycInfo,
American Psychological Association (APA); ➜ PubMed, National Institutes of Health).
However, novel research findings that are still in the peer-review process cannot be found
via these databases. Therefore, researchers should also widen their search towards preprint
repositories (e.g., ➜ bioRxiv or ➜ PsyArXiv) for appropriate content, keeping in mind that
the latter work has not been peer-reviewed yet. Adopting an open science approach is also
useful to avoid one’s idea or project being scooped by other researchers (e.g., a
preregistration or Registered Report documents one’s original ideas; Laine, 2017; e.g., ➜
Connected Papers or ➜ Research Rabbit; see Table 1). Moreover, direct or conceptual
replications of prior work have been highlighted to be critical to scientific progress (Nosek &
Errington, 2017). Depending on the research question, different amounts of funding are
required, so a third-party funding application might be necessary. Researchers who depend
on grants have to keep in mind that such applications take substantial amounts of time and
are not guaranteed to succeed. If there is not enough money available, it may be an option to
adapt the research question accordingly at this stage (e.g., switching from a lab experiment
to an online experiment). Researchers can also first conduct a pilot study for feasibility
testing and use the obtained results for a funding application (see Step 4; e.g., a behavioral
study before employing more complex and costly neuroscientific methods). One should also
consider whether the research question can be answered in the time available, in particular if
they work on fixed contracts. Researchers who work on a joint research project have to
discuss (and document) the responsibilities of each member of the project teams. Possibly,
during the following steps, the research group may realize that further expertise is required,
which can lead to the inclusion of additional co-authors. Finally, the research group should
ideally establish a workflow pipeline that outlines the subsequent steps (i.e., Steps 2 to 9;
Gantt charts: bar charts used to illustrate a project schedule, showing start and finish dates
of activities and their dependencies; ➜ Ganttrify). This is particularly useful for a set of related tasks within a project (e.g., planning, scheduling, and monitoring projects and work packages).
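To illustrate the idea of such a project timeline, a minimal sketch in Python (using matplotlib; the work packages and dates below are purely hypothetical) could draw a simple Gantt-style chart:

```python
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import date

# Hypothetical work packages with start and end dates (illustration only)
tasks = [
    ("Literature search", date(2025, 1, 1), date(2025, 2, 15)),
    ("Ethics application", date(2025, 2, 1), date(2025, 3, 31)),
    ("Data collection", date(2025, 4, 1), date(2025, 7, 31)),
    ("Analysis and writing", date(2025, 8, 1), date(2025, 11, 30)),
]

fig, ax = plt.subplots(figsize=(8, 3))
for i, (name, start, end) in enumerate(tasks):
    # One horizontal bar per work package, positioned by its start date
    ax.barh(i, (end - start).days, left=mdates.date2num(start), height=0.5)
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([name for name, _, _ in tasks])
ax.invert_yaxis()  # first work package on top
ax.xaxis.set_major_formatter(mdates.DateFormatter("%b %Y"))
plt.tight_layout()
plt.savefig("project_gantt.png")
```

Dedicated tools such as ➜ Ganttrify produce more polished charts; the sketch above merely illustrates the underlying logic of mapping work packages onto a timeline.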
Step 2: Study design
In an empirical research project, the study design encompasses conceptualizing and
planning the methodology for data collection and analysis. Additionally, documenting the
decision-making process throughout the research project is crucial for enhancing
reproducibility, enabling other researchers to understand and replicate the study with greater
ease. It is essential to maintain flexibility in this pipeline, allowing for adjustments as the
project progresses. In this step and for most empirical research projects, researchers should apply for approval from the local ethics committee. Another important aspect of the study design
step is determining the appropriate sample size, target population (e.g., neurotypical
individuals or patients), as well as the sampling strategy (e.g., stratified or convenience
sampling; Stratton, 2021). To ensure that the study has sufficient statistical power to detect
meaningful differences or associations, justification of one’s sample size is helpful at this
stage, e.g., via a power analysis (➜ G*Power, ➜ Justification Shinyapp, or ➜ g_ci_spm;
Cohen, 1962; Jones et al., 2003; Kemal, 2020; see Table 1). Considering ethics in
experimental design involves taking steps to protect the rights and welfare of participants,
weighing costs and benefits while minimizing risks, and ensuring the privacy of participants
and the confidentiality of their data. It also involves considering the impact of the findings on
society and potential biases that may exist in the study. In essence, the study design step
lays out the foundation for the entire project and provides a roadmap for all subsequent
steps. Most importantly, it considers data collection, analysis, and interpretation of results
(Steps 5-8). It is essential for the study design to be well-conceived, well-executed, and well-
documented to ensure the quality, integrity, and generalizability of the findings of the
research. Drawing on the experience of supervisor(s), mentor(s), and/or collaborator(s) is
key in this step, as they might have specific expertise or experience with certain aspects of
the planned project. In this step, the importance of ‘Big Team Science’ and sharing of
knowledge and expertise becomes especially clear (Hall et al., 2018). ARIADNE can help
kick-start this process by providing grounds for tool selection. Criteria, tasks, and rules for (co-)authorship should already be discussed at this early stage of the project, and re-
discussed over its course if changes arise (➜ CRediT statement; Brand et al., 2015; Tay,
2021; see Table 1 and Step 8). Finally, the decision for a suitable task programming
environment should take into account whether the study will be lab-based or implemented
online and whether the program is freely available (➜ Psychopy vs. ➜ Psychtoolbox in
Matlab).
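To make the sample-size justification discussed earlier in this step concrete, a minimal a priori power analysis could be sketched in Python (e.g., with statsmodels; the effect size, alpha, and power values are hypothetical planning choices, not recommendations):

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical planning values: medium effect (Cohen's d = 0.5),
# alpha = .05, desired power = .80, two-sided independent-samples t-test
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.1f}")  # roughly 64
```

Graphical tools such as ➜ G*Power implement the same logic without requiring any code.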
Step 3: Study implementation
Study implementation refers to the development of a task or paradigm that will be
used to manipulate the independent variables (IVs) and measure the dependent variables
(DVs) of interest, as well as the creation of the necessary stimuli and control conditions.
Here, stimulus control refers to the methods used to control and manipulate the stimuli that
participants are exposed to during the experiment. This might include creating specific visual,
auditory, tactile, or other types of stimuli, as well as controlling the timing, duration, and
intensity of the stimuli. The selection of openly available stimuli on platforms such as the ➜
Kapodi Stimuli database or ➜ International Affective Picture System (IAPS) is recommended, as this not only enhances reproducibility but also ensures the use of stimuli that underwent a proper standardization procedure (Lang et al., 2008). ARIADNE helps by providing curated,
tried-and-tested resources. Crucially, the trap of “questionable measurement practices” (Flake & Fried, 2020) should be avoided by favoring materials with proven standardization, reliability, and validity (e.g., stimuli, tasks, questionnaires). However,
researchers should consider that task reliability can mean different things in experimental
and correlational research (Hedge et al., 2018; Nebe et al., 2023). Other aspects of study
implementation may include the development of a standard operating procedure (SOP;
Manghani, 2011; see Table 1) or protocol to guide the experimenter through the study, the
creation of a data collection and analysis plan, and the implementation of procedures to
ensure the reliability and validity of the study (see also Step 6). It is immensely helpful to
note down decisions and the reasons for these decisions, as those will be relevant for the
later writing process (Step 8). In this context, preregistration, which entails documenting and
uploading the research plan before the outset of data collection, including the hypothesis,
design, and analysis plan, has received a great deal of attention recently and been employed
as a crucial tool in transparent and reproducible scientific research (Toth et al., 2021; see
Table 1; ➜ PROSPERO for systematic reviews or ➜ Open Science Framework templates).
This practice helps to prevent an inflation of the false-positive rate by reducing researcher
degrees of freedom and/or limiting decisions within the garden of forking paths (see Table 1).
Furthermore, it improves transparency and reproducibility of the study (Peikert et al., 2021).
An extension of preregistration, so-called Registered Reports (Henderson & Chambers,
2022; see Table 1), even shift the peer-review process from after to before data collection,
allowing researchers to get feedback on their work early in the process and to be able to
adapt their research design before the study starts (Scheel et al., 2021).
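As a flavor of what task implementation in a freely available environment can look like, here is a minimal, hypothetical PsychoPy sketch that presents a single text stimulus and waits for a key press; a real experiment would add trial loops, precise timing checks, and data logging:

```python
from psychopy import visual, core, event

# Open a window and create a simple text stimulus (illustrative parameters)
win = visual.Window(size=(800, 600), color="grey", units="pix")
stim = visual.TextStim(win, text="Press any key to continue", color="white")

stim.draw()
win.flip()                           # show the stimulus
keys = event.waitKeys(maxWait=5.0)   # wait up to 5 s for a response
print("Response:", keys)

win.close()
core.quit()
```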
Step 4: Piloting
A pilot study, also known as exploratory trial, is a preliminary small-scale study
conducted to assess potential problems, duration, and other factors before a full
investigation. This is often a reflective and iterative process (Thabane et al., 2010). By
setting criteria based on important feasibility objectives and research goals, these pilot
studies enable researchers to determine the feasibility of a more extensive, time-consuming,
and expensive main study and to test whether the operationalization (Step 2) makes sense
(see ARIADNE for resources related to piloting; e.g., ➜ data simulation). First, regarding
feasibility, it is common and recommended to always test a few “pilot” participants with your
whole set-up before starting Step 5 (the data collection), to check whether participants understand the instructions of the new experiment and whether all procedures work as planned. The design of the
main study can then be modified for improvements based on the findings of this pilot study.
Of note, another complementary method for better determining a study’s feasibility is to
simulate data, which allows researchers to test multiple hypotheses and prepare for
prospective outcomes before carrying out the primary investigation. Second, on an
operationalization level, these preliminary data should be used to check whether all
dependent variables can be extracted from the raw files. It is also important to note here that
data from pilot studies or participants should be kept separate from the data of the main
study. Crucially, it is considered controversial to use pilot data to calculate preliminary
estimates of the effect size and variability of the outcome measures to estimate the required
sample size for the main study (Albers et al., 2018; Sakaluk, 2016). In conclusion, piloting
and data simulation are essential steps for study planning and design, enabling researchers
to evaluate viability, foster greater transparency, and enhance the overall quality of their
research.
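As an illustration of the data simulation mentioned above, the following minimal Python sketch (with hypothetical group sizes and effect size) repeatedly simulates a two-group comparison to check how often the planned test would detect the assumed effect:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_per_group, effect_size, n_sims = 40, 0.5, 2000  # hypothetical planning values

n_significant = 0
for _ in range(n_sims):
    # Simulate one control and one treatment group under the assumed effect
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treatment = rng.normal(loc=effect_size, scale=1.0, size=n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    n_significant += p_value < 0.05

print(f"Proportion of significant simulations: {n_significant / n_sims:.2f}")
```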
Step 5: Data collection
Before starting with data collection, it is recommended to create a standardized
manual (SOP; see Step 3 and Table 1; Manghani, 2011) and document the experimental
procedure in a lab-book (Schnell, 2015) that lists unforeseen events and information for each
participant/session. The latter ensures that important details, such as equipment
malfunctioning, reasons for participant dropout, noticeable participant behavior, and any
crucial decisions or modifications made on the fly, are not lost or forgotten. Note that writing
up the methods section before Step 5 saves time later on and enhances the
precision and reproducibility of the research project. Here, data management strategies such
as intuitive data saving structures can help to avoid misunderstandings as well as waste of
time due to data rearrangement or rewriting scripts (Michener, 2015). Making sure you have
all data backed up is essential to prevent valuable data from being accidentally lost. These
practices later facilitate data, code, and material sharing as part of the publication (Step 9;
Contaxis et al., 2022). Even though the scientific community still lacks consensus on data
arrangement structures and is constantly finding new approaches, there are already well-
established structures such as the ➜ Brain Imaging Data Structure (BIDS; Gorgolewski et
al., 2017) for complex neuroimaging data, which are listed in ARIADNE. Furthermore, data
anonymization or pseudonymization are critical techniques to protect participants’ rights and
privacy (Meyer, 2018 for ethical data sharing; Hallinan et al., 2023 for European Union
regulations on data privacy). ARIADNE also provides examples of existing data that can be
used for some research questions.
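For orientation, a minimal BIDS-style folder layout might look like the following sketch (the subject, task, and file names are purely illustrative):

```text
my_study/
├── dataset_description.json
├── participants.tsv
├── sub-01/
│   ├── anat/
│   │   └── sub-01_T1w.nii.gz
│   └── func/
│       ├── sub-01_task-rest_bold.nii.gz
│       └── sub-01_task-rest_bold.json
└── sub-02/
    └── ...
```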
Step 6: Data validation
Data validation in a research project refers to the process of ensuring the quality and
accuracy of the data collected during the study (e.g., Breck et al., 2019 for machine learning
projects and our tool ARIADNE for specific resources). Accordingly, quality control refers to
the continuous process of evaluating the data or procedures such as SOPs for
completeness, accuracy, and consistency, and identifying and removing any errors or
outliers (Freire, 2021). This may include checks for missing data, incorrect data entry, or
other issues that could impact the validity of the study and subsequent interpretation of the
results, as well as ensuring that your data are FAIR (“Findable, Accessible, Interoperable, and
Reusable” ➜ FAIR data or ➜ RDMkit), which will facilitate the later publication of the data
along with the paper (Step 9). Data wrangling, also known as data munging, is the process of
transforming and mapping data from one “raw” data form into another format with the intent
of making it more appropriate and valuable for a variety of downstream purposes such as
analytics (see Table 1; Endel & Piringer, 2015; Kandel et al., 2011). This step has the
ultimate goal of cleaning, organizing, documenting, and preserving the data for future use.
This may include creating detailed metadata, documenting the data collection and cleaning
process, and storing the raw and processed data in a secure and accessible format (which
might mean that the software [version] used to gather and process data has to be stored as
well). However, aspects like data quality, merging data from different sources, creating
reproducible processes, and data provenance are equally important. Importantly, these
validation practices should be implemented throughout the data collection. In sum, this step
contributes essentially to the robustness of the study’s findings and the ability to replicate or
build upon the research in future studies. This step can be started as soon as first data is
collected, leading to the next step, data analysis.
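As a small illustration of such checks, a sketch in Python (using pandas; the file name and column names are hypothetical) could flag missing values, implausible reaction times, and duplicate trials:

```python
import pandas as pd

# Hypothetical behavioral data file; column names are illustrative only
df = pd.read_csv("sub-01_task-rest_beh.tsv", sep="\t")

report = {
    "n_rows": len(df),
    "missing_per_column": df.isna().sum().to_dict(),
    # Reaction times outside an assumed plausible range of 0.1–3.0 s
    "implausible_rts": int(((df["reaction_time"] < 0.1) |
                            (df["reaction_time"] > 3.0)).sum()),
    "duplicate_trials": int(df.duplicated(subset=["participant_id", "trial"]).sum()),
}
print(report)
```

Such checks can be run routinely as new data come in, so that problems are caught during rather than after data collection.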
Step 7: Data analysis
Classically, this step overlaps with Step 6. Initial data analysis refers to the process of
data inspection and reorganization that needs to be carried out before formal statistical
analyses (Hueber et al., 2016). This process includes metadata setup, data
cleaning/screening/refining, updating the research analysis plan, and the documentation of
initial data analysis procedures (see Baillie et al., 2022). Ideally, the data analysis procedure
for the current project has been thoroughly planned and fixed in advance during Step 3 as
part of a preregistration or Registered Report. But even then, many new decisions have to be
made at this stage, which may affect the next steps, e.g., how the data can be best shared
with others or how results are best visualized (Kroon et al., 2022). Choosing the right
analysis framework one feels comfortable with is just one of the many challenges in this step
(➜ RStudio, ➜ JASP, and ➜ Jupyter Notebook). Statistical approaches that are suitable for
the research question need to be chosen (e.g., Bayesian versus frequentist statistics; Pek &
Van Zandt, 2020; van Zyl, 2018). If applicable, correction methods for multiple comparisons
should be considered (Alberton et al., 2020; Noble, 2009), to avoid a potential increase in
Type I error rate (see Table 1). In the process of analyzing results, it is essential to
consider the role of visualizations. Effective visual representations can enhance the
comprehension of complex data sets and findings (➜ BioRender; ➜ Mermaid; ➜ Nipype).
Crucially, in recent times, there has been a shift in focus from group-level to individual trajectory analyses, which has a significant impact on the required sample size and the effect size (Marek et al., 2022). To overcome inherent inaccuracies associated with estimating effect sizes, sequential analyses involve monitoring data collection as it progresses while controlling the Type I error rate (Lakens, 2014). At a predetermined stage in the project (e.g.,
defined in Step 2), an interim analysis can be conducted to determine whether the collected
data provide sufficient evidence to conclude that an effect is present, whether more data
should be gathered, or whether the study should be terminated if the predicted effect is
unlikely to be observed (Lakens, 2014). Of note, data analysis is a critical step that has
attracted much attention recently in light of the so-called “replicability crisis” (Anvari &
Lakens, 2018), as this is a stage where questionable research practices (John et al., 2012)
and biases may occur (even inadvertently).
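To make the correction for multiple comparisons mentioned above concrete, a minimal sketch in Python (using statsmodels; the p-values are hypothetical) could look like this:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from five tests run within the same analysis
p_values = np.array([0.001, 0.020, 0.049, 0.180, 0.740])

# Benjamini-Hochberg false discovery rate correction at alpha = .05
rejected, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(rejected)             # which null hypotheses can still be rejected
print(p_adjusted.round(3))  # adjusted p-values
```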
Step 8: Writing the manuscript
Once data is analyzed and discussed with supervisor(s) and potential co-authors,
researchers are set to outline their results in a comprehensive manuscript (Mensh & Kording,
2017). In this context, they need to determine a target journal for their manuscript. This
journal should ideally be related to the research question, and will subsequently influence the
scope of the paper (e.g., audience, article structure). Various criteria can guide the journal
selection (Salinas et al., 2014). Criteria like impact factor (see Table 1) and journal prestige
may be critical for more senior researchers who need to build up a reputation, whereas
criteria like acceptance rates and turn-around times may be more important for ECRs who
need to complete their academic training within a limited amount of time.
The decision for a target journal is usually taken together with the project team (i.e.,
supervisor, collaborators, and co-authors, see also Step 1; ➜ Journal-Author Name
Estimator), and will often specify the sections to be included, how many words to write, how
many figures or tables to include, and whether there is space for supplementary materials.
For example, writing a manuscript with the results directly after the introduction as opposed
to after the methods will substantially change the way the whole manuscript needs to be
organized. Moreover, journal choice will directly affect how the article can be accessed (e.g.,
open-access or paywall) and whether and how pre- and postprints can be shared with the
scientific community (see Table 1). Authorship of the manuscript should be offered to
individuals who agree to make substantial scientific contributions to the project (see APA
Ethics Code Standard 8.12a, for example; see also Step 1). These include, but are not
limited to, conceptualization, data collection, data analysis, writing, funding, or supervision.
However, the status and order of authors varies strongly depending on the scientific
discipline (Pain, 2021). In human neuroscience and psychology, the order of authorship
usually reflects the relative contributions of the researchers involved (e.g., ➜ Credit Author
Statement; ➜ Tenzing). While the first author is typically the person who has contributed
most to the project (e.g., the graduate student), the person who is supervising the project
often appears last (“senior author”) (see Table 1; Pain, 2021). The other authors are named
in between, usually in order of decreasing contribution. Other fields may opt to
include people with minor contributions or choose an alphabetical author order (Pain, 2021).
Step 9: Publication
There are many ways to disseminate scientific work (Bourne, 2005; see Step 10), many of which are summarized in ARIADNE. Preprints facilitate early access to the manuscript,
which helps researchers to document their scientific or academic work and may even be
used to assert priority (e.g., ➜ MetaArXiv or ➜ bioRxiv). Preprint publication often happens
simultaneously with the submission to the target journal of choice. The accessibility and
reception of a preprint may make it easier to assess the quality of scientific work than bold
claims about the novelty or impact of the work (Brembs, 2019). However, be aware that
some journals prohibit the upload of preprints (➜ Sherpa Romeo). Additionally, fellow
researchers who have access to this work may provide comments that may be useful for a
critical re-evaluation of the manuscript, which might also happen simultaneously to its peer
review at a journal (see Table 1). Most journals ask researchers to submit the manuscript
together with a cover letter (see Table 1). The cover letter allows researchers to demonstrate
the relevance and quality of their work. However, some journals also actively discourage the
submission of a cover letter to let the manuscript “speak for itself”. Once the manuscript is
under review, reviewers might raise more or less critical issues about the manuscript and
inform the editor handling your paper (Suls et al., 2009). Most often, the editor then
recommends either acceptance, minor revisions (both rarely happen on the first submission),
major revisions (sometimes also called revise and resubmit; see Table 1), or rejection.
Addressing each issue raised by the reviewers in a well-crafted, point-by-point
rebuttal letter (Palminteri, 2023; see Table 1) allows researchers to demonstrate that
criticized parts of the manuscript have been revised to an extent that warrants the
acceptance of the manuscript (Noble, 2017) or to argue why suggested changes have not
been implemented. Following acceptance, researchers may think about publishing their data and
code together with the manuscript in a way that allows easy access to and reuse of the work
(Goodman et al., 2014). The process from submission to publication can take several months (in rare cases even years), and this time should be factored into Step 1, where the time plan of the project is first fixed. If your manuscript is rejected by your first journal choice, a
submission to an alternative journal of equal or slightly lower rank is usually warranted. Only
in exceptional cases should an appeal be considered. Crucially, if you notice an error only
after publication (e.g., a software bug or faulty code/input data), this should be discussed
with the co-authors and corrected in the published article as soon as possible (Bruns et al.,
2019).
Step 10: Dissemination
Once a study has been pre-printed and/or published, the dissemination process does
not necessarily end, and ARIADNE showcases the many ways in which you can continue to
publicize or present your work (Bourne, 2007). It can be important to pursue additional
dissemination strategies so that as many people as possible can benefit from the
new findings (Ross-Hellauer et al., 2020). Typically, the results should be presented at
conferences in the form of talks or posters (Pain, 2022), and potentially circulated on online
platforms (e.g., ➜ X or ➜ Mastodon). These dissemination forms might happen before or
during Step 9 as part of the preprint upload, or even as early as Step 7 to get peer feedback
on the freshly analyzed results. Generally, two target groups should be differentiated when it
comes to dissemination: Other researchers and the general public. Regarding the latter,
science communication journals can also be addressed (➜ In-Mind, ➜ Scientific American,
➜ APS Observer, ➜ APA Monitor on Psychology, or ➜ Gehirn und Geist), and usually the
outreach offices of many institutions can be contacted to circulate a press release among
regional and national news outlets. Ultimately, sharing open materials, including code and data with appropriate licenses (Contaxis et al., 2022), is highly advisable considering the rise in open
science practices. The server’s privacy policies and the respective lawful basis (e.g., General
Data Protection Regulation; Houtkoop et al., 2018, Peloquin et al., 2020) should be carefully
considered when choosing an appropriate platform (➜ Open Science Framework or ➜
Zenodo). A wide-reach dissemination strategy is highly recommended. Eventually, research
only has value when the methods and results leave the academic ivory tower (see Table 1)
and are communicated to the general public and stakeholders.
Discussion and outlook
With this comprehensive overview of the ten most important steps of a research
project and their inherent respective challenges, we present our tool ARIADNE. By
introducing ARIADNE to the research community, we provide 1) a step-by-step guide on how
to perform a research project, 2) an overview of resources that are useful at the different steps of such a project (with a specific focus on open and reproducible science), and 3) a glossary of the most common terms surrounding the research cycle. By focusing on open-
access and open-source resources, we level the playing field for researchers from
underprivileged countries or institutions. We also facilitate open, fair, and reproducible
research in the field of neuroscience, and empower ECRs to master reproducibility and
replicability challenges with this living and dynamic open resource platform.
We think that providing an accessible and structured overview of resources with a
focus on open science will be of utmost importance to ECRs, particularly since institutions,
funding agencies, and other stakeholders are laying more and more weight on efforts in
improving scientific quality (see, for instance, the ➜ Declaration on Research Assessment,
DORA). Facilitating the integration of open science practices and improving research quality
through collections such as ARIADNE will thus be an important contribution to advance the
careers of ECRs. We nevertheless hope that our paper and tool can be widely distributed to
researchers of all levels starting a new project, but also to supervisors as a guideline or
tutorial for their employees. As our resource is “living” and “interactive”, we also actively invite experienced researchers from our own as well as neighboring fields to contribute their own
tried-and-tested tools to our database here.
As a team of ten researchers at different career levels, including PhD students,
postdocs, and professors, we bring extensive experience and knowledge in using these
resources, many of which are regularly employed in our own work. The resources provided
in this manuscript and in ARIADNE serve as curated recommendations based on current
research practices. However, it is important for researchers to consider their own
preferences and requirements when choosing resources for their experiments. We cannot
guarantee the effectiveness, suitability, or long-term availability of any particular resource for
a specific research project, but we will regularly update and add resources with a dynamic,
quality-driven approach. Researchers are encouraged to exercise their own judgment and
discretion when selecting resources and conducting experiments.
We would further like to stress that the present version of the tool is a starting point,
which we aim to continuously extend and improve upon. Hence, future versions will include
resources and information regarding supervision and mentoring (Jabre et al., 2021),
academia beyond the PhD (postdoc-level: Bourne & Friedberg, 2006; professor-level:
Tregoning & McDermott, 2020), lab life (Maestre, 2019), building up collaborations,
networking and lab exchanges (Vicens & Bourne, 2007), how to deal with article rejection
(Nature Human Behaviour, 2021), as well as time management, progress tracking, and grant
writing (Bourne & Chalupa, 2006).
In conclusion, we believe that this resource holds promise to encourage not only early career scholars but also more senior researchers to delve into the field of open and reproducible science, using our tool as a starting and orientation point. Together, we can
greatly alleviate the challenges attached to starting out in science, prevent a constant,
frustrating “re-invention of the wheel”, and provide helpful support during all stages of the
research cycle – for everyone.
Author contributions
● Conceptualization: All authors
● Data curation: All authors
● Investigation: All authors
● Methodology: All authors
● Project administration: HH, CG
● Software: CG, MM, AKa
● Supervision: HH, CG
● Validation: All authors
● Visualization: HH, CG, MM, AKa
● Writing – original draft: HH, MFJS, AL, YY, AKo, SV
● Writing – review & editing: All authors
Acknowledgements
We thank the Interest Group for Open and Reproducible Science (IGOR) of the section
Biological Psychology and Neuropsychology, which forms part of the German Psychological
Society (DGPs). This work originated in and was supported by IGOR.
Competing interests
No author has any competing interests to declare.
Funding
HH was supported by the Marietta-Blau scholarship of the Austrian Agency for Education
and Internationalisation (OeAD) and the Deutsche Forschungsgemeinschaft (DFG, German
Research Foundation - Project-ID 422744262 - TRR 289). CG and GBF were supported by
an Emmy-Noether-Grant of the Deutsche Forschungsgemeinschaft to GBF (DFG, German
Research Foundation - FE1617/2-1). AL was supported by a research grant of the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation - Project-ID LI2517/2-1).
MFJS was supported by the “Justus Liebig University Postdoc Fund,” provided by the
Postdoc Career and Mentoring Office of the University of Giessen. AKo and AKa were
supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation -
Project-ID LO1980/41). None of the funders had any role in the conceptualization, creation of
the resource, writing, or decision to publish.
Data accessibility statement
Our step-by-step tool ARIADNE is freely available online and hosted on GitHub, using
Cytoscape.js and Jupyter Book. It complies with the Open Source definition. Find the
matching website here and the tool itself here. New resources and/or feedback can be
submitted via this form.
References
Albers, C., & Lakens, D. (2018). When power analyses based on pilot data are biased:
Inaccurate effect size estimators and follow-up bias. Journal of Experimental Social
Psychology, 74, 187–195. https://doi.org/10.1016/j.jesp.2017.09.004
Alberton, B. A. V., Nichols, T. E., Gamba, H. R., & Winkler, A. M. (2020). Multiple testing
correction over contrasts for brain imaging. NeuroImage, 216, 116760.
https://doi.org/10.1016/j.neuroimage.2020.116760
Anvari, F., & Lakens, D. (2018). The replicability crisis and public trust in psychological
science. Comprehensive Results in Social Psychology, 3(3), 266–286.
https://doi.org/10.1080/23743603.2019.1684822
Arain, M., Campbell, M. J., Cooper, C. L., & Lancaster, G. A. (2010). What is a pilot or
feasibility study? A review of current practice and editorial policy. BMC Medical
Research Methodology, 10(1), 67. https://doi.org/10.1186/1471-2288-10-67
Baillie, M., Le Cessie, S., Schmidt, C. O., Lusa, L., Huebner, M., & for the Topic Group “Initial
Data Analysis” of the STRATOS Initiative. (2022). Ten simple rules for initial data
analysis. PLOS Computational Biology, 18(2), e1009819.
https://doi.org/10.1371/journal.pcbi.1009819
Banerjee, A., Chitnis, U., Jadhav, S., Bhawalkar, J., & Chaudhury, S. (2009). Hypothesis
testing, type I and type II errors. Industrial Psychiatry Journal, 18(2), 127.
https://doi.org/10.4103/0972-6748.62274
Barbour, V. (2006). The impact of open access upon public health. Bulletin of the World
Health Organization, 84(5), 339–339. https://doi.org/10.2471/BLT.06.032409
Bazeley, P. (2003). Defining “Early Career” in research. Higher Education, 45(3), 257–279.
https://doi.org/10.1023/A:1022698529612
Bolt, T., Nomi, J. S., Bzdok, D., & Uddin, L. Q. (2021). Educating the future generation of
researchers: A cross-disciplinary survey of trends in analysis methods. PLOS Biology,
19(7), e3001313. https://doi.org/10.1371/journal.pbio.3001313
Bond, R., & Paterson, L. (2005). Coming down from the ivory tower? Academics’ civic and
economic engagement with the community. Oxford Review of Education, 31(3), 331–
351. https://doi.org/10.1080/03054980500221934
Botvinik-Nezer, R., Holzmeister, F., Camerer, C. F., Dreber, A., Huber, J., Johannesson, M.,
Kirchler, M., Iwanir, R., Mumford, J. A., Adcock, R. A., Avesani, P., Baczkowski, B. M.,
Bajracharya, A., Bakst, L., Ball, S., Barilari, M., Bault, N., Beaton, D., Beitner, J., …
Schonberg, T. (2020). Variability in the analysis of a single neuroimaging dataset by
many teams. Nature, 582(7810), 84–88. https://doi.org/10.1038/s41586-020-2314-9
Bourne, P. E. (2005). Ten simple rules for getting published. PLoS Computational Biology,
1(5), e57. https://doi.org/10.1371/journal.pcbi.0010057
Bourne, P. E. (2007). Ten simple rules for making good oral presentations. PLoS
Computational Biology, 3(4), e77. https://doi.org/10.1371/journal.pcbi.0030077
Bourne, P. E., & Chalupa, L. M. (2006). Ten simple rules for getting grants. PLoS
Computational Biology, 2(2), e12. https://doi.org/10.1371/journal.pcbi.0020012
Bourne, P. E., & Friedberg, I. (2006). Ten simple rules for selecting a postdoctoral position.
PLoS Computational Biology, 2(11), e121. https://doi.org/10.1371/journal.pcbi.0020121
Bourne, P. E., Polka, J. K., Vale, R. D., & Kiley, R. (2017). Ten simple rules to consider
regarding preprint submission. PLOS Computational Biology, 13(5), e1005473.
https://doi.org/10.1371/journal.pcbi.1005473
Bradley, M. M., & Lang, P. J. (2017). International Affective Picture System. In V. Zeigler-Hill
& T. K. Shackelford (Eds.), Encyclopedia of Personality and Individual Differences (pp.
1–4). Springer International Publishing. https://doi.org/10.1007/978-3-319-28099-8_42-
1
Brand, A., Allen, L., Altman, M., Hlava, M., & Scott, J. (2015). Beyond authorship: Attribution,
contribution, collaboration, and credit. Learned Publishing, 28(2), 151–155.
https://doi.org/10.1087/20150211
Breck, E., Polyzotis, N., Roy, S., Whang, S., & Zinkevich, M. (2019). Data validation for
machine learning. MLSys.
Brembs, B. (2019). Reliable novelty: New should not trump true. PLOS Biology, 17(2),
e3000117. https://doi.org/10.1371/journal.pbio.3000117
Bruns, S. B., Asanov, I., Bode, R., Dunger, M., Funk, C., Hassan, S. M., Hauschildt, J.,
Heinisch, D., Kempa, K., König, J., Lips, J., Verbeck, M., Wolfschütz, E., & Buenstorf,
G. (2019). Reporting errors and biases in published empirical findings: Evidence from
innovation research. Research Policy, 48(9), 103796.
https://doi.org/10.1016/j.respol.2019.05.005
Bzdok, D., & Yeo, B. T. T. (2017). Inference in the age of big data: Future perspectives on
neuroscience. NeuroImage, 155, 549–564.
https://doi.org/10.1016/j.neuroimage.2017.04.061
Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review.
The Journal of Abnormal and Social Psychology, 65(3), 145–153.
https://doi.org/10.1037/h0045186
Contaxis, N., Clark, J., Dellureficio, A., Gonzales, S., Mannheimer, S., Oxley, P. R.,
Ratajeski, M. A., Surkis, A., Yarnell, A. M., Yee, M., & Holmes, K. (2022). Ten simple
rules for improving research data discovery. PLOS Computational Biology, 18(2),
e1009768. https://doi.org/10.1371/journal.pcbi.1009768
Day, S., Rennie, S., Luo, D., & Tucker, J. D. (2020). Open to the public: Paywalls and the
public rationale for open access medical research publishing. Research Involvement
and Engagement, 6(1), 8. https://doi.org/10.1186/s40900-020-0182-y
Endel, F., & Piringer, H. (2015). Data Wrangling: Making data useful again. IFAC-
PapersOnLine, 48(1), 111–112. https://doi.org/10.1016/j.ifacol.2015.05.197
Evans, J. A., & Reimer, J. (2009). Open access and global participation in science. Science,
323(5917), 1025–1025. https://doi.org/10.1126/science.1154562
Fan, J., Han, F., & Liu, H. (2014). Challenges of big data analysis. National Science Review,
1(2), 293–314. https://doi.org/10.1093/nsr/nwt032
Flake, J. K., & Fried, E. I. (2020). Measurement schmeasurement: Questionable
measurement practices and how to avoid them. Advances in Methods and Practices in
Psychological Science, 3(4), 456–465. https://doi.org/10.1177/2515245920952393
Freire, D. (2021). How to improve data validation in five steps. SSRN, Mercatus Working
Paper Series. https://doi.org/10.2139/ssrn.3812561
Gallagher, J. R., & DeVoss, D. N. (Eds.). (2019). Explanation points: Publishing in rhetoric
and composition. Utah State University Press.
Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can
be a problem, even when there is no “fishing expedition” or “p-hacking” and the
research hypothesis was posited ahead of time. Department of Statistics, Columbia
University, 348, 1–17.
Goodman, A., Pepe, A., Blocker, A. W., Borgman, C. L., Cranmer, K., Crosas, M., Di
Stefano, R., Gil, Y., Groth, P., Hedstrom, M., Hogg, D. W., Kashyap, V., Mahabal, A.,
Siemiginowska, A., & Slavkovic, A. (2014). Ten simple rules for the care and feeding of
scientific data. PLoS Computational Biology, 10(4), e1003542.
https://doi.org/10.1371/journal.pcbi.1003542
Gorgolewski, K. J., Alfaro-Almagro, F., Auer, T., Bellec, P., Capotă, M., Chakravarty, M. M.,
Churchill, N. W., Cohen, A. L., Craddock, R. C., Devenyi, G. A., Eklund, A., Esteban,
O., Flandin, G., Ghosh, S. S., Guntupalli, J. S., Jenkinson, M., Keshavan, A., Kiar, G.,
Liem, F., … Poldrack, R. A. (2017). BIDS apps: Improving ease of use, accessibility,
and reproducibility of neuroimaging data analysis methods. PLOS Computational
Biology, 13(3), e1005209. https://doi.org/10.1371/journal.pcbi.1005209
Guerrero, S., López-Cortés, A., García-Cárdenas, J. M., Saa, P., Indacochea, A., Armendáriz-Castillo, I., et al. (2019). A quick guide for using Microsoft OneNote as an electronic laboratory notebook. PLOS Computational Biology, 15(5), e1006918. https://doi.org/10.1371/journal.pcbi.1006918
Hall, K. L., Vogel, A. L., Huang, G. C., Serrano, K. J., Rice, E. L., Tsakraklides, S. P., &
Fiore, S. M. (2018). The science of team science: A review of the empirical evidence
and research gaps on collaboration in science. American Psychologist, 73(4), 532–
548. https://doi.org/10.1037/amp0000319
Hallinan, D., Boehm, F., Külpmann, A., & Elson, M. (2023). Information provision for
informed consent procedures in psychological research under the general data
protection regulation: A practical guide. Advances in Methods and Practices in
Psychological Science, 6(1), 251524592311519.
https://doi.org/10.1177/25152459231151944
Harnad, S. (2003). Eprints: Electronic preprints and postprints. In Encyclopedia of Library and Information Science.
Hartgerink, C. H. J., Wicherts, J. M., & Van Assen, M. A. L. M. (2017). Too good to be false:
Nonsignificant results revisited. Collabra: Psychology, 3(1), 9.
https://doi.org/10.1525/collabra.71
Hedge, C., Powell, G., & Sumner, P. (2018). The reliability paradox: Why robust cognitive
tasks do not produce reliable individual differences. Behavior Research Methods,
50(3), 1166–1186. https://doi.org/10.3758/s13428-017-0935-1
Henderson, E. L., & Chambers, C. D. (2022). Ten simple rules for writing a Registered
Report. PLOS Computational Biology, 18(10), e1010571.
https://doi.org/10.1371/journal.pcbi.1010571
Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output.
Proceedings of the National Academy of Sciences, 102(46), 16569–16572.
https://doi.org/10.1073/pnas.0507655102
Houtkoop, B. L., Chambers, C., Macleod, M., Bishop, D. V. M., Nichols, T. E., &
Wagenmakers, E.-J. (2018). Data sharing in psychology: A survey on barriers and
preconditions. Advances in Methods and Practices in Psychological Science, 1(1), 70–
85. https://doi.org/10.1177/2515245917751886
How (not) to appeal. (2021). Nature Human Behaviour, 5(7), 805–806.
https://doi.org/10.1038/s41562-021-01174-w
Hoy, M. B. (2020). Rise of the Rxivs: How preprint servers are changing the publishing
process. Medical Reference Services Quarterly, 39(1), 84–89.
https://doi.org/10.1080/02763869.2020.1704597
Huebner, M., Vach, W., & le Cessie, S. (2016). A systematic approach to initial data analysis
is good research practice. The Journal of Thoracic and Cardiovascular Surgery,
151(1), 25–27. https://doi.org/10.1016/j.jtcvs.2015.09.085
In, J. (2017). Introduction of a pilot study. Korean Journal of Anesthesiology, 70(6), 601.
https://doi.org/10.4097/kjae.2017.70.6.601
Jabre, L., Bannon, C., McCain, J. S. P., & Eglit, Y. (2021). Ten simple rules for choosing a
PhD supervisor. PLOS Computational Biology, 17(9), e1009330.
https://doi.org/10.1371/journal.pcbi.1009330
Jana, S. (2019). A history and development of peer-review process. Annals of Library and
Information Studies, 66, 152–162.
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable
research practices with incentives for truth telling. Psychological Science, 23(5), 524–
532. https://doi.org/10.1177/0956797611430953
Jones, S. R. (2003). An introduction to power and sample size estimation. Emergency
Medicine Journal, 20(5), 453–458. https://doi.org/10.1136/emj.20.5.453
Kandel, S., Heer, J., Plaisant, C., Kennedy, J., Van Ham, F., Riche, N. H., Weaver, C., Lee,
B., Brodbeck, D., & Buono, P. (2011). Research directions in data wrangling:
Visualizations and transformations for usable and credible data. Information
Visualization, 10(4), 271–288. https://doi.org/10.1177/1473871611415994
Kemal, O. (2020). Power analysis and sample size, when and why? Turkish Archives of
Otorhinolaryngology, 58(1), 3–4. https://doi.org/10.5152/tao.2020.0330
Kent, B. A., Holman, C., Amoako, E., Antonietti, A., Azam, J. M., Ballhausen, H., Bediako, Y.,
Belasen, A. M., Carneiro, C. F. D., Chen, Y.-C., Compeer, E. B., Connor, C. A. C.,
Crüwell, S., Debat, H., Dorris, E., Ebrahimi, H., Erlich, J. C., Fernández-Chiappe, F.,
Fischer, F., … Weissgerber, T. L. (2022). Recommendations for empowering early
career researchers to improve research culture and practice. PLOS Biology, 20(7),
e3001680. https://doi.org/10.1371/journal.pbio.3001680
Kismihók, G., McCashin, D., Mol, S. T., & Cahill, B. (2022). The well-being and mental health of doctoral candidates. European Journal of Education, 57(3), 410–423.
https://doi.org/10.1111/ejed.12519
Kornfield, S. (2019). Revise and Resubmit! But How? In J. Gallagher & D. DeVoss (Eds.),
Explanation Points (pp. 259–262). Utah State University Press.
https://doi.org/10.7330/9781607328834.c061
Kroon, C., Breuer, L., Jones, L., An, J., Akan, A., Mohamed Ali, E. A., Busch, F., Fislage, M.,
Ghosh, B., Hellrigel-Holderbaum, M., Kazezian, V., Koppold, A., Moreira Restrepo, C.
A., Riedel, N., Scherschinski, L., Urrutia Gonzalez, F. R., & Weissgerber, T. L. (2022).
Blind spots on western blots: Assessment of common problems in western blot figures
and methods reporting with recommendations to improve them. PLOS Biology, 20(9),
e3001783. https://doi.org/10.1371/journal.pbio.3001783
Laine, H. (2017). Afraid of scooping – case study on researcher strategies against fear of
scooping in the context of open science. Data Science Journal, 16, 29.
https://doi.org/10.5334/dsj-2017-029
Lakens, D. (2014). Performing high-powered studies efficiently with sequential analyses:
Sequential analyses. European Journal of Social Psychology, 44(7), 701–710.
https://doi.org/10.1002/ejsp.2023
Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2008). International Affective Picture System
(IAPS): Instruction manual and affective ratings, Technical Report A-8. Gainesville:
The Center for Research in Psychophysiology, University of Florida.
Laudel, G., & Gläser, J. (2008). From apprentice to colleague: The metamorphosis of early
career researchers. Higher Education, 55, 387–406.
Levecque, K., Anseel, F., De Beuckelaer, A., Van der Heyden, J., & Gisle, L. (2017). Work
organization and mental health problems in PhD students. Research Policy, 46(4),
868–879. https://doi.org/10.1016/j.respol.2017.02.008
Lewis, L. S. (1975). Scaling the ivory tower: Merit and its limits in academic careers. Johns
Hopkins University Press.
Maestre, F. T. (2019). Ten simple rules towards healthier research labs. PLOS
Computational Biology, 15(4), e1006914. https://doi.org/10.1371/journal.pcbi.1006914
Manghani, K. (2011). Quality assurance: Importance of systems and standard operating
procedures. Perspectives in Clinical Research, 2(1), 34. https://doi.org/10.4103/2229-
3485.76288
Marek, S., Tervo-Clemmens, B., Calabro, F. J., Montez, D. F., Kay, B. P., Hatoum, A. S., … Dosenbach, N. U. (2022). Reproducible brain-wide association studies require thousands of individuals. Nature, 603(7902), 654–660.
Marx, V. (2013). The big challenges of big data. Nature, 498(7453), 255–260.
https://doi.org/10.1038/498255a
Matthews, D. (2018). Virtual-reality applications give science a new dimension. Nature,
557(7703), 127–128. https://doi.org/10.1038/d41586-018-04997-2
Mensh, B., & Kording, K. (2017). Ten simple rules for structuring papers. PLOS
Computational Biology, 13(9), e1005619. https://doi.org/10.1371/journal.pcbi.1005619
Meyer, M. N. (2018). Practical tips for ethical data sharing. Advances in Methods and
Practices in Psychological Science, 1(1), 131–144.
https://doi.org/10.1177/2515245917747656
Michener, W. K. (2015). Ten simple rules for creating a good data management plan. PLOS
Computational Biology, 11(10), e1004525. https://doi.org/10.1371/journal.pcbi.1004525
Moroff, G., & Brandt, K. G. (1975). Yeast glutathione reductase. Studies of the kinetics and
stability of the enzyme as a function of pH and salt concentration. Biochimica et
Biophysica Acta – Enzymology, 410(1), 21–31. https://doi.org/10.1016/0005-
2744(75)90204-1
Nebe, S., Reutter, M., Baker, D. H., Bölte, J., Domes, G., Gamer, M., Gärtner, A., Gießing,
C., Gurr, C., Hilger, K., Jawinski, P., Kulke, L., Lischke, A., Markett, S., Meier, M.,
Merz, C. J., Popov, T., Puhlmann, L. M. C., Quintana, D. S., Schäfer, T., Schubert, A.-
L., Sperl, M. F. J., Vehlen, A., Lonsdorf, T. B., & Feld, G. B. (2023). Enhancing
precision in human neuroscience. eLife, 12, e85980.
https://doi.org/10.7554/eLife.85980
Noble, W. S. (2009). How does multiple testing correction work? Nature Biotechnology, 27, 1135–1137. https://doi.org/10.1038/nbt1209-1135
Noble, W. S. (2017). Ten simple rules for writing a response to reviewers. PLOS Computational Biology, 13(10), e1005730. https://doi.org/10.1371/journal.pcbi.1005730
Nosek, B. A., & Errington, T. M. (2017). Making sense of replications. eLife, 6, e23383.
https://doi.org/10.7554/eLife.23383
Pain, E. (2021). How to navigate authorship of scientific manuscripts. Science.
https://doi.org/10.1126/science.caredit.abj3459
Pain, E. (2022). How to prepare a scientific poster. Science. https://doi.org/10.1126/science.caredit.ada0293
Palminteri, S. (2023). How to prepare a rebuttal letter: Some advice from a scientist, reviewer
and editor [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/kyfus
Parsons, S., Azevedo, F., Elsherif, M. M., Guay, S., Shahim, O. N., Govaart, G. H., Norris,
E., O’Mahony, A., Parker, A. J., Todorovic, A., Pennington, C. R., Garcia-Pelegrin, E.,
Lazić, A., Robertson, O., Middleton, S. L., Valentini, B., McCuaig, J., Baker, B. J.,
Collins, E., … Aczel, B. (2022). A community-sourced glossary of open scholarship
terms. Nature Human Behaviour, 6(3), 312–318. https://doi.org/10.1038/s41562-021-
01269-4
Pautasso, M. (2013). Ten simple rules for writing a literature review. PLoS Computational Biology, 9(7), e1003149. https://doi.org/10.1371/journal.pcbi.1003149
Peikert, A., Van Lissa, C. J., & Brandmaier, A. M. (2021). Reproducible research in R: A
tutorial on how to do the same thing more than once. Psych, 3(4), 836–867.
https://doi.org/10.3390/psych3040053
Pek, J., & Van Zandt, T. (2020). Frequentist and Bayesian approaches to data analysis:
Evaluation and estimation. Psychology Learning & Teaching, 19(1), 21–35.
https://doi.org/10.1177/1475725719874542
Peloquin, D., DiMaio, M., Bierer, B., & Barnes, M. (2020). Disruptive and avoidable: GDPR
challenges to secondary research uses of data. European Journal of Human Genetics,
28(6), 697–705. https://doi.org/10.1038/s41431-020-0596-x
Powell, K. (2016). Hard work, little reward: Nature readers reveal working hours and
research challenges. Nature. https://doi.org/10.1038/nature.2016.20933
Riesenberg, D. (1990). The order of authorship: Who’s on first? JAMA: The Journal of the
American Medical Association, 264(14), 1857.
https://doi.org/10.1001/jama.1990.03450140079039
Ross-Hellauer, T., Tennant, J. P., Banelytė, V., Gorogh, E., Luzi, D., Kraker, P., Pisacane,
L., Ruggieri, R., Sifacaki, E., & Vignoli, M. (2020). Ten simple rules for innovative
dissemination of research. PLOS Computational Biology, 16(4), e1007704.
https://doi.org/10.1371/journal.pcbi.1007704
Sakaluk, J. K. (2016). Exploring small, confirming big: An alternative system to The New
Statistics for advancing cumulative and replicable psychological research. Journal of
Experimental Social Psychology, 66, 47–54. https://doi.org/10.1016/j.jesp.2015.09.013
Salinas, S., & Munch, S. B. (2015). Where should I send it? Optimizing the submission
decision process. PLOS ONE, 10(1), e0115451.
https://doi.org/10.1371/journal.pone.0115451
Scheel, A. M., Schijen, M. R. M. J., & Lakens, D. (2021). An excess of positive results:
Comparing the standard psychology literature with registered reports. Advances in
Methods and Practices in Psychological Science, 4(2), 251524592110074.
https://doi.org/10.1177/25152459211007467
Schnell, S. (2015). Ten simple rules for a computational biologist’s laboratory notebook.
PLOS Computational Biology, 11(9), e1004385.
https://doi.org/10.1371/journal.pcbi.1004385
Sharma, M., Sarin, A., Gupta, P., Sachdeva, S., & Desai, A. (2014). Journal impact factor: Its
use, significance and limitations. World Journal of Nuclear Medicine, 13(02), 146–146.
https://doi.org/10.4103/1450-1147.139151
Stratton, S. J. (2021). Population research: Convenience sampling strategies. Prehospital
and Disaster Medicine, 36(4), 373–374. https://doi.org/10.1017/S1049023X21000649
Suls, J., & Martin, R. (2009). The air we breathe: A critical look at practices and alternatives
in the peer-review process. Perspectives on Psychological Science, 4(1), 40–50.
https://doi.org/10.1111/j.1745-6924.2009.01105.x
Tay, A. (2021, January 22). Researchers are embracing visual tools to give fair credit for
work on papers. Nature Index. https://www.nature.com/nature-index/news/researchers-
embracing-visual-tools-contribution-matrix-give-fair-credit-authors-scientific-papers
Thabane, L., Ma, J., Chu, R., Cheng, J., Ismaila, A., Rios, L. P., Robson, R., Thabane, M.,
Giangregorio, L., & Goldsmith, C. H. (2010). A tutorial on pilot studies: The what, why
and how. BMC Medical Research Methodology, 10(1), 1. https://doi.org/10.1186/1471-
2288-10-1
Toth, A. A., Banks, G. C., Mellor, D., O’Boyle, E. H., Dickson, A., Davis, D. J., DeHaven, A.,
Bochantin, J., & Borns, J. (2021). Study preregistration: An evaluation of a method for
transparent reporting. Journal of Business and Psychology, 36(4), 553–571.
https://doi.org/10.1007/s10869-020-09695-3
Tregoning, J. S., & McDermott, J. E. (2020). Ten simple rules to becoming a principal
investigator. PLOS Computational Biology, 16(2), e1007448.
https://doi.org/10.1371/journal.pcbi.1007448
Tripathy, J. P., Bhatnagar, A., Shewade, H. D., Kumar, A. M. V., Zachariah, R., & Harries, A.
D. (2017). Ten tips to improve the visibility and dissemination of research for policy
makers and practitioners. Public Health Action, 7(1), 10–14.
https://doi.org/10.5588/pha.16.0090
Vicens, Q., & Bourne, P. E. (2007). Ten simple rules for a successful collaboration. PLoS
Computational Biology, 3(3), e44. https://doi.org/10.1371/journal.pcbi.0030044
Weissgerber, T. L. (2021). Learning from the past to develop data analysis curricula for the
future. PLOS Biology, 19(7), e3001343. https://doi.org/10.1371/journal.pbio.3001343
Wingen, T., Berkessel, J. B., & Dohle, S. (2022). Caution, preprint! Brief explanations allow
nonscientists to differentiate between preprints and peer-reviewed journal articles.
Advances in Methods and Practices in Psychological Science,
5(1). https://doi.org/10.1177/25152459211070559
Zook, M., Barocas, S., Boyd, D., Crawford, K., Keller, E., Gangadharan, S. P., Goodman, A.,
Hollander, R., Koenig, B. A., Metcalf, J., Narayanan, A., Nelson, A., & Pasquale, F.
(2017). Ten simple rules for responsible big data research. PLOS Computational
Biology, 13(3), e1005399. https://doi.org/10.1371/journal.pcbi.1005399
van Zyl, C. J. J. (2018). Frequentist and Bayesian inference: A conceptual primer. New Ideas
in Psychology, 51, 44–49. https://doi.org/10.1016/j.newideapsych.2018.06.004