*Corresponding author email: jan.brocher@biovoxxel.de www.biovoxxel.de
Seeing the Big Picture - Scientific Image Integrity under Inspection
Jan Brocher*
Affiliation: BioVoxxel, Ludwigshafen am Rhein,
Germany
Abstract
Scientific integrity is an important principle ensuring
accuracy, precision and reproducibility of research
and finally leading to incremental knowledge,
scientific and medical progress as well as technical
improvement. In natural and life sciences as well as
other research areas, images derived from diverse
imaging techniques make up a big part of data as
basis of analyses as well as experimental evidence
presented in literature. Therefore, it is of utmost
importance to ensure the integrity of scientific image
data and their proper presentation. This report
concentrates on scientific image integrity from
different points of view. It shows, in line with prior
studies, the potential prevalence of problematic
image manipulation in published life science
literature based on a small scale study.
It describes some potential factors leading to or
increasing chances of image manipulation.
Furthermore, it discusses existing challenges during
detection of image manipulation and suggests some
measures facilitating image scrutiny. Finally, it
presents potential solutions and measures necessary
on various institutional and educational levels to
convey and safeguard scientific image integrity
along the scientific process of knowledge formation.
Introduction
In recent years, research integrity, publication ethics
and related topics have received growing attention
by the scientific community, funding institutions and
the public. High-profile misconduct cases have unfortunately contributed to a certain consternation. Surely, they also nurture some mistrust in research by the public and affect the reputation of the scholars involved in scientific misconduct [1]. However, besides cases of clear fraud, it is of utmost importance to avoid generalizing and oversimplifying the situation by assuming that all kinds of research integrity issues are caused by deliberate dishonest behavior. In turn, it is necessary to clearly
differentiate between deliberate misconduct and
non-intentional data alteration whenever possible.
To point out one figure besides reputational issues: financial damage resulting from misconduct in NIH-funded research projects between 1992 and 2012 accounted for almost $400,000 per retracted article on average [2], a figure likely to generalize fairly well to research funded elsewhere. Thus, besides affected scientific credibility, there is also considerable economic damage related to retractions. Moreover, due to undetected and non-investigated misconduct, the magnitude might be bigger than indicated by studies focusing on retractions.
Another risk may exist for patients in clinical trials based on falsified research, and falsification can reduce the efficiency of new technical developments as well as the efficacy of medical ones [3], [4].
Therefore, it is of high societal interest to properly
address this issue. In addition, retraction processes often take years, a time during which the affected publications can still be used as a knowledge base for further studies and will keep appearing as references, as long as no quick, clear and visible expressions of concern are issued by the journals.
Misbehavior of individuals discredits the huge
majority of enthusiastic and honest scientists,
who follow scientific ethics and conduct their
research according to existing scientific codes of
conduct. Importantly, looking for potential reasons behind those issues will give directions for respective solutions. Experience shows that in most cases genuine errors cause the problem. In others, it
is simply a lack of knowledge at a certain stage along
the way of data collection, statistical evaluation and
presentation. One common ground for problems is
the topic of proper acquisition of scientific images,
post-acquisition image editing techniques,
application of suitable image analysis algorithms as
well as proper figure compilation and data
presentation.
Image editing vs. image manipulation in the life sciences
In life science literature, images and plotted data
make up a big part of the publicly presented
experimental evidence. Therefore, correct
representation of data is essential to draw valid
conclusions and build up unbiased scientific
evidence. This sounds easier than it might be in reality. On top of that, it is often desired to present data in an esthetic and polished manner. Even though this should not rank among the highest priorities in data publication, it receives a lot of attention. Software-based image processing or editing has become an often inevitable but likewise frequently unchallenged step along the way from data acquisition to figure preparation. Quite often, only a thin line, flanked by narrow gray margins, separates acceptable from inappropriate modification. To this end, one
needs to clearly differentiate between methods
which are suitable for image enhancement and
those which would represent manipulations of
data. For most of the image adjustments clear
limitations are already defined [5], [6]. However,
there are also a few methods (e.g. gamma
adjustments or color balance) for which even
expert opinions sometimes diverge. Besides
defined adjustment limits, some of the techniques
in use need to be seen and judged in the context
in which they are applied to an image. BioVoxxel is currently preparing a detailed guideline on the why's and how's of proper handling of scientific images and on considerations during publication figure creation, which should serve as a guardrail to avoid data alterations and give a deeper understanding of common issues.
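As a simple illustration of where such limits come from, the following minimal Python sketch (using NumPy; the image values and the gamma value are hypothetical) contrasts a linear brightness/contrast adjustment, which scales all pixels alike and is generally acceptable when applied to the whole image and reported, with a gamma adjustment, which remaps intensities non-linearly and therefore has to be disclosed and judged in context.

import numpy as np

# Hypothetical 8-bit grayscale image (values 0-255), here just random data.
img = np.random.default_rng(0).integers(0, 256, size=(256, 256)).astype(np.float64)

# Linear adjustment: one scale factor and offset for every pixel.
# Intensity differences between structures change proportionally.
linear = np.clip(1.2 * img + 10, 0, 255)

# Gamma adjustment: a non-linear remapping (gamma < 1 brightens dim pixels
# disproportionately), which can make weak signals appear stronger than they are.
gamma = 0.7
nonlinear = 255.0 * (img / 255.0) ** gamma

# The difference in behavior: a 50-unit step in the raw data stays a constant
# step after the linear adjustment, but not after the gamma adjustment.
for lo, hi in [(50, 100), (150, 200)]:
    lin_step = (1.2 * hi + 10) - (1.2 * lo + 10)
    gam_step = 255.0 * (hi / 255.0) ** gamma - 255.0 * (lo / 255.0) ** gamma
    print(f"raw {lo}->{hi}: linear step = {lin_step:.1f}, gamma step = {gam_step:.1f}")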
Understanding the magnitude of the problem
Over the last 10 years, there has been increasing evidence of scientific misconduct related to image manipulation. For instance, Bik and coworkers [7] performed a study with the goal of determining the percentage of published manuscripts presenting inappropriate image duplication. The analysis included the manual examination of images from two decades of publications. It reported an average prevalence of inappropriate duplications in 3.8% of the investigated literature (n=782/20,621). Since the analysis concentrated mainly on one type of image manipulation, the authors noted that the reported prevalence could actually be even higher if other types of manipulation were considered.
Similarly, in a more recent study, Acuna et al. [8]
performed an extensive semi-automatic screening to
detect duplications within and partially across
760,000 scientific publications. Their findings
suggested that 1.5% of all analyzed articles were
suspicious, while 0.9% were potentially fraudulent
and 0.59% were clear fraud. Overall, 3% of all
publications were considered problematic.
Although these investigations concentrated mainly on image duplication and do not cover the full range of potential manipulations, they provide a very good overview of this type of manipulation, which often might go unnoticed. These studies not only point out the magnitude of the problem, but also create awareness of the need for structured procedures to
identify and follow up on misconduct. In a pilot study, BioVoxxel tested the reliability of a battery of software tools and algorithms specifically developed for identifying image manipulations of all kinds. At the time of this writing, the study covered 159 (almost exclusively) open access publications, randomly selected from publication years 2009 to 2018 and chosen independently of journal, author name, nationality, or affiliation. Publications containing images of extremely poor resolution were excluded, since reliable manipulation detection would not have been possible in those images. The latter is a general, often ignored limitation of image manipulation assessments. Low quality figures (in terms of compression artifacts and low resolution) should themselves be handled as inappropriate, since the observer might have difficulties evaluating the drawn conclusions visually. Therefore, this issue needs to be addressed by all authors, reviewers and journals. Analyzed images
included micrographs, gel and blot data, photographs
and some data plots (such as FACS data). Drawings
and diagrams (such as bar or line graphs) were not subject to the testing. Analysis was performed on 1008 figures (consisting of multiple images/panels) in total. The implemented tools focused on the detection of image duplication, intentional manipulation and inappropriate image editing such as pixel saturation. The methods used cover over 40 different multicolor look-up tables (LUTs), applied in static as well as dynamic ways, 8 selected image enhancements, 15 feature filters and 2 novel automatic algorithms for the detection of copy-transform-move manipulations. Inspection was semi-automated with regard to image retrieval from online resources and the application of the designated methods to individual images. Interpretation was done by visual inspection of the algorithmic result images or flags. Detection of duplications was semi-automated and aimed at intra-image duplications as well as those between different figures of one publication. Cross-publication comparison was not performed in this study. It would make most sense when concentrating on specific authors or labs as described in Acuna et al. [8], which was not the focus of this study, and any wider cross-comparison would be highly computation- and time-intensive. The results of this
small-scale study are summarized in the diagram below. A blinded list of the investigated publications can be found in Table 1, attached below this manuscript. In total, over one fourth (>27%, n=44/159) of the scrutinized publications contained at least one case of inappropriate image editing or manipulation. Most strikingly, >14% (n=21/159) of all analyzed publications presented partially severe image manipulations. Those were classified into 3 groups by estimating their impact on data and conclusion. This fuzzy classification showed that ~3% (n=5/159) of all publications contained image alterations that most likely have no or little impact on the discussed scientific outcome. In contrast, 7% (n=11/159) contained manipulations which were likely to influence data interpretation and conclusion. Strikingly, over 4% (n=7/159) had an obvious impact on the interpretation and conclusion of at least the manipulated parts, rendering the publication rather a case of scientific pollution. Unlike previously conducted studies, this work investigates a broader range of manipulations. Thus, the presented results not only underline but in essence extend the findings of others [7], [8].
Figure 1: The tip of the iceberg. Prevalence of
inappropriate image editing and manipulation as detected
in the small-scale study elaborated in the text above. Of the scrutinized publications, >14% could be classified as (most
likely deliberate) image manipulation of different severity.
Thereof, over 4% would alter scientific outcome or
conclusions drawn. Another 7% very likely at least
influence outcome and conclusion, while 3% might have
little to no influence on data. Furthermore, in 13% of the
scrutinized publications, figures with inappropriate image
editing issues were identified.
The comparably higher numbers of inappropriate
editing, and specifically of image manipulation
indicated in this study, might have different origins.
Investigating a broader range of manipulation types
surely is one of the major factors for the increased
detection rate.
Moreover, the extensive use of diverse methods to
identify manipulations might further add up to the
chance of their detection.
A very stringent definition of which alterations count as inappropriate likewise increases the number of classified manipulations. It should also be mentioned that the small case number and the sorting for analyzable images might lead to a slight overestimation.
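To give a concrete idea of the kinds of checks referred to above, the following minimal Python sketch (assuming scikit-image and matplotlib; the file name is a placeholder, and this is not the actual BioVoxxel tool set) illustrates two of the simplest ideas: quantifying pixel saturation and rendering a panel with a multicolor look-up table so that subtle intensity steps or halos around pasted regions become visible to a human inspector.

import numpy as np
import matplotlib.pyplot as plt
from skimage import io

# Hypothetical figure panel; skimage returns floats in [0, 1] with as_gray=True.
img = io.imread("figure_panel.png", as_gray=True)
img8 = (img * 255).astype(np.uint8)

# 1) Pixel saturation: the fraction of pixels clipped at the intensity extremes.
#    Large fractions hint at overexposure or overly aggressive contrast settings.
frac_bright = np.mean(img8 == 255)
frac_dark = np.mean(img8 == 0)
print(f"clipped bright pixels: {frac_bright:.2%}, clipped dark pixels: {frac_dark:.2%}")

# 2) Multicolor look-up table: false-color display exaggerates small intensity
#    differences, which helps a human spot halos, seams or locally erased regions.
plt.imshow(img8, cmap="nipy_spectral")
plt.colorbar(label="intensity")
plt.title("Multicolor LUT rendering for visual inspection")
plt.show()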
Although those percentages of cases considered inappropriate editing, manipulation or misconduct might decline if tens of thousands of publications were screened, this provides an idea of the impact on published science and is in line with descriptions by others [9], [10] spanning at least two decades. If not corrected, each of those publications might affect future analyses in a potentially snowball-like effect. Leaving these in the wild to be fixed by the often proclaimed self-correcting nature of science is an illusion. Self-correction does not work properly if manipulated data are not efficiently removed from the system; the spanner stays in the works of science even after it is detected who threw it in.
Plea for practical training on scientific image
handling, processing and analysis basics
My experience while teaching appropriate image
editing techniques, as well as processing and
analysis of digital scientific images has given me the
opportunity to identify some of the issues often
occurring in practice. The majority of those are unintentional alterations caused by a lack of awareness regarding image acquisition principles, missing hands-on practice in sequential or batch image handling, or missing knowledge of the ethical limitations of image editing. Furthermore, a lack of understanding of the proper use of image editing tools and their effects as well as some inappropriate use of software when analyzing scientific images are common. Generally, my impression is that the immense number of software solutions for image editing ([11]–[13] and many more) as well as analysis ([14]–[18] and many more) on the market is
overwhelming to most researchers. The first obstacle
is finding an appropriate one for the individual
needs. Additionally, clearer guidance on the
technical principles, common to most of those tools,
is crucial. Creating awareness of the why’s and
how’s of proper image acquisition, editing and
analysis techniques, as well as an understanding of the impact image alterations exert on the conclusions drawn from the edited images, is indispensable.
To add complexity to the issue, specific image
analyses are often performed manually in cases
where automation would improve the outcome.
Automation mostly reduces user bias, variable error
rates as well as time investment. However, the
implementation of those automation techniques is frequently not self-explanatory, hampering their use. For researchers who feel overwhelmed by
algorithms and programming code, manual
processing and analysis seems often the best short-
term solution. It saves (learning) time in the first
place and speeds up production of publication-ready
data. But this view neglects the various
disadvantages manual processing and analyses
entails. Primarily, every task has to be repetitively
applied in the same manner image per image
sequentially. Something that practically is
impossible for a human. Moreover, the time saving
argument quickly vanishes. In some cases, analyses
involve the drawing hundreds of selections for
quantifications by hand. Those carry certain inherent
variability due to human bias and error. In other
situations, it might result in counting thousands of
objects in huge collections of images by eye or
mouse clicking. Everyone who has done image
analysis by these means knows, how tiring and
partially frustrating this can be. Moreover, it
represents a variable source of unintended but
unavoidable errors in the analysis due to fatigue and
manual inaccuracies. Finally, many of those tasks
remain irreproducible, simply because the analyzed
regions of interest (ROIs) used for the analysis may
not be systematically stored. All the points raised might influence results and conclusions drawn from these analyses, and manual approaches are definitely more time intensive in the long run.
Since automation procedures often involve specific knowledge or some programming skills, many people find it hard to get started with them or do not dare to explore them (reinforced by a lack of time). Others are simply not aware that some analysis tasks can be automated at all. Quite often, a considerable amount of time, e.g. during PhD theses, is therefore spent on manual tasks as described earlier. However, this is not limited to early-stage researchers per se. Senior scientists, too, might
profit from an introduction to image processing
principles and pitfalls to act themselves as
ambassadors and spread such information to provide
guidance to juniors they supervise.
More than once, I have looked into astonished faces during my courses when individual attendees realized that certain contrast adjustment methods, size changes or image file formats they had been told to use turned out to be inappropriate and influenced their data in an improper way. Those reactions demonstrate the importance of comprehensive practical training for researchers regarding digital image handling, editing and analysis.
Furthermore, the return on investment in those skill
trainings will always be positive. This is a win-win
situation for everybody who is part of the scientific publication life cycle. The relatively low investment of time and money in those hands-on trainings produces a multitude of actual benefits, e.g. the possibility of broader and scalable data acquisition, increased (up to 100%) image analysis reproducibility, reduced bias, more constant error rates, time optimization and, not least, the related reduction of costs (for personnel who can perform other tasks in the time otherwise wasted on manual counting).
This can easily be illustrated by an eye-opening little test. If students are told to count small cells in a micrograph for 30 seconds, they count on average around 50 cells in that time. Extrapolated, the number of cells they would be able to count in a hypothetical experiment looks as follows: counting 2000 cells per image in a total of 300 images (e.g. 6 experimental conditions with 50 images each) would take approximately 100 hours of pure manual counting time. What a waste of brilliant scientific brain capacity and time! Automated, such a task might take approximately 20-60 minutes depending on complexity, already including the complete data storage process. Learning some basic skills to automate such a task takes far less than 100 hours and is sustainably reusable.
This would massively improve the efficiency, reproducibility, reportability, transparency and often also the reliability of the results obtained using automated methods.
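As a sketch of what such automation can look like (assuming scikit-image; the folder pattern and the minimum object size are placeholders to be adapted to the actual data), a basic threshold-and-label pipeline counts objects in every image of an experiment with identical parameters:

import glob
from skimage import io, filters, measure, morphology

counts = {}
for path in sorted(glob.glob("experiment/*.tif")):   # hypothetical image folder
    img = io.imread(path, as_gray=True)

    # Global Otsu threshold separates objects (e.g. cells) from background.
    binary = img > filters.threshold_otsu(img)

    # Discard specks below a plausible minimum object area (in pixels).
    binary = morphology.remove_small_objects(binary, min_size=50)

    # Label connected components; the number of labels is the object count.
    labels = measure.label(binary)
    counts[path] = int(labels.max())

print(f"{len(counts)} images analyzed, {sum(counts.values())} objects counted in total")

Every image is processed with exactly the same settings, the counts can be regenerated at any time, and the labeled masks could additionally be stored as the regions of interest used for the analysis.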
To cover the growing demand for related hands-on courses, there is an increasing need for professional (bio)image analysis experts to teach the teachers and/or the students according to internationally accepted standards, and to improve those standards to keep up with new technologies.
The other side of the coin is deliberate manipulation, condensed in the trio of falsification, fabrication and plagiarism (FFP). However, courses on good scientific practice (GSP) will only partially tackle those intended manipulations. Some roots of these phenomena might be found in the current reward system in science, based on career or monetary incentives instead of intrinsic motivation [19], and surely also in the emanating pressure to publish fast and with high impact. This is beyond the scope of this essay but is nevertheless an important systemic factor worth being discussed and revised by the scientific community.
Long before Master's or early PhD students make their first active contribution to the acquisition and documentation of new scientific knowledge, the scientific community bears the responsibility to educate responsible scientists of integrity [20]. The mere existence of guidelines or the signing of a code of conduct is not enough. Each and every person and institution involved in the scientific process also has the responsibility to actively transmit and teach the practical issues listed in those documents.
For scientists who want to adhere to those principles, knowing that a code of conduct exists does not automatically establish the practical know-how and experience of how to do things right.
Bullet-point do's and don'ts are partially empty shells, often not resonating with (young) scientists. Why's and how's should rather be put in place to enable a sound understanding of proper procedures, limits and problematic techniques. The latter can only be gained through practical hands-on training.
Transmitting a background in research ethics and promoting our current perception of GSP are inevitably linked to all methodological teaching and
supervision. This includes setting good examples as
seniors to provide fertile ground for an ethical
understanding to grow. Practical guidance and open
discussions will foster such an environment. This
way, younger researchers can overcome insecurity
regarding the topics discussed above. Hence,
systematic improvements might co-evolve with each
new generation of scientists.
Automated manipulation detection - limitations
and responsibilities
Besides all good intentions and teaching, deliberate
data manipulation cannot completely be avoided.
Therefore, detecting those trying to sneak past
honesty is unfortunately another necessary pillar of
safeguarding scientific integrity.
Why not simply test every manuscript automatically for manipulations? Would that not solve all the integrity issues? No, I am sure it would not! Besides image duplications, which are the subject of several studies and initiatives [7], [8], [21], [22], there are all the other image editing issues and manipulations. Those also need a thorough assessment. Some of them are likely more difficult to spot, especially in the context of automation. One
reason is that, unlike in the case of duplications, there
is no reference image to which an algorithm could
compare specific areas, features or pixels. Certain
techniques help to highlight alterations in images
which makes them detectable by a human. Because of the often complex nature of an image, however, the highlighted region might be extremely similar to other image areas which do not contain any manipulation. Therefore, the process of separating
true hits from false positive ones is currently very
limited for computer systems.
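The contrast with duplication detection can be made concrete: for a duplication, the suspect region itself serves as the reference and can be matched against the rest of the image. The following sketch (assuming scikit-image; the coordinates of the suspect region and the correlation threshold are hypothetical) flags locations where a region reappears nearly pixel-perfectly; rotated, scaled or retouched copies, let alone manipulations without any duplicate, would require far more elaborate approaches.

import numpy as np
from skimage import io
from skimage.feature import match_template

img = io.imread("figure_panel.png", as_gray=True)   # hypothetical panel
template = img[120:160, 200:260]                     # suspect region (placeholder coordinates)

# Normalized cross-correlation of the suspect region against the whole image.
corr = match_template(img, template, pad_input=True)

# The region trivially matches its own position; any additional near-perfect
# peak elsewhere hints at a copy-pasted area worth manual inspection.
hits = np.argwhere(corr > 0.95)
print(f"{len(hits)} pixel positions with correlation > 0.95")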
Even when thinking about machine learning
algorithms, obstacles might still be high. Artificial
intelligence initiatives might surely drive this area
forward and have promising approaches. However, while such approaches are perfectly suitable for very specifically defined analysis tasks, high image (content) variability massively raises the bar for reliable detection. Therefore, it might be more difficult to
apply them generally in the context of image
manipulation detection. Variability due to the nature
of the sample, its preparation, image acquisition,
processing, presentation, artifacts and many more factors makes such a task increasingly difficult. Just imagine
the amount of completely different types of
experiments shown in published images. A computer
algorithm trained on detecting problems in Western blots will have a hard time detecting something in
a fluorescent micrograph. Additionally, there is a
multitude of manipulations to be detected.
Therefore, the amount of necessary training data
sets/images, with all possible manipulations and
their nuances, ranges from immensely high to
infinite just to achieve a somewhat accurate
prediction model of manipulations.
So, in most cases, the expert's eye is still necessary to interpret a computer-generated flag regarding any suspicious indicator of manipulation. Cross-checking images with a trained expert's eye, although not scalable, will remain necessary for some time, as will putting a detected manipulation into its experimental context by an experienced person. Only this allows, in many cases, a final conclusion on whether a detected image alteration is acceptable or needs to be flagged as inappropriate.
In this context, the foremost responsibility of
scientific journals, which show a high interest in the
automated fraud detection, is to ensure the presence
of image data which serve as an appropriate basis for
such investigations. There is the need for the
implementation of clearer guidelines for authors.
It is important to implement quality measures regarding submitted images. Those include, e.g., banning compression artifacts (such as those resulting from JPEG compression) or pixel saturation issues. Unfortunately, submission of figures in a high quality image file format does not exclude prior JPEG compression of individual figure panels. Strictly speaking, not a single lossy compression step must occur in the process of image acquisition, storage, editing and figure creation. This needs to be integrated into submission policies, communicated to authors and checked during submission. Accepting JPEG images for pre-submissions or the review process already jeopardizes those efforts. Detection of severe JPEG artifacts is rather easy due to their block-like appearance in the images. Those artifacts partially hinder automatic detection of manipulations or even render it impossible. Thus, their rejection would
have a two-fold benefit: it would facilitate the
implementation of more automatic manipulation
detection with higher success rates and it would help
to analyze and publish images of notably higher
output quality.
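One possible artifact indicator of this kind is sketched below (assuming scikit-image; the file name and any decision threshold are placeholders): since JPEG compresses 8x8 pixel blocks independently, intensity jumps that line up with every eighth row and column are a strong hint of prior lossy compression.

import numpy as np
from skimage import io

img = io.imread("submitted_panel.png", as_gray=True).astype(np.float64)

# Mean absolute intensity jump at every column and row boundary.
col_jumps = np.abs(np.diff(img, axis=1)).mean(axis=0)
row_jumps = np.abs(np.diff(img, axis=0)).mean(axis=1)

def block_boundary_ratio(jumps):
    # Boundary i lies between pixels i and i+1; JPEG block borders sit at 7, 15, 23, ...
    on_block = jumps[7::8].mean()
    off_block = np.delete(jumps, np.arange(7, len(jumps), 8)).mean()
    return on_block / off_block

score = (block_boundary_ratio(col_jumps) + block_boundary_ratio(row_jumps)) / 2
print(f"blockiness score: {score:.2f} (close to 1 for clean data, clearly above 1 suggests JPEG artifacts)")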
Last but not least, those who are part of an image screening process also need some image analysis know-how in addition to automated methods. This could be journal staff as well
as research integrity officers involved in misconduct
investigations. An additional solution is the
consultation of a (bio)image analyst or an image
analysis expert to outsource image integrity
screening. Currently, this is a scalability problem,
since too few experts offer related services.
Alternatively, hands-on training of journal staff, in
conjunction with an appropriately equipped software
package, would as well increase efficiency of
screening or investigation processes. Those
alternative options are surely a question of budget,
especially for many smaller journals and those new
to the publishing landscape. But in the light of
scientific responsibility and the harm caused by post-
publication retractions, achieving higher integrity
control pre-publishing is paramount. However, this
won’t come for free. In summary, the responsibility
for an improvement is of systemic nature and in the
hands of the whole scientific community, as
discussed later.
Additional remark on artificial intelligence
A further concern is the possibility that artificial intelligence, via generative adversarial networks (GANs), can be misused to create perfectly realistic Western blot images [23] or micrographs [24] (and basically any other image-based experiment output). Although these techniques have benign application scenarios, their use to artificially create “scientific data” is worrying. The detection of such artificial images, depending on the GAN method used, the training effort and the output “quality”, is practically impossible. The general
availability of those techniques can lead to more
serious breaches in scientific integrity than ever
before.
Pre-submission screening as good practice with
a big impact
In a recently started collaboration with an international research institution, pre-submission screening of figures is offered to its researchers as a voluntary service. Time consumption is minimal compared to the actual peer-review process and does not delay publication; however, it has a good chance of increasing publication quality. The service is well accepted, since in around 25-30% of the submitted manuscripts image quality issues, missing information and several accidental image duplications could be identified, communicated to the authors and eliminated on the spot before submission.
This is an encouraging success and should set a precedent for research institutions in general. In this way, a huge number of genuine errors as well as manipulations could be kept from being submitted or even published.
Not to mention that this would considerably reduce the need for errata and retractions. It might even reduce irreproducibility issues, a snowball effect in the citation of incorrect results and the creation of problematic hypotheses based on misleading information.
In turn, it would be an effective security measure
among others to ensure proper usage of public
funding resources.
It is also a badge of scientific integrity for the
research institutions implementing such a service,
underlining their effort in quality assurance and
fortifying their own reputation.
The latter is undoubtedly at stake when corrections, retractions or even misconduct are detected post-publication.
Safeguarding reputation in the first place is an important factor in this discussion. The fear of reputational damage is undoubtedly a major reason why misconduct investigations are mostly not in the interest of many of the parties involved. This hinders the investigation, clarification and correction of issues, which are finally swept under the rug and continue to pollute the scientific landscape. But at what cost for society?
Therefore, pre-submission screenings are the
perfect no-blame-no-shame alternative and a
sustainable effort serving every stakeholder in
the scientific community without any
disadvantages.
Handling detected misconduct post-publication
In case image manipulation or other questionable research practices are detected by third parties after publication, there should be a certain protocol to follow. The first address to contact should generally be the journal which published the article in question or the related research institution. Many journals are members of the Committee on Publication Ethics (COPE) [25], which issues clear guidelines that their members should follow in case of any information about potential misconduct [26]–[28].
However, the COPE guidelines suggest contacting authors repeatedly regarding suspected image manipulation in case they do not respond. This is most likely not a very efficient procedure, if not a dead-end street.
It would rather make sense to include an action point that suggests the consultation of an image analysis expert, who could independently shed light on such a case and directly provide evidence (for or against the accusation). In case a journal editorial board member does not respond to an expression of concern from the scientific community, a further possible address is the authors' institution. The latter might, in case of a necessary misconduct investigation, also be contacted by the journal. But in all possible constellations, the issue should first be discussed between those parties directly affected by it (e.g. the journal, the authors, the institution, and potentially an independent expert in case the suspicion of misconduct solidifies). Misconduct investigations are generally a matter for the research institution or a responsible research integrity officer. However, for them to be processed efficiently, a culture of holding scientific integrity in high regard needs to be established and nurtured.
If, and only if, none of the mentioned parties reacts, going public via the media could be a last-resort measure, provided there is sound proof of the allegations.
Unfortunately, whistleblowing sometimes takes the shortcut of directly posting allegations anonymously on online platforms. This certainly protects the whistleblower's anonymity. However, as with other problematic or villainizing postings on social media, those actions might cause harm as well. Even though some whistleblowers choose this route for the sake of correcting wrongs and have helped to uncover several cases of misconduct, a certain ethical behavior is required in such reporting as well. Public allegations of misconduct without substance, or lacking any kind of clear evidence, will in any case, even if withdrawn, exert a negative effect on the accused scientists. This might enduringly damage careers as well as private lives. Thus, public “witch hunting” is definitely not an act of integrity on its own, and in those cases this “pseudo whistleblowing” does no good, neither to science nor to research integrity.
While honest whistleblowing is an important part of uncovering research integrity breaches and needs protection, such as a certain anonymity, it has to adhere to a certain code of conduct as well. The US Office of Research Integrity has issued a position paper [29] on this topic, and there are differentiated thoughts on the protection of both parties, the whistleblower as well as the accused researcher. This is also discussed by Bouter and Hendrix [30], [31]. Also recommendable is a recent talk by Lex Bouter [32].
There are various existing examples in which multiple Western blot bands were wrongly and publicly indicated as being inappropriate copies. A thorough inspection would have shown clear differences between them and could have ruled out manipulation. This easily happens when taking only the overall shape of Western blot or gel bands into account, which is the easiest approach for non-experts. But an additional, detailed inspection of their surrounding background is indispensable to get a more reliable indication of potential manipulation. Gel and blot bands can naturally be very similar in their morphology and still represent different samples. Therefore, such posts cannot be seen as in-depth scrutiny. However, they might quickly lead to false-positive misconduct allegations. In consequence, journals flooded with endless vague allegations of this kind will at some point stop following them up (or even have to), simply because of a lack of manpower, time and substance of the allegations.
The opposite case is that allegations of misconduct providing sound evidence are ignored or not followed up. Situations might occur in which older cases are not investigated with the argument that the case lies too far in the past (>10 years), that the documentation period for the raw data has expired, or that the results have been reproduced independently elsewhere anyway.
But does that mean that data manipulation or other scientific misconduct in general just needs to remain undiscovered long enough to be considered not worth investigation and punishment? How should science then be able to correct itself if fabricated or falsified data are partially retained in the published literature? There should be a general obligation to follow up on those cases, since someone who got away with misconduct once might well try it again in the future (potentially with even more ease) if there are no consequences to bear. Prevention of misconduct also needs the setting of precedents: clear indications that cases will be investigated neutrally but ultimately also entail inconvenient consequences if misconduct can be proven.
Food for thoughts
It is evident that image manipulations occur and pose a problem in research, as they not only harm the reputation of science but can lead to wrong scientific conclusions that negatively impact scientific credibility and the progress of research, and might harm patients' health or technical advances in the long run. The good news is that it is never too late to address this issue in a systematic way. Joint stakeholder efforts will be needed to fill guidelines with hands-on experience content and to develop prevention programs (i.e. teaching, screening, etc.) to avoid misconduct. Nevertheless, it should also be
considered that all acquired knowledge and the
development of a personal attitude towards GSP can
individually be undermined by the pressure to
publish faster, more often and with higher “impact”
at any phase during a professional career.
Which actions could bring progress in this topic?
Who should start to make the first move? And who
needs to be involved in tackling this issue?
In the following, I try to present a holistic approach and some food for thought, and that is what I would like the reader to take it as. It is neither a claim nor an act of finger-pointing. Some things might be
already in place in some institutions and surely not
everybody might agree to all of it. Some points might
bring up doubts, hit limitations or valid criticism. If
this is the case, then this collection achieved part of
its goal. From here on, all of those ideas might be
improved, complemented or discarded if proven
ineffective. It should lead to an open scientific
discussion.
Publishers and Scientific Journals:
Metaphorically speaking: publication figures are the windows through which readers peek into individual research projects. If those windows are smeary, dirty, shattered and opaque, how should one get an impression of what is going on inside that scientific story building? Translated, that is a call to accept only high quality figures at any time in the submission/publication process. Yes, also
before review, especially there! Refrain from
accepting initial submissions as files which
include image compression and alterations
(such as images in PDF created from office
software, RTF, DOC, DOCX, PPT). Upload of
high quality figures will be necessary to
implement an efficient image integrity
assessment during peer-review. Even requesting
figures in loss-less image formats like TIFF or
EPS does not prevent figure compilation from
image data compromised by JPEG-compression
artifacts. Strictly speaking, there should not be a
single JPEG or any other lossy compression step
in the procedure of storing and publishing
scientific images. Therefore, only image files
fulfilling quality standards (e.g. tested during
upload) should be accepted. After accepting a
manuscript, authors will be asked to provide
high quality images in any case. Doing this
already at initial submission will increase the
need for transient storage capacity during peer
review, but will be one big inevitable step
towards securing image integrity in
publications. Furthermore, storage capacity is
cheaper than ever before. Finally, a certain
artifact indicator could serve as one criterion to
flag low quality images during upload.
Implement/extend pre-publication
screenings for image manipulation using
partially more scalable approaches to achieve
this. Integration of such a step before peer-
review might optimize the review process by
saving time of reviewers in cases where
integrity issues are found. Science cannot leave
this task to reviewers who often are non-experts
in image analysis and therefore are not prepared
for such a task. Besides the fact that this would
extend the review process for another
uncompensated task. Hence, this is a matter of
personnel as well as automatable approaches.
Consider to partner with an image analyst to
efficiently outsource such a task.
Investment in regular training of dedicated
editorial staff as well as related software
development to enable image integrity
assessment and plagiarism checks as routine.
If regular in-house integrity screening is difficult to achieve, consulting experts to assist with it or outsourcing image screening to experts might be an option.
Revise your own guidelines for authors and consider extending them or linking to more extensive available guidelines for scientists regarding image editing (such as the ones provided by ORI [33] or JBC [34]).
Foster a more positive perception and
transparency of corrections. We are humans
and make mistakes. It should be honored if authors have the courage to take responsibility for honest errors and to offer corrections for them.
Finally, the scientific community also might
develop a more positive perception of
corrections if presented in a positive way.
Attitude might make a big difference. To this
end there is an interesting talk available online
by A. Marušić [35].
Implementation of a versioning system to include updates and corrections of previously published manuscripts would facilitate corrections and the transparent tracing of
those. Such systems are standard in software
deployment and could be adopted for literature
as well.
There is no need to hide correction and
retraction notices, but rather highlight them as
a badge of rigorous high quality standards of
the journal and a sign of rigid quality measures
taken to secure scientific integrity. This must be seen as an added value; there is no reason why this should not actually raise trust in and the quality perception of a specific journal in the scientific community.
Research Institutions:
As for journals, it might help if research
institutions offer open discussions with
employed researchers, build acceptance for
genuine errors and the effort to correct them
proactively.
Promote GSP already to undergraduates.
Thus, transmitting the extent and importance of GSP as a collective understanding of the institution helps to foster responsible young researchers. In this context, merely making employees sign vaguely formulated codes of conduct will have very limited success.
Give ombudspersons and research integrity
officers more visibility (e.g. on the institutions
homepage) and communicate their role to staff
and students. This could easily be achieved
during annual information for new students and
staff to directly create awareness of the topic and
show the institutional interest and attitude in
securing research integrity. That this process might actually be under way is indicated by an almost 50% increase in inquiries at the German Ombuds Committee in 2019 compared to 2018 and earlier [36]. Additionally, 44% of those came from the life and natural sciences, research fields that often publish imaging-related data. Here, manipulations made up 6% of all inquiries.
Install possibilities for trainings and
workshops for staff and students with hands-
on experience regarding image data handling as
well as other research integrity topics.
Enable the establishment of data management
infrastructure to keep pace with growing
challenges, such as storage including backup,
sharable annotation and documentation
possibilities. As a general example for imaging-related data, the open source software OMERO [37]–[39] would be a good and cost-effective option for labs or faculties dealing with huge amounts of imaging data. This might improve data organization and sharing as well as the retrieval of raw data in cases where authors are asked to present original data for comparison with submitted publication figures during peer review.
Follow misconduct allegations rigorously also
by considering consultation of external
reviewers and experts (i.e. image analysts) to
support independence and neutrality of the
investigation process [40]. The argument of research institutions that involving external experts would compromise the confidentiality of such investigations is invalid. Non-disclosure agreements can be put in place and signed, as is standard in industry.
Adopt clear standards regarding misconduct
investigations (e.g. report templates, stepwise
procedure for the investigation and the auditors)
[40].
Implement protection of whistleblowers, as well as of personnel accused of misbehavior, against aggression, threats, etc. as far as possible, and communicate this! If someone
feels understood and protected this might
actively help in finding solutions in those cases
and facilitate cooperation.
Support trainings for research integrity
officers to acquire the necessary know-how and
efficiently act during investigations involving
scrutiny of image or statistical data or situations
of mediation.
Reporting (anonymously) on closed investigation cases does not need to be negative publicity. Presenting it as a showcase of the implementation of research integrity at the institute can serve as a positive figurehead. Finally, this would publicly indicate that measures are taken to prevent the misuse of funding by individuals. A study by Shuai et al. indicated
that retractions due to scientific misconduct,
besides impacting the authors’ reputation and
career, have little to no effect on the reputation
of the employing institution [1]. So, advantages
predominate.
Modify the internal reward system to not only promote by measures such as publication metrics, numbers of publications or numbers of graduating students. Also consider the integrity of the research executed as well as the quality of teaching and mentoring (e.g. rated by the students). This elevates the effort invested in those areas as well as the competitive position in comparison to scientists outside universities who can do 100% research without the obligation to teach.
Offer seminars or coaching for seniors related
to mentoring and supervision skills. This
know-how is often taken for granted when
someone becomes a professor. However, it
needs some understanding of psychological,
social and legal aspects as well as empathetic
ways of teaching, which are not automatically
gained along the way towards becoming a senior
scientist. Fostering seniors in acquiring those
skills might nurture open communication
between supervisor and students and thus can
reduce otherwise unnoticed problems (see
below).
Scientists:
Live GSP as a senior to promote it and to make junior researchers adopt the same behavior. This
will most likely be the most efficient way of
teaching it.
Consider taking some coaching regarding
mentoring skills. Potentially, it is already
offered at your institution.
Take supervision seriously. Juniors need a certain individual guidance with respect to methods and data analysis as well as data handling and publications. Finally, teaching those tasks as part of scientific work also benefits the supervisor's publication record and career.
This also might help to prevent potential
situations, in which data alterations would
otherwise go unnoticed and lead to severe
repercussions once detected after publication.
Accept that genuine mistakes may happen as
students learn new techniques and openly
discuss problems and issues with junior
colleagues and students. Openness is key to preventing the concealment of mistakes and errors out of fear of getting scolded. Increased pressure might promote the concealment of any issue occurring or might foster wrongdoing to achieve success.
Consider to get involved in building new
infrastructure at the institute regarding data
management or organization of training and
workshops. This needs time investment at first,
but might reduce some workload on the long run
and improve data integrity.
Encourage students to take part in courses leveraging software (and other needed) skills as well as in methods courses, to acquire new knowledge and proper analysis capability. And yes, they will need to be absent from the lab for two or three days to take such a course. The final advantage will be that analyses might afterwards run in a fraction of the time they would need for manual evaluation. So, that time is very well invested, rewarded and regained.
If acting as a whistleblower, commit to a certain confidentiality and report to the bodies concerned, such as journals or the institute, if they signal openness to whistleblowing. Try to act in a fair way that allows all parties to be heard or to defend themselves against the allegation if applicable.
Funding Organizations:
Fostering and funding nation-wide
institutionalization related to research
integrity might improve the general
development towards stabilizing scientific
integrity. There are initiatives, often based on proactive individuals [41], [42] or institutions [43], to achieve this. Nevertheless, the personnel
is too limited to make this scalable. One
potential idea might be the establishment of a
nation-wide and unified body to:
o Develop, improve and communicate GSP
guidelines and foster their effective and
practical teaching, including publicly
available teaching resources as a result of a common consensus on the relevant points.
o Build initiatives such as “Train the trainer” to leverage the teaching of proper data handling and analysis according to common guideline standards and hence solve the scalability issue by making use of a snowball effect.
o Create a contact point for individual
authors as well as research institutions if in
doubt about a certain issue related to data
integrity.
o Offer trainings to research integrity officers or ombudspersons at diverse institutions according to common standards.
o Offer independent and neutral expert consulting during misconduct investigations, or serve as a body executing those, similar to the US Office of Research Integrity.
This needs a certain budget, but simply consider the amount of funding money which is lost due to misconduct and retractions. As mentioned in the beginning, preventing those will easily balance some of the costs invested in such programs in the long run, among all the other advantages for science.
Closing remark:
The often proclaimed self-correcting nature of science is only functional if prevention through teaching, supervision, and living up to good practices as a role model is combined with scrutiny of data, rigorous investigation of misconduct and correction of the scientific literature. Thus,
all parties addressed above (scientific journals,
scientists, research institutions and funding
organizations) will need to work hand in hand to
achieve this goal. Continuing business as usual
might lead to an increasing loss of credibility and reputation of science in the public and to other potential socio-economic damage. It will waste ever more financial resources, coming to a large part from the public, and it will promote the perception among scientists that wrongdoing is rewarded with increasing career opportunities, promotion and tenure. This would be the wrong incentive for all parties involved in the scientific system. Science is part of society and should serve it as a force for good. Therefore, establishing positive incentives is key to tackling these challenges.
Where there is a will, there is a way!
Note
Publications investigated in the small-scale study described in the text are not listed by journal, author or title here, since some of the issues found have recently been reported to journals and are potentially still under internal investigation. This measure is taken to comply with the statements made here and to allow a fair investigation process for all parties until a final decision is taken.
Acknowledgement
Thanks go to Dr. T. Ramirez for scientific
input, discussion and proofreading.
References
[1] X. Shuai, J. Rollins, I. Moulinier, T. Custis, M.
Edmunds, and F. Schilder, “A Multidimensional
Investigation of the Effects of Publication
Retraction on Scholarly Impact,” J. Assoc. Inf.
Sci. Technol., vol. 68, no. 9, pp. 2225–2236,
Sep. 2017.
[2] A. M. Stern, A. Casadevall, R. G. Steen, and F.
C. Fang, “Financial costs and personal
consequences of research misconduct resulting
in retracted publications.,” Elife, vol. 3, p.
e02956, Aug. 2014.
[3] J. M. DuBois et al., “Understanding research
misconduct: a comparative analysis of 120 cases
of professional wrongdoing.,” Account. Res.,
vol. 20, no. 5–6, pp. 320–338, 2013.
[4] A. McCook, “Figures in cancer paper at root of
newly failed compound called into question
Retraction Watch,” 2018. [Online]. Available:
https://retractionwatch.com/2018/03/22/figures-
in-cancer-paper-at-root-of-newly-failed-
compound-called-into-question/. [Accessed: 05-
Sep-2019].
[5] D. W. Cromey, “Avoiding Twisted Pixels: Ethical Guidelines for the Appropriate Use and Manipulation of Scientific Digital Images,” Sci. Eng. Ethics, vol. 16, no. 4, pp. 639–667, 2010.
[6] Office of Research Integrity, “Guidelines for
Best Practices in Image Processing.” [Online].
Available:
https://ori.hhs.gov/education/products/RIandIma
ges/guidelines/list.html. [Accessed: 10-Sep-
2019].
[7] E. M. Bik, A. Casadevall, and F. C. Fang, “The
Prevalence of Inappropriate Image Duplication
in Biomedical Research Publications.,” MBio,
vol. 7, no. 3, pp. e00809-16, Jul. 2016.
[8] D. E. Acuna, P. S. Brookes, and K. P. Kording,
“Bioscience-scale automated detection of figure
element reuse,” bioRxiv, p. 269415, Feb. 2018.
[9] E. M. Bucci, “Automatic detection of image
manipulations in the biomedical literature,” Cell
Death Dis., vol. 9, no. 3, p. 400, Mar. 2018.
[10] M. Rossner and K. M. Yamada, “What’s in a
picture? The temptation of image
manipulation.,” J. Cell Biol., vol. 166, no. 1, pp.
11–15, Jul. 2004.
[11] “Adobe Photoshop.” [Online]. Available:
https://www.adobe.com/products/photoshop.htm
l?promoid=PC1PQQ5T&mv=other#. [Accessed:
10-Sep-2019].
[12] “GIMP - GNU Image Manipulation Program.”
[Online]. Available: https://www.gimp.org/.
[Accessed: 10-Sep-2019].
[13] “ImageMagick - Convert, Edit, or Compose
Bitmap Images.” [Online]. Available:
https://imagemagick.org/index.php. [Accessed:
10-Sep-2019].
[14] J. Schindelin et al., “Fiji: An open-source
platform for biological-image analysis,” Nature
Methods, vol. 9, no. 7, pp. 676–682, 2012.
[15] C. Schneider, W. Rasband, and K. Eliceiri,
“ImageJ,” Nat. Methods, vol. 9, no. 7, pp. 671–675, 2012.
[16] K. W. Eliceiri et al., “Biological imaging
software tools,” Nat. Methods, vol. 9, no. 7, pp.
697–710, Jul. 2012.
[17] “Image.sc Forum.” [Online]. Available:
https://forum.image.sc/. [Accessed: 10-Sep-
2019].
[18] “Microscopy Image Analysis Software - Imaris -
Oxford Instruments.” [Online]. Available:
https://imaris.oxinst.com/. [Accessed: 10-Sep-
2019].
[19] D. H. Pink, Drive: The Surprising Truth About
What Motivates Us. Riverhead Books, 2011.
[20] Scott Howell, “A Transformative Approach to
Ethics Education,” in Book of Abstracts of the
PRINTEGER European Conference on Research
Integrity, 2018, p. 14.
[21] L. Koppers, H. Wormer, and K. Ickstadt,
“Towards a Systematic Screening Tool for
Quality Assurance and Semiautomatic Fraud
Detection for Images in the Life Sciences.,” Sci.
Eng. Ethics, vol. 23, no. 4, pp. 1113–1128, 2017.
[22] M. Cicconet, H. Elliott, D. L. Richmond, D.
Wainstock, and M. Walsh, “Image Forensics:
Detecting duplication of scientific images with
manipulation-invariant image similarity,” Feb.
2018.
[23] C. Qi, J. Zhang, and P. Luo, “Emerging Concern
of Scientific Fraud: Deep Learning and Image
Manipulation,” bioRxiv, p. 2020.11.24.395319,
Nov. 2020.
[24] D. P. Sullivan and E. Lundberg, “Seeing More:
A Future of Augmented Microscopy,” Cell, vol.
173, no. 3, Cell Press, pp. 546–548, 19-Apr-
2018.
[25] COPE, “Committee on Publication Ethics:
COPE | Promoting integrity in research
publication,” 2017. [Online]. Available:
https://publicationethics.org/.
[Accessed: 01-Mar-2018].
[26] COPE, “How to respond to whistle blowers
when concerns are raised directly,” 2015.
[Online]. Available:
https://publicationethics.org/files/RespondingTo
Whistleblowers_ConcernsRaisedDirectly.pdf.
[Accessed: 01-Mar-2018].
[27] COPE, “Responding to Whistleblowers -
concerns raised via social media.” [Online].
Available:
https://publicationethics.org/files/RespondingTo
Whistleblowers_ConcernsRaisedViaSocialMedi
a.pdf. [Accessed: 01-Mar-2018].
[28] COPE, “Flowcharts | Committee on Publication
Ethics: COPE.” [Online]. Available:
https://publicationethics.org/files/Full set of
English flowcharts_9Nov2016.pdf. [Accessed:
01-Mar-2018].
[29] Office of Research Integrity, “The
Whistleblower’s Conditional Privilege To
Report Allegations of Scientific Misconduct.”
[30] L. M. Bouter and S. Hendrix, “Both
Whistleblowers and the Scientists They Accuse
Are Vulnerable and Deserve Protection,”
Account. Res., vol. 24, no. 6, pp. 359–366, Aug.
2017.
[31] A. McCook, L. Bouter, and S. Hendrix, “It’s not
just whistleblowers who deserve protection
during misconduct investigations, say
researchers,” May 2017. [Online].
Available:
https://retractionwatch.com/2017/05/29/not-just-
whistleblowers-deserve-protection-misconduct-
investigations-say-researchers/. [Accessed: 01-
Mar-2018].
[32] L. Bouter, “Videos vom Ombudssymposium
2020 | Ombudsman für die Wissenschaft,” 2020.
[Online]. Available: https://ombudsman-fuer-
die-wissenschaft.de/5430/videos-vom-
ombudssymposium-
2020/#5_What_can_research_institutes_do_to_f
oster_research_integrity_Prof_Dr_Lex_M_Bout
er. [Accessed: 09-Oct-2020].
[33] “Guidelines for Best Practices in Image
Processing,” 2008. [Online]. Available:
https://ori.hhs.gov/education/products/RIandIma
ges/guidelines/list.html. [Accessed: 13-Feb-
2018].
[34] “Collecting and presenting data | JBC:
Resources.” [Online]. Available:
http://jbcresources.asbmb.org/collecting-and-
presenting-data. [Accessed: 21-Aug-2019].
[35] A. Marusic, “Videos vom Ombudssymposium
2020 | Ombudsman für die Wissenschaft,” 2020.
[Online]. Available: https://ombudsman-fuer-
die-wissenschaft.de/5430/videos-vom-
ombudssymposium-
2020/#6_New_strategies_to_promote_research_i
ntegrity_in_the_field_of_biomedicine_An_edito
rs_perspective_Prof_Ana_Marusic_PhD.
[Accessed: 13-Oct-2020].
[36] Ombudsman für die Wissenschaft,
“Jahresbericht 2019 an den Senat der DFG und
die Öffentlichkeit,” pp. 1–34, 2020.
[37] C. Allan et al., “OMERO: flexible, model-driven
data management for experimental biology,”
Nat. Methods, vol. 9, no. 3, pp. 245253, Mar.
2012.
[38] J.-M. Burel et al., “Publishing and sharing multi-
dimensional image data with OMERO,” Mamm.
Genome, vol. 26, no. 9–10, pp. 441–447, Oct.
2015.
[39] “Managing your microscopy big image data:
Challenges, strategies, solutions | Science |
AAAS.” [Online]. Available:
http://www.sciencemag.org/custom-
publishing/webinars/managing-your-
microscopy-big-image-data-challenges-
strategies-solutions. [Accessed: 01-Mar-2018].
[40] C. K. Gunsalus, A. R. Marcus, and I. Oransky,
“Institutional Research Misconduct Reports
Need More Credibility,” JAMA, Mar. 2018.
[41] M. Gommel, H. Nolte, and G. Sponholz,
“Good Scientific Practice.” [Online]. Available:
http://www.scientificintegrity.de/en-
integrity.html. [Accessed: 06-Sep-2019].
[42] J. Brocher, “BioVoxxel – Scientific Image
Processing and Analysis.” [Online]. Available:
http://www.biovoxxel.de/. [Accessed: 06-Sep-
2019].
[43] “HEADT Centre - Research Integrity.” [Online].
Available: https://headt.eu/Research-Integrity.
[Accessed: 06-Sep-2019].
Table 1: Blinded results from the image integrity assessment study presented in this publication.
Blinded random ID   Publication Year   No. of analyzed figures   No. of issues found   Severity   Classification
126528 2009 7 3 1 4
847038 2014 5 1 1 3
340900 2015 7 2 2 3
830352 2015 4 0 nothing detected 0
872298 2015 3 0 nothing detected 0
813115 2015 5 0 nothing detected 0
708789 2015 7 0 nothing detected 0
103194 2015 5 0 nothing detected 0
647430 2014 7 0 nothing detected 0
125902 2014 6 0 nothing detected 0
871864 2014 5 2 2 2
791677 2015 7 0 nothing detected 0
339230 2015 5 0 nothing detected 0
707987 2014 7 0 nothing detected 0
837139 2014 7 0 nothing detected 0
900981 2014 7 0 nothing detected 0
231916 2014 14 0 nothing detected 0
121188 2015 4 0 nothing detected 0
974810 2015 6 0 nothing detected 0
827853 2013 7 1 1 3
367883 2015 4 0 nothing detected 0
297862 2015 3 0 nothing detected 0
452903 2014 5 0 nothing detected 0
236631 2012 4 0 nothing detected 1
115642 2013 10 0 nothing detected 0
543894 2013 6 0 1 1
125376 2013 3 0 nothing detected 0
748218 2010 4 3 2 4
346383 2010 7 1 1 3
204414 2000 8 0 nothing detected 0
752451 2016 4 1 1 2
405032 2016 4 0 nothing detected 0
421142 2016 3 0 nothing detected 0
561443 2016 4 3 3 3
534491 2016 4 0 nothing detected 1
472592 2015 4 0 nothing detected 1
285911 2015 15 0 nothing detected 0
528556 2016 7 0 nothing detected 0
572188 2016 11 0 nothing detected 0
549139 2016 10 0 nothing detected 1
143676 2016 4 0 nothing detected 0
765490 2016 4 0 nothing detected 1
509604 2016 6 0 nothing detected 0
374401 2016 4 0 nothing detected 1
706948 2016 5 0 nothing detected 0
162542 2016 10 0 nothing detected 0
872783 2016 6 0 nothing detected 0
828452 2016 14 1 1 3
656663 2016 7 0 nothing detected 0
191242 2016 8 0 nothing detected 0
504661 2016 14 0 nothing detected 0
762579 2016 10 0 nothing detected 0
985239 2016 7 0 nothing detected 0
158079 2016 6 0 nothing detected 0
386516 2015 9 0 nothing detected 0
646713 2016 6 0 nothing detected 1
557215 2016 8 2 4 4
448335 2010 5 0 nothing detected 1
691247 2016 6 0 nothing detected 0
551459 2016 5 0 nothing detected 1
344767 2016 6 0 nothing detected 1
854128 2016 7 0 nothing detected 0
770250 2016 5 0 nothing detected 1
235307 2016 8 0 nothing detected 0
848346 2016 1 0 nothing detected 0
829347 2015 7 2 2 4
266392 2015 2 0 nothing detected 0
289163 2015 6 0 nothing detected 0
697615 2015 6 0 nothing detected 0
148882 2015 10 0 nothing detected 0
566023 2015 10 0 nothing detected 0
902334 2015 6 0 nothing detected 0
603887 2016 5 0 nothing detected 0
714537 2016 10 0 nothing detected 0
485584 2016 9 0 nothing detected 0
205712 2016 6 1 1 1
640138 2015 4 0 nothing detected 0
429318 2016 8 1 1 2
716875 2016 7 0 nothing detected 0
236909 2016 8 0 nothing detected 0
727663 2016 7 0 nothing detected 0
700190 2014 6 0 nothing detected 1
583031 2016 8 0 nothing detected 1
131881 2016 6 1 1 3
297720 2016 8 1 3 1
504660 2016 8 0 nothing detected 0
890879 2015 8 0 nothing detected 0
113997 2015 12 0 nothing detected 0
569163 2017 11 0 nothing detected 0
115875 2017 4 0 nothing detected 0
235979 2015 6 0 nothing detected 0
878759 2015 6 0 nothing detected 0
556098 2017 4 0 nothing detected 0
966316 2015 10 0 nothing detected 0
239776 2015 4 0 nothing detected 0
430369 2016 5 1 1 4
764819 2014 6 0 nothing detected 1
253942 2010 5 1 1 3
302992 2011 4 1 1 3
471774 2015 5 0 nothing detected 0
522867 2016 5 2 2 4
138060 2016 5 0 nothing detected 0
851782 2016 8 0 nothing detected 0
115172 2016 5 1 2 1
814444 2017 21 1 1 3
288878 2017 5 0 nothing detected 0
254660 2017 36 0 nothing detected 0
345900 2017 4 0 nothing detected 0
465608 2017 3 0 nothing detected 0
280797 2016 4 0 nothing detected 0
401951 2015 7 0 nothing detected 0
599583 2015 8 0 nothing detected 0
632743 2015 3 1 1 3
552433 2014 1 0 nothing detected 0
563268 2013 2 1 2 2
113546 2016 1 0 nothing detected 0
182696 2016 3 0 nothing detected 0
904427 2016 2 0 nothing detected 0
905836 2016 8 0 nothing detected 1
274832 2017 5 0 nothing detected 0
990611 2017 8 0 nothing detected 0
579406 2016 7 0 nothing detected 0
173653 2016 6 1 2 1
348741 2017 7 0 nothing detected 0
927505 2016 7 0 nothing detected 0
614041 2014 3 0 nothing detected 0
369925 2014 12 0 nothing detected 0
593316 2014 1 0 nothing detected 0
122160 2013 4 0 nothing detected 1
841983 2017 6 0 nothing detected 0
959587 2017 7 0 nothing detected 0
380832 2017 8 0 nothing detected 0
241215 2016 11 0 nothing detected 0
687760 2017 6 0 nothing detected 0
376649 2006 3 4 3 4
328406 2015 7 0 nothing detected 0
476978 2015 5 0 nothing detected 0
174449 2016 7 0 nothing detected 0
249131 2016 7 0 nothing detected 0
285078 2017 4 0 nothing detected 0
724355 2017 6 0 nothing detected 0
127590 2017 7 0 nothing detected 0
624182 2017 6 0 nothing detected 0
977283 2018 3 0 nothing detected 0
292707 2018 4 0 nothing detected 0
389414 2018 6 0 nothing detected 0
776740 2018 5 0 nothing detected 0
994554 2018 5 0 nothing detected 0
297882 2018 2 1 1 2
934398 2018 9 0 nothing detected 0
762793 2017 2 0 nothing detected 0
704551 2018 7 0 nothing detected 0
145926 2018 5 0 nothing detected 0
740301 2018 1 0 nothing detected 0
721367 2018 4 0 nothing detected 0
280673 2018 4 0 nothing detected 0
668431 2018 3 0 nothing detected 0
754821 2018 7 0 nothing detected 0
367317 2018 7 0 nothing detected 0
Sum: 1008 figures analyzed, 41 issues found, 45 figures with issues

                                            #        %
total no. of publications analyzed        159   100.00
no issues identified                      115    72.33
image quality and editing issue            21    13.21
low/no impact on data/conclusion            5     3.14
potential effect on data/conclusion        11     6.92
definitive impact on data/conclusion        7     4.40
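As a cross-check of the summary above, the following minimal Python sketch recomputes the percentage breakdown from the classification counts reported in Table 1. The counts and labels are taken directly from the summary; the script itself is purely illustrative and not part of the original study.

```python
# Minimal sketch: recompute the percentage breakdown of the analyzed
# publications from the classification counts summarized above.
counts = {
    "no issues identified": 115,
    "image quality and editing issue": 21,
    "low/no impact on data/conclusion": 5,
    "potential effect on data/conclusion": 11,
    "definitive impact on data/conclusion": 7,
}

total = sum(counts.values())  # 159 publications in total
print(f"{'total no. of publications analyzed':40s} {total:5d} {100.0:7.2f} %")
for label, n in counts.items():
    print(f"{label:40s} {n:5d} {100 * n / total:7.2f} %")
```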