Article

Breaking the Bottleneck in the Protein Biomarker Pipeline

Department of Obstetrics, Gynecology, and Reproductive Sciences, University of California San Francisco, San Francisco, CA, USA.
Clinical Chemistry (Impact Factor: 7.91). 12/2011; 58(2):321–323. DOI: 10.1373/clinchem.2011.175034
Source: PubMed

H. Ewa Witkowska,1,2 Steven C. Hall,1,2 and Susan J. Fisher1,2*
The process for discovery and development of biomarkers of solid tumors presents exceptional challenges. The major technical obstacle is the disconnection between the site of their generation (tissue, proximal fluid) and the source of detection (body fluids, predominantly serum and plasma). This single factor limits, in practical terms, the potential pool of biomarkers to those that are secreted, shed, or leaked from the cell surface. On reaching the circulation, a biomarker undergoes "dilution" into a mixture of thousands of proteins that are present at concentrations spanning at least 10 orders of magnitude. Furthermore, the integrity of a circulating biomarker may be compromised by proteolytic degradation or distortion of its posttranslational modifications. Other levels of complexity involve the inherent variability of biological systems (intraindividual and across a population) and ambiguities in defining disease phenotypes. Finally, fundamental and technological difficulties involved in biomarker studies are exacerbated by preanalytical variables associated with sample collection, handling, and storage (1, 2).
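To make the dynamic-range problem concrete, the short sketch below contrasts a high-abundance plasma protein with a trace-level one. The albumin and cytokine concentrations are typical literature values chosen only for illustration; they are assumptions, not figures from this article.

    import math

    # Illustrative values only (not from the article): serum albumin
    # circulates near 40 mg/mL, whereas many cytokines sit near 1 pg/mL.
    albumin_g_per_ml = 40e-3    # ~40 mg/mL
    cytokine_g_per_ml = 1e-12   # ~1 pg/mL

    orders = math.log10(albumin_g_per_ml / cytokine_g_per_ml)
    print(f"Plasma dynamic range: ~{orders:.0f} orders of magnitude")
    # -> Plasma dynamic range: ~11 orders of magnitude

Even this simple comparison lands above the 10 orders of magnitude cited in the text, before counting proteins rarer than typical cytokines.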
An ideal cancer biomarker would be detectable in body fluids at an early stage of disease in a highly specific and selective fashion and would be inexpensive to measure. Development of such a biomarker is not a trivial endeavor. To promote efficiency and rigor in cancer biomarker research, Pepe et al. (3) introduced guidelines for a generally appropriate process that could be applied to many other diseases and classes of biomarker (e.g., a protein or a metabolite). These guidelines propose specific aims and measures of success for each of the 5 phases of a biomarker discovery pipeline, in the context of progress being made in the field and relevant published studies. It is now evident that the majority of proteomics and genomics studies published to date do not progress far beyond the discovery stage (phase 1), because the development of the clinical immunoassays (e.g., ELISAs) required to move a multitude of putative biomarkers to the verification stage (phase 2) is prohibitively slow and expensive. Innovative approaches are necessary to accelerate biomarker credentialing and improve healthcare for cancer patients. To this end, the recent work of Whiteaker et al. (4) focuses on the process of proteomics-based targeted verification of breast cancer biomarkers in plasma, currently a major bottleneck in the pipeline. The authors systematically address the challenges and provide a resounding "yes" to the question, "Can large-scale verification of protein biomarkers in plasma be done with current proteomics technologies?" Their results clearly indicate that the protein biomarker discovery field is maturing.
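For readers who want the pipeline in front of them, here is a minimal sketch of the 5 phases. The phase labels follow Pepe et al. (3) as commonly cited; the candidate counts are hypothetical placeholders meant only to visualize where the phase 1 to phase 2 bottleneck described above sits.

    # Phase labels follow Pepe et al. (3) as commonly cited; the candidate
    # counts are hypothetical placeholders illustrating the phase 1 -> 2
    # bottleneck discussed in the text, not data from any study.
    PIPELINE = [
        ("Phase 1: Preclinical exploratory (discovery)", 2000),
        ("Phase 2: Clinical assay development and validation (verification)", 40),
        ("Phase 3: Retrospective longitudinal study", 5),
        ("Phase 4: Prospective screening study", 1),
        ("Phase 5: Cancer control (impact on disease burden)", 1),
    ]

    for name, n_candidates in PIPELINE:
        print(f"{name:<68s} ~{n_candidates:>4d} candidates")

The steep drop between the first two rows is the point of the sketch: discovery produces candidates far faster than immunoassay development can verify them.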
Whiteaker et al. successfully deployed an efficient and standardized mass spectrometry-based pipeline of biomarker discovery and verification for credentialing putative biomarkers. This work used a well-characterized doxycycline-inducible, bitransgenic MMTV-rtTA/TetO-NeuNT (Her2/Neu) mouse model of breast cancer. The differential proteomics analyses also included samples from healthy transgenic TetO-Neu control mice. To avoid bias, the investigators paired experimental and control animals at weaning and matched them with respect to age, sex, litter, cage, and treatment protocols. Thus, biological and environmental variation was minimized, thereby decoupling sample-related interferences (e.g., interindividual differences) from noise inherent to the methods. The major focus of this work was verification, which built on the discovery data generated by the authors and other investigators with this animal model. In their study, the investigators reduced the initial pool of 1908 putative biomarkers that emerged from the discovery studies to a set of 36 proteins that were verified as increased in the plasma of tumor-bearing mice. Although the authors were aware that the complexity of their model is much lower than that of human cancer, they conducted a study that benchmarked the currently available technologies in a tightly controlled in vivo model. In doing so, they have provided a much needed reality check regarding the amount of effort required to generate high-quality data sets for identifying valuable biomarker targets worthy of further development.
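To put that triage in perspective, a one-line calculation using the counts quoted above (1908 candidates in, 36 verified out) shows the attrition rate that verification imposes.

    # Counts from the study as summarized above: 1908 discovery-stage
    # candidates were winnowed to 36 proteins verified in plasma.
    discovered, verified = 1908, 36

    survival = verified / discovered
    print(f"{survival:.1%} of candidates survived verification; "
          f"{discovered - verified} were triaged out")
    # -> 1.9% of candidates survived verification; 1872 were triaged out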
The approach described by Whiteaker et al. encompassed 3 essential steps in the biomarker pipeline: discovery, triage, and verification. The discovery step used data from 13 independent microarray experiments and proteomics experiments, with the latter performed …
1 Department of Obstetrics, Gynecology, and Reproductive Sciences, and 2 Sandler-Moore Mass Spectrometry Core Facility, University of California San Francisco, San Francisco, CA.
* Address correspondence to this author at: 521 Parnassus Ave., Box 0665, San Francisco, CA 94143-0665. E-mail sfisher@cgl.ucsf.edu.
Received September 30, 2011; accepted October 18, 2011.
Previously published online at DOI: 10.1373/clinchem.2011.175034
