COMMENTARY

Reporting Recommendations for Tumor Marker Prognostic Studies (REMARK)

Lisa M. McShane, Douglas G. Altman, Willi Sauerbrei, Sheila E. Taube, Massimo Gion, Gary M. Clark, for the Statistics Subcommittee of the NCI-EORTC Working Group on Cancer Diagnostics

Affiliations of authors: Biometric Research Branch (LMM) and Cancer Diagnosis Program (SET), National Cancer Institute, Bethesda, MD; Medical Statistics Group, Cancer Research UK, Center for Statistics in Medicine, Wolfson College, Oxford, UK (DGA); Institut fuer Medizinische Biometrie und Medizinische Informatik, Universitaetsklinikum Freiburg, Germany (WS); Centro Regionale Indicatori Biochimici di Tumore, Ospedale Civile, Venezia, Italy (MG); OSI Pharmaceuticals, Inc., Boulder, CO (GMC).

Correspondence to: Lisa M. McShane, PhD, National Cancer Institute, Biometric Research Branch, DCTD, Rm. 8126, Executive Plaza North, MSC 7434, 6130 Executive Blvd., Bethesda, MD 20892-7434 (e-mail: lm5h@nih.gov).

See "Notes" following "References."

DOI: 10.1093/jnci/dji237

© The Author 2005. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oupjournals.org.
Despite years of research and hundreds of reports on tumor
markers in oncology, the number of markers that have
emerged as clinically useful is pitifully small. Often, initially
reported studies of a marker show great promise, but subse-
quent studies on the same or related markers yield inconsis-
tent conclusions or stand in direct contradiction to the
promising results. It is imperative that we attempt to under-
stand the reasons that multiple studies of the same marker
lead to differing conclusions. A variety of methodologic prob-
lems have been cited to explain these discrepancies. Unfortu-
nately, many tumor marker studies have not been reported in
a rigorous fashion, and published articles often lack sufficient
information to allow adequate assessment of the quality of the
study or the generalizability of study results. The development
of guidelines for the reporting of tumor marker studies was
a major recommendation of the National Cancer Institute
European Organisation for Research and Treatment of Can-
cer (NCI-EORTC) First International Meeting on Cancer
Diagnostics in 2000. As for the successful CONSORT initia-
tive for randomized trials and for the STARD statement for
diagnostic studies, we suggest guidelines to provide relevant
information about the study design, preplanned hypotheses,
patient and specimen characteristics, assay methods, and
statistical analysis methods. In addition, the guidelines sug-
gest helpful presentations of data and important elements to
include in discussions. The goal of these guidelines is to encour-
age transparent and complete reporting so that the relevant
information will be available to others to help them to judge
the usefulness of the data and understand the context in which
the conclusions apply. [J Natl Cancer Inst 2005;97:1180–4]
Despite years of research and hundreds of reports on tumor
markers in oncology, the number of markers that have emerged
as clinically useful is pitifully small ( 1 – 3 ) . Often, initially re-
ported studies of a marker show great promise, but subsequent
studies on the same or related markers yield inconsistent conclu-
sions or stand in direct contradiction to the promising results.
It is imperative that we attempt to understand the reasons that
multiple studies of the same marker lead to differing conclusions.
A variety of problems have been cited to explain these discrep-
ancies, such as general methodologic differences, poor study
design, assays that are not standardized or lack reproducibility,
and inappropriate or misleading statistical analyses that are often
based on sample sizes too small to draw meaningful conclusions
( 4 – 11 ) . For example, in retrospective studies, patient populations
are often biased toward patients with available tumor specimens.
Specimen availability may be related to tumor size and patient outcome (12), and the quantity, quality, and preservation method of the specimen may affect feasibility of conducting certain assays. There can also be biases or large variability inherent in the assay results, depending on the particular assay methods used (13-17). Statistical problems are commonplace. These problems include underpowered studies or overly optimistic reporting of effect sizes and significance levels due to multiple testing, subset analyses, and cutpoint optimization (18).
Unfortunately, many tumor marker studies have not been reported in a rigorous fashion, and published articles often lack sufficient information to allow adequate assessment of the quality of the study or the generalizability of study results. Such reporting deficiencies are increasingly being highlighted by systematic reviews of the published literature on particular markers or cancers (19-25).
The development of guidelines for the reporting of tumor marker studies was a major recommendation of the National Cancer Institute-European Organisation for Research and Treatment of Cancer (NCI-EORTC) First International Meeting on Cancer Diagnostics (From Discovery to Clinical Practice: Diagnostic Innovation, Implementation, and Evaluation) that was convened in Nyborg, Denmark, in July 2000. The purpose of the meeting was to discuss issues, accomplishments, and barriers in the field of cancer diagnostics. Poor study design and analysis, assay variability, and inadequate reporting of studies were identified as some of the major barriers to progress in this field. One of the working groups formed at the Nyborg meeting was charged with addressing statistical issues of poor design and analysis and with reporting of tumor marker prognostic studies. The guidelines that we present in this commentary are the product of that committee. The Program for the Assessment of Clinical Cancer Tests (PACCT) Strategy Group of the U.S. NCI has also strongly endorsed this effort (http://www.cancerdiagnosis.nci.nih.gov/assessment).
The guidelines that we present in this commentary build on earlier suggestions (21,26-29) and on educational publications (30-33). They recommend elements and formats for presentation, with the objectives of facilitating evaluation of the appropriateness and quality of study design, methods, and analyses, and of improving the ability to compare results across studies. As with the successful CONSORT initiative for randomized clinical trials (34) and the STARD statement for studies of diagnostic test accuracy (35), these guidelines suggest relevant information that should be provided about the study design, preplanned hypotheses, patient and specimen characteristics, assay methods, and statistical analysis methods. In addition, the guidelines suggest helpful presentations of data and important elements to include in discussions. Specific justifications for the need for each of the elements of the recommendations will be published elsewhere in an explanatory document.
We have developed these reporting guidelines primarily for
studies evaluating a single tumor marker of interest, often includ-
ing adjustment for standard clinical prognostic variables. They
are largely relevant for studies exploring more than one marker,
but they are not intended to specifically address statistical considerations in development of prognostic models from very large
numbers of candidate markers. The reason we chose to empha-
size prognostic marker studies is that they represent a large pro-
portion of the tumor marker literature and tend to be particularly
fraught with problems because they are often conducted on retro-
spective collections of specimens, and analyses may contain sub-
stantial exploratory components. For this commentary, we define
prognostic markers to be markers that have an association with
some clinical outcome, typically a time-to-event outcome such
as overall survival or recurrence-free survival. (Some individuals
adhere to a stricter definition of prognostic marker as applying
only to the natural history of patients who received no treatment
following local therapy.) Prognostic markers may be considered
in the clinical management of a patient. For example, they may
be used as decision aids in determining whether a patient should
receive adjuvant chemotherapy or how aggressive that therapy
should be. Predictive markers are generally used to make more
specific choices between treatment options. Predictive markers are used as indicators of the likely benefit to a specific patient of a specific treatment. For example, a predictive marker might indicate that a patient expressing the marker will benefit more from a new treatment than from standard treatment, whereas a patient not expressing the marker will derive little or no benefit from the new treatment. Predictive marker studies usually occur later in the marker development process, and there are far fewer published examples. Knowledge of specific treatments received and of how those treatment decisions were made becomes even more critical. In our judgment, the issues in reporting predictive marker studies are complex and different enough from those of prognostic marker studies that we are not willing to claim that these guidelines give predictive marker studies adequate coverage, although we believe that most of the guidance is relevant to such studies also.
The goal of these guidelines is to encourage transparent and
complete reporting so that the relevant information will be avail-
able to others to help them to judge the usefulness of the data
and understand the context in which the conclusions apply. These
guidelines are not intended to dictate specific designs or analysis
strategies. In general, there is more than one acceptable approach
to the design or analysis of a particular study, although these
guidelines should help to eliminate some clearly unacceptable
options, as have been discussed in other papers (7,26,33,36). For example, unacceptable options include reporting statistical significance of a marker's prognostic effect without acknowledging that the significance testing was preceded by extensive manipulations involving derivation of data-dependent cutpoints or variable selection procedures. High-quality reporting of a study cannot transform a poorly designed or poorly analyzed study into a good one, but it can help to identify the poor studies, and we believe it is an important first step in improving the overall quality of tumor marker prognostic studies.
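To illustrate why data-dependent cutpoint optimization is one of those unacceptable practices, the following simulation sketch (our own illustration, not part of the original commentary) compares a prespecified median split with a cutpoint chosen to minimize the p value, for a marker that by construction has no association with outcome. It assumes Python with numpy and scipy, and the two-group t test is a deliberate simplification of the survival analyses typically used in marker studies.

```python
# Illustrative simulation (hypothetical data): optimizing a cutpoint inflates the
# chance of a "significant" finding even when the marker has no true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients, n_simulations, alpha = 200, 2000, 0.05
median_hits = optimized_hits = 0

for _ in range(n_simulations):
    marker = rng.normal(size=n_patients)    # marker values
    outcome = rng.normal(size=n_patients)   # outcome unrelated to the marker

    # Prespecified analysis: dichotomize at the median, one test.
    high = marker > np.median(marker)
    if stats.ttest_ind(outcome[high], outcome[~high]).pvalue < alpha:
        median_hits += 1

    # "Optimal" cutpoint: scan many candidate splits and keep the smallest p value.
    p_best = min(
        stats.ttest_ind(outcome[marker > c], outcome[marker <= c]).pvalue
        for c in np.quantile(marker, np.linspace(0.1, 0.9, 17))
    )
    if p_best < alpha:
        optimized_hits += 1

print(f"false-positive rate, prespecified median split: {median_hits / n_simulations:.2f}")
print(f"false-positive rate, optimized cutpoint:        {optimized_hits / n_simulations:.2f}")
```

Under these assumptions the prespecified split keeps the false-positive rate near the nominal 5%, whereas the optimized cutpoint declares "significance" far more often; disclosing such a search is exactly what item 11 of the checklist below asks authors to do.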
MATERIALS AND METHODS
Initial ideas for key elements to be addressed in the guidelines
were assembled from literature citing empirical evidence of in-
adequate reporting or problematic analysis methods ( 9 , 18 , 36 , 37 )
that are based on published reviews of tumor marker studies.
Ideas were also generated by reviewing similar reporting guidelines that have been produced for other types of medical
research studies (CONSORT, QUOROM, MOOSE, and STARD)
( 34 , 35 , 38 , 39 ) . Three individuals from the working group (L.M.,
D.A., and G.C.) wrote a first draft to serve as a starting point for
discussion by the full group. Comments on drafts were made by
the full group on a conference call and through multiple e-mail
exchanges. A very preliminary draft was presented to the PACCT
Strategy Group in January 2001. In response to comments, the
guidelines were shortened, reformatted, and recirculated to the
full committee. They were posted to the PACCT website ( http://
www.cancerdiagnosis.nci.nih.gov/assessment/progress/clinical.
html ) for public comment and circulated to attendees of the
NCI-EORTC Second International Meeting on Cancer Diagnos-
tics (Conference on the Development of New Diagnostic Tools
for Cancer) that was held in Washington, DC, in June 2002. In
February 2003, three committee members (D.A., L.M., and W.S.)
met for 2 days to make further revisions. The version produced
in that February meeting was sent to the full committee for final comment. The version presented here incorporates those final
comments and was approved by the full committee.
RESULTS
Table 1 shows the recommendations for reporting studies on
tumor markers. Specific items are grouped under headings Introduction, Materials and Methods, Results, and Discussion, reflecting the relevant sections of a published scientific article. Further
details about the recommendations and explanatory material will
be provided elsewhere.
As noted in item 12, a diagram may be helpful to indicate
numbers of individuals included at different stages of a study. As
a minimum, such a diagram could show the number of patients
originally in the sample, the number remaining after exclusions,
and the numbers incorporated into univariate and multivariable
analyses.
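As a purely illustrative sketch of how such a flow summary might be tabulated (not part of the original recommendations; the patient table, column names, and exclusion rules are hypothetical, and Python with pandas and numpy is assumed):

```python
# Hypothetical sketch: tabulating patient counts for a REMARK-style flow diagram.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 250
patients = pd.DataFrame({
    "specimen_available": rng.random(n) < 0.8,            # hypothetical flags
    "assay_successful":   rng.random(n) < 0.9,
    "marker":             rng.normal(size=n),
    "age":                rng.integers(35, 85, size=n).astype(float),
})
patients.loc[rng.random(n) < 0.1, "age"] = np.nan          # some missing covariates

eligible   = patients[patients["specimen_available"]]
assayed    = eligible[eligible["assay_successful"]]
univariate = assayed.dropna(subset=["marker"])              # usable for univariate analyses
multivar   = assayed.dropna(subset=["marker", "age"])       # complete cases for the model

flow = pd.Series({
    "Patients in original sample":        len(patients),
    "With specimen available":            len(eligible),
    "With successful assay":              len(assayed),
    "Included in univariate analyses":    len(univariate),
    "Included in multivariable analysis": len(multivar),
})
print(flow.to_string())
```

Printing counts in this way makes the reasons for each exclusion explicit, which is the information a flow diagram is meant to convey.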
DISCUSSION
The reporting guidelines in this commentary are the result of
a collaborative effort among statisticians, clinicians, and labora-
tory scientists who are committed to improving and accelerating
the process by which tumor markers that provide useful informa-
tion for management of cancer patients are adopted into clini-
cal practice. In addition to the authors of this commentary, we
gratefully acknowledge the contributions of many individuals
with whom we have had informal discussions regarding these
guidelines and who have been supportive of this effort. All of us
participating in the development of these guidelines are actively
involved in the design, conduct, and analysis of studies involv-
ing tumor markers. We serve as editors and reviewers for many
scientific journals that publish tumor marker studies. We serve
on program committees for international meetings, as decision-
makers for funding agencies, and as participants in national and
international committees charged with evaluating and prioritizing
tumor markers for further study or making recommendations for
clinical use. We also are actively involved in our own research
involving tumor markers. As editors, reviewers, and program and
advisory committee members, we have struggled with having to
make decisions when insufficient information is provided about
study design or analysis methods. As individual investigators, we
have experienced the frustration of trying to interpret often confusing literature to guide our own research programs.
There are consequences of poor study reporting for the
research community as a whole. Poorly designed or inappro-
priately analyzed studies can attract undeserved attention when
they produce very dramatic but unfortunately incorrect results.
In contrast, some carefully designed and analyzed studies have
been overlooked because they produced less dramatic but
perhaps more accurate and realistic results. The poor quality of
reporting of prognostic marker studies may have contributed to
the relative scarcity of markers whose prognostic influence is
well supported. Thorough reporting is required no matter what
methods of design and analysis are used. Thorough reporting
does not solve problems of poor design or analysis that are being
reported; rather, it just fairly describes what problems may exist
and need to be considered in interpretation. It is our hope that
these guidelines will be embraced and used by journal editors,
reviewers, funding agencies, decision-making bodies, and indi-
vidual investigators.
Table 1. Reporting recommendations for tumor marker prognostic studies (REMARK)
INTRODUCTION
1. State the marker examined, the study objectives, and any prespecified hypotheses.
MATERIALS AND METHODS
Patients
2. Describe the characteristics (e.g., disease stage or comorbidities) of the study patients, including their source and inclusion and exclusion criteria.
3. Describe treatments received and how chosen (e.g., randomized or rule-based).
Specimen characteristics
4. Describe type of biological material used (including control samples) and methods of preservation and storage.
Assay methods
5. Specify the assay method used and provide (or reference) a detailed protocol, including specific reagents or kits used, quality control procedures, reproducibility assessments, quantitation methods, and scoring and reporting protocols. Specify whether and how assays were performed blinded to the study endpoint.
Study design
6. State the method of case selection, including whether prospective or retrospective and whether stratification or matching (e.g., by stage of disease or age) was used. Specify the time period from which cases were taken, the end of the follow-up period, and the median follow-up time.
7. Precisely define all clinical endpoints examined.
8. List all candidate variables initially examined or considered for inclusion in models.
9. Give rationale for sample size; if the study was designed to detect a specified effect size, give the target power and effect size.
Statistical analysis methods
10. Specify all statistical methods, including details of any variable selection procedures and other model-building issues, how model assumptions were verified,
and how missing data were handled.
11. Clarify how marker values were handled in the analyses; if relevant, describe methods used for cutpoint determination.
RESULTS
Data
12. Describe the flow of patients through the study, including the number of patients included in each stage of the analysis (a diagram may be helpful) and reasons for dropout. Specifically, both overall and for each subgroup extensively examined, report the numbers of patients and the number of events.
13. Report distributions of basic demographic characteristics (at least age and sex), standard (disease-specific) prognostic variables, and tumor marker, including
numbers of missing values.
Analysis and presentation
14. Show the relation of the marker to standard prognostic variables.
15. Present univariate analyses showing the relation between the marker and outcome, with the estimated effect (e.g., hazard ratio and survival probability). Preferably provide similar analyses for all other variables being analyzed. For the effect of a tumor marker on a time-to-event outcome, a Kaplan-Meier plot is recommended (an illustrative sketch follows this table).
16. For key multivariable analyses, report estimated effects (e.g., hazard ratio) with confidence intervals for the marker and, at least for the final model, all other variables in the model.
17. Among reported results, provide estimated effects with confidence intervals from an analysis in which the marker and standard prognostic variables are included, regardless of their statistical significance.
18. If done, report results of further investigations, such as checking assumptions, sensitivity analyses, and internal validation.
DISCUSSION
19. Interpret the results in the context of the prespecified hypotheses and other relevant studies; include a discussion of limitations of the study.
20. Discuss implications for future research and clinical value.
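The following sketch is our illustration of the kind of estimates items 15-17 ask authors to report; it is not part of the REMARK checklist itself. It assumes Python with the lifelines package and a hypothetical data frame whose column names (time, event, marker_high, age, stage, with stage numerically encoded) are our own.

```python
# Hypothetical sketch for items 15-17: Kaplan-Meier estimates by marker group, then
# univariate and multivariable Cox models whose printed summaries include the hazard
# ratio (exp(coef)) with its 95% confidence interval. All column names are assumptions.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

def report_marker(df: pd.DataFrame) -> None:
    # Item 15: univariate relation of the marker to the time-to-event outcome.
    km = KaplanMeierFitter()
    for level, group in df.groupby("marker_high"):
        km.fit(group["time"], event_observed=group["event"],
               label=f"marker_high={level}")
        print(level, km.median_survival_time_)   # one simple univariate summary

    univariate = CoxPHFitter().fit(
        df[["time", "event", "marker_high"]],
        duration_col="time", event_col="event")
    univariate.print_summary()                    # hazard ratio with 95% CI

    # Items 16-17: marker plus standard prognostic variables, reported with
    # confidence intervals regardless of statistical significance.
    multivariable = CoxPHFitter().fit(
        df[["time", "event", "marker_high", "age", "stage"]],
        duration_col="time", event_col="event")
    multivariable.print_summary()
```

If matplotlib is available, a Kaplan-Meier plot for the marker groups, as item 15 recommends, could be added with km.plot_survival_function() inside the loop.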
These guidelines have been labeled as applying to clinical prog-
nostic studies. Not all of the elements apply to studies conducted
in earlier phases of marker development ( 40 ) , for example, early
marker studies seeking to find an association between a new
marker and other clinical variables or existing prognostic factors.
However, our recommendation is that investigators conducting
early marker studies strive to adhere to as many of the reporting
guidelines as applicable in their situation, and the guidelines might
also suggest issues that will be important for them to consider
in planning follow-up studies on their investigational markers.
Studies of markers that can be used to predict the success of
particular therapies, such as molecular targeted therapies, need
additional considerations. It is our opinion that predictive marker
studies should generally be conducted within randomized trials
and should require a sufficient (usually larger) effective sample
size and that assays should be in a more advanced state of devel-
opment. The CONSORT statement for randomized clinical trials
can serve as a starting point for reporting guidelines for predictive
marker studies, but more issues relating to the marker assays must
be addressed. It is our feeling that more stringent and specific
guidelines need to be developed for reporting studies of predictive
markers. Such studies will be considered in somewhat more detail
in the planned explanatory paper to be published elsewhere.
It may not be possible to report every detail for every study. For
example, it is often difficult to provide detailed patient inclusion/
exclusion criteria or treatment information in retrospective prog-
nostic marker studies using archived tumor specimens. The impact
of such missing information must be judged in the specific context of the study and its stated conclusions. For example, a pure
prognostic study should be conducted in a group of patients who
have not received any systemic adjuvant therapy, but treatment in-
formation is often missing or unreliable in retrospective studies. In
these cases, it is important to recognize that apparent prognostic
effects may be influenced by potential treatment-by-marker interactions. The key point is that there must be a clear statement of
what is and is not known. In addition, it was beyond the scope
of these guidelines to recommend specific details that should be
reported for each of the major classes of marker assays, for ex-
ample, immunohistochemistry, in situ hybridization methods, or
DNA-based assays. There is an ongoing effort to define such assay-specific checklists by another working group evolving from the
NCI-EORTC International Meetings on Cancer Diagnostics.
Some of the reviewers suggested that the guidelines should
promote full public access to data, possibly even individual-level
data. We have chosen not to include this issue in the current scope
of the guidelines even though we view movement in this direction
as generally positive. One concern is that if a study was poorly
designed or inadequately reported, making its data publicly avail-
able may simply propagate bad science. Good study design and
data quality have to come first. We do recognize the potential benefits of promoting full public access to good-quality data. It would
allow verification of published analysis methods and results and
would facilitate alternative analyses and meta-analyses. Attainment
of these goals would be helped substantially if guidelines 10 and
11 were strictly applied so that statistical analysis methods were
described in sufficient detail to allow an individual independent
of the original research team to reproduce the results of the study
if supplied with the raw data. For extensive analyses, it is possible
that some of this information would have to be provided as supple-
mentary material available outside of the main published report,
for example, on the journal's or authors' Web site.
Although some might view adherence to these guidelines as
yet another burden in trying to publish or obtain funding, we
would argue that use of these guidelines is more likely to re-
duce burdens on the research community. Making clear what is
considered relevant and important to report in journal articles or
funding proposals will likely reduce review time, reduce requests
for revisions, and help to ensure a fair review process. Further-
more, we consider it a prerequisite for a thoughtful presentation
and interpretation of the results of a speci c study and a key aid
for a summary assessment of the effect of a marker in a review
paper. Most importantly, what greater reduction in burden could
there be than to eliminate some of the false leads generated by
poorly designed, analyzed, or reported studies that send research-
ers down unproductive paths, wasting years of time and money?
The ultimate usefulness of these guidelines will rely on how
widely they are adopted. We are heartened by the enthusiastic
responses that we received from the several journals that have agreed to simultaneously publish this paper. There is a clear recognition in the community that the time has come (if not long
overdue) to improve the quality of tumor marker study report-
ing and conduct. We hope that many journals will adopt these
guidelines as part of their editorial requirements. To the extent
that does not happen immediately, we have to rely on authors
of journal articles and reviewers of those articles to initiate the
movement toward adherence to these guidelines.
We expect that just as tumor marker research will evolve,
these guidelines will have to evolve to address new study para-
digms and new assay technologies. It is our hope that publication
of these guidelines will generate vigorous discussion leading to
continually improved versions and, ultimately, improved quality
of tumor marker studies.
The guidelines presented in this paper are available at http://
www.cancerdiagnosis.nci.nih.gov/assessment/progress/clinical.
html , as will be other recommendations from the group in due
course. As noted, a detailed explanatory paper is to be published
elsewhere, following the model of similar articles relating to the
CONSORT and STARD statements ( 41 – 42 ) .
REFERENCES
(1) Hayes DF, Bast RC, Desch CE, Fritsche H Jr, Kemeny NE, Jessup JM,
et al. Tumor marker utility grading system: a framework to evaluate clinical utility of tumor markers. J Natl Cancer Inst 1996;88:1456–66.
(2) Bast RC Jr, Ravdin P, Hayes DF, Bates S, Fritsche H Jr, Jessup JM, et al. for
the American Society of Clinical Oncology Tumor Markers Expert Panel.
2000 update of recommendations for the use of tumor markers in breast and
colorectal cancer: clinical practice guidelines of the American Society of
Clinical Oncology. J Clin Oncol 2001 ; 19 : 1865 – 78.
(3) Schilsky RL and Taube SE. Introduction: Tumor markers as clinical cancer
tests — are we there yet? Semin Oncol 2002 ; 29 : 211 – 2.
(4) McGuire WL. Breast cancer prognostic factors: evaluation guidelines.
J Natl Cancer Inst 1991 ; 83 : 154 – 5.
(5) Fielding LP, Fenoglio-Preiser CM, and Freedman LS. The future of prog-
nostic factors in outcome prediction for patients with cancer. Cancer
1992 ; 70 : 2367 – 77.
(6) Burke HB, Henson DE. Criteria for prognostic factors and for an enhanced
prognostic system. Cancer 1993 ; 72 : 3131 – 5.
(7) Concato J, Feinstein AR, Holford TR. The risk of determining risk with
multivariable models. Ann Intern Med 1993 ; 118 : 201 – 10.
(8) Gasparini G, Pozza F, Harris AL. Evaluating the potential usefulness of new
prognostic and predictive indicators in node-negative breast cancer patients.
J Natl Cancer Inst 1993 ; 85 : 1206 – 19.
(9) Simon R, Altman DG. Statistical aspects of prognostic factor studies in
oncology. Br J Cancer 1994 ; 69 : 979 – 85.
(10) Gasparini G. Prognostic variables in node-negative and node-positive breast
cancer. Breast Cancer Res Treat 1998 ; 52 : 321 – 31.
(11) Hall PA, Going JJ. Predicting the future: a critical appraisal of cancer prog-
nosis studies. Histopathology 1999 ; 35 : 489 – 94.
(12) Hoppin JA, Tolbert PE, Taylor JA, Schroeder JC, Holly EA. Potential for
selection bias with tumor tissue retrieval in molecular epidemiology studies.
Ann Epidemiol 2002 ; 12 : 1 – 6.
(13) Thor AD, Liu S, Moore DH II, Edgerton SM. Comparison of mitotic index,
in vitro bromodeoxyuridine labeling, and MIB-1 assays to quantitate prolif-
eration in breast cancer. J Clin Oncol 1999 ; 17 : 470 – 7.
(14) Gancberg D, Lespagnard L, Rouas G, Paesmans M, Piccart M, DiLeo A,
et al. Sensitivity of HER-2/neu antibodies in archival tissue samples of
invasive breast carcinomas. Correlation with oncogene amplification in
160 cases. Am J Clin Pathol 2000 ; 113 : 675 – 82.
(15) McShane LM, Aamodt R, Cordon-Cardo C, Cote R, Faraggi D, Fradet Y,
et al., and the National Cancer Institute Bladder Tumor Marker Network.
Reproducibility of p53 immunohistochemistry in bladder tumors. Clin Can-
cer Res 2000 ; 6 : 1854 – 64.
(16) Paik S, Bryant J, Tan-Chiu E, Romond E, Hiller W, Park K, et al. Real-world performance of HER2 testing: National Surgical Adjuvant Breast
and Bowel Project Experience. J Natl Cancer Inst 2002 ; 94 : 852 – 4.
(17) Roche PC, Suman VJ, Jenkins RB, Davidson NE, Martino S, Kaufman PA,
et al. Concordance between local and central laboratory HER2 testing in the
breast intergroup trial N9831. J Natl Cancer Inst 2002 ; 94 : 855 – 7.
(18) Altman DG, De Stavola BL, Love SB, Stepniewska KA. Review of
survival analyses published in cancer journals. Br J Cancer 1995 ; 72 :
511 – 8.
(19) Brundage MD, Davies D, Mackillop WJ. Prognostic factors in non-small
cell lung cancer: a decade of progress. Chest 2002 ; 122 : 1037 – 57.
(20) Mirza AN, Mirza NQ, Vlastos G, Singletary SE. Prognostic factors in node-
negative breast cancer: a review of studies with sample size more than 200
and follow-up more than 5 years. Ann Surg 2002 ; 235 : 10 – 26.
(21) Riley RD, Abrams KR, Sutton AJ, Lambert PC, Jones DR, Heney D,
et al. Reporting of prognostic markers: current problems and develop-
ment of guidelines for evidence-based practice in the future. Br J Cancer
2003 ; 88 : 1191 – 8.
(22) Riley RD, Burchill SA, Abrams KR, Heney D, Sutton AJ, Jones DR, et al.
A systematic review of molecular and biological markers in tumours of the
Ewing’s sarcoma family. Eur J Cancer 2003 ; 39 : 19 – 30.
(23) Burton A, Altman DG. Missing covariate data within cancer prognostic
studies: a review of current reporting and proposed guidelines. Br J Cancer
2004 ; 91 : 4 – 8.
(24) Popat S, Matakidou A, Houlston RS. Thymidylate synthase expression and
prognosis in colorectal cancer: a systematic review and meta-analysis. J Clin
Oncol 2004 ; 22 : 529 – 36.
(25) Riley RD, Heney D, Jones DR, Sutton AJ, Lambert PC, Abrams KR, et al.
A systematic review of molecular and biological tumor markers in neuro-
blastoma. Clin Cancer Res 2004 ; 10 : 4 – 12.
(26) Altman DG, Lyman GH. Methodological challenges in the evaluation
of prognostic factors in breast cancer. Breast Cancer Res Treat 1998 ; 52 :
289 – 303.
(27) Gion M, Boracchi P, Biganzoli E, Daidone MG. A guide for reviewing sub-
mitted manuscripts (and indications for the design of translational research
studies on biomarkers). Int J Biol Markers 1999 ; 14 : 123 – 33.
(28) Altman DG. Systematic reviews of evaluations of prognostic variables. In:
Egger M, Davey Smith G, Altman DG, editors. Systematic reviews in health
care. Meta-analysis in context. 2nd ed. London (UK): BMJ Books; 2001 .
p. 228 – 47.
(29) Altman DG. Systematic reviews of evaluations of prognostic variables.
BMJ 2001 ; 323 : 224 – 8.
(30) McShane LM, Simon R. Statistical methods for the analysis of prognostic
factor studies. In: Gospodarowicz MK, Henson DE, Hutter RV, O’Sullivan B,
Sobin LH, Wittekind Ch, editors. Prognostic factors in cancer. 2nd ed. New
York (NY): Wiley-Liss; 2001 . p. 37 – 48.
(31) Simon R. Evaluating prognostic factor studies. In: Gospodarowicz MK,
Henson DE, Hutter RV, O’Sullivan B, Sobin LH, Wittekind Ch, editors.
Prognostic factors in cancer. 2nd ed. New York (NY): Wiley-Liss; 2001 .
p. 49 – 56.
(32) Biganzoli E, Boracchi P, Marubini E. Biostatistics and tumor marker stud-
ies in breast cancer: Design, analysis and interpretation issues. Int J Biol
Markers 2003 ; 18 : 40 – 8.
(33) Schumacher M, Hollander N, Schwarzer G, Sauerbrei W. Prognostic factor
studies. In: Crowley J, editor. Handbook of statistics in clinical oncology.
New York (NY): CRC Press; 2005 . p. 307 – 51.
(34) Moher D, Schulz KF, Altman D for the CONSORT Group. The CONSORT
statement: revised recommendations for improving the quality of reports of
parallel-group randomized trials. JAMA 2001 ; 285 : 1987 – 91.
(35) Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM,
et al. Towards complete and accurate reporting of studies of diagnostic
accuracy: the STARD initiative. Standards for Reporting of Diagnostic
Accuracy. Clin Chem 2003 ; 49 : 1 – 6.
(36) Altman DG, Lausen B, Sauerbrei W, Schumacher M. Dangers of using
optimal cutpoints in the evaluation of prognostic factors. J Natl Cancer
Inst 1994 ; 86 : 829 – 35.
(37) Hilsenbeck SG, Clark GM, McGuire WL. Why do so many prognostic
factors fail to pan out? Breast Cancer Res Treat 1992 ; 22 : 197 – 206.
(38) Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup D for the QUOROM Group. Improving the quality of reports of meta-analyses of randomised
controlled trials: the QUOROM statement. Lancet 1999 ; 354 : 1896 – 900.
(39) Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D,
et al. Meta-analysis of observational studies in epidemiology: a proposal
for reporting. Meta-analysis Of Observational Studies in Epidemiology
(MOOSE) group. JAMA 2000 ; 283 : 2008 – 12.
(40) Hammond ME, Taube SE. Issues and barriers to development of clinically
useful tumor markers: a development pathway proposal. Semin Oncol
2002 ; 29 : 213 – 21.
(41) Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D,
et al. for the CONSORT Group. The revised CONSORT statement for re-
porting randomized trials: explanation and elaboration. Ann Intern Med
2001 ; 134 : 663 – 94.
(42) Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM,
et al. Standards for Reporting of Diagnostic Accuracy. The STARD state-
ment for reporting studies of diagnostic accuracy: explanation and elabora-
tion. Clin Chem 2003 ; 49 : 7 – 18.
NOTES
We are grateful to the U.S. National Cancer Institute and the European Organi-
zation for Research and Treatment of Cancer for their support of the NCI-EORTC
International Meetings on Cancer Diagnostics, from which the idea for these
guidelines originated. We thank the U.K. National Translational Cancer Research
Network for financial support provided to D. G. Altman.
Members of the Statistics Subcommittee of the NCI-EORTC Working Group on
Cancer Diagnostics are: Douglas G. Altman, DSc (Co-chair), Medical Statistics
Group, Cancer Research UK, Centre for Statistics in Medicine, Wolfson College,
Oxford OX2 6UD, UK; Lisa M. McShane, PhD (Co-chair), Biometric Research
Branch, US National Cancer Institute, Bethesda, MD 20892; Gary M. Clark, PhD,
OSI Pharmaceuticals, Inc., Boulder, CO 80301; Jose Costa, MD, Yale Cancer Center,
New Haven, CT 06510-3202; Angelo Di Leo, MD, PhD, Department of Oncology,
Hospital of Prato, 59100 Prato, Italy; Massimo Gion, MD, Centro Regionale Indica-
tori Biochimici di Tumore, Ospedale Civile, 30122 Venezia, Italy; Robert J. Mayer,
MD, Dana-Farber Cancer Institute, Boston, MA 02115; Willi Sauerbrei, PhD, Insti-
tut fuer Medizinische Biometrie und Medizinische Informatik, Universitaetsklinikum
Freiburg, 79104 Freiburg, Germany; and Sheila E. Taube, PhD, Cancer Diagnosis
Program, US National Cancer Institute, Bethesda, MD 20892.
Manuscript received December 28, 2004; revised June 14, 2005; accepted
July 21, 2005.
    • "In reporting our study, we have adhered to the guidelines of an important methodological paper from 2005 titled [30] . " To decrease any potential bias arising from a review of the medical records, we included " Patient Cohort " analysis to fulfill these criteria (Fig. 1). "
    [Show abstract] [Hide abstract] ABSTRACT: Background Triple-negative breast cancer (TNBC) is known for aggressive biologic features and poor prognosis. Epidermal growth factor receptor (EGFR) overexpression in TNBC indicates poor prognosis. However, there is no previous study of the relationship between expression of the entire human epidermal growth factor receptor (HER) family genes and patient prognosis in TNBC. Accordingly, we investigated the expression profiles of HER family genes in patients with TNBC to determine the prognostic value and clinical implications of HER family expression. Methods We used the nCounter expression assay (NanoString®) to measure the expression of EGFR, erb-B2 receptor tyrosine kinase 2 (ERBB2), ERBB3, ERBB4, and estrogen receptor 1 (ESR1) genes using mRNA extracted from paraffin-embedded tumor tissues from 203 patients diagnosed with TNBC. Our data were validated using a separate cohort of 84 TNBC patients. Results A total of 203 TNBC patients who received adjuvant chemotherapy after curative surgery from 2000 to 2004 formed the training set. The 84 TNBC patients in the validation consort were selected from breast cancer patients who received curative surgery since 2005 to 2010. Analysis of the expression profiles of the HER family genes in TNBC tissue specimens revealed that increased expression of ERBB4 was associated with poor prognosis according to survival analysis (5-year distant relapse free survival [5Y DRFS], low vs. high expression [cut-off: median]: 90.1 % vs. 80.2 %; p = 0.022). This trend was also observed in the validation set of TNBC patients (5Y DRFS, low vs. high: 69.4 % vs. 44.7 %; p = 0.053). In a multivariate Cox regression model, ERBB4 expression was identified as a indicator of long-term prognosis in patients with TNBC. Conclusions The expression profile of ERBB4, a member of the HER family, might serve as a prognostic marker in patients with TNBC.
    Full-text · Article · Dec 2016
    • "The study procedures were approved by the Ethics Committee for Epidemiological and General Research at the Faculty of life Science, Kumamoto University (Approval number: Ethic 559). Throughout this article, the definition of " prognostic marker " is consistent with REMARK Guide- lines [16]. "
    [Show abstract] [Hide abstract] ABSTRACT: Background: Phosphatidylinositol-4,5-bisphosphate 3-kinase, catalytic subunit alpha (PIK3CA) mutations that activate the PI3K/AKT signaling pathway have been observed in several types of carcinoma and have been associated with patient prognosis. However, the significance of PIK3CA mutations in gastric cancer remains unclear. This retrospective study investigated the relationship between PIK3CA mutations and clinical outcomes in patients with gastric cancer. Additionally, we reviewed the rate of PIK3CA mutations in gastric cancer and the association between PIK3CA mutations and prognosis in human cancers. Methods: The study included 208 patients with gastric cancer who underwent surgical resection at Kumamoto University Hospital, Japan, between January 2001 and August 2010. Mutations in PIK3CA exons 9 and 20 were quantified by pyrosequencing assays. Results: PIK3CA mutations were detected in 25 (12 %) of the 208 patients. Ten patients had c.1634A > G (p.E545G), 10 had c.1624G > A (p.E542K), 13 had c.1633G > A (p.E545K), nine had c.3139C > T (p.H1047R), and 1 had c.3140A > G (p.H1047Y) mutations. PIK3CA mutations were not significantly associated with any clinical, epidemiologic, or pathologic characteristic. Kaplan-Meier analysis showed no significant differences in disease-free survival (log rank P = 0.84) and overall survival (log rank P = 0.74) between patients with and without PIK3CA mutations. Conclusions: Mutations in PIK3CA did not correlate with prognosis in patients with gastric cancer, providing additional evidence for the lack of relationship between the two.
    Full-text · Article · Dec 2016
    • "The absence of DNA methylation markers in clinical settings is mainly due to an ill-powered marker identification strategy in small selected series resulting in chance findings and false-positive identification of biomarkers. In addition, a lack of adequate validation of markers in independent patient series generating a good level of evidence [7][8][9], and the use of technologies with varying sensitivity and specificity to analyze these markers, leads to discrepant results [10, 11] . With the development of diverse DNA methylation analysis techniques, comparison of methylation data has become more complex. "
    [Show abstract] [Hide abstract] ABSTRACT: Background Already since the 1990s, promoter CpG island methylation markers have been considered promising diagnostic, prognostic, and predictive cancer biomarkers. However, so far, only a limited number of DNA methylation markers have been introduced into clinical practice. One reason why the vast majority of methylation markers do not translate into clinical applications is lack of independent validation of methylation markers, often caused by differences in methylation analysis techniques. We recently described RET promoter CpG island methylation as a potential prognostic marker in stage II colorectal cancer (CRC) patients of two independent series. Methods In the current study, we analyzed the RET promoter CpG island methylation of 241 stage II colon cancer patients by direct methylation-specific PCR (MSP), nested-MSP, pyrosequencing, and methylation-sensitive high-resolution melting (MS-HRM). All primers were designed as close as possible to the same genomic region. In order to investigate the effect of different DNA methylation assays on patient outcome, we assessed the clinical sensitivity and specificity as well as the association of RET methylation with overall survival for three and five years of follow-up. Results Using direct-MSP and nested-MSP, 12.0 % (25/209) and 29.6 % (71/240) of the patients showed RET promoter CpG island methylation. Methylation frequencies detected by pyrosequencing were related to the threshold for positivity that defined RET methylation. Methylation frequencies obtained by pyrosequencing (threshold for positivity at 20 %) and MS-HRM were 13.3 % (32/240) and 13.8 % (33/239), respectively. The pyrosequencing threshold for positivity of 20 % showed the best correlation with MS-HRM and direct-MSP results. Nested-MSP detected RET promoter CpG island methylation in deceased patients with a higher sensitivity (33.1 %) compared to direct-MSP (10.7 %), pyrosequencing (14.4 %), and MS-HRM (15.4 %). While RET methylation frequencies detected by nested-MSP, pyrosequencing, and MS-HRM varied, the prognostic effect seemed similar (HR 1.74, 95 % CI 0.97–3.15; HR 1.85, 95 % CI 0.93–3.86; HR 1.83, 95 % CI 0.92–3.65, respectively). Conclusions Our results show that upon optimizing and aligning four RET methylation assays with regard to primer location and sensitivity, differences in methylation frequencies and clinical sensitivities are observed; however, the effect on the marker’s prognostic outcome is minimal. Electronic supplementary material The online version of this article (doi:10.1186/s13148-016-0211-8) contains supplementary material, which is available to authorized users.
    Full-text · Article · Dec 2016
Show more