Article

A critique of the hypothesis, and a defense of the question, as a framework for experimentation.

Novartis Institutes for Biomedical Research, Cambridge, MA, USA.
Clinical Chemistry 07/2010; 56(7):1080-5 (Impact Factor: 7.77). DOI: 10.1373/clinchem.2010.144477
Source: PubMed

ABSTRACT: Scientists are often steered by common convention, funding agencies, and journal guidelines into a hypothesis-driven experimental framework, despite Isaac Newton's dictum that hypotheses have no place in experimental science. Some may think that Newton's cautionary note, which was in keeping with an experimental approach espoused by Francis Bacon, is inapplicable to current experimental method since, in accord with the philosopher Karl Popper, modern-day hypotheses are framed to serve as instruments of falsification, as opposed to verification. But Popper's "critical rationalist" framework, too, is problematic. It has been accused of being: inconsistent on philosophical grounds; unworkable for modern "large science," such as systems biology; inconsistent with the actual goal of experimental science, which is verification and not falsification; and harmful to the process of discovery as a practical matter. A criticism of the hypothesis as a framework for experimentation is offered. Presented is an alternative framework, the query/model approach, which many scientists may discover is the framework they are actually using, despite being required to pay lip service to the hypothesis.

  • ABSTRACT: Single protein biomarkers measured with antibody-based affinity assays are the basis of molecular diagnostics in clinical practice today. There is great hope in discovering new protein biomarkers, and combinations of protein biomarkers, for advancing medicine through monitoring health, diagnosing disease, guiding treatment, and developing new therapeutics. The goal of high-content proteomics is to unlock protein biomarker discovery by measuring many (thousands) or all (∼23,000) proteins in the human proteome in an unbiased, data-driven approach. High-content proteomics has proven technically difficult because of the diversity of proteins, the complexity of relevant biological samples, such as blood and tissue, and the large concentration ranges involved (on the order of 10^12 in blood). Mass spectrometry and affinity methods based on antibodies have dominated approaches to high-content proteomics. For technical reasons, neither has simultaneously achieved adequate performance and high content. Here we review antibody-based protein measurement, multiplexed antibody-based protein measurement, and the limitations of antibodies for high-content proteomics due to their inherent cross-reactivity. Finally, we review a new affinity-based proteomic technology developed from the ground up to solve the problem of high content with high sensitivity and specificity. Based on a new generation of slow off-rate modified aptamers (SOMAmers), this technology is unlocking biomarker discovery.
    Expert Review of Molecular Diagnostics 11/2010; 10(8):1013-22 (Impact Factor: 4.09).
  • ABSTRACT: In 1997, while still working at NeXstar Pharmaceuticals, several of us made a proteomic bet. We thought then, and continue to think, that proteomics offers a chance to identify disease-specific biomarkers and improve healthcare. However, interrogating proteins turned out to be a much harder problem than interrogating nucleic acids. Consequently, the 'omics' revolution has been fueled largely by genomics. High-scale proteomics promises to transform medicine with personalized diagnostics, prevention, and treatment. We have now reached into the human proteome to quantify more than 1000 proteins in any human matrix (serum, plasma, CSF, BAL, and also tissue extracts) with our new SOMAmer-based proteomics platform. The surprising and pleasant news is that we have made unbiased protein biomarker discovery a routine and fast exercise. The downstream implications of the platform are substantial.
    New Biotechnology 12/2011; 29(5):543-9 (Impact Factor: 2.11).
  • ABSTRACT: INTRODUCTION: The goal of early predictive safety assessment (PSA) is to keep compounds with detectable liabilities from progressing further in the pipeline. Such compounds jeopardize the core of pharmaceutical research and development and limit the timely delivery of innovative therapeutics to the patient. Computational methods are increasingly used to help understand observed data, generate new testable hypotheses of relevance to safety pharmacology, and supplement and replace costly and time-consuming experimental procedures. AREAS COVERED: The authors survey methods operating on different scales of both physical extension and complexity. After discussing methods used to predict liabilities associated with the structures of individual compounds, the article reviews the use of adverse-event data and safety profiling panels. Finally, the authors examine the complexities of toxicology data from animal experiments and how these data can be mined. EXPERT OPINION: A significant obstacle for data-driven safety assessment is the absence of integrated data sets, owing to a lack of data sharing and of standard ontologies for data relevant to safety assessment. Informed decisions to derive focused sets of compounds can help to avoid compound liabilities in screening campaigns, and improved hit assessment of such campaigns can support the early termination of undesirable compounds.
    Expert Opinion on Drug Metabolism & Toxicology 11/2011; 7(12):1497-511 (Impact Factor: 2.94).

David J Glass