Andrew R. Leach

The University of Sheffield, Sheffield, England, United Kingdom

Publications (60) · 138.5 Total Impact Points

  • Michael M. Hann · Andrew R. Leach ·
    ABSTRACT: In this chapter, we explore several issues of complexity in molecular design, focusing initially on the complexity model that we introduced over 10 years ago, which describes the probability of useful interactions occurring between ligands and binding sites of differing complexity. We consider some extensions of the model, including uniqueness of binding mode and sensitivity of detection. In addition, we review the promiscuity data that support and challenge the model, concluding that the use of molecular weight as a complexity measure does not account for the promiscuity introduced by excessive lipophilicity. We examine the subject of molecular interactions from the perspective of Shannon entropy and how this may help to understand some of the issues associated with lipophilic interactions. We then address the issues associated with sampling of chemical space and the challenges of understanding and navigating the vastness of such spaces. The challenge of understanding the complexity of thermodynamic entropy and enthalpy is also discussed. Finally, we discuss the challenges of taking a reductionist approach to drug discovery and how the emergence of new behaviors as complexity is rebuilt is difficult to predict, and hence to prepare for, in advance. (A toy simulation of the underlying matching argument follows this entry.)
    De novo Molecular Design, 10/2013: pages 57-77; ISBN: 9783527334612
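To make the matching argument behind the complexity model concrete, here is a minimal toy sketch, not the published formulation: ligand and binding site are modelled as random binary feature strings, and a "useful" event is a unique, all-features match. The string lengths, trial count and matching rule are all illustrative assumptions.

```python
import random

def p_unique_match(site_len=12, ligand_len=4, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that a random binary
    'ligand' matches the 'binding site' at exactly one alignment
    (a unique, useful binding mode). Toy model only: features are
    coin flips and a match requires every ligand feature to equal
    the site feature it faces."""
    rng = random.Random(seed)
    unique = 0
    for _ in range(trials):
        site = [rng.randint(0, 1) for _ in range(site_len)]
        ligand = [rng.randint(0, 1) for _ in range(ligand_len)]
        hits = sum(
            all(ligand[i] == site[off + i] for i in range(ligand_len))
            for off in range(site_len - ligand_len + 1)
        )
        unique += (hits == 1)
    return unique / trials

# The chance of a unique match peaks at low complexity and then
# falls steeply as the ligand grows -- the core point of the model.
for n in (2, 4, 6, 8, 10):
    print(n, round(p_unique_match(ligand_len=n), 4))
```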
  • Andrew R. Leach · Richard A. Bryce · Alan J. Robinson ·
    ABSTRACT: Traditional de novo design algorithms are able to generate many thousands of ligand structures that meet the constraints of a protein structure, but these structures are often not synthetically tractable. In this article, we describe how concepts from structure-based de novo design can be used to explore the search space in library design. A key feature of the approach is the requirement that specific templates are included within the designed structures. Each template corresponds to the "central core" of a combinatorial library. The template is positioned within an acyclic chain whose length and bond orders are systematically varied, and the conformational space of each structure that results (core plus chain) is explored to determine whether it is able to link together two or more strongly interacting functional groups or pharmacophores located within a protein binding site. This fragment connection algorithm provides "generic" 3D molecules in the sense that the linking part (minus the template) is built from an all-carbon chain whose synthesis may not be easily achieved. Thus, in the second phase, 2D queries are derived from the molecular skeletons and used to identify possible reagents from a database. Each potential reagent is checked to ensure that it is compatible with its parent 3D conformation and the constraints of the binding site. Combinations of these reagents according to the combinatorial library reaction scheme give product molecules that contain the desired core template and the key functional/pharmacophoric groups, and that would be able to adopt a conformation compatible with the original molecular skeleton without any unfavorable intermolecular or intramolecular interactions. We discuss how this strategy compares with and relates to alternative approaches to both structure-based library design and de novo design. (A simplified 2D enumeration sketch follows this entry.)
    Journal of Molecular Graphics and Modelling 2000; 18(4-5):358-367. DOI:10.1016/S1093-3263(00)00062-0 · 1.72 Impact Factor
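As a purely 2D flavour of the template-in-chain idea described above (the actual method varies bond orders and works in 3D against binding-site constraints, all of which is omitted here), the sketch below enumerates all-carbon chains of varying length on either side of a fixed core template and keeps the unique, chemically valid skeletons. It assumes the open-source RDKit toolkit; the core SMILES and chain-length limits are arbitrary choices.

```python
from itertools import product
from rdkit import Chem

# Para-substituted benzene written so that chain atoms can simply be
# prepended and appended as the two attachment points (illustrative core).
CORE = "c1ccc(cc1)"

def enumerate_skeletons(max_left=3, max_right=3):
    """Yield canonical SMILES for core-plus-chain skeletons with the
    lengths of the two all-carbon chains varied systematically."""
    seen = set()
    for n_left, n_right in product(range(1, max_left + 1),
                                   range(1, max_right + 1)):
        smiles = "C" * n_left + CORE + "C" * n_right
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:          # skip anything chemically invalid
            continue
        canon = Chem.MolToSmiles(mol)
        if canon not in seen:    # symmetric left/right swaps collapse here
            seen.add(canon)
            yield canon

for s in enumerate_skeletons():
    print(s)
```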
  • ABSTRACT: A major challenge in toxicology is the development of non-animal methods for the assessment of human health risks that might result from repeated systemic exposure. We present here a perspective that considers the opportunities that computational modelling methods may offer in addressing this challenge. Our approach takes the form of a commentary designed to inform responses to future calls for research in predictive toxicology. It is considered essential that computational model-building activities be at the centre of the initiative, driving an iterative process of development, testing and refinement. It is critical that the models provide mechanistic understanding and quantitative predictions. The aim would be to predict effects in humans; in order to help define a challenging yet feasible initial goal, the focus would be on liver mitochondrial toxicity. This will inevitably present many challenges that naturally lead to a modular approach, in which the overall problem is broken down into smaller, more self-contained sub-problems that will subsequently need to be connected and aligned to develop an overall understanding. The project would investigate multiple modelling approaches in order to encourage links between the various disciplines that have hitherto often operated in isolation. The project should build upon current activities in the wider scientific community, to avoid duplication of effort and to ensure that investment is maximised. Strong leadership will be required to ensure alignment around a set of common goals that would be derived using a problem-statement-driven approach. Finally, although the focus here is on toxicology, there is a clear link to the wider challenges in systems medicine and improving human health.
    Toxicology 11/2012; 302(2-3). DOI:10.1016/j.tox.2012.10.007 · 3.62 Impact Factor
  • Darren V S Green · Andrew R Leach · Martha S Head ·

    Journal of Computer-Aided Molecular Design 12/2011; 26(1):51-6. DOI:10.1007/s10822-011-9514-1 · 2.99 Impact Factor
  • Andrew R Leach · Michael M Hann ·
    ABSTRACT: We review the concept of molecular complexity in the context of the very simple model of molecular interactions that we introduced over ten years ago. A summary is presented of efforts to validate this simple model using screening data. The relationship between the complexity model and the problem of sampling chemical space is discussed, together with the relevance of these theoretical concepts to fragment-based drug discovery.
    Current Opinion in Chemical Biology 06/2011; 15(4):489-96. DOI:10.1016/j.cbpa.2011.05.008 · 6.81 Impact Factor
  • Andrew R Leach ·

    Journal of Cheminformatics 04/2011; 3(Suppl 1):O5. DOI:10.1186/1758-2946-3-S1-O5 · 4.55 Impact Factor
  • ChemInform 11/2001; 32(48). DOI:10.1002/chin.200148231
  • Gianpaolo Bravi · Darren V. S. Green · Michael M. Hann · Andrew R. Leach ·
    ChemInform 02/2001; 32(7). DOI:10.1002/chin.200107219
  • Andrew R Leach · Valerie J Gillet · Richard A Lewis · Robin Taylor ·

    Journal of Medicinal Chemistry 10/2009; 53(2):539-58. DOI:10.1021/jm900817u · 5.45 Impact Factor
  • ChemInform 07/2008; 39(29). DOI:10.1002/chin.200829203
  • ABSTRACT: Three field-based similarity methods are compared in retrospective virtual screening experiments. The methods are the CatShape module of CATALYST, ROCS, and an in-house program developed at the University of Sheffield called FBSS. The programs are used in both rigid and flexible searches carried out in the MDL Drug Data Report. UNITY 2D fingerprints are also used to provide a comparison with a more traditional approach to similarity searching, and similarity based on simple whole-molecule properties is used to provide a baseline for the more sophisticated searches. Overall, UNITY 2D fingerprints and ROCS with the chemical force field option gave comparable performance and were superior to the shape-only 3D methods. When the flexible methods were compared with the rigid methods, it was generally found that the flexible methods gave slightly better results than their rigid counterparts; however, the increased performance did not justify the additional computational cost required. (A minimal fingerprint-similarity sketch follows this entry.)
    Journal of Chemical Information and Modeling 05/2008; 48(4):719-29. DOI:10.1021/ci700130j · 3.74 Impact Factor
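For the 2D-fingerprint arm of such a comparison, a minimal similarity-search sketch: UNITY fingerprints are proprietary, so RDKit Morgan (circular) fingerprints with Tanimoto scoring stand in here, and the query and library SMILES are placeholders.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def similarity_search(query_smiles, library_smiles, top_n=3):
    """Rank a library against a query by Tanimoto similarity of
    Morgan fingerprints (a stand-in for UNITY 2D fingerprints)."""
    query_fp = AllChem.GetMorganFingerprintAsBitVect(
        Chem.MolFromSmiles(query_smiles), 2, nBits=2048)
    scored = []
    for smi in library_smiles:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:          # skip unparsable library entries
            continue
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
        scored.append((DataStructs.TanimotoSimilarity(query_fp, fp), smi))
    return sorted(scored, reverse=True)[:top_n]

library = ["CCO", "c1ccccc1O", "c1ccccc1CO", "CC(=O)Oc1ccccc1C(=O)O"]
for score, smi in similarity_search("c1ccccc1O", library):
    print(f"{score:.2f}  {smi}")
```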
  • Stefan Senger · Andrew R. Leach ·
    ABSTRACT: In recent years the volume of data describing the interactions between small, drug-like molecules and biological targets has dramatically increased. With this data explosion comes the need for electronic mechanisms to store, search and retrieve relevant information for subsequent analysis, and hopefully beneficial insights. The relatively new development that is the primary subject of this review is the availability of databases that contain large amounts of quantitative data characterising the activity of compounds in biological assays. Key features and capabilities of the main databases currently available are summarised, some of the ways in which the information in such databases can be used in drug discovery are discussed, and possible future directions are considered. (A minimal schema sketch follows this entry.)
    Annual Reports in Computational Chemistry 01/2008; 4:203-216. DOI:10.1016/S1574-1400(08)00011-X
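A minimal sketch of the store/search/retrieve pattern such databases support, using Python's built-in sqlite3. The three-table layout (compound, assay, activity) and every value below are invented for illustration and do not reproduce the schema of any named database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE compound (id INTEGER PRIMARY KEY, smiles TEXT NOT NULL);
CREATE TABLE assay    (id INTEGER PRIMARY KEY, target TEXT NOT NULL);
CREATE TABLE activity (
    compound_id INTEGER REFERENCES compound(id),
    assay_id    INTEGER REFERENCES assay(id),
    pic50       REAL          -- quantitative activity, -log10(IC50/M)
);
""")
conn.executemany("INSERT INTO compound VALUES (?, ?)",
                 [(1, "CCO"), (2, "c1ccccc1O")])
conn.execute("INSERT INTO assay VALUES (1, 'CDK2')")
conn.executemany("INSERT INTO activity VALUES (?, ?, ?)",
                 [(1, 1, 4.2), (2, 1, 6.8)])

# Retrieve compounds with pIC50 >= 6 against a chosen target.
query = """
SELECT c.smiles, a.pic50
FROM activity a
JOIN compound c ON c.id = a.compound_id
JOIN assay    s ON s.id = a.assay_id
WHERE s.target = ? AND a.pic50 >= ?
"""
for smiles, pic50 in conn.execute(query, ("CDK2", 6.0)):
    print(smiles, pic50)
```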
  • ABSTRACT: Traditional lead discovery has historically been driven by screening very large libraries of compounds to identify molecules with enzymatic or biological activity. However, traditional methods may not provide drug leads with suitable potency, novelty, molecular diversity or physicochemical properties. As a result, fragment-based screening has recently gained acceptance as an alternative approach for generating high quality lead molecules in pharmaceutical discovery. Why has fragment-based screening become so popular? A fragment-based approach can effectively represent the chemical diversity of a large, fully enumerated library without requiring the purchase or synthesis of enormous numbers of compounds. In addition, smaller scaffolds can provide better starting points for medicinal chemistry, as they can be elaborated into larger, more potent compounds, without pushing the limits of physicochemical properties such as molecular weight, polar surface area and clogP (which are known to correlate with oral bioavailability) (Lipinski et al., 1997; Teague et al., 1999). From a practical perspective, fragment-based screening may be carried out via any physical method that is capable of detecting binding of a small molecule to a macromolecular target. The most popular techniques employ NMR spectroscopy, X-ray crystallography, and mass spectrometry. NMR screening in particular has evolved into a proven method for lead generation, and in this chapter we will describe theoretical aspects and practical applications of the most commonly used NMR-based screening approaches. X-ray crystallography-based methods will be examined elsewhere in this volume. All fragment-based design strategies, whether they use biophysical or biochemical methods to detect binding of small molecules to a drug target, share common elements: i) a purified drug target, ii) a means of detecting binding, and iii) a strategy for use of binding information to generate drug leads. When the field of fragment-based screening was born, literature descriptions of NMR-based screening work focused on the method of detection, rather than the lead generation strategy used. For example, the SAR by NMR approach (Shuker et al., 1996), as initially proposed, used 15N-1H heteronuclear NMR to detect binding, and a fragment linking strategy to identify and optimize leads. Similarly, the SHAPES strategy, as originally described (Fejzo et al., 1999), used ligand-directed rather than protein-directed methods of detection, and a combination or fragment fusion strategy to generate more potent binders. As the number of studies expanded, it became evident that experimental approaches and ligand design strategies could be combined in a wide variety of ways to best address each target and drug design problem. For this reason, it is best to consider the physical methods used in NMR-based screening separately from the strategies, as we have done in this review. Although the techniques described in this chapter were sometimes initially proposed as standalone technologies, the examples we provide clearly show that NMR screening is best deployed as one component of an integrated platform of biophysical, biochemical, computational and chemical approaches such as X-ray crystallography, enzymology, virtual screening and combinatorial chemistry. In this context, significant synergies exist that can accelerate the identification of novel, drug-like lead classes of compounds for synthesis and optimization.
    While the early literature focused primarily on proof-of-concept studies with model systems (Fejzo, 2002; Shuker et al., 1996) and development of the experimental techniques required to detect ligand binding by NMR, more recent work has described applications of NMR fragment-based screening to numerous real-life drug discovery programs and the insight into inhibitor design that these methods provide. Because there are many excellent, comprehensive reviews available on the subject (Pellecchia et al., 2002b; Peng et al., 2004; Peng et al., 2001; Stockman and Dalvit, 2002; van Dongen et al., 2002b; Wyss et al., 2002), we will not attempt to review the entire field of NMR screening, but rather focus on some key examples from the recent literature. We will illustrate the common ligand design strategies by providing examples of how NMR methods have been applied to generate and optimize new chemical classes of drug leads for therapeutically relevant drug targets. (A short ligand-efficiency sketch follows this entry.)
    05/2007: pages 72-98;
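Fragment hits are weak binders, so a common way to compare them fairly against larger leads is ligand efficiency: binding free energy per heavy atom. The chapter does not prescribe this metric; the sketch below is just the textbook calculation, with made-up Kd values and atom counts.

```python
import math

R_KCAL = 0.001987  # gas constant in kcal/(mol*K)

def ligand_efficiency(kd_molar, heavy_atoms, temp_k=298.0):
    """LE = -dG / N_heavy with dG = RT*ln(Kd), in kcal/mol per
    heavy atom; values around 0.3 or above are often taken as a
    workable starting point in fragment triage."""
    dg = R_KCAL * temp_k * math.log(kd_molar)
    return -dg / heavy_atoms

# A weak 200 uM fragment can be more 'efficient' than a 10 nM lead:
print(round(ligand_efficiency(200e-6, 12), 2))  # ~0.42
print(round(ligand_efficiency(10e-9, 35), 2))   # ~0.31
```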
  • Chun-Wa Chung · Peter N. Lowe · Harren Jhoti · Andrew R. Leach ·
    ABSTRACT: The purpose of this chapter is to outline the processes involved in designing an effective and efficient mechanism of action (MOA) strategy, and to introduce the principles behind some of the techniques commonly employed for this purpose and the considerations in their usage. The term 'mechanism of action' has many meanings but here we define it as the essential information about the target-ligand interaction that permits a compound to be moved with confidence from one stage of the drug discovery process to the next. Consequently, the information required from MOA investigations largely depends upon the stage of the drug discovery pipeline at which these studies are initiated and the degree of characterisation the compound has already undergone. When a compound is first identified as a putative modulator of a target, determining its MOA may merely refer to confirming that it interacts directly with the target, rather than being an 'assay artifact'. As a compound is progressed, more detailed questions about the precise nature and kinetics of the interaction may be posed, and MOA studies are focused on properties that assess a compound's drug-like potential. Therefore the first consideration when designing an efficient MOA process is to determine the level of information required. Often the following questions may be asked about a compound's MOA: Does it bind the target directly? With what affinity? What is the site of interaction? Is there direct competition with a substrate or cofactor, or is there an allosteric mechanism? What is the stoichiometry of the interaction? What are the atomic details of the interaction? Is the binding covalent, what residues are involved, and are there opportunities to modify the compound to gain potency and specificity? Are only equilibrium parameters required or will kinetic measurements be useful in determining therapeutic benefit? The information desired and practical limitations (e.g. protein availability, throughput, assay sensitivity and the availability of tool compounds) clearly govern the techniques that may be employed. Commonly, several methods based on different principles will be available, and choices should be actively made to select the technique, or combination of techniques, that is most appropriate. These choices are system dependent and require a clear understanding of the system and the techniques themselves. Fortunately, common fundamental principles dictate how the parameters of interest can be determined; these principles are discussed in section 2. The choice between techniques is then determined by the theoretical and practical limitations of each method as outlined in section 3. The key to a successful MOA approach is to understand the drawbacks of each technique and to try to compensate for these using careful experimental design and, particularly with more complex systems, several complementary methods. Many of these considerations are true for any MOA approach and are not unique to a fragment-based method. However, the low molecular weight (MW) and low affinities of leads typically associated with this method may challenge some techniques more than others; where appropriate these challenges will be highlighted. (A small binding-equilibrium sketch follows this entry.)
    Structure-Based Drug Discovery, 05/2007: pages 155-199;
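Several of the MOA questions above (affinity, stoichiometry, competition) come down to simple equilibrium algebra. As a hedged illustration, the sketch below implements the standard single-site fraction-bound expression and the exact quadratic solution needed when the protein concentration is not negligible relative to Kd; all concentrations are invented.

```python
import math

def fraction_bound_simple(ligand_total, kd):
    """Single-site model assuming free ligand ~ total ligand
    (valid when [P] << Kd): fb = [L] / ([L] + Kd)."""
    return ligand_total / (ligand_total + kd)

def fraction_bound_exact(protein_total, ligand_total, kd):
    """Exact single-site solution from the quadratic for the
    complex concentration [PL]; no excess-ligand assumption."""
    s = protein_total + ligand_total + kd
    pl = (s - math.sqrt(s * s - 4.0 * protein_total * ligand_total)) / 2.0
    return pl / protein_total

# With 1 uM protein and Kd = 0.1 uM, the simple formula overestimates:
print(round(fraction_bound_simple(1e-6, 1e-7), 3))        # 0.909
print(round(fraction_bound_exact(1e-6, 1e-6, 1e-7), 3))   # ~0.73
```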
  • ABSTRACT: The completed sequencing and initial characterization of the human genome in 2001 (Lander et al 2001; Venter et al 2001), and of other organisms such as Drosophila melanogaster (Adams et al 2000) and the SARS coronavirus (Marra et al 2003), have educated us on the vast complexity of the proteome. Full genome characterization efforts highlight how critical it is to understand at a molecular level all of the protein products from multiple organisms. An important issue for addressing the molecular characterization challenge is the need to quickly and economically characterize normal and diseased biological processes in order to understand the basic biology and chemistry of the systems and to facilitate the discovery and development of new therapeutic and diagnostic protocols. In order to fully characterize proteins at the molecular level, three-dimensional protein structure determination has proven to be invaluable, complementing biological and biochemical information from other types of experiments. Structural information is also the ultimate rational drug design tool, with the potential to save an estimated 50% of the cost of drug discovery (Stevens 2004). However, the best means by which to attain structural knowledge is a topic of controversy. The traditional approach was a complex and labor-intensive process in which one protein or complex was studied at a time. The alternative is a high-throughput (HT), discovery-oriented approach wherein entire families, pathways or genomes are characterized. Benefits include the economy of scale, the speed of mass production, and a dramatic increase in discovery rates through the systematic collection and analysis of data. Prior to the late 1990s, the technologies and approaches were too slow and unreliable to allow for such larger-scale analyses. In the past, we have reviewed some of the technology developments in miniaturizing and streamlining structure determination pipelines (Stevens 2004; Abola et al 2000). For this chapter, we summarize the input and output of several structural genomics efforts that have validated new technology efforts over the first 5 years of the HT structural biology era. These technologies have been used by various HT pipelines that have contributed to the determination of over 1600 new structures, a high percentage of which were novel folds, and 70% of which had less than 30% identity to any other protein in the Protein Data Bank (PDB) at the time of release. As an example of the implementation of the HT pipeline, we discuss in some detail the specific approach of the Joint Center for Structural Genomics (JCSG) that we have been involved in.
    05/2007: pages 1-26;
  • ABSTRACT: The application of the newly established fragment screening and hit optimisation technology, based on automated high-throughput X-ray crystallography, iterative structure-based design and synthetic medicinal chemistry, has enabled the discovery of novel potent inhibitors of kinases that have now reached the clinical trial stage of development in a remarkably short time-frame. Fragment-based screening is now clearly established as an important new tool for drug discovery.
    Structure-Based Drug Discovery, 05/2007: pages 99-127;
  • Andrew R. Leach ·
    Reviews in Computational Chemistry, Volume 2, 01/2007: pages 1-55; ISBN: 9780470125793
  • A.R. Leach · V.J. Gillet ·
    ABSTRACT: Chemoinformatics draws upon techniques from many disciplines, including computer science, mathematics, computational chemistry and data visualisation, to tackle the problems of storing, searching and analysing chemical information. This, the first text written specifically for the field, aims to provide an introduction to its major techniques. The first part of the book deals with the representation of 2D and 3D molecular structures, the calculation of molecular descriptors and the construction of mathematical models. The second part describes other important topics including molecular similarity and diversity, the analysis of large data sets, virtual screening, and library design. Simple examples are used throughout to illustrate key concepts, supplemented with case studies from the literature. The book is aimed at graduate students, final-year undergraduates, and professional scientists. No prior knowledge is assumed other than a familiarity with chemistry and some basic mathematical concepts. (A short descriptor-calculation sketch follows this entry.)
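As a small taste of the descriptor-calculation material the book covers, here is a sketch using RDKit (one toolkit among many; the book itself is not tied to it) to compute three common 2D descriptors for a molecule given as SMILES.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

def describe(smiles):
    """Molecular weight, calculated logP and topological polar
    surface area -- typical inputs to similarity, diversity and
    property-based models."""
    mol = Chem.MolFromSmiles(smiles)
    return {
        "MW":   round(Descriptors.MolWt(mol), 1),
        "logP": round(Descriptors.MolLogP(mol), 2),
        "TPSA": round(Descriptors.TPSA(mol), 1),
    }

print(describe("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
```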
  • Andrew R. Leach · Michael M. Hann ·
    ABSTRACT: In this chapter we have provided an overview of the theoretical background to fragment methods and lead-likeness. Fragment-based approaches are still at an early stage of development and are just one of several techniques that can be used to identify novel lead compounds for drug development. There are in particular some practical challenges associated with fragment screening that relate to the generally lower level of potency that such compounds possess. Nevertheless, the approach also offers some significant advantages by providing less complex molecules, which may have better potential for drug optimisation, and by enabling chemical space to be more effectively explored. The next few years will undoubtedly see a maturing of the area and improvements in our understanding of how the concepts can be applied more widely to drug discovery. (A fragment-likeness filter sketch follows this entry.)
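One widely used operational definition of the 'less complex molecules' the chapter refers to is the 'rule of three' fragment-likeness guideline (Congreve et al., 2003). The chapter abstract does not name this filter, so the RDKit sketch below is offered only as an assumed example of how such criteria are applied in practice.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_rule_of_three(smiles):
    """Rule-of-three filter: MW < 300, cLogP <= 3, H-bond donors <= 3
    and H-bond acceptors <= 3."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (Descriptors.MolWt(mol) < 300
            and Descriptors.MolLogP(mol) <= 3
            and Lipinski.NumHDonors(mol) <= 3
            and Lipinski.NumHAcceptors(mol) <= 3)

print(passes_rule_of_three("c1ccccc1O"))              # phenol
print(passes_rule_of_three("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
```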
  • Andrew R Leach · Brian K Shoichet · Catherine E Peishoff ·
    Journal of Medicinal Chemistry 11/2006; 49(20):5851-5. DOI:10.1021/jm060999m · 5.45 Impact Factor

Publication Stats

6k Citations
138.50 Total Impact Points


  • 2008-2009
    • The University of Sheffield
      Sheffield, England, United Kingdom
  • 2007
    • The Scripps Research Institute
      • Department of Cell and Molecular Biology
      La Jolla, CA, United States
  • 1991-1999
    • University of California, San Francisco
      • Department of Pharmaceutical Chemistry
      • Computer Graphics Laboratory (CGL)
      San Francisco, CA, United States
  • 1993-1995
    • University of Southampton
      • Division of Chemistry
      Southampton, England, United Kingdom
  • 1992
    • CSU Mentor
      Long Beach, California, United States
  • 1987-1990
    • University of Oxford
      Oxford, England, United Kingdom