Andrew R. Leach

The University of Sheffield, Sheffield, England, United Kingdom


Publications (59) · 176.63 Total Impact

  •
    ABSTRACT: The pharmaceutical industry remains under huge pressure to address the high attrition rates in drug development. Attempts to reduce the number of efficacy- and safety-related failures by analysing possible links to the physicochemical properties of small-molecule drug candidates have been inconclusive because of the limited size of data sets from individual companies. Here, we describe the compilation and analysis of combined data on the attrition of drug candidates from AstraZeneca, Eli Lilly and Company, GlaxoSmithKline and Pfizer. The analysis reaffirms that control of physicochemical properties during compound optimization is beneficial in identifying compounds of candidate drug quality and indicates for the first time a link between the physicochemical properties of compounds and clinical failure due to safety issues. The results also suggest that further control of physicochemical properties is unlikely to have a significant effect on attrition rates and that additional work is required to address safety-related failures. Further cross-company collaborations will be crucial to future progress in this area.
    Nature Reviews Drug Discovery 06/2015; 14(7). DOI:10.1038/nrd4609 · 37.23 Impact Factor
  • Michael M. Hann · Andrew R. Leach
    ABSTRACT: In this chapter, we explore several issues of complexity in molecular design, focusing initially on the complexity model that we introduced over 10 years ago, which explores the probability of useful interactions occurring with ligands and binding sites of differing complexity. We explore some extensions of the model, which include uniqueness of binding mode and sensitivity of detection. In addition, we review the promiscuity data that supports and challenges the model, concluding that the use of molecular weight as a complexity measure does not account for the promiscuity introduced by excessive lipophilicity. We explore the subject of molecular interactions from the perspective of Shannon entropy and how this may help to understand some of the issues associated with lipophilic interactions. We then address the issues associated with sampling of chemical space and the challenges of understanding and navigating the vastness of such spaces. The challenge of understanding the complexity of thermodynamic entropy and enthalpy is also discussed. Finally, we discuss the challenges of taking a reductionist approach to drug discovery and how the emergence of new behaviors as complexity is rebuilt is difficult to predict and hence prepare for in advance.
    De novo Molecular Design, 10/2013: pages 57-77; ISBN: 9783527334612
  • Andrew R. Leach · Richard A. Bryce · Alan J. Robinson
    ABSTRACT: Traditional de novo design algorithms are able to generate many thousands of ligand structures that meet the constraints of a protein structure, but these structures are often not synthetically tractable. In this article, we describe how concepts from structure-based de novo design can be used to explore the search space in library design. A key feature of the approach is the requirement that specific templates are included within the designed structures. Each template corresponds to the "central core" of a combinatorial library. The template is positioned within an acyclic chain whose length and bond orders are systematically varied, and the conformational space of each structure that results (core plus chain) is explored to determine whether it is able to link together two or more strongly interacting functional groups or pharmacophores located within a protein binding site. This fragment connection algorithm provides "generic" 3D molecules in the sense that the linking part (minus the template) is built from an all-carbon chain whose synthesis may not be easily achieved. Thus, in the second phase, 2D queries are derived from the molecular skeletons and used to identify possible reagents from a database. Each potential reagent is checked to ensure that it is compatible with the conformation of its parent 3D conformation and the constraints of the binding site. Combinations of these reagents according to the combinatorial library reaction scheme give product molecules that contain the desired core template and the key functional/pharmacophoric groups, and would be able to adopt a conformation compatible with the original molecular skeleton without any unfavorable intermolecular or intramolecular interactions. We discuss how this strategy compares with and relates to alternative approaches to both structure-based library design and de novo design.
    Journal of Molecular Graphics and Modelling 2000; 18(4-5):358-67, 526. DOI:10.1016/S1093-3263(00)00062-0 · 2.02 Impact Factor
  •
    ABSTRACT: A major challenge in toxicology is the development of non-animal methods for the assessment of human health risks that might result from repeated systemic exposure. We present here a perspective that considers the opportunities that computational modelling methods may offer in addressing this challenge. Our approach takes the form of a commentary designed to inform responses to future calls for research in predictive toxicology. It is considered essential that computational model-building activities be at the centre of the initiative, driving an iterative process of development, testing and refinement. It is critical that the models provide mechanistic understanding and quantitative predictions. The aim would be to predict effects in humans; to help define a challenging yet feasible initial goal, the focus would be on liver mitochondrial toxicity. This will inevitably present many challenges that naturally lead to a modular approach, in which the overall problem is broken down into smaller, more self-contained sub-problems that will subsequently need to be connected and aligned to develop an overall understanding. The project would investigate multiple modelling approaches in order to encourage links between the various disciplines that have hitherto often operated in isolation. The project should build upon current activities in the wider scientific community, to avoid duplication of effort and to ensure that investment is maximised. Strong leadership will be required to ensure alignment around a set of common goals that would be derived using a problem-statement-driven approach. Finally, although the focus here is on toxicology, there is a clear link to the wider challenges in systems medicine and improving human health.
    Toxicology 11/2012; DOI:10.1016/j.tox.2012.10.007 · 3.75 Impact Factor
  • Darren V S Green · Andrew R Leach · Martha S Head
    Journal of Computer-Aided Molecular Design 12/2011; 26(1):51-6. DOI:10.1007/s10822-011-9514-1 · 2.78 Impact Factor
  • Andrew R Leach · Michael M Hann
    ABSTRACT: We review the concept of molecular complexity in the context of the very simple model of molecular interactions that we introduced over ten years ago. A summary is presented of efforts to validate this simple model using screening data. The relationship between the complexity model and the problem of sampling chemical space is discussed, together with the relevance of these theoretical concepts to fragment-based drug discovery.
    Current Opinion in Chemical Biology 06/2011; 15(4):489-96. DOI:10.1016/j.cbpa.2011.05.008 · 7.65 Impact Factor
  • Andrew R Leach
    Journal of Cheminformatics 04/2011; 3(Suppl 1):1-1. DOI:10.1186/1758-2946-3-S1-O5 · 4.54 Impact Factor
  • ChemInform 11/2010; 32(48). DOI:10.1002/chin.200148231
  • ChemInform 02/2010; 32(7). DOI:10.1002/chin.200107219
  • Journal of Medicinal Chemistry 10/2009; 53(2):539-58. DOI:10.1021/jm900817u · 5.48 Impact Factor
  • ChemInform 07/2008; 39(29). DOI:10.1002/chin.200829203
  •
    ABSTRACT: Three field-based similarity methods are compared in retrospective virtual screening experiments. The methods are the CatShape module of CATALYST, ROCS, and an in-house program developed at the University of Sheffield called FBSS. The programs are used in both rigid and flexible searches carried out in the MDL Drug Data Report. UNITY 2D fingerprints are also used to provide a comparison with a more traditional approach to similarity searching, and similarity based on simple whole-molecule properties is used to provide a baseline for the more sophisticated searches. Overall, UNITY 2D fingerprints and ROCS with the chemical force field option gave comparable performance and were superior to the shape-only 3D methods. When the flexible methods were compared with the rigid methods, it was generally found that the flexible methods gave slightly better results than their respective rigid methods; however, the increased performance did not justify the additional computational cost required.
    Journal of Chemical Information and Modeling 05/2008; 48(4):719-29. DOI:10.1021/ci700130j · 4.07 Impact Factor
  • Stefan Senger · Andrew R. Leach
    ABSTRACT: In recent years the volume of data describing the interactions between small, drug-like molecules and biological targets has dramatically increased. With this data explosion comes the need for electronic mechanisms to store, search and retrieve relevant information for subsequent analysis—and hopefully beneficial insights. The relatively new development that is the primary subject of this review is the availability of databases that contain large amounts of quantitative data characterising the activity of compounds in biological assays. Key features and capabilities of the main databases currently available are summarised, some of the ways in which the information in such databases can be used in drug discovery is discussed, and possible future directions are considered.
    Annual Reports in Computational Chemistry 01/2008; DOI:10.1016/S1574-1400(08)00011-X
  • Structure-Based Drug Discovery, 05/2007: pages 99-127;
  • Structure-Based Drug Discovery, 05/2007: pages 72-98;
  • Chun-Wa Chung · Peter N. Lowe · Harren Jhoti · Andrew R. Leach
    Structure-Based Drug Discovery, 05/2007: pages 155-199;
  • Structure-Based Drug Discovery, 05/2007: pages 1-26;
  • Andrew R. Leach
    Reviews in Computational Chemistry, Volume 2, 01/2007: pages 1-55; ISBN: 9780470125793
  • Andrew R Leach · Brian K Shoichet · Catherine E Peishoff
    Journal of Medicinal Chemistry 11/2006; 49(20):5851-5. DOI:10.1021/jm060999m · 5.48 Impact Factor
  • Andrew R Leach · Michael M Hann · Jeremy N Burrows · Ed J Griffen
    ABSTRACT: There are clearly many different philosophies associated with adapting fragment screening into mainstream Drug Discovery Lead Generation strategies. Scientists at Astex, for instance, focus entirely on strategies involving use of X-ray crystallography and NMR. However, AstraZeneca uses a number of different fragment screening strategies. One approach is to screen a 2000 compound fragment set (with close to "lead-like" complexity) at 100 microM in parallel with every HTS such that the data are obtained on the entire screening collection at 10 microM plus the extra samples at 100 microM; this provides valuable compound potency data in a concentration range that is usually unexplored. The fragments are then screen-specific "privileged structures" that can be searched for in the rest of the HTS output and other databases as well as having synthesis follow-up. A typical workflow for a fragment screen within AstraZeneca is shown below (Figure 24) and highlights the desirability (particularly when screening >100 microM) for NMR and X-ray information to validate weak hits and give information on how to optimise them. In this chapter, we have provided an introduction to the theoretical and practical issues associated with the use of fragment methods and lead-likeness. Fragment-based approaches are still in an early stage of development and are just one of many interrelated techniques that are now used to identify novel lead compounds for drug development. Fragment based screening has some advantages, but like every other drug hunting strategy will not be universally applicable. There are in particular some practical challenges associated with fragment screening that relate to the generally lower level of potency that such compounds initially possess. Considerable synthetic effort has to be applied for post-fragment screening to build the sort of potency that would be expected to be found from a traditional HTS. 
    However, if there is no low-hanging fruit in a screening collection to be found by HTS, then fragment screening can help find novelty that may prevent a target from being discarded as intractable. As such, the approach offers some significant advantages by providing less complex molecules, which may have better potential for novel drug optimisation, and by enabling new chemical space to be explored more effectively. Many literature examples of fragment screening approaches are still at the "proof of concept" stage and, although delivering inhibitors or ligands, may still prove to be unsuitable when further ADMET and toxicity profiling is done. The next few years should see a maturing of the area and, as our understanding of how the concepts can best be applied improves, there are likely to be many more examples of attractive small-molecule hits, leads and candidate drugs derived from the approaches described.
    Molecular BioSystems 09/2006; 2(9):430-46. DOI:10.1039/b610069b · 3.18 Impact Factor

Publication Stats

5k Citations
176.63 Total Impact Points

Institutions

  • 2008–2009
    • The University of Sheffield
      Sheffield, England, United Kingdom
  • 2007
    • The Scripps Research Institute
      • Department of Cell and Molecular Biology
      La Jolla, CA, United States
  • 1991–1999
    • University of California, San Francisco
      • • Department of Pharmaceutical Chemistry
      • • Computer Graphics Laboratory (CGL)
      San Francisco, CA, United States
  • 1993–1995
    • University of Southampton
      • Division of Chemistry
      Southampton, England, United Kingdom
  • 1992
    • CSU Mentor
      Long Beach, California, United States
  • 1987–1990
    • University of Oxford
      Oxford, England, United Kingdom