Andrew R. Leach

GlaxoSmithKline plc., London, England, United Kingdom

Publications (24) · 56.03 Total Impact Points

  •
    ABSTRACT: A major challenge in toxicology is the development of non-animal methods for the assessment of human health risks that might result from repeated systemic exposure. We present here a perspective that considers the opportunities that computational modelling methods may offer in addressing this challenge. Our approach takes the form of a commentary designed to inform responses to future calls for research in predictive toxicology. It is considered essential that computational model-building activities be at the centre of the initiative, driving an iterative process of development, testing and refinement. It is critical that the models provide mechanistic understanding and quantitative predictions. The aim would be to predict effects in humans; in order to help define a challenging yet feasible initial goal, the focus would be on liver mitochondrial toxicity. This will inevitably present many challenges that naturally lead to a modular approach, in which the overall problem is broken down into smaller, more self-contained sub-problems that will subsequently need to be connected and aligned to develop an overall understanding. The project would investigate multiple modelling approaches in order to encourage links between the various disciplines that hitherto have often operated in isolation. The project should build upon current activities in the wider scientific community, to avoid duplication of effort and to ensure that investment is maximised. Strong leadership will be required to ensure alignment around a set of common goals that would be derived using a problem-statement driven approach. Finally, although the focus here is on toxicology, there is a clear link to the wider challenges in systems medicine and improving human health.
    Toxicology 11/2012; · 4.02 Impact Factor
  • Darren V S Green, Andrew R Leach, Martha S Head
    Journal of Computer-Aided Molecular Design 12/2011; 26(1):51-6. · 3.17 Impact Factor
  • Andrew R Leach, Michael M Hann
    ABSTRACT: We review the concept of molecular complexity in the context of the very simple model of molecular interactions that we introduced over ten years ago. A summary is presented of efforts to validate this simple model using screening data. The relationship between the complexity model and the problem of sampling chemical space is discussed, together with the relevance of these theoretical concepts to fragment-based drug discovery.
    Current opinion in chemical biology 06/2011; 15(4):489-96. · 8.30 Impact Factor
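The "very simple model" this abstract refers to treats ligand and binding site as strings of complementary binary features. The toy Monte Carlo below is only a sketch in the spirit of that model (the receptor length, trial counts and function names are illustrative choices, not the published parameterisation); it shows how the chance of a single, fully complementary binding mode first rises and then collapses as ligand complexity grows:

```python
import random

def binding_modes(receptor, ligand):
    """Count offsets at which every ligand feature is complementary
    to the receptor feature it faces (a 'useful' binding mode)."""
    return sum(
        all(receptor[off + i] == -f for i, f in enumerate(ligand))
        for off in range(len(receptor) - len(ligand) + 1)
    )

def p_unique_mode(lig_len, rec_len=12, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that a random ligand of a
    given complexity (length) binds a random site in exactly one mode."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        receptor = [rng.choice((-1, 1)) for _ in range(rec_len)]
        ligand = [rng.choice((-1, 1)) for _ in range(lig_len)]
        if binding_modes(receptor, ligand) == 1:
            hits += 1
    return hits / trials
```

For long ligands the probability of any fully complementary match (and hence of a unique productive one) falls off sharply, which is the complexity argument for screening small, simple fragments.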
  • Andrew R Leach
    Journal of Cheminformatics 01/2011; 3:1-1. · 3.59 Impact Factor
  • ChemInform 01/2010; 32(7).
  • ChemInform 01/2010; 32(48).
  • Journal of Medicinal Chemistry 10/2009; 53(2):539-58. · 5.61 Impact Factor
  •
    ABSTRACT: Three field-based similarity methods are compared in retrospective virtual screening experiments. The methods are the CatShape module of CATALYST, ROCS, and an in-house program developed at the University of Sheffield called FBSS. The programs are used in both rigid and flexible searches carried out in the MDL Drug Data Report. UNITY 2D fingerprints are also used to provide a comparison with a more traditional approach to similarity searching, and similarity based on simple whole-molecule properties is used to provide a baseline for the more sophisticated searches. Overall, UNITY 2D fingerprints and ROCS with the chemical force field option gave comparable performance and were superior to the shape-only 3D methods. When the flexible methods were compared with the rigid methods, it was generally found that the flexible methods gave slightly better results than their respective rigid methods; however, the increased performance did not justify the additional computational cost required.
    Journal of Chemical Information and Modeling 05/2008; 48(4):719-29. · 4.30 Impact Factor
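The 2D-fingerprint baseline used in such retrospective experiments amounts to ranking the database by Tanimoto similarity to a query. A minimal, library-free sketch (fingerprints represented as sets of "on" bit positions; the function names are invented for illustration):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) coefficient between two bit sets."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def similarity_rank(query_fp, library):
    """Rank (name, fingerprint) pairs by decreasing similarity to the
    query -- the core operation of a 2D similarity virtual screen."""
    return sorted(library, key=lambda entry: tanimoto(query_fp, entry[1]),
                  reverse=True)
```

In a retrospective screen, the ranked list is then judged by how many known actives appear near the top (enrichment), which is how the 2D and 3D methods above were compared.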
  • ChemInform 01/2008; 39(29).
  • 05/2007: pages 99-127;
  • 05/2007: pages 72-98;
  • 05/2007: pages 1-26;
  • 05/2007: pages 155-199;
  •
    ABSTRACT: There are clearly many different philosophies associated with adapting fragment screening into mainstream Drug Discovery Lead Generation strategies. Scientists at Astex, for instance, focus entirely on strategies involving the use of X-ray crystallography and NMR. However, AstraZeneca uses a number of different fragment screening strategies. One approach is to screen a 2000-compound fragment set (with close to "lead-like" complexity) at 100 microM in parallel with every HTS, such that data are obtained on the entire screening collection at 10 microM plus the extra samples at 100 microM; this provides valuable compound potency data in a concentration range that is usually unexplored. The fragments are then screen-specific "privileged structures" that can be searched for in the rest of the HTS output and other databases, as well as having synthesis follow-up. A typical workflow for a fragment screen within AstraZeneca is shown below (Figure 24) and highlights the desirability (particularly when screening >100 microM) of NMR and X-ray information to validate weak hits and give information on how to optimise them. In this chapter, we have provided an introduction to the theoretical and practical issues associated with the use of fragment methods and lead-likeness. Fragment-based approaches are still at an early stage of development and are just one of many interrelated techniques that are now used to identify novel lead compounds for drug development. Fragment-based screening has some advantages but, like every other drug hunting strategy, will not be universally applicable. There are in particular some practical challenges associated with fragment screening that relate to the generally lower level of potency that such compounds initially possess. Considerable synthetic effort has to be applied after fragment screening to build the sort of potency that would be expected from a traditional HTS. However, if there are no low-hanging fruit in a screening collection to be found by HTS, then the use of fragment screening can help find novelty that may prevent a target from being discarded as intractable. As such, the approach offers some significant advantages by providing less complex molecules, which may have better potential for novel drug optimisation, and by enabling new chemical space to be explored more effectively. Many literature examples of fragment screening approaches are still at the "proof of concept" stage and, although delivering inhibitors or ligands, may still prove to be unsuitable when further ADMET and toxicity profiling is done. The next few years should see a maturing of the area and, as our understanding of how the concepts can best be applied grows, there are likely to be many more examples of attractive small-molecule hits, leads and candidate drugs derived from the approaches described.
    Molecular BioSystems 09/2006; 2(9):430-46. · 3.35 Impact Factor
  • 05/2005: pages 43-57; ISBN: 9783527603749
  • Andrew R. Leach, Irwin D. Kuntz
    ABSTRACT: A computational method for exploring the orientational and conformational space of a flexible ligand within a macromolecular receptor site is presented. The approach uses a variant of the DOCK algorithm [Kuntz et al., J. Mol. Biol., 161, 288 (1982)] to determine orientations of a fragment of the ligand within the site. These positions then form the basis for exploring the conformational space of the rest of the ligand, using a systematic search algorithm. The search incorporates a method by which the ligand conformation can be modified in response to interactions with the receptor. The approach is applied to two test cases, in both of which the crystallographically determined structures are obtained. However, alternative models can also be obtained that differ significantly from those observed experimentally. The ability of a variety of measures of the intermolecular interaction to discriminate among these structures is discussed.
    Journal of Computational Chemistry 09/2004; 13(6):730-748. · 3.84 Impact Factor
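The approach this abstract describes -- anchor orientations from a DOCK variant, followed by a systematic conformational search of the remainder of the ligand -- can be caricatured as a nested enumeration with steric pruning. The sketch below is purely illustrative (real implementations work in 3D and prune partial conformations branch-by-branch; all names and the callback signatures are hypothetical):

```python
from itertools import product

def anchor_and_search(anchor_poses, torsion_values, n_bonds, clashes, score):
    """For each rigid anchor orientation, systematically enumerate torsion
    angles for the flexible part of the ligand, discard conformers that
    clash with the receptor, and keep the best-scoring (lowest) model."""
    best = None
    for pose in anchor_poses:
        for torsions in product(torsion_values, repeat=n_bonds):
            if clashes(pose, torsions):   # steric rejection against the site
                continue
            s = score(pose, torsions)
            if best is None or s < best[0]:
                best = (s, pose, torsions)
    return best
```

The key design point the paper exploits is that fixing the anchor first makes the combinatorial torsion search tractable, since each anchor orientation constrains where the remaining atoms can go.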
  •
    ABSTRACT: Scoring function research remains a primary focus of current structure-based virtual screening (SVS) technology development. Here, we present an alternative method for scoring function design that attempts to combine crystallographic structural information with data derived directly from within SVS calculations. The technique utilizes a genetic algorithm (GA) to optimize functions based on binding property data derived from multiple virtual screening calculations. These calculations are undertaken on protein data bank (PDB) complex active sites using ligands of known binding mode in conjunction with "noise" compounds. The advantages of such an approach are that the function does not rely on assay data and that it can potentially use the "noise" binding data to recognize the sub-optimal docking interactions inherent in SVS calculations. Initial efforts in technique exploration using DOCK are presented, with comparisons made to existing DOCK scoring functions. An analysis of the problems inherent to scoring function development is also made, including issues in dataset creation and limitations in descriptor utility when viewed from the perspective of docking mode resolution. The future directions such studies might take are also discussed in detail.
    Journal of Molecular Graphics and Modelling 10/2003; 22(1):41-53. · 2.33 Impact Factor
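The GA optimisation described can be sketched as evolving the weights of a linear scoring function so that the known binding mode outscores the accompanying "noise" poses. This is a deliberately simplified toy (the fitness definition, GA operators and all names are assumptions for illustration; the paper's actual descriptors and operators differ):

```python
import random

def fitness(weights, binder, decoys):
    """Fraction of 'noise' poses scored worse (higher) than the known
    binding mode under a linear combination of descriptor terms."""
    score = lambda terms: sum(w * t for w, t in zip(weights, terms))
    b = score(binder)
    return sum(score(d) > b for d in decoys) / len(decoys)

def evolve_weights(binder, decoys, n_terms, pop=30, gens=40, seed=0):
    """Toy genetic algorithm: keep the fitter half, one-point crossover,
    occasional Gaussian point mutation."""
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(n_terms)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda w: fitness(w, binder, decoys),
                        reverse=True)
        parents = population[: pop // 2]
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_terms)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:        # point mutation
                child[rng.randrange(n_terms)] += rng.gauss(0, 0.3)
            children.append(child)
        population = parents + children
    return max(population, key=lambda w: fitness(w, binder, decoys))
```

The appeal of this style of fitness function, as the abstract notes, is that it needs no assay data: only docked poses of ligands with known binding modes plus "noise" poses.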
  •
    ABSTRACT: Three commercially available pharmacophore generation programs, Catalyst/HipHop, DISCO and GASP, were compared on their ability to generate known pharmacophores deduced from protein-ligand complexes extracted from the Protein Data Bank. Five different protein families were included: Thrombin, Cyclin Dependent Kinase 2, Dihydrofolate Reductase, HIV Reverse Transcriptase and Thermolysin. Target pharmacophores were defined through visual analysis of the data sets. The pharmacophore models produced were evaluated qualitatively through visual inspection and according to their ability to generate the target pharmacophores. Our results show that GASP and Catalyst outperformed DISCO at reproducing the five target pharmacophores.
    Journal of Computer-Aided Molecular Design 01/2002; 16(8-9):653-81. · 3.17 Impact Factor
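The evaluation described -- does a generated model reproduce the target pharmacophore deduced from the complex? -- can be approximated programmatically as a tolerant feature-matching test. A sketch only (greedy one-to-one matching within a distance tolerance; the `type_index` feature-labelling convention is invented for illustration):

```python
import math

def reproduces_target(model, target, tol=1.0):
    """model/target map feature labels like 'donor_1' to (x, y, z)
    coordinates. The model reproduces the target pharmacophore if every
    target feature is matched by an unused model feature of the same
    type within tol angstroms (greedy assignment, illustrative only)."""
    used = set()
    for label, pos in target.items():
        ftype = label.split("_")[0]
        for mlabel, mpos in model.items():
            if mlabel in used or mlabel.split("_")[0] != ftype:
                continue
            if math.dist(pos, mpos) <= tol:
                used.add(mlabel)
                break
        else:
            return False   # this target feature has no acceptable match
    return True
```

Extra features in the model are tolerated, which mirrors the paper's qualitative criterion: the question is whether the target features are recovered, not whether the model is minimal.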
  • Andrew R. Leach, Darren V. S. Green
    ABSTRACT: We describe a variety of the computational techniques which we use in the drug discovery and design process. Some of these computational methods are designed to support the new experimental technologies of high-throughput screening and combinatorial chemistry. We also consider some new approaches to problems of long-standing interest such as protein-ligand docking and the prediction of free energies of binding.
    Molecular Simulation 01/2001; 26(1):33-49. · 1.06 Impact Factor

Publication Stats

407 Citations
4 Downloads
1k Views
56.03 Total Impact Points

Institutions

  • 2009–2012
    • GlaxoSmithKline plc.
      London, England, United Kingdom
  • 2002–2008
    • The University of Sheffield
      Sheffield, England, United Kingdom
  • 2007
    • The Scripps Research Institute
      • Department of Cell and Molecular Biology
      La Jolla, CA, United States
  • 1998–2004
    • University of California, San Francisco
      • School of Pharmacy
      • Department of Pharmaceutical Chemistry
      San Francisco, CA, United States