Chapter

Quantum Chemistry Program Exchange, Facilitator of Theoretical and Computational Chemistry in Pre-Internet History

Author: Donald B. Boyd

Abstract

The Quantum Chemistry Program Exchange (QCPE) was a service conceived in 1962 that began operating in 1963. Its purpose was to provide an inexpensive mechanism for theoretical chemists and other scientists to exchange software. Most of the computer programs were distributed as source code, so scientists could, if they wanted to, learn from or improve upon the inner workings of the algorithms. QCPE reached its zenith in the 1980s, when computational chemistry was growing rapidly and becoming widely recognized by the scientific community. The service was convenient and much used by experts, students, and experimentalists who wanted to perform research calculations in the study of molecules. QCPE also played an educational role by conducting workshops and providing on-call help to countless beginners. QCPE was based at Indiana University in Bloomington, Indiana, and served a worldwide clientele. The introduction of the Internet in the 1990s diminished the role of QCPE.


... Numerous refinements of semiempirical methods appeared over the years; they are beyond the scope of this discussion. They were reviewed by Beveridge and Pople in their 1970 book [65], and later developments, extending to the evolution of Hartree-Fock and DFT algorithms, are also recounted in a recent fascinating history of the Quantum Chemistry Program Exchange (QCPE) by Boyd [136]. ...
... In 1967, while at IBM, he developed a program for carrying out self-consistent field molecular orbital (SCF-MO) calculations, invoking the now-standard approximation of expressing the molecular orbitals as linear combinations of atomic orbitals (LCAO), which were themselves represented as sets of Gaussian functions [140]. This program was deposited that same year with QCPE, the major platform for the exchange of computational chemistry programs at the time [136]. Clementi had an early interest in the properties of biomolecules. ...
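As a brief illustration of the LCAO-SCF expansion mentioned in this excerpt (a generic textbook form with s-type primitives shown for simplicity, not a formula taken from Clementi's program), each molecular orbital is written as a linear combination of atomic basis functions, each of which is a contraction of primitive Gaussians:

\[
\psi_i(\mathbf{r}) \;=\; \sum_{\mu} c_{\mu i}\, \chi_\mu(\mathbf{r}),
\qquad
\chi_\mu(\mathbf{r}) \;=\; \sum_{k} d_{k\mu}\, e^{-\alpha_{k\mu} \lvert \mathbf{r}-\mathbf{A}_\mu \rvert^{2}},
\]

where the expansion coefficients c_{mu i} are determined self-consistently from the Roothaan equations FC = SC epsilon.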
... Though not focused on either potential function development or biomolecules, John Pople was perhaps the individual most responsible for the wide application of ab initio Hartree-Fock and post-HF methods to these fields, through his Gaussian series of programs [136]. His work focused on establishing the concept of "model chemistry" [163], a philosophy for QM method development essentially analogous to Lifson's Consistent Force Field philosophy for empirical force field development; namely, rigorously determining the difference between the chemistry derived by a given model and experiment, for many molecules in many families, and then applying the model to studies of new systems [164]. ...
Article
Full-text available
In this perspective, we review the theory and methodology of the derivation of force fields (FFs), and their validity, for molecular simulations, from their inception in the second half of the twentieth century to the improved representations at the end of the century. We examine the representations of the physics embodied in various force fields, their accuracy and deficiencies. The early days in the 1950s and 60s saw FFs first introduced to analyze vibrational spectra. The advent of computers was soon followed by the first molecular mechanics machine calculations. From the very first papers it was recognized that the accuracy with which the FFs represented the physics was critical if meaningful calculated structural and thermodynamic properties were to be achieved. We discuss the rigorous methodology formulated by Lifson, and later Allinger, to derive molecular FFs: not only to obtain optimal parameters but also to uncover deficiencies in the representation of the physics and to improve the functional form to account for this physics. In this context, the known coupling between valence coordinates and the importance of coupling terms to describe the physics of this coupling is evaluated. Early simplified, truncated FFs introduced to allow simulations of macromolecular systems are reviewed and their subsequent improvement assessed. We examine in some depth: the basis of the reformulation of the H-bond to its current description; the early introduction of QM in FF development methodology to calculate partial charges and rotational barriers; the powerful and abundant information provided by crystal structure and energetic observables to derive and test all aspects of a FF, including both nonbond and intramolecular functional forms; the combined use of QM, along with crystallography and lattice energy calculations, to derive rotational barriers about ɸ and ψ; the development and results of methodologies to derive "QM FFs" by sampling the QM energy surface, either by calculating energies at hundreds of configurations, or by describing the energy surface by energies, first and second derivatives sampled over the surface; and the use of the latter to probe the validity of the representations of the physics, reveal flaws, and assess improved functional forms. Research demonstrating significant effects of the flaws in the use of the improper torsion angle to represent out-of-plane deformations and of the standard Lorentz–Berthelot combining rules for nonbonded interactions, together with the more accurate descriptions presented, is also reviewed. Finally, we discuss the thorough studies involved in deriving the 2nd generation all-atom versions of the CHARMm, AMBER and OPLS FFs, and how the extensive set of observables used in these studies allowed, in the spirit of Lifson, a characterization of both the abilities and, more importantly, the deficiencies of the diagonal 12-6-1 FFs used. The significant contribution made by the extensive set of observables compiled in these papers as a basis to test improved forms is noted. In the following paper, we discuss the progress in improving the FFs and representations of the physics that have been investigated in the years following the research described above.
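For readers unfamiliar with the shorthand, the "diagonal 12-6-1" nonbonded form and the Lorentz–Berthelot combining rules discussed in this abstract have the generic textbook forms below (illustrative notation, not reproduced from the paper):

\[
E_{\mathrm{nb}} \;=\; \sum_{i<j} \left[ 4\varepsilon_{ij}\!\left( \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12} - \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6} \right) + \frac{q_i q_j}{4\pi\varepsilon_0\, r_{ij}} \right],
\qquad
\sigma_{ij} = \tfrac{1}{2}\,(\sigma_{ii}+\sigma_{jj}),
\quad
\varepsilon_{ij} = \sqrt{\varepsilon_{ii}\,\varepsilon_{jj}} .
\]

Here the 12-6 terms are the Lennard-Jones repulsion and dispersion, the final term is the Coulomb interaction between fixed partial charges, and "diagonal" refers to the absence of cross-coupling terms between valence coordinates.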
... MMP2 calibrates the force fields around aromatic rings by semi-empirical QM (Allinger 1976, 1977; Allinger et al. 1994). The Quantum Chemistry Program Exchange (QCPE) distributed the MM2/MMP2 and ECEPP programs and many other program source codes to computational chemists all over the world for free (Halgren 1996a, 1996b; Boyd 2013). Halgren et al. applied high-level ab initio QM to many compounds and developed the MMFF94 force field (Halgren 1996a, 1996b). ...
Article
Full-text available
Prediction of ligand-receptor complex structures is important both in basic science and in industry, for example in drug discovery. We describe various computational molecular docking methods: fundamental in silico (virtual) screening, ensemble docking, enhanced sampling (generalized-ensemble) methods, and other methods to improve the accuracy of the complex structure. We explain not only the merits of these methods but also their limits of applicability, and we discuss some interaction terms that are not considered in the in silico methods. In silico screening and ensemble docking are useful when one focuses on obtaining the native complex structure (the most thermodynamically stable complex). A generalized-ensemble method provides a free-energy landscape, which shows the distribution of the most stable complex structure and semi-stable ones in a conformational space. The barriers separating those stable structures are also identified. A researcher should select one of the methods according to the research aim and the complexity of the molecular system to be studied.
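As a brief illustration of the free-energy landscape produced by generalized-ensemble sampling (a standard relation, not a formula taken from this paper), the landscape along a chosen coordinate xi, for example a ligand-receptor distance, follows from the sampled probability distribution:

\[
F(\xi) \;=\; -\,k_\mathrm{B} T \,\ln P(\xi) \;+\; \mathrm{const},
\]

where P(xi) is the (reweighted) probability of observing xi at temperature T; minima of F correspond to the stable and semi-stable complex structures, and the ridges between them to the separating barriers.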
... About sixty years ago, computational quantum chemistry was undergoing a similarly rapid evolution. This evolution was greatly helped by the creation of the Quantum Chemistry Program Exchange [104], which helped to share software, instruct students, and foster collaborations. It is quite likely that the creation of a similar initiative in the field of automation of kinetic mechanism development would help greatly in reaching the targets summarized in Fig. 1. ...
Article
Driven by synergic advancements in high-performance computing and theory, the capability to estimate rate constants from first principles has evolved considerably in recent years. When this knowledge is coupled with a procedure to determine a list of all reactions relevant to describe the evolution of a reacting system, it becomes possible to envision a methodology to predict reaction kinetics theoretically. However, if a thorough examination of all possible reaction channels is desired, the number of reactions for which a rate constant estimate is needed can become quite large. This creates the need to automate rate constant estimation. In the present work, the status of this rapidly evolving field is reviewed, with emphasis on recent advancements and present challenges. Thermochemistry is the field where automation is most advanced. Entropies, heat capacities, and enthalpies can be determined efficiently with accuracy comparable to experiment for most chemical species containing a limited number of atoms, while machine learning can be used to improve the computational predictions for large chemical species using reduced computational resources. Several approaches have been proposed to automatically investigate reactivity over complex potential energy surfaces, while rate constants for elementary steps can be determined accurately for several reaction classes, such as abstraction, addition, beta-scission, and isomerization. Kinetic mechanisms can be automatically generated using methodologies that differ in level of complexity and required physical insight. Among the challenges still to be met are the estimation of rate constants for intrinsically multireference reaction classes, such as barrierless processes, the containment of the number of reactions to screen in mechanism development, and the integration of the existing automated software. It is suggested that the synergy between experiment and theory should evolve towards a stage where experiments are focused on the estimation of parameters for which theoretical tools are least predictive, and vice versa.
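As one concrete example of the kind of first-principles rate constant such automated workflows compute (a standard transition-state-theory expression, not a formula from this article), the thermal rate constant for a bimolecular reaction A + B can be written as:

\[
k(T) \;=\; \kappa(T)\, \frac{k_\mathrm{B} T}{h}\, \frac{Q^{\ddagger}(T)}{Q_\mathrm{A}(T)\, Q_\mathrm{B}(T)}\, e^{-E_0 / k_\mathrm{B} T},
\]

where Q^ddagger, Q_A, and Q_B are partition functions evaluated from ab initio structures and frequencies, E_0 is the zero-point-corrected barrier height, and kappa(T) is a tunneling correction; automation amounts to generating these ingredients for every reaction in a mechanism.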
... Computational materials science dates back to the mid-twentieth century, an early example being the Quantum Chemistry Program Exchange, which allowed experimental chemists to perform quantum chemical calculations with relative ease [34]. At this early stage, the paradigm of computational materials science was to use computational methods to help interpret experimental results by doing a few expensive calculations on materials whose structure was already well known. ...
Article
Full-text available
This paper reviews some of the challenges posed by the huge growth of experimental data generated by the new generation of large-scale experiments at UK national facilities at the Rutherford Appleton Laboratory (RAL) site at Harwell near Oxford. Such ‘Big Scientific Data’ comes from the Diamond Light Source and Electron Microscopy Facilities, the ISIS Neutron and Muon Facility and the UK's Central Laser Facility. Increasingly, scientists are now required to use advanced machine learning and other AI technologies both to automate parts of the data pipeline and to help find new scientific discoveries in the analysis of their data. For commercially important applications, such as object recognition, natural language processing and automatic translation, deep learning has made dramatic breakthroughs. Google's DeepMind has now used the deep learning technology to develop their AlphaFold tool to make predictions for protein folding. Remarkably, it has been able to achieve some spectacular results for this specific scientific problem. Can deep learning be similarly transformative for other scientific problems? After a brief review of some initial applications of machine learning at the RAL, we focus on challenges and opportunities for AI in advancing materials science. Finally, we discuss the importance of developing some realistic machine learning benchmarks using Big Scientific Data coming from several different scientific domains. We conclude with some initial examples of our ‘scientific machine learning’ benchmark suite and of the research challenges these benchmarks will enable. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
... John Pople, realizing the importance of rapidly developing computer technologies, created a program, Gaussian 70, that could perform ab initio calculations: predicting the behaviour of molecules of modest size purely from the fundamental laws of physics [2]. In the 1960s, the Quantum Chemistry Program Exchange brought quantum chemistry to the masses in the form of useful practical tools [3]. Suddenly, experimentalists with little or no theoretical training could perform quantum calculations too. ...
Article
Full-text available
Here we summarize recent progress in machine learning for the chemical sciences. We outline machine-learning techniques that are suitable for addressing research questions in this domain, as well as future directions for the field. We envisage a future in which the design, synthesis, characterization and application of molecules and materials is accelerated by artificial intelligence.
... Software was programmed once and then sent to the Quantum Chemistry Program Exchange to be given away to any interested party for free and "as is" [46]. It was now planned, designed, and developed to be distributed in the academic and industrial disciplines of chemistry that could benefit from computational methods. In the 1980s, the number of publications using commercial computational chemistry software grew exponentially, as did the number of chemistry calculations published by industry. ...
... The merging of methods into packages was conceived with the aim of enlarging the user base. Software was programmed once and then sent to the QCPE (Quantum Chemistry Program Exchange) to be given away to any interested party for free and "as is" [39]. It was now planned, designed, and developed to be distributed in the academic and industrial disciplines of chemistry that could benefit from computational methods. ...
Article
Computational chemistry is a scientific field within which the computer is a pivotal element. This scientific community emerged in the 1980s and was involved with two major industries: the computer manufacturers and the pharmaceutical industry, the latter becoming a potential market for the former through molecular modeling software packages. We aim to address the difficult relationships between scientific modeling methods and the software implementing these methods throughout the 1990s. Developing, using, licensing, and distributing software leads to multiple tensions among the actors in intertwined academic and industrial contexts. The Computational Chemistry mailing List (CCL), created in 1991, constitutes a valuable corpus for revealing the tensions associated with software within the community. We analyze in detail two flame wars that exemplify these tensions. We conclude that models and software must be addressed together. Interrelations between both imply that openness in computational science is complex.
... QCPE was to serve as a living repository of programs useful for theoretical chemistry, beyond quantum chemistry itself, so that programs or parts of programs would not have to be rewritten from scratch [14]. With these antecedents, at an informal 1963 summer seminar attended by several graduate students at the Chemistry Department of the University of Florida, Joe Hirschfelder eventually announced: "I predict! I predict that in ten years from now!" (Joe searched the audience for sympathy while his tie was vanishing under tons of chalk) "in 1973, Quantum Chemistry will come of age," which meant that "there will be a striking demand for quantum chemists at Chemistry Departments" all over the world. ...
Chapter
Per-Olov Löwdin was an inspiring and compelling teacher. His most prominent papers were written more than 50 years ago, whereas since then quantum chemistry, its software, and its computers have changed almost beyond recognition. In accurate calculations with truncation energy errors, an important part of Per-Olov’s themes and thoughts appears highly relevant today in applications to atoms and small molecules. My purpose here is to place projection operators, natural orbitals, error bounds, and the variational theorem for finite Hermitian matrices, in the light of current challenges in the field.
... At Indiana University, Harry Shull founded the Quantum Chemistry Program Exchange (QCPE) in 1962-1963. QCPE was to serve as a living repository of programs useful for theoretical chemistry, beyond quantum chemistry itself, so that programs or parts of programs would not have to be rewritten from scratch [14]. ...
Chapter
Full-text available
Recent progress in selected configuration interaction (CI) with truncation energy error (SCI-TEE) is discussed together with applications. In molecular CI, we take up (i) preselection of huge numbers of configurations and sensitivity analyses, (ii) highlights of SCI-TEE applied to H2O ground state, and (iii) symmetric dissociation of H2O ground state. We describe automatic optimization of atomic orbital bases to within a prescribed complete basis set energy error and an application of it to Ne ground state. New perspectives on the use of optimized orbital bases are briefly outlined. We discuss opportunities for new theory and new predictions in connection with a genuine variational theorem for the Breit-Dirac Hamiltonian. We explain the meaning of positive- and negative-energy orbitals in contrast with positive-energy and unphysical N-electron states. We conclude with an overview of current and planned work.
... The Quantum Chemistry Program Exchange (QCPE) was one important factor (among many) that led to the explosion of computational chemistry [1]. The advantages of exchanging source code were clear to the participants in the QCPE. The exchange of code was efficient, avoiding the need for students to reinvent the wheel for each new project. ...
Article
Full-text available
"Soft theories," i.e., "heuristic models based on reasoning by analogy" largely drove chemistry understanding for 150 years or more. But soft theories have their limitations and with the expansion of chemistry in the mid-20th century, more and more inexplicable (by soft theory) experimental results were being obtained. In the past 50 years, quantum chemistry, most often in the guise of applied theoretical chemistry including computational chemistry, has provided (a) the underlying "hard evidence" for many soft theories and (b) the explanations for chemical phenomena that were unavailable by soft theories. In this publication, we define "hard theories" as "theories derived from quantum chemistry." Both soft and hard theories can be qualitative and quantitative, and the "Houk quadrant" is proposed as a helpful categorization tool. Furthermore, the language of soft theories is often used appropriately to describe quantum chemical results. A valid and useful way of doing science is the appropriate use and application of both soft and hard theories along with the best nomenclature available for successful communication of results and ideas.
Chapter
The twentieth and early twenty-first centuries have seen many fascinating developments in pharmaceutical research and development, with a rising role for computers. Computers have transformed drug discovery from a trial-and-error approach to rational drug design. Initial quantitative structure-activity relationship (QSAR) studies laid the foundation of computer-aided drug design (CADD), which subsequently evolved into structure-based drug design (SBDD), ligand-based drug design (LBDD), and fragment-based drug design (FBDD). The successes of computational chemistry and CADD have brought many interesting drug molecules from the bench to the patient's bedside. Drug discovery, being a multidisciplinary field, has benefited from advancements not only in computers but also in associated technologies such as software, the Internet, big data, omics, the Internet of Things, and artificial intelligence (AI). This chapter is the author's attempt to survey the history from 1960 to the present and identify the key developments that have played the largest role in the evolution of drug discovery and development.
Article
Computational chemistry has emerged as a sub-field of science over the last five decades, not least due to the amazing increase in computational resources. Equally important, however, are the continuing developments of theoretical methods and of software technology. The refinement of theoretical models moves forward at a steady pace, but at times takes a radical turn in new directions. Computational chemistry methods are now integrated elements in many research fields and are routine tools for non-experts. The plethora of different models, however, forms a bewildering jungle of choices, often resulting in practitioners defaulting to the tried and true. The present contribution contains some personal reflections on the development of computational chemistry methods over the last four decades, with special focus on the development of basis sets and density functional methods.
Article
Psi4NumPy demonstrates the use of efficient computational kernels from the open-source Psi4 program through the popular NumPy library for linear algebra in Python to facilitate the rapid development of clear, understandable Python computer code for new quantum chemical methods, while maintaining a relatively low execution time. Using these tools, reference implementations have been created for a number of methods, including self-consistent field (SCF), SCF response, many-body perturbation theory, coupled-cluster theory, configuration interaction, and symmetry-adapted perturbation theory. Furthermore, several reference codes have been integrated into Jupyter notebooks, allowing background, underlying theory, and formula information to be associated with the implementation. Psi4NumPy tools and associated reference implementations can lower the barrier for future development of quantum chemistry methods. These implementations also demonstrate the power of the hybrid C++/Python programming approach employed by the Psi4 program.
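As a hedged sketch of the workflow the Psi4NumPy approach builds on, the snippet below drives Psi4 from Python and pulls wavefunction quantities into NumPy arrays. It uses only documented public Psi4 API calls (psi4.geometry, psi4.set_options, psi4.energy, wfn.Ca); the water geometry, cc-pVDZ basis, and memory setting are illustrative choices, not taken from the paper.

import numpy as np
import psi4

# Illustrative settings and molecule (assumptions, not from the paper)
psi4.set_memory("500 MB")
mol = psi4.geometry("""
O
H 1 0.96
H 1 0.96 2 104.5
""")
psi4.set_options({"basis": "cc-pvdz", "scf_type": "pk"})

# Run a Hartree-Fock (SCF) calculation and keep the wavefunction object
scf_energy, wfn = psi4.energy("scf", return_wfn=True)
print("SCF energy (hartree):", scf_energy)

# Expose Psi4 quantities as NumPy arrays, the pattern the Psi4NumPy
# reference implementations use to write new methods in readable Python
C = np.asarray(wfn.Ca())            # MO coefficient matrix
eps = np.asarray(wfn.epsilon_a())   # orbital energies
print("Number of molecular orbitals:", C.shape[1])

In the Psi4NumPy reference implementations, arrays obtained in this way feed hand-written NumPy expressions for post-SCF methods such as MP2 or coupled-cluster theory.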
Article
Full-text available
This study evaluates the effectiveness of simple and expert searches in Google Scholar (GS), EconLit, GEOBASE, PAIS, POPLINE, PubMed, Social Sciences Citation Index, Social Sciences Full Text, and Sociological Abstracts. It assesses the recall and precision of 32 searches in the field of later-life migration: nine simple keyword searches and 23 expert searches constructed by demography librarians at three top universities. For simple searches, Google Scholar's recall and precision are well above average. For expert searches, the relative effectiveness of GS depends on the number of results users are willing to examine. Although Google Scholar's expert-search performance is just average within the first fifty search results, GS is one of the few databases that retrieves relevant results with reasonably high precision after the fiftieth hit. The results also show that simple searches in GS, GEOBASE, PubMed, and Sociological Abstracts have consistently higher recall and precision than expert searches. This can be attributed not to differences in expert-search effectiveness, but to the unusually strong performance of simple searches in those four databases.
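For reference, the two retrieval metrics reported in this study have the standard definitions below (generic information-retrieval formulas, not specific to this paper):

\[
\mathrm{precision} \;=\; \frac{\lvert \mathrm{relevant} \cap \mathrm{retrieved} \rvert}{\lvert \mathrm{retrieved} \rvert},
\qquad
\mathrm{recall} \;=\; \frac{\lvert \mathrm{relevant} \cap \mathrm{retrieved} \rvert}{\lvert \mathrm{relevant} \rvert}.
\]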
Article
This article reports a 2010 empirical study, using a 2005 study as a baseline, to compare Google Scholar's coverage of scholarly journals with that of commercial services. Through random samples of eight databases, the author finds that, as of 2010, Google Scholar covers 98 to 100 percent of scholarly journals, both from publicly accessible Web content and from the subscription-based databases that Google Scholar partners with. In 2005, the coverage of the same databases ranged from 30 to 88 percent. The author explores de-duplication of search results by Google Scholar and discusses its impacts on searches and library resources. With the dramatic improvement of Google Scholar, the uniqueness and effectiveness of subscription-based abstracts and indexes have changed dramatically.
Chapter
Contents: Introduction and Overview; Methodology and Results; Proficiencies in Demand; Analysis; An Aside: Economics 101; Prognosis; Acknowledgments; References
Chapter
Contents: Introduction; Germination: The 1960s; Gaining a Foothold: The 1970s; Growth: The 1980s; Gems Discovered: The 1990s; Final Observations; Acknowledgments; References