AI: The Tumultuous History of the Search for Artificial Intelligence
Abstract
Nominated for the Los Angeles Times Book Prize in Science and Technology.
AI chronicles the dramatic successes and equally dramatic failures of the half-century-long search for artificial intelligence. Rich with anecdotes about the founders and leaders of the field, AI is also an exhilarating saga of new programs, new hardware, and the slow but steady acquisition of knowledge about how humans think. Will we humans one day have to share our world with entities smarter than ourselves? And can we rely on these creations to make vital decisions for us? Daniel Crevier discusses these questions with the leaders of AI, and they offer some surprising answers.
... The report prompted the UK government to dismantle AI research projects within the UK (Howe 1994, Russell and Norvig 2003). Only a few UK universities, such as Essex, Sussex, and Edinburgh, continued AI research at a small scale until 1983, when AI research was revived at a large scale with £350 million in funding as a response to the ambitious Japanese Fifth Generation Computer Project (Crevier 1993). ...
... After the massive adoption of expert systems, companies discovered that the systems could not explain their advice at a level of abstraction that naïve end users could easily understand (AI News Letter 2005). By the 1990s, XCON was becoming expensive to maintain and lacked adaptability and robustness, with erroneous inputs producing ridiculous outputs (Crevier 1993). As a result, companies steadily abandoned expert systems. ...
... Focusing on key moments in AI history, as shown in Fig. 1 (Buchiokonicha, 2024), we note the "Turing Test" in the 1950s (Turing, 1950), the first appearance of the term AI in 1955 (McCarthy et al., 1955), and later developments in machine learning. Funding hiatuses in the 1970s and 1980s, known as "AI Winters" (Crevier, 1993), were followed by advances in deep learning and convolutional neural networks in the 1990s (LeCun et al., 2015). The proliferation of AI applications such as Apple Siri, Google Assistant, and Amazon Alexa further accelerated AI's influence. ...
This article explores the evolution, impact, and challenges of artificial intelligence (AI) in education. It defines AI and machine learning, tracing their historical development from the Turing Test to modern Large Language Models (LLMs) like ChatGPT and Gemini. The discussion highlights AI's transformative role in education, emphasizing the need for data literacy and AI literacy among educators and students, and introduces the AI Literacy Framework, outlining key competencies necessary for effective AI integration in learning environments. Additionally, the article critiques traditional educational methods in the AI era, advocating for experiential and personalized learning approaches. While AI presents both opportunities and risks, such as ethical concerns and academic integrity issues, the article underscores the irreplaceable value of human intelligence, creativity, and critical thinking.
... Winter, a period of reduced funding and interest in AI research. The term "AI Winter" was coined by analogy with nuclear winter (Crevier, 1993). AI has experienced several cycles of enthusiasm followed by criticism and disappointment. ...
In the midst of the Covid-19 pandemic, every sector has been affected and is continuously searching for alternative solutions. Adopting Covid-appropriate behavior clearly helps prevent the spread of the virus, as our Hon'ble Prime Minister has repeatedly said with the mantra "Do Gaj Ki Doori", or "a distance of two yards". This paper attempts to trace and explain alternative solutions for library users that are consistent with Covid-appropriate behavior. There is a major debate about the continued existence of libraries if users are not allowed inside them to prevent transmission of the virus. Libraries need to adopt new tools and techniques that can meet the demands of their users while following Covid-appropriate behavior. In recent years, artificial intelligence (AI) has attracted the interest of researchers, with applications in every sector. The aim of this paper is to examine the application of AI in libraries, which provides a breakthrough for the information and knowledge sector and a platform to attract more new users. The paper gives a brief overview of AI applications in libraries, their benefits, and the challenges libraries face in implementing them. AI applications in libraries can meet the current need to explore this new domain and may become a new milestone in the development of library services, creating a new type of library: the "Intelligent Library".
... Artificial intelligence (AI) enables smarter farm cultivation, helping farmers make better decisions about crop selection, disease prediction, and pest detection [2]. Recently, farmers have adopted data-driven strategies such as precision agriculture (PA), which uses AI-driven methods to increase crop yields by selecting suitable crops and supporting the nation's growth in ecological farming. ...
In India, agriculture is a major sector that meets the population's food requirements and contributes significantly to the gross domestic product (GDP). Careful crop selection is fundamental to maximizing agricultural yield, thereby elevating the economic vitality of the farming community. Precision agriculture (PA) leverages weather and soil data to inform crop selection strategies. Conventional machine learning (ML) models such as decision trees (DT), support vector classifiers, K-nearest neighbors (KNN), and extreme gradient boosting (XGBoost) have been deployed to predict the best crop, but these models' efficiency is suboptimal in the current circumstances. The enhanced stacked ensemble ML model is a meta-model that addresses these limitations: it harnesses the predictive power of individual ML models, stratified in a layered architecture, to improve prediction accuracy. This model achieved a commendable accuracy of 93.1% on inputs of 12 parameters, including soil parameters such as nitrogen, phosphorus, and potassium and weather parameters such as temperature and rainfall, substantially outperforming the accuracies achieved by the individual contributing models. The efficacy of the proposed meta-model in crop selection based on agronomic parameters signifies a substantial advancement, fortifying the economic resilience of India's agriculture.
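The layered stacking idea can be sketched in a few lines: level-0 base learners make predictions, and a level-1 meta-model learns how to combine them. Everything below (the synthetic two-feature dataset, the single-feature stumps, the logistic meta-learner) is an illustrative stand-in for the paper's actual data and models, not its implementation:

```python
import math
import random

# Toy dataset: (rainfall, nitrogen) -> crop 1 vs crop 0.
# A noisy synthetic rule stands in for real agronomic data.
random.seed(0)
def make_data(n):
    data = []
    for _ in range(n):
        rain, nitro = random.random(), random.random()
        label = 1 if rain + 0.5 * nitro > 0.8 else 0
        data.append(((rain, nitro), label))
    return data

train, test = make_data(200), make_data(100)

# Level-0: two deliberately weak single-feature decision stumps.
# (Real stacking would fit base models on separate folds to avoid leakage;
# these stumps are fixed rules, so no fitting is needed here.)
def stump(feature_idx, threshold):
    return lambda x: 1 if x[feature_idx] > threshold else 0

base_models = [stump(0, 0.6), stump(1, 0.5)]

# Level-1 meta-model: logistic weights over the base predictions,
# fit by plain gradient steps.
weights = [0.0] * (len(base_models) + 1)  # last slot is the bias
for _ in range(300):
    for x, y in train:
        feats = [m(x) for m in base_models] + [1.0]
        z = sum(w * f for w, f in zip(weights, feats))
        p = 1 / (1 + math.exp(-z))
        for i, f in enumerate(feats):
            weights[i] += 0.1 * (y - p) * f

def stacked_predict(x):
    feats = [m(x) for m in base_models] + [1.0]
    return 1 if sum(w * f for w, f in zip(weights, feats)) > 0 else 0

acc = sum(stacked_predict(x) == y for x, y in test) / len(test)
print(f"stacked accuracy: {acc:.2f}")
```

The meta-model effectively learns which base learner to trust in each region of the input space, which is why a stack can outperform each contributing model on its own.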
... His arguments resonated at a moment when the AI community was already facing doubts about its ability to fulfill its loftiest ambitions, and they contributed to the climate of widespread disillusionment. This period, known as the first AI winter, was a time of critical reflection for the community, marking the need to reformulate both the goals and the methods employed in research (Crevier, 1993). ...
This article examines the advantages and disadvantages of integrating Artificial Intelligence (AI) into education, highlighting personalized learning, access to educational resources, instant feedback, and the automation of administrative tasks as key benefits offered by AI. However, ethical concerns are raised about data privacy and the possible displacement of jobs. In this context, the article stresses the importance of addressing these challenges in order to maximize the benefits of AI in education while mitigating its potential risks.
The proliferation of claims about the power of algorithms calls for a reminder of their anchoring in digital and technical ecosystems. The challenge is to grasp, through material arrangements tied to conventions, norms, and practices, how algorithms are caught up in a multitude of interactions that are at once physical and social. The communication errors they can generate, the incompleteness of computational loops, and their concrete malfunctions are so many occasions for the material infrastructure that carries and enables the use of algorithms to come into view. The structural holes revealed through bugs, troubles, and tensions make visible all the cabling and connections required for algorithms to function, forcing series of corrections, arrangements, and even workarounds that cannot be reduced to maintenance routines. Since digital infrastructures are themselves potentially vulnerable, a sociology of algorithms can take as its task the exploration of the critical footholds that actors develop when they must take into account the dense ecology of devices and networks that hold digital worlds together.
This chapter traces the historical trajectory of artificial intelligence, from its symbolic beginnings and early optimism through the disillusionment of the AI Winter, to the current renaissance powered by deep learning and transformer-based models. It examines the factors that enabled AI’s resurgence—advances in computational power, data availability, and algorithmic innovation—and culminates in the rise of generative AI as a paradigm-shifting development. The chapter also critically assesses ongoing challenges, including environmental costs, infrastructure limitations, data scarcity, and equity concerns. By anchoring contemporary breakthroughs in historical perspective, it underscores the need for balanced, responsible innovation that avoids the overpromising that once stalled the field’s progress.
In the evolving landscape of education, artificial intelligence (AI) is poised to revolutionize math classrooms, offering transformative potential that extends beyond traditional instructional methods. These technologies use sophisticated algorithms to analyze individual student performance, identify learning gaps, and tailor educational experiences accordingly. Moreover, AI facilitates the development of interactive and immersive learning environments. Additionally, AI can analyze patterns in student data to predict future learning difficulties and suggest preemptive interventions, thereby supporting proactive rather than reactive teaching strategies. The use of AI in math education also extends to administrative and organizational aspects. AI-driven analytics can help educators track and assess student progress on a granular level, enabling more informed decisions about instructional strategies and curriculum adjustments.
The first systematic study of parallelism in computation by two pioneers in the field.
Reissue of the 1988 Expanded Edition with a new foreword by Léon Bottou
In 1969, ten years after the discovery of the perceptron—which showed that a machine could be taught to perform certain tasks using examples—Marvin Minsky and Seymour Papert published Perceptrons, their analysis of the computational capabilities of perceptrons for specific tasks. As Léon Bottou writes in his foreword to this edition, “Their rigorous work and brilliant technique does not make the perceptron look very good.” Perhaps as a result, research turned away from the perceptron. Then the pendulum swung back, and machine learning became the fastest-growing field in computer science. Minsky and Papert's insistence on its theoretical foundations is newly relevant.
Perceptrons—the first systematic study of parallelism in computation—marked a historic turn in artificial intelligence, returning to the idea that intelligence might emerge from the activity of networks of neuron-like entities. Minsky and Papert provided mathematical analysis that showed the limitations of a class of computing machines that could be considered as models of the brain. Minsky and Papert added a new chapter in 1987 in which they discuss the state of parallel computers, and note a central theoretical challenge: reaching a deeper understanding of how “objects” or “agents” with individuality can emerge in a network. Progress in this area would link connectionism with what the authors have called “society theories of mind.”
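The best-known instance of the limitation Minsky and Papert analyzed is that a single-layer perceptron cannot compute XOR, because XOR is not linearly separable, while it learns linearly separable functions such as AND without difficulty. A minimal sketch using the classic perceptron learning rule (illustrative code, not from the book):

```python
def train_perceptron(data, epochs=25, lr=1.0):
    """Classic perceptron learning rule on 2-input boolean data."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred            # update only on mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(w, b, data):
    return sum((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
               for (x1, x2), t in data) / len(data)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train_perceptron(AND)
print("AND accuracy:", accuracy(w, b, AND))  # linearly separable: reaches 1.0
w, b = train_perceptron(XOR)
print("XOR accuracy:", accuracy(w, b, XOR))  # not separable: never reaches 1.0
```

No amount of extra training helps on XOR: no line in the plane puts (0,1) and (1,0) on one side and (0,0) and (1,1) on the other, which is exactly the kind of representational limit the book's analysis made precise.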
A program called AM is described which models one aspect of elementary mathematics research: developing new concepts under the guidance of a large body of heuristic rules. 'Mathematics' is considered as a type of intelligent behavior, not as a finished product. The local heuristics communicate via an agenda mechanism, a global list of tasks for the system to perform together with reasons why each task is plausible. A single task might direct AM to define a new concept, to explore some facet of an existing concept, or to examine some empirical data for regularities. Repeatedly, the program selects from the agenda the task having the best supporting reasons, and then executes it. Each concept is an active, structured knowledge module. A hundred very incomplete modules are initially provided, each one corresponding to an elementary set-theoretic concept (e.g., union). This provides a definite but immense 'space' which AM begins to explore. AM extends its knowledge base, ultimately rediscovering hundreds of common concepts (e.g., numbers) and theorems (e.g., unique factorization). This approach to plausible inference contains great powers and great limitations.
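The agenda mechanism described above can be sketched as a priority queue of tasks ranked by the combined worth of their supporting reasons. The class, task names, and scoring below are illustrative assumptions, not Lenat's original code:

```python
import heapq

class Agenda:
    """Toy version of AM's agenda: each task carries supporting
    reasons, and tasks are ranked by the reasons' combined worth."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker preserving insertion order

    def add(self, task, reasons):
        # worth = sum of reason scores; heapq is a min-heap, so negate
        worth = sum(score for _, score in reasons)
        heapq.heappush(self._heap, (-worth, self._counter, task, reasons))
        self._counter += 1

    def pop_best(self):
        neg_worth, _, task, reasons = heapq.heappop(self._heap)
        return task, reasons, -neg_worth

agenda = Agenda()
agenda.add("explore facet examples-of Sets", [("user interest", 300)])
agenda.add("define concept union", [("analogy to intersection", 200),
                                    ("fills a gap", 150)])
task, reasons, worth = agenda.pop_best()
print(task, worth)  # best-supported task runs first: define concept union 350
```

In AM itself, executing a task could add new tasks (and new reasons for existing ones) back onto the agenda, so the loop of pop-best-then-execute drives the open-ended exploration the abstract describes.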