Article

Deep Blue

Authors: Murray Campbell, A. Joseph Hoane Jr., Feng-hsiung Hsu

Abstract

Deep Blue is the chess machine that defeated then-reigning World Chess Champion Garry Kasparov in a six-game match in 1997. There were a number of factors that contributed to this success, including:
• a single-chip chess search engine,
• a massively parallel system with multiple levels of parallelism,
• a strong emphasis on search extensions,
• a complex evaluation function, and
• effective use of a Grandmaster game database.
This paper describes the Deep Blue system, and gives some of the rationale that went into the design decisions behind Deep Blue.
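To make the search-related factors above more concrete, the sketch below shows a plain negamax alpha-beta search in which checking moves are searched one ply deeper, a drastically simplified stand-in for the extension machinery the paper describes. It is an illustration only, not Deep Blue's algorithm: it assumes the python-chess library, and the material-only evaluation and single check-extension rule are placeholder choices.

```python
import chess

# Illustrative material values in centipawns (not Deep Blue's evaluation terms).
PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
                chess.ROOK: 500, chess.QUEEN: 900}

def evaluate(board):
    """Crude material balance from the side to move's perspective."""
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * (len(board.pieces(piece_type, board.turn))
                          - len(board.pieces(piece_type, not board.turn)))
    return score

def alpha_beta(board, depth, alpha, beta):
    if depth <= 0 or board.is_game_over():
        return evaluate(board)
    for move in board.legal_moves:
        # Search extension: checking moves are searched one ply deeper, so
        # forcing lines are resolved beyond the nominal search depth.
        extension = 1 if board.gives_check(move) else 0
        board.push(move)
        score = -alpha_beta(board, depth - 1 + extension, -beta, -alpha)
        board.pop()
        if score >= beta:
            return beta                    # fail-hard beta cutoff
        alpha = max(alpha, score)
    return alpha

# Example: a 3-ply search from the initial position.
print(alpha_beta(chess.Board(), 3, -10**9, 10**9))
```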

... Multi-player games, exemplified by multi-player poker, are an important problem in game theory research and practice. With the development of artificial intelligence, many previous studies have tackled two-person complete-information games such as Go and Shogi [1][2][3][4], and great progress has also been made in two-person poker, a typical two-person incomplete-information game [5][6][7][8]. Because it involves more than two players, a multi-player game not only entails a tremendous amount of incomplete information and uncertainty, but also requires players to analyze multiple opponents simultaneously. ...
... To avoid the impact of player labeling on the prediction algorithm and to ensure the scalability of the dataset, the labeling dimension of the game history records only whether an action belongs to the target player. For a player i with style k, the label is 1 if the action is performed by that player and 0 otherwise. A decorated action history for player i is then formed as in (2). ...
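For concreteness, here is one minimal, hypothetical reading of that labeling scheme in Python. The field names and the example hand are invented, and the exact form of Eq. (2) is not reproduced here.

```python
# Each recorded action is tagged with 1 if it was taken by the target player
# and 0 otherwise, as described in the excerpt above (a sketch, not Eq. (2)).

def decorate_history(history, target_player):
    """history: list of (player_id, action) tuples in chronological order."""
    return [(action, 1 if player_id == target_player else 0)
            for player_id, action in history]

# Example: the raw history of one hand, decorated for player 2.
raw = [(1, "raise"), (2, "call"), (3, "fold"), (2, "bet")]
print(decorate_history(raw, target_player=2))
# [('raise', 0), ('call', 1), ('fold', 0), ('bet', 1)]
```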
Article
Full-text available
Multi-player games, such as multi-player poker, are a critical area of research and application in game theory. Currently, there is no theoretical optimal policy for multi-player game problems, including multi-player poker, and there is a lack of policy learning methods capable of adaptively making decisions against diverse opponents. To address this gap, this paper proposes an adaptive multi-player poker policy (AMP3) learning method based on opponent style modeling (OSM) for multi-player Texas Hold’Em poker games. First, we construct a style library and a gaming dataset for poker, designing style features based on traditional and statistical indicators specific to Texas Hold’Em. Next, we propose an OSM algorithm leveraging deep learning to predict style features from opponents’ historical data. Third, a novel reinforcement learning (RL) algorithm, based on the Actor-Critic framework, is introduced to learn adaptive policies by utilizing the opponents’ style features. To the best of our knowledge, this paper is the first to integrate OSM into RL to develop the AMP3 learning algorithm. Experimental results show that the proposed method exhibits strong adaptability in adjusting its policy style when facing players with varying strategies in six-player Texas Hold’Em poker.
Chapter
Artificial intelligence technologies continue to be rapidly utilized and developed in the business world. The benefits it provides are not limited to simplifying business processes; it plays a significant role in strategic management processes by providing real time information in the most complex and critical decision-making processes. Also, AI applications are increasingly being used in many areas of personal life, raising numerous questions. Despite all its advantages, human-AI interaction raises concerns in various sectors and can lead to challenges for managers in some cases. Preventing such negative situations and effectively leveraging AI requires building trust in human-AI interaction and understanding how AI technologies work. Therefore, it is essential to analyse and thoroughly examine human decision-making processes, and how AI technologies operate and reach conclusions. This will help raise awareness in the field and ensure the safe and transparent use of AI technologies in everything from complex decision-making processes to various areas of personal life.
Article
The paper thematizes the contemporary moment of humanity’s self-understanding in the context of advanced technological development, looking at it through the lens of the (un)imaginability of artificial lifeform education. This perspective allows us to thematize the problem of discrimination in both of its meanings – how to distinguish the human form of life from others, in this case artificial forms of life, and how to ensure that this distinction does not serve as a basis for degradation – in order to argue for the suspension of human narcissism and suggest the possibility of their equality in access to education, and not only education. The first part presents the challenges artificial intelligence and especially androids represent to the traditional vision of the human, suggesting a necessity of its renewed examination and rearticulation in the style of critical posthumanism. The middle section differentiates two potential as well as typical reactions to the drama whose protagonists are humans and self-aware human-like robots, both of which arose from a fear of losing a recognizable human identity. It is concluded that, running parallel to changes in thinking humanity and the development of techno-science, there have been changes in approaches to humanity’s artificial Other and the (im)possibility of its education: from a fundamental rejection of the idea that “soulless machines” might attend school, through a softened stance that autonomous automatons might be capable of learning, to allowing for the possibility that they even join humans in schools.
Chapter
This section emphasises the importance of teacher leadership and determines teachers' artificial intelligence literacy level. Teachers should have a mission that enables students in their classrooms to realise themselves, transform education and training processes according to the requirements of the age group in question, inspire and guide their colleagues in terms of their professional development, and strive for the development of the school and society. In other words, the teacher leader should optimise their potential for the advancement of their students, colleagues, school and society by enhancing their knowledge, abilities and experience. The objective is to enhance teachers' awareness of the potential applications of artificial intelligence in their instructional practices and the broader education and training processes, within the context of Education 4.0. Furthermore, it will assist teachers in making more informed decisions regarding the ethical and beneficial use of artificial intelligence.
Chapter
Artificial intelligence (AI) has rapidly evolved from theoretical models to real-world applications that impact every sector of the global economy. While AI promises increased efficiency, productivity, and innovation, it also brings significant disruption to traditional labor markets. The transition from industrial automation to AI-driven decision-making has accelerated job displacement across both blue-collar and white-collar industries. This chapter explores the historical context of job displacement due to AI, the sectors most at risk, and potential strategies for mitigating the economic and social consequences of this transformation.
Chapter
Artificial Intelligence (AI) encompasses a broad range of methodologies aimed at creating systems capable of performing tasks that would typically require human intelligence. An overview of AI reveals a diverse and rapidly evolving field with numerous methodologies and applications. Some of the key methodologies in AI are described in detail in the paper. These methodologies are continuously evolving, and researchers are constantly exploring new techniques and approaches to tackle complex AI problems. The choice of methodology depends on the specific task, available data, computational resources, and desired performance metrics. In addition, selected case studies are described, chosen on specific criteria that indicate Google's broader efforts in AI, which reflect the company's commitment to leveraging technology to improve various aspects of our lives, including online safety and education. This paper aims to contribute to a better understanding of AI by presenting an overview of methodologies, applications and specifically chosen case studies.
Chapter
Full-text available
Multi-agent environments represent a ubiquitous case of application for game-theoretic techniques. The default game model involves the presence of a large number of agents with asymmetric information on the environment that sequentially interact between them. Algorithmic Game Theory starts from particular notions of equilibria, i.e., sets of strategies for the players such that no one has incentive to deviate from her strategy, and studies the development of algorithms for computing or approximating those. The milestones achieved by researchers in the field over the last decades made it clear how, in order to obtain successful deployments of such game-theoretic techniques in complex real-world settings, it is of uttermost importance to leverage the formulation of learning algorithms for finding approximately optimal solutions of the games. Most of the research has been focused on the design of learning algorithms for simple scenarios, e.g., two-player games, but algorithms for more general cases are still far from reaching adequate performances. The goal of this manuscript is to advance the research in this sense. In particular, we investigate different multi-agent scenarios, which we differentiate based on the role of the players holding information on the environment, focusing on the definition of suitable learning algorithms for finding optimal players’ strategies. In the first part of the manuscript, we study cases in which the agents holding information are active, i.e., they can leverage their information to take informed actions in the game. In this context, we tackle two distinct cases: team games, which model cases in which two teams of agents compete one against the other, as well as the broader class of general-sum games, in which we do not make any particular assumption on the players. For team games, we introduce a simple transformation that uses a correlation protocol based on public information for obtaining a compact formulation of the teams’ strategy sets. The transformation yields an equivalent two-player zero-sum game, which can be naturally used to obtain the first no-regret learning-based algorithm for computing equilibria in team games. Then, inspired by previous literature, we lay the ground for the formulation in the context of team games of popular techniques that proved crucial for achieving strong performances in two-player games, i.e., stochastic regret minimization and subgame solving. For general-sum games, instead, we observe that the mainstream approach that is being used, which consists in the use of decentralized and coupled learning dynamics for approximating different types of correlated equilibria, suffers from major drawbacks as it does not offer any guarantee on the type of equilibrium reached. To mitigate this issue, we take the perspective of a mediator issuing action recommendations to the players and design a centralized learning dynamic that guarantees convergence to the set of optimal correlated equilibria in sequential games. The second part of the manuscript is devoted to the study of cases in which the agents holding information are passive, i.e., they cannot directly take actions in the game, but can only report their information (possibly untruthfully) in order to influence the behavior of another uninformed party.
This setting corresponds to the case of information acquisition, in which we take the perspective of the uninformed agent (which we call the principal) that is interested in gathering information from the agents, incentivizing their behavior by means of mechanisms composed of an action policy and/or payment functions. In this context, we separately study the cases in which the principal’s mechanisms are composed exclusively of action policies and of payment schemes and, for both cases, we provide algorithms for learning optimal mechanisms via interactions with the agents.
Article
Full-text available
The article explores how the integration of artificial intelligence (AI) and emotional intelligence (EI) can transform university education. It focuses on analyzing the impact of AI on the personalization of learning and the automation of administrative tasks, as well as examining the role of socio-emotional skills in the comprehensive development of students. Additionally, it proposes strategies for the effective implementation of Education 5.0 in universities, promoting adaptive and emotionally enriching learning. The methodology used is based on a descriptive theoretical approach, with a literature review and case study analysis. The results highlight the importance of developing adaptive educational platforms that integrate AI modules to monitor both academic progress and students' emotional well-being, allowing timely interventions that improve academic performance and overall well-being. The creation of a research center dedicated to exploring and developing new methodologies and technologies that integrate AI and EI is proposed, collaborating with universities and technology companies to experiment with new approaches and evaluate their effectiveness in real educational settings, such as the collaboration between the University of Oxford and Google DeepMind.
Chapter
Artificial intelligence (AI), the science and engineering of artificial intelligent agents, is a multidisciplinary and ubiquitous field that has become increasingly important to society. In fact, some argue that AI is at the heart of a new industrial revolution. However, with the rapid pace of research and development, the impact of AI on our lives is still unfolding. While there are certainly a myriad of positive impacts, there are also concerns that have led many experts to call for greater regulation and control. In this chapter, we address the main paradigms of AI and specifically focus on the perspective of artificial intelligent agents, a common denominator for all the paradigms and one of the most consensually accepted approaches to AI, without forgetting their interaction with human agents. We start by reviewing the history of AI and contextualizing its major paradigms over time, highlighting the key breakthroughs that have brought AI to its present state of grace. From there, we delve into the perspective of artificial intelligent agents and the various technologies that rely on them. We briefly examine issues related to coordination, cooperation, and negotiation between artificial and human agents, with special focus on cooperation between human and AI agents. We emphasize the importance of viewing these agents as part of a broader ecosystem in which both humans and artificial agents “live” in a symbiotic relation. In the light of this ecosystem, where human-AI cooperation flourishes, we examine the human-in-the-loop and machine-in-the-loop concepts; more precisely, we discuss the various ways in which humans are involved in AI, including as sources of data for AI agents, as beneficiaries of AI agents’ output, and in the design of AI agents (e.g., creation of knowledge representation structures, and coding of decision-making and machine learning algorithms). We give an overview of the fundamental areas of machine learning, including reinforcement learning and the related fields of learning by demonstration, apprenticeship learning, imitation learning, and inverse reinforcement learning, which hold great promise for the future of AI. Finally, we discuss the limitations of AI, the challenges AI faces, and the opportunities that lie ahead, with all their positive and negative impacts.
Article
Full-text available
Bridge, as a strategic card game, has a bidding phase that critically influences the final outcome. However, optimizing the policy during this phase is extremely challenging owing to incomplete information. Models often converge to local optima, which can severely limit their overall performance, particularly in the context of bridge bidding, where such convergence can result in suboptimal bidding decisions. To address this, we propose Diverse PPO (Proximal policy optimization) Ensembling, a method that improves policy updates by incorporating diverse constraints, including different clipping functions and regularized entropy, to promote exploration and mitigate the risk of becoming trapped in local optima. These improvements are further integrated through ensemble learning to create a robust and efficient strategic framework. Additionally, techniques such as policy initialization accelerate the self-play process, whereas search methods utilize pruning to filter out irrelevant situations, further enhancing the model performance. Our model outperformed WBridge5 by 0.73 IMPs (International Match Points) during the deep reinforcement learning phase, further improving to a 0.99 IMP advantage after incorporating search methods.
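As a rough illustration of the kind of constraint being varied, the snippet below implements a standard PPO clipped surrogate loss with an entropy bonus in PyTorch. It is a generic sketch, not the authors' Diverse PPO Ensembling code; the hyperparameter values and tensor layout are assumptions.

```python
import torch

def ppo_loss(log_probs_new, log_probs_old, advantages, entropy,
             clip_eps=0.2, entropy_coef=0.01):
    """Clipped PPO surrogate with an entropy bonus (all inputs are 1-D tensors)."""
    ratio = torch.exp(log_probs_new - log_probs_old)        # pi_new / pi_old per action
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()     # pessimistic surrogate
    return policy_loss - entropy_coef * entropy.mean()      # entropy term promotes exploration

# An ensemble member could vary clip_eps or entropy_coef, or swap torch.clamp
# for a different clipping rule, to obtain diverse policies for later ensembling.
```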
Article
In this paper, I introduce a novel benchmark in games, super-Nash performance, and a solution concept, optimin, whereby players maximize their minimal payoff under unilateral profitable deviations by other players. Optimin achieves super-Nash performance in that, for every Nash equilibrium, there exists an optimin where each player not only receives but also guarantees super-Nash payoffs under unilateral profitable deviations by others. Furthermore, optimin generalizes Nash equilibrium in n-person constant-sum games and coincides with it when n = 2. Finally, optimin is consistent with the direction of non-Nash deviations in games in which cooperation has been extensively studied.
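Since the abstract states the idea only verbally, one possible formalization is sketched below; the notation is ours, not the paper's, and should be read as an interpretation of "maximize the minimal payoff under unilateral profitable deviations".

```latex
% Illustrative formalization of optimin (our notation, not the paper's).
% For a profile s = (s_i, s_{-i}), let D_j(s) denote the set of unilateral
% deviations by player j (j != i) that strictly improve j's payoff. Each
% player maximizes her worst-case payoff over such profitable deviations:
\[
  s_i^{*} \in \arg\max_{s_i} \; \min_{j \neq i} \; \min_{s_j' \in D_j(s)}
  u_i\bigl(s_j',\, s_{-j}\bigr),
\]
% with the convention that the inner minimum over an empty D_j(s) is u_i(s)
% itself. A profile is an optimin if every player's strategy solves this
% problem simultaneously.
```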
Chapter
With the accelerating development of technology, new potentials as well as new challenges crop up that raise questions of balancing scientific temper with human rights. The proliferation of Artificial Intelligence (AI) technologies raises issues of stereotyping, bias, and the legalities of the use and development of such technologies. It is no longer a thing of the future, whether utopian or dystopian, but a reality staring us in the face. The aim of this paper is to explore the various violations of human rights that can be perpetrated through the use of AI and the complex question of who the perpetrator is: the AI, the developer, or the user. The study reveals complexities and grey areas that, if not addressed, pose a peril to human rights.
Article
Full-text available
Benchmarks are seen as the cornerstone for measuring technical progress in artificial intelligence (AI) research and have been developed for a variety of tasks ranging from question answering to emotion recognition. An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the ‘ethicality’ of an AI system. In this paper, drawing upon research in moral philosophy and metaethics, we argue that it is impossible to develop such a benchmark. As such, alternative mechanisms are necessary for evaluating whether an AI system is ‘ethical’. This is especially pressing in light of the prevalence of applied, industrial AI research. We argue that it makes more sense to talk about ‘values’ (and ‘value alignment’) rather than ‘ethics’ when considering the possible actions of present and future AI systems. We further highlight that, because values are unambiguously relative, focusing on values forces us to consider explicitly what the values are and whose values they are. Shifting the emphasis from ethics to values therefore gives rise to several new ways of understanding how researchers might advance research programmes for robustly safe or beneficial AI.
Article
Full-text available
Latin America struggles to reach advanced Artificial Intelligence (AI) development due to limited resources and challenges in talent retention. However, different public indexes indicate the region’s potential to become an AI hub. Our analysis of research databases indicates a growing trend towards NeuroAI — an interdisciplinary field emerging from neuroscience and artificial intelligence — suggesting that this area of study may be a driving force behind the potential for Latin America. This paper explores how NeuroAI could enable Latin America to bridge the development gap with global AI leaders, leveraging its potential to overcome resource constraints. We analyze strategies for integrating NeuroAI advancements into the private sector, which could provide essential financial support for high-impact scientific research and foster regional innovation.
Article
Full-text available
Understanding how individuals learn and progress within complex social networks is crucial for various fields, from education to online gaming platforms. In this study, we investigate the impact of an individual’s topological position within a network of game interactions on their learning process. Leveraging a novel implementation of the state-of-the-art TrueSkill Through Time model, we accurately estimate the initial abilities of players in an online Go gaming platform. Utilizing dynamic graph analysis techniques, we analyze the centrality measures of individuals, including popularity, closeness, and intermediacy, to characterize their network positions. Our results reveal distinct learning patterns influenced by network centrality, particularly among individuals with intermediate initial abilities. Notably, we find significant differences in learning rates between players with low and high centrality, underscoring the role of network structure in shaping individual learning trajectories. These findings provide valuable insights into the interplay between social network dynamics and individual skill acquisition, with implications for optimizing learning environments and enhancing performance in complex social systems.
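To make the centrality measures concrete, the sketch below computes them on a toy interaction graph with networkx, reading "popularity", "closeness", and "intermediacy" as degree, closeness, and betweenness centrality. That mapping, and the edge list, are our assumptions for illustration rather than details taken from the paper.

```python
import networkx as nx

# One node per player, one edge per recorded pairing (invented example data).
games = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"), ("carol", "dave")]
G = nx.Graph(games)

popularity = nx.degree_centrality(G)          # how many distinct opponents a player has
closeness = nx.closeness_centrality(G)        # how near a player is to everyone else
intermediacy = nx.betweenness_centrality(G)   # how often a player bridges other players

print(popularity["carol"], closeness["carol"], intermediacy["carol"])
```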
Article
Full-text available
Between 2013 and 2023, the scientific literature on artificial intelligence grew at an exponential rate. The subject appears to be gaining even greater popularity across a wide range of fields. Although interest in the area is now at its peak, there is a scarcity of comprehensive bibliometric study. As a result, a new perspective on the constantly changing accomplishments, contributions, and research trends is necessary. This study employed Scopus database data to investigate 11,897 AI-related papers produced between 2013 and 2023, as well as VOSviewer software to depict research trends utilizing co-citation and citation networks. The analysis found a consistent increase in AI-related papers from 2013 to 2023, with substantial increases in areas such as generative AI, explainable AI, reinforcement learning, and optimization approaches, showing their growing relevance in improving technology and tackling challenging real-world challenges. The visualized research patterns also highlight the major institutions, significant partnerships, and crucial authors in AI research, providing a more complete picture of worldwide research activities. The categorization in this study divides artificial intelligence research into five major areas: algorithmic methods, advanced machine learning techniques, data-driven automation technologies, and advanced AI methodology. Academia and industry must acknowledge these findings, which highlight the importance of ongoing research, collaboration, and advancement in artificial intelligence applications across sectors such as healthcare, finance, and manufacturing in order to foster innovation, improve decision-making, and address ethical concerns. It also provides significant insights for policymakers, educators, and business leaders, guiding the formulation of AI policies that foster responsible innovation, improve regulatory frameworks, and ensure fair access to AI developments and proposes rules that promote the ethical use of AI in content production, emphasizing transparency and equity in AI decision-making processes, thereby ensuring technical advancement and societal welfare.
Article
Full-text available
For a long time, researchers have sought artificial intelligence (AI) that matches or exceeds human intelligence. AI agents, which are artificial entities capable of sensing the environment, making decisions, and taking actions, are seen as a means to achieve this goal. Extensive efforts have been made to develop AI agents, with a primary focus on refining algorithms or training strategies to enhance specific skills or particular task performance. The field, however, lacks a sufficiently general and powerful model to serve as a foundation for building general agents adaptable to diverse scenarios. With their versatile capabilities, large language models (LLMs) pave a promising path for the development of general AI agents, and substantial progress has been made in the realm of LLM-based agents. In this article, we conduct a comprehensive survey on LLM-based agents, covering their construction frameworks, application scenarios, and the exploration of societies built upon LLM-based agents. We also conclude some potential future directions and open problems in this flourishing field.
Article
In the past decade, despite significant advancements in Artificial Intelligence (AI) and deep learning technologies, they still fall short of fully replicating the complex functions of the human brain. This highlights the importance of researching human-machine collaborative systems. This study introduces a statistical framework capable of finely modeling integrated performance, breaking it down into the individual performance term and the diversity term, thereby enhancing interpretability and estimation accuracy. Extensive multi-granularity experiments were conducted using this framework on various image classification datasets, revealing the differences between humans and machines in classification tasks from macro to micro levels. This difference is key to improving human-machine collaborative performance, as it allows for complementary strengths. The study found that Human-Machine collaboration (HM) often outperforms individual human (H) or machine (M) performances, but not always. The superiority of performance depends on the interplay between the individual performance term and the diversity term. To further enhance the performance of human-machine collaboration, a novel Human-Adapter-Machine (HAM) model is introduced. Specifically, HAM can adaptively adjust decision weights to enhance the complementarity among individuals. Theoretical analysis and experimental results both demonstrate that HAM outperforms the traditional HM strategy and the individual agent (H or M).
Article
Full-text available
This paper examines the democratic ethics of artificially intelligent polls. Driven by machine learning, AI electoral polls have the potential to generate predictions with an unprecedented level of granularity. We argue that their predictive power is potentially desirable for electoral democracy. We do so by critically engaging with four objections: (1) the privacy objection, which focuses on the potential harm of the collection, storage, and publication of granular data about voting preferences; (2) the autonomy objection, which argues that polls are an obstacle to independently formed judgments; (3) the tactical voting objection, which argues that voting strategically on the basis of polls is troublesome; and finally (4) the manipulation objection, according to which malicious actors could systematically bias predictions to alter voting behaviours.
Article
Full-text available
The rapid development of artificial intelligence (AI) is significantly transforming public administration, improving efficiency and the quality of public services, while at the same time confronting the civil service with serious data-protection and ethical challenges. The introduction of Hungary's Artificial Intelligence Strategy in 2020 targets the modernization of public administration, emphasizing the importance of data-driven decision-making and AI-based systems. Integrating AI, however, requires that civil servants be adequately prepared and aware. What will the civil service of the future look like? Artificial, or even more human?
Article
With digitalization and advances in internet technology, it has become possible to deliver public services electronically. Digital transformation has long been under way in taxation procedures as well, and adapting advances in artificial intelligence technology to tax administration systems in particular has made it possible to carry out transactions faster, more accurately, and at lower cost. The use of artificial intelligence in tax audits in particular can substantially reduce tax losses and informality. The study first outlines the brief history, definition, and functional types of the concept of artificial intelligence. It then explains, in general terms, the audit procedures found in Turkish tax law, namely verification (yoklama), examination (inceleme), search (arama), and information gathering (bilgi toplama), and discusses digital auditing and the AI-supported taxation procedures already in use. In this context, it is observed that AI-supported tax audit systems capable of detailed risk analysis, which help identify taxpayers with the potential to evade tax, are applied in Türkiye and in various other countries. The final part of the study attempts to offer a perspective on fully autonomous tax audits, once artificial intelligence technology acquires machine learning and deep learning capabilities. In this regard, it is noted that building the big tax data that would feed autonomous tax audits requires ensuring data security and privacy, improving data quality, and resolving the complexity and volatility of tax legislation. In the final analysis, relating autonomous tax audits to robot judges that adjudicate autonomously, it is assessed that future tax disputes may arise from differences of interpretation between artificial intelligence programs.
Chapter
As the name suggests, artificial intelligence (AI) is a discipline on the border between science and engineering, aiming to develop machines capable of performing actions that typically require human intelligence. The discipline saw periods of light and darkness as technology evolved, until one day, in 1997, a machine defeated the world chess champion, revealing AI's true potential. Chip architectures came to resemble more and more the connections between nerve cells in a human brain, allowing machines to learn from their own mistakes through so-called machine learning and deep learning. These advances changed how humans approached many fields, the first of which was medical healthcare.
Article
Full-text available
Recently, Artificial Intelligence (AI) technology use has been rising in sports to reach decisions of various complexity. At a relatively low complexity level, for example, major tennis tournaments replaced human line judges with Hawk-Eye Live technology to reduce staff during the COVID-19 pandemic. AI is now ready to move beyond such mundane tasks, however. A case in point and a perfect application ground is chess. To reduce the growing incidence of ties, many elite tournaments have resorted to fast chess tiebreakers. However, these tiebreakers significantly reduce the quality of games. To address this issue, we propose a novel AI-driven method for an objective tiebreaking mechanism. This method evaluates the quality of players’ moves by comparing them to the optimal moves suggested by powerful chess engines. If there is a tie, the player with the higher quality measure wins the tiebreak. This approach not only enhances the fairness and integrity of the competition but also maintains the game’s high standards. To show the effectiveness of our method, we apply it to a dataset comprising approximately 25,000 grandmaster moves from World Chess Championship matches spanning from 1910 to 2018, using Stockfish 16, a leading chess AI, for analysis.
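A minimal sketch of such a quality measure is shown below, using the python-chess bindings and a locally installed Stockfish binary (assumed to be on PATH). It computes an average centipawn loss per move, which is one plausible reading of "comparing moves to the optimal moves suggested by powerful chess engines", not necessarily the exact metric the authors use.

```python
import chess
import chess.engine

def average_centipawn_loss(moves_san, engine_path="stockfish", depth=18):
    """moves_san: moves in standard algebraic notation from the starting position."""
    board = chess.Board()
    losses = []
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        limit = chess.engine.Limit(depth=depth)
        for san in moves_san:
            # Engine evaluation before the move, from the side to move's viewpoint.
            best = engine.analyse(board, limit)["score"].relative.score(mate_score=100000)
            board.push_san(san)
            # After the move it is the opponent's turn, so negate the relative score.
            achieved = -engine.analyse(board, limit)["score"].relative.score(mate_score=100000)
            losses.append(max(0, best - achieved))   # loss vs. the engine's best line
    return sum(losses) / len(losses) if losses else 0.0
```

In a tiebreak setting, the player whose games show the lower average loss (i.e., the higher move quality) would win the tie under this reading.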
Article
Full-text available
Artificial intelligence (AI) refers to the theory and development of computer systems capable of performing tasks that normally require human intelligence. This revolutionary technology holds immense potential in multiple domains, including the development of educational curricula. This article has explored the impact of AI on curriculum development in global higher education institutions, analyzing data from 2,000 faculty and student respondents, across five continents: North America, Europe, Asia, Africa, and Latin America, using a logistic regression model. The study found that frequent use of AI, the extent of the faculty knowledge, institution support to faculty, and the future expectation about AI are promoting curriculum development. Furthermore, the effectiveness of AI-driven tools in personalizing learning experiences, enhancing student engagement, identifying and addressing individual needs, providing real-time feedback, improving the quality of teaching and learning materials, and promoting critical thinking and problem-solving skills is driving curriculum development. Moreover, the challenges limiting AI integration in curriculum development include its ability to personalize learning, adapt content based on student needs, ethical concerns, and hesitations in recommending AI use to other educational institutions. Besides, with respect to cultural and educational contexts in AI-powered tools, the integration of AI in global higher education curriculum development is hindered by its inability to align with and navigate the complexities of these contexts. In addition, educators’ and leaders’ perceptions and attitudes also influence AI’s role in curriculum development. Factors such as AI’s ability to create personalized learning experiences, familiarity with current AI tools, its effectiveness in identifying student learning gaps, willingness to undergo training and professional development, and its capacity to address biases in curriculum content stimulate development yet also present limitations. Importantly, our findings indicate that, while AI has enormous potential to revolutionize curriculum development, strategic approaches and policies are required to overcome the identified issues and improve AI integration in varied educational settings.
Article
Full-text available
The success of the alpha-beta algorithm in game-playing has shown its value for problem solving in artificial intelligence, especially in the domain of two-person zero-sum games with perfect information. However, there exist different algorithms for game-tree search. This paper describes and assesses those proposed alternatives according to how they try to overcome the limitations of alpha-beta. We conclude that for computer chess no practical alternative exists, but many promising ideas have good potential to change that in the future. Conventional search methods, such as A* or alpha-beta, are powerful artificial intelligence (AI) techniques. They are appealing because of their algorithmic simplicity and clear separation of search and knowledge. Describing the basic alpha-beta algorithm takes only a few lines of code, and all the domain-dependent knowledge is encoded in a few functions called by a generic search engine.
Article
Deep Thought is the first chess machine to achieve Grandmaster performances against human opposition. In November 1988, the machine tied for first in the Software Toolworks Championship with Grandmaster Anthony Miles, defeating four-time World Championship candidate Grandmaster Bent Larsen along the way. Since then, the machine has been successful against other Grandmasters.
Chapter
The computer chess program Belle is currently the World Computer Chess Champion and the North American Computer Chess Champion. In human play, Belle has consistently obtained master performance ratings. This paper describes the special-purpose hardware that gives Belle its advantage: speed.
Article
The success of the alpha-beta algorithm in game playing has shown its value for problem solving in artificial intelligence, especially in the domain of two-person zero-sum games with perfect information. Still, there are different algorithms for game-tree search which challenge the value of the alpha-beta algorithm. This paper describes and assesses the alternatives proposed according to how they try to overcome the limitations of alpha-beta. We conclude that for computer chess no practical alternative exists, but many promising ideas have good potential to change that in the future.
Conference Paper
The supervised learning methodology of "comparison training" (Tesauro 1989a) on a database of expert preferences is extended to search depths beyond 1-ply, and applied to the problem of training the weights in a linear evaluation function for the game of chess. An initial set of experiments was performed using SCP, a public-domain chess program. Training based on simple 1-ply searches was found to be ineffective, but for 1-ply plus quiescence expansion, high-quality solutions were found that outperform SCP's hand-tuned weights. The trained weights had performance that scaled well with search depth, and consistent improvement over the hand-tuned solution was found even for test depths much greater than the training search depth.A discretized version of the algorithm was also developed and used to tune a subset of the weights in DEEP BLUE, having to do primarily with king safety evaluation. Training was based on 4-ply search (plus quiescence), and good test-set generalization was found out to 7-ply. During the 1997 rematch with Garry Kasparov, the tuning of the king-safety weights made a critical difference in one important position in game 2, and in the program's general understanding and handling of game 6.
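The toy sketch below illustrates the flavor of comparison training on expert preferences: a perceptron-style update nudges linear evaluation weights whenever a position reached by an alternative move scores at least as well as the position the expert actually chose. It is a simplified stand-in for the paper's procedure; feature vectors are assumed to come from some hypothetical feature extractor, and the learning rate and epoch count are arbitrary.

```python
import numpy as np

def comparison_training(examples, n_features, epochs=10, lr=0.01):
    """examples: list of (chosen_features, [alternative_features, ...]) pairs,
    where each entry is a length-n_features numpy vector for the position
    reached by the expert's move or by one of the legal alternatives."""
    w = np.zeros(n_features)
    for _ in range(epochs):
        for chosen, alternatives in examples:
            for alt in alternatives:
                # If an alternative scores at least as well as the expert's
                # choice, move the weights toward the chosen position's
                # features and away from the alternative's.
                if w @ alt >= w @ chosen:
                    w += lr * (chosen - alt)
    return w
```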
Article
The strength of the current generation of chess programs is strongly correlated with their search speed. Thus, any improvement in the efficiency of the tree search usually results in a stronger program. Here we describe a technique, called the null-move heuristic, that can be effectively used to improve search speed with only a small chance of error. Although the technique has been previously used in specialized programs for chess tactics, this chapter describes an implementation suitable for general chess programs.
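The sketch below shows where the null-move test sits inside a negamax alpha-beta search, using python-chess for move generation. The depth reduction R = 2 and the stubbed-out static evaluation are conventional illustrative choices, not values taken from the chapter.

```python
import chess

R = 2  # null-move depth reduction (a common illustrative choice)

def evaluate(board):
    # Any static evaluation from the side to move's perspective would do here,
    # e.g. a material count like the one sketched earlier on this page.
    return 0

def search(board, depth, alpha, beta):
    if depth <= 0 or board.is_game_over():
        return evaluate(board)
    # Null-move heuristic: let the side to move "pass" and search the result
    # to a reduced depth; skipped when in check, where passing is unsound.
    if depth > R and not board.is_check():
        board.push(chess.Move.null())
        score = -search(board, depth - 1 - R, -beta, -beta + 1)
        board.pop()
        if score >= beta:
            return beta        # even after conceding a free move we stay above beta: prune
    for move in board.legal_moves:
        board.push(move)
        score = -search(board, depth - 1, -beta, -alpha)
        board.pop()
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```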
Conference Paper
CHESS 4.5 is the latest version of the Northwestern University chess program. CHESS 4.5 and its predecessors have won the U.S. Computer Chess Championships in 1970, 1971, 1972, 1973, and 1975, placing second in the 1974 U.S. Tourney and also in the first World tournament held the same year. This chapter will describe the structure of the program, focusing on the practical considerations that motivated the implementation of its various features. An understanding of not only what CHESS 4.5 is, but also why it turned out that way, is necessary if one is to appreciate its role in the present and future development of chess programming.
Article
We introduce a variant of alpha-beta search in which each node is associated with two depths rather than one. The purpose of alpha-beta search is to find strategies for each player that together establish a value for the root position. A max strategy establishes a lower bound and the min strategy establishes an upper bound. It has long been observed that forced moves should be searched more deeply. Here we make the observation that in the max strategy we are only concerned with the forcedness of max moves and in the min strategy we are only concerned with the forcedness of min moves. This leads to two measures of depth - one for each strategy - and to a two-depth variant of alpha-beta called ABC search. The two-depth approach can be formally derived from conspiracy theory and the structure of the ABC procedure is justified by two theorems relating ABC search and conspiracy numbers.
Article
Brute-force alpha-beta search of games trees has proven relatively effective in numerous domains. In order to further improve performance, many brute-force game-playing programs have used the technique of selective deepening, searching more deeply on lines of play identified as important. Typically these extensions are based on static, domain-dependent knowledge. This paper describes a modification of brute-force search, singular extensions, that allows extensions to be identified in a dynamic, domain-independent, low-overhead manner. Singular extensions, when implemented in a chess-playing program, resulted in significant performance improvements.
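As a rough sketch of the idea, the function below tests whether a candidate move is "singular": every alternative, searched to a reduced depth, falls short of the candidate by some margin, in which case the main search would extend the candidate by one ply. The margin, the depth reduction, and the injected search() routine (any fixed-depth negamax, such as the sketches above) are illustrative assumptions, not the paper's exact criterion.

```python
import chess

INF = 10**9
MARGIN = 50   # centipawns; an illustrative threshold, not the paper's value

def is_singular(board, candidate, depth, search):
    """True if `candidate` beats every alternative by MARGIN in a reduced-depth search.
    `search(board, depth, alpha, beta)` is any negamax alpha-beta routine."""
    reduced = max(1, depth // 2)                   # cheap verification search
    board.push(candidate)
    cand = -search(board, reduced, -INF, INF)
    board.pop()
    for move in board.legal_moves:
        if move == candidate:
            continue
        board.push(move)
        alt = -search(board, reduced, -INF, INF)
        board.pop()
        if alt >= cand - MARGIN:
            return False                           # some alternative comes close
    return True                                    # candidate stands alone: extend it by one ply
```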
Article
The alpha-beta technique for searching game trees is analyzed, in an attempt to provide some insight into its behavior. The first portion of this paper is an expository presentation of the method together with a proof of its correctness and a historical discussion. The alpha-beta procedure is shown to be optimal in a certain sense, and bounds are obtained for its running time with various kinds of random data.
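For reference, the best-case result from this line of analysis is worth stating explicitly; this is the standard bound for alpha-beta under perfect move ordering, quoted here as a worked illustration rather than verbatim from the article.

```latex
% Best case (perfect move ordering): with uniform branching factor b and
% search depth d, alpha-beta examines only
\[
  b^{\lceil d/2 \rceil} + b^{\lfloor d/2 \rfloor} - 1
\]
% leaf positions, compared with b^d for full minimax. For example, b = 40 and
% d = 4 give 40^2 + 40^2 - 1 = 3199 leaves instead of 40^4 = 2{,}560{,}000.
```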
Article
A vast database of human experience can be used to direct a search.
Article
This thesis concerns itself with progress that has been made in the development of a better model of computer chess. The author considers the fact that chess programs have made almost no gain in strength, as measured on the human scale, in the period 1968 - 1973, as indicative that the popular model of computer chess is near the limits of its exploitability. Some indication of why this could be so is provided in a chapter which discusses some very basic flaws in the current popular model of computer chess. Most serious of these is the Horizon Effect which is shown to cause arbitrary errors in the performance of any program employing a maximum depth in conjunction with a quiescence procedure. (Modified author abstract)
Article
The IBM Deep Blue supercomputer that defeated World Chess Champion Garry Kasparov in 1997 employed 480 custom chess chips. This article describes the design philosophy, general architecture, and performance of the chess chips, which provided most of Deep Blue's computational power.
Article
A single-chip chess move generator has been developed as the first chip of a two-chip set that forms a special-purpose chess computer. The resultant special-purpose chess computer is expected to be an order of magnitude faster than chess programs now running on the fastest supercomputers. The move generator itself has a peak throughput of two-million moves/s. Implemented in 3-μm p-well double-metal CMOS technology, the chip measures 6.8×6.9 mm, contains 36000 transistors, and dissipates less than 0.5 W. The chip is comprised mainly of an 8×8 array of combinatorial circuits, each with less than 550 transistors.
Article
The taxonomy will be broken into two major categories: alpha-beta-based algorithms, and algorithms based on other search paradigms (SSS*, ER, and theoretical methods). For the former category, a table is given to isolate the fundamental differences between the algorithms. The table is divided into two parts: the first part contains characteristics of the alpha-beta-based algorithms, while the second part contains details about an implementation of each algorithm. Section 2 describes the various columns given in the table, and then gives some brief details on the algorithms contained therein. The algorithms based on other search paradigms are given in Section 3. Due to the varied nature of the methods, a brief description is given for each of the algorithms and no attempt has been made to categorize them to the same extent as the alpha-beta-based algorithms. The implementation details have not been organized into a table, since some of the algorithms given are of a theoretical nature and have not been implemented or simulated. The final section deals with some conclusions that can be drawn from the taxonomy.