Article

Are There Practical Alternatives To Alpha-Beta in Computer Chess?

Authors: Andreas Junghanns

Abstract

The success of the alpha-beta algorithm in game-playing has shown its value for problem solving in artificial intelligence, especially in the domain of two-person zero-sum games with perfect information. However, alternative algorithms for game-tree search exist. This paper describes and assesses these proposed alternatives according to how they try to overcome the limitations of alpha-beta. We conclude that for computer chess no practical alternative exists, but many promising ideas have good potential to change that in the future.

1 Introduction and Motivation

Conventional search methods, such as A* or alpha-beta, are powerful artificial intelligence (AI) techniques. They are appealing because of their algorithmic simplicity and clear separation of search and knowledge. Describing the basic alpha-beta algorithm takes only a few lines of code, and all the domain-dependent knowledge is encoded in a few functions called by a generic search engine. Additionally, the depth-firs...
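As a point of reference for the abstract's claim that the basic algorithm fits in a few lines of code, here is a minimal alpha-beta sketch over a toy game tree; the nested-list tree encoding is our illustration, not anything from the paper.

    # Minimal alpha-beta: a leaf is a number, an internal node is a list of
    # child subtrees; MAX and MIN levels alternate.
    def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
        if not isinstance(node, list):        # leaf: return its static value
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:             # beta cutoff: MIN will avoid this line
                    break
            return value
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:                 # alpha cutoff: MAX has a better option
                break
        return value

    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]   # the classic textbook example
    print(alphabeta(tree))                        # -> 3; the 4 and 6 leaves are pruned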
... To find this move, only part of the game tree has to be inspected; the remainder is 'pruned away'. However, since reduced games are used, it is not obvious that these reduced games should be zero-sum games, and several alternative approaches have been proposed [5]. One of these alternative approaches is called Opponent-Model search [2][3][4]. ...
Article
A special case of two-player games is proposed for symmetric opponent modelling: games with bounded common interest. It constitutes a new type of heuristic search, a further development of Opponent-Model search. The bound on the common interest of both players bounds the possible equilibria and thereby the positive and negative effects of the approach. The bound also allows pruning in an Alpha-Beta-like search algorithm.
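As background, the baseline being generalised here, plain Opponent-Model search, can be sketched in a few lines: MAX maximises its own evaluation, while at MIN nodes the opponent is predicted to choose according to its own (possibly weaker) evaluation. The tree encoding and the toy numbers below are ours, not from the paper.

    def opp_value(node, maximizing):
        # the opponent's minimax value of a subtree, using the opponent's
        # evaluation (second leaf component, from MAX's viewpoint)
        if not isinstance(node, list):
            return node[1]
        vals = [opp_value(c, not maximizing) for c in node]
        return max(vals) if maximizing else min(vals)

    def om_search(node, maximizing=True):
        # Opponent-Model search: MAX maximises its own evaluation (first
        # leaf component); at MIN nodes the opponent is predicted to pick
        # the child that is best according to ITS OWN model, not ours
        if not isinstance(node, list):
            return node[0]
        if maximizing:
            return max(om_search(c, False) for c in node)
        predicted = min(node, key=lambda c: opp_value(c, True))
        return om_search(predicted, True)

    # leaves are (our_eval, opponent_eval) pairs, both from MAX's viewpoint
    tree = [[(5, 4), (3, 9)], [(6, 6), (1, 2)]]
    print(om_search(tree))   # -> 5, where plain minimax on our_eval gives 3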
... For example, chess has an average branching factor of 35 (Russell and Norvig 2010), and a complete game tree could have about 35^100 different positions to evaluate and search, which is impractical for methodologies that require a full search tree to be built. To address this problem by pruning the search tree, alpha-beta (αβ) search was introduced, which seeks to reduce the number of nodes that are evaluated in the search tree when using the minimax algorithm (Hsu 1990; Rich and Knight 1991; Norvig 1992; Junghanns 1998; Luger 2008; Russell and Norvig 2010). αβ search is commonly used for two-player games. ...
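A quick check of the quoted magnitude (the computation is ours):

    import math
    b, d = 35, 100                 # branching factor and game length from the excerpt
    print(d * math.log10(b))       # ~154.4: about 10^154 positions in a full tree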
Article
In recent years, much research attention has been paid to evolving self-learning game players. Fogel's Blondie24 is just one demonstration of a real success in this field and it has inspired many other scientists. In this thesis, artificial neural networks are employed to evolve game playing strategies for the game of checkers by introducing a league structure into the learning phase of a system based on Blondie24. We believe that this helps eliminate some of the randomness in the evolution. The best player obtained is tested against an evolutionary checkers program based on Blondie24. The results obtained are promising. In addition, we introduce an individual and social learning mechanism into the learning phase of the evolutionary checkers system. The best player obtained is tested against an implementation of an evolutionary checkers program, and also against a player that utilises a round-robin tournament. The results are promising. N-tuple systems are also investigated and are used as position value functions for the game of checkers. The n-tuple architecture utilises temporal difference learning. The best player obtained is compared with an implementation of an evolutionary checkers program based on Blondie24, and also against a Blondie24-inspired player that utilises a round-robin tournament. The results are promising. We also address the question of whether piece difference and the look-ahead depth are important factors in the Blondie24 architecture. Our experiments show that piece difference and the look-ahead depth have a significant effect on learning abilities.
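The abstract gives no implementation detail; purely as an illustration of what an n-tuple value function with a TD-style update can look like, here is a minimal sketch. The board encoding, tuple count and learning parameters are all assumptions of ours, not the thesis's.

    import random

    rng = random.Random(0)
    N_SQUARES = 32                 # checkers: 32 playable squares (piece codes assumed)
    TUPLES = [rng.sample(range(N_SQUARES), 3) for _ in range(8)]  # 8 random 3-square tuples
    tables = [{} for _ in TUPLES]  # one lookup table per tuple

    def value(board):
        # each tuple reads its squares; the local pattern indexes its table,
        # and the position value is the sum of the selected entries
        return sum(t.get(tuple(board[i] for i in tup), 0.0)
                   for tup, t in zip(TUPLES, tables))

    def td0_update(board, next_board, reward, alpha=0.01, gamma=1.0):
        # TD(0): nudge V(board) toward reward + gamma * V(next_board)
        delta = reward + gamma * value(next_board) - value(board)
        for tup, t in zip(TUPLES, tables):
            key = tuple(board[i] for i in tup)
            t[key] = t.get(key, 0.0) + alpha * delta

    board = [0] * 32               # empty board
    print(value(board))            # 0.0 before any learning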
... Despite the great success of the weighted-sum approach to evaluation, the method has quite a few weaknesses, and many of the alternative methods discussed in the survey (Junghanns 1998) were designed to address such weaknesses. The main drawback of using a single number for evaluation is that information is lost. ...
Article
Games are a popular test bed for AI research. Many of the search techniques that are currently used in areas such as single-agent search and AI planning have been originally developed for games such as chess. Games share one fundamental problem with many other fields such as AI planning or operations research: how to evaluate and compare complex states? The classical approach is to 'boil down' state evaluation to a single scalar value. However, a single value is often not rich enough to allow meaningful comparisons between states, and to efficiently control a search. In the context of games research, a number of search methods using multicriteria evaluation have been developed in recent years. This paper surveys these approaches, and outlines a possible joint research agenda for the fields of AI planning and game-playing in the domain of multicriteria evaluation.
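A minimal illustration of the multicriteria idea (example values ours): with vector evaluations, states are compared by Pareto dominance, and incomparable states must both be retained rather than collapsed into one scalar.

    def dominates(a, b):
        # a Pareto-dominates b if it is at least as good on every criterion
        # and strictly better on at least one (higher is better)
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    # (material, king safety) scores for three hypothetical positions
    p1, p2, p3 = (3, 0.8), (3, 0.5), (2, 0.9)
    print(dominates(p1, p2))                     # True: p2 can be discarded
    print(dominates(p1, p3), dominates(p3, p1))  # False False: incomparable, keep both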
... The purpose of this paper is to work on move generation, not on the search technique. Therefore, αβ was preferred because the strengths and weaknesses of this algorithm are well tested and known [29]. We think that our idea of move generation can be transferred to other search algorithms such as Monte-Carlo Tree Search [11], [12], [30], [31]. ...
Conference Paper
We consider the move generation in a modern board game where the set of all the possible moves is too large to be generated. The idea is to provide a set of simple abstract tactics that would generate enough combinations to provide strong opposition. The reduced search space is then traversed using the αβ search. We also propose a technique that allows us to remove the stochasticity from the search space. The model was tested in a game called Axis and Allies: a modern, turn-based, perfect information, non-deterministic, strategy board game. We first show that a tree search technique based on a restrained set of moves can beat the actual scripted AI engine - E.Z. FODDER. We can conclude from the experiments that searching deeper generates complex maneuvers which in turn significantly increase the likelihood of victory.
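The abstract does not say how the stochasticity is removed. One standard device, shown here purely as an assumption and not as the paper's actual method, is to collapse each chance node into the expected value of its outcomes, which makes the reduced tree deterministic:

    def expected_value(outcomes):
        # outcomes: list of (probability, value) pairs for one chance node
        assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
        return sum(p * v for p, v in outcomes)

    # a battle with three possible results collapses to one deterministic score
    print(expected_value([(0.2, -1.0), (0.5, 0.3), (0.3, 1.0)]))  # about 0.25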
Article
This document describes the computer go research carried out since 1997 at the University of Paris 5 to obtain the accreditation to supervise research. Computer games have witnessed the improvements performed in AI. However, go programming constitutes a very hard task for two reasons. First, the size of the game tree forbids any global tree search, and second, a good evaluation function is very hard to find. The research presented within the Indigo project aims at producing a go program "as strong as possible" while publishing the results. Between 1997 and 2002, the approach was based on knowledge, and had links with many AI sub-domains. The scientific results are described for each sub-domain: DAI, cognitive science, fuzzy logic, uncertainty representation, combinatorial games, retrograde analysis, tree search, evaluation function and spatial reasoning. In 2002, the Monte Carlo approach was launched, and gave promising results. This led in 2003 to an approach associating Monte Carlo and knowledge, which yielded good results at the computer olympiads held in Graz in 2003. Thus, the Monte Carlo approach will be pursued in the future, and will be associated with Bayesian and reinforcement learning.
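The Monte Carlo approach mentioned at the end has a very small core: evaluate a position by the average outcome of random playouts. A game-agnostic sketch, where the legal_moves, play and result interfaces are our assumptions:

    import random

    def monte_carlo_eval(position, legal_moves, play, result, n_playouts=100, rng=random):
        # average the terminal result of n random playouts from `position`;
        # a playout ends when no legal moves remain
        total = 0.0
        for _ in range(n_playouts):
            pos = position
            while True:
                moves = legal_moves(pos)
                if not moves:              # terminal: score the playout
                    total += result(pos)
                    break
                pos = play(pos, rng.choice(moves))
        return total / n_playouts

The "association with knowledge" the abstract describes then amounts to biasing such playouts and evaluations with go-specific knowledge rather than choosing moves uniformly at random.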
Article
The efficiency of the αβ-algorithm as a minimax search procedure can be attributed to its effective pruning at so-called cut-nodes; ideally only one move is examined there to establish the minimax value. This paper explores the benefits of investing additional search effort at cut-nodes by also expanding some of the remaining moves. Our results show a strong correlation between the number of promising move alternatives at cut-nodes and a new principal variation emerging. Furthermore, a new forward-pruning method is introduced that uses this additional information to ignore potentially futile subtrees. We also provide experimental results with the new pruning method in the domain of chess.
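The abstract leaves the pruning rule implicit; our reading is the multi-cut idea: at an expected cut-node, probe a few moves with a cheap reduced-depth search and prune the node if enough of them already reach beta. A sketch under that reading, with parameter names of our choosing:

    def multi_cut_prune(children, beta, shallow_value, C=10, M=3):
        # examine up to C moves with a cheap, reduced-depth estimate; if at
        # least M of them already reach beta, treat the node as a cut-node
        # and prune it without a full-depth search
        cutoffs = 0
        for child in children[:C]:
            if shallow_value(child) >= beta:
                cutoffs += 1
                if cutoffs >= M:
                    return True
        return False

    # toy usage: shallow estimates for six moves, beta = 50
    print(multi_cut_prune([55, 40, 60, 52, 10, 30], 50, shallow_value=lambda v: v))  # True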
Article
To play a game well a player needs to understand the game. To defeat an opponent, it may be sufficient to understand the opponent's weak spots and to be able to exploit them. In human practice, both elements (knowing the game and knowing the opponent) play an important role. This article focuses on opponent modelling independent of any game. So, the domain of interest is a collection of two-person games, multi-person games, and commercial games. The emphasis is on types and roles of opponent models, such as speculation, tutoring, training, and mimicking characters. Various implementations are given. Suggestions for learning the opponent models are described and their realization is illustrated by opponent models in game-tree search. We then transfer these techniques to commercial games. Here it is crucial for a successful opponent model that the changes of the opponent's reactions over time are adequately dealt with. This is done by dynamic scripting, an improvised online learning technique for games. Our conclusions are (1) that opponent modelling has a wealth of techniques that are waiting for implementation in actual commercial games, but (2) that the games' publishers are reluctant to incorporate these techniques since they have no definitive opinion on the successes of a program that is outclassing human beings in strength and creativity, and (3) that game AI has an entertainment factor that is too multifaceted to grasp in reasonable time.
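Dynamic scripting itself is compact enough to sketch: rules enter a script with probability proportional to their weights, and the weights of the rules that were used are adjusted after each encounter. The rule names and reward sizes below are illustrative assumptions, not values from the article.

    import random

    rules = {"attack": 100, "defend": 100, "heal": 100, "flee": 100}  # rule -> weight

    def build_script(k=2, rng=random):
        # weighted sampling without replacement: heavier rules enter more often
        pool, script = dict(rules), []
        for _ in range(k):
            total = sum(pool.values())
            r, acc = rng.uniform(0, total), 0.0
            for rule, w in pool.items():
                acc += w
                if r <= acc:
                    script.append(rule)
                    del pool[rule]
                    break
        return script

    def update_weights(script, won, delta=30, floor=10):
        # reward rules used in a winning script, penalise them after a loss
        for rule in script:
            rules[rule] = max(floor, rules[rule] + (delta if won else -delta))

    s = build_script()
    update_weights(s, won=True)
    print(s, rules)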
Article
An algorithm based on state space search is introduced for computing the minimax value of game trees. The new algorithm SSS∗ is shown to be more efficient than α-β in the sense that SSS∗ never evaluates a node that α-β can ignore. Moreover, for practical distributions of tip node values, SSS∗ can expect to do strictly better than α-β in terms of average number of nodes explored. In order to be more informed than α-β, SSS∗ sinks paths in parallel across the full breadth of the game tree. The penalty for maintaining these alternate search paths is a large increase in storage requirement relative to α-β. Some execution time data is given which indicates that in some cases the tradeoff of storage for execution time may be favorable to SSS∗.
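A later reformulation by Plaat et al. ("SSS* = α-β + TT") expresses SSS* as a sequence of null-window alpha-beta searches that drive an upper bound down to the minimax value, sidestepping explicit management of the OPEN list. A sketch of that loop, reusing the toy alphabeta from the top of this page and assuming integer leaf values; the transposition table that makes it efficient in practice is omitted:

    def mt_sss(root, big=10**6):        # big: any value above every leaf
        g = big
        while True:
            bound = g
            g = alphabeta(root, bound - 1, bound)   # null-window probe
            if g == bound:              # probe failed high at its own bound:
                return g                # the upper bound is the minimax value
            # otherwise g < bound is a tighter upper bound; probe again

    print(mt_sss([[3, 12, 8], [2, 4, 6], [14, 5, 2]]))   # -> 3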
Article
In this thesis we investigate two issues relating to heuristic search algorithms. The first and most important issue addressed is the technique used to represent knowledge within a search tree. Previous techniques have used either single values or ranges. We demonstrate that probability distributions, using a modified B*-type search algorithm, can successfully be used as a knowledge representation technique. Furthermore we show that the use of probability distributions is superior to the use of either of the previous techniques. The former conclusion is based on experiments that show that the probability-based algorithm is able to solve a wide variety of tactical chess problems. The latter conclusion is based on both analytical examples as well as experimental results. In analyzing search algorithms that use single-valued or range-based state descriptions, several important problems arise. For each problem we show how it is solved by the use of probability-based state descriptions. Experimentally we show that the probability-based algorithm solves over one-third more problems than the comparable range-based algorithm and expands approximately one-tenth the nodes on problems that both algorithms solve.
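The gain from probability-based descriptions is easiest to see in a comparison that no single value can express (the discrete distributions below are our toy data, not the thesis's representation):

    def prob_better(dist_a, dist_b):
        # dist: list of (probability, value) pairs; returns P(A > B)
        return sum(pa * pb
                   for pa, va in dist_a
                   for pb, vb in dist_b
                   if va > vb)

    a = [(0.5, 1.0), (0.5, 4.0)]   # uncertain, but sometimes excellent
    b = [(1.0, 2.0)]               # known and mediocre
    print(prob_better(a, b))       # 0.5: a scalar comparison would hide this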
Article
The M & N procedure is an improvement to the mini-max backing-up procedure widely used in computer programs for game-playing and other purposes. It is based on the principle that it is desirable to have many options when making decisions in the face of uncertainty. The mini-max procedure assigns to a MAX (MIN) node the value of the highest (lowest) valued successor to that node. The M & N procedure assigns to a MAX (MIN) node some function of the M (N) highest (lowest) valued successors. An M & N procedure was written in LISP to play the game of kalah, and it was demonstrated that the M & N procedure is significantly superior to the mini-max procedure. The statistical significance of important conclusions is given. Since information on statistical significance has often been lacking in papers on computer experiments in the artificial intelligence field, these experiments can perhaps serve as a model for future work.
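The abstract specifies the shape of the rule but not the combining function; one simple instance (our choice, not necessarily the one used for kalah) backs up a blend of the best M successors, so that nodes keeping several strong options open outrank nodes relying on a single one:

    def m_and_n_backup(successor_values, m=2, weight=0.25):
        # plain mini-max would back up only vals[0]; M & N also credits the
        # next m-1 best successors (here: penalise the gap to the best one)
        vals = sorted(successor_values, reverse=True)[:m]   # MAX node: M highest
        best, rest = vals[0], vals[1:]
        return best + weight * sum(v - best for v in rest) if rest else best

    print(m_and_n_backup([10, 9, 1]))   # 9.75: a strong second option exists
    print(m_and_n_backup([10, 1, 1]))   # 7.75: one good move only, ranked lower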
Article
In this paper we present a new algorithm for searching trees. The algorithm, which we have named B∗, finds a proof that an arc at the root of a search tree is better than any other. It does this by attempting to find both the best arc at the root and the simplest proof, in best-first fashion. This strategy determines the order of node expansion. Any node that is expanded is assigned two values: an upper (or optimistic) bound and a lower (or pessimistic) bound. During the course of a search, these bounds at a node tend to converge, producing natural termination of the search. As long as all nodal bounds in a sub-tree are valid, B∗ will select the best arc at the root of that sub-tree. We present experimental and analytic evidence that B∗ is much more effective than present methods of searching adversary trees. The B∗ method assigns a greater responsibility for guiding the search to the evaluation functions that compute the bounds than has been done before. In this way knowledge, rather than a set of arbitrary predefined limits, can be used to terminate the search itself. It is interesting to note that the evaluation functions may measure any properties of the domain, thus resulting in selecting the arc that leads to the greatest quantity of whatever is being measured. We conjecture that this method is that used by chess masters in analyzing chess trees.
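B*'s natural termination is a one-line condition: stop when some root arc's pessimistic bound is at least every other arc's optimistic bound. A minimal check of that condition (the interval representation and move names are our assumptions):

    def proven_best(arcs):
        # arcs: dict name -> (pessimistic, optimistic) bound pair.
        # B* terminates when some arc's lower bound dominates all other
        # arcs' upper bounds (the "prove-best" strategy succeeds).
        for name, (lo, _) in arcs.items():
            if all(lo >= hi for other, (_, hi) in arcs.items() if other != name):
                return name
        return None

    print(proven_best({"e4": (0.6, 0.9), "d4": (0.2, 0.5), "c4": (0.1, 0.4)}))  # e4
    print(proven_best({"e4": (0.3, 0.9), "d4": (0.2, 0.5)}))                    # None: keep searching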