Article

A test for comparison of human and computer performance in chess

To read the full-text of this research, you can request a copy directly from the authors.

No full-text available

... Cost of leaf nodes evaluated by f processors ... − 1 (15) Thus for an alpha-beta tree, ... (16) where E_{d,f} is given by equation (10) or (13). For our results in Tables 2, 3 ... which in turn becomes ...
... Thus from (16) and since E_{d,f} ...
... The test results, as based on Parabelle's performance on a standard sequence of 24 chess positions [16], indicate that PV splitting has low search overhead. For comparison we present the data from a series of five-ply searches, with and without transposition tables, performed by a system consisting of one to four processors. ...
Article
Full-text available
Tree searching is a fundamental and computationally intensive problem in artificial intelligence. Parallelization of tree-searching algorithms is one method of improving the speed of these algorithms. However, a high-performance parallel two-player game-tree search algorithm has eluded researchers. Most parallel game-tree search approaches follow synchronous methods, where the work is concentrated within a specific part of the tree, or a given search depth. This thesis shows that asynchronous game-tree search algorithms can be as efficient as synchronous methods in determining the minimax value. A taxonomy of previous work in parallel game-tree search is presented. A theoretical model is developed for comparing the efficiency of synchronous and asynchronous search algorithms under realistic assumptions. APHID, a portable parallel game-tree search library, has been built based on the asynchronous parallel game-tree search algorithm proposed in the comparison. The library is easy to imple...
... The evaluation function is ported from CilkChess, a program developed at the Laboratory of Computer Science, MIT. We used the Bratko-Kopec test set [22] (shown in Appendix B.4) to test our chess implementation. The nominal search depth varies from seven to thirteen plies; quiescence search extends this depth up to twelve plies. ...
... The pattern databases are read into main memory before the search begins. With respect to the other games, we halved the number of transposition table entries (2^21 instead of 2^22) to leave room for the pattern databases. Partitioned performs transposition table lookups for all nodes in the tree, and Replicated updates all interior nodes. ...
... The authors hypothesized that verbal knowledge is more highly correlated with positional evaluation than with tactical judgment, which is consistent with the findings of Kopec and Bratko (1982), who suggested that deep search with little evaluation is enough for tactics. ...
Thesis
Full-text available
Nine (9) expert chess players with ELO ratings ranging from 1800 to 2600 were asked to verbally report their solution, evaluation, and judgment on chess board puzzles intended to measure tactical ability, positional judgment, and endgame knowledge, respectively. The goal of this study was to identify differences in chess-specific verbal knowledge among expert chess players of different skill levels through a think-aloud protocol. The verbal reports gathered were subjected to content analysis.
... Our distributed system achieves speedups of 15.77 running on 16 processors and 25.08 running on 32 processors performing a 7-ply search on the positions of the Bratko-Kopec experiment [BK82]. Very recent experiments show that 64 processors can achieve a speedup of 34 for a 7-ply search running at tournament speed. ...
Article
We show how to implement the αβ-enhancements like iterative deepening, transposition tables, history tables, etc., used in sequential chess programs, in a distributed system such that the distributed algorithm benefits from these heuristics just as the sequential one does. Moreover, the methods we describe are suitable for very large distributed systems. We implemented these αβ-enhancements in the distributed chess program ZUGZWANG. For a distributed system of 64 processors we obtain a speedup between 28 and 34 running at tournament speed. The basis for this chess program is a distributed αβ-algorithm with very good load-balancing properties, combined with the use of a distributed transposition table that grows with the size of the distributed system. 1. INTRODUCTION In this paper we describe a fully distributed chess program ZUGZWANG running on a network of Transputers. We present experimental results that show the efficiency of our implementation. The good behavior of the sequential αβ...
Chapter
Computer chess was originally purposed for insight into the human mind. It became a quest to get the most power out of computer hardware and software. The goal was specialized but the advances spanned multiple areas, from heuristic search to massive parallelism. Success was measured not by standard software or hardware benchmarks, nor theoretical aims like improving the exponents of algorithms, but by victory over the best human players. To gear up for limited human challenge opportunities, designers of chess machines needed to forecast their skill on the human rating scale. Our thesis is that this challenge led to ways of rating computers on the whole and also rating the effectiveness of our field at solving hard problems. We describe rating systems, the workings of chess programs, advances from computer science, the history of some prominent machines and programs, and ways of rating them.
Chapter
Human intelligence is reflected in part in the ability to make decisions while taking future developments into account. Computer science has therefore studied, for many years, algorithms that enable such look-ahead into the future and then, based on estimates of the outcomes of the individual courses of action, select one alternative. Strategic games demand exactly this capability, when the outcome of a sequence of one's own moves must be estimated while taking an opponent's interventions into account. Partly because of their clear definitions, games such as chess therefore repeatedly serve as test environments for such algorithms. The look-ahead into the future is provided by game-tree search.
Conference Paper
We present a distributed algorithm for searching game trees. A general strategy for distributed computing is used that can be applied also to other search algorithms. Two new concepts are introduced in order to reduce search overhead and communication overhead: the “Young Brothers Wait Concept” and the “Helpful Master Concept”. We describe some properties of our distributed algorithm including optimal speedup on best ordered game trees. An implementation of this algorithm in a distributed chess program is studied and experimental data showing surprisingly good performance are presented. Since the performance of our algorithm increases with a better move ordering, this algorithm promises to outperform other known algorithms, especially when combined with state-of-the-art chess programs.
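The Young Brothers Wait Concept described above can be sketched roughly as follows. This is an illustrative shared-memory approximation of the idea, not the paper's distributed algorithm; the `children` and `evaluate` callbacks and the thread pool are our own assumptions.

```python
# Young Brothers Wait Concept, schematically: the first ("eldest")
# child of a node is searched sequentially to establish a bound, and
# only then are the remaining ("younger") siblings offered to workers
# in parallel. Negamax convention: evaluate() scores from the
# perspective of the side to move at that node.

from concurrent.futures import ThreadPoolExecutor

INF = 10**9

def ybw_search(node, depth, alpha, beta, children, evaluate, pool):
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    # Eldest brother first: search it before parallelising the rest.
    best = -ybw_search(kids[0], depth - 1, -beta, -alpha,
                       children, evaluate, pool)
    alpha = max(alpha, best)
    if alpha >= beta:            # cutoff: younger brothers never start
        return best
    # Younger brothers waited; now search them with the tightened window.
    futures = [pool.submit(ybw_search, k, depth - 1, -beta, -alpha,
                           children, evaluate, pool) for k in kids[1:]]
    for f in futures:
        best = max(best, -f.result())
    return best
```

The point of the eldest-brother restriction is that parallel work on the younger siblings starts with a real alpha bound, which is what keeps the search overhead low on well-ordered trees.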
Conference Paper
We present our distributed αβ-algorithm and show how αβ-enhancements like iterative deepening, transposition tables, history tables, etc., that are useful in sequential game-tree search can be applied to a distributed algorithm. The methods we describe are suitable even for large distributed systems. We describe an extension of the Young Brothers Wait Concept that we introduced to reduce the search overhead. For the first time, experiments with bigger processor networks (up to 256 Transputers) show good results. We obtained a speedup of 126 running our algorithm with 256 processors. There are mainly two reasons for this improvement. The first is that our algorithm has inherently good load balancing, i.e., the workload using 256 processors is roughly 83% although one computation takes on average only 300 seconds (with 256 processors). The second reason for the good speedup achieved is the bounding of the search overhead by the extended Young Brothers Wait Concept and the efficient use of a distributed hash table. We give a cost and gain analysis of this hash table showing its superior behavior compared to other approaches. The developed techniques have been incorporated in the distributed chess program Zugzwang, which serves as a tool for our experiments. Moreover, Zugzwang participated with good results in some tournaments, for example winning the bronze medal in the 2nd Computer Games Olympiad in 1990.
Conference Paper
Improving a long chain of works we obtain a randomized EREW PRAM algorithm for finding the connected components of a graph G=(V,E) with n vertices and m edges in O(log n) time using an optimal number ...
Conference Paper
Most of the data on the relative efficiency of different implementations of the alpha-beta algorithm is neither readily available nor in a form suitable for easy comparisons. In the present study four enhancements to the alpha-beta algorithm--iterative deepening, aspiration search, memory tables and principal variation search--are compared separately and in various combinations to determine the most effective alpha-beta implementation. The rationale for this work is to ensure that new parallel algorithms incorporate the best sequential techniques. Rather than relying on simulation or searches of specially constructed trees, a simple chess program was used to provide a uniform basis for comparisons.
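Two of the enhancements compared in that study, iterative deepening and aspiration search, can be sketched together in a few lines. This is a minimal sketch under our own interface (a `children`/`evaluate` pair and an arbitrary `window` width), not the study's chess program.

```python
# Fail-soft alpha-beta plus iterative deepening with an aspiration
# window centred on the previous iteration's score; on an aspiration
# failure the depth is re-searched with full bounds.

INF = 10**9

def alpha_beta(node, depth, alpha, beta, children, evaluate):
    """Plain fail-soft negamax alpha-beta to a fixed depth."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    best = -INF
    for child in kids:
        best = max(best, -alpha_beta(child, depth - 1, -beta, -alpha,
                                     children, evaluate))
        alpha = max(alpha, best)
        if alpha >= beta:        # cutoff
            break
    return best

def iterative_deepening(root, max_depth, children, evaluate, window=50):
    """Search depths 1..max_depth, aspiring to the last score."""
    score = 0
    for depth in range(1, max_depth + 1):
        alpha, beta = score - window, score + window
        score = alpha_beta(root, depth, alpha, beta, children, evaluate)
        if score <= alpha or score >= beta:      # fell outside the window
            score = alpha_beta(root, depth, -INF, INF, children, evaluate)
    return score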
Article
The chapter describes the technical developments that are responsible for the present strong level of computer chess. Since 1979, there have been a number of new developments, including special-purpose hardware, parallel search on multiprocessing systems, windowing techniques, and increased use of transposition tables. The chapter describes these advances. It reviews various search techniques that improved chess programs: the minimax algorithm; depth-first search and the basic data structures for chess trees; the alpha-beta algorithm; move generation, the principal continuation, and the killer heuristic; pruning techniques and variable-depth quiescence search; transposition tables; iterative deepening; windows; parallel search techniques; special-purpose hardware; and time control and thinking on the opponent's time. The chapter also presents a brief history of computer chess play and the relation between computer speed and program strength: faster computers play better chess. The chapter also illustrates a sample computer chess game played between DEEP THOUGHT 0.02 and HITECH in the third round of the ACM's 19th North American Computer Chess Championship in Orlando, Florida, in November 1988.
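One of the techniques reviewed there, the transposition table, can be illustrated with a deliberately simplified cache on top of alpha-beta. The interface (`children`, `evaluate`, a plain dict as the table) is our own assumption; a real engine keys on a position hash and also stores bounds and a depth field, not only exact scores.

```python
# Alpha-beta with a transposition table that caches exact scores of
# already-searched (node, depth) pairs, so transpositions into the
# same position are not re-searched.

INF = 10**9

def ab_tt(node, depth, alpha, beta, children, evaluate, table):
    key = (node, depth)
    if key in table:                     # transposition hit
        return table[key]
    kids = children(node)
    if depth == 0 or not kids:
        score = evaluate(node)
        table[key] = score               # leaf evals are always exact
        return score
    a0 = alpha                           # remember the original window
    score = -INF
    for child in kids:
        score = max(score, -ab_tt(child, depth - 1, -beta, -alpha,
                                  children, evaluate, table))
        alpha = max(alpha, score)
        if alpha >= beta:                # cutoff: score is only a bound
            break
    # Only exact scores (inside the original window) are safe to cache
    # in this simplified table.
    if a0 < score < beta:
        table[key] = score
    return score
```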
Article
The design issues affecting a parallel implementation of the alpha-beta search algorithm are discussed, with emphasis on a tree decomposition scheme intended for use on well-ordered trees. In particular, the Principal Variation splitting method has been implemented, and experimental results are presented which show how such refinements as progressive deepening, narrow-window searching, and the use of memory tables affect the performance of multiprocessor-based chess-playing programs. When dealing with parallel processing systems, communication delays are perhaps the greatest source of lost time. Therefore, an implementation of our tree decomposition based algorithm is presented that can operate with a limited amount of communication within a network of processors. This system has almost negligible search overhead, and so the principal basis for comparison is the communication overhead, based on a new mathematical model of this component.
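Principal Variation splitting, the decomposition scheme emphasised above, can be caricatured in a short sketch: the tree is split only along the leftmost (principal variation) path, and once the first subtree at a PV node has seeded alpha, the node's remaining moves are farmed out in parallel. This is a schematic shared-memory rendering under our own interface, not the paper's multiprocessor implementation.

```python
# PV splitting, schematically: pv_split recurses down the first move
# at each node (the principal variation); siblings off the PV are
# searched by workers running plain sequential alpha-beta.

from concurrent.futures import ThreadPoolExecutor

INF = 10**9

def serial_ab(node, depth, alpha, beta, children, evaluate):
    """Ordinary sequential fail-soft alpha-beta, run by the workers."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    best = -INF
    for child in kids:
        best = max(best, -serial_ab(child, depth - 1, -beta, -alpha,
                                    children, evaluate))
        alpha = max(alpha, best)
        if alpha >= beta:
            break
    return best

def pv_split(node, depth, alpha, beta, children, evaluate, pool):
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    # Follow the PV: the split recurses only inside the first subtree.
    best = -pv_split(kids[0], depth - 1, -beta, -alpha,
                     children, evaluate, pool)
    alpha = max(alpha, best)
    if alpha >= beta:
        return best
    # Remaining moves are searched in parallel with the seeded window.
    futs = [pool.submit(serial_ab, k, depth - 1, -beta, -alpha,
                        children, evaluate) for k in kids[1:]]
    return max([best] + [-f.result() for f in futs])
```

On a well-ordered tree the first move is usually best, so the window seeded by the PV subtree lets the parallel siblings cut off quickly; this is why the scheme is said to suit well-ordered trees.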
Article
A parallel alpha-beta search algorithm called unsynchronized iteratively deepening parallel alpha-beta search is described. The algorithm's simple control strategy and strong performance in complicated positions make it a viable alternative to the principal variation splitting algorithm (PVSA). Processors independently carry out iteratively deepening searches on separate subsets of moves. The iterative deepening is unsynchronized, e.g., one processor may be in the middle of the fifth iteration while another is in the middle of the sixth. Narrow windows diminish the importance of backing up a score to the root of the tree as quickly as possible (one of the principal objectives of the PVSA). Speedups measured on one, two, four, and eight chess-playing computers are reported.
Article
In this paper we will describe some of the basic techniques that allow computers to play chess like human grandmasters. In the first part we will give an overview of the sequential algorithms used. In the second part we will describe the parallelization that has been developed by us. The resulting parallel search algorithm has been used successfully in the chess program Zugzwang, even on massively parallel hardware. In 1992 Zugzwang became Vice World Champion at the Computer Chess Championships in Madrid, Spain, running on 1024 processors. Moreover, the parallelization proves to be flexible enough to be applied successfully to the new Zugzwang program, although the new program uses a different sequential search algorithm and runs on completely different hardware. 1 Introduction The game of chess is one of the most fascinating two-person zero-sum games with complete information. Besides being one of the oldest games of this kind, it is still played by millions of people all ove...