Article

An impossible program

Author: Christopher Strachey

... Around 1965, Strachey wrote a letter to the editor of the Computer Journal, titled, 'An impossible program' [120]. In this letter, Strachey presented his own proof of the unsolvability of Turing's Halting Problem, which Turing of course had already published in his 1936 paper [122]. ...
... "A well-known piece of folk-lore among programmers holds that it is impossible to write a program which can examine any other program and tell, in every case, if it will terminate or get into a closed loop when it is run." [120] It is not entirely clear what Strachey meant with a 'program' and a 'closed loop'. Let us first unrealistically assume that Strachey only wanted to consider 'programs' that could run on a pre-defined programmable computing machine. ...
... This left me with an uneasy feeling that the proof must be long or complicated, but in fact it is so short and simple that it may be of interest to casual readers. The version below uses CPL, but not in any essential way." [120] The first sentence in the previous quote strongly suggests that Strachey had not read Turing's 1936 paper. On the other hand, Strachey's ability to reproduce the essentials of the proof is equally noteworthy. ...
Article
The Advent of Recursion & Logic in Computer Science
Karel Van Oudheusden (alias Edgar G. Daylight)
Abstract: The history of computer science can be viewed from a number of disciplinary perspectives, ranging from electrical engineering to linguistics. As stressed by the historian Michael Mahoney, different 'communities of computing' had their own views towards what could be accomplished with a programmable computing machine. The mathematical logicians, for instance, had established what programmable computing machines with unbounded resources could not do, while the switching theorists had shown how to analyze and synthesize circuits. "But no science accounted for what finite machines with finite, random access memories could do or how they did it. That science had to be created." -Mahoney [78, p.6]. With the advent of the programmable computing machine, new communities were created, such as the community of numerical analysts. Unlike the logicians and the electrical engineers, the numerical analysts, by their very profession, took programming seriously. Several of them gradually became more involved in seeking specific techniques to overcome the tediousness in programming their machines. One such, and important, technique was the recursive procedure. While logicians had been well-acquainted with the concept of recursion for quite some time, and the development of mathematical logic had, itself, contributed to the advent of the programmable computing machine, it is unclear whether the idea of the recursive procedure entered the arena of programming languages via the logic community. More generally, it is unclear how and to what extent, exactly, ideas from logic have influenced the computer pioneers of the 1950-60s. Both unclarities, described above, are addressed in this thesis. Concerning the first unclarity, the recursive procedure entered the arena of programming languages in several ways by different people. Special attention will be paid to the pioneer Edsger W. Dijkstra who, in 1960, received world-wide recognition for implementing recursive procedures for the ALGOL60 programming language, i.e. by building a compiler. While recursive procedures remained highly controversial during the 1960s, Dijkstra was one of their few strong advocates. His views, led by linguistic ideals, were in sharp contrast to those that were led by specific machine features. With respect to the second unclarity, it will be shown that several ideas from logic that did influence some computer pioneers were primarily received indirectly and without full comprehension. In fact, these pioneers, in the aftermath of their successes, openly stressed that they were not logicians and had not completely understood all of the logic underlying their sources of inspiration. Similarly, the logicians, themselves, did not initially grasp the connection between Turing's 1936 paper and the programmable computing machine either. Finally, emphasis will be laid on Dijkstra's ability, in later years, to connect the unsolvability of Turing's Halting Problem with the practical engineering problems that his community faced.
... In teaching courses about programming languages, program analysis, and software engineering, the halting problem is frequently invoked to justify the well-known piece of folk-lore among programmers that it is impossible to write a program which can examine any other program and tell, in every case, if it will terminate or get into a closed loop when it is run. [18] This is often given as a simplified definition of the problem, as Turing machines and computability issues are poorly known by newcomer students or practitioners. In contrast, the practical relevance of such a program termination problem is easily understood as soon as one points to well-known misbehaviors of programs like "hanging executions", which often hide never-ending loops or infinite recursions, i.e., an undesired termination behavior (see, e.g., [1, page 24] for an informal presentation of the halting problem in this vein). ...
... In this paper we only use #. To improve readability, for the informal examples of Turing machines, we use a simple imperative language whose syntax is hopefully well understood by anybody with some familiarity with (imperative) programming languages. This is a usual practice in the literature, see, e.g., [18,8,9]. ...
... Another indirect reason to think like that is Strachey's account of his encounter with Turing in 1953. Although Strachey does not mention the halting problem of TMs as he rather speaks of programs and program termination, as quoted in the introduction, in [18] Strachey writes: "I have never actually seen a proof of this in print, and though Alan Turing once gave me a verbal proof (in a railway carriage on the way to a Conference at the NPL in 1953), I unfortunately and promptly forgot the details." ...
Article
Full-text available
The halting problem is a prominent example of an undecidable problem, and its formulation and undecidability proof are usually attributed to Turing's 1936 landmark paper. Copeland noticed in 2004, though, that it was so named and, apparently, first stated in a 1958 book by Martin Davis. We provide additional arguments partially supporting this claim as follows: (i) with a focus on computable (real) numbers with infinitely many digits (e.g., π), in his paper Turing was not concerned with halting machines; (ii) the two decision problems considered by Turing concern the ability of his machines to produce specific kinds of outputs, rather than reaching a halting state, something which was missing from Turing's notion of computation; and (iii) from 1936 to 1958, when considering the literature of the field, no paper refers to any "halting problem" of Turing Machines until Davis' book. However, there were important preliminary contributions by (iv) Church, for whom termination was part of his notion of computation (for the λ-calculus), and (v) Kleene, who essentially formulated, in his 1952 book, what we know as the halting problem now.
... Strachey's letter 'An impossible program' appeared in January 1965 in the Computer Journal with the following opening sentence: "A well-known piece of folklore among programmers holds" -folklore which I call Strachey's Halting Problem -that it is "impossible to write a program which can examine any other program and tell, in every case, if it will terminate or get into a closed loop when it is run." [50] This modern, and now common, interpretation of the Halting Problem is about executable programs. I refer to Appendix A for another popular instance of the same, common interpretation. ...
... This left me with an uneasy feeling that the proof must be long or complicated, but in fact it is so short and simple that it may be of interest to casual readers. The version below uses CPL, but not in any essential way." [50] The programming language ALGOL 60 was a predecessor of CPL (= Combined Programming Language), which in turn was a predecessor of the C language [54, p.116]. CPL programs could be compiled and executed, as also the following comment from one of Strachey's readers indicates: I equate "program" with "program capable of being run". ...
... Strachey's alleged 1965 proof[50] ...
Article
The term ‘Halting Problem’ arguably refers to computer science’s most celebrated impossibility result and to the core notion underlying the language-theoretic approach to security. Computer professionals often ignore the Halting Problem however. In retrospect, this is not too surprising given that several advocates of computability theory implicitly follow Christopher Strachey’s alleged 1965 proof of his Halting Problem (which is about executable – i.e., hackable – programs) rather than Martin Davis’s correct 1958 version or his 1994 account (each of which is solely about mathematical objects). For the sake of conceptual clarity, particularly for researchers pursuing a coherent science of cybersecurity, I will scrutinize Strachey’s 1965 line of reasoning – which is widespread today – both from a charitable, historical angle and from a critical, engineering perspective.
... The halting problem of Turing machines in general is known to be undecidable, while it is also known that the halting problem for all finite-state systems is decidable. There are several impossibility proofs available of the halting problem [1,2] where it is not obvious why they do not apply to finite-state systems. ...
... Strachey's Impossible Program: Strachey proposed a program based on the result of an assumed halting function [2]. The way Strachey's construction and other similar constructions are used to show the impossibility of a decidable halting function is quite similar to Turing's original disproof. ...
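To make the construction described above concrete, here is a minimal sketch in Python of the shape of Strachey's argument. The names halts and impossible are illustrative assumptions of mine (Strachey's original was a few lines of CPL), and of course no total halts function of this kind can actually be implemented.

def halts(program, argument):
    """Hypothetical total decider: True iff program(argument) would halt."""
    raise NotImplementedError  # assumed to exist only for the sake of contradiction

def impossible():
    # If the assumed decider says impossible() halts, loop forever;
    # if it says impossible() loops, halt immediately.
    if halts(impossible, None):
        while True:
            pass
    else:
        return

Whichever answer halts(impossible, None) returns is refuted by the behavior of impossible itself, which is exactly the contradiction the construction is after.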
Article
Full-text available
The undecidability of the halting problem is a well-known research result of theoretical computer science, dating back to Turing's work in 1936. Nevertheless, it is commonly known that the halting problem on finite-state computer systems is decidable. Thus, any undecidability proof given for the halting problem must imply that it does not apply to finite-state computer systems. The aim of this paper is to deepen the understanding of why the undecidability proofs of the halting problem cannot be instantiated as finite-state programs. To bridge the gap between theory and practice, the arguments are based on simple mathematics rather than an extensive use of abstract formalisms.
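The reason halting is decidable for finite-state systems can be made concrete with a small sketch of my own (not taken from the paper): a deterministic system with finitely many configurations either halts or revisits a configuration, and a revisited configuration guarantees it will never halt. The step function and the configuration encoding below are illustrative assumptions.

def halts_finite_state(initial_config, step):
    # step(config) returns the next configuration, or None once the system halts.
    seen = set()
    config = initial_config
    while config is not None:
        if config in seen:
            return False   # a repeated configuration means the run cycles forever
        seen.add(config)
        config = step(config)
    return True            # a halting configuration was reached

# Example: a 3-bit counter that halts when it wraps around to 0 ...
print(halts_finite_state(1, lambda c: (c + 1) % 8 or None))   # True
# ... and the same counter without any halting condition.
print(halts_finite_state(1, lambda c: (c + 1) % 8))           # False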
... A similar proof has been presented by Kfoury et al.[4, p. 11]. The idea seems to originate from Strachey[9]. ...
Article
The programming approach to computability presented in the textbook by Kfoury, Moll, and Arbib in 1982 has been embedded into a programming course following the textbook by Abelson and Sussman. This leads to a course concept teaching good programming practice and clear theoretical concepts simultaneously. Here, we explain some of the main points of this approach: the halting problem, primitive and μ-recursive functions and the operational counterpart of these functions, i.e., the Loop and the While programs.
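As a rough illustration of the Loop/While distinction mentioned in this abstract (a sketch of my own, not the course's notation): a Loop program iterates a number of times that is fixed before the loop starts, so it always terminates and computes a primitive recursive function, whereas a While program performs an unbounded search and need not terminate.

def add(x, y):
    # Loop-style: the iteration count y is known before the loop runs, so this always halts.
    for _ in range(y):
        x = x + 1
    return x

def mu(predicate):
    # While-style: unbounded minimisation; halts only if some n satisfies the predicate.
    n = 0
    while not predicate(n):
        n += 1
    return n

print(add(2, 3))                    # 5
print(mu(lambda n: n * n >= 30))    # 6, the first n with n*n >= 30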
... In rendering the proof for the unsolvability of the halting problem one normally does not distinguish infinite state programs from finite state ones. As an example, let us consider a popular variant of such a proof, as it was given by Christopher Strachey in 1965 [13], which is also used in [11]. Strachey's proof is by contradiction. ...
Conference Paper
Full-text available
With the steady increase in computational power of general purpose computers, our ability to analyze routine software artifacts is also steadily increasing. As a result, we are witnessing a shift in emphasis from the verification of abstract hand-built models of code, towards the direct verification of implementation level code. This change in emphasis poses a new set of challenges in software verification. We explore some of them in this paper.
... In The Emperor's New Mind [80] and especially in Shadows of the Mind [81], the eminent physicist, Roger Penrose, contends that human reasoning cannot be captured by a mechanical device because humans detect nontermination of programs in cases where digital machines cannot [82]. Penrose thus adapts the similar argumentation of Lucas [83], which was based on Gödel's incompleteness results. ...
Chapter
By providing quantifier-free axiom systems, without any form of induction, for a slight variation of Euclid's proof and for the Goldbach proof for the existence of infinitely many primes, we highlight the fact that there are two distinct and very likely incompatible concepts of infiniteness that are part of the theorems proved. One of them is the concept of cofinality, the other is the concept of equinumerosity with the universe.
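For readers unfamiliar with the two concepts named at the end of this abstract, the following informal sketch (my own notation, not the chapter's) contrasts them for the primes:

% Cofinality: the primes are unbounded in the naturals.
\[ \forall n \,\exists p \;\bigl(p > n \wedge \mathrm{prime}(p)\bigr) \]
% Euclid-style witness: any prime factor of N differs from each p_i,
% because N leaves remainder 1 on division by every p_i.
\[ N = p_1 p_2 \cdots p_k + 1 \equiv 1 \pmod{p_i} \quad (1 \le i \le k) \]
% Equinumerosity with the universe: a bijection between the naturals and the primes.
\[ n \mapsto p_n \ (\text{the } n\text{-th prime}) \ \text{is a bijection from } \mathbb{N} \text{ to the primes} \]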
... In rendering the proof for the unsolvability of the halting problem one normally does not distinguish infinite state programs from finite state ones. As an example, let us consider a popular variant of such a proof, as it was given by Christopher Strachey in 1965 [13], which is also used in [11]. Strachey's proof is by contradiction. ...
Article
Full-text available
With the steady increase in computational power of general purpose computers, our ability to analyze routine software artifacts is also steadily increasing. As a result, we are witnessing a shift in emphasis from the verification of abstract hand-built models of code, towards the direct verification of implementation level code. This change in emphasis poses a new set of challenges in software verification. We explore some of them in this paper.
... Technically he did not, but termination's undecidability is an easy consequence of the result that is proved. A simple proof can be found in [36]. build useful tools. ...
Article
Full-text available
We suggest two simple additions to packages that use well-founded recursion to justify termination of recursive programs: the contraction condition, to be proved in cases when termination conditions are difficult or impossible to extract automatically; and user-supplied inductive invariants in cases of nested recursion.
...
procedure compute_g(i):      // (Wikipedia:Halting Problem)
    if f(i, i) == 0 then     // adapted from (Strachey, C 1965)
        return 0             // originally written in CPL
    else                     // ancestor of the BCPL, B and C
        loop forever         // programming languages

This problem is overcome on the basis that a simulating halt decider would abort the simulation of its input before ever returning any value to this input. It aborts the simulation of its input on the basis that its input specifies what is essentially infinite recursion (infinitely nested simulation) to any simulating halt decider. ...
Preprint
Full-text available
The halting theorem counter-examples present infinitely nested simulation (non-halting) behavior to every simulating halt decider. The pathological self-reference of the conventional halting problem proof counter-examples is overcome. The halt status of these examples is correctly determined. A simulating halt decider remains in pure simulation mode until after it determines that its input will never reach its final state. This eliminates the conventional feedback loop where the behavior of the halt decider affects the behavior of its input.
... // Simplified Linz (1990) Ĥ and Strachey (1965) ...
Preprint
Full-text available
The halting theorem counter-examples present infinitely nested simulation (non-halting) behavior to every simulating halt decider. Whenever the pure simulation of the input to a simulating halt decider H(x,y) would never stop running unless H aborts its simulation, H correctly aborts this simulation and returns 0 for not halting.
Conference Paper
Full-text available
Most software developers today rely on only a small number of techniques to check their code for defects: peer review, code walkthroughs, and testing. Despite a rich literature on these subjects, the results often leave much to be desired. The current software testing process consumes a significant fraction of the overall resources in industrial software development, yet it cannot promise zero-defect code. There is reason to hope that the process can be improved. A range of tools and techniques has become available in the last few years that can assess the quality of code with considerably more rigor than before, and often also with more ease. Many of the new tools can be understood as applications of automata theory, and can readily be combined with logic model checking techniques.
Article
A significant part of the call processing software for Lucent's new PathStar access server [FSW98] was checked with automated formal verification techniques. The verification system we built for this purpose, named FeaVer, maintains a database of feature requirements which is accessible via a web browser. Via the browser the user can invoke verification runs. The verifications are performed by the system with the help of a standard logic model checker that runs in the background, invisibly to the user. Requirement violations are reported as C execution traces and stored in the database for user perusal and correction. The main strength of the system is in the detection of undesired feature interactions at an early stage of systems design, the type of problem that is notoriously difficult to detect with traditional testing techniques.
Article
The proof of the unsolvability of the halting problem can be reformulated to show that, in general, the best one can do to estimate the run time of programs is to execute them.
Article
In contrast to popular belief, proving termination is not always impossible.
Chapter
We aim to put some order to the multiple interpretations of the Church-Turing Thesis and to the different approaches taken to prove or disprove it.
Conference Paper
Full-text available
Conference Paper
The possibility to specify a collection of statements (without any preferred execution order) is claimed to be important for the programming of parallel computers.
Article
In this paper we introduce the basic concepts behind the problem of proving program termination, including well-founded relations, well-ordered sets, and ranking functions. We also connect the notion of termination of computer programs with that of well-founded relations. This paper introduces no original concepts, nor does it propose solutions towards the problem of automating termination proofs. It does, however, provide a foundation from which the reader can then peruse the more advanced literature on the topic.
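To illustrate the ranking-function idea described in this abstract, here is a small sketch of my own (not taken from the paper): the ranking function r(x, y) = y maps every loop state into the natural numbers, a well-founded order, and strictly decreases on each iteration, so the loop cannot run forever.

def gcd(x, y):
    # Euclid's algorithm; the ranking function is r(x, y) = y.
    assert x >= 0 and y >= 0
    while y != 0:
        # rank before this step: y; rank after: x % y, and 0 <= x % y < y,
        # so the rank strictly decreases within the naturals.
        x, y = y, x % y
    return x

print(gcd(252, 198))   # 18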
Article
This bibliography is a product of the National Collection Strategy (NCS) program being undertaken by the Charles Babbage Institute. NCS is a three-year program to develop a national collecting strategy for preserving the historic records of computing, made possible through the generous support of the AT&T Foundation, IBM Corporation, the Andrew W. Mellon Foundation, and Unisys Corporation.
Article
People who buy software want a guarantee that it works. When they cannot satisfy themselves, and do not wholly trust the programmer, it is natural to seek the help of a third party who will certify the software. So far, formal certification is available only for compilers. This paper discusses some of the theoretical problems and reviews existing compiler validation schemes. Finally there is a brief examination of other methods of quality assurance that might provide a better solution.
Article
This bibliography records publications of Alan Mathison Turing (1912–1954).
Article
Though logic has permeated through several fields in computing during the past decades, it is not clear exactly how and to what extent it has done so. In an attempt to better understand logic's influence in the history of programming languages (and software engineering in general), I have conducted a discussion with the computing scientist Tony Hoare who, already during the late 1960s, used ideas from mathematical logic to define programming-language semantics. The discussion's transcript, presented in this article, is, compared to previously published interviews with Hoare, rather technical in that many questions are directly related to mathematical logic and computability theory in particular.
Thesis
Full-text available
The infamous Windows Blue Screen of Death is a good introduction to the problem addressed here. This bug is often caused by the non-termination of a hardware driver: the program runs forever, blocking all the resources it has acquired to carry out its computations. This thesis develops techniques for deciding, ahead of execution, the termination of a given program for every possible value of its input parameters. In particular, we focus on programs that manipulate floating-point numbers. These numbers are ubiquitous in today's processors and are used by practically every software developer. Yet they are often poorly understood and, as a result, a source of bugs. Indeed, floating-point computations are affected by errors that are inherent to the fact that they are carried out with finite memory. For example, although true over the reals, the equality 0.2 + 0.3 = 0.5 can be false for floating-point numbers. If not handled correctly, these errors can lead to catastrophic events, such as the Patriot missile incident that killed 28 people. The theories we develop are illustrated, and put to the test, on code excerpts taken from widely used programs. In particular, we were able to exhibit termination bugs due to incorrect floating-point computations in some packages of the Ubuntu distribution.
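To make the kind of pitfall described above concrete, here is a tiny Python illustration of my own (not taken from the thesis); it shows the classic rounding error of IEEE doubles and a loop whose exit test never becomes true because of it, so only the artificial step bound stops it.

print(0.1 + 0.2 == 0.3)            # False: the left-hand side is 0.30000000000000004

x = 0.0
steps = 0
while x != 1.0 and steps < 100:    # with the guard x != 1.0 alone, this loop would never exit
    x += 0.1                       # ten additions of 0.1 yield 0.9999999999999999, never exactly 1.0
    steps += 1
print(steps)                       # 100: the safety bound was hit, not the intended condition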
Article
Most of us who work with software know all too well how easy it is to make small mistakes that escape detection in tests and come back to haunt us later. Unfortunately, when you study formal software verification techniques, one of the first things you learn is that a foolproof method for analyzing your code to reliably prevent these types of unpleasantries does not exist. Worse, you learn that it cannot exist. Although you may not remember a proof, you've certainly heard of the halting problem. Alan Turing already showed in 1936 that there cannot be an algorithm that can decisively show whether an arbitrary program will terminate on a given input or not.
Article
Full-text available
With the steady increase in computational power of general purpose computers, our ability to analyze routine software artifacts is also steadily increasing. As a result, we are witnessing a shift in emphasis from the verification of abstract hand-built models of code, towards the direct verification of implementation level code. This change in emphasis poses a new set of challenges in software verification. We explore some of them in this paper.