Article (PDF available)

Comparing various concepts of function prediction. Part I

Authors: K. M. Podnieks

Abstract

Original title: K. M. Podnieks, "Сравнение различных типов предельного синтеза и прогнозирования функций" ("Comparison of various types of limiting synthesis and prediction of functions"), Proceedings of the Latvian State University (Ученые записки Латвийского государственного университета), 1974, vol. 210, pp. 68-81. Prediction: f(m+1) is guessed from the given values f(0), ..., f(m). Program synthesis: a program computing f is guessed from the given values f(0), ..., f(m). The hypotheses are required to be correct for all sufficiently large m, or with some positive frequency. These approaches yield a hierarchy of function prediction and program synthesis concepts. The comparison problem for these concepts is solved.
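To make the two success criteria concrete, here is a minimal Python sketch (the strategy interface, the sample function, and the finite horizon are illustrative stand-ins, not constructions from the paper): a prediction strategy maps the observed prefix f(0), ..., f(m) to a guess for f(m+1), and the guesses are judged either on a tail of the inspected range (a finite proxy for "all sufficiently large m") or by their frequency.

    from typing import Callable, List

    def guesses(strategy: Callable[[List[int]], int],
                f: Callable[[int], int],
                horizon: int) -> List[bool]:
        """For m = 0 .. horizon-1, record whether the strategy guesses f(m+1) from f(0), ..., f(m)."""
        return [strategy([f(i) for i in range(m + 1)]) == f(m + 1) for m in range(horizon)]

    def correct_almost_everywhere(results: List[bool], tail: int) -> bool:
        """Finite proxy for 'correct for all sufficiently large m': the last `tail` guesses are right."""
        return all(results[-tail:])

    def correct_with_frequency(results: List[bool], p: float) -> bool:
        """The guesses are correct on at least a fraction p of the inspected values of m."""
        return sum(results) >= p * len(results)

    # Example: a repeat-the-last-value strategy on the eventually constant function f(n) = min(n, 3).
    last_value = lambda prefix: prefix[-1]
    results = guesses(last_value, lambda n: min(n, 3), 50)
    print(correct_almost_everywhere(results, 10), correct_with_frequency(results, 0.5))  # True True

Program synthesis differs only in what is guessed: at each stage the hypothesis is a program rather than the single value f(m+1), and it is these program hypotheses that must be correct, again either for all sufficiently large m or with some positive frequency.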
... In Section 3.3, we restrict attention to computable methods for the next-value learning of computable sequences and find that for any method, the sets of successes and failures are equivalent both from the point of view of cardinality and from the point of view of topology, but that the successes are nonetheless incomparably less common than the failures in the hybrid topological-computational sense (due to Mehlhorn (1973)) that they form an effectively meagre set. Along the way we will see that the notion of weak NV-learning, while strictly more inclusive than the notion of NV-learning, is neither weaker nor stronger than two other variants of NV-learning, NV₁-learning (due to Bārzdiņš and Freivalds (1972)) and NV₂-learning (due to Podnieks (1974)). ...
... The notion of NV₁-learning is due to Bārzdiņš and Freivalds (1972); the notion of NV₂-learning is due to Podnieks (1974). That NV ⊊ NV₁ is due to Bārzdiņš and Freivalds (1972); that NV₁ ⊊ NV₂ is due to Podnieks (1974). See Case and Smith (1983, Corollary 2.29, Corollary 2.31, Theorem 3.1, and Theorem 3.5). ...
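For orientation, the basic NV ("next value") criterion that these variants relax can be stated as follows (a schematic restatement in standard notation, not a quotation from any of the cited papers):

    $M$ NV-learns $f$ $\iff$ $M$ is total computable and $M(f(0), \dots, f(m)) = f(m+1)$ for all but finitely many $m$.

The variants NV₁ and NV₂ relax these demands on $M$ in successively more liberal ways, which is what gives the strict inclusions NV ⊊ NV₁ ⊊ NV₂ cited above.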
Preprint
This paper is concerned with learners who aim to learn patterns in infinite binary sequences: shown longer and longer initial segments of a binary sequence, they attempt either to predict whether the next bit will be a 0 or a 1, or to issue forecast probabilities for these events. Several variants of this problem are considered. In each case, a no-free-lunch result of the following form is established: the problem of learning is a formidably difficult one, in that no matter what method is pursued, failure is incomparably more common than success; and difficult choices must be faced in choosing a method of learning, since no approach dominates all others in its range of success. In the simplest case, the comparison of the set of situations in which a method fails and the set of situations in which it succeeds is a matter of cardinality; in other cases, it is a topological matter (meagre vs. co-meagre), or a hybrid computational-topological matter (effectively meagre vs. effectively co-meagre, in the sense of Mehlhorn (1973)).
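One elementary piece of the difficulty can be seen directly (a minimal illustration, not a construction from the paper): whatever computable next-bit method is fixed, the binary sequence obtained by flipping each of its predictions defeats it at every single stage. The helper names below are hypothetical.

    from typing import Callable, List

    def adversarial_sequence(predictor: Callable[[List[int]], int], length: int) -> List[int]:
        """Build a binary sequence on which `predictor` guesses wrongly at every position."""
        seq: List[int] = []
        for _ in range(length):
            guess = predictor(seq)   # the predictor sees the initial segment built so far
            seq.append(1 - guess)    # the next bit is chosen to falsify that guess
        return seq

    # Example with a copy-the-last-bit predictor: the adversarial sequence simply alternates.
    copy_last = lambda prefix: prefix[-1] if prefix else 0
    print(adversarial_sequence(copy_last, 10))  # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]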
... The most well-known of them is "small in terms of measure". Inference in the limit of Gödel numbers of functions which can differ from the given function on a set of bounded measure was studied thoroughly by K. Podnieks [8]. ...
Article
Full-text available
We consider approximation in the limit of Gödel numbers for total recursive functions. The set of possible errors is allowed to be infinite but “effectively small”. The latter notion is made precise in several ways, as “immune”, “hyperimmune”, “hyperhyperimmune”, “cohesive”, etc. All the identification types considered turn out to be different.
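Schematically, and in notation reconstructed from the abstract rather than taken from the paper, the criterion is that the conjectures converge to a single program whose set of anomalies is small in the chosen sense:

    $M(f(0), \dots, f(m)) = p$ for all sufficiently large $m$, and $\{x : \varphi_p(x) \neq f(x)\}$ belongs to the chosen class of "effectively small" sets (finite, immune, hyperimmune, hyperhyperimmune, cohesive, ...).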
Article
We introduce a problem set-up we call the Iterated Matching Pennies (IMP) game and show that it is a powerful framework for the study of three problems: adversarial learnability, conventional (i.e., non-adversarial) learnability and approximability. Using it, we are able to derive the following theorems. (1) It is possible to learn by example all of Σ⁰₁ ∪ Π⁰₁ as well as some supersets; (2) in adversarial learning (which we describe as a pursuit-evasion game), the pursuer has a winning strategy (in other words, Σ⁰₁ can be learned adversarially, but Π⁰₁ not); (3) some languages in Π⁰₁ cannot be approximated by any language in Σ⁰₁. We show corresponding results also for Σ⁰ᵢ and Π⁰ᵢ for arbitrary i.
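As a concrete, purely illustrative rendering of the pursuit-evasion reading, the round structure can be sketched as follows; the function and strategy names are hypothetical and the example strategies are not taken from the paper.

    from typing import Callable, List

    def play_imp(pursuer: Callable[[List[int]], int],
                 evader: Callable[[List[int]], int],
                 rounds: int) -> int:
        """Iterated Matching Pennies: the pursuer wins a round when its guess matches the evader's bit."""
        evader_bits: List[int] = []   # bits produced by the evader so far
        pursuer_bits: List[int] = []  # guesses produced by the pursuer so far
        wins = 0
        for _ in range(rounds):
            guess = pursuer(evader_bits)   # the pursuer reacts to the evader's past bits
            bit = evader(pursuer_bits)     # the evader reacts to the pursuer's past guesses
            pursuer_bits.append(guess)
            evader_bits.append(bit)
            wins += int(guess == bit)
        return wins

    # Example: a majority-vote pursuer against a constant evader wins all but the first round.
    majority = lambda history: int(2 * sum(history) >= len(history)) if history else 0
    print(play_imp(majority, lambda history: 1, 20))  # 19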
Article
Introduced is a new inductive inference paradigm, dynamic modeling. Within this learning paradigm, for example, function h learns function g iff, in the i-th iteration, h and g both produce output, h gets the sequence of all outputs from g in prior iterations as input, g gets all the outputs from h in prior iterations as input, and, from some iteration on, the sequence of h's outputs will be programs for the output sequence of g. Dynamic modeling provides an idealization of, for example, a social interaction in which h seeks to discover program models of g's behavior it sees in interacting with g, and h openly discloses to g its sequence of candidate program models to see what g says back. Sample results: every g can be so learned by some h; there are g that can only be learned by an h if g can also learn that h back; there are extremely secretive h which cannot be learned back by any g they learn, but which, nonetheless, succeed in learning infinitely many g; quadratic time learnability is strictly more powerful than linear time learnability. This latter result, as well as others, follows immediately from general correspondence theorems obtained from a unified approach to the paradigms within inductive inference. Many proofs, some sophisticated, employ machine self-reference, a.k.a., recursion theorems.
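The interaction protocol itself is easy to render as code; the following minimal sketch (with hypothetical names, and without checking the actual success condition that h's outputs eventually be programs for g's output sequence) just shows who sees what in each iteration.

    from typing import Callable, List, Tuple

    def interact(h: Callable[[List[int]], int],
                 g: Callable[[List[int]], int],
                 iterations: int) -> Tuple[List[int], List[int]]:
        """In iteration i, h sees all of g's prior outputs, g sees all of h's, and both emit one value."""
        h_outputs: List[int] = []
        g_outputs: List[int] = []
        for _ in range(iterations):
            next_h = h(g_outputs)   # h models g from g's outputs so far
            next_g = g(h_outputs)   # g reacts to h's disclosed candidates so far
            h_outputs.append(next_h)
            g_outputs.append(next_g)
        return h_outputs, g_outputs

    # Example: h always proposes the value 7 (standing in for a program index); g echoes h's last output.
    print(interact(lambda gs: 7, lambda hs: hs[-1] if hs else 0, 5))
    # ([7, 7, 7, 7, 7], [0, 7, 7, 7, 7])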
Article
Full-text available
A natural ω + 1 hierarchy of successively more general criteria of success for inductive inference machines is described, based on the size of sets of anomalies in programs synthesized by such machines. These criteria are compared to others in the literature. Some of our results are interpreted as tradeoff results or as showing the inherent relative computational complexity of certain processes, and others are interpreted from a positivistic, mechanistic philosophical stance as theorems in philosophy of science. The techniques of recursive function theory are employed, including ordinary and infinitary recursion theorems.
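In the notation that has since become standard (a schematic restatement, not the paper's own wording), the a-th level of such an anomaly hierarchy can be written as:

    $M$ $\mathrm{EX}^a$-identifies $f$ $\iff$ the conjectures $M(f(0), \dots, f(m))$ converge to a program $p$ with $|\{x : \varphi_p(x) \neq f(x)\}| \le a$,

with $\mathrm{EX} = \mathrm{EX}^0 \subsetneq \mathrm{EX}^1 \subsetneq \dots \subsetneq \mathrm{EX}^*$, where $\mathrm{EX}^*$ allows any finite set of anomalies; the levels 0, 1, 2, ... together with * give the order type ω + 1.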
Article
Inductive inference machines construct programs for total recursive functions given only example values of the functions. Probabilistic inductive inference machines are defined, and for various criteria of successful inference, it is asked whether a probabilistic inductive inference machine can infer larger classes of functions if the inference criterion is relaxed to allow inference with probability at least p (0 < p < 1), as opposed to requiring certainty. For the most basic criteria of success (EX and BC), it is shown that any class of functions that can be inferred from examples with probability exceeding 1/2 can be inferred deterministically, and that for probabilities p ≤ 1/2 there is a discrete hierarchy of inferability parameterized by p. The power of probabilistic inference strategies is characterized by equating the classes of probabilistically inferable functions with those classes that can be inferred by teams of inductive inference machines (a parallel model of inference), or by a third model called frequency inference.
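One direction of that team equivalence is elementary and may help orient the reader (a remark added here, not part of the original abstract): a probabilistic machine that simulates a uniformly chosen member of an n-machine team succeeds on any function the team infers, because at least one team member succeeds on it:

    $\Pr[\text{success on } f] \;\ge\; \frac{|\{i \le n : M_i \text{ infers } f\}|}{n} \;\ge\; \frac{1}{n}.$

The substantive content of the characterization lies in the converse direction.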
Chapter
The usual information in inductive inference for the purposes of learning an unknown recursive function f is the set of all input/output examples (n, f(n)), n ∈ ℕ. In contrast to this approach, we show that it is considerably more powerful to work with finite sets of good examples, even when these good examples are required to be effectively computable. The influence of the underlying numberings, with respect to which the learning problem has to be solved, on the capabilities of inference from good examples is also investigated. It turns out that nonstandard numberings can be much more powerful than Gödel numberings. We then show that similar effects can be achieved for learning pattern languages and finite automata from good examples in polynomial time, essentially using the structure of these objects. Here the number of good examples is polynomially bounded by the size of the objects to be learnt (length of pattern, number of states, respectively).