Article

[IMM] Introduction to metamathematics

... However, distributivity will be of crucial importance for our arguments, and it will be assumed throughout the development of our discourse. In fact, our starting point will be the class of distributive bisemilattices: the variety generated by the three-element algebra in Example 2.2, which is the algebraic structure arising from the bisemilattice reduct of the tables of the weak propositional connectives (introduced by Kleene in [31]). For a wider account on the logical aspects we refer the reader to section 3. ...
... For our discourse, a relevant example of a distributive bisemilattice (in fact, it generates DBS as a variety) is the algebra that arises from the bisemilattice reduct of the tables of the binary weak propositional connectives, discussed by Kleene in his "Introduction to Metamathematics" [31] (for further detail on the logical aspects, we refer the reader to section 3). It is not difficult to verify that this algebra is a distributive bisemilattice, although it is not a lattice, as shown by the Hasse diagrams of ≤∧ and ≤∨: ...
... As we mentioned on page 5, the algebra 3 in Example 3.1 (which generates IDBS as a variety) arises from the matrices of the weak propositional connectives, discussed by Kleene in [31], in which he distinguishes between a weak and strong sense of propositional connectives. These tables are devised to describe situations in which partially defined / unknown properties (weak / strong sense of the connectives, respectively) are present. ...
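The properties claimed for this three-element algebra can be checked exhaustively. A minimal sketch (the names `wand` and `wor` are ours, not from the cited papers) verifying that the weak Kleene tables yield two distributive semilattices that fail absorption, so the bisemilattice is not a lattice:

```python
from itertools import product

# Three truth values of Kleene's weak tables: t, f, and the infectious u.
T, F, U = "t", "f", "u"
vals = [T, F, U]

def wand(x, y):
    # Weak conjunction: u is infectious, otherwise classical AND.
    if U in (x, y):
        return U
    return T if (x, y) == (T, T) else F

def wor(x, y):
    # Weak disjunction: u is infectious, otherwise classical OR.
    if U in (x, y):
        return U
    return F if (x, y) == (F, F) else T

# Both operations are semilattice operations: idempotent, commutative, associative.
for op in (wand, wor):
    assert all(op(x, x) == x for x in vals)
    assert all(op(x, y) == op(y, x) for x, y in product(vals, vals))
    assert all(op(op(x, y), z) == op(x, op(y, z))
               for x, y, z in product(vals, repeat=3))

# Distributivity holds in both directions ...
assert all(wand(x, wor(y, z)) == wor(wand(x, y), wand(x, z))
           for x, y, z in product(vals, repeat=3))
assert all(wor(x, wand(y, z)) == wand(wor(x, y), wor(x, z))
           for x, y, z in product(vals, repeat=3))

# ... but absorption fails, so the two semilattices do not form a lattice:
assert wand(T, wor(T, U)) != T   # t ∧ (t ∨ u) = u, not t
```

The failure of absorption is exactly what the two distinct Hasse diagrams mentioned above reflect: the orders induced by ∧ and ∨ do not coincide.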
Preprint
In this article we will focus our attention on the variety of distributive bisemilattices and some linguistic expansions thereof: bounded, De Morgan, and involutive bisemilattices. After extending Balbes' representation theorem to bounded, De Morgan, and involutive bisemilattices, we make use of Hartonas-Dunn duality and introduce the categories of 2spaces and 2spaces^{\star}. The categories of 2spaces and 2spaces^{\star} will play, with respect to the categories of distributive bisemilattices and De Morgan bisemilattices respectively, a role analogous to that of the category of Stone spaces with respect to the category of Boolean algebras. Indeed, the aim of this work is to show that these categories are dually equivalent.
... Aside from its innate interest, outer Pasch is the key tool to prove the plane separation theorem, Theorem 2.5 of [3]. ...
... It is desired to find the midpoint M of AB. Let α and β be any two distinct points. ... Gupta's proof of outer Pasch from inner Pasch does not use the parallel axiom. That raises the possibility that it might be easy to prove outer Pasch if we allow the use of the parallel axiom. ...
... The next potential example is I. 16, the exterior angle theorem. Euclid's proof constructs a crucial point F , but he left a gap in failing to prove that F lies in the interior of a certain angle. ...
Preprint
We explore the relationship between Brouwer's intuitionistic mathematics and Euclidean geometry. Brouwer wrote a paper in 1949 called "The contradictority of elementary geometry". In that paper, he showed that a certain classical consequence of the parallel postulate implies Markov's principle, which he found intuitionistically unacceptable. But Euclid's geometry, having served as a beacon of clear and correct reasoning for two millennia, is not so easily discarded. Brouwer started from a "theorem" that is not in Euclid, and requires Markov's principle for its proof. That means that Brouwer's paper did not address the question of whether Euclid's "Elements" really requires Markov's principle. In this paper we show that there is a coherent theory of "non-Markovian Euclidean geometry." We show in some detail that our theory is an adequate formal rendering of (at least) Euclid's Book I, and suffices to define geometric arithmetic, thus refining the author's previous investigations (which include Markov's principle as an axiom). Philosophically, Brouwer's proof that his version of the parallel postulate implies Markov's principle could be read just as well as geometric evidence for the truth of Markov's principle, if one thinks the geometrical "intersection theorem" with which Brouwer started is geometrically evident.
... As one can see, constructivism of quantum formalism (to be exact, a constructive substitute for quantum formalism) can be guaranteed in every situation, in which all the terms in the equations (9) and (10) involving the projector ∞ can be safely neglected. ...
... However, while one can prove that any constructive function is a computable function (for instance, using Kleene's realizability interpretation [9]), it is important to keep the distinction between the notions of constructivism and computability. As explained in [10], the notion of function in constructive mathematics is a primitive one, which cannot be explained in a satisfactory way in terms of recursivity. ...
... In the SS, the universe always has and always will exist in a state statistically like its current one, and time has no beginning. Needless to say, the SS cosmology is appealing because it avoids an initial singularity, has no beginning of time, and does not require an initial condition for the universe. Putting it differently, because an essential feature of macroscopic assemblies of microscopic particles is that the state equations are size independent, one can naturally arrive at an idealization of an infinite universe as an infinite-volume limit of increasingly large finite systems with constant density. ...
Preprint
Full-text available
This paper focuses on a constructive treatment of the mathematical formalism of quantum theory and a possible role of constructivist philosophy in resolving the foundational problems of quantum mechanics, particularly the controversy over the meaning of the wave function of the universe. As demonstrated in the paper, unless the number of the universe's degrees of freedom is fundamentally upper bounded (owing to some unknown physical laws) or hypercomputation is physically realizable, the universal wave function is a non-constructive entity in the sense of constructive recursive mathematics. This means that even if such a function might exist, basic mathematical operations on it would be undefinable, and subsequently the only content one would be able to deduce from this function would be purely symbolic.
... Hence, many-valued logicians have initially addressed this problem by introducing logics whose semantics provides for infectious truth values: namely, values that "spread" from propositional variables to any formula containing them. Cases in point are the Weak Kleene logics B3 (paracomplete Weak Kleene logic) [4,20,8,25] and PWK (paraconsistent Weak Kleene logic) [17,20,11,5,24,8]; see below for more details. ...
... When L = CL, i.e., classical logic formulated in the language L1, known results to be found in [11,35] imply that CL_r and CL_l are, respectively, the so-called paracomplete and paraconsistent weak Kleene logics B3 and PWK, investigated also in [4,17,20] and usually defined as follows via their characteristic matrices: ...
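The characteristic matrices of these two logics are small enough to check by brute force. A hedged sketch (the encoding is ours): both logics use the same weak Kleene tables, but B3 designates only t while PWK designates both t and u, which makes the former paracomplete (excluded middle fails) and the latter paraconsistent (explosion fails):

```python
from itertools import product

T, F, U = "t", "f", "u"
vals = (T, F, U)

def neg(x):            # weak negation: classical on t/f, u maps to u
    return {T: F, F: T, U: U}[x]

def wor(x, y):         # weak disjunction: u is infectious
    if U in (x, y):
        return U
    return T if T in (x, y) else F

def valid(premises, conclusion, designated):
    """Matrix validity over all assignments to the variables p, q."""
    for p, q in product(vals, vals):
        if all(prem(p, q) in designated for prem in premises) \
           and conclusion(p, q) not in designated:
            return False
    return True

lem = lambda p, q: wor(p, neg(p))          # p ∨ ¬p

# B3 (paracomplete): only t is designated, so excluded middle fails ...
assert not valid([], lem, {T})
# ... while PWK (paraconsistent) designates t and u, so it holds there ...
assert valid([], lem, {T, U})
# ... but explosion p, ¬p |- q fails in PWK (take p = u, q = f):
assert not valid([lambda p, q: p, lambda p, q: neg(p)],
                 lambda p, q: q, {T, U})
```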
... The logics K3 and LP were originally introduced in [11,1,17] and have been widely studied in the literature. Instead, we shall focus on the other six lesser-known logics of this group. ...
... However, it has been restricted to logics defined through standards that are upsets. We have generalized this kind of interpretation to every mixed Strong Kleene logic, regardless of whether it is defined through standards that are upsets or not. What we gained from this is the possibility of representing, with the valid inferences of the logics that are usually left aside (or not even considered as logics), every possible formal commitment that relates these three types of epistemic attitudes. ...
... To achieve a unified framework that encapsulates all possible formal commitments, given these three epistemic attitudes, we first need to ... For more about how bad logics that are trivial with respect to a particular set of sentences are, see [21]. Following [5], a standard D is an upset iff x ≤ y and x ∈ D imply y ∈ D. ...
Article
Full-text available
In this paper, we present two ways of modelling every epistemic formal conditional commitment that involves (at most) three key epistemic attitudes: acceptance, rejection and neither acceptance nor rejection. The first one consists of adopting the plurality of every mixed Strong Kleene logic (along with an epistemic reading of the truth-values), and the second one involves the use of a unified system of six-sided inferences, named 6SK, that recovers the validities of each mixed Strong Kleene logic. We also introduce a sequent calculus that is sound and complete with respect to both approaches. We compare both accounts, and finally, we suggest that the plurality of Strong Kleene logic as well as the general framework 6SK are linked to formal epistemic norms via bridge principles.
... [Huf57,Cal58]), but is also closely related to questions in logic (e.g. [Kle52,Kör66,Mal14]) and cybersecurity ([TWM + 09, HOI + 12]). Objects are called differently in the different fields; for presentational simplicity, we use the parlance of hardware circuits throughout the paper. ...
... In 1938 Kleene defined his strong logic of indeterminacy [Kle38, p. 153], see also his later textbook [Kle52, §64]. It can be readily defined by setting u = 1/2, not x := 1 − x, x and y := min(x, y), and x or y := max(x, y), as is commonly done in fuzzy logic [PCRF79,Roj96]. ...
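The min/max presentation quoted above translates directly into code. A small illustration (function names are ours) of the strong tables on {0, 1/2, 1}:

```python
# Strong Kleene connectives on {0, 1/2, 1}, with u = 1/2 as in the snippet.
U = 0.5

def k_not(x):
    return 1 - x

def k_and(x, y):
    return min(x, y)

def k_or(x, y):
    return max(x, y)

vals = (0.0, U, 1.0)

# De Morgan duality: not(x and y) == (not x) or (not y) for all values.
assert all(k_not(k_and(x, y)) == k_or(k_not(x), k_not(y))
           for x in vals for y in vals)

# The indeterminate value is the unique fixed point of negation.
assert k_not(U) == U

# Unlike the weak (infectious) tables, a known value can dominate u:
assert k_and(0.0, U) == 0.0 and k_or(1.0, U) == 1.0
```

The last two assertions mark the difference from the weak tables discussed earlier on this page, where u propagates to every compound formula.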
Preprint
The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth 2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: Hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result we establish the NP-completeness of several hazard detection problems.
... Prototypical examples of variable inclusion companions are found in the realm of three-valued logics. For instance, the left and the right variable inclusion companions of classical (propositional) logic are respectively paraconsistent weak Kleene logic (PWK for short) [33,40], and Bochvar logic [7]. The fact that these logics coincide with the variable inclusion companions of classical logic was shown in [20,62]. ...
... Let ⊢ be propositional classical logic. Then ⊢_l is the logic known as Paraconsistent Weak Kleene, PWK for short, originally introduced in [40]. This logic is equivalently defined, syntactically, by imposing the variable inclusion constraint, as in Definition 10, on classical logic or, semantically, via the so-called weak Kleene tables with two of the three truth values designated (see [10,20]). ...
Preprint
The paper aims at studying, in full generality, logics defined by imposing a variable inclusion condition on a given logic ⊢. It turns out that the algebraic counterpart of the variable inclusion companion of a given logic ⊢ is obtained by constructing the Płonka sum of the matrix models of ⊢. This association allows us to obtain a Hilbert-style axiomatization of the logics of variable inclusion and to describe the structure of their reduced models.
... Three-valued logic. Next, consider the three-valued logic of (Kleene, 1952)[page 334], which consists of three values: t (true), f (false) and u (unknown). We only consider here the crucial equivalence relation ≡ defined by its truth table. Six-valued logic. In (Van Hentenryck et al., 1992) the constraint logic programming language CHIP is used for automatic test-pattern generation (ATPG) for digital circuits. ...
... We now run the following query: Three-valued logic. Next, consider the and3 constraint in the three-valued logic of (Kleene, 1952)[page 334], represented by its truth table ...
Preprint
We study here a natural situation when constraint programming can be entirely reduced to rule-based programming. To this end we explain first how one can compute on constraint satisfaction problems using rules represented by simple first-order formulas. Then we consider constraint satisfaction problems that are based on predefined, explicitly given constraints. To solve them we first derive rules from these explicitly given constraints and limit the computation process to a repeated application of these rules, combined with labeling. We consider here two types of rules. The first type, that we call equality rules, leads to a new notion of local consistency, called rule consistency, that turns out to be weaker than arc consistency for constraints of arbitrary arity (called hyper-arc consistency in [MS98b]). For Boolean constraints rule consistency coincides with the closure under the well-known propagation rules for Boolean constraints. The second type of rules, that we call membership rules, yields a rule-based characterization of arc consistency. To show feasibility of this rule-based approach to constraint programming we show how both types of rules can be automatically generated, as CHR rules of [fruhwirth-constraint-95]. This yields an implementation of this approach to programming by means of constraint logic programming. We illustrate the usefulness of this approach to constraint programming by discussing various examples, including Boolean constraints, two typical examples of many-valued logics, constraints dealing with Waltz's language for describing polyhedral scenes, and Allen's qualitative approach to temporal logic.
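As a toy illustration of computing on an explicitly given constraint (this is not the paper's rule-generation algorithm, only the kind of pruning its membership rules achieve), one can enforce hyper-arc consistency on the and3 relation of the three-valued Kleene logic by repeatedly dropping values that lack support in the relation:

```python
from itertools import product

T, F, U = "t", "f", "u"

def kleene_and(x, y):
    # Strong Kleene conjunction: f dominates, then u, else t.
    if F in (x, y):
        return F
    if U in (x, y):
        return U
    return T

# The and3 constraint, given explicitly as its set of solution tuples.
AND3 = {(x, y, kleene_and(x, y)) for x, y in product((T, F, U), repeat=2)}

def propagate(domains, relation):
    """Enforce hyper-arc consistency: drop every value that appears
    in no tuple of the relation compatible with the current domains."""
    changed = True
    while changed:
        changed = False
        for i in range(len(domains)):
            supported = {t[i] for t in relation
                         if all(t[j] in domains[j] for j in range(len(domains)))}
            if domains[i] - supported:
                domains[i] = domains[i] & supported
                changed = True
    return domains

# If the output of and3 is known to be t, both inputs must be t:
doms = propagate([{T, F, U}, {T, F, U}, {T}], AND3)
print(doms)   # [{'t'}, {'t'}, {'t'}]
```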
... Replacing the axiom schema A∨¬A by the axiom schema ¬A⊃(A⊃B) in the given Hilbert-style deductive system for LP^{⊃,F} yields a Hilbert-style deductive system for Kleene's strong three-valued logic introduced in Section 64 of [21], with its implication connective replaced by an implication connective for which the standard deduction theorem holds and enriched with a falsity constant. The name K3^{⊃,F} is used to denote this logic. ...
... The conditions that a valuation for K3^{⊃,F} must satisfy uniquely characterize the set of valuations induced by the truth value that naturally corresponds to the falsity constant and the functions on the set of truth values that correspond to the different connectives according to Section 3.1 of [4]. The functions concerned, except the function that corresponds to the implication connective, are the same as the ones that are represented by the truth tables presented in Section 64 of [21]. ...
Preprint
LP^{⊃,F} is a three-valued paraconsistent propositional logic which is essentially the same as J3. It has most properties that have been proposed as desirable properties of a reasonable paraconsistent propositional logic. However, it follows easily from already published results that there are exactly 8192 different three-valued paraconsistent propositional logics that have the properties concerned. In this paper, properties concerning the logical equivalence relation of a logic are used to distinguish LP^{⊃,F} from the others. As one of the bonuses of focussing on the logical equivalence relation, it is found that only 32 of the 8192 logics have a logical equivalence relation that satisfies the identity, annihilation, idempotent, and commutative laws for conjunction and disjunction. For most properties of LP^{⊃,F} that have been proposed as desirable properties of a reasonable paraconsistent propositional logic, its paracomplete analogue has a comparable property. In this paper, properties concerning the logical equivalence relation of a logic are also used to distinguish the paracomplete analogue of LP^{⊃,F} from the other three-valued paracomplete propositional logics with those comparable properties.
... The involution-free reduct of i-ubands coincides with the variety of unital bands (i.e., idempotent monoids), whose subvariety lattice is characterized in [27] (see also [11]). Interestingly, and aside from their obvious generalization of Boolean algebras and classical propositional logic, i-ubands are general enough to provide a common root for a large class of other well-studied algebras related to nonclassical logics, such as ortho(modular) lattices (related to the foundation of quantum logic), De Morgan algebras (related to mathematical fuzzy logic), and in particular also for Kleene 3-valued logics (see [15,6]), showing that the logic of McCarthy can be seen as their non-commutative companion (see also [12]). ...
Preprint
Full-text available
We provide an equational basis for McCarthy algebras, the variety generated by the three-element algebra defining the logic of McCarthy (the non-commutative version of Kleene's three-valued logic), solving a problem left open by Konikowska [17]. Differently from Konikowska, we tackle the problem in a more general algebraic setting by introducing McCarthy algebras as a subvariety of unital bands (idempotent monoids) equipped with an involutive (unary) operation ' satisfying x'' ≈ x, herein referred to as i-ubands. Prominent (commutative) subvarieties of i-ubands include Boolean algebras, ortholattices, Kleene algebras, and involutive bisemilattices, hence i-ubands provide an algebraic common ground for several non-classical logics. Besides our main result, we also obtain a semilattice decomposition theorem for McCarthy algebras and a characterization of the (lowest levels of the) lattice of subvarieties of i-ubands, shedding new light on the connection with Kleene logics.
... Let us finally consider the 3-valued Kleene logic, also known as Kleene's "strong logic of indeterminacy"; this is a generalization of classical logic which adds to the intended model of the logic a third, indeterminate value alongside the truth and falsity constants [38]. Its algebraic semantics, the variety KA of Kleene algebras, is a subvariety of bounded distributive lattices with an involution ¬ [37]. ...
Preprint
Full-text available
We provide a new foundational approach to the generalization of terms up to equational theories. We interpret generalization problems in a universal-algebraic setting making a key use of projective and exact algebras in the variety associated to the considered equational theory. We prove that the generality poset of a problem and its type (i.e., the cardinality of a complete set of least general solutions) can be studied in this algebraic setting. Moreover, we identify a class of varieties where the study of the generality poset can be fully reduced to the study of the congruence lattice of the 1-generated free algebra. We apply our results to varieties of algebras and to (algebraizable) logics. In particular we obtain several examples of unitary type: abelian groups; commutative monoids and commutative semigroups; all varieties whose 1-generated free algebra is trivial, e.g., lattices, semilattices, varieties without constants whose operations are idempotent; Boolean algebras, Kleene algebras, and Gödel algebras, which are the equivalent algebraic semantics of, respectively, classical, 3-valued Kleene, and Gödel-Dummett logic.
... The second system, formed by the sets of operators {Θ, Ζ}, constitutes what we now know as Łukasiewicz's (1920) and Kleene's "strong" calculi (Kleene, 1938). Meanwhile, the third system, formed by the operators {Ω, Υ}, comprises Kleene's "weak" logics (Kleene, 1971) and Bochvar's "internal" logics (Bochvar; Bergmann, 1981). The first system, however, which I call P3, formed by the set of operators {Φ, Ψ}, is little known and studied, so much so that Turquette considered these connectives "mysterious", as seen in the previous section. ...
Article
Full-text available
Peirce is today recognized as one of the pioneers of mathematical and algebraic logic, but his original work on non-classical logics still receives scant attention outside the narrow circle of Peircean specialists. This is the case of the three-valued propositional calculus that Peirce recorded in his "Logic Notebook" more than a decade before the emergence of many-valued logics. Triadic logic, as Peirce called it, was formalized by Turquette at the end of the 1960s. Turquette presented an axiomatic interpretation of the three-valued tables in a series of articles that became standard references in this area. Recently, we proposed a new approach, emphasizing a non-explosive fragment of triadic logic. This article aims to extend that research on the following points: (i) a critical analysis of Turquette's works, including a discussion of the Rosser-Turquette axiomatization method; and (ii) a reconstruction of the fragment of triadic logic in a system based on Sobociński's material implication, formalized in a sequent calculus. We conclude that Peirce's three-valued matrix induces a paraconsistent, relevant, and substructural logic, with investigative potential for contemporary research in non-classical logics.
... Real-valued logic generalizes Boolean truth values by extending them to the interval [0, 1], allowing for continuous operators such as t-norms and t-conorms (Menger, 1942; Schweizer & Sklar, 1961; 1983; Klement et al., 2000). These operators provide smooth approximations of AND and OR functions, enabling reasoning over continuous-valued domains (Tarski, 1944; Kleene, 1952). Similarly, fuzzy logic introduces the concept of partial membership, where truth values represent degrees of belonging rather than strict binary assignments. ...
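The t-norm and t-conorm operators mentioned here are one-liners. A sketch (function names are ours) of three standard t-norms and the De Morgan construction of their dual t-conorms:

```python
# Common t-norms (continuous AND) on [0, 1].
def t_min(a, b):
    return min(a, b)                 # Gödel / Kleene minimum

def t_prod(a, b):
    return a * b                     # product t-norm

def t_luk(a, b):
    return max(0.0, a + b - 1.0)     # Łukasiewicz t-norm

def conorm(tnorm):
    # Dual t-conorm (continuous OR) via De Morgan with negation n(x) = 1 - x.
    return lambda a, b: 1.0 - tnorm(1.0 - a, 1.0 - b)

s_prod = conorm(t_prod)
assert s_prod(0.5, 0.5) == 0.75      # probabilistic sum: a + b - a*b

# Every t-norm agrees with Boolean AND on the endpoints {0, 1}:
for t in (t_min, t_prod, t_luk):
    assert t(1, 1) == 1 and t(1, 0) == 0 and t(0, 0) == 0
```

The differentiability of `t_prod` and `t_luk` away from the clipping point is what makes such operators attractive as smooth stand-ins for AND/OR inside neural architectures.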
Preprint
Full-text available
Neural networks, as currently designed, fall short of achieving true logical intelligence. Modern AI models rely on standard neural computation (inner-product-based transformations and nonlinear activations) to approximate patterns from data. While effective for inductive learning, this architecture lacks the structural guarantees necessary for deductive inference and logical consistency. As a result, deep networks struggle with rule-based reasoning, structured generalization, and interpretability without extensive post-hoc modifications. This position paper argues that standard neural layers must be fundamentally rethought to integrate logical reasoning. We advocate for Logical Neural Units (LNUs): modular components that embed differentiable approximations of logical operations (e.g., AND, OR, NOT) directly within neural architectures. We critique existing neurosymbolic approaches, highlight the limitations of standard neural computation for logical inference, and present LNUs as a necessary paradigm shift in AI. Finally, we outline a roadmap for implementation, discussing theoretical foundations, architectural integration, and key challenges for future research.
... It is important to observe that the quasivariety of Bochvar algebras algebraises not only Bochvar's, but also Halldén's external logic. Finally, it is worth mentioning that S.C. Kleene independently introduced the internal fragments of both logics in his [17]. ...
Preprint
The proper quasivariety BCA of Bochvar algebras, which serves as the equivalent algebraic semantics of Bochvar's external logic, was introduced by Finn and Grigolia and extensively studied in a recent work by two of these authors. In this paper, we show that the algebraic category of Bochvar algebras is equivalent to a category whose objects are pairs consisting of a Boolean algebra and a meet-subsemilattice (with unit) of the same. Furthermore, we provide an axiomatisation of the variety V(BCA) generated by Bochvar algebras. Finally, we axiomatise the join of Boolean algebras and semilattices within the lattice of subvarieties of V(BCA).
... A natural ordering, reflecting differences in the 'measure of truth' of these elements, is f < ? < t. The meet (minimum) ∧, join (maximum) ∨ and the order-reversing involution ¬, defined by ¬t = f, ¬f = t and ¬? = ?, are taken to be the basic operators on ≤ for defining the conjunction, disjunction, and negation connectives (respectively) of Kleene's well-known three-valued logic (see [44]). Another operator which will be useful in the sequel is defined as follows: a ⊃ b = t if a ∈ {f, ?} and a ⊃ b = b otherwise (see [42] for some explanations why this operator is useful for defining an implication connective). ...
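A minimal encoding of these operators (ours, following the definitions quoted above) shows why ⊃ differs from the material conditional definable from ¬ and ∨: unlike the latter, it makes a ⊃ a come out true even for the indeterminate value.

```python
T, F, Q = "t", "f", "?"
ORDER = {F: 0, Q: 1, T: 2}           # the truth ordering f < ? < t

def k_and(a, b):                     # meet (minimum) in the ordering
    return min(a, b, key=ORDER.get)

def k_or(a, b):                      # join (maximum) in the ordering
    return max(a, b, key=ORDER.get)

def k_not(a):                        # order-reversing involution
    return {T: F, F: T, Q: Q}[a]

def imp(a, b):
    # a ⊃ b: true whenever the antecedent is f or ?, otherwise b.
    return T if a in (F, Q) else b

# ? ⊃ ? is true, but the material conditional ¬? ∨ ? is only ?:
assert imp(Q, Q) == T
assert k_or(k_not(Q), Q) == Q
assert imp(T, F) == F                # ⊃ is still classical on known values
```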
Preprint
We add strong negation N to classical logic and interpret the attack relation "x attacks y" in argumentation as x → Ny. We write a corresponding object-level (using N only) classical theory for each argumentation network and show that the classical models of this theory correspond exactly to the complete extensions of the argumentation network. We show by example how this approach simplifies the study of abstract argumentation networks. We compare with other translations of abstract argumentation networks into logic, such as classical predicate logic or modal logics, or logic programming, and we also compare with Abstract Dialectical Frameworks.
... The theoretical feasibility of this process, in the context of recursive functions, has already been established in (Kleene 1952) and is known as Kleene's S-M-N theorem. However, while Kleene was concerned with theoretical issues of computability and his construction often yields functions which are more complex to evaluate than the original, the goal of partial evaluation is to exploit the static input in order to derive more efficient programs. ...
Preprint
Program specialisation aims at improving the overall performance of programs by performing source to source transformations. A common approach within functional and logic programming, known respectively as partial evaluation and partial deduction, is to exploit partial knowledge about the input. It is achieved through a well-automated application of parts of the Burstall-Darlington unfold/fold transformation framework. The main challenge in developing systems is to design automatic control that ensures correctness, efficiency, and termination. This survey and tutorial presents the main developments in controlling partial deduction over the past 10 years and analyses their respective merits and shortcomings. It ends with an assessment of current achievements and sketches some remaining research challenges.
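The contrast drawn above between Kleene's S-m-n construction and partial evaluation can be illustrated with a toy example (names are ours; a real partial evaluator would emit unfolded residual code, e.g. `base * base * base`, rather than a closure over the original program):

```python
def power(base, exp):
    # The general two-argument program to be specialised.
    result = 1
    for _ in range(exp):
        result *= base
    return result

def specialize_exp(exp):
    """S-m-n style specialisation: fix the static input `exp`.
    Kleene's theorem only guarantees that such a one-argument
    program exists; it says nothing about efficiency. A partial
    evaluator would additionally unfold the loop at this point."""
    def specialized(base):
        return power(base, exp)
    return specialized

cube = specialize_exp(3)
assert cube(2) == 8 and cube(5) == 125
```

This is precisely the gap the snippet points at: the closure above is, if anything, slightly slower than the original program, whereas partial evaluation aims to exploit the static input to produce a faster residual program.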
... In [10], Kleene discussed various three-valued logics that are extensions of Boolean logic. McCarthy in [14] first studied the three-valued non-commutative logic in the context of programming languages. ...
Preprint
This paper introduces the notions of atoms and atomicity in C-algebras and obtains a characterisation of atoms in the C-algebra of transformations. Further, this work presents some necessary conditions and sufficient conditions for the atomicity of C-algebras and shows that the class of finite atomic C-algebras is precisely that of finite adas. This paper also uses the if-then-else action to study the structure of C-algebras and classify the elements of the C-algebra of transformations.
... However, the domain of F is allowed to be bigger than that of ξX. Therefore, F being a realizer of f does not translate to the diagram being commutative in the usual way (Figure 1). On the Baire space there exists a well-established computability theory originating from [Kle52]; see [Lon05] for an overview. A functional F :⊆ B → B is called computable if there is an oracle Turing machine M^? ...
Preprint
This paper investigates second-order representations in the sense of Kawamura and Cook for spaces of integrable functions that regularly show up in analysis. It builds upon prior work about the space of continuous functions on the unit interval: Kawamura and Cook introduced a representation inducing the right complexity classes and proved that it is the weakest second-order representation such that evaluation is polynomial-time computable. The first part of this paper provides a similar representation for the space of integrable functions on a bounded subset of Euclidean space: the weakest representation rendering integration over boxes polynomial-time computable. In contrast to the representation of continuous functions, however, this representation turns out to be discontinuous with respect to both the norm and the weak topology. The second part modifies the representation to be continuous and generalizes it to L^p-spaces. The arising representations are proven to be computably equivalent to the standard representations of these spaces as metric spaces and to still render integration polynomial-time computable. The family is extended to cover Sobolev spaces on the unit interval, where less basic operations like differentiation and some Sobolev embeddings are shown to be polynomial-time computable. Finally, as a further justification, quantitative versions of the Arzelà-Ascoli and Fréchet-Kolmogorov Theorems are presented and used to argue that these representations fulfill a minimality condition. To provide tight bounds for the Fréchet-Kolmogorov Theorem, a form of exponential-time computability of the L^p norm is proven.
... To define the satisfaction relation on theories, we extend the interpretation of symbols to arbitrary terms and formulas using the Kleene truth assignments (Kleene 1952). For a theory T and a partial structure S, we say that S is a model of T (in symbols, S ⊨ T) if T^S = t and S is two-valued. ...
Preprint
The knowledge base paradigm aims to express domain knowledge in a rich formal language, and to use this domain knowledge as a knowledge base to solve various problems and tasks that arise in the domain by applying multiple forms of inference. As such, the paradigm applies a strict separation of concerns between information and problem solving. In this paper, we analyze the principles and feasibility of the knowledge base paradigm in the context of an important class of applications: interactive configuration problems. In interactive configuration problems, a configuration of interrelated objects under constraints is sought, with the system assisting the user in reaching an intended configuration. It is widely recognized in industry that good software solutions for these problems are very difficult to develop. We investigate such problems from the perspective of the KB paradigm. We show that multiple functionalities in this domain can be achieved by applying different forms of logical inference on a formal specification of the configuration domain. We report on a proof of concept of this approach in a real-life application with a banking company. To appear in Theory and Practice of Logic Programming (TPLP).
... For a detailed presentation of metamathematics, see Kleene (1971). For further discussions of Gödel's incompleteness theorem, see Budiansky (2021) and Goldstein (2005). ...
Chapter
Full-text available
Mathematics is an activity—something we do—not just something inert that we study. This rich collection begins from that premise to explore the various social influences, institutional forces and lived realities that shape and mould the study and practice of mathematics, and are moulded by it in turn. These twenty-one essays explore questions of mathematics as a topic of philosophy, but also the nature and purpose of mathematics education and the role of mathematics in moulding citizens. It challenges the biases and prejudices inherent within uninformed histories of mathematics, including problems of white supremacy, the denial of cultural difference and the global homogenization of teaching methods. In particular, the book contrasts the effectiveness of mathematics and science in modelling physical phenomena and solving technical problems with its ineffectiveness in modelling social phenomena and solving human problems, and urges us to consider how mathematics might better meet the urgent crises of our age. The book addresses anybody who is interested in reflecting on the role of mathematics in society from different perspectives. It allows mathematicians to ponder about the cultural connections of mathematics and provides new perspectives for philosophical, sociological and cultural studies of mathematics. Because of the book’s emphasis on education in mathematics, it is especially interesting for mathematics teachers and teacher educators to challenge their understanding of the subject.
... In other words, if any of the four applications of f are undefined, weak associativity "forgives" the equality requirement. The two approaches to associativity for functions correspond to the two notions of equality for partial functions, which date back to work of Kleene [Kle52]. Formally, one should speak of relations rather than functions. However, the longstanding convention in computer science is to refer to such objects as "multivalued functions" (see [BLS84,BLS85,Sel96]). Similarly, the notation "set-f (...)" is also standard in the literature on multivalued functions, and we follow it. ...
Preprint
The nondeterministic advice complexity of the P-selective sets is known to be exactly linear. Regarding the deterministic advice complexity of the P-selective sets--i.e., the amount of Karp--Lipton advice needed for polynomial-time machines to recognize them in general--the best current upper bound is quadratic [Ko, 1983] and the best current lower bound is linear [Hemaspaandra and Torenvliet, 1996]. We prove that every associatively P-selective set is commutatively, associatively P-selective. Using this, we establish an algebraic sufficient condition for the P-selective sets to have a linear upper bound (which thus would match the existing lower bound) on their deterministic advice complexity: If all P-selective sets are associatively P-selective then the deterministic advice complexity of the P-selective sets is linear. The weakest previously known sufficient condition was P=NP. We also establish related results for algebraic properties of, and advice complexity of, the nondeterministically selective sets.
... Let us note that from the injective maps j and j I we can construct a bijective correspondence between B V A and B V G by a Schroeder-Bernstein type construction (see [Kle52]), and this can be lifted to an algebra isometry. But for us, the isomorphisms induced by maps like j and j I (these are certainly not unique) will be of greatest interest. ...
Preprint
This work proposes a complete algebraic model for classical information theory. As a precursor, the essential probabilistic concepts have been defined and analyzed in the algebraic setting. Examples from probability and information theory demonstrate that, in addition to the theoretical insights provided by the algebraic model, one obtains new computational and analytical tools. Several important theorems of classical probability and information theory are formulated and proved in the algebraic framework.
... Recently, Płonka sums have been surprisingly connected to logic. Indeed, the algebraic semantics of one of the logics within the so-called Kleene family [15], namely paraconsistent Weak Kleene logic (PWK for short), coincides with the regularization of the variety of Boolean algebras, first axiomatised in [24,25]. ...
Preprint
Płonka sums are an algebraic construction, similar in some sense to direct limits, which allows one to represent classes of algebras defined by means of regular identities (namely, those equations where the same set of variables appears on both sides). Recently, Płonka sums have been connected to logic, as they provide algebraic semantics for logics obtained by imposing a syntactic filter on given logics. In this paper, I present a very general topological duality for classes of algebras admitting a Płonka sum representation in terms of dualisable algebras.
... The Church-Turing Hypothesis (CTH) claims that every function which would naturally be regarded as computable is computable under his [i.e. Turing's] definition, i.e. by one of his machines [Kle52, p. 376]. ...
Preprint
We turn 'the' Church-Turing Hypothesis from an ambiguous source of sensational speculations into a (collection of) sound and well-defined scientific problem(s): Examining recent controversies, and causes for misunderstanding, concerning the state of the Church-Turing Hypothesis (CTH) suggests studying the CTH relative to an arbitrary but specific physical theory, rather than vaguely referring to "nature" in general. To this end we combine (and compare) physical structuralism with (models of computation in) complexity theory. The benefit of this formal framework is illustrated by reporting on some previous, and giving one new, example result(s) of computability and complexity in computational physics.
... The sequent calculus G bMDL consists of the rules in Fig. 1 together with the standard propositional G3-rules (with principal formulae copied into the premisses) [14] and the standard left rule for the constant ⊥. We write ⊢ G bMDL Γ ⇒ ∆ if Γ ⇒ ∆ is derivable using these rules. ...
Preprint
Starting with the deontic principles in Mīmāṃsā texts we introduce a new deontic logic. We use general proof-theoretic methods to obtain a cut-free sequent calculus for this logic, resulting in decidability, complexity results and neighbourhood semantics. The latter is used to analyse a well-known example of conflicting obligations from the Vedas.
... A curious observation is that when the volume of literature grows in time, the average number of citations a paper receives, N_cit, is bigger than the average number of references in a paper. There is no contradiction here if we consider an infinite network of scientific papers, as one can show using methods of set theory (Kleene, 1952) that there are one-to-many mappings of an infinite set onto itself. When we consider a real, i.e. finite, network, where the number of citations is obviously equal to the number of references, we recall that N_cit, as computed in Eq. 22, is the number of citations accumulated by a paper during its cited lifetime. ...
Preprint
Recently we proposed a model in which when a scientist writes a manuscript, he picks up several random papers, cites them and also copies a fraction of their references (cond-mat/0305150). The model was stimulated by our discovery that a majority of scientific citations are copied from the lists of references used in other papers (cond-mat/0212043). It accounted quantitatively for several properties of empirically observed distribution of citations. However, important features, such as power-law distribution of citations to papers published during the same year and the fact that the average rate of citing decreases with aging of a paper, were not accounted for by that model. Here we propose a modified model: when a scientist writes a manuscript, he picks up several random recent papers, cites them and also copies some of their references. The difference with the original model is the word recent. We solve the model using methods of the theory of branching processes, and find that it can explain the aforementioned features of citation distribution, which our original model couldn't account for. The model can also explain "sleeping beauties in science", i.e., papers that are little cited for a decade or so, and later "awake" and get a lot of citations. Although much can be understood from purely random models, we find that to obtain a good quantitative agreement with empirical citation data one must introduce Darwinian fitness parameter for the papers.
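The copying mechanism described in the abstract can be sketched as a toy simulation; all parameters here (number of papers, the window of "recent" papers, the copy probability) are illustrative assumptions, not the values used in the cited work:

```python
import random

def simulate_citations(n_papers=1000, n_direct=3, copy_frac=0.2, seed=1):
    """Toy version of the citation-copying model: each new paper cites a
    few random *recent* papers and copies a fraction of their references."""
    random.seed(seed)
    refs = []                      # refs[i] = list of papers cited by paper i
    citations = [0] * n_papers
    for i in range(n_papers):
        cited = set()
        if i > 0:
            recent = list(range(max(0, i - 100), i))   # a "recent" window
            for j in random.sample(recent, min(n_direct, len(recent))):
                cited.add(j)                            # direct citation
                for r in refs[j]:                       # copy some references
                    if random.random() < copy_frac:
                        cited.add(r)
        refs.append(sorted(cited))
        for j in cited:
            citations[j] += 1
    return citations

cites = simulate_citations()
```

With a subcritical copy fraction the average reference-list length stays bounded, while older papers accumulate citations over their "cited lifetime", qualitatively matching the skewed distributions discussed in the abstract.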
... In [14] Kleene discussed various three-valued logics that are extensions of Boolean logic. McCarthy first studied the three-valued non-commutative logic in the context of programming languages in [18]. ...
Preprint
In order to study the axiomatization of the if-then-else construct over possibly non-halting programs and tests, this paper introduces the notion of C-sets by considering the tests from an abstract C-algebra. When the C-algebra is an ada, the axiomatization is shown to be complete by obtaining a subdirect representation of C-sets. Further, this paper considers the equality test with the if-then-else construct and gives a complete axiomatization through the notion of agreeable C-sets.
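The contrast between Kleene's symmetric connectives and McCarthy's sequential (left-to-right) connectives mentioned above can be sketched in a few lines; `U` is an assumed marker for a non-halting test, not notation from the cited papers:

```python
U = "u"  # stands for a non-halting / undefined test

def mccarthy_and(p, q):
    # McCarthy (short-circuit) conjunction: evaluate the left operand
    # first; if it diverges, the whole test diverges.
    if p == U:
        return U
    if p is False:
        return False          # the right operand is never evaluated
    return q

def kleene_and(p, q):
    # Strong Kleene conjunction: symmetric in its arguments.
    if p is False or q is False:
        return False
    if p == U or q == U:
        return U
    return True

# Non-commutativity of the McCarthy connective:
left = mccarthy_and(False, U)   # False AND u  = False
right = mccarthy_and(U, False)  # u AND False  = u
```

This is exactly the behaviour of `and`/`or` in most programming languages, which is why McCarthy's logic is the natural one for tests over possibly non-halting programs.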
... Gödel [1] employed an effective numbering of first-order formulae in the proof of his seminal incompleteness theorems. Kleene [2] (see also Theorem XXII in Ref. [3]) constructed the celebrated numbering of the family of all partial recursive functions: this is a list enumerating all unary partial recursive functions. The key property of the numbering is that the associated binary universal function is also partial recursive. ...
Article
Full-text available
The theory of numberings studies uniform computations for families of mathematical objects. In this area, computability-theoretic properties of at most countable families of sets S are typically classified via the corresponding Rogers upper semilattices. In most cases, a Rogers semilattice cannot be a lattice. Working within the framework of Formal Concept Analysis, we develop two new approaches to the classification of families S. Similarly to the classical theory of numberings, each of the approaches assigns to a family S its own concept lattice. The first approach captures the cardinality of a family S: if S contains more than 2 elements, then the corresponding concept lattice FC1(S) is a modular lattice of height 3 such that the number of its atoms is equal to the cardinality of S. Our second approach gives a much richer environment. We prove that for any countable poset P, there exists a family S such that the induced concept lattice FC2(S) is isomorphic to the Dedekind-MacNeille completion of P. We also establish connections with the class of enumerative lattices introduced by Hoyrup and Rojas in their studies of algorithmic randomness. We show that every lattice FC2(S) is anti-isomorphic to an enumerative lattice. In addition, every enumerative lattice is anti-isomorphic to a sublattice of the lattice FC2(S) for some family S.
... Textbooks on symbolic logic from the 1950s to the 1970s, which have already become classics, as a rule contained the term "mathematical logic" or "metamathematics" in their titles (see in particular: [Kleene 1952; 1967; Mendelson 1966]). This meant that the domain of application of such logic was considered to be, first and foremost, mathematical knowledge. ...
Article
Full-text available
This paper argues that interpreting logic solely as a formal discipline is unjustified, as this science has always included a significant informal component related to its application in argumentative discourse. The Aristotelian paradigm, based on the subject-predicate approach to the logical-philosophical analysis of language, has significantly influenced the history of logic. The functional analysis of language introduced by Gottlob Frege marked a shift to the modern logical paradigm. This led to the rise of symbolic logic, disrupting for a time the established balance between the formal and informal components of logical knowledge. The application of the latest analytical methods in informal logic has enabled research in this field to meet contemporary scientific standards. This development helps restore the balance between the formal and informal aspects of general logic.
... Definition 1.1 (Kleene three-valued logic [Kle52]). Kleene's three-valued strong logic of indeterminacy extends the two-valued Boolean logic by a third value u. ...
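The third value in Definition 1.1 can be made concrete with the usual order-theoretic presentation of strong Kleene logic; a minimal sketch, encoding u as 1/2 (an assumed encoding, chosen so that the connectives become min, max, and 1 - x):

```python
# Strong Kleene logic on the values {0, 1/2, 1}, with u = 1/2:
# conjunction = min, disjunction = max, negation = 1 - x.
U = 0.5

def neg(x):
    return 1 - x

def conj(x, y):
    return min(x, y)

def disj(x, y):
    return max(x, y)

# u behaves like an unknown bit: a disjunction with a true disjunct
# is true regardless of u, while u with u stays unknown.
a = disj(U, 1)        # -> 1
b = disj(U, U)        # -> 0.5
c = conj(neg(U), U)   # -> 0.5 (u is not "the same unknown" twice)
```

The last line illustrates why u models an unstable or unknown signal rather than a hidden Boolean: neg(u) AND u does not collapse to false.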
Preprint
This paper studies the hazard-free formula complexity of Boolean functions. As our main result, we prove that unate functions are the only Boolean functions for which the monotone formula complexity of the hazard-derivative equals the hazard-free formula complexity of the function itself. Consequently, every non-unate function breaks the so-called monotone barrier, as introduced and discussed by Ikenmeyer, Komarath, and Saurabh (ITCS 2023). Our second main result shows that the hazard-free formula complexity of random Boolean functions is at most 2^{(1+o(1))n}. Prior to this, no better upper bound than O(3^n) was known. Notably, unlike in the general case of Boolean circuits and formulas, where the typical complexity matches that of the multiplexer function, the hazard-free formula complexity is smaller than that of the optimal hazard-free formula for the multiplexer by a factor exponential in n. Additionally, we explore the hazard-free formula complexity of the block composition of Boolean functions and obtain a result in the hazard-free setting that is analogous to a result of Karchmer, Raz, and Wigderson (Computational Complexity, 1995) in the monotone setting. We demonstrate that our result implies a lower bound on the hazard-free formula depth of the block composition of the set covering function with the multiplexer function, which breaks the monotone barrier.
... This distinction underscores the importance of recognizing the unique roles each plays: Boolean algebra as a powerful tool in binary mathematics, and logic as the foundation of rational thought across all domains [30,31,32]. ...
Article
In clinical practice toxicity grading is guided by scales such as the Common Terminology Criteria for Adverse Events. These scales assign grades (usually from 1 to 5) to various toxicities based on severity. Cellular immunotherapies, whose mechanism of action includes directing T cells to target cells, can be associated with special side effects such as cytokine release syndrome or neurologic toxicity. The American Society for Transplantation and Cellular Therapy has developed an easily applicable, logical and concise system to categorize and grade cytokine release syndrome and neurologic toxicity. Cytokine release syndrome is a systemic inflammatory response that can occur after immune cell therapy. Neurologic toxicity can also occur after immune cell therapy; it can include encephalopathy, delirium, headache, anxiety, sleep disorder, dizziness, aphasia, motor dysfunction, tremor, ataxia, seizure, dyscalculia, and myoclonus. Boolean algebra can be used to automate and standardize this grading process by translating it into a mathematical framework that combines different clinical signs and symptoms. In this study we apply Boolean algebra as a mathematical tool to define and grade cytokine release syndrome and neurologic toxicity by the criteria of the American Society for Transplantation and Cellular Therapy.
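The general idea of encoding a grading scale as Boolean combinations of clinical predicates can be sketched as follows; the predicate names and grade thresholds below are entirely illustrative and hypothetical, and do not reproduce the actual ASTCT criteria:

```python
def crs_grade(fever, hypotension_on_pressors, hypoxia_high_flow):
    """Illustrative Boolean grading function: each argument is a Boolean
    clinical predicate, and grades are defined by AND/OR combinations.
    The specific conditions here are made up for demonstration only."""
    if fever and hypotension_on_pressors and hypoxia_high_flow:
        return 3
    if fever and (hypotension_on_pressors or hypoxia_high_flow):
        return 2
    if fever:
        return 1
    return 0

g = crs_grade(fever=True, hypotension_on_pressors=False, hypoxia_high_flow=True)
```

The point of the Boolean formulation is that the grade becomes a total, deterministic function of the observed predicates, which is what makes the grading automatable and auditable.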
Preprint
The arithmetization of syntax introduced by Kurt Gödel in 1931 enabled him to prove that any effectively axiomatizable system capable of representing elementary arithmetic is either incomplete or inconsistent. Gödel's encoding relied fundamentally on the unique prime factorization of natural numbers. Each formula, term, and proof was assigned a unique Gödel number through the exponents of successive prime bases. This encoding scheme guaranteed injectivity and allowed syntactic operations to be interpreted within arithmetic itself. The general technique facilitated the internal simulation of meta-theoretical predicates, such as provability, via purely arithmetic formulae [1]. The following construction introduces a second-order Gödelization scheme. This layered method builds upon the classical prime-based Gödel numbering of formulas, but adds an alternative arithmetization that avoids further dependence on prime factorization. Specifically, for each formula ϕ with standard Gödel number n, a second code is assigned by the mapping γ(n) = n! + 2. This mapping is injective, arithmetically definable, and effectively computable.
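The layered mapping described above is easy to realize concretely. A minimal sketch, restricted to codes n >= 2 (an assumption on our part: n! is strictly increasing only from n = 1 on, since 0! = 1!, so injectivity needs the first-layer Gödel numbers to avoid 0 and 1):

```python
from math import factorial

def gamma(n):
    """Second-layer code from the construction above: gamma(n) = n! + 2."""
    return factorial(n) + 2

# Injectivity on n >= 2 follows because n! is strictly increasing there.
codes = [gamma(n) for n in range(2, 12)]
injective = len(codes) == len(set(codes))
```

The map is effectively computable (factorial is primitive recursive) and arithmetically definable, which is all the construction needs from the second layer.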
Article
Full-text available
To the best of our knowledge, we offer the first IoT-Osmotic simulator supporting 6G and Cloud infrastructures, leveraging the similarities in Software-Defined Wide Area Network (SD-WAN) architectures when used in Osmotic architectures and User-Centric Cell-Free mMIMO (massive multiple-input multiple-output) architectures. Our simulator acts as a simulator orchestrator, supporting the interaction with a patient digital twin generating patient healthcare data (vital signs and emergency alerts) and a VANET simulator (SUMO), both leading to IoT data streams towards the cloud through pre-initiated MQTT protocols. This contextualises our approach within the healthcare domain while showcasing the possibility of orchestrating different simulators at the same time. The combined provision of these two aspects, joined with the addition of a ring network connecting all the first-mile edge nodes (i.e., access points), enables the definition of new packet routing algorithms, streamlining previous solutions from SD-WAN architectures, thus showing the benefit of 6G architectures in achieving better network load balancing, as well as showcasing the limitations of previous approaches. The simulated 6G architecture, combined with the optimal routing algorithm and MEL (MICROELEMENTS software components) allocation policy, was able to reduce the time required to route all communications from IoT devices to the cloud by up to 50.4% compared to analogous routing algorithms used within 5G architectures.
Preprint
Full-text available
Mathematics has long been constrained by Gödel’s Incompleteness Theorems, which assert that in any sufficiently expressive system, there exist true statements that are inherently unprovable. This limitation has led to undecidable problems, non-computable functions, and incomplete mathematical frameworks. Anti-Set Logic (ASL) provides a radically new foundation that eliminates these constraints, ensuring that all mathematical truths are provable at some finite stage of recursion. At the core of ASL is the Anti-Set ¬S, which represents absolute negation—the absence of structure. From this void, mathematics emerges dynamically through recursive expansion, ensuring that no mathematical truth remains permanently inaccessible. ASL replaces classical static axiomatic assumptions with a self-expanding proof system, guaranteeing that all conjectures, functions, and algebraic structures must be provable or refutable. This work introduces the Anti-Gödelian System, which redefines proof, truth, and computability. By eliminating undecidability, reformulating calculus, and reconstructing algebra and topology, ASL provides a complete, provably computable mathematical landscape, transforming fields from pure mathematics to physics, artificial intelligence, and formal logic. Keywords: Anti-Set Logic, Anti-Gödelian System, mathematical completeness, recursive proof expansion, truth-proof equivalence, undecidability, computability, Gödel’s incompleteness, self-expanding mathematics, provability, number theory, algebra, topology, calculus, artificial intelligence, theoretical physics, quantum mechanics, proof theory, formal logic, foundational mathematics. 44 pages.
Preprint
Full-text available
This paper studies the computability of the secrecy capacity of fast-fading wiretap channels from an algorithmic perspective, examining whether it can be computed algorithmically or not. To address this question, the concept of Turing machines is used, which establishes fundamental performance limits of digital computers. It is shown that certain computable continuous fading probability distribution functions yield secrecy capacities that are non-computable numbers. Additionally, we assess the secrecy capacity's classification within the arithmetical hierarchy, revealing the absence of computable achievability and converse bounds.
Preprint
Rabi and Sherman [RS97,RS93] proved that the hardness of factoring is a sufficient condition for there to exist one-way functions (i.e., p-time computable, honest, p-time noninvertible functions; this paper is in the worst-case model, not the average-case model) that are total, commutative, and associative but not strongly noninvertible. In this paper we improve the sufficient condition to "P does not equal NP." More generally, in this paper we completely characterize which types of one-way functions stand or fall together with (plain) one-way functions--equivalently, stand or fall together with P not equaling NP. We look at the four attributes used in Rabi and Sherman's seminal work on algebraic properties of one-way functions (see [RS97,RS93]) and subsequent papers--strongness (of noninvertibility), totality, commutativity, and associativity--and for each attribute, we allow it to be required to hold, required to fail, or "don't care." In this categorization there are 3^4 = 81 potential types of one-way functions. We prove that each of these 81 feature-laden types stands or falls together with the existence of (plain) one-way functions.
Preprint
This paper presents a systematic study of the prehistory of the traditional subsystems of second-order arithmetic that feature prominently in the reverse mathematics program of Friedman and Simpson. We look in particular at: (i) the long arc from Poincaré to Feferman as concerns arithmetic definability and provability, (ii) the interplay between finitism and the formalization of analysis in the lecture notes and publications of Hilbert and Bernays, (iii) the uncertainty as to the constructive status of principles equivalent to Weak König's Lemma, and (iv) the large-scale intellectual backdrop to arithmetical transfinite recursion in descriptive set theory and its effectivization by Borel, Lusin, Addison, and others.
Preprint
Full-text available
Church's synthesis problem asks whether there exists a finite-state stream transducer satisfying a given input-output specification. For specifications written in Monadic Second-Order Logic (MSO) over infinite words, Church's synthesis can theoretically be solved algorithmically using automata and games. We revisit Church's synthesis via the Curry-Howard correspondence by introducing SMSO, an intuitionistic variant of MSO over infinite words, which is shown to be sound and complete w.r.t. synthesis thanks to an automata-based realizability model.
Preprint
We prove that for the intermediate logics with the disjunction property any basis of admissible rules can be reduced to a basis of admissible m-rules (multiple-conclusion rules), and every basis of admissible m-rules can be reduced to a basis of admissible rules. These results can be generalized to a broad class of logics including positive logic and its extensions, Johansson logic, normal extensions of S4, n-transitive logics and intuitionistic modal logics.
Preprint
Full-text available
There has been a recent interest in hierarchical generalisations of classic incompleteness results. This paper provides evidence that such generalisations are readily obtainable from suitably hierarchical versions of the principles used in the original proof. By collecting such principles, we prove hierarchical versions of Mostowski's theorem on independent formulae, Kripke's theorem on flexible formulae, and a number of further generalisations thereof. As a corollary, we obtain the expected result that the formula expressing "T is Σ_n-ill" is a canonical example of a Σ_{n+1} formula that is Π_{n+1}-conservative over T.
Article
Book
Full-text available
This book studies data, analytics, and intelligence using Boolean structure. Chapters dive into the theories, foundations, technologies, and methods of data, analytics, and intelligence. The primary aim of this book is to convey the theories and technologies of data, analytics, and intelligence with applications to readers based on systematic generalization and specialization. Sun uses the Boolean structure to deconstruct all books and papers related to data, analytics, and intelligence and to reorganize them to reshape the world of big data, data analytics, analytics intelligence, data science, and artificial intelligence. Multi-industry applications in business, management, and decision-making are provided. Cutting-edge theories, technologies, and applications of data, analytics, and intelligence and their integration are also explored. Overall, this book provides original insights on sharing computing, insight computing, platform computing, a calculus of intelligent analytics and intelligent business analytics, meta computing, data analyticizing, DDPP (descriptive, diagnostic, predictive, and prescriptive) computing, and analytics. This book is a useful resource with multi-industry applications for scientists, engineers, data analysts, educators, and university students.
Book
Full-text available
The 4th international scientific-practical conference "HIGHER EDUCATION IN THE REGIONS: REALITIES AND PERSPECTIVES" was organized in hybrid form by the Guba branch of the Azerbaijan State Pedagogical University (ASPU) on October 11-12, 2024, in line with the education policy of the Republic of Azerbaijan and with the measures taken by the Ministry of Science and Education to prepare scientific foundations, strengthen innovation, and bring the potential of natural and geographical conditions into production and practice. The conference was organized by order No. 237 dated July 19, 2024, of the director of the Guba Branch of ASPU, Associate Professor Yusif Aliyev, under the relevant decision approved by the Ministry of Science and Education of the Republic of Azerbaijan and command No. 3/124 of July 10, 2024, of ASPU rector Professor Jafar Jafarov, with a view to developing higher education in the regions, increasing personnel potential, and establishing mutual scientific and international relations.
Article
We introduce a subclass of concurrent game structures (CGS) with imperfect information in which agents are endowed with private data-sharing capabilities. Importantly, our CGSs are such that it is still decidable to model-check them against a relevant fragment of ATL. These systems can be thought of as a generalisation of architectures allowing information forks, that is, cases where strategic abilities lead to certain agents outside a coalition privately sharing information with selected agents inside that coalition. Moreover, in our case, in the initial states of the system, we allow information forks from agents outside a given set A to agents inside this group A. For this reason, together with the fact that the communication in our models underpins a specialised form of broadcast, we call our formalism A-cast systems. To underline, the fragment of ATL for which we show the model-checking problem to be decidable over A-cast systems is a large and significant one: it expresses coalitions over agents in any subset of the set A. Indeed, as we show, our systems and this ATL fragment can encode security problems that are notoriously hard to express faithfully: terrorist-fraud attacks in identity schemes.
Article
Full-text available
This paper is devoted to the class of three-valued logics that preserve both the classical values and the intermediate value. The key contribution of the paper is the construction of a three-valued logic with one designated value that is equivalent in expressive power to the logic Pac. This result is obtained by using an alternative set of basic operations (conjunction, disjunction, and negation) without the implication present in the standard formulation of Pac. The paper thus proposes a variant of the logic Pac and its counterpart with one designated value, which share a common language and constitute extensions of the conjunctive-disjunctive fragment of classical logic by a non-classical negation.
Article
Full-text available
The set of ST-valid inferences is neither the intersection nor the union of the sets of K3-valid and LP-valid inferences, but despite the proximity to both systems, an extensional characterization of ST in terms of a natural set-theoretic operation on the sets of K3-valid and LP-valid inferences is still wanting. In this paper, we show that it is their relational product. Similarly, we prove that the set of TS-valid inferences can be identified using a dual notion, namely as the relational sum of the sets of LP-valid and K3-valid inferences. We discuss links between these results and the interpolation property of classical logic. We also use those results to revisit the duality between ST and TS. We present a notion of duality on which ST and TS are dual in exactly the same sense in which LP and K3 are dual to each other.
Article
Full-text available
This paper considers the degrees of maximality of consequence in the class of C-extending three-valued logics whose languages have minimal expressive power. A logic is called C-extending if its operations coincide with those of classical logic when their domain is restricted to the classical truth values. The degree of maximality of a logic is understood as the set of all its deductive extensions in the same language. Every three-valued C-extending logic can be regarded as a linguistic extension of one of ten three-valued logics. We estimate the degree of maximality for each of these logics. Where the degree of maximality is finite, we obtain exact values; where it turns out to be infinite, we give a lower bound on the cardinality of the lattice of deductive extensions of the corresponding logic. The exception is one system, whose consequence is shown to have a degree of maximality of the cardinality of the continuum. The work studies proof-theoretic properties of three-valued logics on the basis of the expressive capabilities of their languages. The results obtained raise a number of questions that outline new directions of research in this area.