Selmer Bringsjord’s research while affiliated with Rensselaer Polytechnic Institute and other places

Publications (242)


Robot Cognition Simultaneously Social, Multi-Modal, Hypothetico-Causal, and Attention-Guided … Solves the Symbol Grounding Problem
  • Chapter

February 2025 · 7 Reads

Selmer Bringsjord · John Slowik · [...]

The so-called symbol-grounding problem (SGP) has long plagued cognitive robotics (and AI). If Rob, a humanoid household robot, is asked to remove and discard the faded rose from among the dozen in the vase, and accedes, does Rob grasp the formulae/data he processed to get the job done? Does he, for instance, really understand the formula inside him that logicizes “There’s exactly one faded rose in the vase”? Some (e.g., Searle, Harnad, Bringsjord) have presented and pressed a negative answer, and have held that the problem of engineering a robot for whom the answer is ‘Yes’ is, or at least may well be, insoluble. This negativity increases if Rob must understand that giving a faded rose to someone as a sign of love might not be socially adept. We change the landscape by bringing to bear, in a cognitive robot, an unprecedented, intertwined quartet of capacities that make all the difference: namely, (i) social planning; (ii) multi-modal perception; (iii) pre-meditated attention to guide such perception; and (iv) automated defeasible reasoning about causation. In other words, a genuinely social robot that senses in varied ways under the guidance of how it directs its attention, and adjudicates among competing arguments for what it perceives, solves SGP, or at least a version thereof. An exemplar of such a robot is our PERI.2, which we demonstrate in an environment called ‘Logi-Forms,’ intelligent navigation of which requires social reasoning.


The Technology of Outrage: Bias in Artificial Intelligence
  • Preprint
  • File available

September 2024 · 64 Reads

Artificial intelligence and machine learning are increasingly used to offload decision making from people. In the past, one of the rationales for this replacement was that machines, unlike people, can be fair and unbiased. Evidence suggests otherwise. We begin by entertaining the ideas that algorithms can replace people and that algorithms cannot be biased. Taken as axioms, these statements quickly lead to absurdity. Spurred on by this result, we investigate the slogans more closely and identify equivocation surrounding the word 'bias.' We diagnose three forms of outrage (intellectual, moral, and political) that are at play when people react emotionally to algorithmic bias. Then we suggest three practical approaches to addressing bias that the AI community could take, which include clarifying the language around bias, developing new auditing methods for intelligent systems, and building certain capabilities into these systems. We conclude by offering a moral regarding the conversations about algorithmic bias that may transfer to other areas of artificial intelligence.


A Universal Intelligence Measure for Arithmetical Uncomputable Environments

July 2024 · 97 Reads

We propose an extension to Legg and Hutter’s universal intelligence (UI) measure to capture the intelligence of agents that operate in uncomputable environments classifiable on the Arithmetical Hierarchy. Our measure is based on computable environments relativized to a (potentially uncomputable) oracle. We motivate our metric as a natural extension to UI that expands the class of environments evaluated, at the price of further uncomputability. Our metric captures the intelligence of agents in uncomputable environments we care about, such as first-order theorem proving, and also lends itself to providing a notion of the intelligence of oracles. We end by proving some properties of the new measure, such as convergence (given certain assumptions about the complexity of uncomputable environments).
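
For orientation: Legg and Hutter’s original measure scores an agent $\pi$ by its expected cumulative reward in every computable environment $\mu$, weighted by the environment’s simplicity. Below is a minimal sketch of that measure together with one plausible shape for the oracle-relativized extension the abstract describes; the relativized notation ($\Upsilon^{O}$, $E^{O}$, $K^{O}$) is our illustration, not necessarily the authors’ exact definition:

\[
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu},
\qquad
\Upsilon^{O}(\pi) \;=\; \sum_{\mu \in E^{O}} 2^{-K^{O}(\mu)}\, V^{\pi}_{\mu},
\]

where $K$ is Kolmogorov complexity, $V^{\pi}_{\mu}$ is the expected total reward of $\pi$ in $\mu$, $E^{O}$ is the class of environments computable relative to the oracle $O$, and $K^{O}$ is complexity relative to $O$. Choosing $O$ from the Arithmetical Hierarchy then lets the sum range over uncomputable environments such as first-order theorem proving.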


Prolegomenon for a Family of Theorems Regarding Trustworthiness in Autonomous Artificial Agents

June 2024 · 1 Read

We lay some groundwork for a family of theorems that, in general, assert the following: as factors internal to an artificial agent rise or fall on their respective scales, where these factors seem to contribute to an increase in autonomy overall, the level of trust on the part of a human agent that this artificial agent will in fact deliver declines.
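
Purely as an illustrative schema (the symbols below are ours, not the paper’s), the shape of such theorems can be put as follows: if $T(h, a)$ denotes the level of trust a human agent $h$ places in artificial agent $a$, and $f_1, \ldots, f_n$ are the internal factors contributing to $a$’s overall autonomy, the claim is that $T$ is monotonically decreasing in each factor:

\[
f_i < f_i' \;\Longrightarrow\; T\big(h, a[f_i]\big) \;>\; T\big(h, a[f_i']\big), \qquad i = 1, \ldots, n,
\]

where $a[f_i]$ is agent $a$ with its $i$-th autonomy factor set to value $f_i$.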


[Figure: Results of Experiment 1: accuracy, mean confidence, and mean agreement by group.]
Illusory Arguments by Artificial Agents: Pernicious Legacy of the Sophists

May 2024 · 30 Reads

To diagnose someone’s reasoning today as “sophistry” is to say that this reasoning is at once persuasive (at least to a significant degree) and logically invalid. We begin by explaining that, despite some recent scholarly arguments to the contrary, the understanding of ‘sophistry’ and ‘sophistic’ underlying such a lay diagnosis is in fact firmly in line with the hallmarks of reasoning proffered by the ancient sophists themselves. Next, we supply a rigorous but readable definition of what constitutes sophistic reasoning (=sophistry). We then discuss “artificial” sophistry: the articulation of sophistic reasoning facilitated by artificial intelligence (AI) and promulgated in our increasingly digital world. Next, we present, economically, a particular kind of artificial sophistry, one embodied by an artificial agent: the lying machine. Afterward, we respond to some anticipated objections. We end with a few speculative thoughts about the limits (or lack thereof) of artificial sophistry, and what may be a rather dark future.


Spectra: An Expressive STRIPS-Inspired AI Planner Based on Automated Reasoning: (System Description)

May 2024 · 9 Reads · 2 Citations

KI - Künstliche Intelligenz

Research in automated planning traditionally focuses on model-based approaches that often sacrifice expressivity for computational efficiency. For artificial agents that operate in complex environments, however, the agent frequently needs to reason about the beliefs of other agents and must be capable of handling uncertainty. We present Spectra, a STRIPS-inspired AI planner built atop automated reasoning. Our system is expressive, in that we allow state spaces to be defined as arbitrary formulae. Spectra is also designed to be logic-agnostic: any logic can be used, as long as an automated reasoner exists that can perform entailment and question-answering over it. Spectra can handle environments of unbounded uncertainty; and with certain non-classical logics, our system can create plans under epistemic beliefs. We highlight all of these features using the cognitive calculus $\mathcal{DCC}$. Lastly, we discuss how, under this framework, a defeasible (= non-monotonic) logic can be used in conjunction with our planner in order to plan fully under uncertainty.
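
The core architectural idea, a planner parameterized by an entailment procedure rather than wired to one fixed logic, can be sketched in a few lines of Python. This is a toy reconstruction under our own assumptions, not Spectra’s actual code; every name below (Reasoner, Action, plan) is hypothetical.

# Illustrative sketch only: states are sets of formulae, and the planner is
# parameterized by *any* reasoner that can decide entailment over them.

from dataclasses import dataclass
from typing import FrozenSet, List, Optional, Protocol

Formula = str  # stand-in; a real system would use structured logical terms

class Reasoner(Protocol):
    """Any logic plugs in here, so long as it supports automated entailment."""
    def entails(self, state: FrozenSet[Formula], query: Formula) -> bool: ...

class AtomicReasoner:
    """Trivial classical case: entailment is just membership of an atom."""
    def entails(self, state: FrozenSet[Formula], query: Formula) -> bool:
        return query in state

@dataclass(frozen=True)
class Action:
    name: str
    pre: FrozenSet[Formula]      # preconditions, arbitrary formulae
    add: FrozenSet[Formula]      # effects asserted
    delete: FrozenSet[Formula]   # effects retracted

def plan(init: FrozenSet[Formula], goal: FrozenSet[Formula],
         actions: List[Action], r: Reasoner,
         depth: int = 8) -> Optional[List[str]]:
    """Depth-limited forward search; returns a list of action names or None."""
    frontier = [(init, [])]
    seen = set()
    while frontier:
        state, steps = frontier.pop()
        if all(r.entails(state, g) for g in goal):
            return steps
        if len(steps) >= depth or state in seen:
            continue
        seen.add(state)
        for a in actions:
            if all(r.entails(state, p) for p in a.pre):
                frontier.append(((state - a.delete) | a.add, steps + [a.name]))
    return None

# Toy usage, echoing the household-robot example above:
acts = [Action("discard_faded_rose",
               pre=frozenset({"faded(rose1)", "in(rose1, vase)"}),
               add=frozenset({"discarded(rose1)"}),
               delete=frozenset({"in(rose1, vase)"}))]
print(plan(frozenset({"faded(rose1)", "in(rose1, vase)"}),
           frozenset({"discarded(rose1)"}), acts, AtomicReasoner()))
# -> ['discard_faded_rose']

Swapping AtomicReasoner for a reasoner over an epistemic or defeasible logic leaves the search loop untouched; that separation is what the abstract means by logic-agnostic.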


[Figure: A full trio of clues is fogged over. Fog (courtesy of a fog machine) has appeared in the RAIR Lab, and the results are not good perception-wise.]
Argument-based inductive logics, with coverage of compromised perception

January 2024 · 61 Reads · 1 Citation

Frontiers in Artificial Intelligence

Formal deductive logic, used to express and reason over declarative, axiomatizable content, captures, we now know, essentially all of what is known in mathematics and physics, and captures as well the details of the proofs by which such knowledge has been secured. This is certainly impressive, but deductive logic alone cannot enable rational adjudication of arguments that are at variance (however much additional information is added). After affirming a fundamental directive, according to which argumentation should be the basis for human-centric AI, we introduce and employ both a deductive and—crucially—an inductive cognitive calculus. The former cognitive calculus, DCEC, is the deductive one and is used with our automated deductive reasoner ShadowProver; the latter, IDCEC, is inductive, is used with the automated inductive reasoner ShadowAdjudicator, and is based on human-used concepts of likelihood (and in some dialects of IDCEC, probability). We explain that ShadowAdjudicator centers around the concept of competing and nuanced arguments adjudicated non-monotonically through time. We make things clearer and more concrete by way of three case studies, in which our two automated reasoners are employed. Case Study 1 involves the famous Monty Hall Problem. Case Study 2 makes vivid the efficacy of our calculi and automated reasoners in simulations that involve a cognitive robot (PERI.2). In Case Study 3, as we explain, the simulation employs the cognitive architecture ARCADIA, which is designed to computationally model human-level cognition in ways that take perception and attention seriously. We also discuss a type of argument rarely analyzed in logic-based AI: arguments intended to persuade by leveraging human deficiencies. We end by sharing thoughts about the future of research and associated engineering of the type that we have displayed.
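
The abstract’s central mechanism, adjudication among competing arguments whose strengths are drawn from a human-used likelihood scale, can be illustrated with a toy sketch. All names and the particular scale below are ours; ShadowAdjudicator itself is far richer.

# Toy reconstruction of likelihood-based argument adjudication.
LEVELS = {"counterbalanced": 0, "more likely than not": 1,
          "likely": 2, "very likely": 3, "certain": 4}

def adjudicate(arguments):
    """arguments: list of (proposition, 'for'|'against', level_name) triples.
    Returns a verdict per proposition; a tie at the top level is withheld."""
    verdicts = {}
    for p in {prop for prop, _, _ in arguments}:
        pro = max((LEVELS[l] for q, side, l in arguments
                   if q == p and side == "for"), default=-1)
        con = max((LEVELS[l] for q, side, l in arguments
                   if q == p and side == "against"), default=-1)
        verdicts[p] = ("accept" if pro > con else
                       "reject" if con > pro else "withhold")
    return verdicts

# Non-monotonicity in action: a later, stronger counter-argument overturns
# the earlier verdict on the same proposition (cf. the Monty Hall case,
# where naive and correct arguments about switching compete).
args = [("switching_wins", "for", "likely")]
print(adjudicate(args))                     # {'switching_wins': 'accept'}
args.append(("switching_wins", "against", "very likely"))
print(adjudicate(args))                     # {'switching_wins': 'reject'}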




The M Cognitive Meta-architecture as Touchstone for Standard Modeling of AGI-Level Minds

May 2023 · 14 Reads

Lecture Notes in Computer Science

We introduce rudiments of the cognitive meta-architecture M (majuscule of $\mu$, and pronounced accordingly), and of a formal procedure for determining, with M as touchstone, whether a given cognitive architecture $X_i$ (from among a finite list $1 \ldots k$ of modern contenders) conforms to a minimal standard model of a human-level AGI mind. The procedure, which for ease of exposition and economy in this short paper is restricted to arithmetic cognition, requires of a candidate $X_i$, (1), a true biconditional expressing that for any human-level agent $a$, a property possessed by this agent, as expressed in a declarative mathematical sentence $s(a)$, holds if and only if a formula $\chi_i(\mathfrak{a})$ in the formal machinery/languages of $X_i$ holds as well ($\mathfrak{a}$ being an in-this-machinery counterpart to the natural-language name $a$). Given then that M is such that $s(a)$ iff $\mu(\mathfrak{m})$, where the latter formula is in the formal language of M, with $\mathfrak{m}$ the agent modeled in M, a minimal standard modeling of an AGI-level mind is certifiably achieved by $X_i$ if, (2), it can be proved that $\chi_i(\mathfrak{a})$ iff $\mu(\mathfrak{a})$. We conjecture herein that such confirmatory theorems can be proved with respect to both the cognitive architectures NARS and SNePS, and have other cognitive architectures in our sights.

Keywords: standard modeling of AGI-level minds; cognitive architectures; computational logic
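
Reading conditions (1) and (2) together, certification amounts to chaining biconditionals through M. Schematically (we align the agent terms, which the abstract’s compressed notation leaves implicit):

\[
\frac{s(a) \leftrightarrow \mu(\mathfrak{m}) \qquad \chi_i(\mathfrak{a}) \leftrightarrow \mu(\mathfrak{m})}{s(a) \leftrightarrow \chi_i(\mathfrak{a})}
\]

so that any property expressible of the human-level agent in a declarative mathematical sentence holds exactly when the candidate architecture’s formula does, with M serving only as the touchstone in the middle.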


Citations (57)


... Based on Automated Reasoning [6]. • Brandon Rozek and Selmer Bringsjord. ...

Reference:

Non-Classical Reasoning for Contemporary AI Applications
Spectra: An Expressive STRIPS-Inspired AI Planner Based on Automated Reasoning: (System Description)
  • Citing Article
  • May 2024

KI - Künstliche Intelligenz

... However, development in the cognitive intelligence stage, especially in natural language understanding, is still relatively limited. Compared with human beings' rich linguistic experience and reserve of linguistic knowledge, it is difficult to generate real intelligence by relying solely on data-driven deep learning [4][5][6]. In order to break through the performance bottleneck of deep learning, combining semantic analysis with deep learning models will become the next breakthrough of AI in cognitive functions; this can assist in analyzing the linguistic materials obtained from user research, significantly improve the efficiency of analyzing user-research results, and even mine important information that is difficult to uncover through manual analysis [7][8][9]. ...

Universal Cognitive Intelligence, from Cognitive Consciousness, and Lambda (Λ)
  • Citing Chapter
  • July 2023

... First, a symbolic processing capability with productivity, compositionality, and systematicity (i.e., more than just occasional cases of apparent symbol manipulation) is characteristic of human thoughts and languages, but not adequately captured by intuition and instinct (recall the debates in the 1980s: Fodor, 1975; Fodor & Pylyshyn, 1988). Also missing is rigorous logical reasoning (more than just some simple fragments of logic), of which (educated) humans are clearly capable (Bringsjord et al., 2023). More generally speaking, explicit human thinking, which is controlled (as opposed to automatic), deliberate (as opposed to associative or reactive), effortful, usually working-memory intensive, and often rule-governed, is needed beyond intuition and instinct (as argued by, e.g., Evans, 2003; Kahneman, 2011; Sun, 1994). ...

Logic-Based Modeling of Cognition
  • Citing Chapter
  • May 2023

... However, this phenomenon takes on ever more significance in light of the development of emerging technologies: today, some AI systems are expected to make or suggest ethical decisions completely on their own. Thus, scholars argue that the existence of AMAs has become a reality [24]. For example, autonomous vehicles need to distribute risks among road users in traffic, and algorithms in the criminal justice system adjudicate an individual's sentence [25,26]. ...

A Partially Synthesized Position on the Automation of Machine Ethics

Digital Society

... Our second case study revolves around a very interesting and challenging reasoning game that we are using in a sustained attempt to quite literally have the cognitive robot PERI.2 attend school and progress grade-by-grade through at least high school, on the road thereby to artificial general intelligence (AGI); this project was announced in Bringsjord et al. (2022a). The game is called "Meta-Forms" (see Figure 2 for a rapid orientation to the game). ...

PERI.2 Goes to PreSchool and Beyond, in Search of AGI
  • Citing Chapter
  • January 2023

Lecture Notes in Computer Science

... In [21], a model was developed to produce summaries using a transformer language model. The proposal uses a model named Pegasus, which is characterized as an algorithm that represents logical criteria and whose operation rests partly on the classification of terms or keywords that allow a summary to be constructed from the input data. ...

Toward Generating Natural-Language Explanations of Modal-Logic Proofs
  • Citing Chapter
  • January 2023

Lecture Notes in Computer Science

... Details regarding this class of logics and exactly how they are tailor-made for handling cognitive attitudes/verbs are provided in numerous publications in which such calculi are harnessed for various implementations (see Govindarajulu and Bringsjord, 2017a; Bringsjord et al., 2020b). Put with a brevity here that is sufficient, a cognitive calculus $\mathcal{C}$ is a pair $\langle \mathcal{L}, \mathcal{I} \rangle$, where $\mathcal{L}$ is a formal language (composed, in turn, minimally, of a formal grammar and an alphabet/symbol set) and $\mathcal{I}$ is a collection of inference schemata (sometimes called a proof theory or argument theory); in this regard, our logicist-AI work is in the tradition of proof-theoretic semantics inaugurated by Prawitz (1972) and others (for a modern treatment, see Francez, 2015; Bringsjord et al., 2022c). ...

The (Uncomputable!) Meaning of Ethically Charged Natural Language, for Robots, and Us, from Hypergraphical Inferential Semantics
  • Citing Chapter
  • September 2022

Intelligent Systems

... However, while we certainly acknowledge that this foundational distinction is widely affirmed, it is not one that applies to our approach. In a word, the reason is that inductive logic, computationally treated, as has been explained by the lead author elsewhere (see Bringsjord et al., 2021, 2023b), must conform to the Leibnizian dream of a "universal logic" that would serve to place rigorous argumentation (in, e.g., even jurisprudence) in the same machine-verifiable category as mathematical reasoning. This means that the fundamental distinction made in the study mentioned in Bench-Capon and Dunne (2007), while nearly universally accepted, does not apply to the approach taken herein. ...

Automated argument adjudication to solve ethical problems in multi-agent environments

Paladyn, Journal of Behavioral Robotics

... Despite the preceding differences, there is a potential for a two-way influence between modal logics and the psychology of reasoning. A particular modal logic could be the basis of a theory of reasoning in cognitive science (see Bringsjord & Govindarajulu, 2020). Indeed, a reviewer suggested that system KD45 might be such a candidate. ...

Rectifying the Mischaracterization of Logic by Mental Model Theorists
  • Citing Article
  • December 2020

Cognitive Science: A Multidisciplinary Journal