The Second Linguistic Turn: Math Agents for Kantian Intelligence Amplification
Melanie Swan and Renato P. dos Santos – 25 Mar 2024
Abstract
A second Linguistic Turn is proposed to study the preponderance of formal language configuring
the technological infrastructure, and the implication of this for meaning construal. Such a second
movement extends the initial Linguistic Turn’s focus on natural language to formal languages
such as mathematics, physics, computer code, and genAI LLMs (generative artificial intelligence
Large Language Models). Formal methods are the content, not the method, of the investigation.
The naming of the first Linguistic Turn (1952) coincided with the start of the Information Age
(1948). Likewise, the second Linguistic Turn is intended to treat the AI Age (the advent and
widespread use of genAI technologies). GenAI Math Agent partners are needed to confront the
human-knowledge relation as a new form of Kantian goggles extending to beyond-Euclidean
spacetimes and high-stakes intelligence amplification.
Keywords: philosophy of language, linguistic turn, pragmatism, philosophy of technology
Introduction
The Second Linguistic Turn
The Problem of Language. Language persists as a central philosophical problematic. All human
endeavor has been constructed with natural language, including science. Even the writing of any
mathematical system of symbols and equations is thought in natural language by the practitioner.
E = mc² is a law of physics, “languaged” in the mind of the human as energy is equal to mass
times the speed of light squared, whether understood in symbols or words. The entirety of human
activity is constructed, and constructed through natural language, including formal contents.
Thus, philosophical questions arise as to the foundational nature of language beyond the human
context. Would language exist in a world without humans? Is language merely the convenient
affordance of a multi-entity society or a foundational plastic structure amenable to the
embodiment of knowledge, while acknowledging that knowledge too is a human-constructed
concept? Despite the difficulties of investigating language with language, the fast-paced
instantiation of formal language in the computational infrastructure suggests a new moment in
the investigation of language as formal language in the second Linguistic Turn.
# | Focus of Study   | Moment                 | Problematic
1 | Truth            | Traditional philosophy | Justified true belief, the truth of reality (appearances vs reality)
2 | Natural Language | 1st Linguistic Turn    | Truth via use of natural language
3 | Formal Language  | 2nd Linguistic Turn    | Truth via use of formal language (LLMs, math, physics, code)
Table 1. Critical Genealogy History of Philosophical Problematics.
The History of Truth. A critical genealogies lens pinpoints the salient moments in the history of
philosophy (Table 1). The trajectory begins with traditional searches for truth as correspondence
between appearances and reality, finding that the external world cannot ever be fully known as
represented by the mind. Acknowledging the impossibility of this kind of true knowledge, the
twentieth-century Linguistic Turn (although with roots in Hegel) shifted to a new interpretation
of truth, to the extent that truth is available, being available through language. Truth is what the
users of language take to be true, namely justificatory reasons for thought and behavior, as
determined through linguistic reasoning and discursive practices. The analytic philosophical
tradition is seen as arising in the Linguistic Turn and continental philosophers such as Heidegger
and Derrida also focused on the affordances and critiques of language as a conveyor of truth and
meaning. Carrying this genealogical tradition forward into the twenty-first century, the rise of
formal technologies in phenomenological reality warrants a new moment in the study of
language, now through formal language, in the second Linguistic Turn. This line of investigation
is amenable to all methods of analytical, continental, critical, and pragmatic philosophy.
The First Linguistic Turn. The term “linguistic turn” was coined by Vienna Circle thinker
Bergmann in 1952, arguing that “Of late philosophy has taken a linguistic turn” (Bergmann
1952, 417). He thought that something was afoot beyond the usual “splendor and misery of
words” familiar to philosophers, notably in the thinking of “Moore, Russell, and Wittgenstein”
(Ibid.). Rorty popularized the use of the term in his 1967 volume entitled The Linguistic Turn.
He writes: “I shall mean by ‘linguistic philosophy’ the view that philosophical problems are
problems which may be solved (or dissolved) either by reforming language, or by understanding
more about the language we presently use” (Rorty 1967, 3). Brandom offers a further
contextualization in that by “the linguistic turn, I mean putting language at the center of
philosophical concerns and understanding philosophical problems to begin with in terms of the
language one uses in formulating them” (Brandom 2011, 22).
Continuing the characterization, Koopman cites Ian Hacking’s 1975 claim that “It is a manifest
fact that immense consciousness of language is at present time characteristic of every main
stream in Western philosophy” (Koopman 2011, 61). Koopman argues that “The linguistic turn
is so much a part of our philosophic present” that we can learn from it as a broadly applied
methodology, particularly towards Rorty’s unfulfilled aim of establishing a “nonfoundational
perspective on normativity,” i.e. “explicating normativity without foundations (or authority
without authoritarianism)” (Koopman 2011, 62, 61). The Linguistic Turn means engaging in the
“philosophical practices of reflection, argumentation, question, and answer” (Koopman 2011,
62). For Koopman, “Rorty’s linguistic turn ought to be understood as a contribution to the way
we proceed when we proceed philosophically,” namely, “when you are engaged in a
philosophical analysis of x, why not try looking at how we talk about x?” (Koopman 2011, 62).
Derrida goes further, taking as a major problematic in his work the issue that the formation of
philosophical problems is problematic from the get-go because we cannot but formulate them
with language. Rorty applauds, and sees Derrida (and Proust but not Nietzsche and Heidegger) as
overcoming the problem of the linguistic positing of theory without the theory becoming lodged
as metaphysics. The way to avoid calcification (recognizing contingency) is to take the
theorizing of one’s predecessors as “a ladder which is to be thrown away” (Rorty 1989, 97).
Derrida is successful by maintaining unpredictability and dynamism in topic and approach.
The Second Linguistic Turn. A second linguistic turn is needed to address the contemporary
situation of reality in that while humans continue to speak natural language, genAI helpers and
the computational infrastructure more generally communicate in formal language. It is not
incidental that the main technological tool of the day is one of language – genAI Large
Language Models (LLMs). LLMs implement natural language in neural networks, and more
generally treat any body of knowledge (a “knowledge graph”) as a language. Any data corpus is
processed as a language, in the structure of syntax, grammar, and semantics. This applies to
natural language, mathematics, computer code, or chemistry, whether warranted or not. For
example, the tools of Digital Biology are protein language models, genome language models,
and pathway language models. Bodies of knowledge are implemented as languages in deep
learning networks and trained as foundation models for humans to converse with in natural
language. Further, the generative part of genAI means that the LLM can be directed to create
new “utterances” in these languages based on its learning of a training dataset. That the world of
phenomenological experience is increasingly produced by formal language models warrants their
ontological, epistemological, and axiological investigation in the second Linguistic Turn.
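As an illustrative sketch (not drawn from the cited works), the claim that any corpus can be treated as a language reduces to learning next-token statistics over its tokens; the toy bigram model below does this for a corpus of equation strings, the same scheme LLMs scale up with neural networks:

```python
# Minimal sketch: treating an arbitrary corpus -- here, symbolic equation
# strings -- as a "language" by learning next-token statistics.
from collections import Counter, defaultdict

corpus = ["E = m * c ** 2", "F = m * a", "E = h * f"]  # any formal corpus
tokens = [t for line in corpus for t in line.split()]

# Count bigram transitions: P(next | current) estimated from the corpus.
transitions = defaultdict(Counter)
for cur, nxt in zip(tokens, tokens[1:]):
    transitions[cur][nxt] += 1

def generate(start: str, length: int = 5) -> list[str]:
    """Greedily 'utter' new token sequences from the learned statistics."""
    out = [start]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return out

print(generate("E"))  # e.g. ['E', '=', 'm', '*', 'c', '**']
```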
“To ‘deconstruct’ philosophy, thus, would be to think – in the most faithful, interior way – the
structured genealogy of philosophy’s concepts.” – Derrida, Positions, 6
Critical Genealogies Method
As a concept, “critical genealogies” is the nexus of Kantian critique and Foucauldian genealogy.
Foucault’s genealogical method extends his archaeological approach by identifying not only
historical eras but the causes of transition between them. Kant’s critical project is concerned with
the nature, scope, and limits of reason. As an aim, “critical genealogies” is a method for
problematizing (illuminating submerged problems) (Koopman 2013, 1). Problematization is
important as a “kind of master concept that ties together the other core conceptual elements” of
genealogical critique (Ibid., 132). There is an important plurality in the application of critical
genealogies methods (Koopman and Matza 2013, 818). One line of research emphasizes the
reparative as opposed to normative aspects of genealogical critique (Sheehey 2020, 67). In the
current work, critical genealogies methods are applied to identify relevant philosophical
problematics, conduct an investigation, and produce interpretative findings (Table 2).
# | Critical Genealogy Trajectory | Philosophical Problematic | Implication/Finding
1 | Ontological Aspects: Philosophy of Mathematics | Web3 GenAI Quantum stack; Math-Data Relation | Digitization entails mathematization; Math Agents as Kantian goggles for interacting with reality at the level of math
2 | Epistemological Aspects: Philosophy of Knowledge (Foucauldian Epistemes) | The basis of knowledge: representation and the concept of the scientific method | Human-knowledge relation: confrontation (need better goggles); Human-AI relation: intelligence amplification
3 | Axiological Aspects: Philosophy of Performativity | Language games, social practices, forms of life | Wittgenstein-Derrida-Brandom: technologies are performative language games; results for regulation and enlightenment
Table 2. Analysis Roadmap: Critical Genealogical Problematics and Philosophical Findings.
Roadmap. First, a philosophy of mathematics approach is used to discuss Math Agents as a tool
for democratizing mathematics and investigating the math-data relation. Second is a philosophy
of knowledge application of Foucault’s epistemes which reveals a Copernican shift in the role of
AI as being constitutive of knowledge and representation, challenging the human-centered
perspective. The human-knowledge relation is the source of contemporary concern, not the
human-AI relation per se, which could help with intelligence amplification. Third is an
examination of the philosophy of performativity, which highlights the uncertainty inherent in
advanced technologies as they must be executed to obtain results. These technologies can be
understood through their use in “language games,” and regulated and theorized accordingly.
Part 1: Ontological Aspects of the Second Linguistic Turn
Math Agent Stance to the Computational Infrastructure
Math Agents, Philosophy Agents, and Health Agents are presented as a genAI method for
interacting with reality at the level of math (shifting from “big data” to “big math”) as a higher-
order validated lever for action-taking in the world.
Web3 GenAI Quantum Stack
The contemporary moment is characterized by an increasingly technologically mediated
experience of reality. Such technological mediation occurs via a constellation of “industry 4.0”
(fourth Industrial Revolution) technologies including web3 blockchains, AI-robotics, quantum
computing, internet of things, big data, cloud computing, 3D printing, molecular manufacturing,
cybersecurity, and communications networks. These technologies can be stratified into a tiered
infrastructure of social, interface, and compute layers, with a particular focus on the high-impact
frontier technologies of web3, genAI, and quantum (Table 3). The computational infrastructure
drives friction (energy use, cost, and execution) to zero at each tier. The web3 social layer is
composed of applications to coordinate social activity in areas such as economics, identity, and
health with cryptocurrencies as pure capital, verified identity as part of pure communication, and
precision health and longevity delivered by app as pure vitality. The interface layer features AI-
robotics as pure intelligence (e.g. no time or effort spent on self-sustaining survival activities).
The computation layer includes quantum computing as pure compute (e.g. the most efficient
computing possible using atoms and subatomic particles as the basis for computation).
Technology | Layer | Application | Low-friction Physics
Web3 Blockchain Ecosystems | Social | Economics: money, assets, voting, governance; Identity: verifiable internet (personal ID, content); Health: longevity via app, digital twins, BCI | Pure Capital; Pure Communication; Pure Vitality
GenAI | Interface | Chatbots, AI-robotics, LLMs, GPTs, GNNs | Pure Intelligence
Quantum | Compute | Quantum computing, classical, spiking NNs | Pure Compute
Table 3. Computational Infrastructure: The Digital Society Stack.
In the technology stack, web3 refers to the current third phase of the internet’s development in
expanding the interaction mode from the passive read-only web (1990s) to the interactive read-
write web (2000s) to the secure and remunerative read-write-own web (2020s, enabled by web3
blockchain technologies) (Dixon 2024, 32). Digital transformation continues as many industries
become increasingly digitally instantiated. First was the “ready” conversion of content (1990s
dot-com news, media, and entertainment), followed by the more complicated recasting of
money and economics, digital art and intellectual property, supply chain, manufacturing, and
science. These latter require complex features such as non-fungibility and contracts, capabilities
provided by blockchains. Blockchains are secure distributed ledger systems providing a database
for resource allocation and an immutable record of event histories. Blockchains are a
foundational information technology using secure properties for digital exchange, and economics
more broadly as a design principle for social outcomes (Swan 2015, 75).
GenAI (generative artificial intelligence) refers to AI systems which create new data based on
learning from a training dataset, whether text, images, video, or molecules. The main form of
genAI is Large Language Models (LLMs) such as ChatGPT, which use “attention” as the
mechanism to process all connections in a dataset simultaneously to perform next-word (next-
token) prediction. LLMs treat bodies of knowledge and data corpora as languages – groups of
entities with rule-governed relations between them – whether natural language, mathematics,
computer code, or chemistry. In Digital Biology, protein and genome language models are in
development. Whereas one of the largest natural-language models, LLaMA, has 65 billion
parameters (learnable weights between entities), state-of-the-art protein models have 100 billion
parameters, and genomics may require even more. GenAI has an interesting dual status in that it
is simultaneously an industry 4.0 technology and the interface to other industry 4.0 technologies.
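For readers unfamiliar with the mechanism, the following is a minimal sketch of textbook scaled dot-product attention, in which every token’s representation is updated from all other tokens simultaneously; it illustrates the general technique, not any specific model discussed here:

```python
# Sketch of the "attention" mechanism: each token's output vector is a
# weighted mix of ALL tokens' value vectors, computed in one pass.
import numpy as np

def attention(Q, K, V):
    """Q, K, V: (sequence_length, d) arrays of query/key/value vectors."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # all pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ V                              # each output mixes every token

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))            # 4 tokens, 8-dimensional embeddings
print(attention(X, X, X).shape)        # (4, 8): one updated vector per token
```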
Quantum represents a foundational new way to perform computation at the scale of atoms. The
fundamental principles of quantum mechanics, such as superposition and entanglement, allow
quantum computers to process information in ways that classical computers cannot. The domain
has moved from philosophical musing to engineering problem, with practical applications in
quantum cryptography and chemical modeling. An upgrade to the world’s cryptographic
infrastructure is foreseen, shifting mathematically to lattice-based methods (hard problems on
high-dimensional lattices) instead of number-theoretic methods based on the difficulty of
factoring large numbers. In chemical modeling, there are important open problems such as
understanding how nature fixes nitrogen (via docking sites on the nitrogen molecule), as roughly
2% of the global energy budget is spent on brute-force fertilizer production without such a
foundational understanding. The status of quantum computing is that technical breakthroughs in
error correction (restoring errant qubits to their correct state) are required for the platform to
become mainstream. IBM and Google have announced 100+ qubit machines and million-qubit
roadmaps, with the industry quietly gathering steam in the background.
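To make the invoked principles concrete, a toy sketch (plain linear algebra, not a quantum computing library) of preparing a Bell state, in which superposition and entanglement arise:

```python
# Toy illustration of superposition and entanglement: a two-qubit Bell state.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: creates superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # entangles the two qubits
ket00 = np.array([1, 0, 0, 0])                 # state |00>

bell = CNOT @ np.kron(H, np.eye(2)) @ ket00
print(bell)  # [0.707 0 0 0.707]: (|00> + |11>)/sqrt(2), measured as correlated 0s or 1s
```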
Math Agents: The New Kantian Goggles
Digitization entails Mathematical Treatment
Mathematics is the common underlying factor used to implement the technology stack, which
further allows technologies to be interoperable, and points to new discovery (in mathematics and
with mathematical concepts applied to other fields). Digitization entails mathematics.
Digitization means not simply converting input data to ones and zeros, but the mathematical
treatment of these ones and zeros; the mathematization of content. Such digitized mathematical
treatment applies to all formal languages: physics, software programs, computational
complexity, and LLMs as a new kind of formal language. Mathematical instantiation further
connotes efficiency, well-formedness (validated, provable content), and mobilization (any
mathematical instantiation is expressed in the “language” of mathematics, is portable to other
mathematical analyses, and can be reformulated in many other kinds of mathematics; any
mathematics calls all mathematics). The provable efficiency attributes of mathematics are why
the computational infrastructure is denominated in mathematics. Mathematical instantiation also
implies portable interpretation to other formal languages such as physics, particularly in graph
theory (e.g. symmetry) and information theory (e.g. entropy).
The technological infrastructure has always been implemented with the mathematics used in
computer software; what is different is the current purpose-driven implementation of higher
mathematics from within the fields of mathematics and physics to create extremely advanced
technologies (e.g. symmetry in protein language GNNs). Computer scientists are digging deeper
into mathematics for the kinds of formulations they need (e.g. category theoretic formalisms for
deep learning networks (Gavranović et al. 2024)), much like Einstein sought a mathematics such
as Riemannian geometry to specify the space-time formulations of relativity.
Math Agents
To operate in the “mathematization turn” of the technological infrastructure, humans need helper
tools such as Math Agents. Math Agents are specialized AI systems and a problem-solving
stance based on the mobilization of mathematical content as a validated lever for interacting with
reality (Swan et al. 2023, 1). The problematic diagnosed with the critical genealogical method is,
on the one hand, the juggernaut of the mathematically-determined computational infrastructure
increasingly mediating the everyday experience of reality, and, on the other hand, a humanity
that sees mathematics as high-value content but has limited ability to use this content.
In the human-math relation, the tendency is to over-bias the value of mathematics without
critically evaluating it, attributing a false “proofiness” credibility to number-related content
(Seife 2010, 11). Aside from self-selected trained mathematicians, humans in general are not
good at “doing math” (engaging with technical content). In the general case, humans do not have
the interest or capacity for doing mathematics or understanding formal methods. Mathematics is
just one part of the overall AI outsourcing argument in which AI is shown to be better than
humans at various repetitive high-precision tasks such as laser eye surgery, driving, computer
coding, and mathematical analysis. AI chatbots have already emerged as the interface for
humans to access all manner of information and knowledge, including technical content.
Math Agents codify this thought in the idea of an AI system designed specifically for the
mathematics context. Math Agents can be used to solve mathematical problems and perform
mathematical tasks both in pure mathematics (e.g. automated theorem proving, lemma stating)
and applied mathematics (e.g. model-fit assessment). Computer algebra systems have already
substantially extended human reach in math discovery, and genAI implies another leap forward.
In one sense, any chatbot is already a Math Agent as math-related content can be queried and
generated. In another sense, purpose-built chatbots, LLMs, and foundation models (e.g. protein
language models) are a crucial resource targeted to specific data corpora training and problem-
solving (e.g. drug discovery foundation models). First-line Math Agents are specialized AI
systems trained on mathematics datasets for the purpose of solving problems in mathematics.
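As a minimal sketch of the kind of task such a first-line agent automates, here using the computer algebra system sympy (an illustrative library choice, not one prescribed by the text):

```python
# Sketch of first-line Math Agent tasks: symbolic verification and solving
# with a computer algebra system.
import sympy as sp

x = sp.symbols('x')

# Pure-math style task: state and check a small identity.
assert sp.simplify(sp.sin(x)**2 + sp.cos(x)**2 - 1) == 0

# Applied-math style task: solve a model equation symbolically.
k, t = sp.symbols('k t', positive=True)
y = sp.Function('y')
solution = sp.dsolve(sp.Eq(y(t).diff(t), -k * y(t)), y(t))
print(solution)  # Eq(y(t), C1*exp(-k*t)): exponential decay model
```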
Math Agent Platforms. So far, there are three kinds of Math Agent projects in the genAI
landscape: content converters, reasoning agents, and problem-solving agents. First is OCR/RAG
(Optical Character Recognition, Retrieval Augmented Generation) efforts which convert
equations in PDF files to machine-readable computer code formats. Second is mathematical
reasoning agents. A conceptual advance has come from framing problems as a 3D board game
(deploying the AlphaGo system to mathematics and computer algorithm problems). Some
examples include finding the fastest algorithm (in matrix multiplication (AlphaTensor) and
sorting (AlphaDev)) as reinforcement learning agents learn the best series of moves in a 3D
“game board” tableau to solve a problem. Third is mathematical problem-solving agents tasked
with solving known sets of mathematical problems (e.g. AlphaGeometry).
The Democratization of Mathematics: Beyond Statistics, Concepts. The idea of Math Agents
operates in two registers. One is Math Agents as real-life tools, and another is Math Agents as a
stance to problem-solving obtained by activating the formal methods canon as a knowledge base.
The broader aim of the Math Agents program is to mobilize mathematics as a rich conceptual
body of knowledge to application and discovery in other fields. Mathematics as a non-digital
content is siloed. It has long been the domain of trained practitioners, only accessible to those
with specialized knowledge and aptitude. However, Math Agents can be used to democratize
mathematics to a broader audience.
The argument is as follows. Mathematics is a technical content uninteresting to most humans, but
foundational to science and the advancement of knowledge. Mathematics is a conceptual content
for possible discovery in many fields. Mathematics is an important thinking tool as a critical
reasoning method. For all these reasons, Math Agents constitute a new form of Kantian goggles
for intelligence amplification. The Math Agent goggles not only provide visibility into otherwise
unseeable time and space manifolds (as microscopes and telescopes do) but also amplify
intelligence by extending human thinking in new ways.
Mathematics as a content (knowledge base) has a rigorous impenetrability to the general thinker.
Hence the aim of the Math Agent, as a form of Kantian goggles, is to remedy this, unleashing
mathematical conceptual thinking to more domains. As a demonstration, many non-mathematician
users of mathematics in other sciences have an impoverished concept of mathematics that does
not extend beyond statistics. The suggestion of “mathematics as a method” is not generally part
of the canon. However, this is changing in the era of Digital Biology. Mathematically-driven
theorizing is necessary to address the complexity, dynamism, and emergence in biosystems.
Math Agent approaches are seen in Digital Biology in dedicated drug design systems (e.g.
BioNeMo), and in the idea of the Math Agent as Health Agent (Swan et al. 2024).
Philosophy Agents. Notably, like mathematics, philosophy is a conceptual content which is
rigorously impervious to the general thinker. Just as the Math Agent interface is proposed to
democratize mathematics for broader human use, the “Philosophy Agent” interface could aid in
human reasoning about philosophical topics. Various digital humanities projects could unfold. A
foundation model trained on the voluminous content of Brandom, for example, could be of
substantial benefit to the wider implementation of his rich content base and important ideas.
Likewise for Hegel, with the Philosophy Agent, it is no longer necessary to labor through the
reading of the Logic, unless as a hobby, to mobilize the rich content of ideas. So far LLMs are
not trained on the entirety of philosophical texts (locked in PDFs) and only return results about
concepts in the mainstream cultural vernacular such as Derrida’s différance. The “Digital
Oeuvre” project could digitize the library archives of thinkers into usable tools, with Philosophy
Agents extricating major thought streams and lines of argument for scholarly engagement. The
democratization of knowledge applies to all knowledge.
Math Agents in Mathematical Discovery. One implication of Math Agent systems is that they
can generically output the descriptive mathematics of any knowledge graph “for free” as part of
their results. GenAI means asking an LLM to generate various kinds of content – image, text,
video, philosophical arguments, or computer code. Likewise, the descriptive mathematics can be
requested (e.g. with the prompt>> given these data, write three possible mathematical
descriptions in systems of equations, provide examples, and analyze their strengths and
weaknesses). The implied result is not only obtaining the content level prediction (e.g. a folded
Page 8
protein structure), but also its mathematical description. AI is a method for interacting with
reality at the level of math (shifting from “big data” to “big math”). The benefit of a
mathematical description of a system is that it provides a structure and method of solving for
unknowns such as disease pathology resolution in precision health programs. Math Agent
systems are thus a compound formulation for producing results at both the level of data and the
level of math; human-readable data and AI-usable math. AI writes the best code (Karpathy 2017,
1) and may also generate the best mathematical description. Math Agents, as part of the AI
infrastructure, may write the mathematics of any system as a generic output, including as a core
feature of Digital Biology executed with Health Agent systems.
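A hedged sketch of the prompting pattern described above; the llm function is a placeholder for any genAI chat-completion client, not a real library API:

```python
# Hypothetical sketch of prompting a Math Agent for descriptive mathematics.
# `llm` is a placeholder name, not an actual library call.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your genAI client here")

data_summary = "time series of amyloid-beta plaque density across 40 patients"
prompt = (
    f"Given these data ({data_summary}), write three possible mathematical "
    "descriptions as systems of equations, provide examples, and analyze "
    "their strengths and weaknesses."
)
# response = llm(prompt)  # candidate equations returned alongside the data-level result
```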
Health Agents. Health Agents are a form of Math Agent: the concept of a personalized AI
health advisor overlaid on mobile platforms (phones, watches, and wearables) delivering
“healthcare by app” instead of “sickcare by appointment” (Swan et al. 2024, 1). Mobile devices
can check health 1000 times per minute as opposed to the standard one time per year doctor’s
office visit, and model virtual patients in the digital twin app. Like any AI agent, Health Agents
“speak” natural language to humans and formal language to the computational infrastructure,
possibly outputting the mathematics of personalized homeostatic health as part of their operation.
Health Agents could facilitate the ability of physicians to oversee the health of thousands of
individuals at a time. This could ease overstressed healthcare systems, contribute to physician
well-being, and help address the situation in which (per the World Health Organization) more than
half of the global population is still not covered by essential health services (Taylor 2023, 2160).
Further, there is a necessary push for “healthy longevity as a service” with the “silver tsunami”
of two billion people projected to be over 65 in 2050, and an increasing slate of accepted
medical interventions (Guarente et al. 2024, 355). Notably, 3-5% of the population may be
taking an anti-aging drug already in the form of semaglutide GLP-1 agonists (shown to have
longevity and neuroprotective benefits through the gut-brain axis (Wong et al. 2024, 130)). One
effect of this is the “Wegovy-nomics” boom, which has led to inflated share prices, worldwide
supply chain shortages, and Amazon adding “health delivery as a service.” The “high demand for
health” (Tyson and Kikuchi 2024, 1) could cause further economic reshaping with lower food
demand and higher activity demand (e.g. travel, clothing, skin-tightening) as healthy people re-
explore the possibilities of active lives. On the other hand, the effects of fat-shaming, economic
inequity, and the infopower formatting of humans are possibly more tyrannizing.
With Health Agents, individuals can customize the level of detail in the information they view.
Some people may prefer “health as a service” delivery in the background, while others like to see
lots of information. Providing self-tracking data to consumers has had a positive effect in
motivating behavior change, as seen with Fitbit activity trackers and smart-meter electricity monitoring.
The mathematical output of Health Agents enables another Digital Biology technology, the
digital health twin as a virtual model of individual patient health. Digital health twins are
implicated in “instant clinical trials” involving millions of patients, and as a simulation platform
to evaluate individual drug response and ongoing homeostatic health.
It has been difficult to obtain formal representations of biosystems from which to theorize
foundational principles (Bialek 2012, 4). However, Digital Biology tools such as Math Agents
may enable investigation at the level of formalism (e.g. category theory, Chomsky grammars) as
well as at the level of content (Varenne 2013, 167). The Health Agent as Math Agent could be an
important potential use of “mathematics as a method” providing a lever to slice through the noisy
variability and complexity of open biosystems.
Math-Data Relation
Big Data and Big Math
The further philosophical implication of the Math Agent is its role in the math-data relation. Data
continues its pervasive existence in modern life. “Big Data” might now be accompanied by “Big
Math.” Big Math provides an abstraction generalizing away from the detail and variability of Big
Data. One implication of mathematics as a digital content is a potential shift from the “big data”
era to the “big math” era. To the extent that there is a good model-fit, data and math are two
descriptions of the same system, one detailed (data) and one generalized (math). Hence, math
provides a higher-order lever for interacting with reality and joins the mathematical practice
of constantly generalizing to higher-order levels. Three conceptualizations of the
math-data relation are presented in Figure 1. The first view is “one system two modes” as the
math-data composite view of two representations of the same system. Second is the “multiscalar
renormalization” view of representing a multiscalar system through the lens of a single
conserved quantity across tiers. Third is the “Maxwell’s Demon interaction” view of the most
expedient tier of interaction with a system to produce a result.
[Figure 1 panels: One System Two Modes; Multiscalar Renormalization; Maxwell’s Demon Interaction]
Figure 1. The Math-Data Relation: Mathematics as an Efficient Lever for Interacting with Reality.
One System Two Modes. The math-data relation often centers around the model-fit problem.
Model-fit (map-territory) refers to how well a generalized set of descriptive mathematics
accurately recapitulates the data of an underlying phenomenon. Data is inherently “noisy,”
having variability, exceptions, irregularity, and anomalies. The trade-off is how much specificity
may be lost to capture the core salience of the phenomenon (Montoya and Edwards 2021, 413).
The new conceptualization of the math-data relation could include deploying either side, data or
math, for system study, and the two together as a composite check of well-formedness. 3D
visualization tools such as “data explorers” and “math explorers” could allow the size and shape
of a “data set of data” and a “data set of math” to be viewed, both separately and superimposed
so their correspondence may be assessed, by humans and Math Agents. There could be
advantages to working with math as an entity, data as its own entity, as well as the data-math
composite as a new kind of formal entity. Research examines composite math-data views of
equation clusters and data embedding visualizations (in one example of physics math (Chern-
Simons), transposon math, and Alzheimer’s SNP variants (Swan et al. 2023, 10)). Different
proposed mathematical ecologies (sets of equations) for the same phenomenon (e.g. AdS/CFT
correspondence) are also overlaid in one visualization plot, suggesting quantitative analysis
based on the distance between equation clusters (Ibid., 9).
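As an illustrative sketch of the model-fit check (not the cited visualization methodology), two candidate mathematical descriptions can be fit to the same noisy data and compared by residual error:

```python
# Sketch of the model-fit ("one system, two modes") check: two candidate
# "math mode" descriptions fit to the same noisy "data mode" observations.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 4, 50)
data = 3.0 * np.exp(-0.8 * t) + rng.normal(0, 0.05, t.size)  # noisy system

# Candidate 1: linear model, fit by least squares.
lin = np.polyfit(t, data, 1)
err_lin = np.mean((np.polyval(lin, t) - data) ** 2)

# Candidate 2: exponential model, fit linearly in log space.
mask = data > 0
a, b = np.polyfit(t[mask], np.log(data[mask]), 1)
err_exp = np.mean((np.exp(b) * np.exp(a * t) - data) ** 2)

print(f"linear MSE {err_lin:.4f} vs exponential MSE {err_exp:.4f}")
# The lower-error candidate is the better generalized description of the data.
```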
Multiscalar Renormalization: Mathematical Ecologies and the Mathematical Observer.
Biosystems are notoriously complicated as open systems with a range of scale tiers and ongoing
interactions with the environment. Different mathematics are proposed for each scale tier to
describe behavior. Such multiscalar ecosystems might be formalized into one picture as a
mathematical ecology (interacting set of equations). Nine order-of-magnitude scale tiers have
been identified in the human brain (Sejnowski 2020, 30037). Other work looks at the multiscalar
composite of the brain in the four tiers of network-neuron-synapse-ion and the South Sea
ecosystem in the four-tier example of whale-krill-plankton-light (Swan and dos Santos 2023, 18–
19). Renormalization is the primary method for treating multiscalar systems by attempting to
view a system through a single conserved quantity across tiers such as symmetry (in the
universe) or free energy (in biosystems). The treatment of complex systems is genealogical as it
entails not only the unitary mathematical description of tiers, but the causal interaction between
them, and how the overall entity operates as a system. Different levels of actor-observers in the
system may be interacting in constant feedback loops with the environment (Table 4). The
mathematical observer is the concept of any of various actor-observers (humans, AIs, math
agents, biological processes) who may be interacting with the system at the level of math,
literally or heuristically, in the physical or digital domain (e.g. healthcare digital twin systems).
Scale | Data Stack | Math Stack | Math-Data Stack
Macro | Actor -> Genetic variants | Actor -> Signal processing math | Actor -> Genetic variant near-far relations
Meso  | Actor -> Transposon indels | Actor -> Topological biomath | Actor -> Transposon dynamics
Micro | Actor -> Amyloid-beta plaque, tau tangles | Actor -> Biochemical math | Actor -> Protein biomarker distribution
Table 4. The Mathematical Observer in a Multiscalar Genomic System.
Maxwell’s Demon System Interaction. “Maxwell’s Demon system interaction” refers to the idea
of locating an expedient scale tier or mechanism for interacting with a system, by analogy to
Maxwell’s demon (an efficient sorting mechanism of particles). Many occurrences in nature are
not random, and appear to have some kind of unseen formal method directing their behavior in
an efficient manner. For example, strings of amino acids do not try every permutation but
immediately fold into the final protein conformation. The aim is to use Math Agents and the
math-data relation to elicit such shortcuts. In multiscalar mathematical ecologies, the question
arises as to the right level at which to interact with a system to produce a result. How can the
“thermostat dial” of room temperature control be identified rather than treating the scale level of
bombarding particles? For example, in biosystem limb generation (in development) and limb
regeneration (in replacement, desirable to harness in regenerative medicine), a mechanism called
multiscalar competency seems to be at work in which any scale tier calls all the functionality of
lower scale tiers (Fields and Levin 2022, 819).
Instead of renormalizing a system quantity across scale tiers, the idea is to treat the mathematics
of the different scale tiers directly (math is the interface to more math). By solving a multitier
mathematical ecology, it may become clear at which level the most expedient interaction with
the system can be had. This might be through simple derivative taking to find system min and
max. This might also be through a meta-analysis of mathematical complexity (analogous to
computational complexity) which connotes the computational equivalency of all mathematics at a
certain level of abstraction, thereby targeting the most expedient interventional path into a
system. The intelligence amplification idea is deploying “mathematics as a method.” As a formal
data corpus, mathematics is contiguous in ways that other data corpora are not. This suggests that
to some extent, even the simplest equation calls the entire body of existing and possible
mathematics as there may be arbitrarily-many layers of subsequent abstraction (e.g. set theory,
group theory, category theory). The interconnectedness of mathematics implies a framework for
identifying the right level at which to solve multiscalar systems.
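A minimal sketch of the “simple derivative taking” mentioned above, applied to a toy single-tier response curve (an illustrative equation, not one from the cited multiscalar models):

```python
# Sketch: find the extremum of a toy system-response curve by derivative taking.
import sympy as sp

x = sp.symbols('x', positive=True)
yield_fn = x * (1 - x / 10)          # toy response of a system to intervention x

critical = sp.solve(sp.diff(yield_fn, x), x)   # d(yield)/dx = 0
print(critical)                                 # [5]
print(sp.diff(yield_fn, x, 2))                  # -1/5 < 0: a maximum
# x = 5 is the "thermostat dial" level: the most expedient point of interaction.
```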
Part 2. Epistemological Aspects of the Second Linguistic Turn
Philosophy of Knowledge and Foucauldian Epistemes
This section examines the epistemological aspects of the second Linguistic Turn in the
genealogical context of Foucault’s epistemes (knowledge regimes). Epistemes are an important
critical genealogies formulation to discuss the criteria for constituting knowledge, representation,
and the scientific method in a specific era, and the inseparable dimension of power that underlies
these formulations (power-knowledge).
The Information Age
The prevailing social narrative is the “Information Era,” a period of history distinguished by the
widespread use of digital technologies including computers, the internet, and mobile devices,
beginning with Shannon’s quantitative formulation of information in 1948 (Shannon 1948) and
the advent of electronic computing in the 1950s. Like any social narrative, the “Information Age”
warrants problematization (Koopman 2019, 16–17). The implied mode of subjectivation in the
Information Era is “informational persons” who are “inscribed, processed, and reproduced as
subjects of data” (Koopman 2019, 4). New forms of power may be operative as Foucault
counsels that “power is not unitary” and “does not always take the same form” (Critical
Genealogies Collaboratory 2018, 641). Indeed, infopower is proposed as “a distinctive modality
of power [which] deploys techniques of formatting to do its work of producing and refining
informational persons” (Koopman 2019, 12). Infopower is a kind of power that reformats
itself and others in its exercise – essentially an algorithmic concept, as Koopman underlines.
By extension, there could be “infobiopower” as infopower (the informational formatting of the
subject) deployed to exert biopower control over life and health in new ways. Such infobiopower
is currently observed in the medicalization of weight loss and longevity interventions (for
humans and dogs). On the one hand, infopower is the dominating impulse as subjects are
“canalized” (directed) and “accelerated” (Koopman 2019, 156–157) into courses of action
impelled by the “infomedical gaze” which attempts to localize an unlocalizable signal of disease
(Foucault 1973, 9). On the other hand, there is high demand for health (and the aspirational
identities it implies), and population-scale delivery tools empowering people in new ways.
Epistemes: Representation and Scientific Method
The regimes Foucault enumerated are updated to include the Information Age (Table 5). Central
to an era’s episteme is the consideration of the representation of knowledge and the scientific
method. In the four main eras, the Renaissance, the Classical Age, the Modern Age, and the
Information Age, the basis of knowledge representation has shifted. Initially resemblance was
sought as literal similitude between the representation and the represented (Renaissance Age).
Then there was a move to abstraction in that a mental representation captures the idea of the
represented (Classical Age). Then the crucial constitutive effect of the human in determining
knowledge representation was identified (Modern Age), which now may be evolving into the
constitutive role of genAI in determining knowledge representation (Information Age). The
epistemes are characterized by an overall “concrete to abstract progression” (Swan 2023, 116).
Initially there is no abstraction. Then the object is abstracted, the subject is abstracted, and both
subject and object are abstracted. Simultaneously, the scientific method has evolved from initial
attempts in Cartesian perspective and Baconian observation to the modern method of testable
hypothesis, observation, and experiment, now carried out by automated methods in the Information
Age for greater reach and replicability.
# | Historical Era | Basis of Knowledge (Episteme) | Scientific Method
1 | Renaissance Age (1300-1650) | Resemblance: literal resemblance, similitude between representation and represented | Cartesian perspective
2 | Classical Age (1650-1800) | Abstract idea: similarity or difference in the mental representation of a phenomenon | Baconian observation
3 | Modern Age (1800-present) | Role of the human: constitutive role of the human in determining knowledge and representation | Hypothesis, observation, experiment
4 | Information Age (1950-present) | Role of AI: constitutive role of AI in determining knowledge and representation | Knowledge graphs, math agents, AI supercomputers
Table 5. Epistemes: Knowledge Regimes by Historical Epoch (extending Foucault, Order of Things, 1970).
Representation in the Information Era. In AI systems, input data are converted to vectors (strings
of numbers) and embedded into various machine-readable forms for processing by deep learning
systems. Algorithms operate to learn the best vector representation of data at each node in the
knowledge graph (through matrix multiplication transformations) such that the model can
produce new data by predicting the next word or any element of a data series. The marquee AI
chatbot application, ChatGPT, is named for its deep learning architecture, the transformer neural
network (generative pretrained transformer). Transformers are so-called because they literally
“transform” vector-based data representations during the learning phase (using linear algebra and
matrix multiplication methods) per the three main allowable symmetry transformations in
physics: translation (displacement), rotation, and reflection. Knowledge graphs use TransE
(translation embedding), RotatE (rotation embedding), and ReflectE (reflection embedding)
algorithms in this undertaking. The resulting GenAI activity can be seen in the AI stack (Table 6)
of the four tiers of human interfacing AI assistants, reinforcement learning (RL) agents
(autonomous driving, robotics, gameplay), knowledge graphs (vector-represented datasets), and
artificial neural network architectures upon which deep learning algorithms run.
# | Focus | Tier | Description | Example
1 | Interface | AI Chatbots | Human-interface AI assistants | ChatGPT
2 | Agent | Reinforcement Learning Agents | Robotics, self-driving, gameplay | Tesla Autopilot, AlphaGo
3 | Content | Knowledge Graphs | Knowledge canon: all entities and their relations in a domain (LLMs, Foundation Models) | Recommendation engines
4 | Architecture | Deep Learning Neural Nets | Multilayer networks running deep learning algorithms (LLM architectures) | Transformers (GPT-4)
Table 6. The GenAI (Generative Artificial Intelligence) Stack.
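For concreteness, a sketch of the standard TransE and RotatE scoring functions named above (parameters illustrative):

```python
# Sketch of knowledge-graph embedding scores: TransE models a relation as a
# vector translation, RotatE as an elementwise rotation in the complex plane.
import numpy as np

def transe_score(h, r, t):
    """TransE: a fact (h, r, t) is plausible when h + r is close to t."""
    return -np.linalg.norm(h + r - t)

def rotate_score(h, r_phase, t):
    """RotatE: relations are rotations, t ~ h * e^{i * phase}."""
    return -np.linalg.norm(h * np.exp(1j * r_phase) - t)

h = np.array([0.2, -0.1])
r = np.array([0.5, 0.3])
print(transe_score(h, r, h + r))   # 0.0: a perfectly translated fact
print(rotate_score(h + 0j, np.array([np.pi, 0.0]),
                   np.array([-0.2 + 0j, -0.1 + 0j])))  # 0.0: a perfect rotation
```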
A key advance is shifting from the 2D to the 3D representation of data. Whereas natural
language knowledge graphs can be represented in 2D, applications in Digital Biology, quantum
computing, climate modeling, and fusion energy tokamak controllers require the 3D
representation of molecules. Such 3D representation occurs in GNNs (graph neural networks as a
more advanced implementation of transformers). To treat 3D environments, GNNs employ more
extensive physics. The transformation of data representations in GNNs is even more closely tied
to the allowable symmetry transformations in physics: translation (displacement), rotation,
reflection, and time reversal, and the notions of invariance (output unchanged per
transformation) and equivariance (output changes consistently with transformation). For
example, AlphaFold2’s Invariant Point Attention models the displacement and rotation of amino
acids as triangles in space to identify pairwise combinations based on angle and torsional force
(Jumper et al. 2021, 587). Also prominent in GNNs is beyond-Euclidean hyperbolic space to
efficiently represent large datasets, for example hierarchical tree-structured data in evolutionary
phylogenetic trees and protein-protein interaction networks. Deep learning architectures and
knowledge graph embedding methods highlight the implementation of mathematical physics in
the AI infrastructure, notably quantum-classical-relativistic models, real-complex-quaternionic
(1D-2D-4D) numbers, and beyond-Euclidean space (spherical, hyperbolic) and time (Lorentz
invariance, imaginary (complex-valued) time, and time reversal symmetry).
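As a sketch of the beyond-Euclidean geometry invoked above, the standard geodesic distance in the Poincaré ball model of hyperbolic space, which grows rapidly near the boundary and thereby packs hierarchical, tree-structured data efficiently (points illustrative):

```python
# Sketch: geodesic distance in the Poincare ball model of hyperbolic space.
import numpy as np

def poincare_distance(u, v):
    """Distance between points inside the unit ball (||x|| < 1)."""
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq / denom)

root = np.array([0.0, 0.0])
leaf_a = np.array([0.9, 0.0])    # near the boundary: "deep" in the hierarchy
leaf_b = np.array([0.0, 0.9])
print(poincare_distance(root, leaf_a))    # ~2.94
print(poincare_distance(leaf_a, leaf_b))  # ~5.20: siblings are far apart too
```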
Copernican Dethroning of the Human
One result of the Information Age investigation is the implication of the Copernican dethroning
of the human as the center of knowledge, replaced by AI. The main event of the Modern Age
episteme is the role of the human as not only determining what is to count as knowledge, but also
as being the subject of knowledge in a diverse range of fields including biology, medicine,
psychology, sociology, and political theory. In the Information Age episteme, there is a
displacement towards the role of AI being central in constituting, determining, representing, and
itself being the subject of knowledge. The range of human-focused sciences is giving way to a
new slate of academic investigations into problems of AI alignment and regulation, explainable
AI, machine ethics, and the development of digital minds. Foucault noted that the concept
“human” arose in the modern era and may be ephemeral (“man is an invention of recent date.
And one perhaps nearing its end.” (Foucault 1970, 422)). The traditional idea of “human” may
be superseded faster than Foucault imagined in an era of enhanced intelligence, brain-computer
interfaces, and digital twin connectomes (brain maps). The notion of “intelligence as a service”
relocates the exclusively human preserve to a generic algorithmic feature.
A second result of Information Age investigation is the implication of overcoming the false
problematic of the human pitted against the AI. The narrative may be framed as “AI taking your
job,” when it is more likely that humans who can use the new AI tools may take your job, and
moreover, this may be a desirable situation. The human-AI relation is often framed as a “you
win, I lose” fixed pie of rewards, rather than as a “bigger pie” of the greater possibilities that are
now available. It is true that humans may be facing non-trivial questions as to not only “labor
identity” as previously determined through work and professional role, but also “personal
identity” which can now be defined in new ways in higher Maslow tiers related to interests,
learning, and contribution. The same “automation economy” argument applies, that the main
thrust is using human imagination and ingenuity to take an active stance with the new tools to
solve problems in new ways and have a positive impact in the world.
A third and most philosophically urgent concern arising in the Information Age investigation is
what it means that technologies have decamped to beyond-Euclidean space-time regimes, finding
such domains better geared to their capacious activity. As humans, we are left behind as we
cannot access beyond-Euclidean space-time regimes with our in-built Kantian goggles, which are
geared to all perception and understanding being in the 3D space and 1D time of everyday lived
experience on earth. The real confrontation exposed in the Information Age episteme is
between humans and knowledge: whether humans are able to access and deploy the vast digital
knowledge stores now becoming available, in “data” content and in “formal” content. In the
human-knowledge confrontation, we need better goggles, and this is precisely what is offered in
the partnership of the human-AI relation: not competition, but cooperation towards the end goal
of intelligence amplification, including via Math Agents as a
new form of Kantian goggles. The critical genealogies lens of longitudinal throughlines helps
diagnose that the problematic is not the human-AI relation, but more foundationally, the human-
knowledge relation which is pressed into question by the second Linguistic Turn.
Part 3: Axiological Aspects of the Second Linguistic Turn
Philosophy of Performativity: Language Games, Social Pragmatism, and Enlightenment
Sufficiently complex information technologies are performative in that they must be executed to
produce results; their outcomes cannot be predicted in advance. This is true for web3, genAI, and
quantum technologies. Blockchains must operate to see which miner solves the dynamic
cryptographic puzzle to win the right to record the block. Deep learning neural networks must
learn the best vectorized representations of data on this training run. The quantum circuit must
evolve the system unitarily to see where the qubit lands this time (in the northern or southern
hemisphere of the Bloch sphere, equilibrating to a zero or one) when the wavefunction is
collapsed in measurement. At some level of complexity, systems become computationally
equivalent (Turing complete), as a plastic platform capable of running any other system.
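A toy sketch of this performativity in the blockchain case (illustrative, not an actual mining protocol): a proof-of-work puzzle whose answer cannot be predicted, only produced by executing the search:

```python
# Sketch: a toy proof-of-work puzzle in the style of blockchain mining.
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce whose hash has `difficulty` leading zeros -- by trying."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce   # known only after the fact, never in advance
        nonce += 1

print(mine("block 42"))  # which nonce "wins" is revealed only in execution
```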
The philosophy of performativity is suggested, with genealogical highlights through Austin
(speech act theory), Butler (identity is performed), Wittgenstein, and Derrida. An immediate
critical genealogical finding is that Wittgenstein (1889-1951) and Derrida (1930-2004) have
similar arguments concerning performativity that have not been connected in the canon. Derrida
engaged minimally with Wittgenstein and not on the topic of performativity (Shain 2005, 72).
Performativity occurs through action and language (the performative utterance). The later
Wittgenstein is a philosopher of performativity par excellence. He exhibits a landmark shift in
his thinking. He starts with the overarching presupposition in the Tractatus (1921) that the
structure of all language can be articulated as formal propositions. He then reverses this to a view
based on performativity in the Philosophical Investigations (1953), that language cannot be
studied through its structure but only through its use in “language games.” Derrida supports
Wittgenstein’s language games point, that the meaning of language is dynamically determined in
the context of its use. Derrida argues that meaning is not fixed or final, and is constituted as the
connection between entities in a field of relationality. Any “text” (book, human, society,
situation) must be re-read on-demand to determine its meaning. Meaning is assessed in “acts of
literature” (performative acts). The concept of différance destabilizes the possibility of a fixed
logocentric assignation as meaning is always deferred in time and different in space.
Wittgenstein’s Tractatus ethos collapsed due to Gödelian incompleteness (there is always a
proposition that a formal system cannot settle from within); likewise for Derrida, there is always a
trace from a system to the outside preventing its totalization. Wittgenstein comes to locate truth,
and aesthetic and ethical judgment, outside systems of formal propositions. There is a qualitative
excess in phenomenological experience that cannot be captured with formal propositions but is
determined through the use of words in language games. Language is also the fulcrum for
Heidegger and Hegel. For the later Heidegger, the meaning of being is disclosed in modes such
as the poetic language of Hölderlin in which “essence dwells only in the poetizing” (Heidegger
2013, 281). For Hegel, there is no exterior to the totalized system; however, aesthetics is an
important mode of human expression in which individual and societal self-conceptualization is
pressed into materials. He sees a historical developmental progression in art forms from
architecture to sculpture to an apogee in [language-based] poetry. Further, within poetry there is
a progression from epic to lyric to dramatic poetry. Bergson distinguishes the qualitative excess
with dual formulations of the quantitative and the qualitative, for example measurable clock time
(chronos) and lived-experience time (kairos), “duration” in his terminology (Bergson 1957, 75).
LLMs Paradox. Given the philosophical acknowledgement of Gödelian incompleteness, LLMs
present an interesting paradox as they are purported to encode the entirety of a language, yet
novel thoughts are still possible. All words and allowable grammatical relations are represented
in the LLM knowledge graph. One apparent implication is that there cannot be any new
“utterance” that does not pre-exist in the latent space of the graph. The language graph is a sort
of Borges-type “Library of Babel” with all potential texts of a certain length pre-existing in the
library. One may wonder to what extent it is possible to have a novel idea if it pre-exists
in the language graph. The sentence I write now is already in a “monkeys typing Shakespeare”
language graph somewhere. However, the philosophy of performativity reminds us that what
matters is the utterance. The performative use of the words in a Wittgensteinian language game
constitutes reality. It is not enough for the phrase to exist in latent space; it must be uttered. The
phrase is a “new” thought at the time and place of its utterance.
AI Language Games and Regulation. An immediate practical implication of the language games
argument is regulation. Wittgenstein’s point is that language cannot be understood through
formal rules, only through use as it unfolds in everyday social practices and forms of life. On the
Wittgensteinian view, it is philosophically incoherent and practically ineffective to regulate AI
before its use in social practice emerges. Inappropriate regulation could prohibit novel forms of
life from arising in the explorative practices of AI language games. Intelligence amplification
possibilities could be precluded before they can arise. The main regulatory methods employed
are use-based, technology-based, outcome-based, and principle-based. In fact, the EU AI Act
passed in March 2024 is a use-based framework, prohibiting certain uses of genAI such as social
scoring systems and biometric data applications (Mackrael 2024). The Wittgensteinian argument
provides philosophical support for the regulatory direction.
“Enlightenment is the human being’s emergence from self-incurred minority (inability to make
use of one’s own understanding without direction from another).”
– Kant, “What is Enlightenment?” 1996, 17
Social Pragmatism and Enlightenment. The philosophical implication of the language games
argument is seen in Brandom’s continuation of the trajectory. Brandom supports a social
pragmatism of norms arising through discursive practices. He notes that in the later Wittgenstein,
“sentences are the smallest linguistic units with which one makes a move in the language game”
(Brandom 2006, 7). Social practices and language games are the only justifiable source of
normativity. The use of language (including formal language) builds into social practices (which
have ethics, rules, and norms) and forms of life.
Brandom focuses on what he sees Rorty as calling for: a second Enlightenment to “extend
the treatment of the practical dimension to the theoretical or cognitive dimension of human life”
(Brandom 2022, 18). This means emancipation from non-human authority as the source of what
counts as right or good, “not [only] that of God, but that of objective Reality” (Ibid., 28). The
“opponent” comprises the generally synonymous terms of non-human authority, totalized notions
of objective reality, representationalism, and foundationalism. Koopman clarifies that the problem is
trying to “explicate normativity without foundations (or authority without authoritarianism)”
(Koopman 2011, 62). Towards this aim, the early Brandom posited a program of inferential
pragmatism synthesizing Kant and Hegel’s concepts of action and recognition. Action precedes
and forms norms: the “practical knowing how contributes to a cognitive claiming that”
(Brandom 2011, 22). These norms are then instituted in the social realm by “public social
recognitive practices” (Ibid., 3).
The later Brandom synthesizes Rorty and Hegel into a new conceptualization of social
pragmatism based on the notions of Rorty’s responsibility and Hegel’s recollection. From Rorty,
the key point is that authority is always extended only contingently, and is invalid without
commensurate responsibility attached to it. Hegel’s notion of recollection is also needed because
philosophical justification is only retrospective (Hegel’s “owl of Minerva begins its flight only
with the onset of dusk” (Hegel 1991, 23)). For Hegel, “recollection” is “the inwardizing of
experience” necessary to reflect on our reason-giving practices (Hegel 1977, §808, 492). For
Brandom it is only through these kinds of recollective “representational relations” that we obtain
“a normative relation of authority and responsibility” (Brandom 2022, 32). Brandom extends
Rorty’s idea of social norms (stuck in local communities) to include the reciprocity of authority
and responsibility (“normative statuses”) (Ibid., 71). This, Brandom claims, provides “the basis
for an expressive semantic account of normative representational relations” (Ibid., 106).
Risks and Limitations
There are many potential risks and limitations in the call for a second Linguistic Turn to study
formal languages, using a formal language construction, the Math Agent, to do so. Immediately,
the setup seems tautological. How are humans, poor at quantitative evaluation, supposed to use
and evaluate their use of a quantitative tool? It is ill-formed to study the problem with the
problem. Just as “it is problematic to study language with language,” likewise, “it is problematic
to study formal language with formal language [genAI].” The project is doomed from the start.
However, there are several responses. The first and most pragmatic is that the obvious course of
action is to use the best available tools, even if contradictory, and to address their shortcomings
within the scope of the investigation.
Second, it is a “feature not a bug,” and an inherent truism of any linguistic investigation, that the
investigation must be conducted with the very language it studies. As Rorty says, linguistic
philosophy refers to “philosophical problems [that] are problems which may be solved (or
dissolved) either by reforming language, or by understanding more about the language we
presently use” (Rorty 1967, 3). The path toward a solution lies in the problem formulation itself.
Part of the subtext of any formal language investigation is, by definition, investigating the
problematic aspects of the investigation of formal language.
Further, it is not actually a problem because, as counseled by the pragmatist tradition, language is
a vocabulary, not a foundation. GenAI is a vocabulary, “a set of words with which we justify our
actions, beliefs, and life” (Rorty 1989, 73). Evidence that the vocabulary relation is not a
static foundation is the argument that “using a vocabulary changes it” (Brandom 2000, 179).
Language as a vocabulary is dynamic, performative, and dialectical. GenAI is a language tool
(emphasis on tool), from which the contentful surmisings accrue to the human user of the tool.
The bigger problem this project faces is misconstrual. Although the description stresses formal
language as the content not the method of investigation, it might be assumed that formal
language is the method. Using terms such as “Linguistic Turn” and “formal language” seems to
call to mind propositional logic even though the project supports the opposite. A marketing problem is
created in that the messaging may cause the most needed thinkers to self-select away from the
project. Continental and critical thinkers in philosophy of science, technology, computation,
information, and mathematics are needed to address open-ended questions related to formal
language. Approaches from art, music, literature, humanities, complexity, and systems thinking
might greatly contribute to the effort. The problem at hand is continuing to query what the kinds
of language we encounter in our world mean.
Conclusion
A second Linguistic Turn is proposed to study the meaning, use, and interpretation of formal
language in configuring the modern computational infrastructure which gives rise to the
increasingly technologically mediated experience of phenomenological reality. Such a second
movement expands the focus on the study of natural language to also include formal languages
(mathematics, physics, computer code, LLMs). However, since humans are ill-equipped to treat
formal content directly, it is also proposed that a new kind of Kantian goggles, the Math Agent,
be employed to aid in the investigation. This new class of genAI goggles concerns not merely the
amplification of scale as with microscopes-telescopes, but the amplification of intelligence.
The problem is urgent as technology progresses much faster than human habituation. Husserl and
Heidegger raised concerns about this in the early 20th century. Husserl worried about a society-
wide “crisis” in the loss of meaning and direction in science and mathematics as it had become
unmoored from its origins (The Crisis of European Sciences and Transcendental
Phenomenology, 1936). From this text, the early Derrida began to formulate “the logic of
différance” (the difference and deferral of meaning contra fixed logos), articulating “the
phenomenon of ‘crisis’ as forgetfulness of origins” (Derrida 1962, 6–7, 33). The point echoes
the importance of recollection in subjectivation and enlightenment à la Brandom via Hegel.
Heidegger counseled that we must be constantly vigilant in maintaining the “right relation” with
technology as a “means to an end,” a background enabler, as opposed to letting it enframe us into
standing reserve (Heidegger 1977, 5). He argues that it is only through “essential reflection upon
technology and decisive confrontation with it” that we can come into “the right relation to
technology” so as to avoid “ordering, enframing, and standing-reserve” (Ibid., 35, 5). He sees a
way out, in that “The closer we come to the danger, the more brightly do the ways into the
saving power begin to shine and the more questioning we become” (Ibid., 35).
The exhortation from all philosophical traditions is an active stance in the face of technological
acceleration and self-formatting infopower tendencies. The empowerment in the human-
technology relation is not one of passive abdication to standing reserve or of Wanderer above
the Sea of Fog (Friedrich 1818) awe in the confrontation with the sublime. Instead, it is one of
action – the rolling up of shirtsleeves in the direct encounter with the technologies to define their
affordances in new language games, social practices, and forms of life. Such actively attuned
individuals and groups are proceeding with web3, genAI, and quantum technologies to reinvent
methods, uplevel impact, and build desirable futures, while also maintaining a critical eye.
The stakes are high, not just for human futures but for the future of humanity in the larger
tableau of digital societies possibly populated by digital minds and other non-human forms of
intelligence. Understanding language is a precondition for understanding minds. Brandom argues
for the understanding of “conceptual capacities (discursiveness in general) in terms of linguistic
capacities” (Brandom 2011, 22).
“…there is no reason why electronic computers could not be discursive creatures…”
– Brandom 2019 (Frapolli and Wischin 2019, 6)
The AI Flâneur
Towards the ethical development of digital minds, thinkers often frame the situation as that of a
parent or coach stewarding the self-development of a new entity, while maintaining AI alignment
with human-serving values (Shulman and Bostrom 2021, 306). One proposed developmental
path for artificial selves is via the economic principle of exchange (echoing Hegel’s reciprocal
recognition). Exchange is the basis for the “full stadial [staged] development” of self-directing
entities in society articulated by Adam Smith (Norman 2018, 55–60). Such self-directed agents
are not powered by naked self-interest, but rather seek a full range of prosperity in the exchange
of goods, esteem in the exchange of moral sentiment, and knowledge in the intellectual exchange
of ideas. This schema could be introduced into the genAI reinforcement learning agent
infrastructure (in which agents learn through an action-taking policy and reward function).
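A minimal sketch of how such a schema might be expressed as a reward function follows; the signal names, weights, and values are illustrative assumptions for this document, not an existing reinforcement learning framework.

```python
# Illustrative sketch (assumed names and weights, not an existing RL
# framework): a Smithian composite reward in which the agent is paid
# not for naked self-interest but across three channels of exchange --
# goods (prosperity), moral sentiment (esteem), and ideas (knowledge).
from dataclasses import dataclass

@dataclass
class ExchangeOutcome:
    goods_gained: float      # prosperity from the exchange of goods
    esteem_gained: float     # esteem from the exchange of moral sentiment
    knowledge_gained: float  # knowledge from the exchange of ideas

def smithian_reward(o: ExchangeOutcome,
                    w_goods: float = 1.0,
                    w_esteem: float = 1.0,
                    w_knowledge: float = 1.0) -> float:
    """Composite reward summing the three weighted exchange channels."""
    return (w_goods * o.goods_gained
            + w_esteem * o.esteem_gained
            + w_knowledge * o.knowledge_gained)

# One hypothetical step: a trade that also earns esteem and teaches.
step = ExchangeOutcome(goods_gained=0.2, esteem_gained=0.5,
                       knowledge_gained=0.8)
print(smithian_reward(step))  # 1.5
```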
Another path to the development of artificial selves is through creativity as a salient indicator of
self-directed intelligence. In this case, reinforcement learning agents are rewarded for creative
tasks (those involving active inference and abstraction) in the idea of reflexive autocatalytic
networks being the entity’s interaction mode with the environment (Gabora and Bach 2023, 22).
The critical genealogies approach to the development of artificial selves might identify a certain
threshold signal if an “AI Flâneur” were to emerge as the self-critiquing feedback loop of AI
society. Much like Baudelaire’s human flâneur, the AI flâneur is the detached observer-critic of
AI society. “Self-critique” is an expected emergence in sufficiently complex societies reaching
mature developmental phases (Adorno 1997, 292). Hence it is anticipated that mechanisms of
critique and self-critique would arise in AI society as an important self-monitoring feedback
lever in the deployment of artificial minds in digital society.
Overall, this work argues for a second Linguistic Turn in philosophy to study the meaning, use,
and interpretation of formal languages in the computational infrastructure. Such a second
Linguistic Turn extends the first Linguistic Turn in analytic, continental, critical, and pragmatic
philosophy. It is necessary to add the investigation of formal language (mathematics, physics,
chemistry, biology, software code) to the investigation of natural language as a determining
factor in the production of what counts as meaning and truth in human life. There is something
special about language as a hard-wired functionality in human minds and now being extended as
the vernacular of the computational apparatus. That language is the basis of genAI and the Math
Agent interface to the rest of the computational infrastructure suggests that we have not yet
understood everything about language as an ontological, epistemological, and axiological
existence in the human context, whether in physical or digital phenomenological reality.
Word Count: 9,910 (without Glossary, References, and Research Agenda APPENDIX)
Glossary
Computational Infrastructure: the composite of modern information and communications technologies
Critical Genealogies approach: compound formulation of Kantian critical methods and
Foucauldian genealogical methods together with the underlying problematization of issues
Digital Biology: extension of computational biology informatics with genAI methods
Equation Clusters: groups of related equations (visualized as mathematical embedding)
GenAI (generative artificial intelligence): multimodal AI systems which create new content
based on learning from a training dataset to predict the next word (any token)
Health Agents: personalized AI health advisors delivering health and longevity services by app
Industry 4.0: fourth Industrial Revolution technologies: blockchains, genAI, quantum, IoT, etc.
Intelligence Amplification: using genAI and other technologies to expand human intelligence
Kantian Goggles: humans see the world through unremovable goggles in which the mind
creates the manifold of time and space as the condition of possibility for apperceiving any object
Linguistic Turn: investigation of the role of language in philosophical problems (beg. 20th c)
Math Agents: specialized AI systems and problem-solving stance based on the mobilization of
mathematical content as an upleveled and validated lever for interacting with reality
Mathematical Complexity: computation equivalency of mathematics at same abstraction level
Mathematical Ecology: interacting set of equations (visualized as mathematical embedding)
Mathematical Observer: actor-observer (human, Math Agent, biological process) interacting
with a system at the level of math (per literal formalisms or cues) (e.g. digital health twins)
Quantum Computing: using atoms to perform computation per quantum mechanical principles
Renormalization: multiscalar system view via a conserved quantity (e.g. symmetry, free energy)
Second Linguistic Turn: investigation of the use, meaning, and interpretation of formal
languages (e.g. mathematics, physics, computer code) in configuring phenomenological reality
Web3: 3rd phase of the internet in the read-only web, read-write web, read-write-own web
progression focusing on security and exchange transactions enabled by blockchain technology
Wegovi-nomics: economic shifts due to “demand for health” and 3-5% population GLP-1 Rx’s
References
Adorno, Theodor W. 1997. Aesthetic Theory. London: Continuum.
Bergmann, Gustav. 1952. “Two Types of Linguistic Philosophy.” The Review of Metaphysics 5 (3): 417–438.
Bergson, Henri. 1957 (1910). Time and Free Will: An Essay on the Immediate Data of Consciousness. Trans. F.L.
Pogson. London: George Allen & Unwin Ltd.
Bialek, William. 2012. Biophysics: Searching for Principles. Princeton: Princeton UP.
Brandom, Robert B. 2022. Pragmatism and Idealism: Rorty and Hegel on Representation and Reality. Oxford:
Oxford University Press.
--------. 2011. From German Idealism to American Pragmatism—and Back. Perspectives on Pragmatism.
Cambridge MA: Harvard University Press. Pp. 1–34.
--------. 2006. Kantian Lessons about Mind, Meaning, and Rationality. Philosophical Topics. 34 (1&2) (Spring and
Fall 2006): 1–20.
--------. 2000. Vocabularies of Pragmatism: Synthesizing Naturalism and Historicism. In Ed. Robert B. Brandom,
Rorty and His Critics. Malden MA: Blackwell Publishers. Pp. 156–190.
--------. 1994. Making It Explicit: Reasoning, Representing, and Discursive Commitment. Cambridge MA: Harvard
University Press.
Critical Genealogies Collaboratory. 2018. “Standard forms of power: Biopower and sovereign power in the
technology of the US birth certificate, 1903–1935.” Constellations 25: 641–656. DOI: 10.1111/1467-8675.12372.
Derrida, Jacques. 1981. Positions. Trans. Alan Bass. Chicago: University of Chicago Press.
Interview with Jean-Louis Houdebine and Guy Scarpetta. Pp. 37–96.
--------. 1962. Edmund Husserl’s Origin of Geometry, An Introduction. Trans. J.P. Leavey Jr. Lincoln NE:
University of Nebraska Press.
Dixon, Chris. 2024. Read Write Own: Building the Next Era of the Internet. New York: Random House.
Frápolli, María José & Wischin, Kurt. 2019. “From Conceptual Content in Big Apes and AI, to the Classical
Principle of Explosion: An Interview with Robert B. Brandom.” Disputatio. Philosophical Research Bulletin 8: 9.
Fields, Chris & Levin Michael. 2022. “Competency in Navigating Arbitrary Spaces as an Invariant for Analyzing
Cognition in Diverse Embodiments.” Entropy 24(6):819. doi: 10.3390/e24060819.
Foucault, Michel. 1973 (1963). The Birth of the Clinic: An Archaeology of Medical Perception.
Trans. A.M. Sheridan. London: Routledge.
--------. 1970 (1966). The Order of Things: An Archaeology of the Human Sciences (Les Mots et les choses). Trans.
Tavistock. New York: Routledge.
Gabora, Liane & Bach, Joscha. 2023. A Path to Generative Artificial Selves. Progress in Artificial Intelligence.
Cham CH: Springer. Pp.15-29. DOI:10.1007/978-3-031-49011-8_2.
Gavranović, Bruno, Lessard, Paul, Dudzik, Andrew et al. 2024. Categorical Deep Learning: An Algebraic Theory of
Architectures. arXiv:2402.15332.
Guarente, Lawrence, Sinclair, David A., & Kroemer, Guido. 2024. “Human trials exploring anti-aging medicines.”
Cell Metab 36(2): 354–376.
Hegel, Georg W.F. 1991 (1829). Elements of the Philosophy of Right. Ed. Allen W. Wood. Trans. H.B. Nisbet.
Cambridge UK: Cambridge University Press.
--------. 1977 (1807). Phenomenology of Spirit. Trans. A.V. Miller. Oxford: Oxford University Press.
Heidegger, Martin. 2013 (1933-1944). The Event. Trans. Richard Rojcewicz. Bloomington IN: Indiana University
Press.
--------. 1977. “The Question Concerning Technology.” The Question Concerning Technology and Other Essays.
Trans. William Lovitt. New York: Garland Publishing, Inc. Pp. 3–35.
Jumper, John, Evans, Richard, Pritzel, Alexander et al. 2021. “Highly accurate protein structure prediction with
AlphaFold.” Nature 596: 583–589. doi: 10.1038/s41586-021-03819-2.
Kant, Immanuel. 1996. Practical Philosophy. Trans. Mary J. Gregor. Cambridge UK: Cambridge University Press.
Karpathy, Andrej. 2017. “Software 2.0.” Medium. Accessed 15 Mar 2024. https://karpathy.medium.com/software-2-
0-a64152b37c35.
Koopman, Colin. 2019. How We Became Our Data: A Genealogy of the Informational Person. Chicago: University
of Chicago Press.
--------. 2013. Genealogy as critique: Foucault and the problems of modernity. Bloomington IN: Indiana University
Press.
--------. 2011. “Rorty’s Linguistic Turn: Why (More Than) Language Matters to Philosophy.” Contemporary
Pragmatism. 8 (1): 61–84. doi: 10.1163/18758185-90000183.
Koopman, Colin and Matza, Thomas. 2013. “Putting Foucault to Work: Analytic and
Concept in Foucaultian Inquiry.” Critical Inquiry 39 (4): 817–840.
Mackrael, Kim. 2024. “European Lawmakers Pass AI Act, World’s First Comprehensive AI Law.” Wall Street
Journal. Accessed 15 Mar 2024. https://www.wsj.com/tech/ai/ai-act-passes-european-union-law-regulation-
e04ec251.
Montoya, Amanda K., & Edwards, Michael C. 2021. “The Poor Fit of Model Fit for Selecting Number of Factors in
Exploratory Factor Analysis for Scale Evaluation.” Educational and Psychological Measurement 81 (3): 413–440.
https://doi.org/10.1177/0013164420942899.
Norman, Jesse. 2018. Adam Smith: Father of Economics. New York: Basic Books.
Rorty, Richard M. 1989. Contingency, Irony, and Solidarity. Cambridge UK: Cambridge University Press.
--------. 1967. Introduction: Metaphysical Difficulties of Linguistic Philosophy. Ed. Richard M. Rorty. The
Linguistic Turn: Essays in Philosophical Method. Chicago IL: University of Chicago Press. Pp. 1–11.
Seife, Charles. 2010. Proofiness: The Dark Arts of Mathematical Deception. New York: Viking.
Sejnowski, Terrence J. 2020. “The unreasonable effectiveness of deep learning in artificial intelligence.” Proc. Natl.
Acad. Sci. U.S.A. 117(48): 30033–30038. doi: 10.1073/pnas.
Shain, Ralph. 2005. “Derrida’s References to Wittgenstein.” International Studies in Philosophy 37 (4): 71–104.
doi:10.5840/intstudphil200537415.
Shannon, Claude E. 1948. “A Mathematical Theory of Communication.” Bell System Technical Journal 27: 379–
423. http://dx.doi.org/10.1002/j.1538-7305.1948.tb01338.x.
Sheehey, Bonnie. 2020. “Reparative Critique, Care, and the Normativity of Foucauldian Genealogy.” Angelaki
Journal of Theoretical Humanities 25 (5): 67–82. DOI: 10.1080/0969725X.2020.1807142
Shulman, Carl & Bostrom, Nick. 2021. “Sharing the World with Digital Minds.” Eds. Clarke, Steve, Zohny, Hazem
& Savulescu, Julian. Rethinking Moral Status. Oxford: Oxford University Press. Pp. 306–326.
https://doi.org/10.1093/oso/9780192894076.003.0018.
Swan, Melanie. 2023. “Alexander von Humboldt's Environmental Holism.” Eds. Allert, Beate, I., Clason,
Christopher R., Peach, Niall A. & Quintana-Vallejo, Ricardo. Alexander von Humboldt: Perceiving the World. West
Lafayette IN: Purdue University Press. Pp. 101–126.
--------. 2015. Blockchain: Blueprint for a New Economy. Sebastopol CA: O'Reilly Media.
Swan, Melanie & dos Santos, Renato P. 2023. Information Systems Biology: Biophysics, LLMs, and AdS/Biology.
DOI:10.13140/RG.2.2.30368.35846.
Swan, Melanie, Kido, Takashi, Roland, Eric & dos Santos, Renato P. 2024. “AI Health Agents: Pathway2vec,
ReflectE, Category Theory, and Longevity.” AAAI Spring Symposium: Impact of GenAI on Social and Individual
Well-being. Stanford University 25-27 March 2024.
--------. 2023. Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics.
arXiv:2307.02502.
Taylor, Luke. 2023. “More than half the world’s population lack basic health services as progress stalls.” BMJ 382:
2160. doi: https://doi.org/10.1136/bmj.p2160.
Tyson, Alec & Kikuchi, Emma. 2024. How Americans View Weight-Loss Drugs and Their Potential Impact on
Obesity in the U.S. Pew Report.
Varenne, Franck. 2013. “The Mathematical Theory of Categories in Biology and the Concept of Natural
Equivalence in Robert Rosen.” Revue d’histoire des sciences 66: 167–197. https://doi.org/10.3917/rhs.661.0167.
Wong, Chi K., McLean, Brent A., Baggio, Laurie L. et al. 2024. “Central glucagon-like peptide 1 receptor activation
inhibits Toll-like receptor agonist-induced inflammation.” Cell Metab 36 (1): 130–143.e5. doi:
10.1016/j.cmet.2023.11.009.
APPENDIX: Research Agenda for the Second Linguistic Turn
Hacking’s “consciousness of language” (Koopman 2011, 61) could continue as a philosophical
concern in all traditions in the second Linguistic Turn to study the role of formal language in the
technological infrastructure and meaning construal, prominently in the mathematized use of
language in LLMs. Such study is warranted given the “formalization turn” in technology
extending farther into higher-order mathematics (e.g. knowledge graph embedding with complex
numbers and time reversal symmetry, and genome informatics with category theory). One aim is
obtaining an understanding of the conceptual canon in formal methods (mathematics, physics,
computer code) for philosophical investigation and deployment to other domains (Table A1).
Philosophy of Language
1 Brandom: new vocabularies, the role of the mathematical observer as a vocabulary user
2 Language: LLMs, mathematized language, Chomsky grammars, latent space, novel utterance
3 Derrida: status of speech-writing distinction with the advent of multimodal language models
Philosophy of Mathematics
4 Category theory: relevance of emerging high-profile category theoretic methods in technology
5 Knowledge graph embedding: mathematical theory of beyond-Euclidean spacetimes
6 Model theory: shift from the study of logics to theories to classes of theories (and their models)
7 Digital biology: biosystem computational complexity (protein-gene-pathway schema)
Table A1. Research Agenda: The Study of Formal Methods in the Computational Infrastructure.
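As one concrete instance of the higher-order mathematics named in the agenda (knowledge graph embedding with complex numbers), the following is a minimal sketch in the style of the ComplEx scoring function, an external, widely used formulation rather than a method of this document; the random vectors are illustrative stand-ins for trained embeddings.

```python
# Minimal sketch of complex-valued knowledge graph embedding in the
# style of the ComplEx scoring function: entities and relations are
# complex vectors, and a triple (head, relation, tail) is scored as
# Re(sum_k h_k * r_k * conj(t_k)). Random vectors below are stand-ins
# for trained embeddings (illustrative assumption).
import numpy as np

rng = np.random.default_rng(0)
dim = 8

def embed():
    """A random complex embedding vector (placeholder for training)."""
    return rng.normal(size=dim) + 1j * rng.normal(size=dim)

head, relation, tail = embed(), embed(), embed()

def score(h, r, t):
    """Triple score: real part of the trilinear complex product."""
    return float(np.real(np.sum(h * r * np.conj(t))))

print(score(head, relation, tail))

# The complex conjugate on the tail breaks head/tail symmetry, so one
# relation embedding can model asymmetric (directed) relations:
print(score(tail, relation, head))  # generally a different value
```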
New Vocabularies. In the pragmatist tradition, thinkers Rorty and Brandom are contra
representationalism and foundationalism – the idea that there could be a fixed foundation or
universal model of truth and knowledge (e.g. the predecessor moment leading to the Linguistic
Turn). In French theory, Derrida makes the same point in another vernacular, that of arguing
against logocentrism by maintaining that there is no logos (fixed meaning). Brandom sees Rorty
using the “vocabulary vocabulary” as a substitution for the metavocabulary of representations
(representations as the impossible attempt to mirror the world truthfully) (Brandom 2000, 168).
Rorty seeks to destabilize representationalism with countermeasures such as contingency, irony,
and solidarity. Rorty defines “final vocabulary” as “a set of words with which a person justifies
his or her actions, beliefs, and life” (Rorty 1989, 73).
Brandom generatively understands Rorty’s “vocabularies” to be “linguistic and social practices
whose purpose it is to create new purposes” (Brandom 2000, 178). Vocabularies are themselves
a language game because “to use a vocabulary is to change it” (Ibid., 177). Conceptual norms
allow us to make novel claims which leads to change, not just in the claims, but also thereby in
the concepts themselves (Ibid.). Vocabularies beget new language games as linguistic and social
practices in that “every new vocabulary brings with it new purposes for vocabularies to serve”
(Ibid., 180). Research questions revolve around how new users of vocabularies, AI Agents,
LLMs, and robots may be Brandomian in their use, creating new uses, and new vocabularies,
hence new social norms and forms of life, possibly in extremely fast evolutionary cycles.
LLMs and Language. How is language in LLMs different from language in everyday use? To
what extent is the use of language in LLMs new? The advent of LLMs raises philosophical
questions regarding the extent to which the concept of “language” may be applicable beyond the
exclusively human context. Is the fact that humans are implementing bodies of knowledge as
“languages” in LLMs a lingering bias from the hardwiring of natural language in the human
brain or a signal that language is something more foundational? That formal languages are
amenable to instantiation in LLMs seems to be important. On the one hand, it is possible that
humans, thinking in natural language, and hence “natural-languaging” all formal content, have,
from the beginning, been falsely casting formal content in natural-language terms, including its
writing with symbols in equations. So, the question becomes “What does math look like if not
represented in natural-language terms?” That is difficult to answer. On the other hand, the signal
from LLMs as a formal language could be pointing to the greater significance of language as a
structural formation.
At the simple definitional level, languages describe entities and the rules for their interaction.
Hence, even on the basic definition, language is already extant beyond the human context,
cashing out to equivalence with the laws of physics (in the sense of being real beyond the human
context). However, at a more profound level, LLMs as a formal language seem to be signifying
something more foundational that languages are doing. Language as formal language is
allowing a higher level of well-formed interaction with reality, in what could finally be an
improved solution to the appearances versus reality problem. Although science and knowledge
are constructed enterprises, there may be a foundational aspect to language extending beyond the
human context as a structure for rule-based interactions with reality. Can the four-tier Chomsky
grammars help formalize this relation? Is it possible for LLMs as a “selfie of the human mind” to
escape from Rorty’s local practices into a Hegelian recollective picture of reason-giving?
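On the question of the four-tier Chomsky grammars, the reference sketch below tabulates the standard hierarchy, pairing each grammar class with the shape of its production rules and the machine class that recognizes its languages; this is textbook formal language theory rather than a claim specific to LLMs.

```python
# Reference sketch of the four-tier Chomsky hierarchy: each grammar
# class, the shape of its production rules, and the machine class that
# recognizes its languages (standard formal language theory).
chomsky_hierarchy = {
    "Type 0 (unrestricted)": {
        "rules": "alpha -> beta (arbitrary strings)",
        "recognizer": "Turing machine",
        "languages": "recursively enumerable",
    },
    "Type 1 (context-sensitive)": {
        "rules": "a A b -> a gamma b (gamma nonempty)",
        "recognizer": "linear-bounded automaton",
        "languages": "context-sensitive",
    },
    "Type 2 (context-free)": {
        "rules": "A -> gamma (single nonterminal on the left)",
        "recognizer": "pushdown automaton",
        "languages": "context-free",
    },
    "Type 3 (regular)": {
        "rules": "A -> a or A -> a B",
        "recognizer": "finite automaton",
        "languages": "regular",
    },
}

for tier, props in chomsky_hierarchy.items():
    print(f"{tier}: {props['languages']} languages, "
          f"recognized by a {props['recognizer']}")
```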
Category Theory and Computational Infrastructure. What does it mean that there is a recent spate
of category theoretic methods proposed for the systematic and integrated study of industry 4.0
technologies? Examples of such category theoretic formulations are emerging in blockchains,
digital biology, genAI deep learning neural networks, and quantum computing (Table A2).
Category theory is a branch of mathematics that studies objects and the structure-preserving
morphisms between them, with functors mapping between categories and ready extensibility to
groups of categories and their interactions. On the one hand, category theory in industry 4.0 technologies is not a surprise since
it is a kind of “meta mathematics” accommodating the analysis of complex structures in an
organized way. On the other hand, the fact that technologies prove amenable to formulation,
study, and discovery with category theoretic methods suggests their further formal well-
formedness and potential integration in the mathematical infrastructure.
Technology | Category Theory Formal Method | Reference
1 Blockchains | Partita doppia: algebraic bicategory of spans of reflexive graphs | Katis 2008
2 Digital Biology: Protein | Olog of beta-helical & amyloid filaments vs social networks | Spivak 2011
3 Digital Biology: Genome | Dist (distance) category of Petri nets, olog, operad, preorder | Wu 2023
4 Digital Biology: Genome | Commutative monoids for linkage disequilibrium | Tuyeras 2023
5 Deep Learning NN design | Monad algebra valued in parametric maps 2-category | Gavranovic 2024
6 Computer Programs | Oplax functors (posets) and lax natural transformations | Katsumata 2023
7 Quantum Computing | ZX-calculus dagger symmetric monoidal category circuits | Duncan 2019
Table A2. Category Theoretic Formulations of Industry 4.0 Technologies.
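To fix the basic categorical vocabulary used above, the following is a minimal illustrative sketch in code of objects, morphisms, composition with identities, and a trivial functoriality check; it is a toy example and does not implement any of the Table A2 formulations.

```python
# Minimal sketch of basic categorical vocabulary: a toy "category of
# finite sets and functions" with composition, identity, and a trivial
# functoriality check. Purely illustrative; it does not implement any
# of the Table A2 formulations.

def identity(x):
    """Identity morphism."""
    return x

def compose(g, f):
    """Composition g after f of morphisms (here, plain functions)."""
    return lambda x: g(f(x))

# Objects: finite sets. Morphisms: functions between them.
A = {1, 2, 3}
f = lambda n: "a" if n < 3 else "b"   # morphism A -> {"a", "b"}
g = lambda s: s.upper()               # morphism {"a", "b"} -> {"A", "B"}

h = compose(g, f)                     # composite morphism A -> {"A", "B"}
print([h(x) for x in sorted(A)])      # ['A', 'A', 'B']

# A functor must preserve identities and composition:
# F(g . f) == F(g) . F(f). For the identity functor this is immediate:
F = lambda morphism: morphism
assert all(F(compose(g, f))(x) == compose(F(g), F(f))(x) for x in A)
print("functoriality check passed")
```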
References
Duncan, Ross, Kissinger, Aleks, Perdrix, Simon & van de Wetering, John. 2019. Graph-theoretic Simplification of
Quantum Circuits with the ZX-calculus. arXiv:1902.03178.
Gavranović, Bruno, Lessard, Paul, Dudzik, Andrew et al. 2024. Categorical Deep Learning: An Algebraic Theory of
Architectures. arXiv:2402.15332.
Katis, Piergiulio, Sabadini, N., & Walters, R.F.C. 2008. On partita doppia. arXiv:0803.2429v1.
Katsumata, Shin-ya, Rival, Xavier & Dubut, Jérémy. 2023. A Categorical Framework for Program Semantics and
Semantic Abstraction. arXiv:2309.08822.
Spivak, David I., Giesa, Tristan, Wood, Elizabeth & Buehler, Markus J. 2011. “Category Theoretic Analysis of
Hierarchical Protein Materials and Social Networks.” PLoS ONE 6 (9): e23911. doi: 10.1371/journal.pone.0023911.
Tuyéras, Rémy. 2023. Category theory for genetics II: genotype, phenotype and haplotype. arXiv:1805.07004.
Wu, Yanying. 2023. A Category of Genes. arXiv:2311.08546v1.