On Something Like an Operational Virtuality
Alexander Wilson


Citation: Wilson, Alexander. 2021. On Something Like an Operational Virtuality. Humanities 10: 29. https://doi.org/10.3390/h10010029
Received: 29 October 2020; Accepted: 6 January 2021; Published: 9 February 2021
Institute of Research and Innovation, 75004 Paris, France; alexander.wilson@iri.centrepompidou.fr
Abstract:
We outline here a certain history of ideas concerning the relation between intuitions and their external verification and consider its potential for detrivializing the concept of virtuality. From Descartes and Leibniz onward to 19th-century geometry and the concept of “invariant” that it shares with 19th-century psychology, we follow the thread of what might be informally called an “operational” conception of the virtual, an intuition progressively developed in the 20th century from group theoretical thinking into “functorial” thinking (in the context of category theory), and eventually intuitions for the concept of “univalence” (homotopy type theory) and its implications for the meaning of equality and identity. At each turn, skeptical arguments haunt this history’s modes of exteriorization, proof, and verification; we consider the later Wittgenstein’s worries concerning rule following and the apparent unbridgeable gap between formal theory and informal practice. We show how the development of mathematical intuitions and formalisms in the last century and the discovery of a deep connection between intuitionistic logic and computation have begun to respond to some of these concerns and favour a conception of virtuality that is operational, constructive, pragmatic, and hospitable to scientific detrivialization.
Keywords:
cognition; intuition; proof; Cassirer; Leibniz; Duns Scotus; Bergson; Wittgenstein; computation; category theory; univocity; univalent foundations
1. (Re-)Naturalizing the Virtual
John Duns Scotus employed the concept of virtuality in the context of natural theology.
It was here a tool for discussing the nature of God through the rational engagements of
the intellect, without recourse to divine revelation or dogma. “Is it possible by natural
means for man’s intellect in the present life to have a simple concept in which concept
God is grasped?” (Duns Scotus 1987, p. 17) An affirmative response to this question
necessitated a bridge between the created and the uncreated, between existence and essence,
such that being in each case could soundly be discussed using the same basic concepts, and
that we might therefore begin to tame the transcendental. In Duns Scotus, virtuality was
this means to bridge the transcendental gaps between categories, and part of a strategy for
naturalizing the divine, for it would allow us to rigorously consider the concept of infinite
being as equivalent, virtually, to the perfections of God. To be is to have attributes, and
attributes always come attached to entities. There can be no discussion of si esse, without a
discussion quid est—for, if a thing exists, it will automatically have properties and some
descriptions will automatically be appropriate to it. Thus, for Duns Scotus, all existing
things virtually contain their properties and attributes, and any eventual knowledge of
such things is always already virtually included in the thing in question. Indeed, the virtual
was not here wielded against naturalism but was actually part of a method for naturalizing
the divine. The virtual was what ensured the correspondence between what is and what is
said of what is, or between the intuition and the entity’s external manifestation as a real
existent. To exist at all is to virtually contain proofs of existence. Each thing in the world, if
it exists, thereby contains all the virtues by which we may come to know it.
Later on, of course, with Bergson the virtual becomes the site of pure flux, difference,
subjectivity, creativity, and indeterminacy, allied to the various heterogeneities and continu-
ities that he championed against the rules, equations, and discretizations of modern science.
The intellect only divides and separates these fluxes into actualities, driven as it is by the
biological necessities of survival that limit memory only to what is practically attainable,
whereas it is intuition, instinct, and affect that actually harbor the truth of life: that there
are no true individuals, only flux and genesis, pure inexhaustible difference implying
that each model, each measurement, each calculation or equation is immediately falsified,
immediately made obsolete by the constant modifications of time as pure becoming.
Whereas the virtual was an instrument of naturalization for Duns Scotus, Bergson
will have retooled the concept, weaponized it against what he perceived were the excesses
of modern naturalization. We need only recall his fateful attack on Einstein’s “spatial-
ization of time”, in Duration and Simultaneity, where he patrols and defends a border
between the real and the model, between the experience and its scientific extrapolation
(Bergson and Mitchell [1911] 1944). The subtle doctor might have turned in his grave. Was
there not something disingenuous in this fortification of the virtual as the last bastion of
the spirit, the last desperate line of defense against the lifeless structures and mathematical
symbols of science? It is a question that forcefully returns to relevance today, in the age of
COVID-19, as various esoteric voices question vaccines and sanitary measures, or link the
virus to 5G telecommunications. There is warranted concern about possible subterranean
connections between such vitalistic spiritualisms and fascist ideologies, as we also see
neo-Nazis and white supremacists marching side by side with hippies, new-agers, and
conspiracy theory peddling “shamans”. For indeed, what is the fear of science—of its
overturning of instincts and intuitions, its falsification of doxa—if not a very subtle kind
of prejudice? It is a refusal to accept the expressions of reality over soul-comforting folk
illusions about nature, and thus a kind of empirical bigotry. It might even imply that all
forms of prejudice are in fact subspecies of this original form: a great recoil from the
evidence. Bergson accuses science of being metaphysical, and claims his position is the
most natural, obvious, and unburdened by unfounded speculation, a strategy typical of the
esoteric’s critique of science as “just another religion”, or the creationist’s accusation that
evolution is a matter of belief. Thus, despite his attacks on science’s discretizations and
negations, Bergson’s oeuvre is itself a great negation, a complex apparatus for denying the
Copernican decentering of the human spirit as highest and most developed form of élan
vital. This is all obviously inseparable from the question of posthumanism. The overcoming
of humanism implies affirming the event, overcoming our hostility to it, finding the courage
of a hospitality toward the scientific detrivialization of our deepest assumptions. This is
perhaps why Deleuze found it necessary, in Difference and Repetition, to confront Bergson with
Nietzsche’s trials of affirmation (Deleuze [1968] 1994). Nevertheless, Deleuze did little to
dispel the anti-scientistic overtones of virtuality after Bergson.
These questions and worries, though unfortunately far beyond the scope of the present
article, at least warrant that we relax any hardline assumptions that virtuality is neces-
sarily opposed to geometry and spatialization, measurement and categorization, or even
mechanistic exteriorization and computation. What if Duns Scotus’s less contrived, more
neutral notion of virtuality, unburdened with the extra responsibility of defending qualia
from the encroachment of eliminative naturalization, was in fact the more fruitful and
pragmatic notion? Let us return to the “dictionary definition” of the virtual: that famous
entry into the Dictionary of Philosophy and Psychology authored by C. S. Peirce, where the
virtual is defined simply as something that is not X that nevertheless has the efficiency of
X (Baldwin 1901, p. 763). Something that serves the same purpose as the actual thing, or
that can be considered a proxy for it. The virtual is a “stand in”, something that is taken “as
if” it were X. If posthumanism implies demilitarizing this terrain between virtuality on one
side, and empirical science and mathematical formalism on the other, it may also imply
returning to an understanding of virtuality as that by which one thing is taken for another,
such that an object’s existence is equivalent, virtually, to the collection of its attributes.
In the following sections, I sketch out a certain history of ideas meant to support
a speculative conception of what might tentatively be called an operational virtuality. In
Descartes, Leibniz, Hume, and Kant, a dialectics of intuition and exteriorization leaves us in
an uncomfortable situation: either the virtual link between concepts and percepts, intuitions
and exteriorizations, is nothing more than wishful thinking, or the as-if traps us within
a finitude wherein intelligibility is tautologically disarmed by solipsistic worries. In the
later Wittgenstein’s investigations and remarks, the chasm between the two perspectives is
recast as a problem of rule-following, falling through the gap between saying and doing,
theory and practice, deontic and ontic. We pick up the thread of Cassirer’s highlighting of
parallel developments in 19th and early 20th century mathematics, physics and psychology,
which since have been extended beyond group theory into category theory and eventually,
through a convergence with intuitionistic logic and computability theory, to the notion
of univalence. In a way reminiscent of how the medieval arguments for univocity tried to
bridge the worlds of the created and uncreated, these latest mathematical trials of equality
and equivalence have begun to build a very subtle bridge between rule and execution,
theory and practice, through a progressive illumination of the ever more delicate structures
that span these oppositions from behind the scenes. Perhaps the perceived impasse was the
effect of an overly naive conception of identity, that of Leibniz’s identity of indiscernibles: the
gap between being and that which is said of being could only be bridged by detrivializing
identity and difference, such that the textures and fibrations behind their bare opposition
could begin to be mapped out, showing a way beyond the facile and unpragmatic judgment
that “it is what it is”.
2. Exteriorization and Proof
. . . may I not [. . .] be deceived every time I add two and three or count the
sides of a square, or perform an even simpler operation, if that can be imagined?
(Descartes et al. [1641] 1998, p. 61)
In his Meditations on First Philosophy, Descartes showed the ease with which we can
doubt even our most basic assumptions. In everyday life, we trust our perceptions and
concepts, we trust that 2 + 3 = 5; we trust that when we are performing the calculation
some evil demon is not swooping in to change the list of numbers beneath our eyes. But
what does this trust hold to?
Leibniz, like other thinkers of his time, saw the importance of finding ways of overcom-
ing this skepticism. Is there any assurance that our thoughts somehow align, correspond,
or represent things as they are? Leibniz’s attempt to escape skepticism turned on his deep
appreciation for the fact that just because we can state the existence of something does
not mean it is realizable, or constructible. This is obvious in mathematics: we can say “2 +
3 = 6”, or “parallels intersect”, but just stating it does not make it true. There are specific
ways the properties combine and compose that restrict their expression. Leibniz reckoned
that truth could only be determined in the bottoming out of our lines of explanation, when
the operational analysis halts, and where the intrinsic universal character of the substance
expresses its pure distinction from other substances and aggregates.
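To give this idea a contemporary (and admittedly anachronistic) illustration, a modern proof assistant behaves much as Leibniz's calculemus would demand: a statement is accepted only when a terminating construction of it can be checked, step by step, outside the intuition. A minimal sketch in Lean (assuming a current Lean 4 toolchain; this is my gloss, not anything Leibniz wrote):

```lean
-- A minimal sketch in the spirit of "calculemus": the true statement is
-- certified by an actual computation that halts and can be checked,
example : 2 + 3 = 5 := rfl
-- while the merely stated falsehood admits no such construction and is rejected:
-- example : 2 + 3 = 6 := rfl   -- fails to type-check
```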
Thus, for Leibniz, substances are revealed through acts of demonstration in extension.
The concept resides in its potential exteriorization. In order to make sure the demon is
not deceiving us, we have to exteriorize our thoughts—which for him are themselves
made up of little perceptions, real substances—label them with symbols and classify the
combinatorial rules that specify how they compose with other ones. Once this is done,
calculemus, says Leibniz. We need to “shut up and calculate”, we need to prove statements
in demonstration, through an iterative analytical process that may well exceed our capacities
of intuition. We need to exteriorize the vague and indistinct concepts in our souls, actualize
them in extension, and process them down to their indubitable proofs of existence. Thus,
the universal characteristic, a language that encodes our concepts, must be combined with
the calculus ratiocinator, the physical machine, external to our minds, that processes the
universal language. He thought that by crunching the numbers on such a sophisticated
machine, even the most subtle metaphysical questions could be distilled to their bare
distinctions. The point of exteriorizing this process and not just calculating in your head,
was that we needed to make sure we were not deceiving ourselves: in this way, we could
check the proof, and locate any errors. Cassirer explains this important motivation in
Leibniz’s thought:
...even where all the rules of thought are applied with formal correctness, there al-
ways remains a possibility that the contents of thought, instead of being repeated
in identical distinctness, may change unbeknown to us. As we know, Descartes
saw no epistemological but only a metaphysical way out of this labyrinth: his
invocation of “God’s veracity” does not appease or resolve the doubt but simply
strangles it. Yet here precisely lies the point of departure for Leibniz’ development
of the technique and methodology of mathematical proof. It can be shown histor-
ically that Descartes’ skepticism about the certainty of the deductive method was
the force that impelled Leibniz to his theory of proof. If a mathematical proof is to
be truly stringent, if it is to embody real force of conviction, it must be detached
from the sphere of mere mnemonic certainty and raised above it. The succession of
steps of thought must be replaced by a pure simultaneity of synopsis.
(Cassirer [1923] 1985, pp. 388–89)
Cassirer goes on to mention the echo of this idea in David Hilbert’s formalist program,
which was indeed in the same spirit: the axiomatic method sought clear and distinct
foundations for mathematics. But his note, at the end of the quote, that the successive
operations of thought must be replaced by a “pure simultaneity of synopsis” is a rather
Kantian reading of what Leibniz seems to have had in mind. Cassirer emphasizes the
symbolic character of the formalization and proof, rather than the mechanical character, the
characteristica rather than the ratiocinator. But Leibniz suggested that (at least) some of these
proofs could not possibly be held within the intuition, and that, even when exteriorized,
some aspects of the universe would remain indistinct: like incomputable, non-terminating
programs, some series of operations would go on crunching forever, expressing indistinct
analyses that “only God” could grasp in full distinction. It is Kant who believed that such
an externalized process of calculation was insufficient for knowledge: concepts needed to
be grasped in sensuous intuition—they could not be merely written down or materialized in
a computer process, but needed to be intuited synoptically, in the mind’s eye.
Hume had taught Kant that all the earlier metaphysical attempts at binding the soul
to the body, the inside to the outside, and of accounting for the real connection between
percepts and concepts, were ultimately dogmatic and held to nothing but blind faith in
the goodness of God. And Leibniz held no illusions about his strategy of exteriorized
proof: despite the objective demonstrability of the exteriorized method, he knew it did
not completely eliminate the doubt that there is any congruence between our indistinct
notions and the distinct symbols we encode into the mechanical oracle. Ultimately, Leibniz
entertained his own metaphysical argument, a nuanced version of the occasionalism—
popular in his day—of Cartesians like Malebranche. But Leibniz pushes it out to its limit:
God intervenes as little as possible, and with the exception of miracles, only truly acts once,
at the beginning of time, when he chooses these substances rather than others. God, being
of the highest good, was “forced” to choose the best possible world, insuring that at the
very least knowledge and rational thought were possible. This explained, for Leibniz, why
we are sometimes deceived, why there are criminals and wicked people in the world, and
why horrible tragedies befall us, despite God’s ultimate intervention: God was himself
constrained by rules, and even the best possible world is constrained by internal relations
of compossibility, such that in order for the best to happen, some bad must also take place.
Echoing Pascal’s probabilistic wager, Leibniz claims:
It is therefore infinitely more reasonable and more worthy of God to suppose
that, from the beginning, he created the machinery of the world in such a way
that, without at every moment violating the two great laws of nature, namely,
those of force and direction, but rather, by following them exactly (except in the
case of miracles) . . .
we can easily judge that this hypothesis is the most probable, being the simplest,
the most beautiful, and most intelligible, at once avoiding all difficulties . . .
(Leibniz 1989, p. 84, “Letter to Arnauld” [1687], my italics)
Denying himself recourse to any such cosmological arguments, Kant found his way out
of the labyrinth with the powerful concept of the synthetic a priori (Kant [1781] 1996). We
replace the pre-critical theory of “pre-determined harmony” with a new fulcrum between
sensations and ideas, a point from which the transcendental constraints on cognition are
articulated, where their architectonics can be considered undogmatically. In order for a
concept to take hold in the mind and distill experience into its universality, it needed to be
supported by the a priori constitution of space and time, without which there would be
no possibility of conceiving. This invariant background needed to be given full credence
and priority. We had no access to things in themselves, and thus avoiding dogma meant
rebuilding knowledge on the stable structure provided by such invariants of experience,
these ultimate conditions of cognition. Knowledge needed to be held in the intuition.
Concepts needed to be imported from Leibniz’s ratiocinator back into the mind. When they
are outside, in the oracle, in the machine’s behavior, or just sitting latent in the symbols,
they are not being understood. Thus, for Kant, irrational numbers are not concepts, as any
iterative analytic sequence of operations that takes eternity to finish cannot be held within
the finitude of the mind.
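The worry about analyses that take eternity to finish can be given a small, purely illustrative sketch (my example, not Kant's or Leibniz's): Newton's iteration for the square root of 2 yields an infinite stream of ever finer approximations, of which only a finite prefix can ever be exteriorized and inspected.

```haskell
-- A toy illustration (assumed example, not from the text): Newton's iteration
-- for the square root of 2 as an infinite stream of ever-better approximations.
sqrt2Approximations :: [Double]
sqrt2Approximations = iterate (\x -> (x + 2 / x) / 2) 1

main :: IO ()
main =
  -- Only a finite prefix can ever be written down and checked;
  -- the analysis itself never bottoms out in a last, exact term.
  print (take 5 sqrt2Approximations)
```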
Already this series of philosophical gestures exposed the gap that would need to be
fleshed out. Leibniz can only hope that God will ensure the connection between the concept
and its external reconstruction, and Kant can only be certain of that which is immediately
perceived within an internalized conceptual tautology, giving rise to subjective idealism and
“correlationism”. We are left with an untenable choice between an unfounded Leibnizian
optimism and a depressing Kantian claustrophobia.
3. Rules and Demonstration
Stiegler’s (see 1996, 1998, 2001) philosophy of technology, building on the work of
Leroi-Gourhan, Derrida, and Husserl, highlights a related aporetic condition exposed in
the question of technological exteriorization. Knowledge is only produced by exteriorizing
our memories, by progressively outsourcing our faculties into external supports. We
invent symbols and grammars, tools and machines, each exteriorization becoming an
environmental effect and conditioning future experiences from the outside, from the past,
skewing our desires and drives, modifying our priorities. We are retroactively conditioned
by all of our exteriorizations. We count dashes in the sand, we draw lines between them, we
cut the figure in half, each gesture looping back into our minds, reconfiguring our souls. In
Plato’s Meno, Socrates demonstrates anamnesis as the consequence of such a diagramming
of our intuitions in the sand. Anamnesis, true learning, is the product of exteriorization.
Even here, a precursor of Descartes’ demon rears its head: Meno asks, how can one be sure
to have discovered the new insight and have really learned something, if one did not know
it beforehand? How does following these geometric rules reveal “new” knowledge, if it
was all contained in the rules to begin with?
The late Wittgenstein (Philosophical Investigations ([1953] 2009) and Remarks on the
Foundations of Mathematics ([1956] 1967)) circled this same issue. What is the “plus value”
of the proof or of the demonstration? What does it give us? On the face of it, it only seems
to provide a vague sense of assurance. The proof’s demonstration causes a change in
dispositions, it relaxes something in us, such that “now I can go on”, a feeling of renewed
confidence in how the rules are being applied. It updates our conceptual landscape: it is a
point of articulation in our behaviors and our horizon of expectations, a cusp that sends
us off into the flow of practice until we encounter the next hiccup. “The proof changes
the grammar of our language, changes our concepts. It makes new connexions and it
creates the concept of these connexions” (RFM III: §31). Other than that, it is just a series of
symbols or diagrams on a page, which we “follow” through a sequence of operations from
one presentation to another. The proof is just a series of transformations, an expression of
what the rules of construction allow. When we follow the demonstration to its conclusion,
when the formulas wrap around to the beginning and provide us with an equal sign, the
effect that these transformations have on our future actions, and the feeling that “now I
can go on”, always remain outside the deductive demonstration.
Wittgenstein’s issues had to do with the gap between theory and practice, or between
our models and rules and their supposed execution in real life. From a series of obscure
observations on the gap between saying and doing, between the rules we declare we
follow and the actual actions we perform, Wittgenstein disentangles a strange web of
consequences. There is an echo of Meno’s paradox: “What do we learn when we see
the new proof—apart from the proposition, which we already know anyhow? Do we
learn something that cannot be expressed in a mathematical proposition?” (Wittgenstein
[1956] 1967, §60). There is also an echo of Descartes’ Demon: in the moment of applying
the rule of addition, how do I know I am not following some other rule, like that of
“quaddition” (Kripke [1982] 2000)? And there is a reflection of Hume’s problem. When
we teach a child how to multiply, we correct them whenever they make a mistake, and
continue to do so until we feel they have grasped the rule. But even then, we never come
to expect the student’s future application of the rule to be absolutely flawless. Even the best
pupil will go on to make mistakes. So where is the cut-off point? At which point does the
pupil’s frequency of error transition from being an indication of their ignorance of the rule,
to an indication of their momentary misapplication of the rule? This formulation mimics
Hume’s ([1748] 1921) skeptical arguments about causality, where just because we have
always seen a certain stable array of actions causing reactions or effects does not mean that
such stable behavior will always be observed in the future. The regularity of causality is
not confirmed by experience, it can only be induced. Hume will deny that there is anything
empirical that can ever confirm the existence of causation, yet will also wager that it is best
to go along with it, to trust our implicit experience of causation and treat the world as-if
causation holds.
Wittgenstein will suggest that the skeptical question itself is malformed, a corruption
of language’s proper use in language games. For him, the problem of rule following implies
a category error, a discontinuous leap into a new domain which does not carry forward
the consequences of the first, or “preserve” the meaning of one in the other. Between the
intuitive feeling of being confident about the rule, and its actual application, execution, or
demonstration, Wittgenstein finds a seemingly unbridgeable chasm. “What one means by
‘intuition’”, Wittgenstein says, “is that one knows something immediately which others
only know after long experience or after calculation.” (Wittgenstein et al. [1939] 1976, p. 30)
But knowing by intuition tells us nothing. Our intuitions are constantly being empirically
overturned, and indeed that is the whole point of mechanical proof in Leibniz, or of the
scientific experiment’s capacity of falsification, in Popper. The empirical should be the final
test. The problem is that, “Whatever is going to seem right to me is right. And that only
means that here we can’t talk about ‘right’.” (Wittgenstein [1953] 2009, p. 92). So even
when the machine halts on a given yes or no response, I am still in the position of having
a choice of assenting or dissenting from the result. Or as in the Duhem-Quine thesis, if
the scientific experiment falsifies the theory, I am always free to choose which parts of
the theory I modify in order to make it fit with the empirical evidence. It is as though the
rules only exist in theory: they fall apart in their application, for their supposed enjoining of
actions does not directly translate into the actions themselves, their consequences being
beyond the purview of the deductive system.
Recall that the whole point of Leibniz’s plea for exteriorization (calculemus) is that
since we cannot trust our intuitions, we should process them externally, we should construct
the knowledge outside of ourselves where we can proof check every step. From Leibniz’s
rational point of view, we needed to account for the proper exteriorization of our “internal”
assumptions, while from the point of view of Kant, by then juggling with Hume’s incisive
skepticism about causality and induction, the problem was that of accounting for their
proper re-internalization into concepts that stood for themselves. Kant found the nexus
somewhere in the middle, in the invariants of cognition. But the later Wittgenstein can
be taken to say that even the relativized a priori never really overcomes the demon. The
concepts held in intuition and their rules of construction and decomposition might be just
as fleeting as bare empirical experiences, and our supposedly timeless “concrete universals”
might actually be sensitive to time and context, for who knows whether the axioms are not
being changed every time we conjure the mathematical object?
Paraphrasing Lotze, Cassirer warned that the practice of thought is “never satisfied to
advance to the universal concept by neglecting the particular properties without retaining an
equivalent for them” (Cassirer [1923] 2015, p. 21). Indeed, the problem of rule following is the
problem of structure preservation, that is, of the tracking of equivalences through changes of
context, as it pertains specifically to transitions between concepts and percepts, theory and
practice, intuitions and exteriorizations.
4. Artificial Equivalences
When, in 1832, Évariste Galois frantically scribbled down his theory of symmetries on
the final night of his short life, he inaugurated a new way of abstracting from identities.
A thing’s existence could begin to be rigorously conceived as equivalent to the sum of
operations or interventions that would leave the thing indiscernibly different. “The ‘nature’
or ‘essence’ of a figure is defined in terms of the operations which may be said to generate the
figure.” (Cassirer 1944, p. 24) Cayley took an important step further. His work would be interpreted
as showing (though not explicitly) that the group of symmetries allowing such “invariance
under transformation” can again be generalized: what would become known as Cayley’s
theorem says that every group is isomorphic to a subgroup of the symmetric group on its own elements, allowing us to
imagine that each identity, and the this-ness of a thing, is owed to a particular way of being
embedded within higher-order symmetries, implying a hierarchy of abstractions.
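Stated in modern notation (my gloss on the informal phrasing above), Cayley's observation is that every group embeds into the permutations of its own underlying set:

```latex
% Cayley's theorem: every group G is isomorphic to a subgroup of Sym(G),
% via the map sending each element to the permutation it induces by left multiplication.
\[
  \iota : G \hookrightarrow \mathrm{Sym}(G), \qquad \iota(g) = \bigl(x \mapsto g \cdot x\bigr),
\]
% so the "identity" of g is recoverable from the operation it performs.
```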
By 1872, Klein (2008) was motivated to apply this logic to the newly discovered
non-Euclidean geometries. In what became known as the Erlangen program, Klein used
group theory to unify these otherwise distinct geometries, and order them from Euclidean,
to affine, to projective in terms of a hierarchy of generality. As Cassirer puts it, through the
geometry of affine transformations, “we can no longer maintain the distinction between
‘circle’ and ‘ellipse’” and in projective geometry, “an ellipse can be transformed into a
parabola or a hyperbola, such that, in the final analysis, there is but one single conic.”
(Cassirer 1944, p. 9). So group theory allowed us to see all geometries as just axiomatically
stipulated collections of permitted transformations, and in particular that Euclidean ge-
ometry was just a “special case” of affine geometry, itself but a special case of projective
geometry, building an intuition for the idea that physical space itself could be warped such
that things appear to be Euclidean at local scales while appearing curved at larger scales.
This intuition would eventually contribute to the development of Einstein’s general theory
of relativity.
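As a small illustrative sketch of the Erlangen intuition (the helper names are mine, purely hypothetical): an affine map carries the unit circle onto an ellipse, so that from the affine point of view the two figures count as one and the same conic.

```haskell
-- A toy sketch: under an affine transformation the unit circle becomes an ellipse,
-- so affinely they count as the same figure ("there is but one single conic").
type Point = (Double, Double)

unitCircle :: [Point]
unitCircle = [ (cos t, sin t) | k <- [0 .. 359 :: Int], let t = fromIntegral k * pi / 180 ]

-- An affine map: a linear stretch followed by a translation.
affine :: Point -> Point
affine (x, y) = (3 * x + 1, 2 * y - 1)

ellipse :: [Point]
ellipse = map affine unitCircle

main :: IO ()
main = print (take 3 ellipse)
```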
This systematic application of group theoretical thinking, this practice of taking an
object as the sum of operations that leave the object unchanged, enabled a new step in
a progressive hollowing out of substances, continuing an ancient philosophical impetus:
we abstract from appearances to get to the substances, which can only be grasped by
identifying the subtle invariants that remain throughout their transitions. For instance, in
Descartes’ famous example, the ball of wax has a certain shape and produces a certain
sound when knocked, but these change when we hold it close to the fire. Thus, in order
to really know the substance, we have to go beyond such accidental properties, and find
the substance’s invariance under transformation, its true properties. But we then develop new
capacities (intuitions, technology, experimental science) that allow us to make new kinds
of interventions (constructions, exteriorizations) and realize that, again, what we now take
to be invariant needs to be nuanced and displaced with subtler notions.
Cassirer knew to read the successful generalizations of group theory, and its enabling
of a new kind of intuition about abstract geometrical entities, as having an implicit relation
with similar revelations in psychology, and the curious tension that opposes the fleetingness
of perception and empirical science with the apparent timelessness and universality of
mathematical truths.
Perception is not a process of reflection or reproduction at all. It is a process of
objectification, the characteristic nature and tendency of which finds expression
in the formation of invariants.
(Cassirer 1944, pp. 19–20)
He realized that the concept of invariant was also used in 19th-century psychology.
The contents of the mind had hitherto vaguely been modeled in terms of an affection of
the soul by the determinations of the real, an empirical inscription of the outside on the
inside, mere “reaction to external stimulation”. The previous theories had “rested on the
‘constancy hypothesis,’ i.e., the hypothesis of immediate correspondence between ‘stimulus’
and ‘sensation’.” (Cassirer 1944, p. 12) In Hobbes, anticipating Newton, action and reaction
were “related in no other way than strict equality”. But the main contribution of Helmholtz,
Hering, Katz, and their generation, according to Cassirer, was to have slowly abandoned
this assumption, and detrivialized the relation of equality between the object perceived and
the act of perception. As Helmholtz had suggested, we do not see what is “really there”,
but what deviates from our expectations, which we are constantly renormalizing, and
which are in a sense artificial interventions. “It henceforth appears that it is dissimilarity
rather than similarity to the objective stimulus which characterizes perceptual content.”
(Cassirer 1944, p. 12) No longer could we conceive of a Leibnizian parallelism between
extension and intension, between distinct and indistinct, because we now knew that we
are perpetually “being deceived” by our senses, that our perceptions in every instant are
themselves preselecting what they deliver to experience. “We do not merely “re-act” to the
stimulus, but in a certain sense act “against” it.” (Cassirer 1944, p. 13)
This was clear from the discovery of perceptual color constancy: we see a sheet of
paper as being of a constant white whether the room is brightly lit or darkened. We inter-
vene in the scene, we lock some parameters, we artificially freeze part of the environment’s
variability in order to be attentive to other changes. This active saturation of some param-
eters corresponds to a kind of selective coarse graining of the perceptual field, such that
differences fainter than a certain threshold are clamped to the limit. A subject will identify
the difference in shade in an experimental setting, and so we know the eye perceives it, but
in everyday perception those same differences are constantly being selectively glossed over.
As perception stabilizes the invariant, it is continually re-normalizing the scene, implying
that variations beyond a certain threshold are being taken as equivalent, even though the
eye sees their difference. Thus, the selection is happening before knowledge or understanding.
But worse, it happens even before sensuous intuition, before even our indistinct experiences
express themselves to the mind’s eye. It is the way perception itself is selectively synthe-
sizing the scene, constantly reconstituting the invariant, distinguishing the foreground
from the background, the action from the setting. In order for this to happen, perception is
constantly identifying or equating discernibly different things, stretching Leibniz’s account
of identity and difference; our perceptions are perpetually plucking out invariances by
relaxing the precision of their analysis, tracking blobs of invariance “as” objects, long
before we are ever conscious of it. The fact that the invariance of the sheet of paper’s
color is stabilized in perception already implies a “dissociation” between what we take
to be the actual color of the object and what we take to be just an effect of the lighting. In
other words, there is a separation maintained within perception itself between the object’s
primary and secondary qualities, and a division of essential and accidental, object and
transformation, a realization that only deepens the post-critical epistemological conundrum
as per subjectivity’s alienation from the real.
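A deliberately crude toy model of this mechanism (an assumption for illustration only, not a claim about the psychological literature) might renormalize luminance by an estimate of the illumination and then coarse-grain the result, so that the "white sheet" reads as constant under bright and dim light alike:

```haskell
-- A toy model (illustrative assumption, not an empirical claim): perception
-- renormalizes luminance by an illumination estimate and clamps sub-threshold
-- differences, yielding a constant percept across lighting conditions.
perceivedReflectance :: Double -> Double -> Double
perceivedReflectance illumination luminance =
  let reflectance = luminance / illumination          -- renormalize the scene
      threshold   = 0.05                              -- coarse-graining step
  in fromIntegral (round (reflectance / threshold) :: Integer) * threshold

main :: IO ()
main = do
  print (perceivedReflectance 1.0 0.90)  -- brightly lit room
  print (perceivedReflectance 0.5 0.45)  -- darkened room: same percept
```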
5. Functorial Intuitions
In ordinary life we have all sorts of criteria for equality. ...equal weight, equal
color, equal number, etc. Aren’t there very different criteria for equality in all
these cases?
(Wittgenstein et al. [1939] 1976, p. 50)
Since its inception in the 1940s with Eilenberg and MacLane’s (1945) introduction of the
concepts of functor, category, and natural transformation in the context of algebraic topology,
it is category theory that eventually demonstrated the extensive power of this operational
perspective on identity. Notably, they introduced category theory as an extension of Klein’s
Erlangen program, “in the sense that a geometrical space with its group of transformations
is generalized to a category with its algebra of mappings.” (Eilenberg and MacLane 1945, p.
237) Category theory stays closer to the “looseness” with which identities and equivalences
are treated in everyday practice. Category theory makes rigorous the idea that “for our
present purposes” we need only track the thing in question up to a certain degree of fidelity
that respects our current requirements.
If I just want to count the three sheep in the field, for such purpose I might only
need to hold up three fingers: the structure of their number will be “preserved” in my
finger gesture. We will say, with Frege, that there is a one-to-one correspondence between
each individual finger and each individual sheep, such that the numbers are equal. If I
want to communicate a richer idea, say that one sheep is mother to the other two, I might
draw three dots in the sand and add two arrows out from one of the dots to the other two.
We will agree that there is a correspondence, again, between the number of dots and the
number of sheep, but now also between the internal parent–child relationships of this little
family of sheep and the composition of my little diagram. This is essentially what a functor
is: a structure-preserving map. A functor maps both the objects and the morphisms (relations,
transformations, functions) between these objects from one category (or context) to another,
preserving an intended structure or order of composition. The functor is between the sheep
and my diagram: we can reveal it by drawing two new sets of arrows, first from each sheep
to each dot, and then from each “relation of motherhood” to the two original arrows I
drew. We begin to see that each taking of one thing for another, each consideration of a
thing as if it were another thing, each act of substitution, is intrinsically functorial. Note
that this procedure is “substrate independent”—my diagram in the sand does not see the
sheep, but captures only the information I intended to communicate—it need only preserve
their composition, under the assumption that “up to a unique isomorphism” defining the
invertible transformations between the two contexts, the representation will have the same
effect, or can be used as a proxy for the real thing.
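Anticipating the functional-programming connection discussed below, the "structure preservation" at stake can be made concrete in a minimal sketch using Haskell's Functor class (only loosely related to the sheep example, and assuming nothing beyond the standard library): mapping the composite of two arrows is the same as composing the mapped arrows, and mapping the identity changes nothing.

```haskell
-- A minimal sketch: Haskell's list functor. fmap sends arrows between
-- "object" types to arrows between list types, preserving identity and composition.
import Data.Char (toUpper)

f :: Char -> Char
f = toUpper

g :: Char -> Int
g = fromEnum

main :: IO ()
main = do
  let xs = "sheep"
  -- Functor law 1: mapping the identity arrow is the identity.
  print (fmap id xs == xs)
  -- Functor law 2: mapping a composite equals composing the mapped arrows.
  print (fmap (g . f) xs == (fmap g . fmap f) xs)
```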
In functional programming, which incorporates the ideas of category theory, the
metaphor of the assembly line is sometimes used to think about functors: the production
line goes from beginning to end through several steps where materials are transformed
and assembled. If we make changes to the factory, if we bring in new machines, or decide
to group some steps together or break them into smaller ones, or if we reorganize which
steps happen in series and which happen in parallel, the constraints on these operational
changes will be functorial: we will want the same materials to be transformed into the same
end-products, and thus, for the new factory to be equivalent, for all intents and purposes,
to the old factory. Interestingly, Pierre Lévy describes the process of virtualization using a
similar example:
“Let’s look at the very contemporary example of the virtualization of a company.
The conventional organization gathers its employees in one building or a group
of buildings. Each employee occupies a precisely defined physical position, and
his schedule indicates the hours he will work. A virtual corporation, on the
other hand, makes extensive use of telecommuting. In place of the physical
presence of its employees in a single location, it substitutes their participation
in an electronic communications network and the use of software resources that
promote cooperation.”
(Lévy 1998)
Echoing Bergson, Lévy maintains that the virtual “tends” toward actualization, so
that the production of a virtuality from an actuality is, in a sense, where the real work
happens, where the actual is problematized. This may be a leftover of Bergson’s insistence on
the two tendencies of matter and life. Russell noted that for Bergson, “The whole universe
is the clash and conflict of two opposite motions: life, which climbs upward, and matter,
which falls downward.” (Russell 1912) The question is whether this apparent asymmetry
between the powers of matter and of life, and these putative tendencies of actualization
and of virtualization are fundamental or whether they are contingent effects of the types of
cognizers humans happen to be, being that we are thermodynamically (antientropically)
oriented within a material cosmos. For Bergson, of course, this polarity is fundamental.
But, be that as it may, we do not need the notion of tendency or force if we think of the
“transformations” between the virtual and actual as functors. Lévy’s “virtual” company,
by going online, has maintained something of the structure of the previous “non-virtual”
company. Functors are the structure preserving maps between such unproblematic or
trivial actualities, and the non-trivial structures, patterns, and functional relations they can
be unfolded into.
Category theory can be thought of as a means of making practical acts of taking
one thing for another rigorous. It is the mathematics of squinting and seeing “family
resemblances” between otherwise disparate things. This functorial thinking, where what
something is can be rigorously considered in terms of the thing’s isomorphic relationships
with other things, makes mathematically meaningful the idea that when we consider
something, we are always already pulling it out from one context and presenting it in
another, and thus from this point of view, the thing in question (its identity) might as well
be considered as being equivalent to the isomorphisms that allow us to change the object
into something else and recover it later on to varying degrees of fidelity. As Wittgenstein
said, “the meaning of words lies in their use” and it is important to note how taking
identities as collections of isomorphisms is sometimes called “abuse of notation”, which is
common practice among working mathematicians, even if strictly speaking it is not allowed
by standard set theoretic foundations. Category theory was just getting off the ground
when Wittgenstein was concerned about mathematical foundations, and there is a sense in
which, through decades of subsequent development, it will have begun to respond to some
of his worries.
Defending a position in the tradition of Cassirer, Rodin (2014) recently argues that
thinking of category theory as a mere extension of the Erlangen program, and a subsequent
step into formal abstraction, misses the point. Rodin wants to defend the importance of the
intuition from the pure formalism of Hilbert and the structuralism of Dieudonné and the
Bourbaki authors.1 “The switch from the structuralist thinking in terms of invariance to
the new categorical thinking in terms of covariance and contravariance (i.e., functoriality)
signifies a decisive break with the structuralist viewpoint . . . ” (Rodin 2014, p. 255) He
argues that category theory is not just another, yet more subtle, kind of structuralism,
and that its true innovation comes with Lawvere’s work on “functorial semantics” and
his development of a new categorical “foundation” for mathematics (Lawvere 1963). I
put “foundation” in scare quotes here, because the intuition changes from a theory that
is “founded” or “grounded” in bags of dots and their one-to-one correspondences, to a
top-down “aerial” perspective on mathematics. Category theory “overcomes” set theory;
mathematics effectively swallows up its abstract set-theoretical foundations, and gains
a bird’s-eye view on them. Overcoming set-theoretical foundations means overcoming
our usual recourse to thinking in terms of ultimate atomic entities (or substances) where
existence bottoms out, because doing so once we reach this level of generality would
restrict us to the category of “small categories”, those where all arrows and objects form
sets. Analogously, type theoretical foundations would restrict us to Cartesian closed
categories. Rather, we have to work from the top down, and imagine a
hypothetical category CAT of all categories as an intended model of [elementary
theory] ET and then add to ET new axioms which distinguish CAT between
other categories; then pick up from CAT an arbitrary object A (i.e., an arbitrary
category) and finally specify A as a category by internal means of CAT (stipulating
additional properties of CAT when needed).
(Rodin 2014, p. 106)
1 His argument also applies to more recent structuralisms like James Ladyman’s Ontic Structural Realism, which denies the existence of objects, and develops a rather compelling realist theory of structure, where there are only relations, no relata (Ladyman et al. 2009). Be that as it may, I do believe OSR is compatible with the operational notion of virtuality.
Rodin argues that this overcoming of set theory, which the Bourbaki authors had
hoped to achieve through the structuralist program, was actually only achieved through a
non-structural modification of our intuitions. He stresses that it was the taking of equalities
for isomorphisms, rather than the taking of isomorphisms for equalities, that really allowed
for Lawvere’s big leap into the top-down view offered by the category of categories. For it
allows us to make sense of some of the “similarity” we informally observe between different
domains of mathematics, and indeed between mathematics and the empirical world or
the psychological domain. Indeed, with Cassirer, Rodin wants to conceive of mathematics
as a “part of physics”, somewhere on the spectrum between the purely ideal and the
empirical. Categories are not structures, he claims, they do not deal with invariances.
Functors and their “natural equivalences” (transformations between functors that have a
“dual”, in the reverse direction) are not “invariants” in the old sense. The tendency to view
functoriality as a generalization of invariance, for Rodin, is symptomatic of a “conceptual
inertia” possibly preventing us from doing full justice to the discovery of functoriality, and
its adjustment of intuition. The new epistemic criterion introduced in functorial thinking
does not in his view reduce to the Platonic or structuralist criterion according to which
only invariant features are epistemically significant, while all the variable features are
accidental and irrelevant. Rather than just tracking invariances, the functor tracks ways in
which things can be taken for other things, it maps out different modalities of the as-if, it
develops a diagrammatical logic for modeling ways of selecting, indicating, and picking out,
or grouping, fusing and gluing things together, and thus of “moving” from one universe of
discourse or thought or practice to another. The “hollowing out of substances” is replaced
with an intuition for something like a cartography of worlding, a mapping out of virtual
transitions between contexts, between worlds, between subjective experiences or objective
constructions. It seems to detrivialize the opposition between Leibniz’s wishful realism and
Kant’s claustrophobic finitude: it provides conceptual, formal, and geometric tools that
allow a finer analysis, as it were, of what is really going on in the transition between theory
and practice, or between perception and cognition, than ever could the comparably much
blunter metaphysical tools of process, time, and becoming. The functors are not trivial
transitions from one context to the other: they divide into covariant and contravariant,
such that their adjunctions do not recover what we had hitherto come to conceive as strict
identities. Passing from the left to the right and back again does not necessarily ensure that
we have recovered the original entity, as is the case with the one-to-one correspondence.
Rather, as in a game of Chinese whispers, each passage through the circuit changes the
message. It suggests, furthermore, that there are no ultimate invariants at the bottom of the
real, but rather axiomatizes the inherent incompletion and relativity of both substance and
structure. It is, in this way, more “honest” about cognition: it takes as a given that whatever
is right for me is right, that identities and equalities are always pragmatic articulations rather
than pure ontological entities. Thus, Rodin argues that category theory’s real intuitive leap
beyond set theory actually makes it more concrete. Far from being “abstract nonsense”,
as it sometimes is accused, we might more accurately say that category theory is concrete
nonsense: it makes the “nonsense” between regimes of intelligibility concrete. It is not that
we can always treat isomorphisms as equalities, but rather that all equalities are always
already only equivalent “up to isomorphism”.
6. Isomorphism and Computability
[For Duns Scotus] the understanding . . . objectively apprehends actually distinct
forms which yet, as such, together make up a single identical subject. . . . Formal
distinction is definitely a real distinction, expressing as it does the different layers
of reality that form or constitute a being. . . . Real and yet not numerical, such is
the status of formal distinction.
(Deleuze [1968] 1990)
This history of ideas has been launched into new territories by the late Vladimir
Voevodsky, and his influential introduction of univalent foundations. Voevodsky’s project
imports a category theoretic intuition, an influence he gained in his early reading of
Grothendieck’s (1997) Esquisse d’un programme (which was written in 1984 and circulated
in the mathematical community long before its publication), but also combines it with a
very Leibnizian quest for the mechanical verifiability of mathematical proofs.
Loosely, the proposed univalence axiom, (A = B) ≃ (A ≃ B), stipulates that identity
is equivalent to equivalence (or isomorphic to isomorphism). It comes packaged as the
centerpiece of a new program for mathematical foundations that axiomatizes the idea that
mathematical objects derive their identity deferentially from higher level isomorphic (in this
case homotopic) equivalences. Voevodsky’s homotopy type theory builds a bridge between
logic (computer science, dependent type systems) and geometry (topology), such that each
logical type is equivalent up to unique isomorphism to a corresponding path in homotopic
space, and is related to other types through nested cascades of type dependencies, described
geometrically as a hierarchy of homotopic fibrations from one path to another. The gesture
here can be thought of as “taming” the wild jungles of category theory and establishing a
coded hierarchical order of inclusion more suitable for launching complex mathematical
research programs, where it is necessary to keep track of the underlying logic and the
higher-order symmetries at each step of the construction. If category theory swallowed
up its set-theoretical foundations, where all mathematics were built up from the empty set,
univalent foundations now spits them out again as higher-dimensional groupoids (“special
case” categories, where every morphism is an isomorphism), and mathematics is then
“built down” from a hierarchy of dependent types corresponding to more or less complex
paths in homotopy space. Thus, in homotopy type theory, truth becomes a special case of
logic, logic a special case of set theory, set theory a special case of category theory, category
theory a special case of higher categories, and so on into the firmament.
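In the now-standard notation of homotopy type theory (my summary of the usual formulation, not a quotation from Voevodsky), univalence asserts that for types A and B in a universe U, the canonical map from identifications to equivalences is itself an equivalence:

```latex
% Univalence (standard formulation): the canonical map
%   idtoeqv : (A =_U B) -> (A ~ B)
% is an equivalence, i.e. identity of types coincides with equivalence of types.
\[
  \mathsf{idtoeqv} : (A =_{\mathcal{U}} B) \longrightarrow (A \simeq B)
  \quad\text{is an equivalence, so that}\quad
  (A =_{\mathcal{U}} B) \;\simeq\; (A \simeq B).
\]
```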
Type theory first emerged as a somewhat ad hoc correction of arithmetic foundations
to avoid the Russell Paradox, which Russell famously discovered in Frege’s set-theoretic
approach. After Church and Turing, where the paradox reappears as non-terminating
programs and algorithms that do not halt on a specific result, it became obvious that a
special “typed” form of computational logic was needed, a calculus designed to avoid
these paradoxical, non-sensical, non-terminating programs. There is an aura around this
historical development, which Voevodsky himself tried to dispel, that naturally demotes
computability theory in comparison to “pure mathematics”. For the intuition after Turing
is that the oracular exteriorization Leibniz had imagined has been proven impossible.
Avoiding uncomputable programs seemed to imply compromising the full power of
mathematics, thus priming the intuition to think of computability as being more of an
engineering problem than one of pure mathematics. From the point of view habituated
to imagining all of mathematics as being reducible to ZFC and to vague intuitions about
how sets of sets behave, it looks as though incompleteness implies that we can avoid
non-terminating programs only by restricting ourselves to a small region of “computable”
mathematics, an impure mathematics.
This computable, applied, subset of pure mathematics has been found to have a
deep connection with intuitionistic logic. In intuitionistic logic (originating in the work of
Brouwer, Heyting, and Kolmogorov), a proposition can be true, in which case its truth can
be presented in the form of a proof, or absurd, which just means it is “empty” of proofs,
it has no terms. Truth is just the condition of having proofs, the idea being that a thing is
the collection of ways it can be constructed or presented, and a thing that does not exist,
cannot be presented. The Curry–Howard correspondence formalizes how this idea applies
to both logic (computer science) and mathematics. Propositions are types. Computable
programs are proofs. As Brouwer argued, contra Hilbert, the truth of a proposition is not
ensured by showing that its being false would lead to a contradiction: we must be able
to construct a positive proof, we must realize it, make it manifest, rather than appeal
to its absurd “opposite”. Thus, an arrow is drawn from the absurd to the unit type. An
asymmetry is written into the logic of computability such that the entire apparatus descends
from an original insistence that the paths compose and that the functions terminate, that
the constructions be realized, or actualized. It is this idea that homotopy type theory
expands on: in addition to having this dual interpretation of types in terms of a family
of programs in computer science or formal proofs in mathematics, it provides a third,
geometric interpretation. Each “computable function” can be modeled topologically as a
continuous path in space. We have hence returned here to Duns Scotus’s use of the concept
of virtuality. Si esse depends on quid est: a thing exists, or is “true”, if and only if it has
demonstrable properties, or exemplifying attributes. Univocity was always a question of
the structural correspondences between types, terms, and instances, and the virtual was a
primitive notion of structure-preserving map between objects and their attributes, subjects
and their predicates, potential operations and their provisional actualizations.
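A minimal sketch in Haskell (chosen here purely as an illustration; it is not the article’s notation) shows the correspondence at its most elementary: the proposition is the type, the terminating program inhabiting it is the proof, and the absurd proposition is the empty type, out of which only the ex falso arrow leads.

    import Data.Void (Void, absurd)

    -- Propositions as types: a proof of "A and B implies B and A"
    -- is a terminating program that rearranges a pair of witnesses.
    conjSwap :: (a, b) -> (b, a)
    conjSwap (x, y) = (y, x)

    -- The absurd proposition is the empty type Void: it has no terms,
    -- hence no proofs. Its arrow into any other type (the unit type
    -- included) is ex falso quodlibet.
    exFalso :: Void -> a
    exFalso = absurd

Homotopy type theory then adds the third, geometric reading just mentioned: terms become points, and proofs of equality become paths between them.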
Recall that part of Wittgenstein’s unease had to do with that curious gap between
formal theory and the real-world practice or application. In what way do the rules of the
deductive system relate to a possible real-world event, quantity, quality, or behavior? There
is no “equal sign” between the material goings-on (say, the machine’s gears crunching
along) and the symbolic formalization of an operational rule. Voevodsky is concerned
with a similar gap. In first-order logic, all the special characters are ultimately forced into
a relation with some natural language equivalent, which we are supposed to intuitively
grasp. We say that ∀ means “for all”. We say that ∃ means “there exists”. But the meaning
of “means” in these cases is outside of the deductive system in question. The problem is
that this invisible equal sign between theory and practice seems to only exist in practice,
since no theoretical proxy ever measures up to the empirical.² As Wittgenstein notes, the
sentence “‘We can construct a pentagon’ is a proposition of physics. It is not a mathematical
proposition but an experiential one.” (Wittgenstein et al. [1939] 1976, p. 49) Voevodsky
reasons similarly. In a documented discussion following his 2010 IAS lecture “What if
Current Foundations of Mathematics are Inconsistent?”, Voevodsky admits to sharing this
conviction: “There can be no inconsistency in experimental science. There can only be the
result of an experiment . . . I definitely consider material reality as the absolute judge of
truth . . . ” (Voevodsky 2010) A scientific experiment always gives a result, a yes or a no
response, as Wheeler (1999) put it.
² Voevodsky supposes that any foundation for mathematics should have the following three components: first, a formal deduction system; second, an
informal aspect that provides a natural language equivalent that is “intuitively comprehensible to humans”; and third, in the reverse direction, a
“structure that enables humans to encode mathematical ideas” into this framework (Voevodsky 2014). In ZFC, the first component is built on top of
predicate logic, supplemented with a layer of patchwork axiomatics. Its second component, according to Voevodsky, is an implicit assumption that
humans have the “ability to intuitively comprehend hierarchies”. Its third component, its main advantage, is that it provides an intuitive way of
encoding mathematical objects as sets.
Hence Voevodsky’s move. What if instead of assuming that mathematics is incomplete
in virtue of the fact that we just know that it is consistent, we rather submitted ourselves
to the possibility that it is our current intuitive understanding of foundations that is
inconsistent? Perhaps we do not “just know” that first-order arithmetic is consistent,
perhaps we do not “just know” how infinite hierarchies of sets behave. Instead, let us
replace this assumption with the functorial intuition, the intuition of what it means to
take something for another, to consider something “for our current purposes”, or “up
to isomorphism”. In order to transform what appears to be just a restricted computable
subset of mathematics (the proof-checkable part) into a foundation for all of mathematics,
thereby bringing pure math closer to applied math, Voevodsky jettisons the distinction
between strict equality and weak equivalence. Univalence means that isomorphic structures
can be formally identified, not just “in practice” or by “abuse of notation”. Univalence
“is about expanding the notion of identity so as to coincide with the (unchanged) notion
of equivalence.” (The Univalent Foundations Program 2013, p. 5) Turning mathematical
foundations inside out in this way means admitting within mathematics the relativity of
even the most clear and distinct logical deduction.
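Stated in the notation of the cited HoTT book, the univalence axiom says, roughly, that for types A and B in a universe U the canonical map from identifications to equivalences,

    idtoeqv : (A =_U B) → (A ≃ B),

is itself an equivalence, so that (A =_U B) ≃ (A ≃ B); identifying isomorphic structures is then no longer an abuse of notation but something the foundation itself licenses.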
7. Operational Virtuality
If mathematics must ultimately be confronted with the real, with empirical becoming, if it
must make a difference in this world rather than in some other possible world, does this
not imply that the so-called “universal truths” of mathematics are just as temporary as the
flow of experience? This Voevodsky tacitly admits:
Mathematics has been historically kind of static. If something has been proved
it has been proved forever. One can speculate about the possibility of a kind of
“dynamic” mathematics in that sense. It is very hard to imagine at this point . . . (Voevodsky 2010)
The admission that math may be just as impermanent as experimental truth evokes an
openness to a kind of dialectical materialism, no doubt part of Voevodsky’s education. But it
could equally be said to echo American pragmatism, where, in William James’ infamous
formulation, truth is understood as the “cash value” of a proposition: something is true if
it can effectively be cashed in and make a real practical difference. For Bergson, of course,
the progressive geometrization of reality could only serve to mask the truth of becoming.
This natural mathematics is only the rigid unconscious skeleton beneath our
conscious supple habit of linking the same causes to the same effects; and the
usual object of this habit is to guide actions inspired by intentions, or, what comes
to the same, to direct movements combined with a view to reproducing a pattern.
(Bergson and Mitchell [1911] 1944, pp. 50–51)
Logical and geometric concepts only created an illusory “intelligible world”, Bergson
thought. They “are not, indeed, the perception itself of things, but the representation of
the act by which the intellect is fixed on them.” (Bergson and Mitchell [1911] 1944, p. 177)
The task of the philosopher, he thought, in the face of the geometrization of the world, was
to uphold the distinction between “real and symbolic” (Bergson and Dingle [1922] 1965,
p. 153). So Bergson ends up defending a form of equivocity: the virtual is here a means
to protect an absolute difference between the created and the uncreated, the real and the
imaginary, the real physicist making a measurement and the hypothetical observer in an
imagined spatio-temporal frame of reference. But indeed, if Einstein’s view of time won
out over Bergson’s, it is only because it could be cashed in for real effects and predictions; it
virtually included pragmatic constructions; it “made a difference”. It established, in other
words, a virtual link between the model and its external verification in practice. Practice,
and its deferential character—its Heideggerian character, let’s say, the preorder of tool-being,
where each task delivers itself to the next through an endless chain—is the only possible
site for virtuality: all things derive their identity from potential operations, through a
hierarchy of transcendental constraints. Virtuality in this way is the yet unactualized future
of realizable practice, ensuring that each actualization is an expression of the virtual, that
each subjective experience is an element of the transcendental constraints on cognition,
and that things are nothing more nor less than the virtual operations that allow their
constructions. Univalence echoes univocity: if something exists, it virtually contains all
the ways we may come to know it. A thing is equivalent to the operations required for
actualizing it. Thus, if a thing is realized, or actualized, it unfolds a proof of existence from
an equivalence class of virtual operations.
It is tempting to speculate that with such gestures, such admissions, mathematics
comes close to appeasing some of Wittgenstein’s worries, and perhaps even some of
Bergson’s objections. The synthetic process in mathematics and logic in the last century
has begun to build a very subtle bridge between the world of intuitions and the world of
exteriorized mechanical provability, and between the worlds of formal theory and practice.
This bridge, it seems to me, is achieved through a most interesting compromise. Math has
admitted into its formalism the kind of relativism with which equivalences are trafficked
in everyday practice, and written this relativity into law, as a fundamental rule. The compromise
consists in admitting that whenever we pick something out from the rest, whenever we select or point
to something, there is always already something like an invisible equal sign between the
thing we are talking about and our action of presenting it, between our intuitions and our
hypomnemonic exteriorizations. There is a kind of honesty in this, an admission that what
is right for me is right, and that truth or identity or equality is always a matter of context. But
the upshot is that we learn that this does not necessarily mean, as Wittgenstein thought,
that “here we can’t talk about ‘right’”. For indeed, this admission is precisely what allows
us to recuperate rigorous ways of addressing validity: we can legitimately “talk about
right” up to isomorphism.
Would Bergson, for his part, have acquiesced to such an operational account of virtu-
ality? Or would he have fought it on the grounds that it was too bound up with methods
of effective realization, thereby immediately, in his view, cutting us off from an irreducible
essence and blinding us to the truth of becoming? Will we not have begun, however, to
recover something of his vision? Cassirer rightly notes that even in Bergson, “a spatial
intuition and schema seem to have slipped unnoticed into his analysis of time” (Cassirer
[1923] 1985). His cones, shells, and sheaves now reappear as fibrations, functors, natu-
ral transformations, and ever more rigorous diagrammatic tools for modeling functional
relationships between perceptions and conceptions of the world at different scales and
levels of description. Time, process, and becoming are detrivialized geometrically through
a rigorous demystification of the cone, and the rules of logic and mathematics are here
no longer some rigid grid the intellect imposes on reality, so much as continual compro-
mises between the virtual and the actual, between the intuition and the exteriorization. But
contra Bergson, this progress was achieved not by defending an absolute difference in
kind between the map and the territory, or the relations and the relata, but by dropping
this equivocal criterion altogether. It is achieved not simply by dogmatically rejecting
the dualist essentialism, but through a careful, patient, and always explicitly provisional
“asymptotic” monism, a unificatory and conciliatory attitude, rather than a reactionary
empirical prejudice. We see now that continuity, once held up in opposition to the intellect, was nothing more than
the preservation of structure through transformation, and that it is therefore not opposed
to logical ratiocination. Life is not the ancient enemy of matter. The virtual is not opposed
to mechanism. And the contraction of experience is a structure-preserving operation, quite
possibly owing to an evolutionarily conditioned propensity to link the before and the after,
to find a terminus for our actions, and to respect the rules of perpetual composability.
Ex nihilo nihil fit: the same gesture that establishes intelligibility and coherence also
prohibits the summoning of the void. And continuity is the preservation of structure from
the initial to the terminal state, which is precisely why a computable function is executable.
The non-terminating program, for its part, is the site of a catastrophe, where the coherence
is broken, the continuity is cut, and ex falso quodlibet takes hold: an inevitable leap from absurdity to
fixity, as a new paradigm or category or world is reified. Continuity is (re)established through a
struggle that can only take the form of a demand for perpetual composability, a consequence of
our submission to the frictions of the real. Each new expression comes ready-built with its
deferral to the next. There is no continuity, no coherence, no common sense, without this
process of turning the intuitions inside out through exteriorization, subjecting the doxa to
the trials of the empirical. The drama of the actual and virtual could just be a by-product,
a side effect of the most general constraints biology imposes on living things: the paths
must compose, the structures must be preserved, all actions must result in a change, and
make a difference in practice. All this in the absence of certitude: we do not know what
our bodies can do, what our systems are capable of, the world always only appearing in
hindsight. Matter and causality, objective stability and intelligibility are indeed grasped
through desperate constructions, contingent and ad hoc defaults to provisional actualities.
But the real itself is unfinished. It perhaps exists only in the making, that is, in whatever
survives the transition between actualities, between models of objectivity, between scientific
paradigms, but also, importantly, between individual prejudices or biased perspectives.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The author declares no conflict of interest.
References
Baldwin, James Mark. 1901. Dictionary of Philosophy and Psychology; Including Many of the Principal Conceptions of Ethics, Logic, Aesthetics,
Philosophy of Religion, Mental Pathology, Anthropology, Biology, Neurology, Physiology, Economics, Political and Social Philosophy,
Philology, Physical Science, and Education; and Giving a Terminology in English, French, German, and Italian. New York: Macmillan,
Available online: http://archive.org/details/philopsych02balduoft (accessed on 3 October 2020).
Bergson, Henri, and Arthur Mitchell. 1944. Creative Evolution. New York: Random House. First published in 1911.
Bergson, Henri, and Herbert Dingle. 1965. Duration and Simultaneity: With Reference to Einstein’s Theory, 1st ed. Translated by Leon
Jacobson. Indianapolis: Bobbs-Merrill. First published in 1922.
Cassirer, Ernst. 1944. The Concept of Group and the Theory of Perception. Philosophy and Phenomenological Research. [CrossRef]
Cassirer, Ernst. 1985. The Phenomenology of Knowledge. The Philosophy of Symbolic Forms 3. Translated by Ralph Mannheim. New
Haven: Yale University Press. First published in 1923.
Cassirer, Ernst. 2015. Substance and Function and Einstein’s Theory of Relativity. Translated by William Curtis Swabey, and Marie Collins
Swabey. London: Forgotten Books. First published in 1923.
Deleuze, Gilles. 1990. Expressionism in Philosophy: Spinoza. New York: Zone Books, Cambridge: MIT Press. First published in 1968.
Deleuze, Gilles. 1994. Difference and Repetition. New York: Columbia University Press. First published in 1968.
Descartes, René. 1998. Discourse on Method and Meditations on First Philosophy, 4th ed. Translated by Donald A. Cress. Indianapolis:
Hackett Pub. First published in 1641.
Eilenberg, Samuel, and Saunders MacLane. 1945. General Theory of Natural Equivalences. Transactions of the American Mathematical
Society 58: 231–94. [CrossRef]
Grothendieck, Alexandre. 1997. Esquisse d’un Programme. In Geometric Galois Actions. London Mathematical Society Lecture Note Series 242.
Cambridge: Cambridge University Press, vol. 1, pp. 5–48.
Hume, David. 1921. An Enquiry Concerning Human Understanding. Philosophical Essays Concerning Human Understanding. Chicago:
Open Court Pub. Co. First published in 1748. Available online: http://catalog.hathitrust.org/Record/011204388 (accessed on 11
May 2020).
Kant, Immanuel. 1996. Critique of Pure Reason: Unified Edition, 1st ed. Edited by James W. Ellington. Translated by Werner S. Pluhar.
Indianapolis: Hackett Publishing Company, Inc. First published in 1781.
Klein, Felix C. 2008. A Comparative Review of Recent Researches in Geometry. arXiv:0807.3161. [CrossRef]
Kripke, Saul A. 2000. Wittgenstein on Rules and Private Language: An Elementary Exposition. Repr. Cambridge: Harvard University Press.
First published in 1982.
Ladyman, James, Don Ross, David Spurrett, and John Collier. 2009. Every Thing Must Go: Metaphysics Naturalized, 1st ed. Oxford:
Oxford University Press.
Lawvere, F. William. 1963. Functorial semantics of algebraic theories. Proceedings of the National Academy of Sciences of the United States
of America 50: 869–72. [CrossRef] [PubMed]
Leibniz, Gottfried Wilhelm. 1989. Leibniz: Philosophical Essays, 1st ed. Translated by Roger Ariew, and Daniel Garber. Indianapolis:
Hackett Publishing Company.
Lévy, Pierre. 1998. Becoming Virtual: Reality in the Digital Age. New York: Plenum Trade.
The Univalent Foundations Program. 2013. Homotopy Type Theory: Univalent Foundations of Mathematics. Princeton: Institute for Advanced Study.
Available online: https://homotopytypetheory.org/book (accessed on 9 November 2020).
Rodin, Andrei. 2014. Axiomatic Method and Category Theory, 1st ed. Synthese Library, Studies in Epistemology, Logic, Methodology, and
Philosophy of Science 364. Cham: Springer International Publishing. [CrossRef]
Russell, Bertrand. 1912. The Philosophy of Bergson. The Monist 22: 321–47. [CrossRef]
Scotus, John Duns. 1987. Duns Scotus: Philosophical Writings: A Selection. Translated by Allan B. Wolter. Indianapolis: Hackett Publishing
Co, Inc.
Stiegler, Bernard. 1996. La technique Et Le temps: La désorientation. Paris: Galilée/Cité des sciences et de l’industrie.
Stiegler, Bernard. 1998. La Technique Et Le Temps, t. 2. La Désorientation. Paris: Editions Galilée.
Stiegler, Bernard. 2001. La Technique Et Le Temps, Tome 3: Le Temps Du Cinema et La Question Du Mal Être. Paris: Galilée.
Voevodsky, Vladimir. 2010. What If Current Foundations of Mathematics Are Inconsistent? Ideas, Institute for Advanced Study,
4 May 2016. Available online: https://www.ias.edu/ideas/2012/voevodsky-foundations-of-mathematics (accessed on 10
November 2020).
Voevodsky, Vladimir. 2014. The Origins and Motivations of Univalent Foundations. Ideas, Institute for Advanced Study, October 3.
Available online: https://www.ias.edu/ideas/2014/voevodsky-origins (accessed on 10 November 2020).
Wheeler, John. 1999. Information, Physics, Quantum: The Search for Links. Available online: https://doi.org/10.1201/9780429500459-
19 (accessed on 9 February 2015).
Wittgenstein, Ludwig, R. G. Bosanquet, and Cora Diamond. 1976. Wittgenstein’s Lectures on the Foundations of Mathematics, Cambridge,
1939: From the Notes of R. G. Bosanquet, Norman Malcolm, Rush Rhees, and Yorick Smythies. Ithaca: Cornell University Press. First
published in 1939.
Wittgenstein, Ludwig. 1967. Remarks on the Foundations of Mathematics. Edited by G. H. von Wright, R. Rhees and G. E. M. Anscombe.
Translated by G. E. M. Anscombe. Cambridge: MIT Press. First published in 1956.
Wittgenstein, Ludwig. 2009. Philosophical Investigations, 4th Revised ed. Translated by G. E. M. Anscombe, P. M. S. Hacker, and J.
Schulte. Hoboken: Wiley-Blackwell. First published in 1953.