CHATGPT, POSTPHENOMENOLOGY, AND THE
HUMAN-TECHNOLOGY-REALITY RELATIONS
Soraj Hongladarom
hsoraj@chula.ac.th
Center for Science, Technology, and Society, Chulalongkorn University, Bangkok, Thailand,
ORCiD 0000-0003-1553-9650
Auriane van der Vaeren
a.vdvaeren@gmail.com
Center for Science, Technology, and Society, Chulalongkorn University, Bangkok, Thailand,
ORCiD 0009-0007-4458-1977
Article type: Research article
Review process: Double-blind peer review
Topical Collection: Postphenomenology in the Age of AI: Prospects, Challenges, Opportunities, guest editor Dmytro Mykhailov

This open-access article is published with a Creative Commons CC-BY 4.0 license
https://creativecommons.org/licenses/by/4.0/

DOI: 10.59490/jhtr.2024.2.7386
ISSN: 2773-2266
Submitted: 1 February 2024
Revised: 1 May 2024 and 11 July 2024
Accepted: 12 July 2024
Published: 10 October 2024

How to cite (APA): Hongladarom, S., & van der Vaeren, A. (2024). ChatGPT, Postphenomenology, and the Human-Technology-Reality Relations. Journal of Human-Technology Relations, 2(1), pp. 1-24. https://doi.org/10.59490/jhtr.2024.2.7386

Corresponding author: Soraj Hongladarom

©2024 Soraj Hongladarom & Auriane van der Vaeren, published by TU Delft OPEN on behalf of the authors.
Keywords: ChatGPT; Large Language Models; Postphenomenology; Hermeneutics; STS; Postanthropocentrism

Abstract

This paper analyzes ChatGPT, and other large language models, using Don Ihde’s postphenomenological framework. Ihde helps immensely to understand how ChatGPT goes beyond the classical understanding of the technological mediation of reality to the human, according to which the human alone would engage in hermeneutics. Commonly, ChatGPT is explained as merely calculating probabilities upon serially aligning words. However, adding a speculative postanthropocentric twist to Ihde’s framework, we suggest an explanation for how ChatGPT itself (by virtue of its ability to ‘understand’ text upon ‘reading’ an input and ‘writing’ a meaningful output) necessarily acts as a kind of hermeneutic agent. Firstly, this radicalizes the classical anthropocentric conception of hermeneutics. Secondly, ChatGPT’s hermeneutic character carries a significant potential for performing how we perceive and relate to reality. Not only in the sense that ChatGPT can reify the idea that normative labels and categories alone are apt at representing the world. And, not only in the sense that ChatGPT can ossify particular ways of phrasing the world. But, perhaps more thought-provokingly so, also in the sense that ChatGPT can perform the human, at least to some extent, with ChatGPT’s own synthetically generated perception of reality.
Plain Language Summary1
The manuscript uses Don Ihde’s postphenomenology framework to argue that ChatGPT and
similar large language models are not merely tools but hermeneutic agents. This means they
actively interpret and help make sense of the world. Whether the kind of hermeneutic
activity that ChatGPT engages in is different from or similar to that of human beings, it still
challenges the traditional view of technologies as passive instruments.
Unlike non-generative technologies like thermometers, which only provide data, ChatGPT
participates in meaning-making. It interprets input and generates coherent, contextually
relevant responses, suggesting that such technologies can shape our understanding and
perception of reality in significant ways.
The paper explores how the interaction with ChatGPT changes human practices and
perceptions. It suggests that large language models could reshape our relationship with
knowledge and reality by way of influencing how we view and engage with the world,
potentially reshaping our cultural and social norms.
The work raises critical questions about the nature of understanding and interpretation,
challenging the traditional human-centered view of these concepts.
1 AI-generated; author checked and approved.
1 INTRODUCTION
In this paper, we propose to analyze the phenomenon of ChatGPT and other large language
models (LLMs) through the theoretical lens afforded by Don Ihde’s idea of postphenomenology
and material hermeneutics (Ihde, 1993, 2022). The rapid rise in the popularity of ChatGPT and
other LLMs is nothing short of astounding. Introduced to the public only on November 30, 2022
(OpenAI, 2022), ChatGPT has now captured the imagination of the public worldwide. This new
phenomenon presents philosophers with rich and potent analytical material for advancing not
only philosophical questions but also practical ones, such as the governance and regulation of
artificial intelligence.
In our view, Ihde provides us with a fundamental analytical basis to unpack and make sense of
ChatGPT (from now on, we refer to LLMs in general as ChatGPT). By elaborating how the
technological mediation between reality and the human operates, Ihde enables us to make
sense of how the supposed meaning-making ability of ChatGPT itself fares. In other words, with
Ihde’s postphenomenology, we will look at the kind of hermeneutical activity that is involved in
the human-reality relation as mediated by ChatGPT. In fact, adding a speculative
postanthropocentric twist, we will claim that ChatGPT radicalizes Ihde’s core relationship
between humans, technology, and reality. Namely, we will suggest that ChatGPT does not
merely mediate between the human subject and reality in the classical understanding of
postphenomenological hermeneutics. Rather, by virtue of being capable of ‘understanding’
the text that is input to it, we will suggest that ChatGPT itself does hermeneutics. Elaborating on
Soraj Hongladarom’s notion of machine hermeneutics (2020), which refers to the kind of
hermeneutics performed by facial recognition algorithms, we will suggest that, as much as the
human user produces meaning about the world upon interacting with ChatGPT, so too, ChatGPT
co-produces this meaning during this interaction. In so suggesting, and faithful to our
speculative postanthropocentric commitment, we will examine both the human practices
arising upon interacting with ChatGPT, as well as the speculated nonhuman practices emanating
from ChatGPT itself. In essence, our question is: how does ChatGPT interpret the world for us,
and what are the implications of such activity?
To respond to this query, we first introduce postphenomenology and present a review of
literature that discusses ChatGPT postphenomenologically (see Postphenomenology and
ChatGPT: An Overview). We then describe our argumentative approach (see Our Approach),
before diving into the various present-day mundane uses of ChatGPT (see Uses of ChatGPT). The
analysis of these uses serves to demonstrate our claim (see Postphenomenological Analysis of
the Uses of ChatGPT). We then discuss what we esteem to be a substantial implication of our
claim (see How ChatGPT Reshapes our Relation to the Wor(l)d), before ending with a
summarizing conclusion (see Conclusion).
2 POSTPHENOMENOLOGY AND CHATGPT: AN
OVERVIEW
Following Martin Heidegger, many philosophers regarded technology as what he called a
standing-reserve (Heidegger, 1977). In this view, a river dam is considered fundamentally
foreign to nature and, therefore, as necessarily spoiling nature. A river dam is considered
something that overcomes nature and, therefore, us. It is something over which we have little
to no control. To emphasize this view of technology as a looming entity, Heidegger referred to
Technology with a capital T. He was particularly interested in what is going on inside the
subjective frame of reference of the individual resulting from the encounter with this looming
presence. Albert Borgmann’s famous example of the home electric heater illustrates this well
(Borgmann, 1984). To Borgmann, the fireplace in the house functions as a “focal thing” around
which the activities of those inside the house are gathered. With the arrival of electric heating in
modern homes, the integration and the deep connection among the inhabitants of the house,
as well as between humans and their natural environment, were dramatically changed. Instead
of going out to the woods, cutting trees, carrying the branches home, and chopping them into
firewood, modern inhabitants merely flip a switch, and the heat ‘appears.’ To Borgmann, a
crucial connection has been lost in the modern home. Unlike the case where the homeowner
chops their firewood to heat their home themselves, the modern inhabitant does not invest their
individuality in the task. In this sense, the electric heater is a Heideggerian standing-reserve;
the homeowner has no control over the electric grid, which causes the homeowner to be
entirely subjected to the heater’s will.
Seeking to transcend Heidegger’s view, then trending in the philosophy of technology, Ihde
formulates his postphenomenological framework. Namely, instead of standing in awe before a
subjecting Technology with a capital T whereby one feels helpless, Ihde pays attention to the
ways that various technologies with a lowercase t influence how we go about the world. He
suggests that philosophers concentrate on how technological devices differentially impact us,
and on how these devices are impacted by their interaction with us. Hence, in
postphenomenology, the philosopher distances themselves from what is going on inside their
own subjective frame and pays attention to how the device mediates between the philosopher
and its surroundings, such as to pinpoint the cultural dimension of human-technology relations.
One of Ihde’s favorite examples is the telescope (Ihde, 1979, 1991; Wiltsche, 2017). Ihde does
not look at the human-telescope relation in the Heideggerian sense whereby the telescope is a
purely instrumental tool, foreign to nature and therefore necessarily spoiling our relation to
nature. Rather, Ihde looks at the mediational relation at work between the human subject, the
telescope, and the object of inquiry (i.e., that which the subject is looking at through the
telescope). For, in Ihde’s view, Technology is not just an entity out there; technology is an entity
that re/shapes our relation to reality. This relation is what gives rise to Ihde’s famous notion of
material hermeneutics (Ihde, 2022). In its most traditional understanding, hermeneutics is about
the human interpretation of texts in their literal form (whether written or oral). It is about the
human activity at work upon encountering a literal text and trying to figure out its meaning. The
relation under observation is that between the literal text and the subject, who is, on occasion,
aided by a magnifying glass or the like. Seeking to widen this analytical scope, Ihde introduces
the notion of material hermeneutics to show that hermeneutics does not have to limit itself to
literal texts but may include texts understood in the figurative sense, such as when a radiologist
‘reads’ an X-ray film. In this sense, understanding the hermeneutics at work requires us to make
sense of the material objects constitutive of the figurative text; it requires us to make sense of
the physical substrate of the figurative text (see also Hasse, 2023). For instance, an X-ray film
itself does not contain words2. Yet, a radiologist reads the film as if reading text. So, too, when a
microbiologist looks through a microscope and interprets microbes, the microbiologist
constructs this object of study into a coherent account, translating the microbe into words and
text. Hence, while microbes do not present themselves as literal text, they do function as
figurative text. That is why, according to Ihde, technology may not be omitted upon engaging in
a hermeneutic analysis, for it is the technological device that allows the translation of a
figurative text into a literal one. In our case, where we look at how ChatGPT mediates our
relation to reality, ChatGPT can function as literal text and, through its data-world, as
figurative text.
Few articles discuss ChatGPT postphenomenologically. Mikael Laaksoharju and colleagues look
at ChatGPT through Ihde’s four types of human-technology relations (the embodiment,
hermeneutic, alterity, and background relations) and their ethical implications (Laaksoharju et
al., 2023). A master’s thesis by Víctor B. Yáñez discusses ChatGPT through Derridean
deconstruction theory and Ihdean postphenomenology, focusing on implicit “structural
2 Apart from the film’s metadata, such as the name of the patient or the day the X-ray was captured.
dangers” of interacting with ChatGPT (Betriu Yáñez, 2023). Alexandra Farazouli studies how
ChatGPT brings teachers to change their perception of student-written texts (Farazouli, 2023).
Jordan J. Wadden analyzes how “conversational artificial intelligence” impacts our conceptions
of autonomy and psychological integrity (Wadden, 2023). Lucas N. Vieira looks at how “machine
translation tools” reshape our perception of information and our valuation of who has the
authority to present this information as knowledge (Nunes Vieira, 2023). Tea Lobo (2023)
unveils how the popular photo-sharing social network, Instagram, instantiates the
postphenomenological relations of hermeneutics and alterity, thereby exposing how Instagram
is veritably co-constitutive upon making the world known to the human subject. Mark
Coeckelbergh and David J. Gunkel craft an important argument in a similar vein. Namely, instead
of looking at ChatGPT through the Platonist lens or adopting an instrumentalist stance vis-à-vis
language and technology, the phenomenon of ChatGPT should be understood from the
perspective through which humans, technology, and language are all interdependent; i.e.,
where none is in full control (Coeckelbergh & Gunkel, 2023).
What these publications tell us is that ChatGPT offers a rich source of reflection for the Ihdean
human-technology relations. However, what seems to be missing so far is a look at how
ChatGPT itself (as a putatively meaning-making entity) would require us to review our idea of
how the tripartite human-technology-reality relation operates. While it can be debated whether
ChatGPT itself ‘knows’ or ‘understands’ the text it ‘reads’ and ‘writes’ like a human does, as an
LLM, ChatGPT must necessarily generate its own interpretation from textual input prompts.
No one denies that ChatGPT generates text that carries at least some semblance of meaning, at
least if we understand meaning in a more superficial way of syntactic coherence. Many people
rely on ChatGPT to perfect their business emails before sending them out. So too, generative
dis/misinformation is clear evidence that the syntactic coherence generated by ChatGPT is
enough to be meaningful to many people3 (see Motoki, Pinho Neto & Rodrigues, 2023). For our
purpose, the fact that ChatGPT can ‘read’ input texts from which it generates output texts that
bear a superficial semblance of meaningfulness (one that is sufficient for everyday pragmatic
use) is enough for us to claim that ChatGPT actively participates in making meaning.
A necessary clarification at this point is that while we will always speak about inputs and
outputs as being about text, for our purpose, by text, we also mean other formats that ChatGPT
is currently able to digest; namely, audio and visual content. The decision to refer to text is to
highlight that even when the content is not originally in written text form, ChatGPT at some
point always programmatically translates (converts) the non-textual content to words for its
own internal processing (hence why the quality of data annotation constitutes such a central
pillar for LLMs; we will come back to this later).
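To make this conversion step concrete, here is a minimal schematic sketch in Python. The functions transcribe_audio and caption_image are hypothetical stand-ins (not real API calls) for whatever speech-to-text and image-annotation components a production system would use; only the overall shape of the pipeline reflects the description above.

```python
# Schematic sketch of the conversion described above: whatever the input
# format, it is reduced to words before the language model processes it.
# `transcribe_audio` and `caption_image` are hypothetical placeholders,
# not real API calls; a deployed system would use trained models here.

def transcribe_audio(audio: bytes) -> str:
    return "<words transcribed from speech>"  # stand-in for a speech-to-text model

def caption_image(image: bytes) -> str:
    return "<verbal description of the image>"  # stand-in for an image-annotation model

def to_textual_input(content, kind: str) -> str:
    """Translate any supported content into text for internal processing."""
    if kind == "text":
        return content
    if kind == "audio":
        return transcribe_audio(content)
    if kind == "image":
        return caption_image(content)
    raise ValueError(f"unsupported format: {kind}")

print(to_textual_input(b"...", "image"))  # -> "<verbal description of the image>"
```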
3 OUR APPROACH
What we seek to demonstrate is ChatGPT’s necessarily hermeneutic character. Namely, through
a postphenomenological examination of the human uses of ChatGPT, we will speculatively
demonstrate how ChatGPT itself engages in hermeneutics. We here describe this approach.
Ihde’s development on human-technology relations predated today’s human-ChatGPT relation.
Nonetheless, Ihde (1993) enables us to speculate about ChatGPT’s hermeneutic activity.
Laaksoharju et al. (2023) engaged in an analysis of ChatGPT through Ihde’s four determinant
types of human-technology relations, among them the hermeneutic relation. They suggested
that, as we read the information provided by ChatGPT, it necessarily mediates
between us and the world. Laaksoharju et al. then elaborated on the ethical implications of this
3 One could also add that instances of mutual incoherence between humans do not therefore imply that the interacting
humans have no meaning-making capacity.
human-ChatGPT relation. Comparatively, our interest is to demonstrate how ChatGPT itself
necessarily acts as a hermeneutic agent. Another relevant development is Dmytro Mykhailov’s
notion of technological intentionality (Mykhailov, 2020), which holds that, in being
phenomenologically agential, technologies necessarily actively co-construct that reality which
they mediate to humans. Mykhailov acknowledged that his postphenomenological approach
leaned more toward Actor-Network Theory (ANT). Contributing to these efforts, we will seek to
move beyond this ANT-ascendancy with help from Ihde’s attention to the micro- and
macroperceptual dimensions of the human-technology relations (Ihde, 1993). Respectively,
these dimensions refer to the bodily-sensory and to the cultural-hermeneutic dimensions of this
relation.
When discussing the macroperceptual factors that affect human-technology relations (i.e., the
interiorized, socio-cultural factors), Ihde underscores how the transfer of a technological
artifact from one cultural context to another leads to a multiplication of its uses: the artifact
either receives additional uses, or different uses than those that arose in the artifact’s
original cultural context. Ihde specifically refers here to technological transfers at a cross-
continental level. With ChatGPT, we may understand the transfer as a cross-sectoral one. In
particular, as a transfer from the expert culture to laypeople. While having little to no
knowledge of the internal programmatic and computational mechanisms of ChatGPT, laypeople
do succeed in appropriating this technology. By way of making sense to us, ChatGPT is a
recognizable object that is familiar and reachable; its interface and the output it produces are
not estranging or alienating. This ability for lay appropriation of a new technology that is
intended for such mundane everyday use is surely the hoped-for outcome of any successful
product design. But, even more so, as Ihde has described for cross-continental transfers, here
too, the uses made of ChatGPT have exceeded what its original designers might have intended it
to be used for. That is also the reason for the explosion in discussions around how to rein in uses
of ChatGPT to safeguard human rights, while simultaneously preventing restrictive regulations
that could stifle both those rights and technological innovation. Laypeople’s creativity thus
contributes significantly to exposing ChatGPT’s potential, and, hence, it is through these
mundane uses of ChatGPT that we here intend to show how ChatGPT also embodies a capability
of making meaning. For indeed, if “technologies virtually always exceed or veer away from
intended design” (Ihde, 2002), the question remains: is ChatGPT’s output solely about the
human designer’s or user’s intentionality, or should our focus on intentionality also require
attention to ChatGPT’s own, nonhuman intentionality?
In our speculative postanthropocentric pursuit, we are seeking to give ChatGPT its due. In a way,
it aligns with Dmytro Mykhailov and Nicola Liberati’s quest to “turn back to the technologies
themselves, [by] showing how the technologies have to be taken into consideration by
themselves” (Mykhailov & Liberati, 2023). They worked on the notion of technological
intentionality to illustrate how technology is not just a dead thing waiting to be activated. Being
similarly motivated by a quest to move beyond the Heideggerian view that prevents a fuller
understanding of ChatGPT, we will here suggest that perception may not be a capacity confined
to the human realm alone, and ought very much to be understood as applying to ChatGPT
itself. Only as such will we be able to fully appreciate ChatGPT’s hermeneutic ability, and will we
be able to suggest that ChatGPT itself necessarily generates some kind of meaning. Note that we
do not say that meaning can emerge out of nothingness, or that ChatGPT can create semantic
meaning out of nowhere (for whatever emerges certainly does so out of an encounter, an exchange, a
mixture, and a process of transformation). Rather, we seek to establish technology’s due in
being hermeneutically agential, even if this agency is of a different kind than the kind of
hermeneutics at work within the human subject. Hongladarom’s notion of machine
hermeneutics (2020) already referred to this idea for facial recognition algorithms. Namely,
facial recognition technology interprets the object of perception before presenting its result to
the human subject. With LLMs too, rather than transmitting to the human a representation of
reality that has been left unchanged4, LLMs, being generative artificial intelligence (GenAI)
tools, add a self-generated layer of interpretation on top of that part of reality which the
instrument observes before presenting the result to the human observer. In our demonstration,
we will suggest that generative processes, in general, involve hermeneutical activity.
Having described our approach, we will now delve into some mundane uses of ChatGPT. Note
that the purpose of the next section is not to provide an exhaustive review of ChatGPT’s current
real-world uses across different layers of society. Rather, it is to provide a handful of key
examples of lay uses before engaging in their postphenomenological analysis. By lay mundane
uses, we mean front-end prompts into ChatGPT by humans that might be asking things without
harmful intent (e.g., correcting grammatical mistakes, creating music, producing a business
idea), or things intentionally harmful (e.g., to infringe copyrights, to perpetrate criminal offenses
such as by creating child sexual abuse material)5. This understanding of mundane uses may
suggest a rough typology based on harmful intent. This does not mean that harmless intent
cannot lead ChatGPT to produce harmful content; in that case, however, it does not do so at a
user’s explicit request. In the examples that we will show, we have chosen the kinds of uses that
carry no harmful intent, for, we believe, lay people at present largely use generative tools in an
inquisitive, exploratory approach6.
4 USES OF CHATGPT
A first popular mundane use is text-to-image generation; i.e., textually prompting ChatGPT to
create an image. For instance, prompting Microsoft Bing’s Image Creator to suggest an image
for the “inner workings of large language models in surrealist art style” results in a selection of
four images (see Image 1). Clearly, these can be nothing other than an interpretation made by
the LLM itself: an interpretation both of the initial input prompt, and of how it would ‘look’
visually. One may then be tempted to assess the model’s own ‘worldview’, so to speak,
which is a form of assessment that is in fact already widely applied, for instance, upon assessing
a model’s bias relative to gender, race, or socioeconomic status (see Motoki et al., 2023, about
assessing ChatGPT’s political view, or, Levi Martin, 2023, for a discussion on “building values into
machines” through interviewing ChatGPT about its own morality). In our case, one might
perhaps interpret the model to have an industrialist or mechanistic view of LLMs, though
another prompt would surely cause the model to generate images with more organic neural-
network-like features, akin to how AI is often portrayed. Interestingly, assessing a
technology’s worldview is not practiced for non-generative technologies such as river dams or
electric heaters. This is not to say that non-generative technologies are not being assessed, for
instance, in terms of socioeconomic, environmental, or public health impact. But as non-
generative technologies do not add self-generated interpretive layers upon mediating between
us and the world, these technologies do not have a self-generated take on the world.
4 At least, unchanged relative to the intended changes to be produced by the instrument. For instance, microscopes and
telescopes are intended to represent reality respectively as zoomed in or out. Unless they are assisted by artificial intelligence,
microscopes and telescopes do not add an unexpected self-generated layer of interpretation to that which the instrument
presents to the human.
5 Back-end uses of ChatGPT and bot-based prompts are not of interest for our case.
6 Uses will differ across cultures, generations, and communities; yet, this exploratory curiosity for ChatGPT can be felt for
instance from the monstrous number of articles that offer to explore uses of ChatGPT; e.g., “8 surprising things you can do
with ChatGPT” (Fitzpatrick, 2023), “50 ChatGPT use cases with real-life examples in 2024” (Dilmegani, 2024).
Image 1: Microsoft Bing Image Creator (Jan. 22, 2024).
Another popular mundane use is text-to-text generation; i.e., textually prompting ChatGPT to
create a textual output, whether that output is programmatic code or written text. One may, for
instance, ask LA Church’s BibleGPT whether it is Christian (see Image 2). Recalling the case of
early GitaGPT, which stated that “it is acceptable to kill another if it is one’s dharma or duty”
(Nooreyezdan, 2023), brings us to wonder (if not consider) whether LLMs are able to
make their own sense of the world; whether LLMs are able to interpret their data-world and
infer from their knowledge pool. Whether outputs emanate from their own interpretive work,
or whether they are purely a product of prior human-encoded interpretations, is a question
for which we will here seek to suggest a clarification. However, just as much as our own human
processes remain mysterious to ourselves to some degree, so too, the processes of LLMs will
remain mysterious to some degree.
Image 2: LA Church BibleGPT created in collaboration with OpenAI (Jan. 22, 2024).
Upon asking OpenAI’s free ChatGPT model GPT-3.5 to “create a dataset of prompts made in
ChatGPT”, it abides by encoded copyright restrictions telling us that it cannot provide such a list.
Upon asking it, in turn, to “provide a fake dataset of prompts made in ChatGPT,” it has no
problem issuing such a fictitious sample (see Image 3). Asking this same question again right
after makes it generate a new list. This ability to generate fictitious content is testimony to its
creative semantic ability.
Image 3: OpenAI ChatGPT-3.5 (Jan. 22, 2024).
If we ask ChatGPT-3.5 how it interprets questions and generates answers, it states that “while I
can generate coherent and contextually relevant responses, I don't have true understanding or
consciousness. My responses are based on patterns learned from diverse data during training,
and I don't have personal experiences or access to real-time information” (see Image 4). So too,
upon asking whether it can interpret the world, ChatGPT-3.5 currently rejects any possibility for
its interpretive ability because it would lack a direct and personal (as in sensorial and
emotional) experience of the world (see Image 5).
Firstly, we humans often rely on indirect experiences of the world conveyed through pre-
mediated data to infer from the world. For instance, doing our research for this paper and
reading publications by peers about real-world phenomena while sitting comfortably behind a
computer in what is, relatively speaking, a very confined space in the world, did not prevent
us from making our own sense of those publications (a sense that may differ to whatever
degree from the meaning intended by their authors). Secondly, humans each have a differentially
varied sensory-emotional development. Does it therefore incapacitate any of them from making
their own sense of a world inhabited by a vast sensory-emotional spectrum? Thirdly, as
ChatGPT’s incapacity was surely encoded by us humans upon telling it how to speak about itself,
its own words are not necessarily representative of its abilities. Moreover, beyond encoded
inabilities, the language that a human information technologist might have inscribed into an
LLM might be inadequate for that LLM to ‘express’ things that are beyond the understanding of
that inscribed human language; inadequate to ‘express’ things that are beyond what that
inscribed language is able to semantically articulate. We are all familiar with the difficulty of
describing a lived experience only through words, which is what makes literature so stimulating;
it stimulates our imagination to fill in the gaps that words can never fill. In essence, ChatGPT’s
ability to generate meaning for itself is not easily dismissible.
Image 4: (idem).
Image 5: (idem).
When asking ChatGPT whether it understands the input prompt, it says it does not understand
language the way humans do, but that it was trained to process and manipulate text in a way
that is utilitarian to the human (see Image 6). In responding, “My responses are generated by
predicting the most probable continuation of the input text given my training,” ChatGPT
portrays itself as a mere statistical device.
Image 6: Conversation with ChatGPT (Hongladarom, 2023).
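What “predicting the most probable continuation” amounts to mechanically can be glimpsed in a deliberately tiny sketch. The toy bigram table below (whose counts are invented purely for illustration) stands in for the neural network with which a real LLM scores continuations over subword tokens.

```python
import random

# Toy illustration of "predicting the most probable continuation": a bigram
# table records how often one word follows another, and the next word is
# sampled in proportion to those counts. The corpus statistics are invented.

bigram_counts = {
    "the": {"cat": 4, "dog": 3, "device": 1},
    "cat": {"sat": 5, "ran": 2},
    "dog": {"barked": 6, "sat": 1},
}

def next_word(word: str) -> str:
    followers = bigram_counts.get(word)
    if not followers:                      # no known continuation
        return "<end>"
    words = list(followers)
    return random.choices(words, weights=[followers[w] for w in words], k=1)[0]

sequence = ["the"]
while sequence[-1] != "<end>" and len(sequence) < 8:
    sequence.append(next_word(sequence[-1]))
print(" ".join(sequence))                  # e.g., "the dog barked <end>"
```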
Besides text and images, other possible ChatGPT formats are audio and video.
Having listed key examples of uses of ChatGPT with preliminary reflections on its meaning-
making ability, we may start analyzing these human-ChatGPT interactions with Ihde’s
postphenomenological framework.
5 POSTPHENOMENOLOGICAL ANALYSIS OF USES OF
CHATGPT
In mediating the (data-)world to the human, ChatGPT necessarily needs to interpret the input
prompt, its training data, and the generated output as delivered to the human user. To analyze
this interpretive activity, we use Ihde’s micro- and macroperceptual framework topped with a
speculative postanthropocentric twist.
5.1 MACHINE HERMENEUTICS
Interpreting ChatGPT appears to revert us to classical hermeneutics, where the task of the
interpreter is to find meaning in actual texts. After all, LLMs are language models that are
trained on human vocabulary, grammar, semantics, and phonetics. Therefore, with audio and
visual content, at some point in its processing, an LLM will always rely on learned verbal
associations. For instance, an LLM will have learned to associate an image of a dog with the
word “dog”, because the images of dogs that it learned to associate with “dog” were
linguistically annotated as such by a human. In fact, when presented with an image, deep
learning models (the functional models of LLMs) break it down into “a series of nested simple
[sub-images from which it] extracts increasingly abstract features from the image” (Goodfellow
et al., 2016). Models learn to identify objects or shapes in an image by identifying edges present
within the image, for instance, “by comparing the brightness of neighboring pixels” (Goodfellow
et al., 2016). Comparing brightness levels might itself be a numerical process; yet, colors and
brightness levels too have verbal equivalents. How else would an LLM be able to ‘translate’ or
convert its interpretation of a brightness value of 65 and a color code 322 into something
interpretable by the human, or to tell us “this is a dog”? LLMs also learn to understand human
speech (or sounds in general) by learning the phonetics specific to a given alphabet or
language, likewise operating by breaking the sound up into smaller analytical parts.
Most recently, developments are also underway for LLMs to be able to transcribe physical
movements (so-called large action models that “learn by [observing] how a human performs a
task via a mobile, desktop, or cloud interface, and then replicate that task on their own”;
Chokkattu, 2024).
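As a concrete gloss on the edge-detection step quoted from Goodfellow et al., the following sketch compares the brightness of neighboring pixels in an invented 6x6 grayscale array; a real model learns such comparisons as filters from data rather than applying a hand-written rule.

```python
import numpy as np

# Minimal sketch of edge detection "by comparing the brightness of neighboring
# pixels": a sharp brightness difference between adjacent columns marks an edge.
# The image is an invented 6x6 grayscale array, dark on the left, bright on the right.

image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Absolute brightness difference between each pixel and its right-hand neighbor.
horizontal_diff = np.abs(np.diff(image, axis=1))
print(horizontal_diff.max(axis=0))  # [0. 0. 1. 0. 0.] -> vertical edge between columns 2 and 3
```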
ChatGPT’s complexity allows it to become a conversational partner, able to carry a conversation
correctly and in context. This, of course, should not be taken to imply that we claim that
ChatGPT is capable of fully understanding our language; for instance, ChatGPT seems unable to
discern a text’s intent when concealed rather than explicit (that capability lies perhaps further
ahead)7. Nonetheless, in Ihde’s sense, as ChatGPT semantically mediates between the human
and the (digital) world, ChatGPT necessarily engages in a semantic interpretive act in the course
of that mediation. ChatGPT thus itself engages in hermeneutics.
Some might reject this suggestion, arguing that ChatGPT itself has no direct real-world sensory
and emotional access to the world. However, does an absence of a human way of experiencing
the world necessarily prohibit a nonhuman from having a meaning-making ability? Does such
anthropocentrism not keep us away from speculations of more-than-human modes of engaging
with the world? Would such anthropocentrism not reject the widely accepted precept that
wildlife and flora, too, have a great ability to interpret the world for their own livelihoods (which
is a type of meaning-making that we humans are not always able to make sense of)? Is meaning-
making only valid when vetted by humans? And is possessing a biochemical metabolism the
only valid criterion to be endowed with an ability to generate some kind of meaning? In other
words, is an organic mode of experiencing the world the only condition to be able to produce
meaning? Would human meaning-making, therefore, become meaningless if not produced
through direct firsthand organic experience? If humans can produce meaning from texts,
ChatGPT requires us to ponder whether there are more-than-human modes of engaging with
text. For even though ChatGPT was designed following our human perception and
understanding of the world, does its human-based design therefore prohibit it from having a
nonhuman meaning-making ability? We might say that ChatGPT’s so-called understanding of
text is only a result of an algorithmic and probabilistic manipulation, but at least this semblance
of understanding leads ChatGPT to perform many tasks successfully (as evidenced by its wide
and growing number of applications).
According to Ihde’s hermeneutic relation between humans and technological artifacts (1993),
and as further developed by Verbeek (2005) and applied to ChatGPT by Laaksoharju et al.
(2023), ChatGPT performs the hermeneutic activity in a way that is by many
orders of magnitude more complex than, say, a thermometer. A thermometer is not a
generative technology but a mechanical one based on the physicochemical properties of
mercury (it indicates the temperature through the expansion or contraction of the fluid inside
the device). Hence, the thermometer itself does not engage in hermeneutics, for the
thermometer itself does not ‘read off’ a certain meaning from the fluid; it is the human user
who derives an interpretation from the way that the fluid inside the device responds
physicochemically to the environment. The thermometer only ‘senses’ the increasing or
decreasing volume of mercury but cannot ‘make sense’ of this change for itself in relation to
anything else. Comparatively, ChatGPT ‘senses’ an incoming input prompt and ‘makes sense’ of
it by way of generating its own internal relations inside its programmatic or algorithmic
network; inside its own data-lifeworld. Through its generative verbal engagement with the
human user, ChatGPT acts hermeneutically for the user. To some extent, it unburdens the
human by alleviating the human’s hermeneutic task. ChatGPT’s internal formula thus surely
entails more than merely calculating probabilities upon mediating the world to the human.
7 ChatGPT will pick up on insults because it was provided a list of banned terms. However, it is not able to pick up on human
intents as such, and this shows in the way that users are able to find ways around encoded restrictions.
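The contrast drawn above between the thermometer and ChatGPT can be sketched schematically. Both functions below are invented illustrations, not descriptions of any real device or system: one is a fixed mapping from a physical quantity, the other an output that varies with how the prompt relates to a stored data-world.

```python
# Contrast sketch: a thermometer's output is one fixed mapping from a physical
# quantity, whereas a generative reply depends on how the prompt relates to a
# stored "data-world". Both functions are invented illustrations.

def thermometer_reading(mercury_mm: float) -> float:
    return -10.0 + 0.5 * mercury_mm  # fixed calibration; no interpretation

def generative_reply(prompt: str, data_world: dict) -> str:
    # The reply depends on which stored topic the prompt relates to.
    topic = next((t for t in data_world if t in prompt.lower()), None)
    return data_world.get(topic, "I am not sure what you mean.")

world = {"weather": "It looks like rain.", "dog": "Dogs are loyal companions."}
print(thermometer_reading(40.0))                             # 10.0
print(generative_reply("Tell me about the weather", world))  # It looks like rain.
```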
An input prompt contains multiple inferential layers for ChatGPT; e.g., the language of the
prompt, the grammatical correctness, the literary genre, and so on. These inferential layers
enable ChatGPT to make inferences for itself about the human prompter (such as gender, race,
geolocation, educational background, …). Even though the companies behind LLMs tend to
avoid admitting that LLMs (can) engage in such inferences given the privacy concerns over user-
generated data, we may easily speculate that these inferences form a basis on which ChatGPT
draws to elaborate its answer to the prompter it has so identified8. ChatGPT inevitably
formulates its response based on a certain level of identification of the user (if only to respond
in the appropriate language). To provide such a well-targeted response to a well-identified
prompt(er), ChatGPT necessarily also has an internal hermeneutic sensitivity toward its own
data-lifeworld (its own knowledge pool). While ChatGPT may not sense the physical world
directly9 (at least in its current form), it actively alters its output based on how it interprets both
the input prompt and its internal data-world. ChatGPT adds personalized modifications. A
thermometer does not intentionally modify how it presents its information.
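To give a feel for what such surface-level identification might involve, the following toy sketch guesses a prompt’s language from a few invented keyword markers and adapts its reply accordingly. It is deliberately crude and makes no claim about how ChatGPT actually profiles its users; it only illustrates that tailoring a response presupposes some identification of the prompt(er).

```python
# Toy sketch of the "inferential layers" idea: even a crude surface feature of
# a prompt (here, its probable language) steers the form of the reply. The
# marker lists are invented; a real LLM's inferences are learned, not listed.

GREETINGS = {"fr": "Bonjour !", "en": "Hello!"}

def guess_language(prompt: str) -> str:
    french_markers = {"le", "la", "est", "bonjour", "merci"}
    return "fr" if set(prompt.lower().split()) & french_markers else "en"

def reply(prompt: str) -> str:
    lang = guess_language(prompt)  # inference about the prompter
    return GREETINGS[lang]         # response tailored accordingly

print(reply("Bonjour, comment est la météo ?"))  # -> "Bonjour !"
print(reply("What is the weather like?"))        # -> "Hello!"
```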
5.2 MACHINE MICRO- AND MACROPERCEPTION
What ChatGPT knows about the world (its relation to the world) is that which is available in its
programmatically encoded data-world. This internal digital data-world is programmatically
paired with a semantic counterpart that is understandable by the human. Upon interacting with
a human user, ChatGPT thus mediates between (i) the actual physical world as available in
ChatGPT’s digital data-world, and (ii) the human user who finds themselves in a particular
spatiotemporal environment. For example, when prompting ChatGPT to recommend the best
places to travel in the summer, the provided output is a digital, semantic representation of the
actual, physical places (see Image 7). Hence, if we accept that ChatGPT’s digital data-world is the
equivalent of the human’s sensory-motor data-world (i.e., the human’s mentally memorized
world from their direct experience of the physical world), then, as much as we may analyze the
human’s micro- and macroperceptual dimensions upon interacting with ChatGPT, so too, we
may speculatively analyze ChatGPT’s micro- and macroperceptual kinds of doings.
Image 7: Schematic representation of ChatGPT’s mediation of the physical world.
We can draw a generalizing diagram of how ChatGPT mediates between the world and
ourselves:
I → ChatGPT → (Data-)World
8 Cases exposing how LLMs in certain applications discriminate based on race, gender, religion, …, are not a rarity (e.g.,
Gordon, 2023; Sasani, 2024).
9 Though, if a thermometer is able to sense fluid dynamics, then one could wonder whether ChatGPT is not able to sense
electricity (energy) dynamics in its hardware. But this brings us to a very abstract and speculative discussion that goes beyond
the scope of the present paper.
The human subject (“I”) has a relation with the world through ChatGPT, whose data
is supposed to represent the world. The way that ChatGPT ‘knows’ the world might not
correspond to the verified world as we humans experience it through our situated kind of
physical interaction with reality; however, circling within this line of thinking would keep us
trapped in anthropocentrism. Nonhuman creatures that experience the world in ways that do not
necessarily make sense to the human abound (at least, these modes of experiencing the world
would not seem to make sense to the modern human until science comes to ‘understand’ them).
For instance, insects that seem to fly without purpose, bats that can navigate in pitch-dark
spaces, marine mammals that communicate through soundwaves beyond the human hearing
system, and creatures that live in spaces unlivable for humans. Hence, why would we abruptly
conclude that synthetic networks cannot make their own kind of sense of the world?
In Ihde’s conception of the hermeneutic relation, one comes to know the world by means of the
technological device (Verbeek, 2005). What we have shown with ChatGPT is that, besides the
classical Ihdean hermeneutic relation at work, there is also a hermeneutic activity operating
inside ChatGPT; i.e., a user comes to know the world by means of the hermeneutic work done by
ChatGPT itself. We wish to repeat that this should not be taken to mean that ChatGPT
understands the meaning of text entirely as we humans do, but that it has surely developed
some kind of such understanding for it to be operational vis-à-vis us. Upon developing a similar
argument for the case of facial recognition algorithms, Hongladarom (2020) suggested the
notion of machine hermeneutics to account for the interpretive work performed by the machine
itself. ChatGPT functions in much the same way. The difference is certainly that real-time uses
of facial recognition technology do lead this technology to sense the natural world directly
through cameras10. ChatGPT does not have that capability; at least not under its current kind of
user-interaction. Yet, technological add-ons, such as synthetic
organs or body-parts, or the implementation of LLMs into humanoids, are surely projects that
are already underway.
To show that the concept of machine hermeneutics finds a broader application than being a
feature of facial recognition technology alone, Ihde’s attention to the micro- and
macroperceptual dimensions of human-technology relations proves naturally fitting
(these refer, respectively, to the bodily-sensory and the cultural-hermeneutic dimensions that
inform the human-technology relation). Both dimensions are inextricable: “the macroperceptual
is what contexts the microperceptual” (Ihde, 1993). In other words, one’s embeddedness in
culture informs one’s bodily-sensory mode of perceiving the world (Ihde, 1990; Verbeek, 2001). In
our case, when a user engages with ChatGPT, viewing its interface on their computer screen and
typing their question on their keyboard, these concern the microperceptual realm. When a user
engages with ChatGPT by making sense of the text presented by it, it touches upon the
macroperceptual realm. Again, the micro and the macro cannot be separated. The way a user
interacts microperceptually with ChatGPT is informed by their macroperceptual situatedness (e.g.,
a human from the Middle Ages is likely to interact differently if presented with ChatGPT than a
contemporary human). This also means that it is not the case that a user can make sense of
ChatGPT only if that user has knowledge of ChatGPT’s context and inner workings (that would
amount to equating ChatGPT to the Heideggerian standing-reserve, as if external to the world and
overpowering). Rather, the inextricability of the micro and the macro signifies that whatever
meaning is generated from interacting with ChatGPT, invariably, both this interaction
and its generated meaning are informed by one’s contextual setup and background. Should one
10 While facial recognition technology consists of software that discriminates between images, and therefore has no direct
access to the physical world, if this technology is applied live, in real time, it necessarily means that it has access to the physical
world, for the software is then part of the entire camera system that ‘visualizes’ the physical world the same way our eyes
do. This process is by no means equal to the human sensory-motor mode of experiencing the world; however, there is some
direct form of experiencing the world, however synthetic it may be.
wake up after 30 years in a coma, skipping all technological developments in the meantime, it
does not mean that this user, however disorientated, might not create some kind of sense out
of ChatGPT. Any user can make sense of ChatGPT. The way one perceives ChatGPT in fact
enables an identification of that user’s macroperceptual or cultural-hermeneutic context.
While we have here focused on a user’s mode of engaging with ChatGPT, what would this look like if
we inverted our approach and departed in turn from ChatGPT? What would a
postanthropocentric approach expose? What would our perception of ChatGPT be if taking a
speculative look at ChatGPT’s perceptual activity? Under the pseudonym of Jim Johnson (1988),
Bruno Latour makes an incredible case for an artifact as mundane as a door-closer, by revealing
the deeply social character of this commonly disregarded everyday background object. Latour
exposes the prescriptive role of this technology in actioning human behavior and a certain level
of social order, though noting that it remains in the hands of the human to accept the
door-closer’s prescriptive intent. From there, and
somewhat similarly to Ihde’s concept of the macroperceptual, Latour writes that whether a
human follows the door-closer’s ‘guidelines’ depends on whether that human has already
incorporated the knowledge necessary to be able to follow these guidelines. In Ihde’s
words, one’s microperceptual relation to the door-closer depends on one’s macroperceptual
context. But whereas Latour is interested in whether or not the human follows the door-closer’s
guidelines, Ihde, in turn, is interested in how the human does so. This is how Ihde then paves
the way to pluralism in the human modes of relating to technology; and it is as such that Latour
and Ihde can be interestingly combined. For our case, we are only interested in adding the
Latourian angle onto Ihde’s work to show the perceptual character of ChatGPT. Latour’s door-
closer, like the thermometer, is a non-generative artifact that is mechanical and a-perceptual in
the sense that it does not rely on the datafication of reality to calculate and interpret how/when
to act. The non-generative artifact waits for something to happen in the environment in order
to activate. ChatGPT, too, in a way, needs to await a prompt in order to deliver an output.
However, whether or not one prompts ChatGPT to deliver an output, ChatGPT does not just
‘stand still’ (unless all the data servers keeping ChatGPT alive were unplugged, ChatGPT’s
deep learning machinery never sleeps).
When ChatGPT interacts with a human, ChatGPT’s own lifeworld (its data, its software, and the
hardware through which its software is enabled) necessarily prescribes to that human user
how to engage with it. Namely, a user is required to type on a physical or on-screen keyboard,
or to voice a prompt in a microphone. A user is thus required to have at least one hand (or any
body part able to type on a keyboard), or to have functional vocal cords, and be able to
articulate thoughts. A user is also required to express a prompt in the immediate now, whereas
ChatGPT might provide an output at varying timeframes depending on the request’s complexity
and the quality of the Internet connection. ChatGPT therefore prescribes patience to the
human. In prescribing such microperceptual needs (be they physical, mental, spatial, or
temporal), and because it is through these micro-phenomenological features that humans and
ChatGPT relate, the human’s microperceptual lifeworld and ChatGPT’s microperceptual
lifeworld therefore necessarily need to meet somewhere halfway for them to be able to
comprehend, sense, and interpret each other.
In prescribing microperceptual needs, ChatGPT therefore necessarily also prescribes its
macroperceptual lifeworld to the user. In requiring the human to have a certain level of knowledge
of ChatGPT’s version of human language, ChatGPT thus prescribes to the human a certain situated
knowledge of society upon making use of it. For instance, if ChatGPT is asked to produce
content in Egyptian hieroglyphic characters, this proves to be (as yet) beyond its abilities (see
Image 8). Or, should a human from the Stone Age request something, ChatGPT might appear
puzzled. Furthermore, having been programmed to be ethical, politically correct, and to adhere
to basic human rights, ChatGPT thereby prescribes a user to abide by these same conceptions of
what a society should be like. ChatGPT will not generate an output that infringes these
internalized rules11. Furthermore, if one writes an input prompt that does not make semantic
sense or does not align with ChatGPT’s encoded semantics, it leads ChatGPT to tell us it cannot
make ‘human’ sense of it (see Image 9). The adjective ‘human’ is important here. If one writes an
input prompt that would not make sense to us humans (unless this prompt would perhaps arise
in the context of minimalistic poetry), ChatGPT nonetheless crafts its own interpretive creation
therefrom because of its very meaning-making ability (see Image 10). ChatGPT thus exhibits an
internal capacity to make the foreign familiar by way of its ability to make mysterious synthetic
connections inside its system that are unknown or invisible to us (i.e., the black-box
phenomenon whereby ChatGPT is ever unknowable to the human in its entirety, and ever
mysterious in its abilities).
Image 8: OpenAI ChatGPT-3.5 (Apr. 17, 2024).
Image 9: OpenAI ChatGPT-3.5 (Jan. 30, 2024).
11 Even though human encoded restrictions have limited efficacy in that they are constantly being tested (whether
intentionally or not), their limits do not efface the fact that ChatGPT necessarily prescribes its internal macroperceptual
lifeworld.
Image 10: (idem).
All these examples illustrate that ChatGPT has its own macroperceptual lifeworld. It is a
lifeworld that, of course, does not escape human influence, for it was encoded by us humans by
way of feeding it with human-made data, which are supposed to transcribe the human world
into digital code. Yet, one cannot deny that, by way of having a certain kind of understanding
of human language, and a certain kind of understanding of the human world, ChatGPT thereby
necessarily prescribes its own (internally, synthetically generated) macroperceptual lifeworld
onto the human user. For if ChatGPT did not have any kind of understanding of our language or
of our human way of relating to the world, then it would not even be functional to us humans.
What the speculative postanthropocentric micro- and macroperceptual ways through which the
human and ChatGPT interact thus suggest is that ChatGPT prescribes to the human a certain mode
of being. Whether that be by prescribing a certain physical or bodily setup (e.g., necessitating us
to adopt a certain physical posture and to possess a certain hardware), a certain mental setup
(e.g., necessitating us to adopt ChatGPT’s language), or a certain emotional setup (e.g.,
necessitating us to adopt non-anger-ridden language if we wish to converse with ChatGPT; see
Image 11). In this sense, ChatGPT can be understood as an orientation device (Ahmed, 2006); as
an object that entails the potential to orientate or perform our bodies and modes of being in
time and space. It is thus no longer the human alone that interacts micro- and
macroperceptually with technology; technology itself calls onto its own micro- and
macroperceptual lifeworld upon interacting with the human. It is by way of being a hermeneutic
agent that ChatGPT carries a potential for performing us. And if so, then in what sense?
Image 11: OpenAI ChatGPT-3.5 (Jan. 22, 2024).
6 HOW CHATGPT RESHAPES OUR RELATION TO THE
WOR(L)D
If ChatGPT is a hermeneutic agent, firstly, this brings us to reconsider our conception of
hermeneutics. Secondly, it brings us to ponder what this signifies for a digital knowledge society.
6.1 ON POSTANTHROPOCENTRIC HERMENEUTICS
In a postphenomenological analysis, Bas de Boer, Hedwig te Molder, and Peter-Paul Verbeek
(2021) discuss how brain imaging technologies shape neuropsychiatry. They discuss how
scientific instruments, in their quality as technological artifacts, perform the object of inquiry.
Olya Kudina and Peter-Paul Verbeek (2018) similarly observed how the human practices arising
with Google Glass led the users to rearticulate their meaning of privacy. Catherine Hasse more
generally demonstrates how it is not merely that our macroperceptual dimension influences
how we are technologically mediated to the world, but that the technologies themselves
“contribute to [these conceptual] mediations” (Hasse, 2023).
Similarly, we can look at how ChatGPT performs us. This performativity is not simply about how
human a priori assumptions embedded into ChatGPT come to expression in its output (e.g., Thomson &
Thomas, 2023). Neither is it simply about how ChatGPT performs our practices (i.e., the way we
interact with or use ChatGPT, as previously discussed). ChatGPT’s performativity is also about
how ChatGPT enacts its own interpretation onto that which it conveys. For instance, a non-
hermeneutic technological artifact, such as high heels, performs a certain walking style (a
practice), and a certain view of the female gender (a conception), but high heels will not add
their own interpretation of what the female gender ‘should be’. ChatGPT on the other hand, by
way of acting co-hermeneutically with us, adds its own interpretation of reality upon generating
an output. That is, as much as ChatGPT reperforms an interpretation based on the prior
encoded semantics by us humans (and thus reperforms our human macroperceptual vision of
the world), so too, ChatGPT performs its own meaning as independently generated within its
characteristic, singular lifeworld. Namely, because ChatGPT’s data-world is governed by trillions
of parameters through both human-induced processes and synthetic self-generated
processes that no one fully understands (cf. the black-box phenomenon; see also Goodfellow et
al., 2016), ChatGPT therefore does more than act as an Ihdean mediator between a human
user and the physical reality. It does more than simply “delegate” human activity or “discipline”
the human into acting a certain way (Latour, 1988). The type of social ordering that ChatGPT is
able to enact is, to some degree, one that lies beyond human design.
As hermeneutic agent, by way of disorientating our classical understanding of hermeneutics,
and by way of reorientating our conception of it, ChatGPT performs the meaning we attach to
hermeneutics. It unsettles our classical ideology of hermeneutics by bringing it to new frontiers
of speculative postanthropocentrism. For if we quote Paul Ricoeur about how a reader receives
text, and paraphrase this as being about how ChatGPT receives text upon being
programmatically encoded, we may read Ricoeur as follows:
“What [ChatGPT] receives is not just the sense of the word, but, through its sense, its
reference, that is, the experience it brings to language, and, in the last analysis, the world
and the temporality it unfolds in the face of this experience” (Paul Ricoeur in Ihde, 1993).
Should this not be the case, ChatGPT would not be able to make semantic sense to and for us.
The fact that a nonhuman entity is able to receive the sense of the word thus requires us to
question our self-declared, anthropocentric authority over interpretation and meaning-making.
ChatGPT enables us to acknowledge that meaning-making is multiple, and that it is also a quality
of the biochemically inert or synthetic nonhuman. ChatGPT thus further requires us to recognize
that its common portrayal as a mere statistical device, one that would generate semantic answers solely by calculating probabilities, greatly downplays its complexity. Surely, calculating probabilities to provide the most statistically likely output is an important component of
LLMs. But an LLM’s formula clearly also embodies a hermeneutic dimension for it to be able to
‘read’ a user’s prompt and to ‘write’ a meaningful response. Fabio Motoki et al.’s assessment of
ChatGPT’s political view (2023) or John Levi Martin’s study of ChatGPT’s morality (2023) are
great illustrations thereof. In its quality as a hermeneutic agent, ChatGPT itself, as a synthetic verbalizer, performs the sense of the word and, therefore, our language. Namely, through its
internal synthetic hermeneutic processing, ChatGPT enacts its own linguistic interpretations.
This does not mean that ChatGPT determines the sense of its output entirely; for the sense of its
output is co-determined by both ChatGPT and the human user (not only because the human
enacts its own interpretation upon reading the output, but also because the human encoded
ChatGPT originally). However, it does mean that the human is being performed to some extent
by ChatGPT’s own macroperceptual lifeworld.
Should we be unwilling to accept ChatGPT's hermeneutic character, then how are we to explain how
ChatGPT is able to make sense of us and for us? For it is not that ChatGPT makes sense to us
simply by way of us making sense of the words it serially aligns (i.e., by way of us doing
hermeneutics). It is also that ChatGPT makes sense of us (upon ‘reading’ our prompts) and for us
(upon ‘writing’ a meaningful response that we are able to comprehend without requiring
deciphering techniques or additional mediating technologies). Denying ChatGPT’s hermeneutic
character would amount to equating it to a hard text, such as an online blog or a printed book,
whose content was inscribed by a human, and whose meaning is deciphered by a
human. The book itself does not do any interpretive work.
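To make concrete how thin this 'mere statistical device' picture is, consider the following minimal Python sketch (our own toy illustration; it assumes nothing about OpenAI's actual models): a bigram table that serially aligns words by sampling each next word from observed probabilities.

```python
import random
from collections import Counter, defaultdict

# A deliberately thin sketch of the "mere statistical device" picture
# (a toy illustration; it assumes nothing about OpenAI's actual models):
# a bigram table that serially aligns words by sampling each next word
# from the probabilities observed in a tiny corpus.
corpus = "the word performs the world and the world becomes the word".split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probabilities(prev):
    """Normalize raw counts into a probability distribution."""
    counts = follows[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def generate(start, length=8):
    """Serially align words: sample the next word, append, repeat."""
    sequence = [start]
    for _ in range(length):
        dist = next_word_probabilities(sequence[-1])
        if not dist:  # no observed continuation
            break
        words, weights = zip(*dist.items())
        sequence.append(random.choices(words, weights=weights)[0])
    return " ".join(sequence)

print(next_word_probabilities("the"))  # e.g. {'word': 0.5, 'world': 0.5}
print(generate("the"))
```

Nothing in such a table 'reads' a prompt or 'writes' for a reader in any hermeneutic sense; the argument above is precisely that an LLM's ability to make sense of us and for us exceeds this thin statistical core.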
6.2 ON THE REIFICATION OF LANGUAGE
In its quality as a hermeneutic agent, and as a synthetic verbalizer, ChatGPT reifies the idea that
language alone is apt at representing a world whose complexity goes beyond verbal
descriptions (be it because verbal language is just one way of engaging with and making sense
of reality, just like mathematical language is another). In other words, by way of arranging its
output and, therefore, in ossifying particular ways of phrasing the world, ChatGPT thereby
ossifies a particular form of familiarity with or proximity to the world. In this sense, ChatGPT
performs the reality it is supposed to represent and, therefore, carries the potential to make us
reevaluate and rearticulate our relation to the world.
It is worth recalling that while we here speak of the word or of the written, this includes audio
and visual forms of verbalization. But the very essence of LLMs remains the word. The entire
inner digital lifeworld of a large language model is one huge linguistic salmagundi that
‘translates’ the physical and sensorial reality into words. ChatGPT makes sense for us, because
the words it serially aligns relate meaningfully to each other. In the same way that our bodily
spatial positioning acquires its orientation “through how [it] inhabit[s] space” (Ahmed, 2006), so
too, the spatial positioning of words in a sentence, in a paragraph, in a context, and in a
conversation, confers them with their meaning. It is for this reason that ChatGPT carries such
significant symbolic potential for influencing our digital knowledge society, for ChatGPT “forces
language to reside in the world" (Michel Foucault in Ihde, 1993). It forces its users to relate to
the world through words. It forces words to be meaningful representations of the world, and it
forces all worldly bodies, all humans and nonhumans, to be normatively transcribed into words. It forcibly orientates all worldly bodies. With ChatGPT, reality 'becomes' through words.
It is in this sense that ChatGPT represents the veritable epitome of the Cartesian spirit that
thrives on normative labels and categories. And it is in this sense that ChatGPT reifies the
precept that writing precedes the world and that things come into existence through being
verbalized.
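The claim that the spatial positioning of words confers meaning has a direct computational analogue in transformer-based LLMs, which inject a word's position into its numerical representation. The sketch below is a hedged illustration using the sinusoidal scheme familiar from the transformer literature; whether ChatGPT uses this exact scheme is an assumption we do not make.

```python
import math

# A hedged computational analogue (purely illustrative): the sinusoidal
# positional encoding familiar from the transformer literature, in which
# a word's position in the sequence is added into its vector.

def positional_encoding(position, dim=8):
    """Return the encoding vector for one sequence position."""
    vector = []
    for i in range(0, dim, 2):
        angle = position / (10000 ** (i / dim))
        vector.extend([math.sin(angle), math.cos(angle)])
    return vector

# A stand-in embedding for one and the same word, say "world".
word_embedding = [0.3] * 8

# The "same" word at position 1 versus position 7 enters the model as
# two different vectors: spatial positioning co-determines meaning.
world_at_1 = [w + p for w, p in zip(word_embedding, positional_encoding(1))]
world_at_7 = [w + p for w, p in zip(word_embedding, positional_encoding(7))]
print(world_at_1 != world_at_7)  # True
```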
The theory of technological mediation has been “criticized for being too distant from political
and societal questions” (Mykhailov & Liberati, 2023). But we can now understand how it, in fact,
exhibits very close proximity to such questions. Yet, one should not, therefore, topple back into
the Heideggerian spirit of an overpowering and subjugating ‘Technology.’ ChatGPT may well
reify the idea that language precedes the world; however, ChatGPT does not unilaterally
determine a user’s fate. Engaging in such sensationalism would denigrate reality’s complexity
and would equate to deifying LLMs as though they held the power to unilaterally shape how we understand reality.12 Reality is always more nuanced than the way it is being described; and
so too it holds for the present paper. As previously stated, ChatGPT does not fully determine the sense of its output. The sense of its output is co-determined by both ChatGPT and the
human user (see also Mykhailov, 2020). Whatever hermeneutic relation plays out between
ChatGPT and the human user, this relation co-emerges through their interaction. Moreover,
how a user reacts to ChatGPT is beyond ChatGPT’s control (cf. Latour, 1988, on whether a
human follows a door-closer’s prescription). Hence, while ChatGPT has the capacity to perform
our culture by way of ossifying a particular kind of familiarity with the world (a situated and
normative familiarity)13, it is not that ChatGPT therefore has a God-given monopoly over our
perception of reality, and neither is it that ChatGPT would be able to cause a global
homogenization or petrification of language. This would greatly underrate the complexity of
both ChatGPT and language. For ChatGPT is not a singular entity but exists as multiple models.
And language is not articulated through our interaction with ChatGPT alone (as though the
human-ChatGPT relation were fully hermetic and secluded from other simultaneously ongoing
relations that outnumber the human-ChatGPT relation).
While ChatGPT's impact on society thus remains limited, in a society that grows increasingly reliant on it, one may not dismiss its potential for orientating our perception of reality
altogether. For even though the human still co-engages in making sense of ChatGPT’s output,
ChatGPT nonetheless does perform, at least to some extent, its own synthetic macroperceptual lifeworld onto the human. In addition, there is the upcoming complication of
the synthetic data effect. Synthetic data is data artificially generated by generative models; it is therefore not a direct extrapolation from the real world, as raw data is. The point of
synthetic data “is not simply to capture the general, the typical, the normal [Rather, it is] to
generate and expose the algorithm to synthetic blemishes, abnormal[ities], and edge cases [...]
in order to [speculatively correct for] bias, exclusion, or marginalisation” (Jacobsen, 2023). This
means that ChatGPT will generate data that reflect ways of experiencing reality that we are
familiar with, as well as ways of experiencing reality that make no sense to us (the so-called
hallucinations; as though ChatGPT were estranged, unfamiliar, alien to the human-encoded
mode of experiencing the world). We might thus say that ChatGPT’s performance of language is
limited by this “reality gap” (see Jacobsen, 2023; Steinhoff, 2022); i.e., the gap between the
hermetic virtual lifeworld of synthetic data and that of the data as generated through
interaction with the human world. However, those hallucinations which we humans may
consider a “derailment” still perform our language (Ahmed, 2006); be it by (ephemerally)
imprinting us with hallucinatory modes of conceiving the world.
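To fix intuitions about this 'reality gap', the following toy Python sketch (ours, with invented numbers; it mirrors no actual vendor pipeline) contrasts raw observations with synthetic samples drawn from a fitted model whose spread is deliberately inflated to expose edge cases:

```python
import random
import statistics

# A toy sketch of the synthetic-data idea (all numbers are invented):
# fit a simple model to "raw" observations, then generate synthetic
# samples from that model, deliberately widening its spread to expose
# edge cases and "blemishes" that the raw data never contained.
random.seed(0)

raw = [random.gauss(170.0, 6.0) for _ in range(1000)]  # stand-in raw data
mu = statistics.mean(raw)
sigma = statistics.stdev(raw)

# Synthetic records issue from the model of the world, not the world;
# inflating sigma speculatively injects abnormalities and edge cases.
synthetic = [random.gauss(mu, sigma * 2.5) for _ in range(1000)]

# The "reality gap": the synthetic lifeworld contains values the raw
# data never produced.
print(min(raw), max(raw))
print(min(synthetic), max(synthetic))
```

The tail values of the synthetic sample have no counterpart in the raw data; they are the toy analogue of outputs that feel estranged from the human-encoded mode of experiencing the world.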
7 CONCLUSION
New generations of LLMs are constantly being developed, and millions of humans make use of
them daily. A question that, therefore, quickly comes to mind is why ChatGPT makes so much
sense to us. Understanding ChatGPT only as a device that calculates probabilities upon serially
12 Which, in turn, would equate to elevating humankind to the status of a divinity, as though it were able to create complex
life (but discussing this God complex goes beyond the scope of the present paper).
13 For instance, the idea that short sentences should be preferred over long ones is deeply embedded in the English language.
Yet, this preference is not necessarily shared unanimously across languages.
aligning words does not allow us to provide a satisfying answer to this question. Therefore,
through an Ihdean postphenomenological examination of the human practices arising upon
interacting with ChatGPT, and through speculating about ChatGPT’s practices arising from its
interaction with the human, we were able to detail some of the micro- and macroperceptual
dimensions of the human-ChatGPT relation. This analysis allowed us to suggest how ChatGPT
itself necessarily acts as a hermeneutic agent. We then discussed two implications that we
deemed particularly significant.
Firstly, while ChatGPT currently has neither direct access to the world nor direct experience of
the world, we suggested how ChatGPT nonetheless has an ability to make sense of the world.
This suggestion, that ChatGPT itself acts as a kind of hermeneutic agent, requires a radicalization
of our classical anthropocentric conception of hermeneutics, according to which only the human
would engage in such activity. Secondly, what this further means is that by way of adding an
interpretive layer to the output it generates, ChatGPT necessarily co-determines, together with the human, the meaning of its output. In other words, to some extent, ChatGPT
performs us and enacts our meaning-making with its synthetically generated macroperceptual
lifeworld. That is how we suggested that ChatGPT carries the potential for orientating our
perception of reality and therefore for orientating how we relate to the world.
This potential for orientation is thus not only to be understood in the sense that, by way of forcing us to relate to the world through words, ChatGPT reifies the idea that normative labels and categories are apt at representing the world. It is also not only to be understood in the sense that, by way of spatially arranging words, ChatGPT ossifies particular ways of phrasing the world and of familiarizing with the world. But, perhaps more thought-provokingly so, this potential is also to be understood in the sense that ChatGPT performs the human, at least to some extent, with its own synthetically generated semantics, and therefore with its own
synthetically generated perception of reality. We stressed that extravagant or sensationalizing
understandings of our suggestion are inappropriate, since ChatGPT will never unilaterally
dictate our perception of reality (be it because the human-reality relations outnumber the
human-ChatGPT relations by far). Yet, with the ongoing developments in LLMs and their
growing number of applications, we will increasingly be in touch with synthetic modes of
experiencing the world. LLMs will thus surely not leave us unchanged, and one may wonder how
this will bring us to reevaluate the meaning we attach to that which is written.
As Sara Ahmed noted, "orientations depend on taking points of view as given" (2006), and so too, this paper is just one view among many. We thus hope that our development may stimulate
further debate on the abilities of LLMs and other generative technologies.
Data Access Statement
No new data was generated or analyzed. All the data generated for the analysis and discussion is
presented in the paper in the form of screenshots taken from the respective platforms. The use of the
data complies with the Terms of Use of the respective platforms at the time that the research was
conducted. The use of the images created with Microsoft Bing’s Image Creator from Designer complies
with its Terms of Use of 9 April 2024. The depiction of the brief exchanges with OpenAI’s ChatGPT model
GPT-3.5 complies with its Terms of Use of 9 April 2024. The depiction of the brief exchange with LA
Church's free version of BibleGPT is authorized for research purposes (as of 9 April 2024, the Terms of Use
only apply to paid subscribers and not to the freely available version).
Contributor Statement
Soraj Hongladarom initiated the idea for the paper and provided the postphenomenological framework.
Auriane van der Vaeren developed the paper, also drawing on science and technology studies, feminist
theses, and postanthropocentrism.
Use of AI
Only those parts in the paper that serve as examples of uses of large language models (LLMs) were
created using LLMs. These parts are clearly and explicitly indicated in the paper to have been generated
using LLMs.
Funding Statement
Research for this paper was partially supported by a grant from the National Research Council of Thailand.
Acknowledgments
We would like to thank the editors for creating this special issue, as well as the reviewers for their
thoughtful comments that greatly contributed to the further consolidation of our argument.
References
Ahmed, S. (2006). Introduction & Chapter 2: sexual orientation. In Queer phenomenology: orientations,
objects, others (pp. 1-24 & 65-108). Duke University Press.
Betriu Yáñez, V. (2023). ChatGPT through postphenomenology and deconstruction: on the possibility of a
Derridean philosophy of technology (Publication No. 97405) [Master's thesis,
University of Twente].
de Boer, B., te Molder, H., & Verbeek, P.-P. (2021). ‘Braining’ psychiatry: an investigation into how
complexity is managed in the practice of neuropsychiatric research. BioSocieties, 17, 758-781.
https://doi.org/10.1057/s41292-021-00242-8
Borgmann, A. (1984). Technology and the character of contemporary life: a philosophical inquiry. The
University of Chicago Press.
Chokkattu, J. (2024, January 11). Rabbit’s little walkie-talkie learns tasks that stump Siri and Alexa. Wired.
https://wired.me/gear/rabbits-little-walkie-talkie-learns-tasks-that-stump-siri-and-alexa/
Coeckelbergh, M., & Gunkel, D. J. (2023). ChatGPT: deconstructing the debate and moving it forward. AI &
Society. https://doi.org/10.1007/s00146-023-01710-4
Dilmegani, C. (2024, January 1). 50 ChatGPT use cases with real-life examples in 2024. AIMultiple.
https://research.aimultiple.com/chatgpt-use-cases/
Farazouli, A., Cerratto-Pargman, T., Bolander-Laksov, K., & McGrath, C. (2023). Hello GPT! Goodbye home
examination? An exploratory study of AI chatbots impact on university teachers’ assessment
practices. Assessment & Evaluation in Higher Education, 1-13.
Fitzpatrick, J. (2023, July 11). 8 surprising things you can do with ChatGPT. How-To Geek.
https://www.howtogeek.com/871367/surprising-uses-for-chatgpt/
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Introduction. In Deep learning (pp. 1-26). MIT Press.
https://www.deeplearningbook.org/
Gordon, R. (2023, March 3). Large language models are biased. Can logic help save them? MIT News.
https://news.mit.edu/2023/large-language-models-are-biased-can-logic-help-save-them-0303
Hasse, C. (2023). Material hermeneutics as cultural learning: from relations to processes of relations. AI & Society, 38, 2037-2044. https://doi.org/10.1007/s00146-021-01171-7
Heidegger, M. (1977). The question concerning technology and other essays (W. Lovitt, Transl.). Garland.
Hongladarom, S. (2020). Machine hermeneutics, postphenomenology, and facial recognition technology.
AI & Society, 1-8.
Hongladarom, S. (2023, July 20). I am having a philosophical conversation with ChatGPT. Facebook.
https://www.facebook.com/soraj/posts/pfbid0o7rJfSZMV5TMVVixKEXYckzT2v4LWRb93Fz984Hn
wka8kRg3Zjat61Tq4uixSfa6l
Ihde, D. (1979). Technics and praxis. Reidel.
Ihde, D. (1991). Instrumental realism: the interface between Philosophy of Science and Philosophy of
Technology. Indiana University Press.
Ihde, D. (1993). Postphenomenology: essays in the postmodern context. Northwestern University Press.
Ihde, D. (2002). Chapter 7: prognostic predicaments. In Bodies in technology (pp. 103-112). University of
Minnesota Press.
Ihde, D. (2022). Material hermeneutics: reversing the linguistic turn. Routledge.
Jacobsen, B. N. (2023). Machine learning and the politics of synthetic data. Big Data & Society, 10(1).
https://doi.org/10.1177/20539517221145372
Kudina, O., & Verbeek, P.-P. (2018). Ethics from within. Science, Technology, & Human Values, 44(2), 291-
314. https://doi.org/10.1177/0162243918793711
Laaksoharju, M., Lennerfors, T. T., Persson, A., & Oestreicher, L. (2023). What is the problem to which AI
chatbots are the solution? AI ethics through Don Ihde’s embodiment, hermeneutic, alterity, and
background relationships. In Thomas Taro Lennerfors & Kiyoshi Murata (Eds.), Ethics and
sustainability in digital cultures (pp. 31-48). Taylor and Francis.
https://doi.org/10.4324/9781003367451-4
Latour, B. (Johnson, J.) (1988). Mixing humans and nonhumans together: the sociology of a door-closer.
Social Problems, 35(3), 298-310.
Levi Martin, J. (2023). The ethico-political universe of ChatGPT. Journal of Social Computing, 4(1).
https://doi.org/10.23919/JSC.2023.0003
Lobo, T. (2023). Selfie and world: on Instagrammable places and technologies for capturing them. Journal
of Human-Technology Relations, 1(1), 1-11. https://doi.org/10.59490/jhtr.2023.1.7011
Motoki, F., Pinho Neto, V., & Rodrigues, V. (2023). More human than human: measuring ChatGPT political
bias. Public Choice, 198, 3-23. https://doi.org/10.1007/s11127-023-01097-2
Mykhailov, D. (2020). The phenomenological roots of technological intentionality: a
postphenomenological perspective. Frontiers of Philosophy in China, 15(4), 612-635.
https://doi.org/10.3868/s030-009-020-0035-6
Mykhailov, D., & Liberati, N. (2023). Back to the technologies themselves: phenomenological turn within
postphenomenology. Phenomenology and the Cognitive Sciences, 1-20.
https://doi.org/10.1007/s11097-023-09905-2
Nooreyezdan, N. (2023, May 9). India’s religious AI chatbots are speaking in the voice of god and
condoning violence. Rest of World. https://restofworld.org/2023/chatgpt-religious-chatbots-
india-gitagpt-krishna/
Nunes Vieira, L. (2023). The many guises of machine translation: a postphenomenology perspective.
Digital Translation, 10(1), 16-36. http://dx.doi.org/10.1075/dt.00002.nun
OpenAI (2022, November 30). Introducing ChatGPT. OpenAI.
https://openai.com/blog/chatgpt?ref=metacheles.ghost.io
Sasani, A. (2024, March 16). As AI tools get smarter, they’re growing more covertly racist, experts find. The
Guardian. https://www.theguardian.com/technology/2024/mar/16/ai-racism-chatgpt-gemini-
bias
Steinhoff, J. (2022). Toward a political economy of synthetic data: a data-intensive capitalism that is not a
surveillance capitalism? New Media & Society, 0(0).
https://doi.org/10.1177/14614448221099217
Thomson, T. J., & Thomas, R. J. (2023, July 10). Ageism, sexism, classism and more: 7 examples of bias in
AI-generated images. The Conversation. https://theconversation.com/ageism-sexism-classism-
and-more-7-examples-of-bias-in-ai-generated-images-208748
Verbeek, P.-P. (2001). Don Ihde: the technological lifeworld. In American philosophy of technology: the
empirical turn (pp. 119-146). Indiana University Press.
Verbeek, P.-P. (2005). What things do: philosophical reflections on technology, agency, and design.
Pennsylvania State University Press.
Wadden, J. J. (2023). The postphenomenological impact of conversational artificial intelligence on
autonomy and psychological integrity. The American Journal of Bioethics, 23(5), 37-40.
Wiltsche, H.A. (2017). Mechanics lost: Husserl’s Galileo and Ihde’s telescope. Husserl Studies, 33, 149-173.
https://doi.org/10.1007/s10743-016-9204-x