
Call for papers "Postphenomenology in the age of AI: Prospects, Challenges, Opportunities"; Special issue for the Journal of Human-Technology Relations; Guest editor: Dmytro Mykhailov

Abstract

This is a call for papers for the special issue "Postphenomenology in the Age of AI: Prospects, Challenges, Opportunities" of the Journal of Human-Technology Relations. Guest Editor: Dr. Dmytro Mykhailov. For more details, please check the attached file or visit the special issue webpage: https://journals.open.tudelft.nl/jhtr/announcement/view/401
Call for Papers for the Journal of Human-Technology Relations special issue on
Postphenomenology in the age of AI: Prospects, Challenges, Opportunities
Guest Editor
Dr. Dmytro Mykhailov, postdoctoral fellow, School of Humanities, Southeast University,
Nanjing (China)
Description
After the launch of ChatGPT and GPT-4, AI entered a new era characterized by greater self-sufficiency of artificial systems, increasing levels of automation, and a sweeping transformation of almost every social domain we know. Large Language Models (LLMs) are revolutionizing the way we write text (Liberati, 2023). Innovative generative AI video technology is capable of creating photorealistic content not only of living celebrities but also of famous figures who have long since passed away. Complex machine learning algorithms enable new scientific practices of data interpretation and so create a new situation for the scientific explanation of nature and human beings (Kudina & de Boer, 2021).

While postphenomenology has analyzed AI from various perspectives, such as technological intentionality in artificial neural networks (Mykhailov & Liberati, 2022), algorithmic bias and the non-neutrality of AI models (Wellner & Rothman, 2020), the black-box problem (Friedrich et al., 2022), and a recent postphenomenology of ChatGPT (Laaksoharju et al., 2023), progress in the field of AI over the last several months has brought forth radically new challenges that must be addressed from a strong philosophical standpoint.

With this in mind, the present special issue aims to grasp the dynamic landscape of contemporary AI technology by applying a postphenomenological methodology. The papers invited for this special issue should explore a wide range of topics, including but not limited to the epistemological, moral, and societal changes that novel AI applications bring into play, examined from a postphenomenological perspective. The scope is not limited to any specific AI technology but may encompass cases from different domains such as medicine, education, and scientific research.
We invite the submission of papers focusing on, but not restricted to, the following questions:
- How can the postphenomenological concepts of mediation and technological intentionality enhance our understanding of emerging AI applications?
- What novel aspects does AI technology introduce into the postphenomenological notion of multistability, and what are the different technological 'stabilities' that today's AI can have (especially in different cultural contexts)?
- In what ways can postphenomenology offer guidance for the ethical design and development of AI systems?
- How can the postphenomenological approach address and resolve issues related to AI bias, discrimination, transparency, and fairness?
- To what extent does AI contribute to the broader discourse on technoscience, and what significant technoscientific implications does AI hold for contemporary science?
- How can postphenomenology reflect upon the challenges that new AI applications create for contemporary art and aesthetics in general?
References
Friedrich, A. B., Mason, J., & Malone, J. R. (2022). Rethinking explainability: Toward a postphenomenology of black-box artificial intelligence in medicine. Ethics and Information Technology, 24(1), 19. https://doi.org/10.1007/s10676-022-09631-4
Kudina, O., & de Boer, B. (2021). Co-designing diagnosis: Towards a responsible integration of Machine Learning decision-support systems in medical diagnostics. Journal of Evaluation in Clinical Practice, 27(3), 529–536. https://doi.org/10.1111/jep.13535
Laaksoharju, M., Lennerfors, T. T., Persson, A., & Oestreicher, L. (2023). What is the problem to which AI chatbots are the solution? AI ethics through Don Ihde's embodiment, hermeneutic, alterity, and background relationships. In T. T. Lennerfors & K. Murata (Eds.), Ethics and Sustainability in Digital Cultures (pp. 31–48). Taylor and Francis. https://doi.org/10.4324/9781003367451-4
Liberati, N. (2023). "ChatGPT 自技的解" [in Chinese]. Shanghai Culture (《上海文化》, Cultural Studies Edition), 2023(6), 31–38.
Mykhailov, D., & Liberati, N. (2022). A study of technological intentionality in C++ and Generative Adversarial Model: Phenomenological and postphenomenological perspectives. Foundations of Science, 1–17. https://doi.org/10.1007/s10699-022-09833-5
Wellner, G., & Rothman, T. (2020). Feminist AI: Can we expect our AI systems to become feminist? Philosophy and Technology, 33(2), 191–205. https://doi.org/10.1007/s13347-019-00352-z
Timetable
Deadline for paper submissions: February 1, 2024
Deadline for paper reviews: April 1, 2024
Deadline for submission of revised papers: June 1, 2024
Deadline for reviewing revised papers: July 1, 2024
Accepted papers will be published in 2024
Submission guidelines: When submitting, please indicate on the first page of the cover letter that your paper is intended for the special issue "Postphenomenology in the Age of AI: Prospects, Challenges, Opportunities".
For any further information, please contact: d.mykhailov117@icloud.com