Paul Formosa’s research while affiliated with Macquarie University and other places


Publications (69)


Artificial Intelligence (AI) and the Relationship between Agency, Autonomy, and Moral Patiency
  • Preprint
  • File available

April 2025 · 68 Reads

Paul Formosa · Inês Hipólito · Thomas Montefiore

The proliferation of Artificial Intelligence (AI) systems exhibiting complex and seemingly agentive behaviours necessitates a critical philosophical examination of their agency, autonomy, and moral status. In this paper we undertake a systematic analysis of the differences between basic, autonomous, and moral agency in artificial systems. We argue that while current AI systems are highly sophisticated, they lack genuine agency and autonomy because: they operate within rigid boundaries of pre-programmed objectives rather than exhibiting true goal-directed behaviour within their environment; they cannot authentically shape their engagement with the world; and they lack the critical self-reflection and autonomy competencies required for full autonomy. Nonetheless, we do not rule out the possibility of future systems that could achieve a limited form of artificial moral agency without consciousness through hybrid approaches to ethical decision-making. This leads us to suggest, by appealing to the necessity of consciousness for moral patiency, that such non-conscious artificial moral agents (AMAs) might represent a case that challenges traditional assumptions about the necessary connection between moral agency and moral patiency.


The AI-mediated communication dilemma: epistemic trust, social media, and the challenge of generative artificial intelligence

March 2025 · 115 Reads · 2 Citations

Synthese

The rapid adoption of commercial Generative Artificial Intelligence (Gen AI) products raises important questions around the impact this technology will have on our communicative interactions. This paper provides an analysis of some of the potential implications that Artificial Intelligence-Mediated Communication (AI-MC) may have on epistemic trust in online communications, specifically on social media. We argue that AI-MC risks diminishing epistemic trust in online communications on both normative and descriptive grounds. Descriptively, AI-MC seems to (roughly) lower levels of epistemic trust. Normatively, we argue that this brings about the following dilemma. On the one hand, there are at least some instances where we should epistemically trust AI-MC less, and therefore the reduction in epistemic trust is justified in these instances. On the other hand, there are also instances where we epistemically trust AI-MC less, but this reduction in epistemic trust is not justified, resulting in discrimination and epistemic injustice in these instances. The difficulty in knowing which of these two groups any instance of AI-MC belongs to brings about the AI-MC dilemma: We must choose between maintaining normal levels of epistemic trust and risking epistemic gullibility when reduced trust is justified, or adopting generally reduced epistemic trust and risking epistemic injustice when such reduced trust is unjustified. Navigating this choice between problematic alternatives creates a significant challenge for social media as an epistemic environment.


Dark Patterns Meet the Gamer's Dilemma: Contrasting Morally Objectionable Content with Systems in Video Games

February 2025 · 25 Reads · 1 Citation

Games and Culture

Much of the philosophical discussion of video game ethics is dominated by the literature on the Gamer's Dilemma, which forces us to focus on the ethics of certain forms of extreme virtual content in video games, such as virtual murder or molestation. While a focus on the ethics of video game content is important, we argue that scrutinizing the ethics of video game systems is needed to properly capture the full range of ethical concerns raised by video games. Drawing on a distinction between intravirtual and extravirtual effects, we identify ethical issues with video game content and, by linking to the dark patterns literature, video game systems. To illustrate our view, we give examples of how a game can appear to have morally objectionable content without the game being, at least clearly, morally objectionable, and how a game can appear to be morally unobjectionable despite having morally objectionable systems.




Artificial Intelligence (AI) and Global Justice

November 2024 · 50 Reads · 1 Citation

Minds and Machines

This paper provides a philosophically informed and robust account of the global justice implications of Artificial Intelligence (AI). We first discuss some of the key theories of global justice, before justifying our focus on the Capabilities Approach as a useful framework for understanding the context-specific impacts of AI on low- to middle-income countries. We then highlight some of the harms and burdens facing low- to middle-income countries within the context of both AI use and the AI supply chain, by analyzing the extraction of materials, which includes mineral extraction and the environmental harms associated with it, and the extraction of labor, which includes unethical labor practices, low wages, and the trauma experienced by some AI workers. We then outline some of the potential harms and benefits that AI poses, how these are distributed, and what global justice implications this has for low- to middle-income countries. Finally, we articulate the global justice significance of AI by utilizing the Capabilities Approach. We argue that AI must be considered from a global justice perspective given that, globally, AI puts significant downward pressure on several elements of well-being, thereby making it harder for people to achieve threshold levels of the central human capabilities needed for a life of dignity.


[Figure: Ratings for authorship, creatorship, disclosure, and responsibility across the query, assistance, and assistant conditions; solid geometrical shapes mark the mean for each condition, and the surrounding plots show the distributions of the raw data.]
Can ChatGPT be an author? Generative AI creative writing assistance and perceptions of authorship, creatorship, responsibility, and disclosure

September 2024 · 115 Reads · 11 Citations

AI & SOCIETY

The increasing use of Generative AI raises many ethical, philosophical, and legal issues. A key issue here is uncertainty about how different degrees of Generative AI assistance in the production of text impact assessments of the human authorship of that text. To explore this issue, we developed an experimental mixed methods survey study (N = 602) asking participants to reflect on a scenario of a human author receiving assistance to write a short novel, as part of a 3 (high, medium, or low degree of assistance) × 2 (human or AI assistant) factorial design. We found that, for a human author, the degree of assistance they receive matters for our assessments of their level of authorship, creatorship, and responsibility, but not who or what rendered that assistance, although it was more important to disclose human rather than AI assistance. However, in our assessments of the assisting agent, human assistants were viewed as warranting higher rates of authorship, creatorship, and responsibility compared to AI assistants rendering the same level of support. These results help us to better understand emerging norms around collaborative human-AI generated text, with implications for other types of collaborative content creation.


Generative AI and the Future of Democratic Citizenship

June 2024 · 38 Reads · 7 Citations

Digital Government Research and Practice

Generative AI technologies have the potential to be socially and politically transformative. In this paper, we focus on exploring the potential impacts that Generative AI could have on the functioning of our democracies and the nature of citizenship. We do so by drawing on accounts of deliberative democracy and the deliberative virtues associated with it, as well as the reciprocal impacts that social media and Generative AI will have on each other and the broader information landscape. Drawing on this background theory, we outline some of the key positive and negative impacts that Generative AI is likely to have on democratic citizenship. The political significance of these impacts suggests the need for further regulation.




Citations (50)


... An example of a well-integrated moral dilemma in a video game, frequently referenced in the field (Ryan et al., 2016; Schuzlke, 2009), is the "Oasis" side quest in the open-world role-playing game Fallout 3 (Bethesda, 2008). In a post-apocalyptic America, where nuclear bombs have reduced the land to a desert of ruins populated mostly by mutant life forms, lies a paradisiacal forest kept alive by Harold, a tree-man. ...

Reference:

Au-delà de l’engagement moral : Vers un élargissement de l’éthique vidéoludique
Four Lenses for Designing Morally Engaging Games
  • Citing Conference Paper
  • January 2016

... False and misleading news distorts elections, undermines public-health campaigns, and ranks among the gravest global risks [28]. As large language models (LLMs) increasingly shape the news ecosystem, with major outlets such as Forbes and the Financial Times deploying AI agents for news recommendations [13,12], their potential to spread misinformation raises significant concerns [4,34,53]. These concerns intensify with vision-language models (VLMs), which combine text generation with image interpretation. ...

The AI-mediated communication dilemma: epistemic trust, social media, and the challenge of generative artificial intelligence

Synthese

... In addition to considering the risk and expense of controls while designing security solutions, cybersecurity researchers must also consider the ethical consequences of architectural decisions, as well as of decisions to accept risks. Cybersecurity researchers have responsibilities to protect the organizations to which they are aligned, including their employees, investors, customers, and stakeholders, but they also possess a social accountability to protect society itself (Richards et al., 2020). ...

Design of a Serious Game for Cybersecurity Ethics Training
  • Citing Conference Paper
  • January 2022

Malcolm Ryan · [...]

... A survey conducted by Nature among 3838 postdocs indicated a similar level of engagement with GenAI, particularly chatbots, with 31% of respondents reporting using chatbots [13]. One application of GenAI in particular, Large Language Model (LLM)-based applications such as ChatGPT, has seen very high uptake, as these can assist with writing, which is a component of different parts of the research process [14,15]. Writing was already a task often undertaken with the help of tools such as Grammarly, Zotero, and Evernote, among others, that helped improve grammar and sentence structure and assisted with citations [16]. ...

Can ChatGPT be an author? Generative AI creative writing assistance and perceptions of authorship, creatorship, responsibility, and disclosure
  • Citing Article
  • January 2025

SSRN Electronic Journal

... While at first glance this approach does not seem strikingly creative and does not meet the full definition of Runco (2023), generative AIs have already entered several artistic/creative domains, such as painting, music creation, poetry, story writing, and movie scripting (see Vinchon et al. 2024 as well as Formosa et al. 2024 for comprehensive overviews). In addition, some recent studies suggest that LLMs such as ChatGPT already perform similarly to, or even exceed, human norms. ...

Can ChatGPT be an author? Generative AI creative writing assistance and perceptions of authorship, creatorship, responsibility, and disclosure

AI & SOCIETY

... The articles from the 2016-2021 period reflect a desire to make knowledge about AI everyone's concern, a form of diffusion of the discipline into general education programs. More recently, a citizen-oriented (Formosa et al., 2024) and transdisciplinary (Cao, 2023) perspective has also emerged. Like other forms of literacy before it, AI literacy has been proposed in response to the percolation of technologies (here, AI technologies) into various spheres of activity. ...

Generative AI and the Future of Democratic Citizenship
  • Citing Article
  • June 2024

Digital Government Research and Practice

... The Gamer's Dilemma has since been widened to apply to fictional wrongdoings more generally (Luck 2022; Montefiore and Formosa 2023a; Montefiore et al. 2024). A fictional wrongdoing is an act committed against a fictional moral patient, such that were it an actual moral patient it would constitute an actual wrongdoing. ...

Extending the Gamer’s Dilemma: empirically investigating the paradox of fictionally going too far across media
  • Citing Article
  • May 2024

Philosophical Psychology

... A review of recent cybersecurity literature reveals a growing concern regarding the ethical dimensions of user behaviour in digital environments. While early research focused predominantly on technology adoption, system vulnerabilities, and policy compliance, contemporary studies (e.g., Fenech et al., 2024) emphasize the critical role of individual ethical reasoning in shaping cybersecurity outcomes. ...

Ethical principles shaping values-based cybersecurity decision-making
  • Citing Article
  • March 2024

Computers & Security

... Technological innovations and educational initiatives play a critical role in the region's cyber security. Australia's National Digital Identification System and serious games introduced by Ali Bajwa et al. [2] and Jayakrishnan et al. [9] represent the use of technology to enhance security and ethical decision-making in cyber spaces. Moreover, Myanmar's multifaceted cyber security awareness campaign showcases the efficacy of using varied platforms to engage the public [4]. ...

Evaluation of embodied conversational agents designed with ethical principles and personality for cybersecurity ethics training
  • Citing Conference Paper
  • December 2023

... In academia, the ethical dimension of cybersecurity governance also involves balancing institutional security imperatives with the principles of academic freedom and openness, which are fundamental values in higher education. Ethical leadership in cybersecurity requires nuanced decision-making to protect institutional data and systems without compromising the academic values of intellectual exploration, collaboration, and freedom of inquiry (Sadeghi et al., 2023; Tokat, 2023). Leaders must navigate the tension between security measures, such as monitoring digital communications, and respecting privacy and academic freedom, ensuring policies are both effective and ethically defensible. ...

Modelling the ethical priorities influencing decision-making in cybersecurity contexts
  • Citing Article
  • May 2023

Organizational Cybersecurity Journal: Practice, Process and People