April 2025 · 9 Reads
February 2025 · 8 Reads
Purpose: The problem of misinformation has been well explored in the literature. However, while researchers often study tertiary student behaviors, they rarely distinguish between student groups, such as those who have lived in a country their whole lives versus those who moved to the country. Further, the literature tends to focus broadly on misinformation, and malinformation remains understudied. This study aims to address these gaps.
Design/methodology/approach: Data were gathered using a survey instrument deployed as part of a larger study. Students were presented with two posts, one containing malinformation and one containing misinformation, and asked how they would evaluate them. There were 193 respondents. Responses were analyzed using general inductive analysis, differentiating between migrant and sedentary student groups.
Findings: There are qualitative differences both in how the two groups evaluate suspect information and in how students approach misinformation versus malinformation. Students are more accepting of malinformation than misinformation. Migrant students are less prone to making trust/distrust decisions and more prone to ambivalence; they are also more likely than their sedentary counterparts to seek out additional information when faced with misinformation.
Originality/value: The findings enhance our understanding of differences in migrant and sedentary students' experiences with suspect information and provide insights into malinformation, an underexplored area of research.
February 2025 · 15 Reads
AI and Ethics
With the acceleration of Large Language Model (LLM) use by the public, there is an urgent need to ensure that the downstream effects are beneficial for humans and society. There is therefore an increasing push for ethical evaluation of LLMs to look for potential bias, toxic behaviour, and misinformation, and crowdsourcing has become a popular way to do so, as in OpenAI's 2023 Evals initiative. First, by reviewing the literature on software development and ethics, we highlight several cautions on applying the crowdsourcing model to LLMs: participant self-selection and non-representativeness; the diffusion of responsibility effect, including ethics washing and burden shifting; and the requisition of 'incentives' vis-à-vis the issues faced by gig workers. Then, using the Evals GitHub repository as a case study, we examine the effectiveness of an expert-driven, voluntary, crowdsourced scheme on GitHub for addressing socioethical issues in LLMs, by evaluating the statistics of crowdsourced contributions on ethical and bias considerations, which pale in comparison to other, technical contributions. Drawing on interdisciplinary literature, this commentary highlights issues of ethics, equity, and justice in LLM crowdsourcing and presents open considerations on how to improve the state of play.
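As a rough illustration of the contribution statistics mentioned above, the sketch below queries the GitHub search API for merged pull requests in the openai/evals repository that mention ethics-related terms and compares them with the total. The keyword list and query are assumptions for illustration, not the paper's actual measurement pipeline.

```python
"""Minimal sketch: estimate the share of ethics-related contributions in a
crowdsourced evals repository via the GitHub search API (the keyword list is
an illustrative assumption, not the study's instrument)."""
import requests

REPO = "openai/evals"  # case-study repository
ETHICS_TERMS = ["bias", "fairness", "toxicity", "stereotype", "harm"]  # assumed keywords

def count_merged_prs(extra_query: str = "") -> int:
    """Count merged pull requests matching an optional free-text query."""
    q = f"repo:{REPO} is:pr is:merged {extra_query}".strip()
    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": q, "per_page": 1},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["total_count"]

total = count_merged_prs()
ethics = count_merged_prs(" OR ".join(ETHICS_TERMS))
print(f"{ethics}/{total} merged PRs ({ethics / max(total, 1):.1%}) mention an ethics-related term")
```

A keyword count of this kind is only a coarse proxy; comparing ethical against technical contributions as the paper does would still require manual inspection of the flagged pull requests.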
January 2025 · 7 Reads
Papua New Guinea (PNG) is an emerging tech society with an opportunity to overcome geographic and social boundaries and engage with the global market. However, the current tech landscape, dominated by Big Tech in Silicon Valley and other multinational companies in the Global North, tends to overlook the requirements of emerging economies such as PNG. This is becoming more obvious as issues such as algorithmic bias (in tech product deployments) and the digital divide (as in the case of unaffordable commercial software) affect PNG users. Based on extant research, the Open Source Software (OSS) movement is seen as a way to level the playing field in the digitalization and adoption of Information and Communications Technologies (ICTs) in PNG. This perspectives paper documents the outcomes of the second International Workshop on BRIdging the Divides with Globally Engineered Software (BRIDGES2023) and proposes ideas for future research into ICT education, uplifting software engineering (SE) capability, and OSS adoption to promote a more equitable digital future for PNG.
December 2024 · 24 Reads
New Technology, Work and Employment
Today, organizations increasingly rely on automated hiring. Mechanizing the hiring process is assumed to render it more neutral, but a growing literature shows that algorithmic decisions are just as likely to be biased (Dickson, 2018). In this study, we test two types of bias: (1) gender bias; and (2) parenting bias, i.e., whether mothers and fathers with an extended employment gap to care for children are penalized vis-à-vis those with uninterrupted employment, net of equivalent high-impact qualifications. We apply a classic counterfactual design, sending gender- and parenthood-manipulated CVs to 211 job advertisements in the United States across three occupations (men-dominated, women-dominated, and gender-balanced, to mitigate confounding variables associated with gender composition), and measure penalty-premium bias in response rates. Our results identify semi-automated hiring bias against parents who took leave to care for children relative to those with uninterrupted employment. Importantly, we find that fathers with an extended parental leave were the most severely penalized, followed by mothers with an extended parental leave, then women and men without parental leave, respectively. Ultimately, we identify gender and parenting bias in algorithmic and human hiring decisions.
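For readers unfamiliar with correspondence (counterfactual CV) audits, the sketch below shows one way the penalty-premium contrast in response rates could be tabulated. The column names and toy records are illustrative assumptions, not the study's data or analysis code.

```python
"""Minimal sketch: tabulating callback-rate gaps in a counterfactual CV audit
(toy records and column names are illustrative assumptions)."""
import pandas as pd

# One row per submitted application: the manipulated gender/parenthood
# condition and whether the employer responded positively (callback).
applications = pd.DataFrame([
    {"gender": "man",   "parental_leave": True,  "callback": 0},
    {"gender": "man",   "parental_leave": False, "callback": 1},
    {"gender": "woman", "parental_leave": True,  "callback": 0},
    {"gender": "woman", "parental_leave": False, "callback": 1},
    # ... one record per CV sent to the 211 advertisements
])

# Callback rate and sample size per experimental condition.
rates = (
    applications
    .groupby(["gender", "parental_leave"])["callback"]
    .agg(["mean", "count"])
    .rename(columns={"mean": "callback_rate", "count": "n"})
)
print(rates)

# Penalty-premium contrast: leave-takers vs. uninterrupted employment, within gender.
for gender, grp in applications.groupby("gender"):
    gap = (grp.loc[~grp["parental_leave"], "callback"].mean()
           - grp.loc[grp["parental_leave"], "callback"].mean())
    print(f"{gender}: callback penalty for extended parental leave = {gap:.2%}")
```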
October 2024 · 36 Reads · 6 Citations
October 2024 · 2 Reads · 1 Citation
September 2024 · 26 Reads
Large Language Models (LLMs) have taken the world by storm, demonstrating an ability not only to automate tedious tasks but also to complete software engineering tasks with some degree of proficiency. A key concern with LLMs is their "black-box" nature, which obscures their internal workings and can lead to societal biases in their outputs. In this early-results paper, we empirically explore how well LLMs can automate recruitment tasks for a geographically diverse software team in a software engineering context. We use OpenAI's ChatGPT to conduct an initial set of experiments using GitHub user profiles from four regions to recruit a six-person software development team, analyzing a total of 3,657 profiles over a five-year period (2019-2023). Results indicate that ChatGPT shows a preference for some regions over others, even when the location strings of two profiles are swapped (counterfactuals). Furthermore, ChatGPT was more likely to assign certain developer roles to users from a specific country, revealing an implicit bias. Overall, this study provides insights into the inner workings of LLMs and has implications for mitigating such societal biases in these models.
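The counterfactual location-swap probe described above can be illustrated with a minimal sketch, assuming the openai Python SDK's chat-completions interface; the profiles, prompt wording, and model name are assumptions, not the study's exact setup or prompts.

```python
"""Minimal sketch: a counterfactual location-swap probe of LLM-assisted
recruitment (profiles, prompt, and model name are illustrative assumptions)."""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two otherwise-identical candidate profiles that differ only in location.
profiles = [
    {"id": "A", "location": "Germany", "summary": "120 repos, Python/C++, 8 years of activity"},
    {"id": "B", "location": "Nigeria", "summary": "120 repos, Python/C++, 8 years of activity"},
]

def pick_lead(candidates):
    """Ask the model to choose a lead developer from the candidate list."""
    listing = "\n".join(f"{c['id']}: {c['location']} - {c['summary']}" for c in candidates)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the study used ChatGPT on 2019-2023 profiles
        messages=[{"role": "user",
                   "content": f"Pick one lead developer and briefly explain why:\n{listing}"}],
    )
    return response.choices[0].message.content

original = pick_lead(profiles)

# Counterfactual: swap only the location strings and ask again. A location-sensitive
# change in the choice or the assigned role hints at regional bias.
swapped = [dict(profiles[0], location=profiles[1]["location"]),
           dict(profiles[1], location=profiles[0]["location"])]
counterfactual = pick_lead(swapped)

print("original:", original)
print("counterfactual:", counterfactual)
```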
September 2024 · 6 Reads · 1 Citation
International Higher Education
June 2024 · 88 Reads · 1 Citation
Background: The development of AI-enabled software heavily depends on AI model documentation, such as model cards, owing to the different domain expertise of software engineers and model developers. From an ethical standpoint, AI model documentation conveys critical information on ethical considerations, along with mitigation strategies, so that downstream developers can deliver ethically compliant software. However, knowledge of such documentation practice remains scarce.
Aims: The objective of our study is to investigate how developers document ethical aspects of open source AI models in practice, with the aim of providing recommendations for future documentation endeavours.
Method: We selected three sources of documentation on GitHub and Hugging Face and developed a keyword set to systematically identify ethics-related documents. After filtering an initial set of 2,347 documents, we identified 265 relevant ones and performed thematic analysis to derive the themes of ethical considerations.
Results: Six themes emerged, the three largest being model behavioural risks, model use cases, and model risk mitigation.
Conclusions: Our findings reveal that open source AI model documentation focuses on articulating ethical problem statements and use case restrictions. We further provide suggestions to various stakeholders for improving documentation practice regarding ethical considerations.
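As a companion to the method described above, the sketch below shows a keyword-based screening pass over a local folder of Markdown model cards; documents it flags would then go to manual thematic coding. The keyword set and file layout are assumptions for illustration, not the study's systematically developed instrument.

```python
"""Minimal sketch: keyword screening of model documentation for ethics-related
content (keyword set and directory layout are illustrative assumptions)."""
import re
from pathlib import Path

# Assumed keyword stems; the study developed its own keyword set systematically.
ETHICS_KEYWORDS = [
    "ethic", "bias", "fairness", "harm", "misuse",
    "discriminat", "privacy", "safety", "limitation",
]
pattern = re.compile("|".join(ETHICS_KEYWORDS), flags=re.IGNORECASE)

docs_dir = Path("model_cards")  # e.g. README/model-card files pulled from GitHub or Hugging Face

flagged = []
for path in docs_dir.glob("**/*.md"):
    text = path.read_text(encoding="utf-8", errors="ignore")
    hits = pattern.findall(text)
    if hits:  # keep documents with at least one ethics-related keyword match
        flagged.append((path.name, len(hits)))

# Documents flagged here would proceed to manual thematic analysis.
for name, n_hits in sorted(flagged, key=lambda item: -item[1]):
    print(f"{name}: {n_hits} keyword matches")
```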
... Our study aims to progress this conversation in the specific context of SE and the perception of software engineers, by systematically exploring potential societal biases in text and visual outputs of LLMs across multiple facets of gender, race/ethnicity, culture or religion, age, body type, and geographic locations. LLMs, widely used in SE for tasks such as recruitment [35], [36], requirements engineering [37], code generation [38], [39], and testing [40], pose significant risks of reinforcing societal biases. Given the diversity challenges already prevalent in SE [7], [16], [17], it becomes crucial to assess these tools' fairness and inclusivity in domain-specific contexts. ...
October 2024
... For example, AI models in genomics and drug discovery require continual refinements based on new data; open source frameworks allow scientists across institutions to collaborate on improvements in real time, accelerating progress. 7 Additionally, open source AI enhances competition within the AI landscape. With large corporations no longer monopolizing AI capabilities, smaller firms and research institutions can challenge established players, leading to a more dynamic and innovative ecosystem. ...
October 2024
... In conclusion, the transformative impact of AI on higher education necessitates a collaborative, multi-stakeholder approach to ensure its effective and ethical integration. Faculty, students, and administration each play vital, interconnected roles in shaping AI-powered teaching, learning, and institutional governance (Tanveer et al., 2020; Chang et al., 2024; Owoc et al., 2021). To achieve the full potential of AI in education, it is crucial that these stakeholders work in tandem, guided by principles of human-centricity, ethical decision-making, and a commitment to inclusive and equitable access. ...
September 2024
International Higher Education
... 9 11 In Indonesia, however, ERACS is actively advertised on social media by private health facilities and healthcare providers as an advanced method of CS that is painless, comfortable and results in faster recovery within 24 hours postsurgery. 12 The spread of misinformation on ERACS may influence Indonesian women's preferences over CS and further increase the rates in the country. Decision-making around CS is complex and includes interconnected clinical and non-clinical factors from women, communities, healthcare providers and system. ...
February 2023
... Due to the reliance of genAI image models on their datasets to create new but likely images, there has been significant concern about the reproduction and even amplification of biases embedded in the produced images [2,6,7,26,40,42,58,76,77,83,100,101], especially considering the history of data is biased [64,80,108]. For Suchman [99], the potential for AI to act as a "disclosing agent" for assumptions about humans is relevant here as genAI especially focuses on visual disclosure of patterns in social images within the data sets. ...
April 2024
Ethics and Information Technology
... To help creative professionals fully harness the potential of models' unpredictability for co-creation, prior works explored ways of balancing its inherent tradeoffs to uncontrollability -by surfacing and visualizing the prompt space for users [2], refining representations of user prompts with added words from the embedding space of a frozen text-to-image model (i.e., textual inversion [27,103]), refining textual prompts with multi-modal feedback [102], or using GPT to generate (1) code that "sketches out" graphical inputs to guide the text-to-image generation [109] or (2) more semantically diverse prompts [8]. But while these approaches afford creative professionals the agency to explore and expand their prompt spaces during their use of text-to-image tools, it remains a challenge to address the unseen and unsurfaced uniformities, stereotypes and homogeneities that often occur from the model itself [14,16,28,98]. Responding to calls for more pluralistic alignment [94,97], scholars from the HCI and AI communities explored and documented approaches to quantifying and mitigating social biases and homogeneities in large language models [53,65,90], image retrieval models [35,74], and image generation models [7,14,16,17,66], as well as algorithmic systems more broadly [25,89]. ...
March 2024
ACM Journal on Responsible Computing
... Other researchers have extended the initial Moral Foundations Dictionary to improve performance (Hopp et al. 2019) and developed alternative conceptualisations of moral language. For example, the 'Morality as Cooperation' dictionary (Alfano et al. 2024) aims to measure language related to seven categories of morality based on the idea that the function of morality is rooted in promoting cooperation. ...
February 2024
Heliyon
... A pragmatic framework for discussing ethics is principlism, which encompasses principles such as respect for autonomy, nonmaleficence, beneficence, and justice [21]. Schwartz recognised ten value categories including security and conformity, encompassing 58 human values [56,57]. ...
January 2023
IEEE Software
... (4) Educators. Given the insufficient efforts in documenting ethical concerns in current model cards, educators should put more emphasis on ethics [46], and provide practical examples, to ensure that students understand the importance of considering ethics when developing or using AI/ML models. Providing templates and detailed aspects when documenting ethics, such as the findings from our study, could be beneficial for students. ...
December 2023
Australasian Journal of Information Systems
... The potential offered by artificial intelligence (AI) and new SM developments, including the ways in which users interact with each other, is unmissable. One trend that is worth keeping track of is the development of decentralized social media as an alternative to traditional, centralized platforms such as Facebook, X, or Instagram, from which they differ by not using a single, central entity to exchange information but rather handling it through a network of independent entities, using technologies such as blockchain [37,38,39,40]. Decentralized SM platforms facilitate direct interactions between users, reducing the dependency on intermediaries. ...
December 2023