Figure 1. A summary of co-production activities mapped to Jasanoff's [58] co-production sites, along with the themes, RAI values invoked, and key findings and takeaways.

Source publication
Conference Paper
Recently, the AI/ML research community has indicated an urgent need to establish Responsible AI (RAI) values and practices as part of the AI/ML lifecycle. Several organizations and communities are responding to this call by sharing RAI guidelines. However, there are gaps in awareness, deliberation, and execution of such practices for multi-discipli...

Citations

... When AI developers do engage with stakeholders, prior work has found a reliance on brief consultation over involving stakeholders in decision-making [30,85], and engaging client organizations as proxies more than actual end users [48]. Toolkits, frameworks, and auditing mechanisms have been created to help AI developers responsibly build tools, but studies have found that factors such as conflicting values within teams [115], profit motives [32], lack of context-specific guidance [33,51,52,124], or lack of prior experience [7] shape whether developers can successfully implement them. ...
... Indeed, prior work has highlighted the challenges that AI developers face when considering ethical guidelines and engaging with user-centered design (e.g. [52,115,124]) and that AI developers desire more support for the early stage of ideation and problem formulation [125]. Leveraging and compensating for community organizations' valuable knowledge and co-leadership is one crucial way to support technology teams. ...
... Still, it is clear that the structural constraints that technology teams face in academia and industry present challenges of adopting existing frameworks and toolkits [7,32,50,115]. However, within these constraints, we also see an opportunity for technology teams to think creatively about what funding opportunities they could pursue and to leverage non-traditional funding structures to enable more meaningful long-term engagement with community organizations [34,39,102]. ...
Conference Paper
Artificial Intelligence for Social Good (AI4SG) has emerged as a growing body of research and practice exploring the potential of AI technologies to tackle social issues. This area emphasizes interdisciplinary partnerships with community organizations, such as non-profits and government agencies. However, amidst excitement about new advances in AI and their potential impact, the needs, expectations, and aspirations of these community organizations-and whether they are being met-are not well understood. Understanding these factors is important to ensure that the considerable efforts by AI teams and community organizations can actually achieve the positive social impact they strive for. Drawing on the Data Feminism framework, we explored the perspectives of community organization members on their partnerships with AI teams through 16 semi-structured interviews. Our study highlights the pervasive influence of funding agendas and the optimism surrounding AI's potential. Despite the significant intellectual contributions and labor provided by community organization members, their goals were frequently sidelined in favor of other stakeholders, including AI teams. While many community organization members expected tangible project deployment, only two out of 14 projects we studied reached the deployment stage. However, community organization members sustained their belief in the potential of the projects, still seeing diminished goals as valuable. To enhance the efficacy of future collaborations, our participants shared their aspirations for success, calling for co-leadership starting from the early stages of projects. We propose data co-liberation as a grounding principle for approaching AI4SG moving forward, positing that community organizations' co-leadership is essential for fostering more effective, sustainable, and ethical development of AI.
... Even AI software engineers more broadly "have received surprisingly scant attention" [40]: only a few interview-based studies examine their attribution of ethical responsibility [40], awareness of the implications of their work and sense of accountability [55], or broader ethical concerns and proposed solutions [57]. Further work focuses on the co-production of values [51] or specific issues such as fairness or accountability [26,31,53]. These studies' broad findings on responsible AI development cannot be directly transferred to the specificities of deepfakes' development and impact. ...
Article
Policymakers and societies are grappling with the question of how to respond to deepfakes, i.e., synthetic audio-visual media which is proliferating in all areas of digital life, from politics to pornography. However, debates and research on deepfakes' impact and governance largely neglect the technology's sources, namely the developers of the underlying artificial intelligence (AI), and those who provide code or deepfake creation services to others, making the technology widely accessible. These actors include open-source developers, professionals working in large technology companies and specialized start-ups, and providers of deepfake apps. They can profoundly impact which underlying AI technologies are developed, whether and how they are made public, and what kind of deepfakes can be created. Therefore, this paper explores which values guide professional deepfake development, how economic and academic pressures and incentives influence developers' (perception of) agency and ethical views, and how these views do and could impact deepfake design, creation, and dissemination. In doing so, the paper focuses on values derived from debates on AI ethics and on deepfakes' impact. It is based on ten qualitative in-depth expert interviews with academic and commercial deepfake developers and ethics representatives of synthetic media companies. The paper contributes to a more nuanced understanding of AI ethics in relation to audio-visual generative AI. It also empirically informs and enriches the deepfake governance debate by incorporating developers' voices and highlighting governance measures which directly address deepfake developers and providers and emphasize the potential of ethics to curb the dangers of deepfakes.
... An emerging area of work focuses on evaluating practitioners' current practices and needs when engaging in responsible AI practices, providing valuable knowledge to shape AI development [27,53,56,61,95,98,104]. However, most of this work centers on AI practitioners working in industry and situated in Western contexts. ...
... Besides structured engagement, 2/12 participants also mentioned other communication channels like WhatsApp (P04, P09), where AI developers and technicians are available for more ad-hoc inquiries from end users. Aligned with Varanasi and Goyal [95], we found that practitioners tend to adopt a mix of approaches tailored to their specific context without clear and usable guidance on how to apply human-centered design frameworks. While the flexibility might allow practitioners to adapt general guidelines to local contexts, it might also leave practitioners feeling unsupported and needing to take on additional self-guided initiatives on top of existing workloads and tight deadlines [25,95,104]. ...
Conference Paper
AI for Social Good (AI4SG) has been advocated as a way to address social impact problems using emerging technologies, but little research has examined practitioner motivations behind building these tools and how practitioners make such tools understandable to stakeholders and end users, e.g., through leveraging techniques such as explainable AI (XAI). In this study, we interviewed 12 AI4SG practitioners to understand their experiences developing social impact technologies and their perceptions of XAI, focusing on projects in the Global South. While most of our participants were aware of XAI, many did not incorporate these techniques due to a lack of domain expertise, difficulty incorporating XAI into their existing workflows, and perceiving XAI as less valuable for end users with low levels of AI and digital literacy. We conclude by reflecting on the shortcomings of XAI for real-world use and envisioning a future agenda for explainability research.
... Studies have shown that transparent AI systems lead to higher user satisfaction and trust, while fair algorithms in recommendation systems can increase user engagement and promote diverse content consumption [27,51,66]. Implementing responsible AI faces challenges such as algorithmic bias, the complexity of ethical decision-making in dynamic environments, and the risk of unintended consequences [73,95]. ...
... The ValueCompass framework and findings expand the scope of ethical values that should be integrated into responsible AI practices. Current guidelines, such as IBM's "Pillars of Trust" [47] and Google's "AI Principles" [35], outline typical ethical principles including explainability [35,47], fairness [35,47,64], robustness [47], transparency [47,64], accountability [35,64], privacy and security [35,47,64], reliability and safety [35,64], and inclusiveness [64] to ensure alignment with stakeholder values and legal standards [95]. ...
Preprint
As AI systems become more advanced, ensuring their alignment with a diverse range of individuals and societal values becomes increasingly critical. But how can we capture fundamental human values and assess the degree to which AI systems align with them? We introduce ValueCompass, a framework of fundamental values, grounded in psychological theory and a systematic review, to identify and evaluate human-AI alignment. We apply ValueCompass to measure the value alignment of humans and language models (LMs) across four real-world vignettes: collaborative writing, education, public sectors, and healthcare. Our findings uncover risky misalignment between humans and LMs, such as LMs agreeing with values like "Choose Own Goals", which are largely disagreed by humans. We also observe values vary across vignettes, underscoring the necessity for context-aware AI alignment strategies. This work provides insights into the design space of human-AI alignment, offering foundations for developing AI that responsibly reflects societal values and ethics.
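The abstract above frames alignment measurement as comparing human and language-model responses to value statements. Purely as an illustration (not the ValueCompass implementation; the value names, rating scheme, and data below are hypothetical), a per-value agreement rate between human and LM ratings could be computed along these lines:

# Illustrative sketch: per-value agreement between human and LM ratings of
# value statements (rating +1 = agree, -1 = disagree). Values and data are
# hypothetical and not taken from the ValueCompass paper.
from collections import defaultdict

def agreement_by_value(human_ratings, lm_ratings):
    totals, matches = defaultdict(int), defaultdict(int)
    for (value, h), (_, m) in zip(human_ratings, lm_ratings):
        totals[value] += 1
        matches[value] += int(h == m)
    return {v: matches[v] / totals[v] for v in totals}

# Hypothetical example: humans largely reject "Choose Own Goals" while the LM endorses it.
humans = [("Choose Own Goals", -1), ("Choose Own Goals", -1), ("Privacy", +1)]
model  = [("Choose Own Goals", +1), ("Choose Own Goals", -1), ("Privacy", +1)]
print(agreement_by_value(humans, model))  # {'Choose Own Goals': 0.5, 'Privacy': 1.0}

Low agreement on a value would flag the kind of risky misalignment the authors describe; a real analysis would of course rest on validated value inventories and far more responses.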
... These range from speculative red teaming activities to help product teams identify security vulnerabilities to ethics trainings [37,52,72]. To slowly embed new practices into teams, entrepreneurial "vigilantes" such as privacy or security champions became early adopters excited to test these out with teams [29,62,66]. Understanding how "routines can become a source of change," security practitioners built code testing tools in hopes that more and more engineers would adopt them if they were easy to use and proved helpful [52]. ...
Preprint
Across the technology industry, many companies have expressed their commitments to AI ethics and created dedicated roles responsible for translating high-level ethics principles into product. Yet it is unclear how effective this has been in leading to meaningful product changes. Through semi-structured interviews with 26 professionals working on AI ethics in industry, we uncover challenges and strategies of institutionalizing ethics work along with translation into product impact. We ultimately find that AI ethics professionals are highly agile and opportunistic, as they attempt to create standardized and reusable processes and tools in a corporate environment in which they have little traditional power. In negotiations with product teams, they face challenges rooted in their lack of authority and ownership over product, but can push forward ethics work by leveraging narratives of regulatory response and ethics as product quality assurance. However, this strategy leaves us with a minimum viable ethics, a narrowly scoped industry AI ethics that is limited in its capacity to address normative issues separate from compliance or product quality. Potential future regulation may help bridge this gap.
... A further complication is that many designers and developers of AI/ML systems are currently unaware of the tensions and trade-offs, which may stem from unfamiliarity with (or unwillingness to engage with) AI ethics principles and/or their underlying aspects [48], [49]. Without regulatory enforcement, taking AI ethics principles into account can be contrary to industry priorities [31]. ...
... The selection, prioritisation and trade-off resolution of AI ethics aspects can occur at various points in the AI/ML system development pipeline. Without organisational policies and formal governance, these can occur on an ad-hoc basis at the design and implementation levels, and as such can be significantly affected by individual team members, their knowledge and interpretation of Responsible AI issues, personal preferences and bias [22], [49], and lack of understanding of the effect of trade-offs on others [34]. On the other hand, explicit organisational policies may be under-developed and/or can lead to overly rigid adherence due to lack of flexibility [37], [49]. ...
Conference Paper
While the operationalisation of high-level AI ethics principles into practical AI/ML systems has made progress, there is still a theory-practice gap in managing tensions between the underlying AI ethics aspects. We cover five approaches for addressing the tensions via trade-offs, ranging from rudimentary to complex. The approaches differ in the types of considered context, scope, methods for measuring contexts, and degree of justification. None of the approaches is likely to be appropriate for all organisations, systems, or applications. To address this, we propose a framework which consists of: (i) proactive identification of tensions, (ii) prioritisation and weighting of ethics aspects, (iii) justification and documentation of trade-off decisions. The proposed framework aims to facilitate the implementation of well-rounded AI/ML systems that are appropriate for potential regulatory requirements.
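To make the framework's three steps concrete, a very small, hypothetical sketch of steps (ii) and (iii) is shown below: context-specific weights over ethics aspects plus a recorded justification for each trade-off decision. The aspect names, weights, and record fields are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of step (ii) weighting and step (iii) justification/documentation.
# Aspect names, weights, and fields are illustrative only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TradeOffDecision:
    tension: str          # e.g. "explainability vs. predictive accuracy"
    weights: dict         # context-specific priority weights per ethics aspect
    chosen_option: str    # the resolution selected for this system/context
    justification: str    # why this trade-off is acceptable here
    decided_on: date = field(default_factory=date.today)

decision = TradeOffDecision(
    tension="explainability vs. predictive accuracy",
    weights={"explainability": 0.7, "accuracy": 0.3},  # illustrative weighting
    chosen_option="use an interpretable model with slightly lower accuracy",
    justification="High-stakes domain; stakeholders require auditable decisions.",
)
print(decision)

Keeping such records per decision would also support the proactive identification of recurring tensions (step i) across projects.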
... Lastly, it is relevant to highlight that the potential and limitations of relying on human values to design and develop AI technologies must be scrutinised not only in the context of their deployment and adoption, but also in the context of their embedding. In particular, we suggest more research needs to be done in three stages of the development pipeline: Ideation, where a narrow group of people will decide which social values better represent the constraints and potential that a specific algorithm-driven technology should respond to [59]; Development processes, where a larger number of stakeholders will have the agency to add and remove values to address concerns or reinforce interests related to varied, and often conflicting, aspects of design, ethics and performance [60]; Marketing strategies, where algorithmic values will be leveraged to replace knowledge of a technology with a social positioning towards that technology to exploit people's reliance on social values to trust and understand new technologies [61,62]. ...
Conference Paper
Value-based frameworks are widely used to guide the design of algorithms, yet their influence in mediating users’ perception and use of algorithm-driven technologies is vastly understudied. Moreover, there is a need to move research beyond a focus on human-algorithm interaction to account for how the values these frameworks promote – algorithmic values – become socialised outside the boundaries of the (human-algorithm) interaction and how they influence everyday practices that are not algorithmically mediated. This paper traces the entanglement of algorithmic values and everyday life by mapping how residents of the Salvadorian town of El Zonte perceive the top-down transition of the town into "Bitcoin Beach" through value-driven transformations to diverse aspects of their material culture and built environment. This approach advances empirical research on the impact of algorithms by acknowledging the myriad ways in which those who won’t or can’t (afford to) interact with algorithm-driven technologies are impacted by the value-based outcomes of their programming and provides novel insights for critically examining the role of algorithm-driven technologies in shaping sustainable futures.
... To address this gap, we shift from theoretical, guideline-focused scholarship [3,39,40,49,61,76,78,100,114] to empirical inquiry, exploring the grounded practices of fair dataset curation. Following a well-established tradition in human-computer interaction (HCI) [65,73,94,104,120,144], we conducted interviews with 30 dataset curators from both academia and industry who have experience curating fair vision, language, or multi-modal datasets. Through these interviews, we uncover practical challenges and trade-offs to ensuring fairness in dataset curation. ...
Preprint
Despite extensive efforts to create fairer machine learning (ML) datasets, there remains a limited understanding of the practical aspects of dataset curation. Drawing from interviews with 30 ML dataset curators, we present a comprehensive taxonomy of the challenges and trade-offs encountered throughout the dataset curation lifecycle. Our findings underscore overarching issues within the broader fairness landscape that impact data curation. We conclude with recommendations aimed at fostering systemic changes to better facilitate fair dataset curation practices.
... Similarly, in November 2021, the UN Educational, Scientific, and Cultural Organisation (UNESCO) signed a historic agreement outlining shared values needed to ensure the development of Responsible AI (UN 2021). The study conducted by Varanasi and Goyal (2023) involved interviewing 23 AI practitioners from 10 organisations to investigate the challenges they encounter when collaborating on Responsible AI (RAI) principles defined by UNESCO. The findings revealed that practitioners felt overwhelmed by the responsibility of adhering to specific RAI principles (non-maleficence, trustworthiness, privacy, equity, transparency, and explainability), leading to an uneven distribution of their workload. ...
Article
The term ethics is widely used, explored, and debated in the context of developing Artificial Intelligence (AI) based software systems. In recent years, numerous incidents have raised the profile of ethical issues in AI development and led to public concerns about the proliferation of AI technology in our everyday lives. But what do we know about the views and experiences of those who develop these systems – the AI practitioners? We conducted a grounded theory literature review (GTLR) of 38 primary empirical studies that included AI practitioners’ views on ethics in AI and analysed them to derive five categories: practitioner awareness, perception, need, challenge, and approach. These are underpinned by multiple codes and concepts that we explain with evidence from the included studies. We present a taxonomy of ethics in AI from practitioners’ viewpoints to assist AI practitioners in identifying and understanding the different aspects of AI ethics. The taxonomy provides a landscape view of the key aspects that concern AI practitioners when it comes to ethics in AI. We also share an agenda for future research studies and recommendations for practitioners, managers, and organisations to help in their efforts to better consider and implement ethics in AI.