Article

Abstract

One of the major criticisms of Artificial Intelligence is its lack of explainability. A claim is made by many critics that without knowing how an AI may derive a result or come to a given conclusion, it is impossible to trust in its outcomes. This problem is especially concerning when AI-based systems and applications fail to perform their tasks successfully. In this Special Issue Editorial, we focus on two main areas, explainable AI (XAI) and accuracy, and how both dimensions are critical to building trustworthy systems. We review prominent XAI design themes, leading to a reframing of the design and development effort that highlights the significance of the human, thereby demonstrating the importance of human-centered AI (HCAI). The HCAI approach advocates for a range of deliberate design-related decisions, such as those pertaining to multi-stakeholder engagement and the dissolving of disciplinary boundaries. This enables the consideration and integration of deep interdisciplinary knowledge, as evidenced in our example of social cognitive approaches to AI design. This Editorial then presents a discussion on ways forward, underscoring the value of a balanced approach to assessing the opportunities, risks and responsibilities associated with AI design. We conclude by presenting papers in the Special Issue and their contribution, pointing to future research endeavors.


... To address the challenges brought about by AI, a human-centered AI (HCAI) approach has been proposed to counter the neglect of humans and society in the current technology-driven approach to developing and deploying AI systems [1], [3], [17], [18], [19], [20], [21], [22], [23], [24]. For example, Shneiderman [1] and Xu [2] specifically proposed their HCAI frameworks. ...
... Over the last several years, researchers have begun to take a human-centered perspective in developing AI technology, such as human-centered explainable AI [23], [45], inclusive design [46], human-centered computing [24], human-compatible AI [47], and human-centered machine learning [48]. For example, Shneiderman [1] and Xu [2] proposed their systematic HCAI frameworks. ...
... Over the years, HCAI has gained significant momentum and has been extensively discussed in the literature. While HCAI addresses AI's societal impacts, including ethical issues like user privacy and fairness, current practices primarily focus on individual human-AI systems, such as explainable AI and human-AI interaction design [1], [3], [23], [30], [32], [44]. ...
Article
Full-text available
While artificial intelligence (AI) offers significant benefits, it also has negatively impacted humans and society. A human-centered AI (HCAI) approach has been proposed to address these issues. However, current HCAI practices have shown limited contributions due to a lack of sociotechnical thinking. To overcome these challenges, we conducted a literature review and comparative analysis of sociotechnical characteristics with respect to AI. Then, we propose updated sociotechnical systems (STS) design principles. Based on these findings, this paper introduces an intelligent sociotechnical systems (iSTS) framework to extend traditional STS theory and meet the demands with respect to AI. The iSTS framework emphasizes human-centered joint optimization across individual, organizational, ecosystem, and societal levels. The paper further integrates iSTS with current HCAI practices, proposing a hierarchical HCAI (hHCAI) approach. This hHCAI approach offers a structured approach to address challenges in HCAI practices from a broader sociotechnical perspective. Finally, we provide recommendations for future iSTS and hHCAI work.
... Existing studies focus on XAI for healthcare from the health carers' perspective. However, several recent studies have shown that the explainability and interpretability of AI/ML can be defined on the basis of what to explain (e.g., data, features, or decisions), how to explain it, when to explain it (e.g., at the design stage), and who to explain it to (users, health carers, or designers) [16]. Similarly, the authors of [3] state that the 6W questions, Why, Who, What, Where, When, and How, need to be evaluated to design an explainable security system. ...
... How to explain? And why to explain? The answers to these questions regarding XAI for cybersecurity differ according to the target audience [16]. Some studies suggest overcoming XAI's shortcomings by removing humans from the loop, which is dangerous in the healthcare domain. ...
... Another important point that needs to be considered at an early stage of designing an XAI DT-enabled healthcare system is involving humans in data labeling, as data generated by artifacts (e.g., IoT devices, cameras, robots) lacks context. Manual data labeling is about adding extra information or metadata to a piece of data [16]. Metadata increases the explainability of AI and helps with feature engineering, detecting incorrect labels, and detecting adversarial examples. ...
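As a minimal sketch of what such metadata-enriched labeling might look like in practice (the schema, field names, and values below are hypothetical illustrations, not taken from the cited work), each labeled sample can carry contextual information alongside its class label:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class LabeledSample:
    """A data point whose human-assigned label is enriched with contextual metadata."""
    data: Any                                      # raw reading, image path, etc.
    label: str                                     # class assigned by the annotator
    metadata: dict = field(default_factory=dict)   # context that aids explainability

# Hypothetical example: an IoT heart-rate reading labeled by a clinician.
sample = LabeledSample(
    data={"heart_rate_bpm": 142},
    label="abnormal",
    metadata={
        "annotator": "clinician_07",       # who assigned the label
        "patient_activity": "resting",     # context that justifies the label
        "device": "wearable_ecg_v2",
        "label_confidence": 0.9,
    },
)

# Metadata like this can later be used to audit labels, engineer features,
# or flag samples whose context contradicts the assigned label.
print(sample.label, sample.metadata["patient_activity"])
```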
Article
Full-text available
There have recently been rapid developments in smart healthcare systems, such as precision diagnosis, smart diet management, and drug discovery. These systems require the integration of the Internet of Things (IoT) for data acquisition, Digital Twins (DT) for data representation into a digital replica and Artificial Intelligence (AI) for decision-making. DT is a digital copy or replica of physical entities (e.g., patients), one of the emerging technologies that enable the advancement of smart healthcare systems. AI and Machine Learning (ML) offer great benefits to DT-based smart healthcare systems. They also pose certain risks, including security risks, and bring up issues of fairness, trustworthiness, explainability, and interpretability. One of the challenges that still make the full adaptation of AI/ML in healthcare questionable is the explainability of AI (XAI) and interpretability of ML (IML). Although the study of the explainability and interpretability of AI/ML is now a trend, there is a lack of research on the security of XAI-enabled DT for smart healthcare systems. Existing studies limit their focus to either the security of XAI or DT. This paper provides a brief overview of the research on the security of XAI-enabled DT for smart healthcare systems. It also explores potential adversarial attacks against XAI-enabled DT for smart healthcare systems. Additionally, it proposes a framework for designing XAI-enabled DT for smart healthcare systems that are secure and trusted.
... The two widely reported outcomes of XAI are enhanced user trust [4], [7], [38] and improved performance [4], [7], [13]. XAI makes AI systems more trustworthy and boosts user confidence [40] in employing AI systems for decision-making by explaining to users how the algorithm arrives at an outcome [18], [20], [39]. XAI enables users to validate and optimize the decisions taken by the system, supporting adoption [15] and appropriation of technology [4], [7]. ...
... For the foregoing reason, users must be able to validate and optimise their decisions, resulting in more widespread adoption of AI in everyday situations [20], [21]. XAI additionally helps mitigate biases in AI-based automated decision making, rendering the system fairer and more reliable [4], [39], [40]. The primary intent of XAI is to increase the transparency of AI methods and persuade users that these techniques can yield reliable decisions [7], [39]. ...
... Explanations may assist in investigating the root causes whenever the algorithm makes incorrect predictions [7], [25]. ...
... In this context, LLM-based tools for the planning stage of self-directed learning face two major challenges that can impact user experience and effectiveness [61]. The first is the lack of transparency, which makes it difficult for learners to trust and follow recommendations without clear explanations [61,72]. The second is the potential for incorrect information, which can confuse learners, negatively affecting their overall experience. ...
... Through a literature review, we identified explainability and controllability as key factors. Explainability provides clear justifications for AI recommendations, fostering trust and informed interaction [72,81,82], while controllability allows users to adjust outputs and correct errors like hallucinations [38]. These two factors are interconnected because explainability provides users with the reasoning behind AI output, enhancing their ability to refine system behavior and reduce errors or hallucinations [38,81]. ...
Preprint
Full-text available
Personal development through self-directed learning is essential in today's fast-changing world, but many learners struggle to manage it effectively. While AI tools like large language models (LLMs) have the potential for personalized learning planning, they face issues such as transparency and hallucinated information. To address this, we propose PlanGlow, an LLM-based system that generates personalized, well-structured study plans with clear explanations and controllability through user-centered interactions. Through mixed methods, we surveyed 28 participants and interviewed 10 before development, followed by a within-subject experiment with 24 participants to evaluate PlanGlow's performance, usability, controllability, and explainability against two baseline systems: a GPT-4o-based system and Khan Academy's Khanmigo. Results demonstrate that PlanGlow significantly improves usability, explainability, and controllability. Additionally, two educational experts assessed and confirmed the quality of the generated study plans. These findings highlight PlanGlow's potential to enhance personalized learning and address key challenges in self-directed learning.
... For example, involving HSES researchers in the AI development process may have avoided the implementation of facial recognition software for recruitment, as using facial recognition in this context reproduces existing social norms that are influenced by implicit gendered and racial prejudices about how an 'ideal employee' would respond [22]. Drawing on HSES expertise allows us to design inclusive processes for eliciting and incorporating these societal rules and standards into AI products [23]. For example, HSES insights were incorporated into an educational project about the American Civil War that used facial recognition to find portraits from the Civil War era that resemble the user [24]. ...
... From a process perspective, ideally the best time to initiate these collaborations is at the project's outset [23], which maximizes the impact of AI and HSES integration. Interdisciplinary collaborations need time to develop a shared understanding of their research [45]. ...
Article
Full-text available
As AI systems increasingly influence various aspects of human life, it is critical to ensure their development and deployment align with ethical standards and societal values. Our paper argues that integrating expertise from Humanities, Social, and Economic Sciences (HSES) into AI development is essential to achieving responsible AI. We present four compelling reasons to advocate for this integration: enhancing social legitimacy, ensuring meaningful impact, strengthening credibility, and building capability. These reasons emerged from a collaborative effort involving 16 researchers from AI and HSES fields. Together, we explored the enablers and barriers to integrating our knowledge for the purpose of developing effective, responsible, and socially grounded AI products. We aim to inspire others to adopt an integrated approach to AI development, promoting innovations that are both technologically advanced and aligned with societal needs.
... To effectively mitigate bias and discrimination, various strategies can be employed, as illustrated in Fig. 1, including enhancing data quality through data augmentation and bias detection [19,20,21], promoting transparency via model interpretability and audit trails [22,23,24], and utilizing XAI to build trust [25,26,27,28,29,30,31,32]. Additionally, continuous monitoring processes and the implementation of fairness-aware algorithms and metrics are crucial for developing fairer AI systems and reducing discriminatory outcomes [33,34,29,3,4,36]. ...
... Fairness metrics (FMs) like Equal Opportunity (EO) ensure fair and unbiased AI [31,32]. These quantitative measures assess bias by examining how AI models treat different groups. ...
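As a hedged illustration of how an Equal Opportunity check can be computed (the data, group labels, and variable names are invented for the example and do not come from the cited studies), one compares true-positive rates across protected groups and treats a difference near zero as satisfying the criterion:

```python
import numpy as np

def true_positive_rate(y_true, y_pred, mask):
    """TPR restricted to the samples selected by `mask`."""
    positives = (y_true == 1) & mask
    if positives.sum() == 0:
        return float("nan")
    return ((y_pred == 1) & positives).sum() / positives.sum()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in TPR between groups A and B (0 means the EO criterion is met)."""
    tpr_a = true_positive_rate(y_true, y_pred, group == "A")
    tpr_b = true_positive_rate(y_true, y_pred, group == "B")
    return tpr_a - tpr_b

# Illustrative data: true labels, model predictions, and a protected attribute.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(equal_opportunity_difference(y_true, y_pred, group))
```

A value near zero indicates comparable true-positive rates across groups; larger magnitudes flag potential bias against one group.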
... Theme 2: Organisations are using AI designed through a human-centred approach, with explainability, while maintaining ethical principles. In recent times, "Human-centred AI (HCAI)" has been in high demand as organisations use it to improve their decision-making processes, including better multi-stakeholder engagement. The use of HCAI helps employees of organisations reframe the technology-centric and traditional approaches used in their supply chains [14]. HCAI is also regarded as an "emerging discipline" for stewarding AI systems that organisations use to augment and amplify, rather than displace, human abilities in business operations [14]. ...
... Implementing HCAI further assists organisations in preserving human control so that AI delivers beneficial outcomes while ethical principles are maintained. ...
Article
Full-text available
Advancements in, and the usefulness of, artificial intelligence (AI) have been transforming the ways in which companies make decisions and adhere to ethics in a competitive market. This study used secondary data collection and thematic analysis to develop an understanding of how bias can be mitigated and transparency ensured in the use of AI technologies. The study describes the different methods used to obtain its outcomes, and the thematic analysis proved effective in meeting the research objectives. Finally, the study establishes strategic recommendations through which organisations can apply proper ethical principles for better utilisation of AI technology.
... Recent advancements in machine learning (ML) have transformed industries globally, including banking. ML algorithms excel in processing large datasets, uncovering patterns, and generating valuable insights (Han et al., 2022; Schoenherr et al., 2023). In financial institutions, ML has shown promise in improving decision-making, risk management, fraud detection, and customer service (Mohammad et al., 2023). ...
... The research is grounded in the Technology Acceptance Model (TAM) and Innovation Diffusion Theory (IDT), exploring factors influencing ML adoption and its impact on financial decisions (Davis, 1989; Rogers, 1962). While ML offers significant benefits, challenges like data quality, regulatory constraints, and infrastructure limitations hinder its implementation (Adewale et al., 2020; Schoenherr et al., 2023). The findings will inform strategies for effective ML integration, enhancing competitiveness and resilience in the sector. ...
Article
Full-text available
This study examines the impact of machine learning on decision-making in Nigeria's banking sector, highlighting its potential to enhance credit risk evaluation, fraud detection, personalized banking, and predictive analytics. However, challenges like data quality, privacy, and regulatory compliance are significant barriers. The research applies the Technology Acceptance Model (TAM) and Diffusion of Innovation Theory (DOI) to understand adoption drivers, focusing on perceived usefulness and compatibility. The findings suggest that Nigerian banks can improve decision-making and customer experiences by effectively adopting machine learning. However, they must address challenges like bias and operational inefficiencies. Strong data governance, transparent models, and ethical practices are essential for successful integration and sustained growth.
... Establishing trust between human and artificial agents represents an unsolved issue to date in creative settings such as those of the music technology field [92]. To mitigate the lack of trust in AI-based systems, tools from the field of eXplainable Artificial Intelligence (XAI) [93] need to be integrated to explain AI agents' decision making, thus ultimately enhancing trust in AI-based systems [94], [95]. This is especially relevant in situations when the explanation needs to be provided in real-time, at the moment in which the musical activity unfolds [96]. ...
... The use of co-design techniques is a valuable avenue for this purpose, but only a few examples are reported in the IoMusT literature thus far (see, e.g., [62]). Approaches from the field of human-centered AI [94], [106] are also relevant, but thus far have been rarely employed in the design of intelligent IoMusT applications. ...
Article
Full-text available
This paper proposes a paradigm shift from the current wave of Internet of Musical Things (IoMusT) research, which is mostly centered on technological development, towards the new wave of the Internet of Musical Things and People (IoMusTP). This wave focuses not only on musical stakeholders’ values, needs, behaviors and diversity, but also on their mutual entanglement with networked musical devices, services and environments. In the IoMusTP, technology is not only aware of the users and their surrounding context, but is also compliant with ethical and sustainable principles that will make possible more inclusive, personalized, and socially acceptable experiences for the 21st-century musical stakeholders and beyond. The move from the IoMusT to the IoMusTP is a move from a network of musical devices to a network of musical stakeholders, whose interactions with musical resources as well as other stakeholders are empowered by devices. To this end, we propose a framework that can concretely guide designers of IoMusT technologies in considering the human and non-human factors relevant in the IoMusTP vision. We illustrate our framework by analyzing a set of case studies, showing how existing systems are insufficient to comply with the IoMusTP vision. Finally, we reflect on the challenges ahead of us, identifying a set of promising future directions that can inform the development of the next generation of IoMusT technologies.
... In addition, AI systems are not always transparent, and it can be challenging to understand their decision-making processes. This lack of transparency makes errors difficult to detect and fix [67]. To address AI unpredictability, researchers have proposed various strategies. ...
... Furthermore, Ref. [66] pointed out that unpredictable errors in AI systems can adversely affect user experience and societal impact. The authors of [67] emphasize the importance of designing AI with a human-centered approach to ensure explainability and accuracy, enhance trustworthiness, and mitigate the risks associated with unpredictable outcomes and unintended biases. ...
Article
Full-text available
This study delves into the dual nature of artificial intelligence (AI), illuminating its transformative potential that has the power to revolutionize various aspects of our lives. We delve into critical issues such as AI hallucinations, misinformation, and unpredictable behavior, particularly in large language models (LLMs) and AI-powered chatbots. These technologies, while capable of manipulating human decisions and exploiting cognitive vulnerabilities, also hold the key to unlocking unprecedented opportunities for innovation and progress. Our research underscores the need for robust, ethical AI development and deployment frameworks, advocating a balance between technological advancement and societal values. We emphasize the importance of collaboration among researchers, developers, policymakers, and end users to steer AI development toward maximizing benefits while minimizing potential harms. This study highlights the critical role of responsible AI practices, including regular training, engagement, and the sharing of experiences among AI users, to mitigate risks and develop the best practices. We call for updated legal and regulatory frameworks to keep pace with AI advancements and ensure their alignment with ethical principles and societal values. By fostering open dialog, sharing knowledge, and prioritizing ethical considerations, we can harness AI’s transformative potential to drive human advancement while managing its inherent risks and challenges.
... Humans sometimes resist using decision-making algorithms, influenced by factors at societal, algorithmic, and individual levels, including anthropomorphism, complexity, accuracy, and learning capabilities [82][83][84][85]. The emphasis on the use of machine learning and 'black box' techniques, meaning systems that produce results without clear or easily understandable explanations, to automate decisions creates a need for transparency in algorithm design and implementation [86]. ...
... In the three time periods examined, ChatGPT's role as a creator is presented as an active one, implying an agency similar to that of a human content creator. Although previous research indicates that humans may reject decision-making algorithms when they are personified or anthropomorphised [82][83][84][85], this appears not to be the case here. This portrayal developed over time, reflecting shifts in how ChatGPT is perceived in the online discourse. ...
Article
Full-text available
ChatGPT, a chatbot using the GPT-n series large language model, has surged in popularity by providing conversation, assistance, and entertainment. This has raised questions about its agency and resulting implications on trust and blame, particularly when concerning its portrayal on social media platforms like Twitter. Understanding trust and blame is crucial for gauging public perception, reliance on, and adoption of AI-driven tools like ChatGPT. To explore ChatGPT’s perceived status as an algorithmic social actor and uncover implications for trust and blame through agency and transitivity, we examined 88,058 tweets about ChatGPT, published in a ‘hype period’ between November 2022 and March 2023, using Corpus Linguistics and Critical Discourse Analysis, underpinned by Social Actor Representation. Notably, ChatGPT was presented in tweets as a social actor on 87% of occasions, using personalisation and agency metaphor to emphasise its role in content creation, information dissemination, and influence. However, a dynamic presentation, oscillating between a creative social actor and an information source, reflected users’ uncertainty regarding its capabilities and, thus, blame attribution occurred. On 13% of occasions, ChatGPT was presented passively through backgrounding and exclusion. Here, the emphasis on ChatGPT’s role in informing and influencing underscores interactors’ reliance on it for information, bearing implications for information dissemination and trust in AI-generated content. Therefore, this study contributes to understanding the perceived social agency of decision-making algorithms and their implications on trust and blame, valuable to AI developers and policymakers and relevant in comprehending and dealing with power dynamics in today’s age of AI.
... Barcellos et al. [69] also suggest that accuracy reduces the ambiguity in information, facilitating understanding. Schoenherr et al. [71] propose that transparency makes a model visible and improves its clarity. c) Timeliness: refers to providing explanations when the user needs and expects them [72]. ...
Article
Full-text available
Guidance-enhanced approaches are used to support users in making sense of their data and overcoming challenging analytical scenarios. While recent literature underscores the value of guidance, a lack of clear explanations to motivate system interventions may still negatively impact guidance effectiveness. Hence, guidance-enhanced VA approaches require meticulous design, demanding contextual adjustments for developing appropriate explanations. Our paper discusses the concept of explainable guidance and how it impacts the user–system relationship—specifically, a user's trust in guidance within the VA process. We subsequently propose a model that supports the design of explainability strategies for guidance in VA. The model builds upon flourishing literature in explainable AI, available guidelines for developing effective guidance in VA systems, and accrued knowledge on user–system trust dynamics. Our model responds to challenges concerning guidance adoption and context-effectiveness by fostering trust through appropriately designed explanations. To demonstrate the model's value, we employ it in designing explanations within two existing VA scenarios. We also describe a design walk-through with a guidance expert to showcase how our model supports designers in clarifying the rationale behind system interventions and designing explainable guidance.
... [Flattened table omitted: it listed, for each discipline and its AI-focused subset (HCI, HCI-AI, IS, IS-AI, SE, SE-AI, SE-CR, PH+PS), the cited studies grouped by how they relate to existing trust models.] It was common for IS papers to customize existing trust models (13 out of 25 articles). ...
Preprint
Trust is a fundamental concept in human decision-making and collaboration that has long been studied in philosophy and psychology. However, software engineering (SE) articles often use the term 'trust' informally - providing an explicit definition or embedding results in established trust models is rare. In SE research on AI assistants, this practice culminates in equating trust with the likelihood of accepting generated content, which does not capture the full complexity of the trust concept. Without a common definition, true secondary research on trust is impossible. The objectives of our research were: (1) to present the psychological and philosophical foundations of human trust, (2) to systematically study how trust is conceptualized in SE and the related disciplines human-computer interaction and information systems, and (3) to discuss limitations of equating trust with content acceptance, outlining how SE research can adopt existing trust models to overcome the widespread informal use of the term 'trust'. We conducted a literature review across disciplines and a critical review of recent SE articles focusing on conceptualizations of trust. We found that trust is rarely defined or conceptualized in SE articles. Related disciplines commonly embed their methodology and results in established trust models, clearly distinguishing, for example, between initial trust and trust formation and discussing whether and when trust can be applied to AI assistants. Our study reveals a significant maturity gap of trust research in SE compared to related disciplines. We provide concrete recommendations on how SE researchers can adopt established trust models and instruments to study trust in AI assistants beyond the acceptance of generated software artifacts.
... The integration of AI in CAD systems has significantly transformed the way products are conceptualized and developed. By automating repetitive tasks, optimizing design processes, and enhancing decision-making, AI significantly contributes to improved efficiency and increased innovation (Schoenherr et al., 2023). However, despite its numerous advantages, AI implementation in the CAD process also presents several challenges, including technical limitations, resistance to adoption, and ethical concerns (Patel et al., 2024). ...
Article
Full-text available
The integration of Artificial Intelligence (AI) into Computer-Aided Design (CAD) is transforming the product development process by enhancing efficiency, accuracy, and innovation. AI-driven approaches, including Machine Learning Models (MLMs) and optimisation algorithms, automate design processes, improve decision-making, and enable the exploration of optimised solutions that transcend traditional design limitations. Despite these advancements, the adoption of AI in CAD presents significant technological, ethical, and professional challenges. This study aims to analyse the impact of AI on CAD workflows, focusing on its role in automating repetitive tasks, optimising manufacturability, enhancing design validation through AI-assisted simulation, and facilitating collaborative workflows in distributed teams. Additionally, it explores the challenges associated with AI implementation and the future prospects of AI-driven CAD systems. The research is based on a systematic review of AI applications in CAD, examining Machine Learning (ML) techniques such as generative design, reinforcement learning, and genetic algorithms. Furthermore, the study also evaluates AI's influence on Product Lifecycle Management (PLM) and team collaboration, while addressing industry adoption barriers. Findings indicate that AI significantly reduces design time, enhances creativity through generative models, and improves design validation via automated simulations. AI-powered tools provide real-time feedback, streamline collaboration, and enable continuous optimisation of product performance and sustainability. Ultimately, AI-assisted CAD marks a paradigm shift in engineering and design.
... Comparative studies are needed to evaluate the real-world efficacy of these methods, testing their scalability, adaptability, and user acceptance. Such work should also analyze the interplay between technical solutions and human-centered design principles to ensure lasting trustworthiness in evolving digital environments (Schoenherr et al., 2023). ...
Article
Full-text available
We have witnessed an increased use of technology in every facet of our lives. These technologies come with great promises, such as enabling more independent living for older adults or people with physical disabilities, yet also fears, for instance, over privacy concerns or trust in automated systems. In this Topical Collection, we focus on Active and Assisted Living (AAL) technologies, which require trustworthiness and adherence to privacy regulations for successful adoption. The Collection contains six selected papers that address themes like privacy-by-design, trust in AI, and balancing privacy with technological innovation under regulations like GDPR and the AI Act. The presented articles emphasize the user-centered, privacy-friendly approaches to AAL designs, robust regulatory frameworks, and interdisciplinary methodologies to ensure ethical, trustworthy technologies.
... The integration of transparent decision-making processes and clear feedback mechanisms has transformed how users interact with AI-powered systems. Studies have demonstrated that implementing trust-building features in user interfaces leads to improved system adoption and user satisfaction [9]. These interfaces must balance complexity with usability while maintaining user confidence in system operations. ...
Article
Full-text available
This technical article explores the integration of artificial intelligence in enterprise routing systems, presenting a comprehensive examination of system architecture, data infrastructure, monitoring capabilities, and user experience design. The discussion encompasses critical aspects of implementing AI-powered routing solutions, including workflow orchestration, data quality management, observability frameworks, and security considerations. The article delves into how organizations can leverage advanced machine learning techniques to optimize resource allocation, enhance system reliability, and improve operational efficiency while maintaining robust security measures and regulatory compliance. The article highlights the importance of human-centered design approaches and the critical role of AI transparency in fostering user trust and system adoption.
... The development of a comprehensive and robust methodological framework for constructing AI metrics necessitates a multifaceted approach spanning from conceptual nuances of trustworthiness and responsibility of and in AI (OECD 2024a, b; Perrault and Clark 2024; Salloum 2024; Schoenherr et al. 2023) to the inclusion of important dedicated projects, databases, and initiatives that have been developed. Notably, the OECD AI Incidents Monitor, which tracks AI-related incidents to concretely identify and mitigate AI risks, serves as a crucial tool for establishing trustworthy AI. ...
Article
Full-text available
This paper explores the interplay between AI metrics and policymaking by examining the conceptual and methodological frameworks of global AI metrics and their alignment with National Artificial Intelligence Strategies (NAIS). Through topic modeling and qualitative content analysis, key thematic areas in NAIS are identified. The findings suggest a misalignment between the technical and economic focus of global AI metrics and the broader societal and ethical priorities emphasized in NAIS. This highlights the need to recalibrate AI evaluation frameworks to include ethical and other social considerations, aligning AI advancements with the United Nations Sustainable Development Goals (SDGs) for an inclusive, ethical, and sustainable future.
... Researchers could investigate methods for detecting and mitigating biases in XAI systems to ensure explanations are accurate, unbiased, and trustworthy. Additionally, it is important to examine the broader societal implications of XAI, including its effects on job displacement, human-AI collaboration, and the distribution of power and resources in society [139]. By considering these ethical and social factors, researchers can ensure that XAI technologies are developed and deployed responsibly, with careful consideration of their potential impacts on individuals, communities, and society at large. ...
Article
Full-text available
Recent advancements in Explainable Artificial Intelligence (XAI) aim to bridge the gap between complex artificial intelligence (AI) models and human understanding, fostering trust and usability in AI systems. However, challenges persist in comprehensively interpreting these models, hindering their widespread adoption. This study addresses these challenges by exploring recently emerging techniques in XAI. The primary problem addressed is the lack of transparency and interpretability of AI models to humans in institution-wide use, which undermines user trust and inhibits their integration into critical decision-making processes. Through an in-depth review, this study identifies the objectives of enhancing the interpretability of AI models and improving human understanding of their decision-making processes. Various methodological approaches, including post-hoc explanations, model transparency methods, and interactive visualization techniques, are investigated to elucidate AI model behaviours. We further present techniques and methods for making AI models more interpretable and understandable to humans, including their strengths and weaknesses, to demonstrate promising advancements in model interpretability and facilitate better comprehension of complex AI systems. In addition, we present applications of XAI in local use cases. Challenges, solutions, and open research directions are highlighted to clarify these compelling XAI utilization challenges. The implications of this research are profound, as enhanced interpretability fosters trust in AI systems across diverse applications, from healthcare to finance. By empowering users to understand and scrutinize AI decisions, these techniques pave the way for more responsible and accountable AI deployment.
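To make the notion of a post-hoc explanation concrete, the sketch below estimates permutation feature importance on a synthetic dataset; this is a generic illustration under assumed data and parameters, not one of the specific techniques surveyed in the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic data: only the first two of four features carry signal.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = accuracy_score(y, model.predict(X))

# Post-hoc explanation: shuffle one feature at a time and measure the accuracy drop.
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])   # break the feature's relationship to y
    drop = baseline - accuracy_score(y, model.predict(X_perm))
    print(f"feature {j}: importance ~ {drop:.3f}")
```

The model itself stays a black box; the explanation is produced afterwards by probing its behaviour, which is what distinguishes post-hoc methods from inherently transparent models.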
... However, advanced AI-enabled technology operating on massively collected, large-scale real-world data at the hands of state authorities gives rise to significant ethical and legal issues: the citizens' right to privacy may be placed at risk, inherent bias within machine learning models may reproduce systemic inequalities, while constitutional guarantees may be jeopardized. A potential technical solution to these concerns arises from the fledgling scientific subfield of Trustworthy AI [6], [7], which promises to mitigate similar dangers by imparting AI algorithms with inherent robustness against them. ...
Article
Full-text available
Recent trends in the modus operandi of technologically-aware criminal groups engaged in illicit goods trafficking (e.g., firearms, drugs, cultural artifacts, etc.) have given rise to significant security challenges. The use of cryptocurrency-based payments, 3D printing, social media and/or the Dark Web by organized crime leads to transactions beyond the reach of authorities, thus opening up new business opportunities to criminal actors at the expense of the greater societal good and the rule of law. As a result, a lot of scientific effort has been expended on handling these challenges, with Artificial Intelligence (AI) at the forefront of this quest, mostly machine learning and data mining methods that can automate large-scale information analysis. Deep Neural Networks (DNNs) and graph analytics have been employed to automatically monitor and analyze the digital activities of large criminal networks in a data-driven manner. However, such practices unavoidably give rise to ethical and legal issues, which need to be properly considered and addressed. This paper is the first to explore these aspects jointly, without focusing on a particular angle or type of illicit goods trafficking. It emphasizes how advances in AI both allow the authorities to unravel technologically-aware trafficking networks and provide countermeasures against any potential violations of citizens’ rights in the name of security.
... They emphasize the need for collaboration across disciplines to create AI systems that are ethical, effective, and user-friendly. Schoenherr et al. (2023) advocate for designing AI using a human-centered approach that prioritizes explainability and accuracy. They argue that transparent and explainable AI systems are essential for building trust and ensuring ethical decision-making. ...
Chapter
This chapter delves into the pivotal role of emotional intelligence (EI) in the design and development of human-centered technology. By examining the core components of EI (self-awareness, self-regulation, motivation, empathy, and social skills), the chapter elucidates how these elements can enhance user experience and foster more intuitive and responsive technological interfaces. Through a blend of theoretical insights and practical applications, the text explores the integration of EI in various technological domains, including artificial intelligence, virtual reality, and user interface design. Case studies and real-world examples highlight successful implementations and underscore the benefits of emotionally intelligent technologies in improving user satisfaction, engagement, and overall well-being. The chapter also addresses potential ethical considerations and challenges in embedding EI into technology, advocating for a balanced approach that prioritizes both innovation and human values.
... They analysed a model that takes information from several data modalities, such as text, images, or audio. Multi-modal fusion approaches and network architectures are highlighted in the paper to resolve various machine learning tasks that require multi-modal input (Zhang et al., 2021b), (Schoenherr et al., 2023). ...
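A minimal sketch of late multi-modal fusion, assuming pre-extracted text and image feature vectors and arbitrary layer sizes; the architectures discussed in the cited work will differ:

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Fuse text and image embeddings by concatenation, then classify."""
    def __init__(self, text_dim=128, image_dim=256, hidden=64, n_classes=2):
        super().__init__()
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.image_enc = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)   # fused representation -> classes

    def forward(self, text_feats, image_feats):
        fused = torch.cat([self.text_enc(text_feats), self.image_enc(image_feats)], dim=-1)
        return self.head(fused)

# Illustrative batch of pre-extracted text and image features.
model = LateFusionClassifier()
logits = model(torch.randn(8, 128), torch.randn(8, 256))
print(logits.shape)  # torch.Size([8, 2])
```

Concatenation after per-modality encoders is only one fusion strategy; attention-based or early-fusion designs are common alternatives.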
... Therefore, to develop an AI system that can be trusted by its end users and the communities it impacts, these factors should be considered fundamental. Research shows that explainability can ensure trustworthiness in AI-based models' decisions by visualizing the factors affecting the result, leading to fair and ethical analysis of the model (Schoenherr et al. 2023). ...
Preprint
Full-text available
Artificial Intelligence (AI) has paved the way for revolutionary decision-making processes, which if harnessed appropriately, can contribute to advancements in various sectors, from healthcare to economics. However, its black box nature presents significant ethical challenges related to bias and transparency. AI applications are hugely impacted by biases, presenting inconsistent and unreliable findings, leading to significant costs and consequences, highlighting and perpetuating inequalities and unequal access to resources. Hence, developing safe, reliable, ethical, and Trustworthy AI systems is essential. Our team of researchers working with Trustworthy and Responsible AI, part of the Transdisciplinary Scholarship Initiative within the University of Calgary, conducts research on Trustworthy and Responsible AI, including fairness, bias mitigation, reproducibility, generalization, interpretability, and authenticity. In this paper, we review and discuss the intricacies of AI biases, definitions, methods of detection and mitigation, and metrics for evaluating bias. We also discuss open challenges with regard to the trustworthiness and widespread application of AI across diverse domains of human-centric decision making, as well as guidelines to foster Responsible and Trustworthy AI models.
... In particular, if the respective algorithmic mechanisms of a model towards its decision making are understandable, then the transparency of the model is positively impacted (Ali et al. 2023). Therefore, trustworthiness includes many aspects discussed above, in particular also extending to robustness, transparency, and evaluation of explanations in general (Hedström et al. 2023;Kaur et al. 2021;Vilone and Longo 2021;Schoenherr et al. 2023). ...
Article
Full-text available
The growing number of applications of machine learning and data mining in many domains—from agriculture to business, education, industrial manufacturing, and medicine—gave rise to new requirements for how to inspect and control the learned models. The research domain of explainable artificial intelligence (XAI) has been newly established with a strong focus on methods being applied post-hoc on black-box models. As an alternative, the use of interpretable machine learning methods has been considered—where the learned models are white-box ones. Black-box models can be characterized as representing implicit knowledge—typically resulting from statistical and neural approaches of machine learning, while white-box models are explicit representations of knowledge—typically resulting from rule-learning approaches. In this introduction to the special issue on ‘Explainable and Interpretable Machine Learning and Data Mining’ we propose to bring together both perspectives, pointing out commonalities and discussing possibilities to integrate them.
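As a small illustration of the white-box side of this distinction (the dataset choice and tree depth are arbitrary assumptions), a shallow decision tree exposes its learned knowledge as explicit, human-readable rules, in contrast to the implicit knowledge of a statistical or neural black box:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# A shallow tree is a white-box model: its learned knowledge is a set of explicit rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["sepal_len", "sepal_wid", "petal_len", "petal_wid"]))
```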
... "Can machines think?" is a question that has intrigued us since the 1950s, when Turing's seminal paper in Mind [13] was published. But, it has long been accepted that the human brain is superior to machines, despite the enormous advances in computing power, in the case of perceptual tasks that appear intuitive to humans [14], while on the other hand machines prove to be efficient at cognitive tasks that require extensive mental effort for humans [15] as well as in learning processes [16], repetitive tasks [17] or managing large databases [18]. We should note that AI was still in its beginnings linked to human cognition processes, especially in the field of expert systems started with the work of Edward Albert Feigenbaum. ...
Preprint
Full-text available
This paper introduces a novel decision-making approach grounded in insights into human visual perception of change. Modern technologies such as the internet of things (IoT) provide us with large amounts of sensor data that need to be processed in real time, with decisions made with a high degree of accuracy and reliability. Artificial intelligence (AI) methods are welcome in this context and need to be upgraded to meet current challenges. While modern computing capabilities facilitate rapid data processing, the real-time demands of vast sensor data necessitate swift responses across the cyber chain, often leading to compromises in solution quality to circumvent combinatorial search complexities. Determining the adequacy of a solution entails varied approaches, often relying on heuristic methodologies. We illustrate our original approach with an example of a selected detail of a differential evolution algorithm, where a decision must be made whether to adopt the best solution found so far. We propose an approach inspired by human perceptual features that exploits the Weber-Fechner law to emulate human judgements, offering a promising way to improve decision making in AI applications and the fulfilment of real-time requirements. Our proposed methodology demonstrates applicability across diverse AI scenarios involving numerical data, effectively mirroring human perception abilities.
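A hedged sketch of how a Weber-Fechner-style "just noticeable difference" test could gate the adoption of a new best-so-far solution inside a differential evolution loop; the objective function, Weber fraction, and DE parameters are assumptions for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Toy objective to minimize."""
    return float(np.sum(x ** 2))

def perceptible_improvement(old_best, new_value, weber_fraction=0.02):
    """Weber-Fechner-inspired rule: adopt a new best only if the relative
    improvement exceeds a just-noticeable-difference fraction."""
    if old_best == 0:
        return new_value < old_best
    return (old_best - new_value) / abs(old_best) > weber_fraction

# Minimal DE/rand/1/bin loop with the perceptual acceptance rule.
dim, pop_size, F, CR = 5, 20, 0.8, 0.9
pop = rng.uniform(-5, 5, size=(pop_size, dim))
fitness = np.array([sphere(ind) for ind in pop])
best_idx = int(np.argmin(fitness))
best_x, best_f = pop[best_idx].copy(), fitness[best_idx]

for _ in range(200):
    for i in range(pop_size):
        a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(dim) < CR
        trial = np.where(cross, mutant, pop[i])
        f_trial = sphere(trial)
        if f_trial < fitness[i]:                           # standard greedy selection
            pop[i], fitness[i] = trial, f_trial
            if perceptible_improvement(best_f, f_trial):   # perceptual update of best-so-far
                best_x, best_f = trial.copy(), f_trial

print(best_f)
```

With this rule, small, imperceptible improvements do not overwrite the reported best solution, mimicking the just-noticeable-difference behaviour the abstract describes.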
... Centering the design and development of artificial intelligence systems around human beings fosters inclusivity, trustworthiness, and alignment with human values and objectives (Shneiderman, 2020; Schmager, 2023; Schoenherr et al., 2023). This paper is part of a broader research project aiming to adapt existing commercial HCAI guidelines to meet the unique requirements of the public sector. ...
Conference Paper
Full-text available
Artificial Intelligence (AI) technologies undergo rapidly increasing integration in all areas of everyday life, including healthcare, employment, and public services. To reconcile abstract theoretical concepts with the practical realities of AI development and deployment in the public sector, this study formulates Human-Centered AI (HCAI) design principles, recognizing the unique aspects of the public sector. The study argues for bespoke AI design and implementation strategies that prioritize societal needs over commercial interests and incorporate citizen perspectives into the design principles. The study employs Action Design Research and draws from Social Contract Theory, aiming to advance the dialogue on responsible AI practices within the public sector. These principles offer valuable insights for both academics and practitioners, bridging the gap between theory and practice, enriching the information systems discipline, and fostering a deeper understanding of a socially responsible AI design and deployment in public services.
... Finally, a human-centered approach helps to build trust in information assessment. Users are more likely to trust a system that is transparent, accountable, and open to feedback (Schoenherr et al., 2023). A system that incorporates human expertise can help to provide these qualities by ensuring that the decision-making process is transparent and that decisions are made based on sound ethical principles. ...
Article
Full-text available
Aim/Purpose: The purpose of this paper is to address the challenges posed by disinformation in an educational context. The paper aims to review existing information assessment techniques, highlight their limitations, and propose a conceptual design for a multimodal, explainable information assessment system for higher education. The ultimate goal is to provide a roadmap for researchers that meets current requirements of information assessment in education. Background: The background of this paper is rooted in the growing concern over disinformation, especially in higher education, where it can impact critical thinking and decision-making. The issue is exacerbated by the rise of AI-based analytics on social media and their use in educational settings. Existing information assessment techniques have limitations, requiring a more comprehensive AI-based approach that considers a wide range of data types and multiple dimensions of disinformation. Methodology: Our approach involves an extensive literature review of current methods for information assessment, along with their limitations. We then establish theoretical foundations and design concepts for EMIAS based on AI techniques and knowledge graph theory. Contribution: We introduce a comprehensive theoretical framework for an AI-based multimodal information assessment system specifically designed for the education sector. It not only provides a novel approach to assessing information credibility but also proposes the use of explainable AI and a three-pronged approach to information evaluation, addressing a critical gap in the current literature. This research also serves as a guide for educational institutions considering the deployment of advanced AI-based systems for information evaluation. Findings: We uncover a critical need for robust information assessment systems in higher education to tackle disinformation. We propose an AI-based EMIAS system designed to evaluate the trustworthiness and quality of content while providing explanatory justifications. We underscore the challenges of integrating this system into educational infrastructures and emphasize its potential benefits, such as improved teaching quality and fostering critical thinking. Recommendations for Practitioners: Implement the proposed EMIAS system to enhance the credibility of information in educational settings and foster critical thinking among students and teachers. Recommendation for Researchers: Explore domain-specific adaptations of EMIAS, research on user feedback mechanisms, and investigate seamless integration techniques within existing academic infrastructure. Impact on Society: This paper’s findings could strengthen academic integrity and foster a more informed society by improving the quality of information in education. Future Research: Further research should investigate the practical implementation, effectiveness, and adaptation of EMIAS across various educational contexts.
... In fact, research shows that just the perception of AI as adaptive can increase human performance (Kosch et al. 2023). Despite robust research into how to make AI algorithms more transparent and explainable to the user (Larsson and Heintz 2020; Waltl and Vogl 2018; Hussain et al. 2021), there have been increased calls for more research into the content and frequency of explanations that humans need while interacting with an AI agent (Weber et al. 2015; Schoenherr et al. 2023). ...
Article
Full-text available
An obstacle to effective teaming between humans and AI is the agent's "black box" design. AI explanations have proven benefits, but few studies have explored the effects that explanations can have in a teaming environment with AI agents operating at heightened levels of autonomy. To address this research gap, we conducted two complementary studies, an experiment and participatory design sessions, investigating the effect that varying levels of AI explainability and AI autonomy have on participants' perceived trust and competence of an AI teammate. The results of the experiment were counter-intuitive: participants actually perceived the lower-explainability agent as both more trustworthy and more competent. The participatory design sessions further revealed how a team's need to know influences when and what teammates need explained from AI teammates. Based on these findings, several design recommendations were developed for the HCI community to guide how AI teammates should share decision information with their human counterparts, considering the careful balance between trust and competence in human-AI teams.
... Trustworthiness in AI reflects how confident one feels in the decisions that AI makes (e.g., [32,33]). Trustworthiness is enhanced when employees know that AI is used to enhance their skills and experience at work and that it is used in a responsible manner (e.g., [34,35]). We acknowledge that different internal stakeholders (e.g., managers, leaders) can view trustworthiness differently. ...
Article
Full-text available
The rapid advancement of Artificial Intelligence (AI) in the business sector has led to a new era of digital transformation. AI is transforming processes, functions, and practices throughout organizations creating system and process efficiencies, performing advanced data analysis, and contributing to the value creation process of the organization. However, the implementation and adoption of AI systems in the organization is not without challenges, ranging from technical issues to human-related barriers, leading to failed AI transformation efforts or lower than expected gains. We argue that while engineers and data scientists excel in handling AI and data-related tasks, they often lack insights into the nuanced human aspects critical for organizational AI success. Thus, Human Resource Management (HRM) emerges as a crucial facilitator, ensuring AI implementation and adoption are aligned with human values and organizational goals. This paper explores the critical role of HRM in harmonizing AI's technological capabilities with human-centric needs within organizations while achieving business objectives. Our positioning paper delves into HRM's multifaceted potential to contribute toward AI organizational success, including enabling digital transformation, humanizing AI usage decisions, providing strategic foresight regarding AI, and facilitating AI adoption by addressing concerns related to fears, ethics, and employee well-being. It reviews key considerations and best practices for operationalizing human-centric AI through culture, leadership, knowledge, policies, and tools. By focusing on what HRM can realistically achieve today, we emphasize its role in reshaping roles, advancing skill sets, and curating workplace dynamics to accommodate human-centric AI implementation. This repositioning involves an active HRM role in ensuring that the aspirations, rights, and needs of individuals are integral to the economic, social, and environmental policies within the organization. This study not only fills a critical gap in existing research but also provides a roadmap for organizations seeking to improve AI implementation and adoption and humanizing their digital transformation journey.
... On the other hand, advancing beyond those intellectual tasks requires holistic and contextual thinking, which remains a core capability of HI. Humans are experts in using their past experiences to solve new problems and handle unfamiliar situations (Schoenherr et al., 2023). In addition to intellectual solutions, emotional decisions and decisions based on real-time situation awareness can also be helpful in an abnormal situation. ...
Preprint
Full-text available
The emergence of Generative AI features in news applications may radically change news consumption and challenge journalistic practices. To explore the future potentials and risks of this understudied area, we created six design fictions depicting scenarios such as virtual companions delivering news summaries to the user, AI providing context to news topics, and content being transformed into other formats on demand. The fictions, discussed with a multi-disciplinary group of experts, enabled a critical examination of the diverse ethical, societal, and journalistic implications of AI shaping this everyday activity. The discussions raised several concerns, suggesting that such consumer-oriented AI applications can clash with journalistic values and processes. These include fears that neither consumers nor AI could successfully balance engagement, objectivity, and truth, leading to growing detachment from shared understanding. We offer critical insights into the potential long-term effects to guide design efforts in this emerging application area of GenAI.
Article
Artificial intelligence methods are used to understand how humans think and to solve complex problems, as well as to create automated solutions to problems. Automating a problem involves constructing a model that contains all the information needed to solve it; the automated solutions then generate artifacts that act according to the given input and are used to search for and drive solutions to the problem. In this review paper, we outline the significant contributions of AI to engineering applications, the challenges and difficulties encountered in enhancing industrial models, and the main future trends that could be used to reduce human error and enhance the effectiveness of engineering tools. AI methods are highly useful in the construction of intelligent systems because AI has a strong theoretical base and has produced many tools that are useful for implementing intelligent systems. Given the upcoming trends in the field of technology, it is highly useful to combine engineering and artificial intelligence. This combination is a powerful tool for constructing new intelligent systems, which may be used to solve various tough problems in real-world environments. Combining artificial intelligence and engineering is arguably the best way to build complex systems and to create simple solutions for them.
Article
Rapid progress in Artificial Intelligence (AI) is presenting both opportunities and threats that promise to be transformative and disruptive to the field of cybersecurity. The current approaches to providing security and safety to users are limited. Online attacks (e.g., identity theft) and data breaches are causing real-world harms to individuals and communities, resulting in financial instability, loss of healthcare benefits, or even access to housing, among other undesirable outcomes. The resulting challenges are expected to be amplified, given the increased capabilities of AI and its deployment in professional, public, and private spheres. As such, there is a need for a new formulation of these challenges that considers the complex social, technical, and environmental dimensions and factors that shape both the opportunities and threats for AI in cybersecurity. Through an exploration and application of the socio-technical approach, which highlights the significance and value of participatory practices, we can generate new ways of conceptualising the challenges of AI in cybersecurity contexts. This paper will identify and elaborate on key issues, in the form of both gaps and opportunities, that need to be addressed by various stakeholders, while exploring substantive approaches to addressing the gaps and capitalizing on the opportunities at the micro/meso/macro levels, which in turn will inform decision-making processes. This paper offers approaches for responding to public interest security, safety, and privacy challenges arising from complex AI in cybersecurity issues in open socio-technical systems.
Chapter
This chapter delves into the pivotal role of Human-Centered Design (HCD) in the development of Explainable AI (XAI). Grounded in the principles of enhancing user experiences, trust, and ethical responsibility, HCD emerges as a guiding framework in the pursuit of designing AI systems that are comprehensible and socially responsible. The chapter explores the definition and principles of HCD, its significance in XAI development, and its applications across diverse domains. Despite its transformative potential, challenges in implementation, such as incorporating user feedback, balancing technical and human factors, and addressing bias, are scrutinized. The chapter concludes by envisioning future directions for HCD in XAI, emphasizing advancements in user-centered technologies, interdisciplinary collaboration, and ethical considerations.
Chapter
This chapter provides a comprehensive exploration of the ethical and socially responsible dimensions of Explainable Artificial Intelligence (AI). It delves into the necessity of Explainable AI (XAI) concerning its ethical and social implications, outlining its definition and taxonomy. The challenges in developing AI systems that align with ethical and social responsibility are scrutinized, addressing both technical and socio-cultural aspects. The role of Human-Centered Design principles in nurturing ethical AI development is discussed, emphasizing its application in the design process. Furthermore, the chapter clarifies best practices for transparency and accountability in XAI, accompanied by real-world examples. The chapter also examines how to ensure fairness and non-discrimination in XAI systems, offering strategies for equitable design and implementation. Privacy and security considerations in XAI are explored, highlighting intersections with ethical and social responsibility. The importance of ethical governance in AI development is outlined, presenting potential models and addressing associated challenges. The chapter concludes by examining applications of socially responsible XAI across various domains and presenting key takeaways, followed by future directions for ethical and socially responsible XAI.
Chapter
The rapid development of AI-based models and technologies makes ethical AI concepts necessary: ethical AI guides how such technologies and models are developed and deployed. In this context, this chapter explains guidelines for ethical AI and defines best industry practices for its inclusion. It then explains the importance of embedding ethical AI in the model development lifecycle, analyzes ethical testing and validation steps in detail, examines the importance of ethical decision-making, and studies the impact of ethical AI on society. Because the chapter covers all parts of the AI development lifecycle, it will help researchers and industry professionals understand the importance of ethical AI.
Article
This paper discusses generative pre-trained transformer technology and its intersection with forms of creativity and law. It highlights the potential of generative AI to change considerable elements of society, including modes of creative endeavors, problem-solving, employment, education, justice, medicine, and governance. The author emphasizes the need for policymakers and experts to join in regulating against the potential risks and implications of this technology. The European Commission has taken steps to address the risks of AI through the European AI Act (EIA), which categorizes AI uses based on their potential harm. The legislation aims to ensure scrutiny and control in extreme cases like autonomous weapons or medical devices. However, the author criticizes the lack of meaningful AI oversight in the United States and argues that the time has come for government to step in and offer meaningful regulation, given the technology’s (1) rate of diffusion, (2) virtually uncountable product permutations, and the purposes, extent, and depth to which it is anticipated to penetrate institutional and daily life.
Article
The dark side of AI has been a persistent focus in discussions of popular science and academia (Appendix A), with some claiming that AI is “evil” [1]. Many commentators make compelling arguments for their concerns. Techno-elites have also contributed to the polarization of these discussions, with ultimatums that in this new era of industrialized AI, citizens will need to “[join] with the AI or risk being left behind” [2]. With such polarizing language, debates about AI adoption run the risk of being oversimplified. Discussion of technological trust frequently takes an all-or-nothing approach. All technologies – cognitive, social, material, or digital – introduce tradeoffs when they are adopted, and contain both ‘light and dark’ features [3]. But descriptions of these features can take on deceptively (or unintentionally) anthropomorphic tones, especially when stakeholders refer to the features as ‘agents’ [4], [5]. When used as an analogical heuristic, this can inform the design of AI, provide knowledge for AI operations, and potentially even predict its outcomes [6]. However, if AI agency is accepted at face value, we run the risk of having unrealistic expectations for the capabilities of these systems.
Article
Full-text available
Artificial neural networks (ANN), machine learning (ML), deep learning (DL), and ensemble learning (EL) are four outstanding approaches that enable algorithms to extract information from data and make predictions or decisions autonomously without the need for direct instructions. ANN, ML, DL, and EL models have found extensive application in predicting geotechnical and geoenvironmental parameters. This research aims to provide a comprehensive assessment of the applications of ANN, ML, DL, and EL in addressing forecasting problems in geotechnical engineering, including soil mechanics, foundation engineering, rock mechanics, environmental geotechnics, and transportation geotechnics. Previous studies have not collectively examined all four algorithms—ANN, ML, DL, and EL—and have not explored their advantages and disadvantages in the field of geotechnical engineering. This research aims to categorize and address this gap in the existing literature systematically. An extensive dataset of relevant research studies was gathered from the Web of Science and subjected to an analysis based on their approach, primary focus and objectives, year of publication, geographical distribution, and results. Additionally, this study included a co-occurrence keyword analysis covering ANN, ML, DL, and EL techniques, systematic reviews, geotechnical engineering, and review articles; the data, sourced from the Scopus database via Elsevier journals, were then visualized using VOSviewer for further examination. The results demonstrated that ANN is widely utilized despite the proven potential of ML, DL, and EL methods in geotechnical engineering, due to the need for the real-world laboratory data that civil and geotechnical engineers often encounter. However, when it comes to predicting behavior in geotechnical scenarios, EL techniques outperform the other three methods. Additionally, the techniques discussed here help geotechnical engineers understand the benefits and disadvantages of ANN, ML, DL, and EL within the geotechnical area. This understanding enables geotechnical practitioners to select the most suitable techniques for creating a more certain and resilient ecosystem.
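To make the kind of model comparison discussed in this abstract concrete, the following is a minimal, hypothetical sketch (not the authors' pipeline): it contrasts an ensemble learner with a single neural-network regressor on synthetic data standing in for a geotechnical prediction task. All data, feature counts, and settings are illustrative assumptions.

# Hypothetical comparison of an ensemble learner (EL) and a neural network (ANN)
# on a synthetic regression task standing in for a geotechnical prediction problem
# (e.g., settlement prediction). Data and settings are illustrative only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)

models = {
    "EL (random forest)": RandomForestRegressor(n_estimators=200, random_state=0),
    "ANN (MLP)": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
}

for name, model in models.items():
    # 5-fold cross-validated R^2 as a simple, comparable performance metric
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f} (+/- {scores.std():.3f})")

On real geotechnical datasets the ranking would of course depend on data size and noise; the sketch only shows the mechanics of a like-for-like comparison.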
Article
While previous studies of trust in artificial intelligence have focused on perceived user trust, this paper examines how an external agent (e.g., an auditor) assigns responsibility, perceives trustworthiness, and explains the successes and failures of AI. In two experiments, participants (university students) reviewed scenarios about automation failures and assigned perceived responsibility, trustworthiness, and preferred explanation type. Participants’ cumulative responsibility ratings for three agents (operators, developers, and AI) exceeded 100%, implying that participants were not attributing trust in a wholly rational manner, and that trust in the AI might serve as a proxy for trust in the human software developer. Dissociation between responsibility and trustworthiness suggested that participants used different cues, with the kind of technology and perceived autonomy affecting judgments. Finally, we found that the kind of explanation used to understand a situation differed based on whether the AI succeeded or failed.
Article
Full-text available
Experienced ease of recall was found to qualify the implications of recalled content. Ss who had to recall 12 examples of assertive (unassertive) behaviors, which was difficult, rated themselves as less assertive (less unassertive) than subjects who had to recall 6 examples, which was easy. In fact, Ss reported higher assertiveness after recalling 12 unassertive rather than 12 assertive behaviors. Thus, self-assessments only reflected the implications of recalled content if recall was easy. The impact of ease of recall was eliminated when its informational value was discredited by a misattribution manipulation. The informative functions of subjective experiences are discussed.
Article
Full-text available
Human-centered artificial intelligence (HCAI) seeks to shift the focus in AI development from technology to people. However, it is not clear whether existing HCAI principles and practices adequately accomplish this goal. To explore whether HCAI is sufficiently focused on people, we conducted a qualitative survey of AI developers (N = 75) and users (N = 130) and performed a thematic content analysis on their responses to gain insight into their differing priorities and experiences. Through this, we were able to compare HCAI in principle (guidelines and frameworks) and practice (developer priorities) with user experiences. We found that the social impact of AI was a defining feature of positive user experiences, but this was less of a priority for developers. Furthermore, our results indicated that improving AI functionality from the perspective of the user is an important part of making it human-centered. Indeed, users were more concerned about being understood by AI than about understanding AI. In line with HCAI guidelines, developers were concerned with issues such as ethics, privacy, and security, demonstrating an ‘avoidance of harm’ perspective. However, our results suggest that an increased focus on what people need in their lives is required for HCAI to be truly human-centered.
Book
Full-text available
This book offers a unique interdisciplinary perspective on the ethics of 'artificial intelligence' - autonomous, intelligent, (and connected) systems, or AISs, applying principles of social cognition to understand the social and ethical issues associated with the creation, adoption, and implementation of AISs.
Article
Full-text available
The increasing deployment of artificial intelligence (AI) powered solutions for the public sector is hoped to change how developing countries deliver services in key areas such as agriculture, healthcare, education, and the social sector. And yet AI has a high potential for abuse and creates risks, which, if not managed and monitored, will jeopardize the respect and dignity of the most vulnerable in society. In this study, we argue for delineating public procurement’s role in human-centred AI (HCAI) discourses, focusing on developing countries. The study is based on an exploratory inquiry and gathered data among procurement practitioners in Uganda and Kenya, which have similar country procurement regimes, where traditional forms of competition in procurement apply rather than the more recent pre-commercial procurement mechanisms that suit AI procurement. We found limited customization in AI technologies, a lack of developed governance frameworks, and little knowledge of, and distinction between, AI procurement and other typical technology procurement processes. We proposed a framework which, in the absence of good legal frameworks, can allow procurement professionals to embed HCAI principles in AI procurement processes.
Article
Full-text available
This article teases out the ramifications of artificial intelligence (AI) use in the credit analysis process by banks and other financing institutions. The unique features of AI models, coupled with the expansion of computing power, make new sources of information (big data) available for creditworthiness assessments. Combined, the use of AI and big data can capture weak signals, whether in the form of interactions or non-linearities between explanatory variables that appear to yield prediction improvements over conventional measures of creditworthiness. At the macroeconomic level, this translates into positive estimates for economic growth. On a micro scale, by contrast, the use of AI in credit analysis improves financial inclusion and access to credit for traditionally underserved borrowers. However, AI-based credit analysis processes raise enduring concerns due to potential biases and ethical, legal, and regulatory problems. These limits call for the establishment of a new generation of financial regulation introducing the certification of AI algorithms and of data used by banks.
Article
Full-text available
Explanations are central to understanding the causal relationships between entities within the environment. Instead of examining basic heuristics and schemata that inform the acceptance or rejection of scientific explanations, recent studies have predominantly examined complex explanatory models. In the present study, we examined which essential features of explanatory schemata can account for phenomena that are attributed to domain-specific knowledge. In two experiments, participants judged the validity of logical syllogisms and reported confidence in their responses. In addition to the validity of the explanations, we manipulated whether scientists or people explained an animate or inanimate phenomenon using mechanistic (e.g., force, cause) or intentional explanatory terms (e.g., believes, wants). Results indicate that intentional explanations were generally considered to be less valid than mechanistic explanations and that ‘scientists’ were relatively more reliable sources of information about inanimate phenomena whereas ‘people’ were relatively more reliable sources of information about animate phenomena. Moreover, after controlling for participants’ performance, we found that they expressed greater overconfidence for valid intentional and invalid mechanistic explanations, suggesting that the effect of belief bias is greater in these conditions.
Article
Full-text available
The use of artificial intelligence (AI) in the healthcare field is gaining popularity. However, it also raises some concerns related to privacy and ethical aspects that require the development of a responsible AI framework. The principle of responsible AI states that artificial intelligence-based systems should be considered a part of composite societal and technological systems. This study attempts to establish whether AI risks in digital healthcare are positively associated with responsible AI. The moderating effect of perceived trust and perceived privacy risks is also examined. The theoretical model was based on perceived risk theory. Perceived risk theory is important in the context of this study, as risks related to uneasiness and uncertainty can be expected in the development of responsible AI due to the volatile nature of intelligent applications. Our research provides some interesting findings which are presented in the discussion section.
Preprint
Full-text available
The ethical and societal implications of artificial intelligence systems raise concerns. In this paper we outline a novel process based on applied ethics, namely Z-inspection, to assess if an AI system is trustworthy. We use the definition of trustworthy AI given by the high-level European Commission’s expert group on AI. Z-inspection is a general inspection process that can be applied to a variety of domains where AI systems are used, such as business, healthcare, public sector, among many others. To the best of our knowledge, Z-inspection is the first process to assess trustworthy AI in practice.
Article
Full-text available
Credit risk evaluation plays a relevant role for financial institutions, since lending may result in real and immediate losses. In particular, default prediction is one of the most challenging activities for managing credit risk. This study analyzes the adequacy of borrower classification models using a Brazilian bank’s loan database and explores machine learning techniques. We develop Support Vector Machine, Decision Trees, Bagging, AdaBoost and Random Forest models, and compare their predictive accuracy with a benchmark based on a Logistic Regression model. Comparisons are analyzed based on usual classification performance metrics. Our results show that Random Forest and AdaBoost perform better when compared to the other models. Moreover, Support Vector Machine models show poor performance using both linear and nonlinear kernels. Our findings suggest that there are value-creating opportunities for banks to improve default prediction models by exploring machine learning techniques.
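As a hedged illustration of the kind of benchmark comparison this study describes, the sketch below pits ensemble classifiers and an SVM against a logistic-regression baseline on synthetic, imbalanced data. The real study used a proprietary Brazilian bank loan database, so every dataset detail here is an assumption rather than the authors' setup.

# Illustrative sketch (not the authors' code): comparing ensemble classifiers against
# a logistic-regression benchmark for default prediction, using synthetic data in place
# of the proprietary loan database described in the abstract.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Synthetic, imbalanced "default vs. non-default" data (roughly 10% defaults).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

models = {
    "Logistic Regression (benchmark)": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=300, random_state=42),
    "AdaBoost": AdaBoostClassifier(n_estimators=300, random_state=42),
    "SVM (RBF kernel)": SVC(probability=True, random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])  # AUC as the comparison metric
    print(f"{name}: test AUC = {auc:.3f}")

On synthetic data the ranking may differ from the paper's findings; the point is only to show how a common metric puts the ensemble models and the benchmark on the same footing.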
Article
Full-text available
Researchers' goals shape the questions they raise, collaborators they choose, methods they use, and outcomes of their work. This article offers a fresh vision of artificial intelligence (AI) research by suggesting a simplification to two goals: 1) emulation to understand human abilities to build systems that perform tasks as well as or better than humans and 2) application of AI methods to build widely used products and services. Researchers and developers for each goal can fruitfully work along their desired paths, but this article is intended to limit the problems that arise when assumptions from one goal are used to drive work on the other goal. For example, autonomous humanoid robots are prominent with emulation researchers, but application developers avoid them, in favor of tool-like appliances or teleoperated devices for widely used commercial products and services. This article covers four such mismatches in goals that affect AI-guided application development: 1) intelligent agent or powerful tool; 2) simulated teammate or teleoperated device; 3) autonomous system or supervisory control; and 4) humanoid robot or mechanoid appliance. This article clarifies these mismatches to facilitate the discovery of workable compromise designs that will accelerate human-centered AI applications research. A greater emphasis on human-centered AI could reduce AI's existential threats and increase benefits for users and society, such as in business, education, healthcare, environmental preservation, and community safety.
Article
Full-text available
In the last few years, Artificial Intelligence (AI) has achieved a notable momentum that, if harnessed appropriately, may deliver the best of expectations over many application sectors across the field. For this to occur shortly in Machine Learning, the entire community stands in front of the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the last hype of AI (namely, expert systems and rule based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already done in the field of XAI, including a prospect toward what is yet to be reached. For this purpose we summarize previous efforts made to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which the explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material in order to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.
Article
Full-text available
Recent advances in artificial intelligence and deep learning have made it possible for bots to pass as humans, as is the case with the recent Google Duplex—an automated voice assistant capable of generating realistic speech that can fool humans into thinking they are talking to another human. Such technologies have drawn sharp criticism due to their ethical implications, and have fueled a push towards transparency in human–machine interactions. Despite the legitimacy of these concerns, it remains unclear whether bots would compromise their efficiency by disclosing their true nature. Here, we conduct a behavioural experiment with participants playing a repeated prisoner’s dilemma game with a human or a bot, after being given either true or false information about the nature of their associate. We find that bots do better than humans at inducing cooperation, but that disclosing their true nature negates this superior efficiency. Human participants do not recover from their prior bias against bots despite experiencing cooperative attitudes exhibited by bots over time. These results highlight the need to set standards for the efficiency cost we are willing to pay in order for machines to be transparent about their non-human nature. Algorithms and bots are capable of performing some behaviours at human or super-human levels. Humans, however, tend to trust algorithms less than they trust other humans. The authors find that bots do better than humans at inducing cooperation in certain human–machine interactions, but only if the bots do not disclose their true nature as artificial.
Article
Full-text available
In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.
Article
Full-text available
Estimation of influenza-like illness (ILI) using search trends activity was intended to supplement traditional surveillance systems, and was a motivation behind the development of Google Flu Trends (GFT). However, several studies have previously reported large errors in GFT estimates of ILI in the US. Following recent release of time-stamped surveillance data, which better reflects real-time operational scenarios, we reanalyzed GFT errors. Using three data sources—GFT: an archive of weekly ILI estimates from Google Flu Trends; ILIf: fully-observed ILI rates from ILINet; and, ILIp: ILI rates available in real-time based on partial reporting—five influenza seasons were analyzed and mean square errors (MSE) of GFT and ILIp as estimates of ILIf were computed. To correct GFT errors, a random forest regression model was built with ILI and GFT rates from the previous three weeks as predictors. An overall reduction in error of 44% was observed and the errors of the corrected GFT are lower than those of ILIp. An 80% reduction in error during 2012/13, when GFT had large errors, shows that extreme failures of GFT could have been avoided. Using autoregressive integrated moving average (ARIMA) models, one- to four-week ahead forecasts were generated with two separate data streams: ILIp alone, and with both ILIp and corrected GFT. At all forecast targets and seasons, and for all but two regions, inclusion of GFT lowered MSE. Results from two alternative error measures, mean absolute error and mean absolute proportional error, were largely consistent with results from MSE. Taken together these findings provide an error profile of GFT in the US, establish strong evidence for the adoption of search trends based 'nowcasts' in influenza forecast systems, and encourage reevaluation of the utility of this data source in diverse domains.
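The two modelling steps summarized above (a random-forest correction of GFT trained on the previous three weeks of ILI and GFT rates, followed by ARIMA forecasting with the corrected signal as an exogenous input) can be sketched roughly as follows. The weekly series are synthetic stand-ins rather than ILINet or GFT data, and the in-sample correction shown here is a simplification of the paper's real-time, partially reported setup.

# Hedged sketch of the two modelling steps described above, on hypothetical weekly data:
# (1) correct GFT estimates with a random forest trained on lags 1-3 of ILI and GFT rates;
# (2) produce short-horizon ARIMA forecasts of ILI using the corrected GFT as an exogenous input.
# The series below are synthetic stand-ins, not the ILINet or GFT archives.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
weeks = 260  # five hypothetical seasons of weekly data
ili = pd.Series(np.abs(np.sin(np.arange(weeks) * 2 * np.pi / 52)) * 5 + rng.normal(0, 0.3, weeks))
gft = ili + rng.normal(0.5, 0.8, weeks)  # GFT as a biased, noisy estimate of ILI

# Step 1: random forest correction using lags 1-3 of both ILI and GFT as predictors.
frame = pd.DataFrame({"ili": ili, "gft": gft})
for lag in (1, 2, 3):
    frame[f"ili_lag{lag}"] = frame["ili"].shift(lag)
    frame[f"gft_lag{lag}"] = frame["gft"].shift(lag)
frame = frame.dropna()
features = [c for c in frame.columns if "lag" in c]
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(frame[features], frame["ili"])
frame["gft_corrected"] = rf.predict(frame[features])

mse_raw = ((frame["gft"] - frame["ili"]) ** 2).mean()
mse_corr = ((frame["gft_corrected"] - frame["ili"]) ** 2).mean()
print(f"MSE raw GFT: {mse_raw:.3f}, MSE corrected GFT: {mse_corr:.3f}")

# Step 2: short-horizon ARIMA forecast of ILI with corrected GFT as an exogenous regressor
# (the abstract's finding is that including it lowers forecast MSE).
train, test = frame.iloc[:-4], frame.iloc[-4:]
arima = ARIMA(train["ili"], exog=train[["gft_corrected"]], order=(2, 0, 1)).fit()
forecast = arima.forecast(steps=4, exog=test[["gft_corrected"]])
print("Forecast vs. observed:", list(zip(forecast.round(2), test["ili"].round(2))))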
Article
Full-text available
→ The third wave of AI can be characterized by technological enhancement and application + a human-centered approach. → HCI professionals should take a leading role by providing explainable and comprehensible AI, and useful and usable AI. → HCI professionals should proactively participate in AI R&D to increase their influence, enhance their AI knowledge, and integrate methods between the two fields.
Article
Full-text available
Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or a rationale for particular decisions in individual cases, some commentators regard ceding medical decision‐making to black box systems as contravening the profound moral responsibilities of clinicians. I argue, however, that opaque decisions are more common in medicine than critics realize. Moreover, as Aristotle noted over two millennia ago, when our knowledge of causal systems is incomplete and precarious—as it often is in medicine—the ability to explain how results are produced can be less important than the ability to produce such results and empirically verify their accuracy.
Conference Paper
Full-text available
The application of “machine learning” and “artificial intelligence” has become popular within the last decade. Both terms are frequently used in science and media, sometimes interchangeably, sometimes with different meanings. In this work, we aim to clarify the relationship between these terms and, in particular, to specify the contribution of machine learning to artificial intelligence. We review relevant literature and present a conceptual framework which clarifies the role of machine learning to build (artificial) intelligent agents. Hence, we seek to provide more terminological clarity and a starting point for (inter-disciplinary) discussions and future research.
Chapter
Explanations seem to be a large and natural part of our cognitive lives. As Frank Keil and Robert Wilson write, “When a cognitive activity is so ubiquitous that it is expressed both in a preschooler's idle questions and in work that is the culmination of decades of scholarly effort, one has to ask whether we really have one and the same phenomenon or merely different cognitively based phenomena that are loosely, or even metaphorically, related.” This book is unusual in its interdisciplinary approach to that ubiquitous activity. The essays address five basic questions about explanation: How do explanatory capacities develop? Are there kinds of explanation? Do explanations correspond to domains of knowledge? Why do we seek explanations, and what do they accomplish? How central are causes to explanation? The essays draw on work in the history and philosophy of science, the philosophy of mind and language, the development of concepts in children, conceptual change in adults, and reasoning in human and artificial systems. They also introduce emerging perspectives on explanation from computer science, linguistics, and anthropology. Contributors: Woo-kyoung Ahn, William F. Brewer, Patricia W. Cheng, Clark A. Chinn, Andy Clark, Robert Cummins, Clark Glymour, Alison Gopnik, Christine Johnson, Charles W. Kalish, Frank C. Keil, Robert N. McCauley, Gregory L. Murphy, Ala Samarapungavan, Herbert A. Simon, Paul Thagard, Robert A. Wilson. (Bradford Books imprint)
Chapter
Cyberbiosecurity (CBS) is essential to humanity due to dangers arising from the digitalization of information, processes, and materials of various branches of biology. Humans are threatened by an intensifying potential for malicious destruction, misuse, and exploitation of our biological data and information. As society seeks to identify and mitigate CBS risks, we must also work to ensure the absence of avoidable or rectifiable disparities among groups of people, whether those groups are defined socially, economically, demographically/psychographically, or geographically. In this chapter, we identified currently recognized risks at the interface of the life sciences and the digital world that lead to a failure “to protect opportunity or capability of people to function as free and equal citizens.” We then identify and explore the more imperceptible uses of technology relative to the life sciences that negatively impact freedom and equity for humans. We considered these technologies against the backdrop of such social justice principles as inequality of outcomes, inequality of process, and inequality of autonomy. Keywords: Social justice; Economic justice; CBS; Cyberbiosecurity; Equity; Inequality of autonomy; Inequality of outcomes; Inequality of process; Allostatic load; A-Load; Physiological freedom; Vigilance fatigue; Biohacking; Bioengineering biochemistry; HCI; HCD
Article
Mental health and well-being are increasingly important topics in discussions on public health [1]. The COVID-19 pandemic further revealed critical gaps in existing mental health services as factors such as job losses and corresponding financial issues, prolonged physical illness and death, and physical isolation led to a sharp rise in mental health conditions [2]. As such, there is increasing interest in the viability and desirability of digital mental health applications. While these dedicated applications vary widely, from platforms that connect users with healthcare professionals to diagnostic tools to self-assessments, this article specifically explores the implications of digital mental health applications in the form of chatbots [3]. Chatbots can be text based or voice enabled and may be rule based (i.e., linguistics based) or based on machine learning (ML). They can utilize the power of conversational agents well-suited to task-oriented interactions, like Apple’s Siri, Amazon’s Alexa, or Google Assistant. But increasingly, chatbot developers are leveraging conversational artificial intelligence (AI), which is the suite of tools and techniques that allow a computer program to seemingly carry out a conversational experience with a person or a group.
Article
Artificial intelligence (AI) and algorithmic decision making are having a profound impact on our daily lives. These systems are vastly used in different high-stakes applications like healthcare, business, government, education, and justice, moving us toward a more algorithmic society. However, despite so many advantages of these systems, they sometimes directly or indirectly cause harm to users and society. Therefore, it has become essential to make these systems safe, reliable, and trustworthy. Several requirements, such as fairness, explainability, accountability, reliability, and acceptance, have been proposed in this direction to make these systems trustworthy. This survey analyzes all of these different requirements through the lens of the literature. It provides an overview of different approaches that can help mitigate AI risks and increase trust in and acceptance of these systems by users and society. It also discusses existing strategies for validating and verifying these systems and the current standardization efforts for trustworthy AI. Finally, we present a holistic view of the recent advancements in trustworthy AI to help interested researchers grasp the crucial facets of the topic efficiently and offer possible future research directions.
Article
Trust is necessary for any kind of complex social organization. Whether trust is the result of past exchanges with a specific individual (e.g., direct reciprocity) or the result of norms and conventions within a social network (e.g., indirect reciprocity), understanding the basis for trust is a prerequisite for successful coordination between agents. Nowhere is this more evident than in the healthcare setting. Traditionally, exchanges in healthcare occur between healthcare professionals (HCPs, e.g., doctors, nurses) and patients or parents.
Article
In the past few decades, artificial intelligence (AI) technology has experienced swift developments, changing everyone’s daily life and profoundly altering the course of human society. The intention behind developing AI was and is to benefit humans by reducing labor, increasing everyday conveniences, and promoting social good. However, recent research and AI applications indicate that AI can cause unintentional harm to humans by, for example, making unreliable decisions in safety-critical scenarios or undermining fairness by inadvertently discriminating against a group or groups. Consequently, trustworthy AI has recently garnered increased attention regarding the need to avoid the adverse effects that AI could bring to people, so people can fully trust and live in harmony with AI technologies. A tremendous amount of research on trustworthy AI has been conducted and witnessed in recent years. In this survey, we present a comprehensive appraisal of trustworthy AI from a computational perspective to help readers understand the latest technologies for achieving trustworthy AI. Trustworthy AI is a large and complex subject, involving various dimensions. In this work, we focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Nondiscrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-being. For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems. We also discuss the accordant and conflicting interactions among different dimensions and discuss potential aspects for trustworthy AI to investigate in the future.
Article
The dawn of electronic business (e-business) changed the way that individuals interact not only with one another but also with the companies that supply them with goods and services, as well as with the government agencies on which they depend for welfare and security. We can speak of “digital business” as the (re)design or (re)definition of new or existing business models, and as the creation of increased flows and connectivity between customers and other entities, both internal and external to the business, among other defining features. In this editorial, we explore three interrelated levels of sociological and economic practice—micro, meso, and macro—as they pertain to advances in digital business [1], with the intention of revealing hidden dynamics and implications resulting from interactions between these levels. At the micro level, we consider the individual user. This can be interpreted as the “self” or the individual level (e.g., a person or person in singular interaction with another). At the meso level, we reflect on technological systems (e.g., information systems, biometrics, and data analysis through machine learning (ML), for the purposes of this article). This level is about groups and how they communicate in building knowledge, with a particular emphasis on what that knowledge means. The groups are made up of organizations, whether business or government agencies, or other collectives. And finally, at the macro level, we consider the societal context inclusive of communities (e.g., local/regional/national or international levels).
Chapter
Due to benefits like increased speed, Artificial Intelligence (AI)/Machine Learning systems are increasingly involved in screening decisions which determine whether or not individuals will be granted important opportunities such as college admission or loan/mortgage approval. To discuss concerns about potential bias in such systems, this chapter focuses on AI to support human resource decisions (e.g., selecting among job applicants). As AI systems do not inherently harbor prejudices, they could increase fairness and reduce bias. We discuss, however, that bias can be introduced via: (i) human influence in the system design, (ii) the training data supplied to the system, and (iii) human involvement in processing system recommendations. For each of the above factors we review and suggest possible solutions to reduce/remove bias. Notably, developing AI systems for screening decisions increased scrutiny for bias and raised awareness about pre-existing bias in human decision patterns which AI systems were trained to emulate.
Article
Artificial intelligence systems for health care, like any other medical device, have the potential to fail. However, specific qualities of artificial intelligence systems, such as the tendency to learn spurious correlates in training data, poor generalisability to new deployment settings, and a paucity of reliable explainability mechanisms, mean they can yield unpredictable errors that might be entirely missed without proactive investigation. We propose a medical algorithmic audit framework that guides the auditor through a process of considering potential algorithmic errors in the context of a clinical task, mapping the components that might contribute to the occurrence of errors, and anticipating their potential consequences. We suggest several approaches for testing algorithmic errors, including exploratory error analysis, subgroup testing, and adversarial testing, and provide examples from our own work and previous studies. The medical algorithmic audit is a tool that can be used to better understand the weaknesses of an artificial intelligence system and put in place mechanisms to mitigate their impact. We propose that safety monitoring and medical algorithmic auditing should be a joint responsibility between users and developers, and encourage the use of feedback mechanisms between these groups to promote learning and maintain safe deployment of artificial intelligence systems.
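One of the testing approaches named above, subgroup testing, can be illustrated with a minimal sketch: train any classifier and compare its error rate across patient subgroups. The model, features, and subgroup attribute below are hypothetical placeholders, not the audit framework's prescribed implementation.

# Minimal sketch of "subgroup testing": check whether a trained classifier's accuracy
# differs across patient subgroups. Model, data, and the subgroup attribute are
# hypothetical placeholders for whatever the audited system actually uses.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=4000, n_features=15, random_state=1)
subgroup = np.random.default_rng(1).integers(0, 3, size=len(y))  # e.g., three sites or age bands

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, subgroup, random_state=1)
model = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)
preds = model.predict(X_te)

for g in np.unique(g_te):
    mask = g_te == g
    print(f"subgroup {g}: accuracy = {accuracy_score(y_te[mask], preds[mask]):.3f}, n = {mask.sum()}")

In a real audit, a consistent gap between subgroups would trigger the error-mapping and mitigation steps the framework describes; the sketch only shows the measurement.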
Article
Ethics in cybersecurity is traditionally considered in terms of computer and information ethics and, more recently, cyber warfare [1]–[4]. Although these remain the dominant paradigms, there has been growing dissatisfaction with the adequacy of the warfare metaphor in capturing the social and ethical features of arrangements of cybersecurity and as a means to inform design [5], [6]. The result has been an increased focus on a distinct ethics of cybersecurity [7], [8]. Complementing ethical analyses, social cognitive factors that affect users must also be taken into account [9]. For instance, security issues are based on the concepts of identity, social relationships, and the networks that define them (e.g., [10]).
Book
Researchers, developers, business leaders, policy makers, and others are expanding the technology-centered scope of artificial intelligence (AI) to include human-centered AI (HCAI) ways of thinking. This expansion from an algorithm-focused view to embrace a human-centered perspective can shape the future of technology so as to better serve human needs. Educators, designers, software engineers, product managers, evaluators, and government agency staffers can build on AI-driven technologies to design products and services that make life better for people and enable people to care for each other. Humans have always been tool builders, and now they are supertool builders, whose inventions can improve our health, family life, education, business, the environment, and much more. The remarkable progress in algorithms for machine and deep learning have opened the doors to new opportunities, and some dark possibilities. However, a bright future awaits AI researchers, developers, business leaders, policy makers, and others who build on their working methods by including HCAI strategies of design and testing. This enlarged vision can shape the future of technology so as to better serve human needs. As many technology companies and thought leaders have said, the goal is not to replace people, but to empower them by making design choices that give humans control over technology.
Article
This article introduces algorithmic bias in machine learning (ML) based marketing models. Although the dramatic growth of algorithmic decision making continues to gain momentum in marketing, research in this stream is still inadequate despite the devastating, asymmetric and oppressive impacts of algorithmic bias on various customer groups. To fill this void, this study presents a framework identifying the sources of algorithmic bias in marketing, drawing on the microfoundations of dynamic capability. Using a systematic literature review and in-depth interviews of ML professionals, the findings of the study show three primary dimensions (i.e., design bias, contextual bias and application bias) and ten corresponding subdimensions (model, data, method, cultural, social, personal, product, price, place and promotion). Synthesizing diverse perspectives using both theories and practices, we propose a framework to build a dynamic algorithm management capability to tackle algorithmic bias in ML-based marketing decision making.
Article
The need for interpretable and accountable intelligent systems grows along with the prevalence of artificial intelligence (AI) applications used in everyday life. Explainable AI (XAI) systems are intended to self-explain the reasoning behind system decisions and predictions. Researchers from different disciplines work together to define, design, and evaluate explainable systems. However, scholars from different disciplines focus on different objectives and fairly independent topics of XAI research, which poses challenges for identifying appropriate design and evaluation methodology and consolidating knowledge across efforts. To this end, this article presents a survey and framework intended to share knowledge and experiences of XAI design and evaluation methods across multiple disciplines. Aiming to support diverse design goals and evaluation methods in XAI research, after a thorough review of XAI-related papers in the fields of machine learning, visualization, and human-computer interaction, we present a categorization of XAI design goals and evaluation methods. Our categorization presents the mapping between design goals for different XAI user groups and their evaluation methods. From our findings, we develop a framework with step-by-step design guidelines paired with evaluation methods to close the iterative design and evaluation cycles in multidisciplinary XAI teams. Further, we provide summarized ready-to-use tables of evaluation methods and recommendations for different goals in XAI research.
Chapter
Adaptive Instructional Systems (AIS) have the potential to provide students with a flexible, dynamic learning environment in a manner that might not be possible with the limited resources of human instructors. In addition to technical knowledge, learning engineering also requires considering the values and ethics associated with the creation, development, and implementation of instruction and assessment techniques, such as fairness, accountability, transparency, and ethics (FATE). Following a review of the ethical dimensions of psychometrics, I will consider specific ethical dimensions associated with AIS (e.g., cybersecurity and privacy issues, invidious selection processes) and techniques that can be adopted to address these concerns (e.g., differential item functioning, l-diversity). By selectively introducing quantitative methods that align with principles of ethical design, I argue that AIS can be afforded a minimal ethical agency.
Article
The French political and social scientist Ellul [1, pp. 52–60] explained that in prehistoric times, invention was a necessity, a movement to ensure humans could survive the elements. By the beginning of the Industrial Revolution, he noticed an obvious shift in the reason for invention: from necessity to the special interest of the state. By the 19th century, the reason for invention changed again, to the special interest of the bourgeoisie, who could see the profits that could be generated by the deliberate development of a technique. Since about the 1600s people have invested their money in stock in order to receive dividends; this practice became particularly attractive in the 20th century.
Chapter
Modern AI systems have become of widespread use in almost all sectors with a strong impact on our society. However, the very methods on which they rely, based on Machine Learning techniques for processing data to predict outcomes and to make decisions, are opaque, prone to bias and may produce wrong answers. Objective functions optimized in learning systems are not guaranteed to align with the values that motivated their definition. Properties such as transparency, verifiability, explainability, security, technical robustness, and safety are key to building operational governance frameworks, so as to make AI systems justifiably trustworthy and to align their development and use with human rights and values.
Article
This article attempts to bridge the gap between widely discussed ethical principles of Human-centered AI (HCAI) and practical steps for effective governance. Since HCAI systems are developed and implemented in multiple organizational structures, I propose 15 recommendations at three levels of governance: team, organization, and industry. The recommendations are intended to increase the reliability, safety, and trustworthiness of HCAI systems: (1) reliable systems based on sound software engineering practices, (2) safety culture through business management strategies, and (3) trustworthy certification by independent oversight. Software engineering practices within teams include audit trails to enable analysis of failures, software engineering workflows, verification and validation testing, bias testing to enhance fairness, and explainable user interfaces. The safety culture within organizations comes from management strategies that include leadership commitment to safety, hiring and training oriented to safety, extensive reporting of failures and near misses, internal review boards for problems and future plans, and alignment with industry standard practices. The trustworthiness certification comes from industry-wide efforts that include government interventions and regulation, accounting firms conducting external audits, insurance companies compensating for failures, non-governmental and civil society organizations advancing design principles, and professional organizations and research institutes developing standards, policies, and novel ideas. The larger goal of effective governance is to limit the dangers and increase the benefits of HCAI to individuals, organizations, and society.
Conference Paper
Modern black-box artificial intelligence algorithms are computationally powerful yet fallible in unpredictable ways. While much research has gone into developing techniques to interpret these algorithms, fewer efforts have integrated the requirement to understand an algorithm as a function of its training data. In addition, few have examined the human requirements for explainability, so that these interpretations provide the right quantity and quality of information to each user. We argue that Explainable Artificial Intelligence (XAI) frameworks need to account for the expertise and goals of the user in order to gain widespread adoption. We describe the Knowledge-to-Information Translation Training (KITT) framework, an approach to XAI that considers a number of possible explanatory models that can be used to facilitate users’ understanding of artificial intelligence. Following a review of algorithms, we provide a taxonomy of explanation types and outline how adaptive instructional systems can facilitate knowledge translation between developers and users. Finally, we describe limitations of our approach and paths for future research.
Article
Cognitive uncertainty is evidenced across learning, memory, and decision-making tasks. Uncertainty has also been examined in studies of positive affect and preference by manipulating stimulus presentation frequency. Despite the extensive research in both of these areas, there has been little systematic study into the relationship between affective and cognitive uncertainty. Using a categorization task, the present study examined changes in cognitive and affective uncertainty by manipulating stimulus presentation frequency and processing focus (i.e., promotion v. prevention focus). Following training, participants categorized stimuli and provided ratings of both typicality and negative affect. Results indicated that cognitive uncertainty was influenced by a categorical representation of stimuli whereas affective uncertainty was also influenced by exemplar presentation frequency during training. We additionally found that when the training was framed in terms of the avoidance of errors (i.e., a prevention focus), categorization performance was affected across the stimulus continuum whereas affective ratings remained unchanged.
Article
Research suggests that consumers are averse to relying on algorithms to perform tasks that are typically done by humans, despite the fact that algorithms often perform better. The authors explore when and why this is true in a wide variety of domains. They find that algorithms are trusted and relied on less for tasks that seem subjective (vs. objective) in nature. However, they show that perceived task objectivity is malleable and that increasing a task’s perceived objectivity increases trust in and use of algorithms for that task. Consumers mistakenly believe that algorithms lack the abilities required to perform subjective tasks. Increasing algorithms’ perceived affective human-likeness is therefore effective at increasing the use of algorithms for subjective tasks. These findings are supported by the results of four online lab studies with over 1,400 participants and two online field studies with over 56,000 participants. The results provide insights into when and why consumers are likely to use algorithms and how marketers can increase their use when they outperform humans.
Conference Paper
Artificial intelligence (AI) is the latest trend being implemented in the public sector. Recent advances in this field and the AI explosion in the private sector have served to promote a revolution for government, public service management, accountability, and public value. Incipient research to understand, conceptualize, and express the challenges and limitations is now ongoing. This paper is a first approach in that direction; our research question is: What are the current AI trends in the public sector? To achieve that goal, we collected 78 papers related to this new field published in recent years. We also used a public policy framework to identify future areas of implementation for this trend. We found that only normative and exploratory papers have been published so far, that many public policy challenges remain in this area, and that AI implementation results are unknown and unexpected: there may be great benefits for governments and society, but there may also be negative results, such as the so-called “algorithmic bias” of AI when making important decisions for social development. However, we consider that AI has potential benefits in public health, public policies on climate change, public management, decision-making, disaster prevention and response, improving government-citizen interaction, personalization of services, interoperability, analyzing large amounts of data, detecting abnormalities and patterns, and discovering new solutions through dynamic models and real-time simulation.
Article
This literature review illuminates the conceptualization of predictive policing, as well as its potential and realized benefits and drawbacks. The review shows a discrepancy between the considerable attention paid in the literature to the potential benefits and drawbacks of predictive policing and the empirical evidence that is actually available. The empirical evidence provides little support for the claimed benefits of predictive policing: whereas some empirical studies conclude that predictive policing strategies lead to a decrease in crime, others find no effect. At the same time, there is no empirical evidence at all for the claimed drawbacks. We conclude that the current thrust of predictive policing initiatives is based on convincing arguments and anecdotal evidence rather than on systematic empirical research. We urge the research community to conduct independent tests of both the positive and the negative expectations in order to generate an evidence base for predictive policing.
Article
Although the gender gap in academia has narrowed, females remain underrepresented within some fields in the USA. Prior research suggests that the imbalances across science, technology, engineering, and mathematics fields may be partly due to greater male interest in things and greater female interest in people, or to off-putting masculine cultures in some disciplines. To seek more detailed insights across all subjects, this article compares practising US male and female researchers between and within 285 narrow Scopus fields inside 26 broad fields, based on their first-authored articles published in 2017. The comparison draws on publishing fields and the words used in article titles, abstracts, and keywords. The results cannot be fully explained by the people/things dimension. Exceptions include greater female interest in veterinary science and cell biology and greater male interest in abstraction, patients, and power/control fields, such as politics and law. These may be due to other factors, such as the ability of a career to provide status or social impact, or the availability of alternative careers. As a possible side effect of the partial people/things relationship, females are more likely to use exploratory and qualitative methods and males are more likely to use quantitative methods. The results suggest that eliminating explicit and implicit gender bias in academia, while necessary, is insufficient and might be complemented by measures to make fields more attractive to minority genders.
Book
Explaining how ubiquitous computing is rapidly changing our private and professional lives, Ethical IT Innovation: A Value-Based System Design Approach stands at the intersection of computer science, philosophy, and management, integrating theories and frameworks from all three domains. The book explores the latest thinking on computer ethics, including the normative ethical theories currently shaping the debate over the good and bad consequences of technology. It begins by making the case for why IT professionals, managers, and engineers must consider ethical issues when designing IT systems, and then uses a recognized system development process model as the structural baseline for subsequent chapters. For each system development phase, the author discusses the ethical issues that must be considered, who must consider them, and how that thought process can be most productive. In this way, an "Ethical SDLC" (System Development Life Cycle) is created. The book presents an extensive case study that applies the Ethical SDLC to the example of privacy protection in RFID-enabled environments, explaining how privacy can be built into systems and illustrating how ethical decisions can be consciously made at each stage of development. The final chapter revisits the long-standing debate over engineers' ethical accountability as well as the role of management. Explaining the normative theories of computer ethics, the book explores the ethical accountability of developers as well as stakeholders. It also provides end-of-chapter questions that examine the ethical dimensions of the various development activities.