The Impact of Signaling Commitment to Ethical AI on Organizational Attractiveness


Association for Information Systems
AIS Electronic Library (AISeL)
Wirtschaftsinformatik 2022 Proceedings, Track 7: Digital Business Models &
Jan 17th, 12:00 AM
The Impact of Signaling Commitment to Ethical AI on Organizational Attractiveness
Sünje Clausen
University of Duisburg-Essen, Faculty of Engineering, Duisburg, Germany
Felix Brünker
University of Duisburg-Essen, Faculty of Engineering, Duisburg, Germany
Anna-Katharina Jung
University of Duisburg-Essen, Faculty of Engineering, Duisburg, Germany
Stefan Stieglitz
University of Duisburg-Essen, Faculty of Engineering, Duisburg, Germany
Recommended Citation
Clausen, Sünje; Brünker, Felix; Jung, Anna-Katharina; and Stieglitz, Stefan, "The Impact of Signaling Commitment to Ethical AI on Organizational Attractiveness" (2022). Wirtschaftsinformatik 2022. 10.
This material is brought to you by the Wirtschaftsinformatik at AIS Electronic Library (AISeL). It has been accepted
for inclusion in Wirtschaftsinformatik 2022 Proceedings by an authorized administrator of AIS Electronic Library
(AISeL). For more information, please contact
17th International Conference on Wirtschaftsinformatik,
February 2022, Nürnberg, Germany
The Impact of Signaling Commitment to Ethical AI on
Organizational Attractiveness
Sünje Clausen1, Felix Brünker1, Anna-Katharina Jung1, Stefan Stieglitz1
1 University of Duisburg-Essen, Faculty of Engineering, Duisburg, Germany
{suenje.clausen, felix.bruenker, anna-katharina.jung, stefan.stieglitz}
Abstract. As organizations drive the development and deployment of Artificial
Intelligence (AI)-based technologies, their commitment to ethical and humanistic
values is critical to minimizing potential risks. Here, we investigate talent
attraction as an economic incentive for organizations to commit to ethical AI.
Based on Corporate Social Responsibility (CSR) literature and signaling theory,
we present a mixed-methods research design to investigate the effect of ethical
AI commitment on organizational attractiveness. Specifically, we i) identify
signals of ethical AI commitment based on a review of corporate websites and
expert interviews and ii) examine the effect of selected signals on organizational
attractiveness in an online experiment. This short paper presents first results on
ethical AI signals and details the next steps. Our research will contribute to the
theoretical conceptualization of ethical AI as a part of CSR and support managers
of digital transformation processes when weighing investments in ethical AI initiatives.
Keywords: Signaling Theory, Corporate Social Responsibility, Organizational
Attractiveness, Artificial Intelligence, Ethics
1 Motivation
Artificial Intelligence (AI), that is, "the increasing capability of machines to perform
specific roles and tasks currently performed by humans within the workplace and
society in general" [1, p. 2], is considered a key element for value creation in
organizations and for obtaining competitive advantages in the digital transformation [2].
While AI-based technologies are increasingly integrated in organizations [3], they are
also a subject of concern [4, 5], especially because their complexity and adaptability
impede the anticipation of adverse outcomes [6]. Moreover, legal guidelines and
frameworks for the development and deployment of AI are still in their infancy, and
transferring them into practice can be challenging [6] and depends strongly on the
priorities within organizations [7]. Thus, initiatives by organizations to pursue AI-based
technologies as a force for good which empowers humans and benefits society (here
referred to as "ethical AI") are a crucial step toward avoiding potential harms and should
be part of any company's corporate social responsibility (CSR) initiatives.
CSR has its roots in normative ethics [8] and has been defined as an "organization's
voluntary efforts to operate ethically and promote the social and economic welfare of
internal and external stakeholders" [9, p. 872]. The view that organizations ought to
take more responsibility for the social and economic impact of digital technologies is
also reflected in the recently proposed concept of corporate digital responsibility (CDR;
[10]). Yet, regardless of normative considerations, the historical development of CSR
shows that economic incentives are indispensable for organizations engaging in CSR
activities [11]. Accordingly, previous research has addressed how "doing good" (i.e.,
being ethical) and "doing well" (i.e., making profit) could be reconciled [12, 13] and
identified arguments in the "business case for CSR" [14]. This raises the question: which
economic incentives exist for organizations to voluntarily commit to ethical AI?
One such economic incentive could be a competitive advantage in attracting and
retaining talent [14] which is one of the most important factors for sustained business
success [15]. Due to demographic developments and changing demands in the job
market, the competition among organizations for recruiting talented employees has
intensified [16, 17]. In this competition, CSR initiatives (e.g., sustainable practices) were found
to increase organizational attractiveness and employer attractiveness [18, 19] as well as
job choice intentions [20, 21]. Moreover, Ronda et al. found that CSR is a
non-negotiable attribute for some applicants: if a company did not meet CSR requirements,
job offers were rejected in 31% of the cases, regardless of other attributes [22]. Thus,
CSR serves as a competitive advantage for attracting talent [23, 24]. Here,
organizational attractiveness refers to one’s (positive) attitude toward an organization
and perceived desirability of entering an employment relationship. The effect of CSR
on organizational attractiveness has been explained with signaling theory [25, 26]
which assumes that CSR initiatives convey information about the companies’ values
and practices. The effect on the perceived organizational attractiveness of prospective
applicants is mediated through perceived value fit with an organization, anticipated
pride of working for an organization, and expected treatment in an organization [18].
Against this backdrop, we suggest that commitment to ethical AI, as a part of CSR,
could signal desirable qualities of an organization and thus serve as a competitive
advantage in attracting and retaining talent. Accordingly, we formulate the
following research question: How does signaling commitment to ethical AI impact
organizational attractiveness?
To answer this research question, we draw on signaling theory and the CSR and
organizational attractiveness literature [9, 18, 26] and follow a mixed-methods approach
to i) identify signals of commitment to ethical AI based on a review of corporate
websites and an interview study and ii) examine the effect of these signals on
organizational attractiveness in an online experiment. Here, we present our approach
and first results for identifying ethical AI signals and the design for the online
experiment. Our research will contribute to the conceptualization of CSR regarding
ethical AI initiatives, empirically test the model of signaling mechanisms by Jones and
colleagues [18] in a new context, and support managers of digital transformation
processes when weighing the costs and benefits of ethical AI initiatives. It could present
a strategy for doing well by doing good [12] and synergistically achieving instrumental
(i.e., increasing profit through improved talent attraction) and humanistic (i.e., social
welfare through a focus on ethical AI) outcomes when developing or deploying AI
systems in organizations [cf. 27].
2 Research Design
2.1 Signaling commitment to ethical AI
To identify signals of commitment to ethical AI, we reviewed the websites of
companies which i) develop and/or apply AI technologies and ii) are listed among “The
2021 World’s Most Ethical Companies” by the Ethisphere Institute. The rating
evaluates the company’s i) Ethics and Compliance Program, ii) Culture of Ethics, iii)
Corporate Citizenship and Responsibility, iv) Governance, and v) Leadership and
Reputation based on company-reported data, supplementary documentation, publicly
available information, and, if necessary, additional research. While the rating is not
focused on ethical AI specifically, we expected that a software, IT, or technology
organization ranking highly in these areas of ethical conduct is also likely to be committed to ethical
AI. Thus, we expected that the online presence of such companies would provide
informative examples for signaling commitment to ethical AI to relevant stakeholders.
From the 2021 list, we selected companies from the industries "Software &
Services", "Information Technology Services", and "Technology" which indicated on
their website that they develop or use AI technology (i.e., Infosys and Wipro (IND);
Dell Technologies, Hewlett Packard Enterprise, IBM, Leidos, Microsoft, Salesforce,
and Workday (USA)). The websites of these companies were reviewed for information
related to costly initiatives in the field of AI technology and ethics. According to
signaling theory, a signal only conveys information to the recipient if it is costly.
Otherwise, it could be acquired by anyone and thus would lose its informational quality
[25]. Zerbini [26] developed an overview of CSR signals and distinguishes between
dissipative costs (i.e., costs that must always be paid to acquire a signal, for example
hiring an Ethics Officer) and penalty costs (i.e., costs that must be paid only if signals
turn out to be untrue, for example if a company is sued for not following its own code
of ethics). Table 1 shows exemplary signals retrieved from the websites of IBM and
Salesforce and their classification based on Zerbini [26].
To validate, prioritize, and potentially complement the list of identified ethical AI
signals, semi-structured interviews will be conducted with 3-5 individuals each from i)
Human Resources or Management, ii) Business Ethics, and iii) prospective applicants
in the technology sector. The first part of the semi-structured interview includes
questions about the background and position of the interviewee, the perceived relevance
of an organization's ethical behavior in job choice, and whether they can think of
organizational initiatives that make an organization appear more ethical to them. In the second part, the
identified ethical AI signals will be discussed with four guiding questions: How do you
perceive the costs or difficulty of implementing or acquiring the signal? How does the
signal impact organizational attractiveness for you? How relevant do you consider the
signal from an ethical or societal point of view? What would make this signal
(in)sincere for you? The interviews will be transcribed and coded according to
qualitative content analysis [28]. A subset of ethical AI signals will then be
implemented on the website of a fictitious technology company called "Cladus",
as a corporate website is often the first point of contact for job seekers.
Table 1. Examples of signaling commitment to ethical AI

Observable Signals | Classification based on [26]; new signals
Chief Ethical and Humane Use Officer and "Office of Ethical and Humane Use of Technology" with advisory council | Ethics officer; Ethics committee
Guiding principles (e.g., privacy, safety) and AI ethics commitment (e.g., accountable, transparent) | Code of ethics
Certifications, standards, regulations (e.g., ISO 27018 for data privacy) | Trust marks
Building awareness for employees (e.g., consequence scanning) | Training programs
AI ethics board, IBM Policy Lab | Ethics committee(s)
Trust and transparency principles (e.g., augment, not replace; explainability) | Code of ethics
Open-source software toolkits (e.g., AI Fairness 360 to find biases) | Corporate disclosure (knowledge sharing)
European Commission Expert Group on AI, Global Partnership on AI, IEEE Global Initiative on AI Ethics | Trust marks
TechEthicsLab research collaboration (with University of Notre Dame) | New signal
Self-restriction not to develop general facial recognition software until legal framework is refined | New signal
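The cost-based distinction from Zerbini [26] can be sketched as a small data structure. The signal descriptions follow Table 1, but the per-signal cost assignments below are our illustrative assumptions (the source only gives the Ethics Officer and code-of-ethics examples):

```python
from dataclasses import dataclass
from enum import Enum

class Cost(Enum):
    DISSIPATIVE = "must always be paid to acquire the signal"
    PENALTY = "must be paid only if the signal turns out to be untrue"
    NEW = "candidate signal type not yet covered by [26]"

@dataclass(frozen=True)
class Signal:
    description: str
    zerbini_class: str
    cost: Cost

# Illustrative subset of Table 1; cost assignments are assumptions.
signals = [
    Signal("AI ethics board / Policy Lab", "Ethics committee", Cost.DISSIPATIVE),
    Signal("Trust and transparency principles", "Code of ethics", Cost.PENALTY),
    Signal("Open-source fairness toolkits", "Corporate disclosure", Cost.DISSIPATIVE),
    Signal("Self-restriction on facial recognition", "Self-restriction", Cost.NEW),
]

# Signals whose informational value rests on penalty costs
penalty_signals = [s.description for s in signals if s.cost is Cost.PENALTY]
print(penalty_signals)
```

Such a structure makes explicit which signals are costly up front and which become costly only if they are revealed as insincere, mirroring the moderating role of signal quality discussed below.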
2.2 Impact of ethical AI signals on organizational attractiveness
The empirical evaluation of the website is based on the theoretical model by Jones et
al. [18], as we investigate whether the identified signals of commitment to ethical AI
increase organizational attractiveness both directly and mediated by anticipated
pride/organizational prestige, perceived value fit, and expected treatment. Additionally,
as insincerity of the signals might undermine the effect [26, 29], we include perceived
signal quality as a moderator of the relationship. We formulate the following
hypotheses (visualized in Figure 1):
H1a-c: Signals of commitment to ethical AI increase a) the anticipated
pride/organizational prestige, b) the perceived value fit, and c) the expected treatment.
H2a-c: The effect of the signals of commitment to ethical AI on a) the anticipated
pride/organizational prestige, b) the perceived value fit, and c) expected treatment is
positively moderated by a high perceived signal quality.
H3a-c: The a) anticipated pride/organizational prestige, b) perceived value fit, and c)
expected treatment increase the perceived organizational attractiveness.
H4: Signals of commitment to ethical AI increase the perceived organizational
attractiveness.
Figure 1. Adapted research model [18] of the effects of signals of commitment to ethical AI on
organizational attractiveness
For the main study, we plan to recruit at least 200 participants (matching the N = 180 in
Jones et al. [18]) who have an educational or professional background in IT. In a
between-groups design, the participants will be asked to imagine that they are looking
for a new job and want to evaluate whether Cladus would be a suitable employer. There will
be three groups with different websites: group 1 (baseline), group 2 (ethical AI signals),
and group 3 (ethical AI signals + general CSR information). This allows for quantifying
the added value of ethical AI commitment. For realism, only positive ethical AI signals
are included on the website, and several are presented in combination. Other potentially
relevant factors for job choice (e.g., salary) are mentioned on the website in all conditions.
Following the methodological approach of Jones et al. [18], we use the same scales for measuring
anticipated pride [30], perceived value fit [18], expected treatment [18], organizational
attractiveness [31], and will derive questions for perceived signal quality from related
measures. For analyzing the data, we aim to conduct multiple regression analyses,
including the examination of mediator and moderator effects as visualized in Figure 1.
We also aim to examine potential group differences that might arise from the applied
signals. Furthermore, as other studies found CSR to be especially important for
attracting millennials [9] and women [22], we will consider individual demographics
(age, gender, AI experience) to exploratively check for group influences.
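The planned regression analysis can be sketched on simulated data. All variable names, effect sizes, and the single mediator shown here are illustrative assumptions; the actual study will use the full set of mediators and the validated scales cited above:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200  # planned minimum sample size

# Simulated stand-in data for the experiment (effect sizes are invented).
signal = rng.integers(0, 2, n)             # 0 = baseline, 1 = ethical AI signals
quality = rng.normal(0, 1, n)              # perceived signal quality (moderator)
value_fit = 0.5 * signal + 0.3 * signal * quality + rng.normal(0, 1, n)
attract = 0.4 * value_fit + 0.2 * signal + rng.normal(0, 1, n)

df = pd.DataFrame({"signal": signal, "quality": quality,
                   "value_fit": value_fit, "attract": attract})

# H1b/H2b: signal -> perceived value fit, moderated by signal quality
med_model = smf.ols("value_fit ~ signal * quality", data=df).fit()

# H3b/H4: mediator and signal -> organizational attractiveness
out_model = smf.ols("attract ~ value_fit + signal", data=df).fit()

# Simple product-of-coefficients estimate of the mediated effect
indirect = med_model.params["signal"] * out_model.params["value_fit"]
print(med_model.params["signal:quality"], indirect)
```

In the actual analysis, bootstrapped confidence intervals for the indirect effects and separate models per mediator would be more appropriate than this minimal product-of-coefficients sketch.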
3 Conclusion
This short paper proposes a study that addresses the research gap regarding the role of
ethical AI as a part of CSR and as a possible economic incentive for organizations to
commit to ethical AI. Organizations drive AI innovation and use, and their choices have
implications for society and individuals. On a theoretical level, the study will contribute
to the understanding of signal-based mechanisms and organizational attractiveness by
transferring Jones et al.'s [18] model to the context of ethical AI and additionally
considering the role of perceived signal quality. It will also add to the conceptualization
of CSR in research to include ethical AI and potentially add types of signals (e.g., self-
restriction) to existing overviews [26].
4 References

1. Dwivedi, Y.K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., et al.: Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, vol. 57, 101994 (2021). doi: 10.1016/j.ijinfomgt.2019.08.002
2. Borges, A.F., Laurindo, F.J., Spínola, M.M., Gonçalves, R.F., Mattos, C.A.: The strategic use of artificial intelligence in the digital era: Systematic literature review and future research directions. International Journal of Information Management, vol. 57, 102225 (2021). doi: 10.1016/j.ijinfomgt.2020.102225
3. Frick, N., Brünker, F., Ross, B., Stieglitz, S.: Design requirements for AI-based services enriching legacy information systems in enterprises: A managerial perspective. In: Proceedings of the 31st Australasian Conference on Information Systems (ACIS), pp. 14 (2020)
4. Benbya, H., Pachidi, S., Jarvenpaa, S.L.: Special issue editorial: Artificial intelligence in organizations: Implications for information systems research. Journal of the Association for Information Systems, vol. 22, 281–303 (2021)
5. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., et al.: AI4People – An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, vol. 28, 689–707 (2018)
6. Asatiani, A., Malo, P., Nagbøl, P.R., Penttinen, E., Rinta-Kahila, T., Salovaara, A.: Sociotechnical Envelopment of Artificial Intelligence: An Approach to Organizational Deployment of Inscrutable Artificial Intelligence Systems. Journal of the Association for Information Systems, vol. 22, 8 (2021)
7. Martin, K.: Designing Ethical Algorithms. MIS Quarterly Executive, vol. 18, 129–142 (2019). doi: 10.17705/2msqe.00012
8. Bowen, H.R.: Social responsibility of the businessman. Harper, New York, NY
9. Waples, C.J., Brachle, B.J.: Recruiting millennials: Exploring the impact of CSR involvement and pay signaling on organizational attractiveness. Corporate Social Responsibility and Environmental Management, vol. 27, 870–880 (2020)
10. Lobschat, L., Mueller, B., Eggers, F., Brandimarte, L., Diefenbach, S., Kroschke, M., Wirtz, J.: Corporate digital responsibility. Journal of Business Research, vol. 122, 875–888 (2021). doi: 10.1016/j.jbusres.2019.10.006
11. Bansal, P., Song, H.-C.: Similar But Not the Same: Differentiating Corporate Sustainability from Corporate Responsibility. Academy of Management Annals, vol. 11, 105–149 (2016). doi: 10.5465/annals.2015.0095
12. Falck, O., Heblich, S.: Corporate social responsibility: Doing well by doing good. Business Horizons, vol. 50, 247–254 (2007). doi: 10.1016/j.bushor.2006.12.002
13. Yang, X., Li, Y., Kang, L.: Reconciling "doing good" and "doing well" in organizations' green IT initiatives: A multi-case analysis. International Journal of Information Management, vol. 51, 102052 (2020)
14. Carroll, A.B., Shabana, K.M.: The Business Case for Corporate Social Responsibility: A Review of Concepts, Research and Practice. International Journal of Management Reviews, vol. 12, 85–105 (2010). doi: 10.1111/j.1468-
15. Rynes, S.L., Barber, A.E.: Applicant Attraction Strategies: An Organizational Perspective. Academy of Management Review, vol. 15, 286–310 (1990)
16. Celani, A., Singh, P.: Signaling theory and applicant attraction outcomes. Personnel Review, vol. 40, 222–238 (2011). doi: 10.1108/00483481111106093
17. Evertz, L., Süß, S.: The importance of individual differences for applicant attraction: a literature review and avenues for future research. Management Review Quarterly, vol. 67, 141–174 (2017). doi: 10.1007/s11301-017-0126-2
18. Jones, D.A., Willness, C.R., Madey, S.: Why Are Job Seekers Attracted by Corporate Social Performance? Experimental and Field Tests of Three Signal-Based Mechanisms. Academy of Management Journal, vol. 57, 383–404 (2014). doi: 10.5465/amj.2011.0848
19. Klimkiewicz, K., Oltra, V.: Does CSR Enhance Employer Attractiveness? The Role of Millennial Job Seekers' Attitudes. Corporate Social Responsibility and Environmental Management, vol. 24, 449–463 (2017). doi: 10.1002/csr.1419
20. Dawkins, C.E., Jamali, D., Karam, C., Lin, L., Zhao, J.: Corporate Social Responsibility and Job Choice Intentions: A Cross-Cultural Analysis. Business & Society, vol. 55, 854–888 (2016). doi: 10.1177/0007650314564783
21. Osburg, V.-S., Yoganathan, V., Bartikowski, B., Liu, H., Strack, M.: Effects of Ethical Certification and Ethical eWoM on Talent Attraction. Journal of Business Ethics, vol. 164, 535–548 (2020). doi: 10.1007/s10551-018-4018-8
22. Ronda, L., Abril, C., Valor, C.: Job choice decisions: understanding the role of nonnegotiable attributes and trade-offs in effective segmentation. Management Decision, vol. 59, 1546–1561 (2020). doi: 10.1108/MD-10-2019-1472
23. Bhattacharya, C.B., Sen, S., Korschun, D.: Using corporate social responsibility to win the war for talent. MIT Sloan Management Review, vol. 49, 37–44 (2008)
24. Greening, D.W., Turban, D.B.: Corporate Social Performance As a Competitive Advantage in Attracting a Quality Workforce. Business & Society, vol. 39, 254–280 (2000). doi: 10.1177/000765030003900302
25. Spence, M.: Job Market Signaling. The Quarterly Journal of Economics, vol. 87, 355–374 (1973). doi: 10.2307/1882010
26. Zerbini, F.: CSR Initiatives as Market Signals: A Review and Research Agenda. Journal of Business Ethics, vol. 146, 1–23 (2017). doi: 10.1007/s10551-015-
27. Sarker, S., Chatterjee, S., Xiao, X., Elbanna, A.: The sociotechnical axis of cohesion: its historical development and its continued relevance. MIS Quarterly, vol. 43, 695–719 (2019). doi: 10.17705/1jais.00664
28. Mayring, P.: Qualitative Content Analysis: Theoretical Background and Procedures. In: Bikner-Ahsbahs, A., Knipping, C., Presmeg, N. (eds.) Approaches to Qualitative Research in Mathematics Education: Examples of Methodology and Methods, pp. 365–380. Springer Netherlands, Dordrecht (2015). doi: 10.1007/978-94-017-9181-6_13
29. Carlini, J., Grace, D., France, C., Lo Iacono, J.: The corporate social responsibility (CSR) employer brand process: integrative review and comprehensive model. Journal of Marketing Management, vol. 35, 182–205 (2019). doi: 10.1080/0267257X.2019.1569549
30. Cable, D.M., Turban, D.B.: The Value of Organizational Reputation in the Recruitment Context: A Brand-Equity Perspective. Journal of Applied Social Psychology, vol. 33, 2244–2266 (2003). doi: 10.1111/j.1559-
31. Highhouse, S., Lievens, F., Sinar, E.F.: Measuring Attraction to Organizations. Educational and Psychological Measurement, vol. 63, 986–1001 (2003)
... Yes Clausen et al. 2022 AI, recruitment, ethics T  AI recruitment systems are advertised as less biased and prone to variance than human-led hiring processes, and an effective automation tool as recruitment initiatives in certain fields become increasingly competitive.  However, potential procedural biases and ethical conflicts within such AI systems can have serious implications for a candidates' income, where they live, career and life trajectory, and equal opportunity status. ...
...  How are positive CDR practices promoted by service firms perceived as a comparative advantage during recruitment activities (Clausen et al. 2022)? ...
Full-text available
Digitization, artificial intelligence, and service robots carry serious ethical, privacy, and fairness risks. Using the lens of corporate digital responsibility (CDR), we examine these risks and their mitigation in service firms and make five contributions. First, we show that CDR is critical in service contexts because of the vast streams of customer data involved and digital service technology's omnipresence, opacity, and complexity. Second, we synthesize the ethics, privacy, and fairness literature using the CDR data and technology life-cycle perspective to understand better the nature of these risks in a service context. Third, to provide insights on the origins of these risks, we examine the digital service ecosystem and the related flows of money, service, data, insights, and technologies. Fourth, we deduct that the underlying causes of CDR issues are trade-offs between good CDR practices and organizational objectives (e.g., profit opportunities versus CDR risks) and introduce the CDR calculus to capture this. We also conclude that regulation will need to step in when a firm's CDR calculus becomes so negative that good CDR is unlikely. Finally, we advance a set of strategies, tools, and practices service firms can use to manage these trade-offs and build a strong CDR culture.
...  How are positive CDR practices promoted by organizations perceived and used for talent recruitment (Clausen et al. 2022)? ...
Full-text available
Calls for Papers (CfP) for Special Issue on Corporate Digital Responsibility (CDR) to be published in @Organizational Dynamics (OD). OD has a strong applied focus. As such, we aim to publish 10 to 12 insightful articles that introduce CDR-related issues to executives and MBA students. We welcome reviews, qualitative and conceptual papers, and in-depth case studies (also based on company collaborations). Potential topics are outlined in the CfP posted in ResearchGate and for information on OD see comments.
... With our research, we add to recent research on ICT-related stress (e.g., Tarafdar et al., 2019), the emerging IS research stream on CDR (e.g., Clausen et al., 2022;Mihale-Wilson et al., 2022), and extend previous research on CSR initiatives as market signals (Zerbini, 2017) to the context of CDR. Specifically, our findings show that while knowledge workers experience negative impacts of ICT, they struggle with implementing digital wellbeing initiatives in practice. ...
Conference Paper
Knowledge workers increasingly rely on information and communication technologies (ICT) in their work. If not managed effectively, this shift can reduce workers' wellbeing and performance. Accordingly, research on corporate digital responsibility (CDR) urges organizations to implement digital wellbeing initiatives to protect workers. In this research, we investigate which digital wellbeing initiatives are offered by organizations, expected by knowledge workers, and whether such initiatives might provide economic returns in the form of improved organizational attractiveness. Based on signaling theory and following a multi-method approach, we identify digital wellbeing initiatives from websites and social media posts of 25 technology companies and conduct semi-structured interviews with 10 students and young professionals. We discuss the conceptualization of digital wellbeing and the role of digital wellbeing for organizational attractiveness. Our findings provide a starting point for investigating business cases for CDR and can advance understanding and implementation of digital wellbeing both in research and practice.
... This dimension also includes the competitive impact of defining and implementing CDR. Early conceptual work suggests that customers, future talent, and investors will be favorably influenced in their decision making when an organization adopts a CDR regime and exhibits compliant behaviors (Lobschat et al. 2021); a suggestion that is supported by emergent empirical work (e.g., Clausen et al. 2022;Mihale-Wilson et al. 2021). ...
Full-text available
The paper presents an approach for implementing inscrutable (i.e., nonexplainable) artificial intelligence (AI) such as neural networks in an accountable and safe manner in organizational settings. Drawing on an exploratory case study and the recently proposed concept of envelopment, it describes a case of an organization successfully "enveloping" its AI solutions to balance the performance benefits of flexible AI models with the risks that inscrutable models can entail. The authors present several envelopment methods-establishing clear boundaries within which the AI is to interact with its surroundings, choosing and curating the training data well, and appropriately managing input and output sources-alongside their influence on the choice of AI models within the organization. This work makes two key contributions: It introduces the concept of sociotechnical envelopment by demonstrating the ways in which an organization's successful AI envelopment depends on the interaction of social and technical factors, thus extending the literature's focus beyond mere technical issues. Secondly, the empirical examples illustrate how operationalizing a sociotechnical envelopment enables an organization to manage the trade-off between low explainability and high performance presented by inscrutable models. These contributions pave the way for more responsible, accountable AI implementations in organizations, whereby humans can gain better control of even inscrutable machine-learning models.
Full-text available
Artificial intelligence (AI) technologies offer novel, distinctive opportunities and pose new significant challenges to organizations that set them apart from other forms of digital technologies. This article discusses the distinct effects of AI technologies in organizations, the tensions they raise and the opportunities they present for information systems (IS) research. We explore these opportunities in term of four business capabilities: automation, engagement, insight/decision making and innovation. We discuss the differentiated effects that AI brings about and the implications for future IS research.
Full-text available
Purpose
This research draws upon decision-making theory to study job choice decisions. Past studies measured job choice as a single-stage, compositional process addressing the weights and part-worth utilities of a selected number of job and organizational attributes. However, the presence of noncompensatory attributes, and whether the utilities and weights attached to the attributes vary among applicants, have not been addressed. The authors posit that conjoint analysis is an accurate methodological technique to explain job choice and overcome these limitations.
Design/methodology/approach
Using a random sample of 571 participants, we conducted an adaptive choice-based conjoint analysis to estimate the weighted utilities of eight employer attributes and a cluster analysis to identify differences in preferences among employee profiles.
Findings
The results reveal that the use of the conjoint technique contributes to the literature in two ways. First, the results demonstrate the relevance of nonnegotiable attributes in the design of job offers; Salary, Flexibility and Ethics serve as cutoff points. Second, the results highlight the importance of considering the latent preferences of applicants in crafting effective job offers and adequately segmenting job applicants. More specifically, three groups are identified: Career-seeking applicants, Sustainability-oriented applicants and Pragmatic applicants.
Practical implications
The managerial implications of this study are relevant for HR and employer brand managers, since a better understanding of the job-choice process and the implementation of a decompositional method to understand applicants' preferences could allow firms to provide more customized and relevant job offers to employees of interest.
Originality/value
This study concludes that to implement efficient employer-attraction branding strategies, employers should understand the attributes considered noncompensatory by their employee target audience, promote the most valued/important attributes to ensure that job offers are customized to fit employees' underlying preferences, and devise trade-off strategies among compensatory attributes.
Conference Paper
Information systems (IS) have been introduced in enterprises for decades to generate business value. Historically, systems that are deeply integrated into business processes and never replaced remain vital assets and thus become legacy IS (LISs). To secure future success, enterprises invest in innovative technologies such as artificial intelligence-based services (AIBSs), enriching LISs and assisting employees in the execution of work-related tasks. This study develops design requirements from a managerial perspective following a mixed-methods approach. First, we conducted ten interviews to formulate requirements for the design of AIBSs. Second, we evaluated their business value using an online survey (N = 101). The results indicate that executives consider those design requirements relevant that create strategic advancements in the short term. With the help of our findings, researchers can better understand where further in-depth studies are needed to refine the requirements, and practitioners can learn how AIBSs generate business value when enriching LISs.
We observe that digital technologies and related data are becoming increasingly prevalent and that, consequently, ethical concerns are arising. Looking at four principal stakeholders, we propose corporate digital responsibility (CDR) as a novel concept. Specifically, we define CDR as the set of shared values and norms guiding an organization's operations with respect to four main processes related to digital technology and data: (1) creation of technology and data capture, (2) operation and decision making, (3) inspection and impact assessment, and (4) refinement of technology and data. On this basis, we expand our discussion of CDR by highlighting how to managerially effectuate CDR-compliant behavior from an organizational culture perspective. Our proposed conceptualization of CDR unlocks future research opportunities related to refining and expanding the concept, especially regarding pertinent antecedents and consequences. Managerially, we shed first light on how an organization's shared values and norms regarding CDR can be translated into actionable guidelines for users. This provides grounds for future discussions of CDR readiness, implementation, and success.
Artificial intelligence (AI) tools have attracted attention from the literature and business organizations in the last decade, especially due to advances in machine learning techniques. However, despite the great potential of AI technologies for solving problems, issues remain in their practical use, and knowledge is lacking on how to use AI strategically to create business value. In this context, the present study aims to fill this gap by providing a critical literature review on the integration of AI into organizational strategy, synthesizing existing approaches and frameworks while highlighting potential benefits, challenges and opportunities, and presenting a discussion of future research directions. Through a systematic literature review, research articles were analyzed. Besides identifying gaps for future studies, a conceptual framework is presented and discussed according to four sources of value creation: (i) decision support; (ii) customer and employee engagement; (iii) automation; and (iv) new products and services. These findings contribute to both theoretical and managerial perspectives, with extensive opportunities for generating novel theory and new forms of management practice.
Organizations considering green IT initiatives seek to reconcile two significant goals: environmental sustainability ("doing good") and business profitability ("doing well"). These two purposes, however, are not necessarily aligned and can engender a dilemma in which investing to achieve environmental sustainability may threaten business profitability to some extent. This research therefore builds on the theoretical perspective of corporate ecological responsiveness and proposes three types of strategic drivers for green IT initiatives: economic, authority, and moral drivers. How various types of organizations are motivated by these drivers affects whether they can reconcile the objectives of environmental sustainability and business profitability. We further derive findings from multiple case studies of eight organizations in China and Singapore. The findings illustrate how the characteristics of a given organization influence why various driving factors affect its green IT initiative decisions. Additionally, "doing good" and "doing well" are reconciled when organizations weigh short-term investment against long-term benefit, design an appropriate strategy, and face strong external pressure. We also discuss contributions to research and practice.
Modern organizations must consider corporate social responsibility (CSR) and its implications. CSR involvement carries many potential benefits, including opportunities to promote stakeholder engagement with the organization, particularly among young people (i.e., millennials) in the emergent workforce. Collectively, millennials are often described as both socially active and self‐centered. These seemingly antithetical motives complicate the execution of optimal recruitment practice. This empirical study examined the impact of CSR activity and relative pay level signals on organizational attractiveness. Participants who were seeking or soon to be seeking employment in the United States responded to a hypothetical company profile. Confirming expectations, results revealed an effect of CSR information on organizational attractiveness, wherein notification of CSR involvement enhanced attractiveness. Pay level and CSR notification did not significantly interact, indicating that the effects of CSR on attractiveness are not moderated by information about pay levels. The implications of the results and future research recommendations are discussed.
As far back as the industrial revolution, significant technical innovation has succeeded in transforming numerous manual tasks and processes that had existed for decades, where humans had reached the limits of physical capacity. Artificial Intelligence (AI) offers the same transformative potential for the augmentation and potential replacement of human tasks and activities within a wide range of industrial, intellectual and social applications. The pace of change in this new AI technological age is staggering, with breakthroughs in algorithmic machine learning and autonomous decision-making engendering new opportunities for continued innovation. The impact of AI could be significant, with industries ranging from finance, healthcare, manufacturing and retail to supply chain, logistics and utilities all potentially disrupted by the onset of AI technologies. The study brings together the collective insight of a number of leading expert contributors to highlight the significant opportunities, realistic assessment of impact, challenges and potential research agenda posed by the rapid emergence of AI within a number of domains: business and management, government, public sector, and science and technology. This research offers significant and timely insight into AI technology and its impact on the future of industry and society in general, whilst recognising the societal and industrial influence on the pace and direction of AI development.