Designing, developing, and deploying artificial intelligence systems: Lessons from and for the public sector

Kevin C. Desouza, Gregory S. Dawson, Daniel Chenok

QUT Business School, Queensland University of Technology, 2 George Street, Brisbane City, QLD 4000, Australia
W.P. Carey School of Business, Arizona State University, Tempe, AZ 85287, U.S.A.
IBM Center for the Business of Government, IBM, U.S.A.

Business Horizons (xxxx) xxx, xxx
KEYWORDS
Artificial intelligence;
Cognitive computing;
Innovation management;
Technology adoption
Abstract  Artificial intelligence applications in cognitive computing systems can be found in organizations across every market, including chatbots that help customers navigate websites, predictive analytics systems used for fraud detection, and augmented decision-support systems for knowledge workers. In this article, we share reflections and insights from our experience with AI projects in the public sector that can add value to any organization. We organized our findings into four thematic domains: (1) data, (2) technology, (3) organizational, and (4) environmental. We examine them relative to the phases of AI. We conclude with best practices for capturing value with cognitive computing systems.

© 2019 Kelley School of Business, Indiana University. Published by Elsevier Inc. All rights reserved.
1. AI’s history and value capture
Research and development of artificial intelli-
gence (AI) systems have a rich history. There was a
flurry of interest when it was originally conceptu-
alized in the 1950s, but it quickly fizzled in the
face of AI’s technical realities. Advances in infor-
mation and computational sciences over the last
decade have provided the resources necessary to
finally capitalize on the early discoveries and
technology underpinning AI systems. As noted by
Kai-Fu Lee (2018, p. 12), our current innovations in
AI are “merely the application of the past decade’s
breakthrough – primarily deep learning but also complementary technologies like reinforcement learning and transfer learning – to new problems.”
Today, AI applications can be found in organizations across every sector and include chatbots that help customers navigate websites, predictive analytics systems used for fraud detection, augmented decision-support systems for knowledge workers, and semi- and fully-autonomous systems in sectors such as transportation and defense.
For the last 6 years, we studied AI systems
across public, private, and nonprofit sectors. Our
projects spanned multiple industry sectors,
including healthcare, law enforcement, educa-
tion, social services, defense, finance, manage-
ment consulting, and infrastructure engineering.
During these engagements, we had a firsthand
view of critical issues organizations must contend
with as they design, develop, deploy, and assess AI systems.
While most AI projects are complex, we assert
that those within the public sector present some
unique challenges:
The public sector must contend with complex
policy, societal, legal, and economic ele-
ments that might be skirted by their private-
sector counterparts;
Public-sector AI projects must advance the
public good (Cath, Wachter, Mittelstadt,
Taddeo, & Floridi, 2018) yet also deliver
public value (Crawford, 2016a);
These projects must go beyond simple cost
and efficiency gains to satisfy a richer and
diverse set of stakeholders who may have
conflicting agendas;
The need for transparency (Bryson &
Winfield, 2017; Edwards & Veale, 2017) and
fairness (Chouldechova, 2017; Crawford,
2016b) in decision making and system opera-
tions adds to the complexity of public-sector
AI projects; and
Given that public-sector projects and systems
are taxpayer-funded, these efforts face reg-
ular scrutiny and oversight that is generally
not seen in the private sector (BBC News, 2019).
While the aggregation of these factors is unique to
the public sector, individual components apply to
virtually every domain. For example, a nonprofit
may have a much more diverse group of stake-
holders than a privately held business, but a pri-
vately held business may have a higher
requirement for cost and efficiency consider-
ations. Thus, we believe that an examination of AI
in the public sector can provide insights applicable
to all organizations.
In this article, we share reflections and insights
from our experience with AI projects in the
public sector. Within the phases of AI system en-
gineering and implementation, we organized our
findings into four thematic domains: (1) data, (2) technology, (3) organizational, and (4) environmental. We examine these domains relative to the phases of AI.
2. Background on cognitive computing systems
AI systems are often considered part of a larger set
of cognitive computing systems (CCSs; Desouza,
2018). CCSs, as the name implies, have cognition
due to their learning functions. A CCS can learn in
one of two basic ways: supervised and unsuper-
vised. In supervised learning, each record in the
training dataset is tagged with its correct classifi-
cation so that the machine learns what makes a
record more or less likely to be in each group. In a
fraud detection exercise, each record is tagged as
either fraudulent or non-fraudulent, and the ma-
chine identifies other attributes in the record that
help to distinguish the two groups. The system may
find that individuals from a certain school (e.g.,
University of Arizona) are more likely to commit
fraud than those individuals from another school
(e.g., Arizona State University). The system looks
for the best distinguishing attributes to describe
one group versus another.
In unsupervised learning, the system discovers
previously unknown patterns or groupings in the
data. This is common when a state looks to better
understand its citizens. In this case, the machine
runs, unsupervised, to discover patterns and types
of people that exist within the state. A state may
discover four different types of citizens along with
their prevalence (e.g., few retirees and lots of
families with young children). A human interprets
the groupings that emerge; using this information,
the state can make better-informed decisions
about whether to fund more education or more
services for seniors. Choosing between supervised
and unsupervised learning is an exercise in un-
derstanding the trade-offs that exist between the
accuracy of the learning and its interpretability.
Thus, numerous selection considerations come
into play when making the final decision (Lee &
Shin, 2020).
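The contrast between the two learning modes can be sketched in a few lines of code. The data, group names, and the two tiny learners below are illustrative stand-ins for the fraud-detection and citizen-segmentation examples above, not part of any production CCS:

```python
# Minimal contrast between supervised and unsupervised learning,
# using tiny hand-made datasets (all names and values are illustrative).

def train_supervised(records):
    """Learn, per attribute value, which label is most common.

    Each record is (attribute_value, label), e.g. ("school_A", "fraud").
    Returns a lookup table mapping attribute value -> predicted label.
    """
    counts = {}
    for value, label in records:
        counts.setdefault(value, {}).setdefault(label, 0)
        counts[value][label] += 1
    return {v: max(labels, key=labels.get) for v, labels in counts.items()}

def cluster_1d(values, k, iterations=10):
    """Unsupervised 1-D k-means: discover k groups with no labels given."""
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iterations):
        groups = [[] for _ in centroids]
        for v in values:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(v - centroids[i]))
            groups[nearest].append(v)
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    return sorted(centroids)

# Supervised: each record carries its correct label, and a rule is learned.
labeled = [("school_A", "fraud"), ("school_A", "fraud"),
           ("school_A", "ok"), ("school_B", "ok"), ("school_B", "ok")]
rule = train_supervised(labeled)
print(rule["school_A"], rule["school_B"])  # fraud ok

# Unsupervised: no labels; the system proposes groupings (here, by age).
ages = [5, 7, 9, 34, 36, 38, 67, 70, 72]
print(cluster_1d(ages, k=3))
```

The supervised rule can be read directly, while the unsupervised clusters still need a human to interpret what each group represents, echoing the accuracy-versus-interpretability trade-off noted above.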
Several characteristics differentiate CCSs from other systems. These five characteristics drive development and deployment choices:

1. CCSs learn from both data and human interactions, and both are required for successful deployment;

2. CCSs are context-sensitive and draw on environmental characteristics (e.g., user profiles and previous interactions) to deal with information requests;

3. CCSs recall history (i.e., previous interactions) in developing recommendations and groupings;

4. CCSs interact with humans through natural language processing; and

5. CCSs provide confidence-weighted recommendations (i.e., outcomes) that can be acted upon by humans.
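The fifth characteristic, for example, implies that a CCS surfaces each recommendation together with a confidence score so a human can decide whether to act on it. A minimal sketch of that handoff (the actions, scores, and review threshold below are illustrative assumptions, not from any specific system):

```python
# Sketch of a confidence-weighted recommendation step (characteristic 5).
# The actions, scores, and review threshold are illustrative only.

def recommend(scored_options, review_threshold=0.7):
    """Return the top option and whether a human should review it.

    `scored_options` maps a candidate action to the system's confidence.
    Low-confidence results are flagged for human review rather than
    acted upon automatically.
    """
    best = max(scored_options, key=scored_options.get)
    confidence = scored_options[best]
    return best, confidence, confidence < review_threshold

options = {"approve claim": 0.82, "request documents": 0.13, "deny claim": 0.05}
action, confidence, needs_review = recommend(options)
print(action, confidence, needs_review)  # approve claim 0.82 False
```

Keeping the confidence attached to the outcome, rather than returning a bare answer, is what lets a human operator treat the system as decision support rather than a decision maker.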
Numerous examples of successful CCS deployments
exist in the public sector (Desouza, 2018; Mehr,
2017). The U.S. government has implemented an
AI-based chatbot app that helps potential refugees
to the U.S. answer a series of questions to help
determine required forms as well as assess whether
a refugee is eligible for protection. North Carolina
uses AI-based chatbots to answer basic help center questions, which make up approximately 90% of its phone calls. The Mexican government is piloting an AI tool to classify citizen petitions and then route them to the correct department. Finally, the organizers of the Pyeongchang Winter Olympics developed and deployed an AI-based tool that operated in real time.
However, there have been some notable AI
failures. Police departments have purchased an AI-
based tool that executes real-time facial recogni-
tion. Unfortunately, the tool returned a high
number of false positives and falsely matched 28
members of Congress with mugshots of unrelated
individuals (Santamicone, 2019). The City of Chicago developed an algorithm to identify the people most likely to be involved in a shooting and to stop them from making a firearm purchase. Yet, a report by the
RAND Corporation showed that the output from
the tool is less effective than a most-wanted list
and, even worse, targets innocent citizens for
police attention (Stroud, 2016).
These examples show the power of CCS
deployment in the public sector as well as the
potential risks. It is not surprising that technology
experts generally agree on the power of AI to
transform the economy and society, but remain
sharply divided over whether the transformation
will be helpful or harmful (Kietzmann & Pitt,
2020). Managers are responsible for deciding if,
where, why, and how AI should be adopted so as to
capture its benefits while mitigating its risks
(Kietzmann & Pitt, 2020).
3. Designing cognitive computing systems
Organizations need to be strategic when designing
CCSs and choose between a low-hanging-fruit
initiative and a bold challenge. Then, they need
to ensure the proper infrastructure is in place,
both internally and externally, to complete the
project successfully.
3.1. Low hanging fruit vs. grand challenge
With a low-hanging-fruit initiative, the organiza-
tion might look at a problem for which automation
through the deployment of a CCS can alleviate
mundane work and increase efficiencies. In a
grand challenge, the organization can look at how
CCSs open opportunities for business model
transformation and disruptive innovation. Indeed,
the use of CCSs can play a key role in developing
innovation through the judicious application of
analytics (Kakatkar, Bilgram, & Füller, 2020).
Organizations with immature information sys-
tems capability often benefit from tackling low
hanging fruit first: a wise strategy for all types of systems development but even more appropriate for a CCS. This allows them the opportunity to gain
experience working through the various phases of
system development and deployment. They can
build on existing data assets and system capabil-
ities while extending their current knowledge base
and expertise. However, organizations that have
deep in-house expertise in information system
development and/or have sufficient IT resources
can successfully engage in a bold challenge. Here,
the organization can leverage the collective wis-
dom to identify key opportunities for CCS design
that align with internal or environmental disrup-
tions. As part of this initiative, the organization
can launch an educational campaign to increase
the workforce’s familiarity with cognitive systems
techniques and applications. This does not mean
that the organization should only look internally
for necessary resources. All organizations, but
particularly public-sector organizations, have a
rich ecosystem of commercial partners that can
contribute to the effort.
Technologically sophisticated organizations
ready to approach a bold challenge are the
exception rather than the rule. While these
advanced firms infuse innovation into their
corporate DNA, most organizations lack the
maturity necessary to undertake a major cognitive
computing effort without first testing the waters
and improving with easier, lower-risk projects.
Finding small success with these low hanging fruit
will help the organization to mature and build
some credibility before tackling a bold challenge.
3.2. Ensure necessary capabilities
At the start, organizations must take a close look
at data, technological, organizational, and envi-
ronmental elements to determine if any CCS
implementation is feasible and, if so, which type
will most likely yield a successful result. Indeed,
care must be taken to see that the proposed so-
lution does benefit the organization since poorly
planned or conceived CCSs can hinder rather than
help an organization’s value chain (Canhoto &
Clear, 2020).
3.2.1. Data
The organization must determine whether the
data are available, accessible, and analyzable in
order to take advantage of CCS algorithms. These
issues need to be considered carefully if the or-
ganization is tackling an accessible opportunity.
While there will be costs associated with readying
data due to cleaning and connecting (i.e., inte-
grating), they should not be exorbitant. However,
if the data are not available, it could be a sizable
and costly effort to find, validate, and incorporate
the data into the CCS scheme. Absent available
data, the effort may be classified as a grand challenge.
In the case of grand challenges, ideas should be
solicited regarding what data to analyze, what
data sources need to be procured and integrated,
and what major trials the organization will
confront as data is readied. Within the public
sector, processes have evolved to deal with prop-
erly communicating the data risks of these grand
challenges before the effort begins. Unfortu-
nately, the public sector has perhaps over-
engineered this process, which could explain why these efforts take so much time.
If a private-sector organization goes after a
grand challenge, it needs to develop and imple-
ment a repeatable process for properly under-
standing the advantages and disadvantages of
collecting such data. In addition to the simple
technical aspects of collecting, cleansing, and
deploying the data, legal and ethical implications
must be considered. The stakes with these pro-
jects are high, and it is often helpful to collaborate
with outside experts to steer the data procure-
ment process although they should have no direct
or indirect role in the implementation to avoid
conflicting goals.
3.2.2. Technology
On the technological front, the organization must
have a good handle on its current IT assets both
from an infrastructure and capability perspective.
Regarding infrastructure, does the organization
have the necessary IT applications and allied
technical resources to undertake a CCS develop-
ment effort? If not, can the organization foster
existing or new partnerships to access the neces-
sary technical resources? Governments are
particularly adept at identifying skilled partners
support the work. A private-sector organization
can also do more to leverage other potential
partners who can provide the missing resources
and, ideally, share in the risk and the reward of
the initiative. Some of the public agencies we
studied use a risk-versus-value determination for
aligning with partners. Risk can be high or low and
value can be high or low (see Table 1), and the
particular result can drive the partnership approach.
In the case of government, the most salient
domain is risk; the natural tendency is to try to
minimize risk rather than maximize value. In
contrast, private-sector organizations are more
likely to prioritize value over risk and that can
alter their approach. The organization needs to
take a careful look at what expertise/knowledge
should be developed internally versus what can be outsourced.
As we will discuss later, CCSs require training
from human experts and should be subject to au-
dits; organizations should carefully consider how
they continue to develop technical capabilities to
(1) retain the expertise to design the next generation of CCSs, and (2) investigate, fix, and
learn from CCS deployments. If too much work is
outsourced, the organization faces costly support
for future deployments, and this may be
Table 1. Risk and value typology

              Low value        High value
Low risk      Explore          Perform work
High risk     Outsource work   Explore
particularly acute in private-sector organizations that focus more on value than on risk.
3.2.3. Organization
The organizational front looks internally to un-
derstand the organization’s current strengths and
weaknesses. For organizations new to CCSs, the
organizational front is considerably more chal-
lenging. Organizations often have trouble assessing
their own capabilities in a candid way.
This organizational front problem is even more
acute in government due to the long tenure of
most government managers and their frequent
lack of comparative outside experience. Without
an external frame of reference, an appropriate
assessment is more challenging. Agencies have
made some progress in this area through data
sharing, which helps because all levels of government face similar challenges and limitations.
In contrast, very few people in the private
sector have spent their careers with just one firm,
and most managers have a frame of reference
against which to evaluate their current employer.
If the organizational assessment reveals a lack of
maturity, the organization can simply buy/hire or
rent/contract with firms or individuals with the
necessary skills.
3.2.4. Environment
The environment front looks at any efforts un-
derway at other companies in the same or adja-
cent industries. The goal of the environment scan
also differs for public-sector versus private-sector
organizations due to the nature of the domains. A
realization that other government entities use
CCSs allows the public sector to request necessary
information and then apply that to their problem.
However, in addition to commercial companies
being unwilling to share their confidential infor-
mation, competition can drive CCS adoption in
order to avoid being boxed out of the market by a
rival firm already progressing strongly in this area.
Table 2 summarizes our findings for choosing a CCS initiative.
A government can perform an environment scan
more easily since most civilian government appli-
cations are transparent. Thus, public-sector orga-
nizations can share this information to develop a
clear sense of environmental status. Not surpris-
ingly, the private sector has a bigger hurdle with
this since most organizations will not freely share
their competitive plans with others in or adjacent
to their industry. Thus, the private sector needs to
do a great deal more speculating on the maturity
of efforts in other companies.
4. Developing cognitive computing systems
Best practices for developing a CCS are the same
as those necessary for developing non-cognitive
systems: Organizations must uncover re-
quirements, follow a robust methodology, and
involve skilled developers, project managers, and
users to ensure successful development. In this
section, we address some of the key differences in
developing a CCS versus other systems.
CCSs learn from data, and thus the focus is on
being able to understand the data and potential
biases (Wiggers, 2018). Wiggers (2018) identified
notable examples of these biases:
An AI system was used to predict whether a
defendant would commit a future crime. The
algorithm was found to overstate the risk for
black defendants versus white defendants.
A study by Boston University found that
datasets used to teach AI systems contained
“sexist semantic connections” and, for
example, considered “programmer” to be
more masculine than feminine.
A study by MIT found that facial recognition software was 12% more likely to misidentify black males than white males.

The problem of bias is real and needs to be addressed in the development stage of the CCS.
New York City confronted this challenge with ef-
forts to make algorithms more accountable
(Powles, 2017). A 2018 law created a task force to
examine the city’s CCSs (which guide allocation and deployment of resources ranging from police officers and firefighters to public housing and food stamps) to make sure data and support algorithms
are fair and transparent. As James Vacca, a New
York City Council member, said: “If we’re going to
be governed by machines and algorithms and data,
well, they better be transparent” (Powles, 2017).
An initial part of this legislation, since dropped,
would have required a city agency to turn over the
source code of any new CCS to the public and also
to simulate the algorithm’s performance using
data provided by New Yorkers.
Since systems will support decisions based on
data, there is value in understanding the data and
its biases; the data and its algorithms need to be
checked and validated prior to deployment. In
determining the quality of an algorithm, it is
necessary to envision the risk dimension we display
in Table 1. For the public sector, the risk dimension
remains much more salient; as the risk of making a
bad decision rises from low to high, more evalua-
tion of the data and its algorithms becomes
necessary. For the private sector, a more balanced
assessment of risk versus value drives decision making.
The technical dimension of deploying CCSs has
improved over time due to the development of
open-source tools for auditing algorithms. In 2018,
a New York-based AI startup announced that it had
decided to open source its audit tool, used “to
determine whether a specific statistic or trait fed
into an algorithm is being favored or disadvan-
taged at a statistically significant, systematic rate,
leading to adverse impact on people underrepre-
sented in the dataset” (Johnson, 2018). However,
the tool does not actually remove the bias but
merely points the bias out to the developer. The
release of such open-source tools comes as the
industry continues to create tools that can also
detect biases (Wiggers, 2018).
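A minimal version of the kind of check such tools perform is the "four-fifths rule" disparate-impact ratio: compare favorable-outcome rates across groups and flag ratios below 0.8. The groups, data, and threshold below are illustrative assumptions; a real audit tool tests many traits and requires statistically significant sample sizes:

```python
# Sketch of a disparate-impact check of the kind bias-audit tools run.
# Group names, outcomes, and the 0.8 threshold are illustrative only.

def disparate_impact_ratio(outcomes, protected_group, reference_group):
    """Ratio of favorable-outcome rates: protected vs. reference group.

    `outcomes` maps group name -> list of booleans (True = favorable).
    A ratio below 0.8 is the conventional 'four-fifths rule' red flag.
    """
    def rate(group):
        results = outcomes[group]
        return sum(results) / len(results)
    return rate(protected_group) / rate(reference_group)

# Illustrative data: loan approvals for two applicant groups.
outcomes = {
    "group_a": [True, True, False, True, False],  # 60% favorable
    "group_b": [True, True, True, True, False],   # 80% favorable
}
ratio = disparate_impact_ratio(outcomes, "group_a", "group_b")
print(f"ratio = {ratio:.2f}, flagged = {ratio < 0.8}")  # ratio = 0.75, flagged = True
```

As noted above, a check like this only points the disparity out; deciding how to remove it remains the developer's job.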
Organizational issues in CCSs are probably the most challenging since the organization likely will not have people with the requisite skills in-house. In the public
sector, this has been partially solved with the use
of agile acquisition practices. Agile acquisition is a
procurement strategy that “integrates planning,
design, development, and testing into an iterative
lifecycle to deliver small, frequent, incremental
capability to the end user...[it] requires integrated government and contractor processes and partnerships” (MITRE, 2019). The proposal process
differs at the government level because it can
require the bidders to complete a sprint rather
than a written or oral proposal (White, Sateri, & Chawanasunthornpot, 2018).

Table 2. CCS design issues

Data
  CCS challenge: Data availability; data sources
  Public-sector strategies: Assess data; walk away from challenges if data are not available
  Private-sector strategies: Develop a repeatable process for collecting data; determine harm from poor decision making; hire experts to steer the process

Technical
  CCS challenge: Current asset identification; identification of partners; risk versus value
  Public-sector strategies: Focus on risk; internal expertise versus outsourcing; eye on the future
  Private-sector strategies: Balance risk and value; internal expertise versus outsourcing

Organizational
  CCS challenge: Challenge in self-assessment
  Public-sector strategies: Engage outside experts; share data
  Private-sector strategies: Hire missing expertise; leverage network of internal staff

Environment
  CCS challenge: Challenge in disclosure of plans
  Public-sector strategies: Leverage network of internal staff
  Private-sector strategies: Engage, especially if other firms are engaging

Although not yet
adopted for CCS development by any known government entity, agile acquisition can dramatically
lower the risk associated with development.
Rather than attempting to acquire an entire sys-
tem with a single procurementda risky and
lengthy endeavordthe government can buy indi-
vidual pieces of the system. Given the newness of
CCS development in government, this should
dramatically lower the risk.
Private sector firms can adopt agile acquisition
for the development of their CCSs since the same
risk reduction goals exist as with the public sector,
and the use of this strategy should build skill levels
available to the organization. Environmental issues
closely track to technical issues during the devel-
opment phase. The right tools can aid in the
development of the CCS, many of which can tackle
biases in the data. Given the expansion of open-
source tools, the adoption of such approaches
should increase significantly. Table 3 summarizes our CCS development findings.
5. Deploying cognitive computing systems
While a great deal of work goes into CCS deployment, a significant amount of work needs to happen after deployment. Two aspects are particularly
salient. First, these systems must be audited.
While all systems need monitoring, continuously
learning CCSs can learn from the wrong data.
Algorithmic biases can easily sneak into a CCS, and
the organization needs to understand the risk and
preemptively test for it. In 2016, the Obama
Administration directly called on companies to
audit their algorithms. DJ Patil, former chief data scientist for the U.S., said at the time: “Right now the ability of an algorithm to be evaluated is an open question. We don’t know technically how to take the tech box and verify it” (Hempel, 2018). Unfortunately, there are few standards for auditing
an algorithm; even if such criteria existed, a
company may have to disclose trade secrets to the
auditor. Singapore and Korea recently announced
the formation of an AI ethics board, as have
various industry groups.
An audit could have value by revealing that a
company does not discriminate against different
classes of individuals. By passing an audit, the company could market that success as a sort of “Good Housekeeping seal of approval, signaling to customers a higher standard of ethics and building trust” (Hempel, 2018). In New York, a rich market
is developing ways to examine and certify algo-
rithms. In the New York City rental market, one
company has developed a program to grade the
maintenance of every rental building in the U.S.
Using information from 311 phone calls to the city (which lodge violation complaints against the building), this program calculates the grades and will certify them for an additional fee of $100–$1,000 per year. An external examiner evaluated the
program’s logic and found that the algorithm was
fair (Hempel, 2018).
Second, the value of the CCS needs to be
determined. This taps the external dimension. The
public sector must consider value more holistically
than simply efficiency gains. The concept of public
value (Osborne, Radnor, & Nasi, 2013) has a rich
history in the public management literature.
Common elements captured in various public
values frameworks include equity, fairness, pro-
motion of the common good and public interest,
protecting the vulnerable and human dignity,
economic sustainability of the program/initiative,
and upholding the rule of law (e.g., Benington &
Moore, 2011; Chohan & Jacobs, 2018; Jørgensen
& Bozeman, 2007).
Organizations should examine efforts in the
public sector to consider more holistic approaches
to capturing value from CCSs. We recommend
thinking of value capture along four dimensions,
and the output of this thinking can be reflected in
the value/risk typology we presented in Table 1.
The following value can be derived:
Table 3. CCS development issues

Data
  CCS challenge: Inadvertent bias in data and algorithms
  Public-sector strategies: Risk-focused determination of how good data and algorithms need to be
  Private-sector strategies: Balanced risk/value determination of how good data and algorithms need to be

Technical
  CCS challenge: Availability of tools to audit for bias
  Public-sector strategies: Take advantage of tools
  Private-sector strategies: Take advantage of tools

Organizational
  CCS challenge: Lack of trained staff
  Public-sector strategies: Agile acquisition strategy
  Private-sector strategies: Agile acquisition strategy

Environment
  CCS challenge: Availability of tools to audit for bias
  Public-sector strategies: Take advantage of tools
  Private-sector strategies: Take advantage of tools
Process gains: costs or resources saved, time
cut from processes, etc.;
Output gains: being more effective;
Outcome gains: transforming the customer
experience and acting ethically; and
Network gains: shaping interactions with
other stakeholders due to the CCSs, adding to
brand and leadership position, capturing new
value from an existing supply network, and
adding value through new affordances and
influences to the overall ecosystem through
disruption and transformation.
The first two address the organization ensuring
that it ‘advances the business of today’ through
CCSs, while the last two ensure that it focuses on
‘advancing the business of tomorrow’ and being
relevant in the age of CCSs. We summarize the issues in Table 4.
6. Harnessing new technologies
CCSs will change the nature of computing in a
dramatic way, and we are seeing the benefits of
their implementation in all aspects of government
and private industry. We are entering what futur-
ists already call the Fourth Industrial Revolution
(Schwab, 2017). In this fourth revolution, tech-
nologies allow a convergence of computing power
as transformative as the previous revolutions, but
with much greater reach.
On the positive side, this means the world may
become more connected than ever before,
dramatically improving the efficiency of organiza-
tions and potentially even regenerating the dam-
age of previous revolutions. However, as pointed
out by Klaus Schwab, this requires organizations to
adapt in novel and challenging ways and govern-
ments to manage new technologies and address
security concerns (Schwab, 2017). Schwab calls for
leaders and citizens to work together to harness
these new technologies. We concur with this call
and recommend a necessary first step to
acknowledge that technologies like CCSs introduce
challenges that need new management ap-
proaches, like those outlined in this and other ar-
ticles in this special issue.
Kevin C. Desouza acknowledges funding from
the IBM Center for the Business of Government
for his research project on AI and the public sector.

References

BBC News. (2019, March 20). Artificial intelligence: Algorithms face scrutiny over potential bias. Available at https://www.
Benington, J., & Moore, M. (2011). Public value: Theory and practice. Basingstoke, UK: Palgrave Macmillan.
Bryson, J., & Winfield, A. (2017). Standardizing ethical design for artificial intelligence and autonomous systems. Computer, 50(5), 116–119.
Canhoto, A., & Clear, F. (2020). Artificial intelligence and machine learning as business tools: A framework for diagnosing value destruction potential. Business Horizons, 63(2).
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial intelligence and the ‘good society’: The US, EU, and UK approach. Science and Engineering Ethics, 24(2).
Chohan, U. W., & Jacobs, K. (2018). Public value as rhetoric: A budgeting approach. International Journal of Public Administration, 41(15), 1217–1227.
Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163.
Crawford, K. (2016a). Can an algorithm be agonistic? Ten scenes from life in calculated publics. Science, Technology & Human Values, 41(1), 77–92.
Crawford, K. (2016b, June 25). Artificial intelligence’s white guy problem. The New York Times. Available at https://www.
Table 4. CCS deployment issues

Data
  CCS challenge: Inadvertent bias in data and algorithms
  Public-sector strategies: Audit of the algorithm to ensure accuracy
  Private-sector strategies: Audit of the algorithm to ensure accuracy

Technical
  CCS challenge: Availability of tools to audit for bias
  Public-sector strategies: Take advantage of tools
  Private-sector strategies: Take advantage of tools

Organizational
  CCS challenge: Lack of trained staff
  Public-sector strategies: Take advantage of tools
  Private-sector strategies: Take advantage of tools

Environment
  CCS challenge: Availability of tools to audit for bias
  Public-sector strategies: Take advantage of tools
  Private-sector strategies: Take advantage of tools
8 K.C. Desouza et al.
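The recurring strategy in Table 4, auditing an algorithm for bias, can be made concrete. The sketch below is a minimal, hypothetical audit using the "four-fifths" disparate impact rule; the group labels, sample data, and 0.8 threshold are our illustrative assumptions, not tools or figures drawn from this article.

```python
# Minimal sketch of an algorithmic bias audit using the "four-fifths"
# (disparate impact) rule. Group labels, data, and the 0.8 threshold
# are illustrative assumptions.

def selection_rates(outcomes):
    """Favorable-outcome rate per group.

    outcomes: iterable of (group, favorable) pairs, favorable in {0, 1}.
    """
    totals, favorable = {}, {}
    for group, fav in outcomes:
        totals[group] = totals.get(group, 0) + 1
        favorable[group] = favorable.get(group, 0) + fav
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate.

    Ratios below roughly 0.8 are a conventional red flag for adverse impact.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical model decisions: (group, 1 = favorable outcome).
    decisions = ([("A", 1)] * 60 + [("A", 0)] * 40
                 + [("B", 1)] * 30 + [("B", 0)] * 70)
    ratio = disparate_impact_ratio(decisions)
    print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50
    print("Flag for review" if ratio < 0.8 else "Within threshold")
```

In practice, open-source tools such as Pymetrics's Audit AI (Johnson, 2018) automate checks of this kind across a wider set of fairness metrics, which is what the "take advantage of tools" strategies in Table 4 point to.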
Desouza, K. C. (2018). Delivering artificial intelligence in government: Challenges and opportunities [White Paper]. Washington, DC: IBM Center for the Business of Government.

Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a ‘right to an explanation’ is probably not the remedy you are looking for. Duke Law and Technology Review, 16(1), 18–84.

Hempel, J. (2018, May 9). Want to prove your business is fair? Audit your algorithm. Wired. Available at https://www.

Johnson, K. (2018, May 31). Pymetrics open-sources Audit AI, an algorithm bias detection tool. VentureBeat. Available at https://

Jørgensen, T. B., & Bozeman, B. (2007). Public values: An inventory. Administration & Society, 39(3), 345–381.

Kakatkar, C., Bilgram, V., & Füller, J. (2020). Innovation analytics: Leveraging artificial intelligence in the innovation process. Business Horizons, 63(2).

Kietzmann, J., & Pitt, L. (2020). Artificial intelligence and machine learning: What general managers need to know. Business Horizons, 63(2).

Lee, K.-F. (2018). AI superpowers: China, Silicon Valley, and the new world order. Boston, MA: Houghton Mifflin Harcourt.

Lee, I., & Shin, Y. J. (2020). Machine learning for enterprises: Applications, algorithm selection, and challenges. Business Horizons, 63(2).

Mehr, H. (2017). Artificial intelligence for citizen services and government [White Paper]. Cambridge, MA: Ash Center for Democratic Governance and Innovation.

MITRE. (2019). Agile acquisition strategy. Available at https://

Osborne, S., Radnor, Z., & Nasi, G. (2013). A new theory for public service management? Toward a (public) service-dominant approach. The American Review of Public Administration, 43(2), 135–138.

Powles, J. (2017, December 20). New York City’s bold, flawed attempt to make algorithms accountable. The New Yorker.

Santamicone, M. (2019, April 2). Is artificial intelligence racist? Racial and gender bias in AI. Data Science. Available at

Schwab, K. (2017). The fourth industrial revolution. New York, NY: Crown Business.

Stroud, M. (2016, August 19). Chicago’s predictive policing tool just failed a major test. The Verge. Available at https://www.

White, K., Sateri, G., & Chawanasunthornpot, P. (2018, September 11). Agile acquisition: Why it makes sense to plan from future backward. Federal News Network. Available at

Wiggers, K. (2018, May 25). Microsoft is developing a tool to help engineers catch bias in algorithms. VentureBeat. Available at