AI and the Future of Management Decision-Making
Ali Aslan Gümüsay
Head of Research Group Innovation, Entrepreneurship & Society,
Humboldt Institute for Internet & Society
Thomas Bohné
Head of Cyber-Human Lab,
University of Cambridge
Tom Davenport
Distinguished Professor of IT and Management,
Babson College
Forthcoming in Management and Business Review
Acknowledgements: We are grateful for feedback from Stephen Cave, Letizia Mortara, and participants of the Leverhulme Centre for the Future of Intelligence and Humboldt Institute for Internet & Society seminars, as well as for comments by senior editors and reviewers from Management and Business Review on earlier versions of this manuscript.
Executive Summary
Advancing AI capabilities make the technology increasingly relevant for enabling
better and faster decisions. AI plays different roles in different types of decisions,
with the most common AI-enabled decisions involving repetitive, tactical, and
structured situations. These are also the types of decisions that are most likely to be
fully or partially automated. For unstructured and semi-structured decisions, AI plays
a more supporting role in informing human decision-makers. Although the primary
effect of AI is to augment human intelligence rather than replace it, AI is capable of
transforming managerial decision processes, allowing managers to make earlier,
simulated, and complementary decisions. It also becomes incumbent upon
managers to understand the ways that AI-enabled decision tools operate, and when
the models on which they rely no longer reflect current reality and need to be
retrained. Organizations should begin to redesign key decision processes with these
new capabilities and responsibilities in mind.
Artificial intelligence (AI) is commonly delineated as computational learning and problem-solving capabilities and behavior that would otherwise require human intelligence (HI).1 As AI is deployed, it enables new forms of automated, augmented, and altered data-driven decision-making. In organizations around the world, the ability and authority to make key decisions is closely tied to management. As AI develops its capabilities, we can expect it to increasingly shape when and how decisions are made in organizations in a range of areas, including criminal sentencing, hiring, purchasing, and immigration decisions as well as credit risk predictions and surgery allocation.2,3 For this reason, “humans are no longer the sole agents in management.”4 But what will this actually mean for management: for individual managers making decisions, and also for organizations enacting these decisions?
Management, at least in the foreseeable future, is likely to be shaped by intelligence amplification, a combination of human intelligence (HI) and artificial intelligence (in short, HI-AI blending), rather than by a complete displacement or replacement of humans.5 The general implications of this development have received significant attention in the last few years, and the relationship is anticipated to be one of dynamic human-machine symbiosis, which requires careful balancing. A key concern that has been identified is how AI will blend with and augment managers, extending their capabilities while not alienating or devaluing humans.6
On the important topic of decision-making, our understanding of the potential implications of this development remains rudimentary. Little is known about how AI will co-exist, co-evolve, and co-work with managers making decisions that affect not just individuals (as in many medical or legal situations) but entire organizations. As AI is in the process of becoming our most common companion species,7 what will an AI-integrated future of decision-making look like? In this article we briefly review which decisions are already affected by AI, and then focus on three of the potentially most significant new implications for decision-making in management: earlier decisions, simulated decisions, and complementary decisions. Finally, we note potential downsides of AI-driven decision-making and speak to the future of managing HI-AI blending.
Decisions Increasingly Affected by AI
Before delving into how AI may be changing decision-making itself, it is helpful to
consider different types of decisions and their suitability for AI. We can broadly
identify three types of management decisions that can be impacted by AI in different
ways: structured, semi-structured, and unstructured. AI, and machine learning in particular, is increasingly deployed to make highly structured decisions that are frequent or recurrent, tactical, based on a large amount of data, and in need of swift processing. One important example is pricing decisions, in which AI models are trained on past decisions to charge certain prices, with the outcome (whether the customer accepted the price and bought the product) known and captured. Using this data, a model can be trained to predict whether a price will lead to a sale when the outcome is unknown.
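To make this concrete, here is a minimal sketch of how such a price-acceptance model might be trained, using the open-source scikit-learn library; the features, data, and outcomes are illustrative assumptions, not drawn from any real deployment.

```python
# Minimal, hypothetical sketch of a price-acceptance model.
# Features and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical quotes: [quoted_price, list_price, customer_tenure_years]
X = np.array([
    [ 95, 100, 2],
    [110, 100, 5],
    [ 90, 100, 1],
    [120, 100, 8],
    [105, 100, 3],
    [ 85, 100, 1],
])
# Outcome of each past quote: 1 = customer bought, 0 = customer declined
y = np.array([1, 0, 1, 0, 0, 1])

model = LogisticRegression().fit(X, y)

# Predict whether a new quote of 98 to a four-year customer will convert
new_quote = np.array([[98, 100, 4]])
print(model.predict_proba(new_quote)[0, 1])  # estimated probability of a sale
```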
The pricing example reveals aspects of the types of decisions that AI currently
supports best. Pricing is a recurrent activity. Recurrence or frequency is important
not only because it provides data to train the model, but it also means it is worth the
investment to build and deploy a model in the first place. Pricing is also arguably a tactical decision (although it has major implications for profitability) that has traditionally been performed by managers in middle, frontline, or sales positions.
Because it is a relatively structured decision, executives may choose to automate
pricing decisions and reduce the autonomy of frontline managers or salespeople.
Some highly structured decisions are only possible through automation. One
example is in digital advertising, where ads are automatically placed on a publisher’s
website in a matter of milliseconds. Machine learning models are trained on which
combinations of cookies have led to click-throughs in the past, and the pricing of the
advertisement is usually handled through automated auctions. No human can screen such a large amount of data and perform such rapid calculations in so short a period of time, and the consequences of a mistake are typically measured in fractions of a cent. Moreover, such decisions are far too tactical to be made by a senior marketing executive.
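A stylized sketch of this mechanism follows; real ad exchanges are far more elaborate, and the bidders, click-through rates, and values per click below are invented for illustration.

```python
# Stylized sketch of automated ad placement: each bidder scores the
# impression with a predicted click-through rate (CTR), bids its expected
# value, and the exchange runs a second-price auction in milliseconds.

def expected_value_bid(predicted_ctr: float, value_per_click: float) -> float:
    """Bid the expected value of the impression to this advertiser."""
    return predicted_ctr * value_per_click

bids = {
    "advertiser_a": expected_value_bid(predicted_ctr=0.012, value_per_click=0.50),
    "advertiser_b": expected_value_bid(predicted_ctr=0.020, value_per_click=0.40),
    "advertiser_c": expected_value_bid(predicted_ctr=0.015, value_per_click=0.45),
}

ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
winner = ranked[0][0]
price_paid = ranked[1][1]  # second-price rule: winner pays the runner-up's bid
print(winner, round(price_paid, 5))  # advertiser_b wins and pays 0.00675
```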
Managers also make semi-structured decisions that can benefit from AI. These decisions are made less frequently, generate less data, and involve more diverse information than the structured decisions we have described, and they are not well-suited to full automation. Because machine learning systems often require a lot of training data and semi-structured decisions may not generate enough, they may also not be suited to machine learning. Some organizations, however, have found that an older AI technology, rule-based or “expert” systems, is well suited to semi-structured decisions. These systems make recommendations that can inform a decision or lead to further investigation. They are, for example, the basis for most anti-money laundering (AML) programs in banks (although they are increasingly being augmented by machine learning).8 The rules identify possible violations of AML regulations by particular customers, and human analysts investigate the customers further to determine whether they should be confronted.
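A toy sketch of how such a rule-based screen might look in code; the thresholds, rule wordings, and jurisdiction codes are hypothetical, not actual regulatory criteria.

```python
# Illustrative rule-based (expert-system-style) AML screen.
# Every rule a transaction trips is recorded for human analyst review.

def flag_transaction(txn: dict) -> list[str]:
    """Return the list of rules this transaction trips."""
    reasons = []
    if txn["amount"] >= 10_000:
        reasons.append("large cash transaction")
    elif 9_000 <= txn["amount"] < 10_000:
        reasons.append("possible structuring just below reporting threshold")
    if txn["country"] in {"XX", "YY"}:  # placeholder high-risk jurisdictions
        reasons.append("counterparty in high-risk jurisdiction")
    return reasons

txn = {"amount": 9_500, "country": "XX"}
hits = flag_transaction(txn)
if hits:
    print("Refer to human analyst:", "; ".join(hits))
```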
As another example, Taylor Wessing, an international law firm based in London,
used a rule-based AI system to alert clients that they may be in violation of corporate
legal regulations such as the Foreign Corrupt Practices Act.9 If the system indicates
a possible violation, clients are encouraged to investigate further and hire an
attorney.
AI can also support unstructured strategic decisions, but in a different way than in
the above examples and by using different AI methods. Such strategic decisions (such as whether to acquire a competitor, to develop a new product line, or to take on private equity ownership) typically do not happen very often.
involve idiosyncratic circumstances and are not suitable for supervised machine
learning. However, natural language processing (NLP) is used for strategic decisions
when, for example, a firm deploys it to detect mentions of key concepts involving
potential new products in published marketing materials or job postings of
competitors. That might inform a decision by a firm to develop a similar product itself
or contact customers to determine their interest in such a product. However, an AI
system in this case only informs and supports the human decision and is not used to
fully automate it or normally even to make a recommendation. That is, the data and
its analysis by a system are coupled only loosely to the decisions and actions a
manager may take.
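As a rough illustration of this kind of loosely coupled signal detection, the sketch below scans hypothetical job postings for invented signal terms; a production system would use a trained NLP model rather than simple keyword matching.

```python
# Hypothetical sketch of scanning competitor job postings for signals
# of a potential new product line. Terms and postings are invented.
import re

SIGNAL_TERMS = ["solid-state battery", "battery chemist", "cell pilot line"]

postings = [
    "Competitor X seeks a senior battery chemist for a new cell pilot line.",
    "Competitor Y is hiring a sales associate for its retail division.",
]

for text in postings:
    hits = [t for t in SIGNAL_TERMS
            if re.search(re.escape(t), text, re.IGNORECASE)]
    if hits:
        print("Possible product signal:", hits, "->", text)
```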
One example of such a loosely-coupled system was the use of AI to identify the
COVID-19 outbreak in China in late December 2019. BlueDot, a Canadian startup,
regularly scans and analyzes disease reports, travel movements, and mobile device
data to identify infectious disease outbreaks and locations where they might spread.
It warned its customers of a cluster of “unusual pneumonia” cases in Wuhan, China,
and suggested that the disease would spread rapidly. Of course, despite these
warnings, many governments and private sector organizations did not take action to
prevent the spread of the disease. This raises the question of how to turn AI predictions into actions.10 As Kamran Khan, the CEO and founder of BlueDot,
described the issue at a conference:
[Complex problem solving] isn’t just about data and technology: it’s about human intelligence and understanding a problem that is inherently very, very complex. AI is one tool in the toolbox. How do we innovate in ways that develop solutions that can turn insights into actions?9
Other firms are using similar scanning AI capabilities to determine whether their
extended supply chains are at risk. They can identify potential or actual legal and
regulatory challenges, weather disruptions, political unrest, or economic difficulties of
a company’s direct suppliers and their suppliers.
Another AI method for strategic decisions is machine learning-based simulation of a future decision task that is grounded in data rather than in a predefined knowledge base in the program code. This is a major difference from traditional business game simulations, which are either pre-programmed and rule-based or game-like competitions between human teams. An example of data-driven simulation is the emerging “digital twin” of a company, such as BMW’s and Nvidia’s collaboration to create an Omniverse for virtual factory planning. This approach
allows companies new ways to simulate how key aspects of their business
environment would operate under different strategies and circumstances. The
downside of these simulations is that they may require considerable data, time, and
resources to develop. As with other AI systems for unstructured strategic decisions,
they are only an input to further deliberation by managers.
Overall, we observe early adoption of AI in substituting for or supporting human agency. This leads us to ask how managerial decision-making itself may be transformed by AI. We discuss this next.
Three Future Transformations of Decision-Making by AI
Having considered the types of decisions that are already influenced by AI, it is worth contemplating what this means for the future of managerial decision-making. With the shift from AI analysis and examination to AI governance and execution, we see decisions that were previously made by managers potentially being delegated to AI. In other words, AI moves from prediction to action, from diagnosis to decision. This will not lead to a replacement of management, but to a transformation of its role and focus in decision-making. Specifically, we identify three such foreseeable transformations, which we term earlier, simulated, and complementary decision-making.
Table 1: Future decision-making transformations by AI

| | Earlier decision-making | Simulated decision-making | Complementary decision-making |
|---|---|---|---|
| Explanation | Decisions can be made earlier and then be followed up by AI | Decisions and their consequences can be tested and autonomously executed by AI | Managers and AI can co-decide, i.e., make decisions together |
| Examples | Prior management decision to enter a new market is executed by AI given contextual developments | Multiple hypothetical contract decisions are simulated, pursued, and partially executed by AI | Competitor diagnosis and response decision is based on blended input from management and AI |
| Changing role of managers | Less continuous input by managers needed; fewer but more far-reaching decisions by managers | More imaginary, simulated, as well as if-then decisions by managers | Managers not sole agents for decisions; regular co-engagement with AI |
| Key skills to develop | Strategic foresight and rapid response | Hypothetical thinking and action | Regular co-engagement with AI |
| Decision-making focus | Active long-term strategic decisions and reactive rapid-response decisions | Hypothetical and guardrail decisions | Joint, collaborative decision-making between managers and AI |
| Benefits | Swifter and more granular decisions by AI; fewer managerial resources needed for continuous input | Situational preparedness by AI; disentangles decision-making from managerial presence | Integrating advantages of AI and managers’ skills |
| Perils | Negative AI-based follow-up decisions | Negative AI-based decisions due to unexpected developments | Unclear demarcation of responsibility and accountability between managers and AI |
Earlier Decision-Making
As AI can increasingly perform cognitive tasks that were previously handled by humans,10 AI-driven decision-making might allow managers to make decisions earlier or further in advance of relevant events, particularly with predictive machine learning models. Such decisions could then be followed by AI-guided subsequent
decisions that keep track of and account for important developments. For instance, a
key decision for new product introductions is about where, when, and how to move
into a new market or customer segment. This decision may be made by an AI
system if the company has a lot of past data on new product introductions and their
outcomes, or at least some decision rules. This would also allow for swifter and more
granular decisions. Management could set initial criteria and certain boundaries or
guardrails if a company wished to expand across or avoid a particular region. A
machine learning model could episodically adjust this strategy based on real-world
feedback on its implementation. This means that less continuous input from
managers would be needed, which would free managerial resources. As a result,
managers might have to make fewer but more far-reaching decisions, e.g., on the
scope of the expansion strategy, as follow-up tactical decisions would be automatic
and potentially swift. From a temporal perspective, decisions would be made prior to
the action taken and would predict the success of the action.
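A schematic sketch of what such management-set guardrails and automated follow-up decisions might look like; the regions, thresholds, and demand scores are hypothetical.

```python
# Schematic sketch of 'earlier' decision-making: management sets guardrails
# up front, and an automated policy then takes the tactical follow-up
# decisions. All regions, thresholds, and scores are invented.

GUARDRAILS = {
    "allowed_regions": {"DE", "FR", "NL"},  # expansion pre-approved here only
    "max_monthly_spend": 250_000,           # budget ceiling per region
    "min_predicted_demand": 0.6,            # model score required to proceed
}

def expansion_decision(region: str, predicted_demand: float, spend: int) -> str:
    if region not in GUARDRAILS["allowed_regions"]:
        return "escalate to management"  # outside the pre-approved scope
    if spend > GUARDRAILS["max_monthly_spend"]:
        return "escalate to management"  # breaches the budget guardrail
    if predicted_demand >= GUARDRAILS["min_predicted_demand"]:
        return "proceed with launch"
    return "hold and re-evaluate next period"

print(expansion_decision("FR", predicted_demand=0.72, spend=180_000))  # proceed
print(expansion_decision("ES", predicted_demand=0.90, spend=100_000))  # escalate
```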
This would change management styles, which need to be built on two types of
decision-making: active long-term strategic decisions that take AI into account but
are not automated, and reactive rapid-response decisions when there is insufficient
data for AI models. One challenge of the latter type of decision-making is how to prevent and respond to negative AI- or automation-based outcomes that go astray, as in the well-known cases of Boeing’s 737 MAX crashes or road accidents involving Waymo’s and Tesla’s autonomous cars. After all, AI models are
correlational and lack causal reasoning ability. They are largely probabilistic and will
sometimes predict incorrectly. Poor prediction results are especially concerning if the
responsibility for them remains with managers. Still, with careful planning and precautionary mechanisms, earlier decision-making can empower managers to focus more on long-term strategy.
Simulated Decision-Making
AI-driven decision-making could enable managers to simulate more decisions and
better scope imagined and uncertain futures. In practice, managers typically make
decisions between different real options such as pursuing a particular strategy or not.
They also plan for scenarios and design automated strategies that are enacted
under uncertain conditions. For instance, Amazon already has a simulation and
experimentation group that uses a range of statistical and machine-learning models
to enable simulation-based predictions, optimization, and rapid experimentation with ‘what if’ questions affecting its supply chain, fulfillment network, and customers. This
allows Amazon to experiment with uncertainty quantification and statistical
emulation.
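The sketch below shows the bare bones of such simulation-based ‘what if’ analysis: a toy Monte Carlo comparison of two hypothetical stocking strategies under uncertain demand, with all distributions and costs invented (and no connection to Amazon’s actual systems).

```python
# Toy Monte Carlo sketch of 'what if' analysis: compare two hypothetical
# inventory strategies under uncertain demand. Parameters are invented.
import random

random.seed(42)  # make the simulation reproducible

def simulate_profit(stock_level: int, n_runs: int = 10_000) -> float:
    unit_margin, holding_cost = 5.0, 1.0
    total = 0.0
    for _ in range(n_runs):
        demand = random.gauss(mu=100, sigma=25)  # uncertain daily demand
        sold = min(stock_level, max(demand, 0.0))
        total += sold * unit_margin - stock_level * holding_cost
    return total / n_runs

for stock in (90, 130):  # conservative vs. aggressive stocking
    print(f"stock {stock}: expected daily profit ~ {simulate_profit(stock):.1f}")
```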
With increasing AI capabilities, managers could pro-actively make decisions among
more far-reaching hypothetical options that could then be immediately enacted by an
AI system if the anticipated case occurs in the future. This would lead to situational preparedness by AI, disentangling decision-making from managerial presence. For instance, instead of managers making large contract decisions, an AI might be given the task of making such strategic decisions on its own. Such an AI-based strategy is anticipated to free up and transform managerial resources and also to react more quickly.11 However, it would also mean that managers need to take into account
possible changes in organizational and contextual circumstances that would reverse
or change the thinking that was trained into the AI system. Any delegation to AI of
future decision-making could thus lead to an implementation that was built on ex-
ante programming but that is not in line with the thinking of management in the very
moment it is performed. In other words, managers would need to take into account
that their thinking develops as the hypothetical becomes factual and that this new
thinking needs to be integrated into an AI system. This requires a coupling between
progress in management thinking and AI operationalization.
Complementary Decision-Making
Except for the most tactical and structured decision situations, AI-driven decision-
making is unlikely to replace human strategic decision-making in the foreseeable
future. Rather, it is more likely to augment it. This is not only because AI is (still)
limited to specified and bounded tasks, but also because value judgements are (still)
within the human domain. The idea of augmentation is not new, of course, and can
be traced back to early notions of “augmenting human intellect.12 This is
recontextualized by the rapidly evolving AI development of today, which moves us
away from knowledge-based systems that codify human knowledge to self-learning
systems. The central question to ask in a decision-making context is: how will AI
augment HI? While this very much depends on AI and HI capabilities and biases,
most related work to date, with some exceptions,13 points towards a hierarchical,
one-sided augmentation process whereby AI offers input, but the final decision lies
with the human. For instance, doctors are using AI to help diagnose skin cancer,14 and judges are using it to predict recidivism.15 This is already controversial, as AI mirrors and
potentially even aggravates biases of those who have designed it and the data it is
built on.16 In the case of predicting future criminal behavior, for example, systems
have been shown to be biased against black defendants.17
Yet, we see an alternative and even more far-reaching possibility for decision-
making whereby an automatic output from an AI system is based on input from
another AI machine and a human being. For instance, instead of an AI diagnosing
cancer and offering suggestions to doctors who then decide on treatment, which
separates prediction from action, both the machine and doctors would enter input
into a system. This input could be valued differently depending on the expertise of
the doctors or their personal level of certainty. Based on this input, a machine then
makes a final decision about treatment that both the AI system and doctors must
adhere to. Similarly, managers may possibly co-create decisions with AI in a joint
human-machine cognition.
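One minimal way to picture such blended co-decision in code; the weighting scheme, scores, and threshold are entirely our own invented assumptions.

```python
# Hypothetical sketch of complementary decision-making: an AI score and a
# human judgment are blended into one binding decision, with the human's
# weight scaled by self-reported certainty. All values are illustrative.

def co_decide(ai_score: float, human_score: float, human_certainty: float) -> str:
    """Scores are probabilities in [0, 1] that treatment A is the right call."""
    human_weight = 0.5 * human_certainty   # more certain humans count more
    ai_weight = 1.0 - human_weight
    blended = ai_weight * ai_score + human_weight * human_score
    return "treatment A" if blended >= 0.5 else "treatment B"

# The AI strongly favors A; a moderately certain doctor leans toward B.
print(co_decide(ai_score=0.8, human_score=0.3, human_certainty=0.6))  # treatment A
```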
This would not only raise questions about justifiability, autonomy, and responsibility,
but also new questions about the strategic interplay between humans and machines.
While human decisions can be opaque, especially when made collectively in teams, ultimately everyone who was part of the team is known and could be held
accountable. Conversely, a decision made by an AI-HI combination is not
interpretable by humans (e.g., a virtual meeting amongst humans can be recorded
and interpreted, but not an AI decision process), there is no information on the sub-
processes involved that led to the decision, and an AI itself is not legally
accountable. So we may ask: if a decision is co-decided by HI and AI, how can a joint decision be justified if neither the human nor the AI has full information about all the sub-processes involved? Will autonomy be undermined, and humans be turned into simple “confirmers” in decision-making situations where there is a competence gap (e.g.,
AI has better feature detection)? Could, would and should humans override the AI,
for instance, if they have further information that would also change the AI’s
decision?11 And who is accountable for the implications of the decision?
Accountability links with authority. As authority would be distributed across HI and AI, this would require wider considerations for fair ex-post judgment. Humans may also be prone to adapt their behavior if they know that the final decision is a blended one. For instance, some managers could use negotiation tactics and take extreme stances so that blended decisions ultimately land close to their true viewpoint, whereas others might take middle-ground stances to let the AI side ultimately turn the tide of a collective HI-AI decision, thus making the AI blameworthy.
The Potential Downsides of AI-Driven Decision-Making
As AI is not simply automation, any decision fed into an AI has to take into account the extensive potential effects of subsequent AI-based decision-making. This is both because of AI’s wider potential reach, as it performs increasingly significant tasks, and because of its distance from human agency, since AI evolves and decides based on its (unsupervised) learning and development.
procurement system that is then directed by an AI system without any direct human
supervision. Likewise, managers might need to develop rapid-response mechanisms
when they become aware of problematic decisions or developments. This requires
strategic what-if planning, a warning system, the ability to intervene in AI decisions, correction mechanisms, and management and operational execution.
The recent COVID-19 pandemic is a case in point. Machine learning models in
consumer goods companies that had been trained on demand and supply chain data
before the pandemic were often no longer accurate in their predictions, since past
consumer behavior was not a good predictor of consumption patterns during the
pandemic.19 Such “model drift” can be detected either by humans or by software
programs (known as MLOps for “machine learning operations”), but managers need
to make the final decision about whether to stop using the predictions and what
alternative decision strategies to employ to address negative (side) effects of AI
usage. Just as the responsibility to adopt AI falls upon managers, so do the responsibilities for knowing how AI systems work, paying attention to whether their underlying assumptions are still valid, and discontinuing or retraining them when necessary.
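A minimal sketch of such drift monitoring, assuming an invented baseline accuracy and tolerance; commercial MLOps platforms implement far richer checks (data drift, concept drift, and more).

```python
# Minimal sketch of 'model drift' monitoring: compare recent prediction
# accuracy with the accuracy observed at deployment and alert a manager
# when it degrades beyond a tolerance. All thresholds are illustrative.

BASELINE_ACCURACY = 0.91   # measured when the model was deployed
TOLERANCE = 0.05           # acceptable drop before human review

def check_drift(recent_accuracy: float) -> str:
    if BASELINE_ACCURACY - recent_accuracy > TOLERANCE:
        return "ALERT: possible model drift; manager review required"
    return "OK: model within tolerance"

print(check_drift(recent_accuracy=0.88))  # OK
print(check_drift(recent_accuracy=0.79))  # ALERT
```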
Managing the Future of HI-AI Blending
What does the future of HI-AI blending in decision-making hold for management?
With increased AI infusion, managerial tasks that were previously performed by
humans can be performed by machines. Our argument focused on three key
transformations for management in the future: earlier, simulated, and complementary
decision-making. These are likely to arise in two ways. The more visible one is a bottom-up development in which AI slowly but continuously carves out more and more important decision-making, while more complex and strategic tasks remain within the human domain, “at the top of the organization.”
Even more disruptively, however, it is also possible to envision (with considerable improvement in technology capabilities) AI systems that engage with the most important concerns of organizations, that is, those that require value judgements.
This might lead to a top-down transformative process whereby AI engages with or
informs strategic tasks and potentially even the overall orientation, vision, and
mission of organizations. This could lead to a reversal whereby humans would
execute managerial tasks that are the result of an AI-driven overall strategy. Whether
disruptively or not, the future of HI-AI blending is likely to transform management.
To be prepared and to ensure intentional decisions on these issues, organizations
should inventory their most important decisions and discuss the potential for AI
assistance or even managerial process redesign. The process might involve
determining what data is available to train AI models, how the decision might benefit
from AI, and whether current AI systems might have the capabilities to drive or
support the decision. At the very least, such discussions would make organizations realize that AI will play a more prominent and transformational role in their decision-making.
Bibliography
1. Copeland, B. Artificial intelligence. Encyclopedia Britannica.
2. Kellogg, K. C., Valentine, M. A. & Christin, A. Algorithms at Work: The New
Contested Terrain of Control. Academy of Management Annals 14, 366–410 (2020).
3. Shrestha, Y. R., Ben-Menahem, S. M. & von Krogh, G. Organizational Decision-
Making Structures in the Age of Artificial Intelligence. California Management
Review 61, 66–83 (2019).
4. Raisch, S. & Krakowski, S. Artificial Intelligence and Management: The
Automation-Augmentation Paradox. Academy of Management Review (2020) doi:10.5465/amr.2018.0072.
5. Jarrahi, M. H. Artificial intelligence and the future of work: Human-AI symbiosis in
organizational decision making. Business Horizons 61, 577–586 (2018).
6. Cave, S. & ÓhÉigeartaigh, S. S. Bridging near- and long-term concerns about AI.
Nature Machine Intelligence 1, 5 (2019).
7. Saffer, D. Why We Need to Tame Our Algorithms Like Dogs. Wired (2014).
8. Davenport, T. & Miller, S. The Future Of Work Now: AI-Driven Transaction
Surveillance At DBS Bank. Forbes (2020).
9. Davenport, T. & O’Dell, C. Explainable AI and the Rebirth of Rules. Forbes (2019).
10. Athey, S. Beyond prediction: Using big data for policy problems. Science (2017)
doi:10.1126/science.aal4321.
11. Ludwig, J. & Mullainathan, S. Fragile Algorithms and Fallible Decision-Makers:
Lessons from the Justice System. Journal of Economic Perspectives 35, 71–96 (2021).
Author Biographies
Ali Aslan Gümüsay is head of the research group “Innovation, Entrepreneurship &
Society” at the Humboldt Institute for Internet & Society Berlin. He works at the
intersection of organization theory, entrepreneurship, business ethics, and
leadership. Currently he is a Visiting Research Fellow at Judge Business School,
University of Cambridge.
Thomas Bohné is founder and head of the Cyber-Human Lab at the University of
Cambridge. His research focuses on systems able to augment rather than replace
human abilities and improve workforce performance in industry. In collaboration with the World Economic Forum, he is leading the Augmented Workforce Initiative.
Thomas H. Davenport is the President’s Distinguished Professor of Information
Technology and Management at Babson College, a Visiting Professor at Oxford
University’s Saïd Business School, a Fellow of the MIT Initiative on the Digital
Economy, and a Senior Advisor to Deloitte’s AI practice.