Matthew Hutson looks at ethical issues involved in the use of artificial intelligence in financial services and highlights some of the benefits and risks
We need to code for ethics
Approval for a loan or credit card can seem arbitrary. Some obscure decision-making apparatus, whether human or machine, lies behind this summary judgment on our solvency and reliability. We assume financial firms know how the decision was made, but artificial intelligence (AI) can find patterns in data without obvious explanation. And that can shape the intimate decisions that we make about finance every day.
Some of those decisions are critical. Whether you can buy
a house, how you invest your retirement money and what
happens to your personal information can dramatically shape
the course of a life.
“An AI system’s ethical risks are related to not just the scale
at which they work,” says Lachlan McCalman, computer
scientist at Gradient Institute in Australia, “but also the
consequences of the actions that they take in the world.”
So, it’s worth exploring some of the ethical issues involved
in AI for financial services. Roughly, applications fall into
two categories, although they overlap: judging people and
advising people. Let’s look at each in turn.
Judging people
Talk of AI ethics often points to harms – bias, surveillance
and so on. But AI also brings benefits to the many. For example, the US Consumer Financial Protection Bureau reports that about 45m Americans can’t access credit because of insufficient records. They often turn to predatory
payday lenders to cover bills. Machine-learning algorithms
can provide a fuller picture of applicants and customers,
which increases access to credit.
But the biases of human decision-makers can creep in. First,
the algorithms themselves may encode certain assumptions.
Second, the data embodies many forms of bias. Samples
from one group of people may not properly represent the
population as a whole; data may reflect not the ground truth
but previous decisions by biased individuals (for example,
for historical reasons, women might not own property at
the same rates as equally creditworthy men do); and, due
to missing context, software may group people with others
who are not like them.
Nizan Packin, a law professor at the City University of New
York, has found that judgments about people with a range
of disabilities, for instance, are often incorrect because those
people “don’t fit any of the existing boxes”.
Regulations in some countries restrict the use of certain
‘protected’ attributes, such as race and gender, when
making decisions, unless there’s a good business reason for
including them. But even algorithms blind to such attributes
can wind up disadvantaging certain groups based on proxies
such as post codes.
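To make the proxy problem concrete, here is a minimal sketch in Python with hypothetical, synthetic data (the scenario and feature names are illustrative assumptions, not drawn from any real lender). A standard check is to ask how well the protected attribute can be predicted from the supposedly neutral features; if it can, proxies are present and blinding the model is not enough.

```python
# A minimal proxy check on hypothetical data: if the 'neutral' features can
# predict the protected attribute well, a model blind to that attribute can
# still act on it indirectly.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                      # protected attribute
income = rng.normal(0, 1, n)                       # unrelated to group in this toy example
postcode = 0.8 * group + rng.normal(0, 0.5, n)     # postcode index, strongly group-correlated

X = np.column_stack([income, postcode])            # the features a credit model would see
auc = cross_val_score(LogisticRegression(), X, group, cv=5, scoring="roc_auc").mean()
print(f"protected attribute predictable from features: AUC = {auc:.2f}")  # 0.5 would mean no proxy
```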
The risks can be reduced. Jay Budzik, Chief Technology Officer of Zest AI, a company that makes software for machine-learning-based credit scoring, says it uses a multi-step process to render its models transparent and reduce bias. Zest employs an algorithm to identify how much each factor affects an AI model’s decisions, and to pinpoint those that reduce fairness. With a traditional model, a typical lender may find that removing such a factor degrades performance and costs money, so the factor is retained.
“Most people kind of stop there,” Budzik says. “What we’ve
done is go a step further.” Zest software can create models
that weigh variables slightly differently to make different
accuracy/fairness trade-offs. (There are conflicting definitions
of fairness, such as equal opportunity or equal outcome;
Zest uses whichever the client chooses.)
Sometimes it turns out that sacrificing a bit of accuracy
affords a lot more fairness. “Being able to whistle past the
graveyard when there are these unfair outcomes just doesn’t
seem acceptable now that we have new maths to correct
them,” Budzik says.
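The article does not spell out Zest’s algorithm, so the sketch below is only an illustration of the kind of accuracy/fairness trade-off described here, using the same sort of hypothetical, synthetic data as above: dropping a group-correlated proxy feature costs some accuracy but narrows the gap in approval rates between groups (one simple, contested measure of fairness).

```python
# Illustrative only -- not Zest AI's method. Dropping a proxy feature costs
# some accuracy but narrows the approval-rate gap between groups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, n)                         # protected attribute (never a feature)
income = rng.normal(0, 1, n)
postcode = 0.8 * group + rng.normal(0, 0.5, n)        # genuinely predictive, but group-correlated
approved = (income + 0.4 * postcode + rng.normal(0, 0.5, n) > 0.4).astype(int)

def evaluate(features):
    model = LogisticRegression().fit(features, approved)
    preds = model.predict(features)
    acc = accuracy_score(approved, preds)
    gap = abs(preds[group == 1].mean() - preds[group == 0].mean())
    return acc, gap

with_proxy = evaluate(np.column_stack([income, postcode]))
without_proxy = evaluate(income.reshape(-1, 1))
print(f"with proxy:    accuracy {with_proxy[0]:.3f}, approval-rate gap {with_proxy[1]:.3f}")
print(f"without proxy: accuracy {without_proxy[0]:.3f}, approval-rate gap {without_proxy[1]:.3f}")
```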
Algorithms can judge people even before they apply for a
loan. Again, that has benefits. When marketing, AI may
help reach more people, or reach the right people – those
who won’t default and harm themselves and the service
providers. It can act as a defence along with credit scoring,
says McCalman, but it also provides another chance for bias,
particularly as marketers have more leeway on data use.
Marketing can also be predatory. McCalman says companies’
marketing and credit-scoring systems may not always be
integrated, and marketing might merely try to increase
applications. Algorithms can also optimise pricing, extracting
more money from some groups than others.
Companies should balance profit and fairness, but even
quantifying the trade-offs presents a hard technical
challenge. “This stuff is difficult,” McCalman says. “All the
more reason to think it carefully through as it gets more
sophisticated.”
Advising people
Good financial advice is expensive, hence the rise
of automated advice. More than $1tn is under the
management of such robo-advisers in the US alone. They
can offer guidance on how much to save, how to stick to
plans, how to allocate investments and which services to
use, depending on local financial regulations. Some support
human counsellors, some advise clients independently and
some even carry out transactions, for instance to rebalance
portfolios. Compared with human advisers, they can be
cheaper and more transparent, and can also be more
competent and personalised, given the data they
can process.
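For a sense of the mechanical end of what such systems automate, here is a minimal, hypothetical sketch of a rebalancing step: given current holdings and target weights, it computes the buy and sell amounts that restore the targets (the asset names and figures are made up).

```python
# A minimal sketch of portfolio rebalancing with hypothetical holdings:
# positive amounts mean buy, negative mean sell.
def rebalance(values, targets):
    """values: current market value per asset; targets: desired weight per asset."""
    total = sum(values.values())
    return {asset: targets[asset] * total - value for asset, value in values.items()}

holdings = {"equities": 70_000, "bonds": 25_000, "cash": 5_000}
target_weights = {"equities": 0.60, "bonds": 0.30, "cash": 0.10}
print(rebalance(holdings, target_weights))   # {'equities': -10000.0, 'bonds': 5000.0, 'cash': 5000.0}
```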
But robo-advisers can also identify the vulnerable and exploit
behavioural biases – to maximise profit, such as by raising
rates for people who don’t know better or are too lazy to
care, or by using bait and switch – and they can also learn
these tactics better than a human, says Benedict Dellaert, a
marketing professor at Erasmus University Rotterdam.
Robo-advisers could also experiment on users or pursue collective goals at the expense of some of them. Just as a traffic app might send some people down detours to see if the routes are any good, or to reduce congestion, so some consumers might become guinea pigs or sacrificial lambs. If such a possibility is in the fine print, many won’t notice.
Increasingly, fintech apps and services collect data beyond standard financial metrics. If we allow them, which is
sometimes a precondition for access, they can browse our
shopping, our search histories and our social media. One
might ask what we’re giving up by opening our online lives.
Packin presents one scenario: “I wouldn’t want my insurance
premiums to skyrocket just because I am ultra-clumsy and
I tend to bump into things, and I post about this on social
media.” AI might also scan our Facebook photos for images
of junk food. Such snooping would differentially harm those
who don’t know how to curate their online presence.
Also, being careful is probably not enough to ensure privacy. Our friends can inadvertently give others access to the data they hold on us, without our knowing. That gives platforms a lot
of power. China may show the future. It has introduced
social credit systems, in which behaviours such as writing
about censorship can disqualify citizens from receiving a
loan or buying an airline ticket. “We like to say that this
only happens in China,” Packin says. “I highly disagree.”
Facebook, for example, has applied for patents on making
financial judgments based on user behaviour.
Data privacy laws in the EU promise protection from
exposure. The General Data Protection Regulation was set
up to ensure that consumers can control their own data.
For some breaches, firms can be fined up to €20m or 4% of
annual worldwide turnover, whichever is greater. In the US,
however, consumers might not even know who is using their
data. Fintech apps aren’t required to share what factors they
use to make decisions, and they in turn rely on invisible data
aggregator companies, which can hold on to data even after
people cancel fintech services. “The data is leaking in all
directions,” Packin says.
The data might also be wrong. In a 2012 survey of 1,000
people by the Federal Trade Commission, a quarter reported
a ‘material’ error in at least one of three credit reports. With
more data, there’s more opportunity for error. If you actually
paid a bill on time, or if an algorithm misses the context of a
Facebook photo because it doesn’t understand Halloween,
you can’t easily correct the fintech app or data aggregator.
Any errors can persist for generations because algorithms
capture social patterns from which other algorithms learn.
Making algorithms explainable would go a long way
towards ameliorating these problems. If programmers or
consumers understood how algorithms operated and what
data they used, they would trust them, they could improve
their performance and they could correct any errors or
unwanted biases. Fortunately, in some ways, AI models are
less opaque than humans. We have methods to analyse their
inner workings, while brains are not so easily read.
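As one concrete example of such methods (a sketch with hypothetical data, not a description of any firm’s tooling), permutation importance measures how much a model’s accuracy falls when each input is shuffled in turn, revealing which factors its decisions actually lean on.

```python
# A minimal sketch of permutation importance on hypothetical data: shuffle
# each feature in turn and see how much the model's score drops.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n = 4_000
X = rng.normal(size=(n, 3))          # hypothetical columns: income, debt_ratio, postcode_index
y = ((X[:, 0] - X[:, 1] + rng.normal(0, 0.5, n)) > 0).astype(int)   # postcode plays no real role

model = GradientBoostingClassifier().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["income", "debt_ratio", "postcode_index"], result.importances_mean):
    print(f"{name:15s} importance {score:.3f}")   # postcode_index should come out near zero
```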
Beyond reducing unwanted bias, explainability can also
reduce the false appearance of bias, says Leanne Allen,
head of data ethics at KPMG, the accounting and consulting
firm. “An algorithm might price insurance based on postal
code not because it correlates with ethnicity but because
it correlated with crime rates, a more acceptable basis for
judgment, as long as it’s explained,” she says.
Another worry about data privacy and AI is security risk.
The more data that passes between servers, the greater
the risk of hacking. Hackers can sometimes even reverse-
engineer trained models to extract the private data that they
processed. “You have so much that could go wrong with all
these different steps,” Packin says.
When finance, regulation and computer science meet, you need service providers and regulators who understand computers, and computer scientists who understand finance. These teams require diverse personnel to avoid ethical blind spots, Allen says. But regulation is not sufficient. Companies could just box-tick. They need to abide by a set of values.
Ethics is “should you do something, not can you do
something”, Allen says. She believes companies need diverse
panels actively searching for problems to fix and being
rewarded when they spot something. And they should share
what they find. “This isn’t about trade secrets, where if you
do well, everyone else fails,” she says. “These have to be
industry-wide solutions.”
Matt Hutson is a freelance science writer in
New York City who covers psychology and
technology for The New Yorker, Science, Nature,
Scientific American and other publications. He
is the author of The 7 Laws of Magical Thinking
Rebecca Pool explains what deep learning is and how this form of artificial intelligence is being developed by the finance industry to make light work of complex analysis
Going deep to learn more
Be it in chatbots, credit-scoring or fraud prevention, artificial intelligence is on the rise across the finance industry. Many use-cases rely on machine learning, which, put simply, is a method of ‘training’ algorithms to find patterns in large datasets so that machines can make better predictions over time without being specifically programmed to do so.
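As a toy illustration of that definition (not a production system, and unrelated to any firm named here), the snippet below adjusts a model’s parameters from data alone; its predictions improve over iterations without the classification rule ever being written by hand.

```python
# A toy 'training' loop: logistic regression fitted by gradient descent.
# The rule separating the two classes is learned from the data, not coded.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(1_000, 2))
y = (X @ np.array([1.5, -2.0]) > 0).astype(float)   # hidden pattern to be discovered

w = np.zeros(2)                                     # model parameters, initially know nothing
for step in range(201):
    p = 1.0 / (1.0 + np.exp(-X @ w))                # predicted probabilities
    w -= 0.5 * X.T @ (p - y) / len(y)               # gradient step on the logistic loss
    if step % 50 == 0:
        print(f"step {step:3d}  accuracy {((p > 0.5) == y).mean():.3f}")
```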
But the use of artificial intelligence goes beyond the quick and easy credit check and mimicking live employees. There is a growing appetite at banks and finance companies to find new ways to use the process. In November 2020, for example, Barclaycard Germany joined forces with Amazon to provide and predict customised shopping and payment services by using artificial intelligence.
Andreas Joseph, Senior Research Economist at the Bank of
England’s Advanced Analytics Division, is confident that