ARTICLE
Paternalistic AI: the case of aged care
Cristina Voinea 1✉, Tenzin Wangmo 2 & Constantin Vică 3
In this paper, we argue that AI systems for aged care can be paternalistic towards older
adults. We start by showing how implicit age biases get embedded in AI technologies, either
through designers’ ideologies and beliefs or in the data processed by AI systems. Thereafter,
we argue that ageism oftentimes leads to paternalism towards older adults. We introduce the
concept of technological paternalism and illustrate how it works in practice, by looking at AI
for aged care. We end by analyzing the justifications for paternalism in the care of older
adults to show that the imposition of paternalistic AI technologies to promote the overall
good of older adults is not justified.
https://doi.org/10.1057/s41599-024-03282-0 OPEN
1The Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, UK. 2Institute for Biomedical Ethics, University of Basel,
Basel, Switzerland. 3Faculty of Philosophy, University of Bucharest, Bucharest, Romania. ✉email: cristina.voinea@philosophy.ox.ac.uk
Introduction
Imagine you are thirty, and you return home after a stressful
day at work, wishing for a long bath to relax. After soaking in
the tub for half an hour, the alarms suddenly go off, and your
phone starts ringing. Your family members have been alerted by
sensors in your home that you have been in the bathroom for
longer than usual. Once the situation settles, you head to the
kitchen to have a glass of wine, only to receive a message on your
smartwatch warning against it. Your digital assistant reminds
you that you’ve had wine in the previous days, which is very bad
for your health and may affect your life insurance. You are
advised to return the wine to the fridge and head to bed;
otherwise, your family and physician will be notified. This sce-
nario looks far from ideal. But what if the protagonist of the story
was eighty? Using technology to monitor and control the lives of older adults is often seen as a way to ensure their
safety and well-being, even if the price is the constraint of their
autonomy.
Already, the question of the impact of digital technologies on
older adults is becoming more and more pressing. By 2050, the
number of people 65 years and above is projected to double in
comparison to the 2021 figures, due to increased life expectancy
over recent decades (United Nations Department of Economic and Social Affairs, 2023, p. 18). Longer lifespans bring about a higher
likelihood of experiencing disabilities and various medical con-
ditions. In the near future, the demand for care services will far surpass the supply, which is already dwindling (United Nations Department of Economic and Social Affairs, 2023, p. 113). In this context, artificial intelligence (AI) is seen as crucial in supplementing
and expanding care for older adults. Although AI holds impor-
tant promise in tackling the scarcity of care, it raises some pro-
blems as well.
In this paper, we show that ageism can inform the develop-
ment and deployment of AI technologies for aged care, and
when it does, it takes the form of technological paternalism.
Ageism against older adults presupposes the existence of ste-
reotypes based on age, which frequently rely on broad
assumptions about old age and old individuals, depicting them
as frail, vulnerable, and incompetent, although warm and
friendly (Swift et al., 2021, p. 168). Ageism can negatively affect
older people’s self-esteem, who might end up internalizing
ageist stereotypes, which can become self-fulfilling prophecies
resulting in social exclusion and health problems (Chasteen et al., 2020, p. 1326; Chang et al., 2020, p. 15). Although it can
be difficult to avoid the confrontation with ageist stereotypes,
older adults can still overcome them and continue living their
livesastheyseefit. But in some instances, ageism leads to
paternalistic attitudes towards older adults (Cary et al., 2017),
and in these circumstances, it is more insidious and difficult to
resist, as it presupposes a constraint of older adults’ freedom,
against their will, and for their supposed well-being. Thus,
paternalism has direct effects on the lives of older adults, whose
autonomy is constrained without them having the possibility to
resist or oppose the constraint.
We start by showing how implicit age biases get embedded in
AI technologies, either through designers’ ideologies and beliefs
or in the data processed by AI systems. Thereafter, we argue
that ageism can lead to paternalism towards older adults. We
show how implicit age biases in AI development lead to the
creation of paternalistic technologies designed for older adults’
care. We introduce the concept of technological paternalism
and illustrate how it works in practice, by looking at AI for aged
care. We end by analyzing the justifications for paternalism in
the care of older adults to show that the imposition of pater-
nalistic AI technologies to promote the overall good of older
adults is not justified.
AI for aged care
AI is an umbrella term for a variety of systems that can analyze
their surroundings and take actions with a degree of autonomy.
These systems can be software-based, functioning in virtual
spaces such as conversational agents based on large language
models, or hardware-based, operating in physical environments,
such as robots. AI techniques encompass machine learning,
computer vision, pattern detection, and natural language pro-
cessing, among others. These AI-enhanced interventions, which
oftentimes incorporate environmental sensors, are developed to
support the health and independence of older individuals. The
hope is that these semi- and fully autonomous systems will extend
the reach of care services, enhance their efficiency, and reduce the
burden on caregivers. Moreover, by supplementing (or com-
pletely replacing) caregivers, AI is hoped to improve workforce
sustainability, address service disparities, and streamline infor-
mation systems and data analysis for those in need of care
(Loveys et al., 2022, p. e286).
AI in aged care also holds promise because of its potential to realize the ideal of 4P medicine (predictive, personalized, preventive, and participatory) (Rubeis, 2020, p. 2), which is supposed to reduce the incidence of chronic diseases in a cost-effective manner. For example, predictive sys-
tems are already used to collect data through monitoring, sur-
veillance, and sensors to detect abnormalities in older individuals’
behavior, such as an increased probability of falls. Similarly, AI plays a vital
role in personalized medicine, where once again, collection of
personal data is crucial, as it is then used to screen for chronic
diseases and provide tailored treatment options and health advice,
taking into account an individual’s specific health profile (Miura
et al., 2022). Preventive AI systems, on the other hand, are used to
alert healthcare providers or family members of irregular patterns
in the daily activities of patients, allowing for timely risk miti-
gation measures (Pilotto et al., 2018). These systems are often
associated with “in-place remote healthcare assistance”, involving
extensive monitoring for daily-life health support and the trig-
gering of alarms in case of emergencies (Lee et al., 2023). Last but
not least, through the participatory dimension of some AI sys-
tems, the patient can read and interpret the data from wearable
sensors, which can offer them a better understanding of their
medical situation and, thus, a better ground for participating in
decision-making regarding their own health and well-being.
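To make the alerting mechanism behind such preventive systems concrete, consider a minimal sketch. This is our own illustration, not a description of any system cited above; the function name, the data, and the thresholds are all invented. A sensor reading is compared against the person's own historical baseline, and a large deviation notifies caregivers:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float, k: float = 3.0) -> bool:
    """Flag a reading deviating more than k standard deviations
    from the person's own historical average."""
    if len(history) < 2:
        return False  # too little data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    return abs(current - mu) > k * max(sigma, 1e-6)

# Invented usage: past bathroom visits averaged ~12 minutes, so a
# 35-minute bath is flagged and caregivers are notified, whether or
# not the monitored person wanted the alert to be sent.
past_visits_min = [10.0, 12.0, 11.0, 13.0, 12.5]
if is_anomalous(past_visits_min, current=35.0):
    print("ALERT: unusual bathroom duration; notifying family/physician")
```

Note that nothing in this logic consults the older adult about whether the alert is wanted; the monitored person's consent is simply not a variable, a point we return to below.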
While AI holds the potential to enhance the quality and
breadth of aged care, it is not without its problems. In recent
years, researchers have drawn attention to how AI can perpetuate
biases, with a notable focus on racial and gender biases (Buo-
lamwini and Gebru, 2018; Noble, 2018). Yet, one of the most
pervasive and often unnoticed biases in most societies is ageism
(Iversen et al., 2009; Nelson, 2016). This raises the question of
whether AI can also embed and reinforce ageist biases.
AI ageism. A recent but still scarce body of work traces a connection
between AI and ageism (Stypinska, 2023; Chu, Leslie et al., 2022;
Rubeis, 2020; Neves et al., 2023; Berridge and Grigorovich, 2022).
AI ageism is defined as those practices within the field of AI that
contribute to discrimination against or the exclusion and neglect
of the interests of older adults (Stypinska, 2023, p. 669). Like
other AI biases, ageism can manifest through the beliefs and
ideologies of those creating AI technologies, or it can be
embedded in the datasets processed by AI systems (Stypinska,
2023; Rubeis, 2020; Neves et al., 2023). Ageism in AI risks per-
petuating negative stereotypes and sidelining older individuals,
making their active engagement with and benefit from AI tech-
nologies more difficult. Thus, AI systems are not neutral; they
reflect the values, beliefs, and biases of their creators or those that
are embedded in the data processed (Boyd and Crawford, 2012;
Buolamwini and Gebru, 2018; Noble, 2018). How does this
happen?
Age scripts. Firstly, the perceptions of designers, developers, and
programmers regarding potential technology users, usually called
‘scripts’, infuse technology development, which can influence the
way consumers use products (Peine and Neven, 2021, p. 2857).
Users have the option to follow these predefined scripts and use
the technology as it is supposed to be used (such as the large
majority of the population that uses computer operating systems
in expected ways), they can adapt the technology to better suit
their needs (as in the case of people who tinker with computer
operating systems in order to adapt them to their preferences), or
reject it entirely if it does not align with their preferences (such as
the case of those who refuse to use certain operating systems
because of value reasons). When it comes to technology designed
for older individuals, age-specific scripts come into play,
embedding societal perspectives regarding the aging process in
technology design processes. These scripts, in turn, exert a nor-
mative influence on users, compelling them to adhere to pre-
vailing expectations (Peine and Neven, 2021, p. 2858).
Nonetheless, as recent research at the intersection of social ger-
ontology and Science and Technology Studies (STS) shows, users
can also challenge these scripts and reappropriate technologies to
fit their needs (Loos et al., 2021). To put it simply, ageing and
technology are co-constituted (Peine and Neven, 2019, p. 17).
Often, age scripts portray older adults as incompetent in dealing with technology, vulnerable, or frail. Such scripts emerge because
ageism is pervasive in the tech industry, which is dominated by
young (often white) males who may not be immediately aware of
their age-related biases. This issue is so pronounced that the tech
industry has been characterized as “one of the most ageist places
on Earth” (Gullette, 2017, p. xx). For instance, most AI technologies
for older adults predominantly focus on healthcare and chronic
disease management, often referred to as gerontechnology, while
aspects related to leisure and enjoyment are overlooked (Neves
et al., 2023, p. 1275).
Older adults are often seen as “invisible users” in the
development of digital technologies, leading to their exclusion
from design processes (Mannheim et al., 2022, p. 1197; Ivan and
Cutler, 2021). For example, in a literature review of studies
documenting the design of digital technologies with older
persons, Mannheim et al. found that the exclusion of older
adults from design processes often takes the form of “no or low
involvement, upper-age limits, and sample biases toward
relatively ‘active,’ healthy and ‘tech-savvy’ older persons.” (2022,
p. 1188). This exclusion not only disempowers older adults but
also perpetuates their marginalization from the design and use of
AI technology. Developers of digital technologies and AI systems
often create technologies “on behalf of older people, instead of for
older people” (WHO, 2022, p. 8). This means that technology is
designed based on inaccurate assumptions about the lifestyles,
needs, and interactions of older individuals, and treats older adults as a homogenous group. It is important to note that this
lack of consideration does not necessarily imply ill intentions on
the part of developers but rather reflects a deficiency in awareness
and reflection regarding the needs, preferences, skills, and
capacities of older adults (Manor and Herscovici, 2021, p. 1088).
For instance, Neven (2015) analyzed AIMS, an in-place remote
healthcare assistance system designed to make monitoring older
persons as unobtrusive as possible, allowing them to live at home.
This was achieved by installing a variety of sensors and cameras
in the homes of older adults that monitored and learned their
movements, triggering an alarm in case of detection of unusual
behaviors. In this case, “for the older people, the script of AIMS
has distinct elements of ‘giving up’—e.g. control over previously
private information or access to (spare) rooms which were not
equipped with sensors—and ‘putting up’—e.g. with being
monitored and with changes in care—and the autonomous
nature of AIMS affords very few opportunities to resist this”
(Neven, 2015, p. 41). These age scripts made it nearly impossible for older adults to resist the system without
triggering the alarm or using the system in a creative and
unforeseen way, as decided by them and not dictated by others. In
this sense, older adults had to conform to the new technology and
adapt their behaviors to it—for instance, some rooms that were
unmonitored became completely off-limits, and some people
stopped kneeling when praying because of the fear of triggering
the alarm. But older adults do not always conform to
technological systems. As Berridge (2017) shows, passive monitoring systems do not simply either invade or respect the privacy of older adults; rather, they can provide older adults with the opportunity to negotiate what privacy means for them, what its boundaries are, and when they can be infringed.
But customers and users of gerontechnology are not always the
same. Customers are oftentimes family members who want to
improve the life of an older relative, or they can be large-scale
care providers who want to make care more efficient through
technology. This means that designers or developers of AI
technologies for aged care are caught between competing
interests: on the one hand, the customers, the ones who pay,
value older adults’ safety above their autonomy, while older adults
who are the users of these technologies might, on the contrary,
value autonomy above safety. Because of the ways markets
operate, developers are incentivized to prioritize the customers’
interests over those of the users, as this is how profit gets
maximized. This means that in the case of technologies for aged
care, developers have fewer incentives to create devices that can
be easily adapted and creatively reappropriated by end-users.
Biased data. Another notable source of bias lies in the datasets processed by AI systems, which often fail to adequately
include older individuals. Even the largest datasets are not inde-
pendent of the “instruments, practices, and systems of knowl-
edge” used for data collection, processing, and analysis (Sourbati and Behrendt, 2021, p. 1401). Data is an object of power: it
includes or excludes certain individuals, processes, or phenom-
ena; it makes them visible or, on the contrary, invisible (Ruppert
et al., 2017). Biased data can lead to discriminatory or exclu-
sionary results for minority groups. For example, Buolamwini
and Gebru (2018) showed that facial analysis AI performs markedly worse for women and men with darker skin, which often results in
discriminatory outcomes. Straw and Wu (2022) revealed that AI
systems built to predict liver disease are twice as likely to miss
disease in women as in men. These results can be attributed to the
underrepresentation of marginalized groups in datasets, as shown
by research in critical data studies (Geneviève et al., 2020; Dalton
et al., 2016). Older individuals are also frequently absent from
datasets used for AI development and assessment (Mannheim
et al., 2019; Fernández-Ardèvol and Grenier, 2022; Rosales and
Fernández-Ardèvol, 2019). This can be attributed to the lower
likelihood of older individuals using digital technologies, which
might be the result of the gray digital divide (Mubarak and
Suomi, 2022).
However, the underrepresentation of older adults is also due to
exclusionary data collection processes (Rosales and Fernández-
Ardèvol, 2019; Sourbati and Behrendt, 2021). Data collection
practices, even health and medical data from sources such as
clinical trials, often prioritize younger demographics, resulting in
the underrepresentation of older age groups (United Nations
Independent Expert on the Enjoyment of All Human Rights by
Older Persons Report, 2020). And this is not due to “explicit age
exclusion but implicit age bias” (Jecker, 2020, p. 250). For
example, research on osteoporosis amongst older age groups is
rare despite the fact that they are the population most affected by
this medical condition. One review of randomized control trials
found that the average age for osteoporosis study participants is
64, which is almost two decades younger than the average age of
people with hip fractures, the most important clinical event in
osteoporosis (McGarvey et al., 2017). Jecker (2020, p. 250) notes
that implicit age bias that results in the exclusion of older adults is
often present in studies involving stroke (Gaynor et al., 2014),
cancer (Murthy et al., 2004), acute coronary syndrome (Lee et al.,
2001), chronic kidney disease (O’Hare et al., 2009), diabetes
(Cruz-Jentoft et al., 2013), and Parkinson’s disease (Buckley and
O’Neill, 2015) to name a few. The situation highlights just how
far-reaching implicit age bias is in clinical research. The problem
is that data from these studies is used to train AI for aged care,
which can result in situations where AI systems may fail to
generalize to older age groups, leading to poorer performance and
user experiences for older users (Chu et al., 2022, p. 950).
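The generalization failure described here can be illustrated with a toy simulation. This is our own construction: the data are synthetic, and the age–biomarker relationship is invented purely for demonstration. A classifier trained on a cohort skewed toward younger patients is then evaluated on an older cohort whose disease profile it never saw:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def cohort(n, age_mean, risk_shift):
    """Synthetic patients: one biomarker whose relationship to the
    disease label shifts with age (risk_shift models that shift)."""
    age = rng.normal(age_mean, 5, n)
    biomarker = rng.normal(0, 1, n)
    p = 1 / (1 + np.exp(-(biomarker + risk_shift)))
    y = (rng.random(n) < p).astype(int)
    return np.column_stack([age, biomarker]), y

# Training data skewed toward ~40-year-olds: implicit age bias
X_train, y_train = cohort(2000, age_mean=40, risk_shift=0.0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate on a cohort like the training data and on an older one
for label, age_mean, shift in [("younger", 40, 0.0), ("older", 80, 1.5)]:
    X, y = cohort(1000, age_mean, shift)
    print(f"{label} cohort accuracy: {model.score(X, y):.2f}")
# Accuracy is typically noticeably lower on the older cohort,
# whose biomarker-disease relationship the model never saw.
```

The point of the sketch is structural rather than numerical: a model cannot learn age-specific patterns it was never shown, however large the training set.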
But even when older adults are represented in datasets, data
might not be disaggregated for relevant use. Disaggregated data is
data broken down into sub-categories which allows a better
understanding of trends and patterns emergent in these sub-
categories. Lack of age-disaggregated health data “impedes the
identification of meaningful correlations among various factors
and limits the capacity for quantitative program evaluations, to
assess causal inference, and to pinpoint best practices” (Diaz et al.,
2021, p. e436). Data tend to be disaggregated for younger age
groups, but not for older ones. This can be attributed to ageist stereotypes that obscure the reality that older adults are not a homogenous group but differ significantly from one another.
The risk is that AI systems that incorporate data from extensive
cohorts that are not disaggregated may interpret individual data
in terms of the average values derived from those datasets.
Because individual interests and skills are not reflected in the
data, individual variations might be misinterpreted as aberrant
behavior (WHO, 2022, p. 6).
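A minimal illustration of why disaggregation matters follows; it is our own construction, with invented walking-speed numbers. Judged against a pooled population baseline dominated by younger adults, a perfectly typical 85-year-old looks like an outlier; judged against an age-stratified baseline, the same reading is unremarkable:

```python
from statistics import mean, stdev

# Invented walking speeds (m/s): a pooled cohort dominated by
# younger adults, and an age-disaggregated 80+ subgroup.
pooled = [1.40, 1.35, 1.45, 1.40, 1.50, 1.38, 1.42, 1.44, 1.36, 1.48, 0.80]
over_80 = [0.80, 0.75, 0.85, 0.90, 0.70]

def z_score(baseline: list[float], value: float) -> float:
    return (value - mean(baseline)) / stdev(baseline)

reading = 0.78  # a typical 85-year-old in this toy data

print(f"vs pooled baseline: z = {z_score(pooled, reading):+.2f}")   # ~ -3.0
print(f"vs 80+ baseline:    z = {z_score(over_80, reading):+.2f}")  # ~ -0.25
# Under a common 3-sigma rule, the pooled baseline flags this
# ordinary reading as aberrant; the age-stratified one does not.
```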
Paternalism and its technological version
As noted previously, ageism frequently relies on both positive and
negative stereotypes, painting older adults as warm and likable
yet also as incompetent, forgetful, and fragile (Levy, 2018; Ayalon
et al., 2020; Cary et al., 2017; Chang et al., 2020). This blend of
stereotypes can elicit complex emotional responses, including
feelings of pity and the desire to help, which lead to paternalistic
behaviors towards older adults (North and Fiske, 2012, p. 10). For
example, when positive stereotypes are in the mix, people tend to
respond actively, that is, to intervene on behalf of the older adult
in order to help, without older adults actually asking for help
(Cuddy et al., 2007, p. 109). Although helping behaviors are sometimes seen as a form of respect for older adults, when such help is accompanied by judgments of frailty and incompetency,
they become paternalistic and can undermine older adults’
independence (Sublett et al., 2022).
Paternalism, classically defined, involves interventions that
restrict an individual’s liberty for their own benefit and without
their consent (Dworkin, 1972, p. 67). Examples of paternalistic
interventions abound in day-to-day life, such as the prohibition of
the sale of tobacco in New Zealand to anyone born after 2008.
Three crucial dimensions of paternalism emerge:
1. Paternalism entails a limitation of freedom;
2. the limitation of freedom is justified by the intention of
advancing an individual’s best interests;
3. the limitation of freedom is without prior consent
(Dworkin, 1972, p. 65).
Paternalistic behaviors can manifest in various ways, from
assuming that older adults cannot make their own choices to
making decisions on their behalf without their input. The incli-
nation to over-help or over-protect those who are perceived as
needing assistance or guidance leads to the creation of excessively
accommodating environments that assume older adults’ dependency and fragility without considering their actual competence or interest in receiving help (Vervaecke and Meisner, 2021, p.
160). While some may argue that paternalism is driven by gen-
uine concern for the well-being of older adults, it can actually
reinforce their lower social status as it can lead to a cycle of
disempowerment that further entrenches the stereotypes asso-
ciated with old age. Furthermore, paternalistic behaviors place
older individuals in a position of dependence, perpetuating the
belief that they are unable to make their own, informed choices
(Swift and Chasteen, 2021).
Technological paternalism. As paternalism implies intention, it
is usually assumed that only humans can be paternalistic and can
impose restrictions on other people’s freedom for their supposed
good. But in the last decades, technology has played an increas-
ingly substantial role in decision-making processes and is often
used to impose certain ways of doing things or even to prohibit
some actions. Examples of the latter include cars that emit warnings or refuse to start unless seatbelts are fastened, or
machinery that prevents operation without safety gear (Spie-
kermann and Pallas, 2006). These examples show how personal
autonomy can not only be constrained through the intentional
actions of other individuals but also through various social, epistemic, and material structures (Hofmann, 2003), such as
technology. In this context, the concept of technological patern-
alism has been introduced to examine the ways in which indi-
vidual freedom can be constrained by technology, often without
users’ consent, and for their own benefit (Millar, 2015; Spie-
kermann and Pallas, 2006; Hofmann, 2003; Rochi, 2023).
For the concept of technological paternalism to make sense in
relation to AI technologies for aged care, the three conditions
above have to be met. First, it is essential to consider whether AI technologies can interfere with users’ liberty,
understood as freedom of action. AI is deployed in aged care
not only to create virtual outputs but also to control physical
environments. Assistive technologies such as smart homes are a
case in point, showing how AI systems can exert physical control
over a user’s environment by monitoring and controlling
“physiological parameters (pulse, oxygen saturation, blood
pressure); functionality (general activities, motion, meal intake);
safety and security (automatic lighting, trip and fall reduction,
hazard detection, intruder detection); social interaction (phone
calls, video-mediated communication, virtual participation in
groups); and cognitive/sensory assistance (medication reminder,
lost key locator)” (Facchinetti et al., 2023, p. 2). With all of these
sources of data, AI systems can make suggestions or offer advice
to older adults, such that they encourage healthy habits and
discourage dangerous activities. However, these all-encompassing
monitoring systems might also restrict access to certain areas or
activities for safety reasons, even when older adults are capable of
managing these tasks on their own. Such systems might have a
profound influence on older adults’ decision-making processes, making some actions more attractive and others less so. For
instance, if an AI system controls medication schedules and
dosage, older adults might not have the autonomy to adjust their
treatment based on how they feel or their preferences (Fadhil,
2018). What is more, AI systems often collect and analyze
personal health data. Older adults might be uncomfortable with
constant surveillance and data collection, which can limit their
sense of freedom and personal space (Mannheim et al., 2022).
Thus, AI systems can limit older adults’ possibilities of acting.
The second condition pertains to the intention behind the
limitation of freedom, which should be to promote individuals’
best interests. Can we meaningfully say that technologies have
intentions and that these intentions are to protect users’ interests?
First, while we cannot (yet) talk of AI systems as possessing
intentions, they can be means for various entities, such as states, companies, designers, family members, or healthcare providers, to accomplish certain goals, sometimes by imposing constraints on users’ autonomy. Technologies are created to foster the
accomplishment of different types of goods or, more generally,
to accomplish various types of purposes. In other words,
technologies operate with some types of criteria that serve as a
definition of their goals (Kühler, 2022, p. 196). In the case of AI
systems, these criteria are named objective functions, which are
goals they should pursue as they learn more and become more
complex (Zhang and Conitzer, 2019). Most of the time, it is parties other than the beneficiaries (that is, older adults), such as developers, healthcare providers, and family members, who operate with a notion of the good that technologies are meant to maximize; this is especially evident in healthcare systems. Thus, technologies work as if they had intentions
and a conception of the good of their users (Kühler, 2022, p. 196).
In the case of a fall detection system, the objective function might
be an increase in the accuracy of fall prediction. This objective function is then maximized, even at the price of other important values, such as privacy or autonomy. Another
example is AI systems that dispense medications according to a
strict schedule and dosage, which can limit older adults’ ability to
deviate from the prescribed schedule. Similarly, AI monitoring
systems can automatically trigger emergency responses
in situations interpreted as critical, even if the older adult wishes
to manage their condition without immediate medical interven-
tion. This limitation on their freedom is driven by the desire to
prioritize their health and safety; oftentimes health and safety, in the view of healthcare providers or family members, override older adults’ preferences. The same can be said of
developers of these technologies, whose intention oftentimes is to
support older adults’ safety and well-being through technology,
even if the price is sometimes a restriction of the freedom to
choose (Boström et al., 2013).
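Schematically, the way such a notion of the good enters an objective function can be sketched as follows. This is our own toy construction, not a description of any deployed system; the costs, risk scores, and event log are all invented. A fall-alert threshold is chosen by minimizing total cost, and whether the intrusion of an alarm counts at all depends entirely on a weight that someone other than the older adult sets:

```python
# Toy model: the system alerts when risk_score >= threshold; the
# threshold is chosen by minimizing a cost (objective) function.

def total_cost(threshold, events, cost_missed=100.0,
               cost_false_alarm=1.0, cost_autonomy=0.0):
    """Sum the costs a threshold incurs over (risk_score, fell) events.
    cost_autonomy prices the intrusion of a false alarm on the user;
    its default of 0 means the user's quiet simply never counts."""
    cost = 0.0
    for risk, fell in events:
        alert = risk >= threshold
        if fell and not alert:
            cost += cost_missed
        elif alert and not fell:
            cost += cost_false_alarm + cost_autonomy
    return cost

def best_threshold(events, **costs):
    return min((t / 20 for t in range(1, 20)),
               key=lambda th: total_cost(th, events, **costs))

# Invented event log: two falls (one at low estimated risk), six non-falls.
events = [(0.15, True), (0.85, True), (0.10, False), (0.25, False),
          (0.40, False), (0.55, False), (0.30, False), (0.20, False)]

print(best_threshold(events))                      # ~0.15: alarm-happy
print(best_threshold(events, cost_autonomy=50.0))  # ~0.60: intrusion now counts
```

With the autonomy cost left at zero, the optimization drifts toward an alarm-happy threshold; pricing intrusion into the objective shifts the result. The trade-off, in other words, is decided by whoever writes the objective, not by the user.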
Last but not least, AI systems restrict users’ freedom without users explicitly agreeing to this or expecting it. Many AI
applications for aged care involve surveillance technologies that
collect data about users’ daily activities, from wearable sensors to
smart home systems. This extensive surveillance often changes
the behaviors of individuals who are being monitored. For
example, monitoring of food consumption may make individuals
feel that they cannot eat what they would like or when they would
like, due to feelings of being watched and reprimanded for “bad
decisions” (Kang et al., 2010, p. 1584). What is more, users
typically lack the ability to opt out or to override technological
decisions without compromising the technology’s functionality
(Rochi, 2023). Some smart home systems grant remote control to
caregivers or family members, but older people report that they would prefer to retain control over these systems themselves (Ghorayeb
et al., 2021; Demiris et al., 2009). Also, older adults would prefer
to have a say in what information the AI system shares with their
family or caregivers (Galambos et al., 2019), a need that stems
from the different understandings and approaches to privacy that
older adults and their families and caregivers have (Berridge,
2017). For example, in a scoping review on the ethical issues
arising from the use of gerontechnology in the home care of older
people, an ethical dilemma involving the balance of paternalism
and the rights of older individuals emerged (Sundgren et al.,
2020). Family members placed greater importance on the
advantages of technology and viewed autonomy and privacy as
secondary concerns compared to the benefits of technology,
particularly in terms of the safety of older individuals (Landau
et al., 2010; Wild et al., 2008). This is consonant with previous research pointing to the fact that perceptions of risk differ
when it comes to older adults and their families or caregivers
(Rolison et al., 2018). What is more, relatives expressed the belief
that older individuals would likely refuse technology use and,
consequently, stressed that the utilization of technology could be
coerced if necessary (Landau et al., 2010, p. 414). Similarly, smart
fall detection systems that promptly alert caregivers without
giving older adults the chance to confirm or cancel an alert can
diminish their sense of autonomy and control over their safety.
All in all, older adults are concerned about their safety, but they
wouldn’t increase it at any cost (Ienca et al., 2021). Thus, it is not only that AI systems are in themselves paternalistic in relation to older individuals; their use is also oftentimes imposed on older
adults in a paternalistic manner.
Technological paternalism is often an unintended consequence
of designers’ adherence to specific normative frameworks or
scripts in their approach to solving problems. The creation of AI
technologies for older adults is oftentimes based on stereotypical
representations of old age, depicting older adults “as a homo-
geneous group that can easily be linked to discourses about
vulnerability and illness” (Peine and Neven, 2019, p. 58). These assumptions about old age, which unintentionally get embedded into AI systems, risk creating “a feedback loop that reinforces negative stereotypes” (Chu, Nyrup, et al., 2022). Negative stereotypes of
aging, related to frailty and vulnerability, “have the potential to
affect the holistic health (i.e., mental, physical, social, and
emotional well-being) of an older person and ultimately the
length and quality of their life” (Dionigi, 2015).
While technologies themselves lack intentions, they serve as
tools for various entities, such as designers, caregivers, companies,
and states, to impose constraints on the autonomy of older users.
Many AI applications for aged care, such as home monitoring or
fall detection systems, involve surveillance technologies that
collect data about users’ daily activities, often without their
awareness or ability to override these technological decisions
(Rubeis, 2020).
Is technological paternalism towards older adults justified?
It’s important to note that older adults constitute a diverse group.
Many older adults have the ability to make informed choices about
their lives, and imposing paternalistic AI on them could potentially
curtail their freedom (Voinea et al., 2022). Nonetheless, certain
segments of the older population, specifically those grappling with
severe medical conditions that incapacitate their decision-making
abilities, may need a form of paternalistic care facilitated by AI
technologies. In these scenarios, the justification for AI-mediated
paternalistic interventions can be compelling, rooted in the genuine
need to protect those who are incapable of making sound decisions
(Buchanan, 2008; Childress and Mount, 1983; Nys, 2008). The
justification for embedding AI into healthcare settings for older
persons arises from other practical considerations as well, such as
staffing shortages and the steadily increasing number of older
persons in need of care. In this context, AI-driven paternalism
might, at times, seem the most pragmatic response to address the
care deficit. However, it is crucial to recognize that while AI
paternalism might be beneficial for specific groups, such as those
with volitional disabilities, applying such systems universally to
older adults capable of informed decisions may be unjust. The
question arises: are the potential benefits for a subset of older adults
sufficient to justify the costs of a blanket imposition of paternalistic
technologies on the aging population?
To address this question, one must initially understand the
factors that justify paternalism. Soft paternalists advocate for
interference with a person’s freedom when they lack sufficient
competence and, thus, when their actions are non-voluntary
(Lyngby Pedersen, 2023). Hard paternalists, on the other hand,
argue that interventions impinging on a competent person’s
freedom are warranted if the good resulting from the intervention
outweighs the harm it causes. Thus, paternalist interventions are
justified by the good promoted.
Let’s begin with soft paternalism, which focuses on compe-
tence. Competence is typically defined in terms of an individual’s
decision-making capacities, specifically their ability “to receive
information, express wishes, and understand potential con-
sequences” of their actions (Pedersen, 2023, p. 43). The compe-
tence argument can be further understood either through the
‘best judge’ or the ‘personal development’ argument.
The ‘best judge’ argument was first articulated by Mill in On
Liberty, who stressed that paternalistic interventions can be
rejected on the basis that individuals are, in general, the most
competent and the best judges of what constitutes their own best
interests. Thus, when the public does interfere with a person’s
freedom for their supposed well-being, “the odds are that it
interferes wrongly, and in the wrong place” ([1859] 2012, p. 98).
Furthermore, objections to paternalistic interventions are often
based on the idea that they limit opportunities for learning and
personal development. Freedom of choice, according to Mill, has
a strong educative value, as people learn better through their
mistakes (Mill [1859] 2012, p. 74).
However, individuals may not always be the best judges of their
own interests, given the susceptibility of human judgment to errors
and biases. Take, for example, smoking: people who take up
smoking ignore the fact that it might result in harm, so one might
build the case that they are not the best judges of their best interest.
Similarly, it can be argued that older people may not be the best
judges of their best interest, specifically in cases in which they
would prefer to prioritize autonomy, even with the risk of potential
harm, which in the worst case may result in their death. Yet, the
fact that humans are not always the best judges of their own best
interest does not necessarily lead to the conclusion that others are
better suited to make decisions for them (Kleinig 1983, p. 163).
The case of older adults is paradigmatic here. Older adults possess a wealth of life experience that equips them to make informed decisions about their well-being, as opposed to younger adults who do not yet have a crystallized view on what is best for them. As
people’s experience increases with age, it might become more and
more difficult to justify paternalistic interventions on competence
grounds, as people already know what is best for them, what is
risky, what is worth it, and what is not. For example, some older
adults might reasonably prefer to keep their privacy and, hence, the
space of their freedom, even at the price of potential risks to their
safety. They might reason that there is more to life than merely
living isolated and cloistered and might prefer having experiences,
even if that might endanger or tire them. What is more, the process
of acquiring the capacity to decide what is best for oneself also
requires the freedom to learn from bad choices (which represents
an important objection to the imposition of paternalistic interven-
tions on younger adults on competence grounds). Even if older
adults have a wealth of life experiences, it’s reasonable to assume
that they can continue to discover new life experiences and
experiment with them. This also implies the freedom to make
mistakes or engage in activities that are risky. Prioritizing a life rich
in experiences, even if some of those experiences carry risks, can be,
for some, more valuable than leading an overly sheltered existence
devoid of experiences. This is because these experiences, even if
potentially harmful, can offer opportunities for individuals to dis-
cover more about their capacities and make informed decisions
regarding paternalistic care (i.e., whether they are in the position to
accept or reject it). But, more than anything, people have a right to
decide how they want to spend their lives if that decision does not
hurt others besides themselves.
In any case, the ability to make decisions is a spectrum, not a binary; individuals may have varying degrees of capacity,
and this complexity needs to be addressed on a case-by-case basis.
For any type of intervention that might reduce older adults’
freedom to choose for themselves, diversity in terms of cognitive
abilities and decision-making skills should be considered and
analyzed, as older adults are not a homogenous group (Mitnitski
et al., 2017; Nguyen et al., 2021; Gray et al., 2002). It is hard to
justify the position that other, oftentimes younger, persons are in
a better position to judge what might be in an older person’s best
interest, provided, obviously, that the person does not suffer from medical conditions that affect their decision-making capacity. In
short, old age is not by itself enough to discount older adults’
competence to make decisions about how they should spend their
lives; thus, soft paternalism is not justified.
For hard paternalism, the constraint on people’s freedom is
justifiable when the benefits of the intervention outweigh the
costs. The only requirement is for individuals to act imprudently,
such that they might endanger their lives or cause harm to
themselves or to others. Thus, hard paternalism is not contingent
on the competence or voluntariness of those involved but rather
on their imprudent behavior. Imprudent behavior is commonly
understood as behavior that results in harm either to oneself or to
others. Most often, though, paternalistic care is justified on the
basis of preventing older adults from harming themselves due to
various incapacities. Harm to others is seldom, if ever, an issue in
this context. The presumption here would be that the good promoted by hard paternalistic care, namely the extension of older adults’ lives, outweighs the harm done, and so justifies a
restriction of their freedom to choose how to live.
The justification for preventing people from harming themselves is connected to life expectancy and what Pedersen (2023, p. 49) calls “life-year opportunities”, meaning the years left in a person’s life to pursue their plans. Losing life-year opportunities seems more harmful the younger a person is, as more of the time to experiment and pursue life plans is lost. Consequently, this implies that
paternalistic interventions are warranted for younger individuals but
not for older adults since “the additional life expectancy of young
people is greater than that of older people” (Pedersen, 2023, p. 48). In
other words, paternalistic interventions are more justified in the case
of younger individuals, as these have a longer life expectancy, which is taken to be an indicator of the well-being that will be protected by the intervention concerned (Pedersen, 2023).
Pedersen reaches the conclusion that hard paternalistic inter-
ventions should take age into account. More precisely, “the
number of (good) life years at risk of being lost and the number of
years lived are central to assessing the potential harm involved in
a given imprudent activity” (2023, p. 42). The good promoted by
paternalistic interventions thus diminishes with people’s age.
Paternalistic care that reduces people’s freedom might make older
adults’ lives less meaningful and pleasurable. While the imposi-
tion of paternalistic technologies on older adults might actually
contribute to the promotion of the interests of some older adults,
especially those who lack the capacity to make their own life
decisions, it would negatively impact the far greater proportion of those who can make their own life decisions.
As a general rule, the imposition of paternalistic AI technologies to
promote the overall good of older adults is not justified. While some
older adults may struggle to foresee the consequences of their actions,
this does not apply to all of them. Some may willingly assume certain
risks, such as falling or not seeking immediate help, in order to
maintain autonomy. Therefore, AI paternalism cannot be imposed on
older adults under the premise that they are incompetent judges.
Implications
In this paper, we showed that the benefits of AI in aged care should not be taken for granted. We revealed how the design of technology, the
portrayal of older adults as potential users, and the collection and
processing of data can inadvertently reinforce ageist attitudes through
the creation and deployment of paternalistic AI systems for aged care.
The theoretical and practical implications of this research go
hand in hand. At a theoretical level, more effort should be invested
into the investigation of how technological systems, once created
and put to use, impact the lives of older adults. In other words, we
suggest that we should move beyond the AI hype to thoroughly and honestly examine how AI systems affect end users. A prac-
tical implication of our research is that technology developers and
designers have to pay more attention to the stereotypes and pre-
conceptions about old age that might get embedded into their
products. Participatory design is one important means of mitigating the risk of perpetuating ageist biases; it presupposes the inclusion of older adults in design processes, such that their needs and expectations are known and taken into consideration from the outset.
This would also contribute to avoiding catering to ‘imagined users’
and disregarding actual user contexts (Loos et al., 2021). More-
over, a case-by-case analysis is necessary when considering AI
interventions, one that pays attention to the specific needs and capabilities of each person concerned. In other words, age, by
itself, should never be the only criterion used for deciding whether
an AI intervention is justifiable. Instead, the specific health con-
ditions and decision-making abilities of each older adult should be
considered. Additionally, the value trade-offs that come with
technologies for aged care, such as safety versus autonomy, require
careful consideration and should not be taken for granted.
However, we recognize that our diagnosis is not universal.
There are instances where AI systems are developed inclusively
with older adults’input, and data curation is meticulous, aimed at
eliminating the risks posed by ageist biases that permeate data.
Moreover, some older adults may adapt AI systems to their needs
creatively. Yet there will also be situations where neither imposition nor autonomy is fully realized, as when older adults feel that technological systems have become too intrusive. In other words, we
acknowledge the diverse conditions and use contexts that arise
due to AI-user interactions. Thus, this paper does not aim to
make universalistic judgments, but only to draw attention to an often overlooked facet of current technological systems, namely paternalism stemming from age scripts and biased data, which has the potential to undermine older adults’ autonomy.
Data availability
No new data were created or analyzed in this study. Data sharing
is not applicable to this article.
Received: 24 January 2024; Accepted: 5 June 2024;
References
Ayalon L, Chasteen A, Diehl M, Levy BR et al (2020) Aging in times of the
COVID-19 pandemic: avoiding ageism and fostering intergenerational soli-
darity. J Gerontol B. https://doi.org/10.1093/geronb/gbaa051
Berridge C (2017) Active subjects of passive monitoring: responses to a passive
monitoring system in low-income independent living. Ageing Soc
37(3):537–560. https://doi.org/10.1017/S0144686X15001269
Berridge C, Grigorovich A (2022) Algorithmic harms and digital ageism in the use
of surveillance technologies in nursing homes. Front Sociol 7. https://doi.org/
10.3389/fsoc.2022.957246
Boström M, Kjellström S, Björklund A (2013) Older persons have ambivalent
feelings about the use of monitoring technologies. Technol Disabil
25(2):117–125. https://doi.org/10.3233/TAD-130376
Boyd D, Crawford K (2012) Critical questions for big data. Inf Commun Soc
15(5):662–679. https://doi.org/10.1080/1369118X.2012.678878
Buchanan DR (2008) Autonomy, paternalism, and justice: ethical priorities in
public health. Am J Public Health 98(1):15–21. https://doi.org/10.2105/AJPH.
2007.110361
Buckley M, O’Neill D (2015) Ageism in studies of rehabilitation in Parkinson’s
disease. J Am Geriatr Soc 63(7):1470–1471. https://doi.org/10.1111/jgs.
13550
Buolamwini J, Gebru T (2018) Gender shades: intersectional accuracy disparities in
commercial gender classification. In: Friedler SA, Wilson C (eds) Conference
on fairness account transparency. PMLR, pp. 77–91
Cary LA, Chasteen AL, Remedios J (2017) The ambivalent ageism scale: developing
and validating a scale to measure benevolent and hostile ageism. Gerontol-
ogist 57(2):e27–36. https://doi.org/10.1093/geront/gnw118
Chang E-S, Kannoth S, Levy S, Wang S-Y, Lee JE, Levy BR (2020) Global reach of
ageism on older persons’health: a systematic review. PLoS ONE
15(1):e0220857–e0220857. https://doi.org/10.1371/journal.pone.0220857
Chasteen AL, Horhota M, Crumley-Branyon JJ (2020) Overlooked and under-
estimated: experiences of ageism in young, middle-aged, and older adults. J
Gerontol B. https://doi.org/10.1093/geronb/gbaa043
Childress JF, Mount E (1983) Who should decide? Paternalism in health care.
Theol Today 40(3):352–357. https://doi.org/10.1177/004057368304000314
Chu C, Leslie K, Khan S, Nyrup R, Grenier A (2022) Ageism in artificial intelli-
gence: a review. Innov. Aging 6(Suppl_1):663–663
Chu C, Nyrup R, Donato-Woodger S et al (2022) Examining the technology-
mediated cycles of injustice that contribute to digital ageism: advancing the
conceptualization of digital ageism: evidence and implications. In: Proceed-
ings of the 15th international conference on PErvasive Technologies Related
to Assistive Environments, PETRA ‘22. Association for Computing
Machinery, New York, NY, USA, pp. 545–551
Cruz-Jentoft AJ, Carpena-Ruiz M, Montero-Errasquín B et al. (2013) Exclusion of
older adults from ongoing clinical trials about type 2 diabetes mellitus. J Am
Geriatr Soc 61(5):734–738. https://doi.org/10.1111/jgs.12215
Cuddy AJC, Fiske ST, Glick P (2007) The BIAS Map: behaviors from intergroup
affect and stereotypes. J Pers Soc Psychol 92(4):631–648. https://doi.org/10.
1037/0022-3514.92.4.631
Dalton CM, Taylor L, Thatcher J (2016) Critical data studies: a dialog on data and
space. Big Data Soc 3(1). https://doi.org/10.1177/2053951716648346
Demiris G, Oliver DP, Giger J et al. (2009) Older adults’ privacy considerations for
vision based recognition methods of eldercare applications. Technol Health
Care 17(1):41–48. https://doi.org/10.3233/THC-2009-0530
Diaz T, Strong KL, Cao B et al. (2021) A call for standardised age-disaggregated
health data. Lancet Healthy Longev 2(7):e436–e443. https://doi.org/10.1016/
S2666-7568(21)00115-X
Dionigi RA (2015) Stereotypes of aging: their effects on the health of older adults. J.
Geriatr 2015:e954027. https://doi.org/10.1155/2015/954027
Dworkin G (1972) Paternalism. Monist 56(1):64–84
Facchinetti G, Petrucci G, Albanesi B et al (2023) Can smart home technologies help
older adults manage their chronic condition? A systematic literature review. Int
J Environ Res Public Health 20(2). https://doi.org/10.3390/ijerph20021205
Fadhil A (2018) A conversational interface to improve medication adherence: towards AI support in patient’s treatment. arXiv abs/1803.09844. https://doi.org/10.48550/arXiv.1803.09844
Fernández-Ardèvol M, Grenier L (2022) Exploring data ageism: what good data
can(‘t) tell us about the digital practices of older people? New Media Soc.
https://doi.org/10.1177/14614448221127261
Galambos C, Rantz M, Craver A, Bongiorno M, Pelts M, Holik AJ, Jun JS (2019) Living with intelligent sensors: older adult and family member perceptions. Comput Inform Nurs 37(12):615. https://doi.org/10.1097/CIN.0000000000000555
Gaynor EJ, Geoghegan SE, O’Neill D (2014) Ageism in stroke rehabilitation stu-
dies. Age Ageing 43(3):429–431. https://doi.org/10.1093/ageing/afu026
Geneviève LD, Martani A, Shaw D et al (2020) Structural racism in precision
medicine: leaving no one behind. BMC Med Eth 21. https://doi.org/10.1186/
s12910-020-0457-8
Ghorayeb A, Comber R, Gooberman-Hill R (2021) Older adults’ perspectives of
smart home technology: are we developing the technology that older people
want? Int J Hum–Comput Stud 147:102571. https://doi.org/10.1016/j.ijhcs.
2020.102571
Gray LK, Smyth KA, Palmer RM, Zhu X, Callahan JM (2002) Heterogeneity in
older people: examining physiologic failure, age, and comorbidity. J Am
Geriatr Soc 50(12):1955–1961. https://doi.org/10.1046/j.1532-5415.2002.
50606.x
Gullette MM (2017) Ending ageism, or how not to shoot old people. Rutgers
University Press
Hofmann B (2003) Technological paternalism: on how medicine has reformed
ethics and how technology can refine moral theory. Sci Eng Eth 9(3):343–352.
https://doi.org/10.1007/s11948-003-0031-z
Ienca M, Schneble C, Kressig RW, Wangmo T (2021) Digital health interventions
for healthy ageing: a qualitative user evaluation and ethical assessment. BMC
Geriatr 21(1):412. https://doi.org/10.1186/s12877-021-02338-z
Ivan L, Cutler SJ (2021) Ageism and technology: the role of internalized stereo-
types. Univ Tor Q 90(2):127–139. https://doi.org/10.3138/utq.90.2.05
Iversen TN, Larsen L, Solem PE (2009) A conceptual analysis of ageism. Nord
Psychol 61(3):4–22. https://doi.org/10.1027/1901-2276.61.3.4
Jecker NS (2020) Ending midlife bias: new values for old age. Oxford University
Press
Kang HG, Mahoney DF, Hoenig H, Hirth VA, Bonato P, Hajjar I, Lipsitz LA
(2010) In situ monitoring of health in older adults: technologies and issues. J
Am Geriatr Soc 58(8):1579–86. https://doi.org/10.1111/j.1532-5415.2010.
02959.x
Kleinig J (1983) Paternalism. Manchester University Press
Kühler M (2022) Exploring the phenomenon and ethical issues of AI paternalism
in health apps. Bioethics 36(2):194–200. https://doi.org/10.1111/bioe.12886
Landau R, Auslander GK, Werner S et al. (2010) Families’and professional care-
givers’views of using advanced technology to track people with dementia.
Qual Health Res 20(3):409–19. https://doi.org/10.1177/1049732309359171
Lee C-H, Wang C, Fan X et al. (2023) Artificial intelligence-enabled digital
transformation in elderly healthcare field: scoping review. Adv Eng Inform
55:101874. https://doi.org/10.1016/j.aei.2023.101874
Lee PY, Alexander KP, Hammill BG, Pasquali SK, Peterson ED (2001) Repre-
sentation of elderly persons and women in published randomized trials of
acute coronary syndromes. JAMA 286(6):708–13. https://doi.org/10.1001/jama.286.6.708
Levy SR (2018) Toward reducing ageism: PEACE (Positive Education about Aging
and Contact Experiences) model. Gerontologist 58(2):226–32. https://doi.org/
10.1093/geront/gnw116
Loos E, Peine A, Fernandéz-Ardèvol M (2021) Older people as early adopters and
their unexpected and innovative use of new technologies: deviating from
technology companies’scripts. In: Gao Q, Zhou J (eds) Human aspects of IT
for the aged population. technology design and acceptance. Springer Inter-
national Publishing, Cham, pp. 156–167
Loveys K, Prina M, Axford C et al. (2022) Artificial intelligence for older people
receiving long-term care: a systematic review of acceptability and effective-
ness studies. Lancet Healthy Longev 3(4):e286–97. https://doi.org/10.1016/
S2666-7568(22)00034-4
Lyngby Pedersen VM (2023) In defence of age-differentiated paternalism. In:
Bognar G, Gosseries A (eds) Ageing without ageism?: conceptual puzzles and
policy proposals. Oxford University Press
Mannheim I, Wouters EJM, Köttl H et al. (2022) Ageism in the discourse and
practice of designing digital technology for older persons: a scoping review.
Gerontologist 63(7):1188–1200. https://doi.org/10.1093/geront/gnac144
Mannheim I, Schwartz E, Xi W et al (2019) Inclusion of older adults in the research
and design of digital technology. Int J Environ Res Public Health 16(19)
https://doi.org/10.3390/ijerph16193718
Manor S, Herscovici A (2021) Digital ageism: a new kind of discrimination. Hum
Behav Emerg Technol 3(5):1084–93
McGarvey C, Coughlan T, O’Neill D (2017) Ageism in studies on the management
of osteoporosis. J Am Geriatr Soc 65(7):1566–68. https://doi.org/10.1111/jgs.
14840
Mill JS (1859) On Liberty. English Edition: J.S. Mill’s On Liberty in Focus (2012),
Gray J, Smith GW (eds). Routledge, London
Millar J (2015) Technology as moral proxy: autonomy and paternalism by design.
IEEE Technol Soc Mag 34(2):47–55. https://doi.org/10.1109/MTS.2015.
2425612
Mitnitski A, Howlett SE, Rockwood K (2017) Heterogeneity of human aging and its
assessment. J Gerontol A 72(7):877–84. https://doi.org/10.1093/gerona/glw089
Miura C, Chen S, Saiki S, Nakamura M, Yasuda K (2022) Assisting personalized healthcare of elderly people: developing a rule-based virtual caregiver system using mobile chatbot. Sensors 22(10):3829. https://doi.org/10.3390/s22103829
Mubarak F, Suomi R (2022) Elderly forgotten? Digital exclusion in the information age and the rising grey digital divide. Inquiry 59:00469580221096272
Murthy VH, Krumholz HM, Gross CP (2004) Participation in cancer clinical trials: race-, sex-, and age-based disparities. JAMA 291(22):2720–26. https://doi.org/10.1001/jama.291.22.2720
Nelson TD (2016) The age of ageism. J Soc Issues 72(1):191–98. https://doi.org/10.1111/josi.12162
Neven L (2015) By any means? Questioning the link between gerontechnological
innovation and older people’s wish to live at home. Technol Forecast Soc
Change 93:32–43. https://doi.org/10.1016/j.techfore.2014.04.016
Neves BB, Petersen A, Vered M, Carter A, Omori M (2023) Artificial intelligence in
long-term care: technological promise, aging anxieties, and sociotechnical
ageism. J Appl Gerontol 42(6):1274–82
Nguyen QD, Moodie EM, Forget M-F, Desmarais P, Keezer MR, Wolfson C (2021) Health heterogeneity in older adults: exploration in the Canadian Longitudinal Study on Aging. J Am Geriatr Soc 69(3):678–87. https://doi.org/10.1111/jgs.16919
Noble S (2018) Algorithms of oppression: how search engines reinforce racism.
NYU Press
North MS, Fiske ST (2012) An inconvenienced youth? Ageism and its potential
intergenerational roots. Psychol Bull 138(5):982–97. https://doi.org/10.1037/
a0027843
Nys TRV (2008) Paternalism in public health care. Public Health Eth 1(1):64–72.
https://doi.org/10.1093/phe/phn002
O’Hare AM, Kaufman JS, Covinsky KE et al. (2009) Current guidelines for using
angiotensin-converting enzyme inhibitors and angiotensin II-receptor
antagonists in chronic kidney disease: is the evidence base relevant to older
adults? Ann Intern Med 150(10):717–24. https://doi.org/10.7326/0003-4819-
150-10-200905190-00010
Pedersen VML (2023) For the greater individual and social good: justifying age-
differentiated paternalism. Utilitas. https://doi.org/10.1017/S0953820823000249
Peine A, Neven L (2019) From intervention to co-constitution: new directions in
theorizing about aging and technology. Gerontologist 59(1):15–21. https://
doi.org/10.1093/geront/gny050
Peine A, Neven L (2021) The co-constitution of ageing and technology—a model
and agenda. Ageing Soc 41(12):2845–66. https://doi.org/10.1017/
S0144686X20000641
Pilotto A, Boi R, Petermans J (2018) Technology in geriatrics. Age Ageing
47(6):771–74. https://doi.org/10.1093/ageing/afy026
Rochi M (2023) Technology paternalism and smart products: review, synthesis,
and research agenda. Technol Forecast Soc Change 192
Rolison JJ, Hanoch Y, Freund AM (2018) Perception of risk for older adults: differences in evaluations for self versus others and across risk domains. Gerontology 65(5):547–59. https://doi.org/10.1159/000494352
Rosales A, Fernández-Ardèvol M (2019) Structural ageism in big data approaches.
Nordicom Rev 40(Suppl 1):51–64. https://doi.org/10.2478/nor-2019-0013
Rubeis G (2020) The disruptive power of artificial intelligence. Ethical aspects of
gerontechnology in elderly care. Arch Gerontol Geriatr 91
Ruppert E, Isin E, Bigo D (2017) Data politics. Big Data Soc 4(2). https://doi.org/10.
1177/2053951717717749
Sourbati M, Behrendt F (2021) Smart mobility, age and data justice. New Media Soc 23(6):1398–1414. https://doi.org/10.1177/1461444820902682
Spiekermann S, Pallas F (2006) Technology paternalism–wider implications of
ubiquitous computing. Poiesis Prax 4(1):6–18. https://doi.org/10.1007/
s10202-005-0010-3
Straw I, Wu H (2022) Investigating for bias in healthcare algorithms: a sex-stratified analysis of supervised machine learning models in liver disease prediction. BMJ Healthc Inform 29(1):e100457. https://doi.org/10.1136/bmjhci-2021-100457
Stypinska J (2023) AI ageism: a critical roadmap for studying age discrimination
and exclusion in digitalized societies. AI Soc 38(2):665–77. https://doi.org/10.
1007/s00146-022-01553-5
Sublett JF, Vale MT, Bisconti TL (2022) Expanding benevolent ageism: replicating
attitudes of overaccommodation to older men. Exp Aging Res 48(3):220–33.
https://doi.org/10.1080/0361073X.2021.1968666
Sundgren S, Stolt M, Suhonen R (2020) Ethical issues related to the use of gerontechnology in older people care: a scoping review. Nurs Eth 27(1):88–103. https://doi.org/10.1177/0969733019845132
Swift HJ, Chasteen AL (2021) Ageism in the time of COVID-19. Group Process
Intergroup Relat 24(2):246–52. https://doi.org/10.1177/1368430220983452
Swift HJ, Abrams D, Lamont RA (2021) Ageism around the world. In: Encyclopedia of gerontology and population aging. Springer, pp. 165–75
United Nations (2020) Independent expert on the enjoyment of all human rights
by older persons. https://www.ohchr.org/en/special-procedures/ie-older-
persons. Accessed 21 Nov 2023
United Nations Department of Economic and Social Affairs (2023) World Social Report 2023: leaving no one behind in an ageing world. United Nations. https://doi.org/10.18356/9789210019682
Vervaecke D, Meisner BA (2021) Caremongering and assumptions of need: the
spread of compassionate ageism during COVID-19. Gerontologist
61(2):159–65. https://doi.org/10.1093/geront/gnaa131
Voinea C, Wangmo T, Vică C (2022) Respecting older adults: lessons from the COVID-19 pandemic. J Bioeth Inq 19:213–223. https://doi.org/10.1007/s11673-021-10164-6
Wild K, Boise L, Lundell J, Foucek A (2008) Unobtrusive in-home monitoring of
cognitive and physical health: reactions and perceptions of older adults. J
Appl Gerontol 27(2):181–200. https://doi.org/10.1177/0733464807311435
World Health Organization (2022) Ageism in artificial intelligence for health:
WHO Policy Brief. World Health Organization, Geneva
Zhang H, Conitzer V (2019) A PAC framework for aggregating agents' judgments. Proc AAAI Conf Artif Intell 33(1):2237–44. https://doi.org/10.1609/aaai.v33i01.33012237
Acknowledgements
Cristina Voinea's work was supported by the European Commission [grant number 101102749] and UK Research and Innovation (UKRI) [grant number EP/Y027973/1]. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission. Tenzin Wangmo's work is supported by the project "Smart Homes, Older Adults, And Caregivers: Facilitating Social Acceptance and Negotiating Responsibilities [RESOURCE]", financed by the Swiss National Science Foundation (SNF NRP-77 Digital Transformation, Grant Number 407740_187464/1). Constantin Vică's work is funded by the European Union (ERC, avataResponsibility, 101117761). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
Author contributions
Cristina Voinea, Tenzin Wangmo, and Constantin Vică contributed equally to this work.
Competing interests
The authors declare no competing interests.
Ethical approval
Ethical approval was not required as the study did not involve human participants.
Informed consent
Informed consent was not required as the study did not involve human participants.
Additional information
Correspondence and requests for materials should be addressed to Cristina Voinea.
Reprints and permission information is available at http://www.nature.com/reprints
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons
Attribution 4.0 International License, which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give
appropriate credit to the original author(s) and the source, provide a link to the Creative
Commons licence, and indicate if changes were made. The images or other third party
material in this article are included in the article’s Creative Commons licence, unless
indicated otherwise in a credit line to the material. If material is not included in the
article’s Creative Commons licence and your intended use is not permitted by statutory
regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder. To view a copy of this licence, visit http://creativecommons.org/
licenses/by/4.0/.
© The Author(s) 2024