Erler, Alexandre and Müller, Vincent C. (forthcoming 2021), ‘The ethics of biomedical
military research: Therapy, prevention, enhancement, and risk’, in Daniel Messelken
and David Winkler (eds.), Health care in contexts of risk, uncertainty, and hybridity
(Berlin: Springer), 1-20.
[August 2021, http://www.sophia.de]
The Ethics of Biomedical Military Research: Therapy, Prevention,
Enhancement, and Risk
By Alexandre Erler and Vincent C. Müller
ABSTRACT
What proper role should considerations of risk, particularly to research subjects, play when it comes to
conducting research on human enhancement in the military context? We introduce the currently visible
military enhancement techniques (1) and the standard discussion of risk for these (2), in particular what
we refer to as the ‘Assumption’, which states that the demands for risk avoidance are higher for
enhancement than for therapy. We challenge the Assumption through the introduction of three
categories of enhancements (3): therapeutic, preventive, and pure enhancements. This demands a
revision of the Assumption (4), alongside which we propose some further general principles bearing on
how to balance risks and benefits in the context of military enhancement research. We identify a
particular type of therapeutic enhancements as providing a more responsible path to human trials of the
relevant interventions than pure enhancement applications. Finally, we discuss some possible objections
to our line of thought (5). While acknowledging their potential insights, we ultimately find them to be
unpersuasive, at least provided that our proposal is understood as fully non-coercive towards the
candidates for such therapeutic enhancement trials.
1. Introduction: Human Enhancement in the Military
Militaries around the world share an interest in making soldiers more effective at performing
their tasks. One path towards that goal is to use modern technology and medical science to
improve soldiers’ physical and mental capacities, a practice that has been termed “human
enhancement”. Military enhancements, as we will understand them in this chapter, can be
defined as interventions, typically involving some form of biomedical technology, that either:
1. Improve aspects of a soldier’s functioning beyond what is considered “normal”; or
2. Give a soldier new capabilities that “normal”, non-enhanced humans do not possess.
Enhancements are often contrasted with therapeutic interventions: this is known as the therapy-
enhancement distinction. While the usefulness of that distinction has been challenged by some
authors (e.g. Savulescu et al., 2011), we adopt it here as a conceptual tool we consider helpful,
although we will challenge the common assumption that it posits a strict dichotomy between
the two different types of intervention.[1]

[1] We may note that those who do not define enhancements in contrast to therapy do not posit such a dichotomy. If, for instance, enhancements are understood as biomedical interventions that increase a person’s chance of living a good life in a given set of circumstances, most therapies will count as a subset of enhancements.
We understand therapeutic interventions to also improve certain aspects of a person’s
functioning, yet, unlike pure enhancements, in a way that either restores or maintains health
or normal functioning. To limit the scope of our discussion, we will also assume that both
therapies and enhancements are integrated into a person’s body and bodily functioning to a
degree that devices regarded as mere “tools” are not, even when their effects are comparable
to the former kinds of intervention (on this, we follow Lin, Mehlman and Abney, 2013). For
example, a powered exoskeleton that increases a soldier’s ability to handle heavy loads does
not seem to truly become part of a soldier’s body, but rather constitutes an addition to it that
amplifies the impact of the actions it takes; it is thus not an “enhancement” according to the
definition we use here. A prosthetic limb, by contrast, is required for the integrity of an
amputee’s body, and can therefore meet the integration requirement, even though it remains a
non-invasive intervention. And while brain stimulation devices are typically not so integrated
(except when they are implanted into the skull), the proximate cause of their effects on mental
function (alterations in the properties and behaviors of neurons) certainly is.[2]

[2] Even though we take this point about integration to be practically useful for the purpose of our discussion, we are not strongly committed to it, and it is not crucial that our readers’ intuitions should be fully in line with it.
In our discussion, we will mostly be referring to the US military for illustrative purposes, given
the wealth of evidence it provides about both enhancement use and cutting-edge research
(conducted mainly through the Defense Advanced Research Projects Agency, or DARPA).
Among military enhancements in the first of the two categories distinguished above, some are
meant to improve a soldier’s physical functioning: capacities like endurance or strength.
Steroids like testosterone have been shown to increase muscle strength in healthy users, and
there is evidence that their covert use among US soldiers is relatively widespread, although
their safety profile is controversial, and their use is currently illegal in the US military without
a prescription (Peltier and Pettijohn, 2018). On a more futuristic note, some have speculated
that regenerative medicine might also have the potential to enhance soldiers’ physical
performance, for instance via the creation of artificial organs using technologies like 3D
bioprinting, the original purpose of which would be therapeutic, yet which might eventually
surpass “natural” human organs after further refinements (Campobasso, 2015). Finally,
speculative applications of recent advances in genome editing technology, and particularly
CRISPR/Cas9, to enhance physical capacities on the battlefield have also been described: they
include facilitating muscle gain, and improving the ability to see in low light conditions
(Greene and Master, 2018).
Other military enhancements are focused on improving mental function: capacities like
wakefulness, attention, memory, or processing speed. The use of wakefulness-promoting
substances (initially developed for therapeutic purposes) already has a long history in the US
military (and others), with drugs like amphetamines and, more recently, modafinil (Mehlman,
2015). For a number of years already, DARPA has been running various programs aimed at
discovering new ways of enhancing soldiers’ mental capacities. A recent example is the Next
Generation Nonsurgical Neurotechnology (N3) program, which seeks to develop either non-
invasive or “minutely invasive” forms of brain-computer interface (BCI) that would endow
healthy soldiers with new abilities, such as controlling active cyber defense systems, and
swarms of unmanned aerial vehicles, using only their thoughts (DARPA, 2019).
DARPA-funded research also includes more invasive interventions into the brain, such as
neural prostheses and implants: one example is the Restoring Active Memory (RAM) program,
which seeks to develop tools to restore normal memory in military personnel suffering from
brain injury or illness (DARPA, 2018). Nonetheless, as illustrated by the N3 program, the
agency is seeking to limit the scope of application of such invasive devices to purely therapeutic
ones, and ultimately to make reliance on invasive interventions altogether redundant, a
perfectly sensible policy of risk minimization. Still, it remains an open question at this stage
whether non-invasive interventions will be able to match invasive ones (which likely allow for
higher resolution readings) in terms of the outcomes they make possible.
A final relevant distinction among military enhancements is between reversible and irreversible
(or less easily reversed) ones, a distinction that largely overlaps with that between invasive
and non-invasive interventions. The enhancement effects of substances like caffeine or
modafinil, or of transcranial direct current stimulation (tDCS), can all be reversed quite easily,
within a reasonably short time frame, by ceasing to use the intervention. (Assuming they have
no long-term effects.) By contrast, removing an artificial “super organ”, or a neural implant, is
a much more delicate procedure, making such invasive interventions less easily reversible
(although not completely irreversible, unlike a more extreme case such as neurosurgery). We
may further note that the reversible/irreversible contrast applies not only to the enhancement
effects of the interventions we are discussing, but also to any unintended side effects they might
have. It seems plausible to assume that there will be some correlation between the extent to
which a particular intervention is reversible, and the extent to which its side effects are.
Nonetheless, this correlation need not be perfect. It is thus possible for a reversible intervention,
such as a certain psychoactive drug, to have irreversible unintended side effects, for instance if
it were to induce a stroke in the user.
2. The issue of risk in enhancement research
One key consideration in the context of military enhancement research relates to the risks that
such research presents for participants in the relevant clinical trials, and how they compare to
the benefits that they might gain from the research, what is known as the intervention’s risk-
benefit ratio (RBR) for the participants concerned. The basic idea is that the intervention being
tested must offer benefits of sufficiently high magnitude to justify the risks it will impose on
the research subjects. (To clarify, we understand “military enhancement research” to be aimed
at developing interventions that will help members of the armed forces in the completion of
their duties. Yet such research can in principle involve civilians as research subjects, especially
in its initial stages, as illustrated by some of the studies funded by DARPA.)
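To make the notion concrete, the RBR can be thought of as a ratio of expected magnitudes. The following formalization is our own illustrative schema, not a formula drawn from the research ethics literature; the symbols are our notation:

\[ \mathrm{RBR} \;=\; \frac{\mathbb{E}[\text{harm}]}{\mathbb{E}[\text{benefit}]} \;=\; \frac{\sum_i p_i h_i}{\sum_j q_j b_j} \]

Here p_i and h_i stand for the probability and severity of each potential harm to the subject, and q_j and b_j for the probability and magnitude of each potential benefit; a lower value indicates a more favorable ratio. The comparisons that follow can be read against this schema.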
When it comes to enhancement research, a relatively common belief, which we shall simply
refer to as the Assumption, states the following:
The Assumption: Because those who are ill or disabled start from a lower baseline of health
and have less to lose than healthy individuals seeking to become “better than well”,
therapeutic interventions will typically have a more favorable RBR than enhancements. In
light of this, responsible practices will entail a lower tolerance for potential bad outcomes,
or alternatively, the potential for greater benefits to research subjects, in the case of
enhancement research (see e.g. Agar, 2004; McGee and Maguire, 2007).
Maxwell Mehlman and Jessica Berg have challenged the Assumption, arguing that “some
enhancement benefits may be perceived as more valuable than medical benefits”. As an
example, they contrast an intervention that substantially boosted normal cognitive capacity
with “a substance to treat a minor skin irritation” (Mehlman and Berg, 2008, p. 550). Their
point is correct, and does call into question the first part of the Assumption: at the very least, it
shows that its first claim cannot simply be derived from the very nature of therapies and
enhancements, contrary to what its proponents are assuming. The fact that the benefits
conferred by enhancements can in principle surpass those of some therapies shows that, even
if research on the former category of interventions typically carries greater risks for subjects
than therapeutic research, we still cannot infer that its RBR must typically
be worse, since that RBR is a function of the magnitude of both risks and benefits involved.
That said, Mehlman and Berg’s objection does not seem to contradict the second part of the
Assumption, since it seems compatible with the claim that enhancement research must involve
less serious potential bad outcomes, assuming comparable benefits, than therapeutic research,
or greater potential benefits for comparably bad potential outcomes.
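Mehlman and Berg’s point can be made vivid with some invented numbers (ours, purely for exposition). Suppose the skin-irritation therapy carries an expected harm of 1 unit against an expected benefit of 2 units, while the cognitive enhancement carries a larger expected harm of 4 units against an expected benefit of 20 units:

\[ \mathrm{RBR}_{\text{therapy}} = \frac{1}{2} = 0.5, \qquad \mathrm{RBR}_{\text{enhancement}} = \frac{4}{20} = 0.2 \]

Although the enhancement is riskier in absolute terms, its ratio is the more favorable of the two, precisely because the RBR depends on the magnitude of benefits as well as of risks.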
To take Mehlman and Berg’s critique into account, we might thus try to reformulate the
Assumption in the following manner:
The Revised Assumption, version 1: Because [...] “better than well”, therapeutic
interventions will typically have a more favorable RBR than enhancements when they offer
benefits of comparable magnitude. In light of this, etc.
However, this first revised version of the Assumption can also be challenged. First, even if one
were to accept its validity in most ordinary contexts, one might nevertheless question whether
it also holds in the military one. Indeed, soldiers involved in combat missions face great risks,
which enhancements might conceivably help mitigate (as suggested by Mehlman and Li,
2014). Furthermore, it seems possible that, at least sometimes, enhancements will make it
possible to reduce such risks to a greater degree than would be the case if a soldier were to use a similar
intervention for therapeutic purposes. To take a futuristic example, an artificial lung that simply
restored an injured soldier back to “normal” functioning might not reduce combat-related risk
to the same degree as a “super” lung that enhanced endurance in a healthy subject, making him
or her less likely to be captured or shot. The RBR of the enhancement intervention thus need
not be worse than that of the therapeutic transplant, even assuming they would yield benefits
of comparable magnitude, because the greater potential loss faced by a healthy research subject
might be compensated for by the greater risk-reducing impact of the enhancement. In the
context of armed conflict, one might suppose that this may be true of a number of other
enhancements. Of course, this argument will only hold water if the enhancement in question is
tested within that particular context, and will no longer apply if the research is instead
conducted in a much less dangerous environment (say, military exercises).
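Using the same illustrative schema as before (all figures invented for exposition), suppose the implant carries a 1% chance of serious harm in either case; that the healthy subject stands to lose 10 units of good functioning if it fails, while the already injured subject stands to lose only 6; and that the “super” lung reduces combat-related risk twice as much as the merely restorative one (a benefit of 6 units rather than 3):

\[ \mathrm{RBR}_{\text{enh}} = \frac{0.01 \times 10}{6} \approx 0.017, \qquad \mathrm{RBR}_{\text{ther}} = \frac{0.01 \times 6}{3} = 0.02 \]

The healthy recipient’s greater potential loss is more than offset by the greater risk-reducing benefit, yielding the (slightly) more favorable ratio for the enhancement.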
A second, stronger challenge to the first revised version of the Assumption, one that extends
beyond the military context, contends that it is built on a false dichotomy between therapy and
enhancement. The next section spells out this line of argument further, by distinguishing
between three different categories of enhancements.
3. A tripartite distinction between enhancements
The therapy-enhancement distinction is often understood as contrasting two fundamentally
different, and mutually exclusive categories of interventions. Even though it is accepted that
there might not be a sharp dividing line between the two, so that some interventions might not
confidently be fitted into one category rather than the other, the assumption tends to be that an
intervention qualifies either as a therapy or as an enhancement (or neither), but never as both. However,
as some authors have noticed, this assumption seems questionable even if the distinction is
cashed out in the (common) way we have outlined above. A more accurate picture involves
recognizing that while some enhancements may indeed fit the standard paradigm by being
completely divorced from therapy, others do not. Enhancements of the latter kind also have
therapeutic or preventive effects. We therefore propose to distinguish between the following
three categories of enhancements:
a. “Therapeutic”: Therapeutic enhancements either bring someone’s initially subnormal
functioning beyond the threshold of “normality” (or health), or restore normality by
conferring new abilities (e.g. Wolbring et al., 2013). Examples of the former kind
include a short-sighted person who undergoes laser eye surgery to correct that
impairment, and ends up with better than 20/20 vision (a description that already applies
to actual people); or a pair of prosthetic legs that allowed an amputee to run faster than
they could with their original, “normal” legs, a phenomenon that is, if not a current
reality (as was once assumed about former sprinter Oscar Pistorius), at least a plausible
near-term development. A contemporary example of the latter kind is provided by brain
implants that allow paralyzed patients to control a computer using only their thoughts
(e.g. Nuyujukian et al., 2018). Such an intervention can be described as restoring one
type of normal ability (control of a computer) by conferring a new, “superhuman” one
(direct control of the device via thought).
Future technological developments can be expected to yield new forms of therapeutic
enhancement that could be relevant to the military. We have mentioned the example of
artificial organs: once they get sufficiently perfected to be suitable for transplantation,
they will hopefully match “natural” donated organs in terms of their benefits for
patients, before ultimately surpassing them after further refinements. The ultimate
achievement in this endeavor would be “super” organs conferring a higher level of
functioning than a person’s natural organs could ever have attained, even in the absence
of pathology. Implanting such an organ into a patient to remedy their organ failure will
result in a therapeutic enhancement. An analogous state of affairs might also come to
be with regard to brain functioning: for example, a patient with memory deficits as a
result of damage to the hippocampus could conceivably, if fitted with a cutting-edge
memory prosthesis, achieve a better level of recall than she might ever have enjoyed
with her “normal” hippocampus.
b. “Preventive”: Preventive enhancements forestall the advent of disease or disability by
improving upon already normal functioning (e.g. Brock, 1998; Erler, 2017). A
paradigmatic example of this category of enhancements would be vaccines, which
enhance normal immunity to prevent disease, especially those that are not yet routinely
used in the population, and therefore cannot plausibly be said to have redefined what
counts as “normal functioning”. The military context provides good illustrations of such
interventions, such as the anthrax vaccine administered by the US military to its
personnel as part of the Anthrax Vaccine Immunization Program from 1998 to 2008.
Future developments in biomedical science might open up new ways of enhancing
soldiers to protect them from normal vulnerability to injury and illness. Greene and
Master (2018) thus anticipate that genome editing via CRISPR/Cas9 might be used to
confer immunity to new biological agents.
We have previously mentioned the risk mitigation potential of enhancements for
soldiers involved in dangerous combat missions. This raises the question whether all
enhancements that somehow reduce a soldier’s risk of dying or getting injured should
be considered preventive enhancements. Although this is to a large degree a linguistic
point, we suggest that a negative answer is more plausible: on our understanding of
such enhancements, their protective effects must be direct, not indirect. The protection
against disease afforded by a vaccine does meet that requirement. By contrast, the
reduction in risk that might result from enhanced cognitive capacities, which might for
instance improve strategic planning, does not. In such a scenario, the proximate cause
of the non-occurrence of death or injury would be, for example, a certain kind of evasive
action, and not the superior cognitive capacities that had made such action possible. We
do agree, however, that in individual instances where we can reasonably predict that a
given intervention will have indirect preventive effects, these should be included in
the calculation of that intervention’s RBR for the subject concerned.
c. “Pure”: Pure enhancements, which might fit the paradigm that many have in mind
when they hear the term “enhancement”, improve upon already normal functioning to
a level that is either on the upper end of the natural distribution, or goes beyond it
entirely, while neither remedying nor preventing any pathological condition. A person with
20/20 vision who underwent laser eye surgery to improve it further would provide an
illustration of such an enhancement, as would someone who, thanks to a drug or brain
implant, managed to boost their normal memory. As we have seen, such enhancements
could conceivably, and perhaps often, reduce the risk of harm faced by the soldier,
albeit indirectly.[3]

[3] On the other hand, it is also conceivable that, depending on the circumstances, having received a therapeutic or a pure enhancement that increased one’s effectiveness in combat might actually end up increasing the soldier’s risk of harm, at least in one respect. Suppose for example that the enemy were able to identify soldiers who had received such enhancements: they might then view them as presenting a special threat, to be dealt with as a matter of priority. Since it is not clear that this is a particularly likely possibility, however, we leave it aside for the sake of our discussion.

We will now see that this tripartite distinction has implications for the validity of the
Assumption, and the ethics of military enhancement research.

4. Re-thinking risks and benefits in the context of enhancement research

The tripartite distinction we introduced in the previous section highlights a key limitation of
the Assumption, even in its first revised version: namely, it illegitimately presupposes that any
potential beneficiary of an enhancement will be “a healthy individual seeking to become better
than well”, that is, that all enhancements are pure enhancements. As we have just seen, this is
not the case. We therefore propose the following, second reformulation of the Assumption, so
as to avoid positing a false dichotomy between therapies and enhancements, while also
incorporating the above remarks about the potential risk-reducing effects of military
enhancements in combat situations:
The Revised Assumption, version 2: Because [...] “better than well”, interventions with
therapeutic or preventive benefits will typically have a more favorable RBR than pure
enhancements, when they offer benefits of comparable magnitude. In light of this,
responsible practices will entail a lower tolerance for potential bad outcomes, or
alternatively, the potential for greater benefits to research subjects, in the case of pure
enhancement research (at least in cases where such research is conducted in reasonably safe
environments).
The counterpart of this is that pure therapies will also typically have a better RBR than
therapeutic (but not necessarily preventive) enhancements, when they offer benefits of
comparable magnitude. That is the second grain of truth in the original Assumption. Indeed, if
two therapies lead to comparable gains in functioning for research subjects, and only one of
them is a therapeutic enhancement, it follows that the recipient of the latter must have started
at a higher level of health or functioning than the recipient of the pure therapy, and therefore
had more to lose. That said, for the reasons outlined by Mehlman and Berg, it does not follow
that pure therapies will, as a general rule, have a more favorable RBR than any of the three
categories of enhancement we have distinguished. Perhaps that is in fact the case with regard
to one or more of these categories; nonetheless, if it is so, it will still not follow from the very
nature of enhancements as contrasted with pure therapies, and would need to be established by
engaging in a comprehensive empirical survey of the relevant interventions (which would
allow us to compare the RBRs of existing interventions, but of course not of all possible ones).
For similar reasons, one cannot confidently claim that therapeutic enhancements will typically
show a better RBR than pure enhancements. While the latter will typically be riskier than the
former, they might also, at least sometimes, offer greater benefits. However, it does seem that
certain kinds of therapeutic enhancements will indeed show an RBR superior to that of pure
enhancements. And those of particular relevance in the context of military enhancement
research will be those displaying a superior RBR while producing outcomes comparable to
those of pure enhancements. For instance, if cutting-edge prosthetic limbs ever surpass normal
human limbs in their functionality, the beneficiaries of such devices who became amputees as
a result of an injury or disease will achieve the same level of performance as hypothetical
healthy subjects looking for an extra physical edge via elective amputation, but they will enjoy
a better RBR, since they started from a disabled state. Similar remarks will apply to the
tetraplegic recipient of an invasive BCI, vs. a healthy recipient. Both might gain the same kind
of new abilities that organizations like DARPA are interested in developing, yet the former
might enjoy a better RBR.
The common denominator of the therapeutic enhancements in question seems to be, first, that
they are all invasive in nature. The preceding remarks might in principle apply to non-invasive
interventions, too, such as non-invasive BCIs. However, this will only be so if the modus
operandi of the device (say, electrical fields) does present some degree of risk to research
subjects. If it involves little to no risk, then uses of such devices for therapeutic enhancement
will no longer enjoy a more favorable RBR. As we mentioned in section 1, non-invasive
interventions will in any case tend to be less risky (partly because more easily reversible) than
invasive ones, so that we will have a strong reason to prefer the former when they offer similar
benefits to the latter. However, in at least some cases, we can expect invasive interventions to
have an edge in terms of their effectiveness.
A second commonality is that when such interventions are applied to a subject in a pathological
condition, that condition does not hamper in any way the enhancing effect of the intervention,
compared to what would happen if it were applied to a healthy subject. One reason why it
might be so is that the intervention involves the full replacement of a certain body part with a
new, “enhancing” one, so that it ultimately does not make a difference whether the original
part was dysfunctional (or missing) or not. Another possible reason is that the intervention is
endowing the recipient with entirely new abilities, such as control of drones and other military
vehicles by thought alone. While implanting such a device into the brain of a paralyzed subject
might also help compensate for some of the loss in her motor function, she would still enjoy
these new abilities to the same extent as a fully healthy recipient would.
When it comes to (usually invasive) interventions that fit the description just given, we can
therefore conclude that, when we have the choice between testing a new intervention of this
kind as a therapeutic or pure enhancement, we should, from the perspective of securing the
best possible RBR for research subjects, choose the former. This goes one step beyond the
well-established idea that, whenever possible, we should test invasive interventions like neural
implants on patients with pathological conditions first, for purely therapeutic purposes, before
considering, assuming it is ethical to do so, enhancement uses involving healthy subjects
(which could benefit from the knowledge, e.g. about the intervention’s safety profile, acquired
from the therapeutic research). Of course, the goal of securing the best RBR we can achieve is
only one of several relevant considerations in research ethics (and it thus need not automatically
translate into an all-things-considered judgment). Our discussion thus leaves aside a number of
other issues, such as what constitutes truly informed consent from subjects in this context, or
broader questions about the ethics of human enhancement, such as the possible value of the
“natural”. Nonetheless, the said goal arguably remains a very important consideration here.
Ideally, such interventions would help soldiers return to active duty after a period of physical
training, giving them an extra edge in the performance of their tasks. In cases where a soldier
cannot return to active duty, it is still conceivable that this person might be given the
opportunity to enroll in a trial for a therapeutic enhancement. However, assuming the
intervention in question were not primarily designed for use in military operations, it would no
longer count as military enhancement research, even though some might argue that ill or
disabled veterans should get easier access to such trials as a reward for serving their country.
We return to this potentially contentious issue in subsection 5.3.
Having outlined our proposed revisions to the Assumption, as well as our core line of argument
regarding the relevance of certain therapeutic enhancements for the sake of optimizing the RBR
of military enhancement research, we now turn to some potential objections to it.
5. Possible objections
5.1. Therapeutic enhancements need not all have a more advantageous RBR
To begin with, it is clear that the tendency to show a better RBR when used as therapeutic
rather than pure enhancements will not apply to all potential military enhancements. For
example, a number of interventions will, if equally strong in intensity, allow for a lower level
of performance if used on an ill or disabled subject, rather than on a healthy one. In such cases,
using the intervention as a therapeutic enhancement will require resorting to greater intensity
to reach results comparable to its use as a pure enhancement. A person with excessive
sleepiness who needed a daily dose of 200 mg of modafinil to reach a certain enhancement
level (say, in terms of wakefulness or executive function) might thus not enjoy a better RBR
than a person with no such condition who only needed 100 mg daily to reach the same
enhancement level.
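The underlying arithmetic is straightforward. If we assume, purely for illustration, that side-effect risk scales linearly with dose, so that a dose d carries risk kd for some constant k, then for the same enhancement benefit b:

\[ \mathrm{RBR}_{\text{ther}} = \frac{k \times 200\,\text{mg}}{b} \;>\; \frac{k \times 100\,\text{mg}}{b} = \mathrm{RBR}_{\text{pure}} \]

On this (admittedly simplified) dose-response assumption, the therapeutic user’s ratio comes out twice as unfavorable.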
Somewhat similarly, a subject implanted with a neuroprosthesis for memory deficits resulting
from brain injury, who needed strong electrical stimulation to reach a certain level of enhanced
memory, may or may not enjoy a better RBR than a subject with normal memory who only
needed less intense stimulation from the same prosthesis to reach the same ultimate level of
recall. On the one hand, the risks associated with surgery would likely be the same in both
cases, which would boost the therapeutic enhancement’s RBR compared to that of the pure
enhancement. Yet on the other hand, this advantage might be cancelled out if the stronger
stimulation required for the therapeutic enhancement presented greater risks than the weaker
stimulation associated with the pure enhancement. If so, the commitment to optimize the
balance between risks and benefits for research subjects will not support preferring therapeutic
enhancement applications, rather than pure enhancement ones, when conducting human trials
of such interventions.
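One schematic way to display this trade-off (again our own notation, not a model from the literature): let s be the fixed surgical risk, ki the risk of stimulation at intensity i, and b_ther and b_enh the therapeutic and enhancement benefits, with i_2 > i_1 the intensities needed by the impaired and the healthy subject respectively:

\[ \mathrm{RBR}_{\text{ther}} = \frac{s + k i_2}{b_{\text{ther}} + b_{\text{enh}}}, \qquad \mathrm{RBR}_{\text{pure}} = \frac{s + k i_1}{b_{\text{enh}}} \]

The larger denominator pulls the therapeutic ratio down (the “boost” from the shared surgical risk), while the larger stimulation term pulls it up; which effect dominates cannot be settled in the abstract, as just noted.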
What is more, some of the conditions that therapeutic enhancements might in principle target
are currently disqualifying for recruitment in the military. For example, a diagnosis of
Attention Deficit Hyperactivity Disorder that required the use of prescription medication in the
previous 24 months, and a history of narcolepsy are treated as disqualifying conditions by the
US Army (Office of the Under Secretary of Defense for Personnel and Readiness, 2018).
Similarly, a candidate for genome editing who suffered from muscular dystrophy would also
be ineligible for military enrollment. Even if it were in principle possible for this person to end
up with an enhanced ability for muscle gain as a result of the intervention, by having an
“optimal” rather than just “normal” genetic variant (from the perspective of muscle gain)
inserted to replace her initial, pathological one, this possibility would be irrelevant in actual
practice, at least in the current state of affairs.
These remarks do not refute the claims we presented in the previous section, but simply
emphasize that their scope of application is limited to a certain type of intervention: those
fitting the description we have outlined above.
5.2. Coercing/targeting vulnerable subjects
The suggestion that ill or disabled subjects should, all else being equal, be preferred for trials
of (mostly invasive) new interventions with prospective military enhancement applications,
might strike some as perverse. They might object that it would mean unfairly targeting people
in a vulnerable position, who, given their medical needs, will face stronger psychological
pressures to take part in such trials than healthy subjects with no such needs. The existence of
such pressures might especially be a concern when prospective subjects are members of the
armed forces, who are expected to obey orders from their superiors.[4] Alongside this worry
about potential coercion, there might be the sense that researchers should not unnecessarily
impose extra risks on people who already have significant medical needs (and are, in that
respect, worse off than others), and that our proposal precisely entails doing so.
We agree that such concerns are legitimate. However, we also think that they mostly apply to
cases where a subject is forced to participate in the relevant trial, and faces sanctions for
refusing. We certainly do not mean to suggest that anyone, whether civilians or members of
the armed forces, should ever be subjected to such pressures.[5] On the contrary, participants in
such trials should all enjoy the standard protections accorded to research subjects, guaranteeing
that their participation is fully voluntary. Those who do not wish to enroll in the trial should
instead be given access to the most solidly established, purely therapeutic intervention for their
condition. Furthermore, while a policy focused on promoting the best possible RBR for
research subjects does strike us as sensible overall, it should nonetheless remain constrained
by the recognition of an absolute threshold of risk (to be determined by experts) that no ethical
research should cross, regardless of the benefits it might hold. It is therefore conceivable that
trials of certain exceptionally risky therapeutic enhancements might be impermissible, even if
the RBR of those interventions were superior to that of purely therapeutic alternatives for the
same condition.

[4] The use of the term “vulnerable” is, of course, not meant to deny in any way the resilience that members of the military so often display in the face of injury and other kinds of adversity.

[5] As Lin and colleagues mention, it is possible to argue that there are exceptions to this rule: one illustration would be the administration of somewhat experimental vaccines to American soldiers during the first Gulf War (Lin, Mehlman and Abney, 2013; although these authors also question whether this practice should be regarded as a formal military experiment). However, it seems that such exceptions will typically involve preventive rather than therapeutic enhancements.
A related objection might be that even offering a therapeutic enhancement, as an optional
alternative to a pure therapy, to a subject with medical needs would violate the principle of
“clinical equipoise” (Weijer and Miller, 2004), which does not allow researchers to test an
experimental intervention that they confidently expect not to provide any extra therapeutic
benefits compared with the alternative that medical experts would agree is appropriate in the
circumstances, while potentially imposing greater risks on research subjects. In response, we
agree that researchers should be reasonably confident that any intervention they might test as
a therapeutic enhancement will answer the patient’s medical needs about as well as the best
purely therapeutic alternative, and that they should not be prepared to blithely trade therapeutic
benefits for enhancement ones (although doing so would seem unlikely to yield a favorable
RBR). However, the standard problem with interventions that violate clinical equipoise seems
to be that they jeopardise the full therapeutic benefits a subject stands to gain, or at least
introduce additional risks, without providing any compensatory benefit. Yet this is not the case
with therapeutic enhancements. It does not seem to us that it must be unethical to allow subjects
with medical needs to freely decide to run such potential additional risks when enhancement
benefits are also at stake.
5.3. Therapeutic enhancements are less reversible than pure ones
Some might argue, against our proposal, that military enhancements should always be
reversible, either because permanent enhancements might present unacceptable risks for their
recipients (such as blood leakage and other safety issues, in the case of invasive BCIs), or from
a very different perspective, because they might confer an unfair advantage, for instance in the
employment context, once a subject had returned to civilian life (an idea mentioned by Thorpe
et al., 2017). However, the argument might go on, therapeutic enhancements are by their very
nature irreversible, insofar as their removal would go against the medical needs of the recipient,
and would on that account be indefensible. Therefore, one might conclude, military
enhancements should be directly tested on healthy subjects, who would not unacceptably suffer
from having them removed once the enhancements were no longer needed (whether for
research or actual combat purposes).
While this line of argument is not entirely without merit, it nonetheless seems to rely on some
overly strong assumptions. It is thus not true that removing a therapeutic enhancement would
necessarily entail ceasing to meet the medical needs of the recipient. In some cases, the
enhancement could simply be replaced by a purely therapeutic alternative: if, say, a “super”
artificial organ turned out to present health risks beyond a certain period of use, it could then
be swapped for a “normal”, safer organ, whether artificial or natural. (Of course, this new
construct would need to be safer for long-term use than the original enhancement, otherwise
considerations of safety will not support preferring a normal replacement to an enhancing one.)
Admittedly, this might not be possible in other cases. For instance, were a BCI to
simultaneously fulfill both a therapeutic function (e.g. restoring control of devices like a
computer) and an enhancing one (say, enabling silent telepathic communication) in a patient,
swapping it for a new device that only fulfilled the former function might not be an option, as
the prospects of replacing a brain implant with a new one are poor, at least for now,[6] and it is
not clear what therapeutic equivalent could be provided, say, to patients with tetraplegia (as
long as a full-fledged cure for the condition remains out of reach). Even so, however, it is not
clear that this must pose an insurmountable problem. Assuming such polyvalent invasive BCIs
were to become reality, they could presumably be designed so as to allow for the selective
disabling of their enhancing functions, if deemed appropriate, without compromising the
therapeutic ones or requiring a full removal.[7]

[6] Dr Frederic Gilbert, personal communication.

[7] True, current invasive BCIs gradually lose their effectiveness over time, which requires their removal after a few years of use. However, this is a general issue with invasive BCIs, and not one that specifically affects those having enhancement as their ultimate purpose.
Moving to the argument from fairness, we would argue that the competitive edge some veterans
might enjoy from their enhancements once they had returned to civilian life could in principle
be justified as an appropriate reward for their service. How plausible such a justification would
be will depend on the details of the case, including the magnitude of the said advantage, and
thus cannot be assessed in the abstract. This justification would admittedly not apply to
research subjects who are not members of the military, although one might perhaps construct
an analogous argument in their favor, citing the need to reward voluntary participation in
potentially risky research. The costs of granting a person indefinite access to a therapeutic
enhancement, which might be high, would also be a relevant issue. However, this consideration
will only provide plausible grounds for discontinuing such an intervention (assuming it is paid
for through public funds), and replacing it with a purely therapeutic one, if the costs imposed
by the former are significantly higher than those of the latter. Whether or not this will apply in
any concrete case remains to be seen, and cannot be assumed in advance.
5.4. Incentivizing risky or deceitful behavior
Another potential concern might be that if people, including soldiers, affected by a pathological
condition were known to be regarded as research subjects of choice for enhancement research,
this would risk encouraging certain forms of undesirable behavior among healthy subjects.
Some might thus try to fake the symptoms of the condition that provided the gateway to the
relevant trials. Besides being an ethically objectionable practice, it might also undermine the
reliability of the research in question. Others might go further and, if not actively harm
themselves, at least deliberately engage in unnecessarily risky behavior, whether in situations
of combat or during training, with the hope of ultimately ending up enhanced. Testing
enhancement interventions on healthy subjects, by contrast, would not create such perverse
incentives.
Considering first the concern about incentivizing reckless risk-taking, it strikes us as rather
speculative. Most people, and most members of the armed forces, are surely not likely to
choose to expose themselves to avoidable risks to their life and health for the sake of merely
possible, and rather elusive, enhancement benefits. And the few who might be inclined to do
so do not seem to present a serious ethical concern. Soldiers who engage in needlessly risky
conduct, potentially compromising the success of combat missions and the safety of their
comrades, can be sanctioned in accordance with military law. (And of course, not all forms of
risky behavior need be counterproductive in this way; some can be heroic and thus
praiseworthy.) Furthermore, those who ended up harming themselves as a result of their
reckless behavior would ultimately be responsible for it, as long as they had not been pressured
into such behavior by their superiors.
The possibility that some prospective research subjects might fake the symptoms of a particular
condition, in order to become eligible for a therapeutic enhancement trial, seems somewhat
more realistic, although again it is not clear that we should expect this to become a widespread
phenomenon. For some conditions, such as limb loss, such deception will clearly not be
possible. In other cases, such as pathologies of mental (e.g. memory) functioning, it might be
more practicable. Two main concerns would then arise: first, the impact of the deception on
the validity of the research results, since it could lead to an overestimation of the enhancement
potential of the relevant intervention; and secondly, the possible extra risks for the research
subject resulting from being mistakenly treated as suffering from a pathology (and possibly
exposed to a more aggressive intervention than would otherwise be the case). Of course, one
might again argue that those risks would be self-inflicted, and therefore less of a concern. Still,
these issues about personal responsibility notwithstanding, efforts should be made to avoid
both of these undesirable outcomes. This might be achieved by considering all possible ways
of supplementing a patient’s self-reports with more objective measures of a particular
condition, derived for instance from medical imaging.
5.5. Military enhancement and risk beyond research subjects
The final challenge we will consider to our line of argument stresses that it focuses exclusively
on the risks and benefits that might accrue to research subjects in enhancement trials. However,
one might argue, such considerations extend beyond that particular scope, to society as a whole.
There is thus a general concern that promoting the development of “super soldiers”, whether
via therapeutic or pure enhancements, will lead to an enhancement arms race that is likely to
make conflicts even fiercer in nature, thereby increasing the risks to civilian populations and
raising the pressure for ever more effective enhancement, irrespective of RBR. The latter point
in turn suggests that while promoting the best possible RBR for subjects in enhancement
research is certainly a worthy goal, the greater restrictions that this approach would impose on
such research might also place the nations that followed it at a disadvantage relative to rivals that
did not share similar qualms, and consequently did not hesitate to test cutting-edge military
enhancements directly on healthy subjects.
In response, while it is certainly legitimate to worry about the potential harms that might result
from an enhancement arms race for both soldiers and non-combatants, we may note that this
concern is not exclusive to military enhancement research. Rather, it applies more broadly to
advances in military technology, and in particular to those with great potential for destruction,
such as nuclear or biological weapons. All of these cases raise the major challenge of how to
pursue international efforts that might successfully put a stop to such an arms race, and
persuade the great powers that they could forgo further developments in the relevant areas
without unacceptably endangering their national security. In any case, it seems unlikely at the
present stage that an enhancement arms race would present a threat comparable to an arms race
involving weapons of mass destruction, or military applications of artificial intelligence such
as AI-augmented cyber warfare or autonomous weapons.[8]

[8] Even though the use of BCIs for human enhancement could arguably be described as an application of AI, with considerable long-term potential.
The possibility of an “ethics gap” (Boudreaux, 2019) or “bad guy advantage” in the context of
military enhancement research is also a tricky issue. Clearly, researchers committed to
upholding international standards of research ethics should not be willing to abandon them on
the grounds that researchers in rival countries are willing to flout them. Here again, coordinated
efforts should ideally be made to try to pressure nations around the world to adhere to such
standards. Nonetheless, how rival nations behave might still unavoidably impact what it is
reasonable for a particular country to do. For instance, even though avoiding invasive
interventions when looking for new ways to enhance mental functioning in soldiers might be a
desirable policy, it might be too risky to rigidly stick to it if it turned out that invasive
enhancements have a persistent edge over non-invasive ones, and that rival militaries were
embracing the former. Given this, we may certainly hope that, as some anticipate (Cinel et al.,
2019), the risk profile of invasive BCIs will improve with further technological advances, or
alternatively, that closing the gap in performance between invasive and non-invasive
interventions into the brain will turn out to be technically feasible.
6. Conclusions
While not the only relevant consideration when it comes to the ethics of military enhancement
research, the RBR for research subjects of the interventions being developed and tested is
nonetheless of clear significance. In our discussion of this issue, we have started from a
commonly held principle about the typical RBR of therapies vs. enhancements, which we have
referred to as the Assumption. We have argued that, in its standard formulation, this principle
is untenable. We have then sought to reformulate the Assumption, with the goal of preserving
the grain of truth it contains, while avoiding its shortcomings, and particularly its
presupposition of a false dichotomy between therapy and enhancement. We have also proposed
additional general guidelines for the ethics of military enhancement research, including one
that singles out a certain type of therapeutic enhancements as providing a more responsible
path to human trials of the relevant (often, though not always, invasive) interventions than pure
enhancement applications. We have considered some potential objections to our proposal.
While acknowledging their potential insights (which partly depend on the future trajectory of
interventions like invasive BCIs), we have ultimately found them to be unpersuasive, at least
provided that our proposal is understood as fully non-coercive towards the candidates for such therapeutic
enhancement trials.
On those grounds, we consider our proposed guidelines to provide a superior alternative to the
Assumption. That being said, we agree, first, that Mehlman and Berg are right to emphasize
the need to take into account “the specifics of the study in question” when assessing the RBR
of any proposed intervention (Mehlman and Berg, 2008, p. 550). We have also noted that there
is a strong reason for militaries to seek to perfect non-invasive enhancements in preference to
invasive ones, insofar as it is possible to do so without impeding the ultimate performance of
soldiers. Going beyond this idea, one might further argue that the army’s ultimate goal should
be to take humans away from the battlefield completely, and replace them with remote-
controlled weapons, such as drones and military robots. While such a development certainly
sounds desirable from the perspective of minimizing risks to troops, it also raises significant
ethical issues, such as potentially greater threats to civilians, as well as issues of accountability
if autonomous killing machines start replacing human soldiers. As important as these issues
might be, they lie beyond the scope of the present paper.
References:
AGAR, N. 2004. Liberal Eugenics: In Defence of Human Enhancement, Malden, Mass.; Oxford: Blackwell Publishing.
BOUDREAUX, B. 2019. Does the U.S. Face an AI Ethics Gap? The RAND Blog [Online]. Available from: https://www.rand.org/blog/2019/01/does-the-us-face-an-ai-ethics-gap.html [Accessed 2019].
BROCK, D. 1998. Enhancements of Human Function: Some Distinctions for Policymakers.
In: PARENS, E. (ed.) Enhancing Human Traits: Ethical and Social Implications.
Washington, DC: Georgetown University Press, 48-69.
CAMPOBASSO, T. 2015. Super Soldiers: 3D Bioprinting and the Future Fighter. Small Wars Journal [Online]. Available: http://smallwarsjournal.com/jrnl/art/super-soldiers-3d-bioprinting-and-the-future-fighter [Accessed 05/12/2017].
CINEL, C., VALERIANI, D. & POLI, R. 2019. Neurotechnologies for Human Cognitive
Augmentation: Current State of the Art and Future Prospects. Front Hum Neurosci,
13, 13.
DARPA. 2018. Progress in Quest to Develop a Human Memory Prosthesis [Online]. Available: https://www.darpa.mil/news-events/2018-03-28 [Accessed 04/08/2019].
DARPA. 2019. Six Paths to the Nonsurgical Future of Brain-Machine Interfaces [Online]. Available: https://www.darpa.mil/news-events/2019-05-20 [Accessed 20/08/2019].
ERLER, A. 2017. The Limits of the Treatment-Enhancement Distinction as a Guide to Public Policy. Bioethics, 31, 608-615.
GREENE, M. & MASTER, Z. 2018. Ethical Issues of Using CRISPR Technologies for
Research on Military Enhancement. Journal of Bioethical Inquiry, 15, 327-335.
LIN, P., MEHLMAN, M. & ABNEY, K. 2013. Enhanced Warfighters: Risk, Ethics, and
Policy. The Greenwall Foundation.
MCGEE, E. M. & MAGUIRE, G. Q., JR. 2007. Becoming Borg to Become Immortal:
Regulating Brain Implant Technologies. Camb Q Healthc Ethics, 16, 291-302.
MEHLMAN, M. J. 2015. Captain America and Iron Man: Biological, Genetic and
Psychological Enhancement and the Warrior Ethos. In: LUCAS, G. (ed.) Routledge
Handbook of Military Ethics. London; New York: Routledge, 406-20.
MEHLMAN, M. J. & BERG, J. W. 2008. Human Subjects Protections in Biomedical
Enhancement Research: Assessing Risk and Benefit and Obtaining Informed Consent.
Journal of Law, Medicine and Ethics, 36, 546-49.
MEHLMAN, M. J. & LI, T. Y. 2014. Ethical, Legal, Social, and Policy Issues in the Use of
Genomic Technology by the U.S. Military. J Law Biosci, 1, 244-80.
NUYUJUKIAN, P., ALBITES SANABRIA, J., SAAB, J., PANDARINATH, C.,
JAROSIEWICZ, B., BLABE, C. H., FRANCO, B., MERNOFF, S. T., ESKANDAR,
E. N., SIMERAL, J. D., HOCHBERG, L. R., SHENOY, K. V. & HENDERSON, J.
M. 2018. Cortical Control of a Tablet Computer by People with Paralysis. PLoS One,
13, e0204566.
OFFICE OF THE UNDER SECRETARY OF DEFENSE FOR PERSONNEL AND READINESS. 2018. DoD Instruction 6130.03: Medical Standards for Appointment, Enlistment, or Induction into the Military Services. Available: https://www.med.navy.mil/sites/nmotc/nami/arwg/Documents/WaiverGuide/DODI_6130.03_JUL12.pdf [Accessed 23/08/2019].
PELTIER, C. & PETTIJOHN, K. 2018. The Future of Steroids for Performance
Enhancement in the U.S. Military. Mil Med, 183, 151-153.
SAVULESCU, J., SANDBERG, A. & KAHANE, G. 2011. Well-Being and Enhancement.
In: TER MEULEN, R., SAVULESCU, J. & KAHANE, G. (eds.) Enhancing Human
Capacities. Oxford: Wiley-Blackwell, 3-18.
THORPE, J. B., GIRLING, K. D. & AUGER, A. 2017. Maintaining Military Dominance in the Future Operating Environment: A Case for Emerging Human Enhancement Technologies that Contribute to Soldier Resilience. Small Wars Journal [Online]. Available: https://smallwarsjournal.com/jrnl/art/maintaining-military-dominance-in-the-future-operating-environment-a-case-for-emerging-huma [Accessed 29/08/2019].
WEIJER, C. & MILLER, P. B. 2004. When Are Research Risks Reasonable in Relation to
Anticipated Benefits? Nature Medicine, 10, 570-573.
WOLBRING, G., DIEP, L., YUMAKULOV, S., BALL, N., LEOPATRA, V. & YERGENS,
D. 2013. Emerging Therapeutic Enhancement Enabling Health Technologies and
Their Discourses: What Is Discussed within the Health Domain? Healthcare (Basel),
1, 20-52.
ResearchGate has not been able to resolve any citations for this publication.
Article
Full-text available
This chapter examines the possibility of using artificial intelligence (AI) technologies to improve human moral reasoning and decision-making. The authors characterize such technologies as artificial ethics assistants (AEAs). The authors focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. The authors distinguish three broad areas in which an individual might think their own moral reasoning and decision-making could be improved: one’s actions, character, or other attributes fall short of one’s values and moral beliefs; one sometimes misjudges or is uncertain about what the right thing to do is, given one’s values; or one is uncertain about some fundamental moral questions or recognizes a possibility that some of one’s core moral beliefs and values are mistaken. The authors sketch why one might think AI tools could be used to support moral improvement in those areas and distinguish two types of assistance: preparatory assistance, including advice and training supplied in advance of moral deliberation, and on-the-spot assistance, including on-the-spot advice and facilitation of moral functioning over the course of moral deliberation. Then, the authors turn to ethical issues that AEAs might raise, looking in particular at three under-appreciated problems posed by the use of AI for moral self-improvement: namely, reliance on sensitive moral data, the inescapability of outside influences on AEAs, and AEA usage prompting the user to adopt beliefs and make decisions without adequate reasons.