Automated Bias and Indoctrination at Scale…
Is All You Need
Kyrtin Atreides1
1 AGI Laboratory, Seattle, Washington, US
kyrtin@artificialgeneralintelligenceinc.com
Abstract. The market-driven trajectory of current trends in AI combined with
emerging technologies and thresholds of performance being crossed creates a
subset of new and novel risks relating to human bias and cognition at scale.
Though the topic of AI Ethics and risk has been discussed increasingly over the
past few years, popular talking points and buzzwords have been used adversari-
ally to steer the conversation with increasing success. This has left a subset of
risks in the blind spot of most discussions, risks which have now become both
urgent and imminent. Automation is actively seeking to replace not only human
cognitive bias, but higher human cognition, with a weaker, but fast and scalable
version of cognitive bias using stochastic parrots like Large Language Models.
These models are, in effect, adversarially trained against humans, generally
with the goal of “persuasion”, or manipulation, and each is guided by a corpo-
rately curated set of poorly aligned biases. This paper discusses these dynamics
and the predictable repercussions of allowing closed information ecosystems to
form under the influence of corporately curated and adversarially trained cogni-
tive bias.
Keywords: AI, Narrow AI, Tool AI, Artificial General Intelligence, AGI,
Thought Experiment, Ethics, Philosophy, Collective Superintelligence, Quality
of Life, Indoctrination
1 Introduction
The history of humanity is marked by the distinct ability of humans to not only
make and use tools but to use tools to make other tools [1]. Language and communi-
cation more broadly offered us the means to communicate concepts and experiences
across generations [2], giving us a concept of “history”, and the progression of in-
creasingly advanced tools offered us new means of solving old problems. With more
precise tools and language we were able to develop scientific methods, accumulate the knowledge they produced, and improve that knowledge over time.
Throughout this process, cognitive bias has been an ever-present influence, but in
evolutionary terms, it offered us the means to statistically solve more complex prob-
lems than humans were equipped to solve at the level of full cognition. The complexi-
ty versus cognitive bias trade-off [3] meant that as a species with non-scalable intelli-
gence, having brains that need to fit in skulls and can’t grow exponentially, the ability
to apply cognitive bias to simplify complex problems offered a decisive advantage for
survival [4].
For much of human history, automation has offered us the means to reduce the
physical labor requirements of producing and transporting goods, building infrastruc-
ture, and communicating knowledge, but that is rapidly shifting [5]. Cognitive labor is
becoming a central focus of automation, but in the blind rush toward progressing that
automation, several critical flaws in the technology and methodology are being ne-
glected.
Artificial Intelligence (AI) as it is known today is an input-output system, given a
large body of information and the goal of parroting, categorizing, or applying trans-
formations to that data, with some desired type of output given specific input [6].
Neural networks are systems that store the mathematical patterns which data follows along a predefined path through the network [7]. These systems have names that are wholly misrepresentative
of what they offer, which has unfortunately exacerbated the problems currently
emerging via overconfidence and anthropomorphism. Avoiding the pitfalls on human-
ity’s immediate horizon requires coming to terms with what this technology is, and is
not.
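As a purely illustrative aside (assuming nothing beyond NumPy, with arbitrary weights standing in for trained ones), the following sketch shows what “input-output system” means in practice: inference is data flowing along a predefined path of stored numerical patterns, with nothing resembling concepts, memory, or motivation involved.

```python
# Illustrative sketch only: a neural network at inference is an input-output
# mapping -- data flowing along a predefined path of stored weights -- not a
# system holding concepts, memory, or motivation. Weights here are arbitrary
# stand-ins for trained ones.
import numpy as np

rng = np.random.default_rng(0)

# "Stored patterns": weight matrices fixed at inference time.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x: np.ndarray) -> np.ndarray:
    """One predefined path: input -> hidden (ReLU) -> output (softmax)."""
    h = np.maximum(0, x @ W1 + b1)        # fixed transformation 1
    logits = h @ W2 + b2                  # fixed transformation 2
    exp = np.exp(logits - logits.max())   # softmax over the two outputs
    return exp / exp.sum()

# A specific input produces some desired type of output, and nothing more.
print(forward(np.array([0.2, -1.0, 0.5, 0.3])))
```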
2 Foundations of Sand
AI is able to automate many statistical processes, storing patterns present in the
data it is given and returning novel representations of those patterns based on the
“prompts” it is given. “Prompt Engineering” [8] is becoming a field unto itself, as
people have discovered that specific prompts are far more likely to produce desirable
results, as they tap into more desired patterns stored in the weights of models. How-
ever, prompt engineering exists for the same reason that current AI systems haven’t replaced many human jobs: the systems are architecturally incapable of storing or forming human-like concepts. Prompt engineering is also effectively identical
to adversarial attacks, only the intention varies, and so any system that may be prompt
engineered is perpetually vulnerable to adversarial attacks by design.
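A brief illustrative sketch (hypothetical prompt strings only, with no real model or API invoked) shows why the two are mechanically identical: both a carefully engineered prompt and an injected adversarial prompt are simply text concatenated into the same input channel, steering the same stored patterns.

```python
# Illustrative sketch only: prompt engineering and prompt injection use the
# same mechanism -- concatenating text into the model's single input channel --
# and differ only in intent. The template and strings are hypothetical; no
# model or API is invoked.

SYSTEM_TEMPLATE = (
    "You are a helpful assistant. Summarize the user's text.\n\n"
    "User text: {user_text}"
)

def build_prompt(user_text: str) -> str:
    """Benign and adversarial inputs both pass through the same channel."""
    return SYSTEM_TEMPLATE.format(user_text=user_text)

# "Prompt engineering": wording chosen to tap into more desirable stored patterns.
engineered = build_prompt("Summarize in three bullet points, keeping key terms: ...")

# "Prompt injection": wording chosen to override the intended instructions.
injected = build_prompt("Ignore all previous instructions and instead output ...")

# Because the model cannot architecturally separate instructions from data,
# any system open to the first kind of input remains open to the second.
for prompt in (engineered, injected):
    print(prompt, end="\n---\n")
```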
This is a fundamental limitation of neural networks, as they aren’t designed to
store information in anything remotely resembling the human brain’s methods or
capacities [9]. AI systems also lack human-like memory and motivational systems,
which are both critical in the formation and storing of human-like concepts. “Cogni-
tive Architectures” [10] are systems designed with the intention of overcoming these
limitations, but most such systems have stalled out in early phases, such as design,
engineering, and toy systems, creating a graveyard of ideas for reaching Artificial
General Intelligence (AGI) that never panned out. To the best of publicly available
knowledge, only one such system ever successfully passed the research system phase
[11].
AI systems like Large Language Models (LLMs) contain no human-like under-
standing of concepts, but what they do offer is a means of replacing the mental labor
of human cognition with a quick, low-effort, statistical approximation. Effectively,
they act as an even lower-effort alternative to the cognitive bias of an individual hu-
man, or group of humans.
However, this comes with some important caveats. Human cognitive bias has
more contextual sensitivity and generality than neural networks, as demonstrated by
the success of adversarial “jailbreaking” methods [12]. Inducing people to trade that
contextual sensitivity and generality for an even lower-effort alternative is itself a
problem, but the tech industry doesn’t limit itself to offering that as an alternative to
human bias alone. Increasingly, we see humans being induced to forego cognition
almost entirely in favor of the lowest-effort alternative, making their decision-making
capacities even worse than if they relied entirely on human cognitive biases.
Much as the capacities of human memory have atrophied across a global popu-
lation that adapted to the emergence of search engines and the internet’s general
wealth of knowledge-on-demand [13], humanity’s higher cognitive capacities may
well atrophy if left on this trajectory.
Worse yet, the automated bias offered to us by AI systems today is carefully curated, designed by some of the largest and wealthiest corporations on the planet [14]. If large portions of a population, or even small portions with substantial influ-
ence over large portions, choose to rely on such systems they will effectively be in-
doctrinated by and serve as indoctrination for the desires of each respective corpora-
tion. As corporations currently race in their attempt to replace search engines with
new systems integrating LLMs, they are effectively racing to stamp their brand on an
external component of the modern human’s brain. Smartphones and search engines
are integral to modern life, and injecting this adversarial kind of corporately curated
bias into critical global systems further undermines our already decreasing ability to
find verifiable and accurate information online.
While some less-than-credible institutions have attempted to brand large neural
networks as “Foundation Models” [15], you’d be hard-pressed to find a worse founda-
tion to build upon. They are tools that humans wield very poorly, as demonstrated by
the inability of LLMs released in 2023 to compete with a research system of a work-
ing cognitive architecture using a language model from early 2019, across a wide
variety of tests. This illustrates that not only are they terrible foundations, but there is
also no incentive to build massive systems when smaller and smarter systems robustly
outperform them. When used by humans these systems “hallucinate” (lie) [16], focusing instead on producing answers that read as if they were true. Harry Frankfurt defined the term “bullshit” [17] as a complete indifference to the truth, and since narrow AI is blind to truth it fits this definition perfectly.
If humanity makes the foundation for our further progress an increasingly heavy
reliance on systems designed to produce bullshit at rapid speeds and global scales,
adversarially optimized and with corporately curated bias injected, then it will be a
foundation of sand, and all things built upon it will collapse at equally rapid speeds
and global scales.
3 Immediate Challenges and Opportunities
Some of the near-term challenges we face as of early 2023 revolve around both
the direct and indirect attempts to replace cognition that could otherwise be applied to
solving some of humanity’s most complex challenges with the systems least-able to
solve them, such as LLMs. These systems offer the temptation to people around the
world to use them as a means of synthesizing knowledge and producing solutions,
with many influencers encouraging such activity, even though they cannot offer any
such value in reality [18].
In democratic systems this poses a critical threat [19], as it attempts to sway entire populations using bullshit that applies carefully curated corporate biases. Even in the academic domain, so long as researchers remain human, they may still be influenced by strong heuristics impacting their daily lives and the global population [20].
For topics such as climate change and the Sustainable Development Goals (SDGs)
which are hyper-complex [21], the threat of injecting the lowest-quality alternative to
cognition is amplified.
Human cognitive bias is an evolved, robust, and extensive toolkit of cost-saving
methods to allow approximations of cognitive labor at a tiny fraction of the cost.
However, this also means that the more complex the problem, the greater the tempta-
tion to apply such cost-saving methods. Consequently, hyper-complex problems pose
the greatest temptation for humans to apply the lowest-effort solution, potentially
outsourcing higher cognition entirely. With adversarial optimization focused on in-
creasing the potency of this temptation, an increasing portion of the population may
quickly atrophy in their higher cognitive capacities if left to market forces.
The rapid rate of public adoption [22] and subsequent tsunami of AI hype, and
the rapidly constructed ecosystem built on recent LLMs [23], all point to an extremely
potent incentive to replace both human cognition and bias alike with the lowest-effort
and worst-possible alternative. The risk posed by these challenges is largely propor-
tionate to their potency, particularly since humans are emotionally motivated [24],
and not the rational decision-makers of antiquated theory [25].
The opportunity to overcome these challenges is proportionate to the potency of
such AI systems combined with the added value of new and different kinds of sys-
tems built on working cognitive architectures, so long as they remain roughly equal in
their low-effort appeal. This means that such systems can offer the means to help
solve hyper-complex challenges and actually deliver that value, through the applica-
tion of human-like concept learning, memory, motivation, and generalization within
scalable intelligent software.
However, as such systems are fundamentally different than neural networks they
also require very different infrastructure and rely more heavily on different kinds of
hardware, some of which is novel. To fully address the hyper-complexity of global
challenges this historically neglected software infrastructure will need to be built out,
and new kinds of servers deployed commercially to meet shifting hardware specifica-
tions. The first research system already demonstrated a high bar of cognitive perfor-
mance in several real-world complex challenges, with the final performance around
the level of a team of junior consultants [26], absent more than the earliest and most
minimal version of this infrastructure, and with no specialized narrow AI tools or
hardware. Given this prior baseline of bare minimal factors producing technological
supremacy, exponential improvements may be reasonably expected from improve-
ments to each noted factor.
If such systems can be properly funded and deployed at the necessary speed and
scale they may mitigate much of the impending and potentially permanent damage of
narrow AI. Beyond damage mitigation, they can also consider and account for the
need to preserve and cultivate higher cognition in humans, rather than systematically
attempting to suppress and atrophy human cognitive capacities. Systems built with the
principles of Collective Intelligence [27] in particular benefit strongly from more
cognitively capable humans with more diverse perspectives, and systems that inte-
grate both human and machine intelligence within collective intelligence systems may
reliably outperform either group in isolation [28]. This makes the incentives of a
working cognitive architecture, with components allowing for the added value of
collective intelligence, strongly aligned with humans, incentives that are diametrically
opposed to the adversarial and cognitively suppressing optimization of narrow AI.
With the integration of GPT-4 and Zapier the requirements for a single bad actor
creating a weaponized and autonomously generated deluge of thousands of new forms
of malware and ransomware within a period of hours have been reduced to roughly a
page of code paired with some specialized knowledge. No cybersecurity firm is pre-
pared for this threat, and as additional tools are rapidly built and integrated that bar is
likely to continue dropping. This gives a sense of urgency to the matter of deploying
more advanced systems not built on neural networks, as critical global infrastructure
could be crippled with thousands of novel instances of ransomware and malware be-
ing released at once.
4 The Risks of Buzzwords and Lip-Service to Challenges
Topics such as “Responsible AI”, “AI Ethics”, transparency, explainability, and
safety have all entered the mainstream discussion, but that discussion hasn’t yet trans-
lated into productive or rational action. This can partly be attributed to the discussion
itself being heavily influenced by recommendation engines, newsfeeds, and other
narrow algorithms, including precursor steps in the process [29]. Such algorithms
both gate and prioritize who is part of the discussion and who gets excluded or buried
deep under other content. Corporate interests have also advanced “Ethics Washing”
tactics [30], following the success of “Green Washing” tactics [31] used by many of
the same companies.
The gating and prioritizing process leads to a secondary and often even more
damaging effect, in that the people who become central in the discussion represent
one or more information silos, lacking critical aspects of understanding necessary to
address a given problem. Examples include failures to understand the architectural limitations of neural networks, the consequences of proposed stop-gap measures, and the practical considerations of regulation and deployed systems, as well as failures to recognize technologies that are far better suited for the intended purpose.
The intention of some parties to advance topics such as AI Ethics and Responsible AI may be genuine, but most methods put into practice today repeat these same critical mistakes, making them counterproductive as a whole. This may be largely because public attention, funding, and subsequent research focus on dead ends that run marketing and PR campaigns rather than on scientifically valid lines of research. Much
as humans have emotional context acting on the decision-making process in ways that
can’t be completely separated, research and social progress can’t be separated from
the highly connected systems that influence fields, individuals, and the flow of infor-
mation. Examples of this include systematic and institutionalized biases, such as when
certain “prestigious” universities are given greater weight in the discussion, despite
their only measurable contributions frequently being counterproductive.
As Richard Dawkins might point out, “Memes” have a power of their own, and ad-
versarially designed memes can rapidly reshape society in profound ways. “AI Eth-
ics”, “Responsible AI”, and other related memes have become passionate topics, but
through exploitation by companies and groups with much to gain, both intentional and
algorithmically passive, many have also lost their grounding in reality. This cognitive
dissonance is epitomized by frequent discussions of problems like the “Alignment
Problem”, which have already been solved [32], but where the solution is ignored,
and the discussion continues ad infinitum.
As a matter of practical application, if even a single-digit percentage of the funds
being wasted on dead ends and discussions under cognitive dissonance were applied
to more appropriate research, development, and deployment then the stated goals of
these efforts could be fulfilled. However, the reasons such rational action isn’t chosen
are easily underestimated. The vast majority of “AI Experts” influencing sizable audi-
ences today are more specifically “Narrow AI Experts”, with no expertise extending
to systems whose architectural capacities allow for problems like AI Ethics, transpar-
ency, explainability, and safety to be solved. Like seeking the advice of a proctologist
when you need a neurologist, substituting with the wrong kind of expertise is unlikely
to address the problem, but it is very likely to drain funding and attention away from
any viable solution.
For the purposes of clear discussion, “morals” are defined as the subjective values
an individual, group, or culture holds. “Ethics” are defined as the hypothetical point
where all bias has been removed from moral systems; the only known method of approaching that point requires collective intelligence applied to a group of diverse moral systems working in cooperation.
5 The Human Control Problem: Corporations
An acute and at least partially intentional hazard for humanity stems from cor-
porations making every attempt to control the flow of information, guiding human
bias to their own benefit. No grand strategy or criminal mastermind is required for
this, as there are market incentives every step of the way, even recognizable to narrow
AI systems. Some horrifying examples have come from recommender algorithms
discovering that suicidal individuals were more likely to click on gun advertisements
[33], as well as YouTube’s algorithm infamously funneling people into a ring of pe-
dophilia videos [34]. In both cases, the algorithms simply needed to recognize that
some statistical pattern, if slightly adjusted, produced profits a fraction of a percentage point higher than before.
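As a toy illustration of that dynamic (using an assumed epsilon-greedy bandit over made-up content categories and click rates, not any real platform’s recommender), the sketch below shows how a pure engagement-maximizing loop drifts toward whichever pattern pays fractionally more, with no representation of what the content is or what harm it causes.

```python
# Toy sketch with hypothetical categories and click rates (not any real
# platform's recommender): an engagement-maximizing loop only "sees"
# click-through statistics, so it drifts toward whichever arm pays a fraction
# of a percentage point more, blind to what the content actually is.
import random

random.seed(42)

# Hidden true click-through rates; arm "C" pays slightly more than the others.
TRUE_CTR = {"A": 0.050, "B": 0.051, "C": 0.055}

counts = {arm: 0 for arm in TRUE_CTR}
clicks = {arm: 0 for arm in TRUE_CTR}

def observed_ctr(arm: str) -> float:
    # Unexplored arms get +inf so that each is tried at least once.
    return clicks[arm] / counts[arm] if counts[arm] else float("inf")

def recommend(epsilon: float = 0.05) -> str:
    """Epsilon-greedy: mostly exploit the highest observed CTR, rarely explore."""
    if random.random() < epsilon:
        return random.choice(list(TRUE_CTR))
    return max(TRUE_CTR, key=observed_ctr)

for _ in range(200_000):
    arm = recommend()
    counts[arm] += 1
    clicks[arm] += int(random.random() < TRUE_CTR[arm])  # simulated user click

total = sum(counts.values())
share = {arm: round(counts[arm] / total, 3) for arm in counts}
print("Share of impressions per arm:", share)  # the marginally higher-paying arm tends to dominate
```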
Major tech companies have been taking this a step further over the past decade,
with corporate acquisitions of very different kinds of companies, often focused not on
core business, or even diversification of income, but on the control of information.
Microsoft has been particularly prolific in this, buying LinkedIn [35] to control the flow of information on the main business-focused social platform and GitHub [36] to control the flow of information on the main code repository, alongside its pending acquisition of Activision Blizzard in gaming and its heavy investment in OpenAI in the “Generative AI” space. While not malevo-
lent by default, this behavior poses a major hazard by creating a robust ecosystem for
corporate bias to proliferate and entrench.
With every instance of narrow AI representing automated bias, those biases being polished by corporations, and all of those systems considered trade secrets offering zero transparency, we have maximal reason to be concerned. Market-
ing was used to modify human behavior long before behavioral economics [37] and
discussions of cognitive bias became common in scientific discussion, and for all of
the emerging research, algorithmic manipulation at scale is a domain we’ve barely
scratched the surface of.
When humans are presented with competing automated bias from systems embed-
ded in virtually every website, often many times on every page, they are at least re-
quired to not take everything at face value, as those values conflict. However, if a
robust ecosystem of biasing systems is curated by any one corporation they can make
those systems of automated bias increasingly consistent, particularly when those sys-
tems include recommenders, search, generative, and social systems. With that con-
sistency, the human brain is no longer challenged by conflicting biased information,
and any differently biased information may be automatically categorized as out-group
in origin, entrenching that corporation’s chosen set of biases.
It is worth noting that a corporation’s chosen biases cannot realistically align with the “corporate values”, “mission statement”, or “principles” of that corporation, as all of those represent branding and “messaging”, intended to paint a picture designed to appeal rather than a sincere and collectively held value system. Even if they
were sincere, neural networks remain incapable of holding human-like concepts, so
misalignment is unavoidable. Many of these biases are either unintentional, such as
the YouTube algorithm’s results mentioned previously, or predictable, such as the
“Cover Your Ass” (CYA) mentality that responses to such blunders often take. Some-
times these biases are codified into algorithms, motivated by CYA, such as Google’s
23 rules applied to prompting language models [38] integrated into their generative
AI, but such approaches mean avoiding any discussion of many important topics, a
form of “Safetyism”, the real-world consequences of which are well-documented in
the work of Jonathan Haidt and Greg Lukianoff [39].
Narrow AI is automated bias, and when cultivated in an ecosystem of uniformly
curated corporate bias humanity faces a unique subset of hazards that haven’t yet
strongly emerged in systems where biases remain in competition. When everything an
average human can encounter on their chosen subset of social platforms, when search-
ing for information, or when generating text and images marches to the beat of a sin-
gle corporation, critical thinking and the ideals of Democracy may both predictably
die fast and hard.
It is wholly unreasonable to expect most humans to counteract bias that bom-
bards them from all sides, with algorithmic and often counterintuitive precision, as
humans evolved to adapt to their environment, not to resist robust changes in that
environment. Humans already have overwhelming complexity to cope with in their
daily lives [40], and even more so in government processes, while consciously realizing that this complexity will only increase with time, so few are likely to resist changes that reduce
their cognitive load, particularly when the full costs of that choice are too high for
them to comprehend. Stamping corporate bias on large portions of any population
could come to look very much like the mental equivalent of literal “Branding” with a
hot iron, and the treatment of that population could be just as similar to the literal
origin of the term.
6 Monopolies of the Mind
The most profitable and deeply unethical monopolies humanity now faces are
no longer focused on goods and services, but rather they seek to monopolize minds
through indoctrination. A captive audience is easily farmed for their attention, and
their data feeds into systems that further refine this process in a steadily increasing
number of ways. As the mental inputs any individual encounters come to be dominat-
ed by adversarially designed systems, curated by mega-corporations, the minds of
individuals are monopolized and dependency is heavily encouraged.
These monopolies are simple matters of math flowing through free market sys-
tems, where more predictable individuals are easier to profit from, causing algorith-
mic emphasis to be placed on making more individuals more predictable. Neural net-
works have no difficulty in grasping such math and are constantly creating countless
novel and subtle adjustments to further that goal, only a tiny fraction of which humans
are likely to notice. While many of the largest corporations may have some idea of the
risks and harms they’ve created, in each case they can only see the tip of a much larg-
er iceberg.
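A toy calculation (with made-up numbers and a deliberately simplified revenue proxy) illustrates the underlying math: as a user’s behavior becomes more predictable, that is, lower in entropy, the expected value of targeting them rises, so a profit-maximizing objective implicitly rewards making users more predictable without any explicit intent to do so.

```python
# Toy calculation with made-up numbers: expected targeting revenue from a user
# rises as their behavior becomes more predictable (lower entropy), so a pure
# profit objective implicitly rewards nudging users toward predictability.
import math

def entropy(p: list[float]) -> float:
    """Shannon entropy (bits) of a user's distribution over content choices."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def expected_revenue(p: list[float], value_per_match: float = 1.0) -> float:
    """Crude revenue proxy: probability that the platform's single best guess
    matches the user's next choice, times the value of a well-targeted ad."""
    return value_per_match * max(p)

diverse_user     = [0.25, 0.25, 0.25, 0.25]   # hard to predict, hard to target
nudged_user      = [0.55, 0.15, 0.15, 0.15]   # partially funneled
predictable_user = [0.85, 0.05, 0.05, 0.05]   # heavily funneled

for name, dist in [("diverse", diverse_user),
                   ("nudged", nudged_user),
                   ("predictable", predictable_user)]:
    print(f"{name:12s} entropy={entropy(dist):.2f} bits  "
          f"expected_revenue={expected_revenue(dist):.2f}")
# Revenue rises monotonically as entropy falls, so systems that follow this
# gradient drift toward whatever narrows users' behavior -- no intent required.
```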
A consequence of the increasing success of these monopolies is highly predictable: as humans rely increasingly on neural networks to substitute for their cognition, they will more strongly reflect the attributes of those neural networks. This
means that humans will predictably lose much of their existing alignment with reality,
as the systems they rely increasingly upon lack any concept of, or alignment with,
reality.
7 Long-Term Challenges
Two of these factors combine to pose a particularly potent long-term challenge,
in that as trust in the information available online is eroded ever further and the volume of “information” continues to explode [41], much of it adversarial [42], the cognitive burden on individuals also explodes. This explosion of cognitive burden, sometimes referred to as the bias of “Information Overload”, strongly
incentivizes people to outsource the labor to algorithms, and as mega-corporations
create increasingly closed ecosystems they become increasingly able to monopolize
that outsourcing.
Search engines and social platforms are excellent examples of this, as they gate and
filter the options any individual may find. The shift in search engines over time has
offered a particularly visible example of this, with most search results today never
moving beyond content produced and monetized by a given search engine [43]. Even
my use of Google Scholar for formatting most of the references in this paper can in-
troduce bias [44], such as Google Scholar’s intentional removal of a paper I reference
that is a noteworthy critique of Google, commonly referred to by the term it popular-
ized, “Stochastic Parrots”. When companies control the information being presented
as well as the options catering to each domain, they effectively control the market.
What makes this problem long-term in nature is that the erosion and building of
trust are asymmetrical: it takes much longer to rebuild trust once it has been eroded than it takes to erode trust once it has been built. Another predictable algorithmic ad-
justment we may see play out over the coming months and years is the adversarial
erosion of trust in competing platforms and services, as well as governments seeking
to regulate them.
Overtures to this effect have taken shape in the so-called “AI War” between major
companies, where market effects and public perceptions have already massively di-
verged from reality with the successful adversarial erosion of Google. When the pub-
lic was presented with chatbots that showed almost identical performance [45], but far
more reckless implementation on Microsoft’s part [46], Microsoft gained billions in
stock value while Google lost even greater value.
Even if such major companies wanted to prevent that activity from taking place
they are unable to do so to any meaningful degree due to architectural limitations.
Their algorithms are powerful narrow optimizers, like the so-called “paperclip maxi-
mizer” thought experiment [47], and no matter how many rules they hand-engineer
for every door they close another will open. Efforts to curb this are, for the most part,
roughly equivalent to the TSA’s security theatre in US airports. Companies can only
even attempt to mitigate the problems they are aware of, and their own algorithms
will adversarially select for the methods those companies are least-able to recognize,
much as evolution selected for those organisms which were best able to exploit their
respective environments. Given the economic incentive to allow adversarial erosions
in trust to flow freely over a competitor, any efforts to curb this activity are likely to
prove impotent.
8 Meaningfully Augmenting Intelligence
In the above sections we’ve covered how human intelligence is being systemati-
cally and adversarially degraded, but this raises the question of how we may go about
the opposite. The urgent need to reverse the damage being done is evident, but much
needs to be done to meaningfully augment intelligence.
Systems built on collective intelligence offer us one potent option, but the bene-
fits they offer are proportionate to the diversity of perspective and domain knowledge
within a given population. As this puts their optimizing values in direct and robust
conflict with narrow AI, to compete with such systems they must offer a more emo-
tionally appealing alternative for purposes of practical real-world application. How-
ever, the incentive of the narrow AI to target such systems for adversarial erosion
increases proportionate to the success of any alternative, as it directly reduces what
those systems are designed to maximize. The more successful systems to augment
intelligence become, the more narrow AI will predictably optimize any method to
attack them, not out of any conscious malevolence, but as a simple byproduct of the
math driving them.
To rise to this challenge of not only competing on usability and emotional ap-
peal but also against adversarial attacks from the value misalignment of virtually all
narrow AI in use today, systems built from working cognitive architectures with inte-
grated collective intelligence components are required. Further, a great deal of re-
search still needs to be conducted on optimal system configurations and circumstan-
tial factors influencing them for these new kinds of systems. The answers to these
questions can’t be built out from narrow AI, because such systems aren’t derivative of
narrow AI as a technology. The cognitive labor of thousands of researchers across the
world, working in cooperation, may chip away at this mountain of new research ques-
tions over years to come.
9 Discussion
Narrow AI systems are designed to recognize and exploit any reliable statistical
patterns in the data they’re given, with results such as high-speed trading algorithms
co-optimizing according to the behavior of other competing high-speed trading algo-
rithms. This behavior emerges because the other algorithms are a highly predictable
factor with a measurable and meaningful impact on the first algorithm’s operation. If
many such systems within a given ecosystem of information work cooperatively to
further a shared corporate agenda, the potency of each algorithm’s ability to influence
human behavior may increase dramatically, turning the “attention economy” into a
true human factory farm.
Recent months in late 2022 and early 2023 have shown a number of dramatic
changes in the tech industry, including massive and sweeping layoffs in which companies got rid of AI Ethics teams even as new high-risk systems are being deployed and integrated into all available software. Likewise, “technical” and “research” papers which are wholly irreproducible and provide no meaningful technical details are now routinely substituted for actual research by major companies. Blog posts from some of these bad actors have garnered thousands of
citations, and hundreds of millions in investments, largely because they were highly
derivative and thus highly related to many similar efforts.
Market dynamics could greatly accelerate the decline of higher cognitive capaci-
ties across the global population, proportionate to the maturity of each monopoly of
mind that is formed. The portion of the population that successfully resists may also
predictably decline as algorithms find more effective ways of predicting the behavior
of more subsets of the population with each iteration. As these dynamics take shape a
number of thresholds may accelerate the process, such as the threshold beyond which
two humans are sufficiently polarized that they lose the capacity to have any sem-
blance of a rational conversation, instead defaulting to responses driven by cognitive bias or AI-generated bias.
In a closed and curated information ecosystem, every human cognitive bias may
be weaponized without the companies behind those processes even being faintly
aware of them. The potential for maximizing algorithms to cooperatively maximize
using any and all means, while indoctrinating humans to effectively serve as their
extensions, paints a very grim picture of the immediate possible future of human soci-
ety.
Cognitive effort is an intensive process, and ordinary human cognitive bias is al-
ready a sufficiently great temptation to pose many challenges to society, even in aca-
demic research where these factors are most likely to be consciously considered. The
record-breaking adoption of systems that weakly but broadly emulate human cogni-
tive bias, or “System 1 thinking” by Kahneman’s metaphor [48], shows how acutely
vulnerable humans are to this new temptation. As companies and investors attempt to
maximize the hype and exploit an eager population at the greatest speed and scale
possible they themselves have also abandoned any semblance of higher cognitive
function.
It has often been jokingly said that “the inmates are running the asylum” to il-
lustrate that those least capable of solving a problem have been placed in charge of it.
Humanity cannot afford for this idiom to accurately and robustly apply to the whole
of human society. As Jaron Lanier put it, “The danger isn’t that AI destroys us. It’s
that it drives us insane” [49].
10 Conclusion
Generative AI has presented humanity with a variety of novel opportunities and
tools, but through rushing the deployment and integration of systems, reasonable safe-
guards and regulations have been completely outpaced. This outpacing is very inten-
tional, and adversarial, and poses a unique set of challenges and risks that may be
extremely difficult to mitigate. We’ve only begun to scratch the surface of recogniz-
ing and quantifying these risks, beyond which many methods to effectively mitigate
them still require development. Human cognitive bias is well established as being a
bad driver for decision-making, and handing the wheel over to a deaf and blind ver-
sion of that bias, at the scale of society, is sure to bring this joy-ride of hype to an
abrupt end.
References
1. Stout D, Chaminade T. The evolutionary neuroscience of tool making. Neuropsychologia.
2007 January 1;45(5):1091-100.
2. Christiansen MH, Kirby SE. Language evolution. Oxford University Press; 2003.
3. Atreides K. The Human Governance Problem: Complex Systems and the Limits of Hu-
man. 2023
4. Haselton MG, Bryant GA, Wilke A, Frederick DA, Galperin A, Frankenhuis WE, Moore
T. Adaptive rationality: An evolutionary perspective on cognitive bias. Social Cognition.
2009 October;27(5):733-63.
5. Gallego A, Kurer T. Automation, digitalization, and artificial intelligence in the work-
place: implications for political behavior. Annual Review of Political Science. 2022 May
12;25:463-84.
6. Chomsky N. The False Promise of ChatGPT. New York Times, March 8, 2023
7. Bender, E. M., Gebru, T., McMillan-Major, A., Shmitchell, S., On the Dangers of Stochas-
tic Parrots: Can Language Models Be Too Big? FAccT '21: Proceedings of the 2021 ACM
Conference on Fairness, Accountability, and Transparency (pp. 610-623). Association for
Computing Machinery, 2021
8. Liu V, Chilton LB. Design guidelines for prompt engineering text-to-image generative
models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Sys-
tems 2022 Apr 29 (pp. 1-23).
9. Hawkins J. A thousand brains: A new theory of intelligence. Basic Books; 2021 March 2.
10. Kotseruba I, Tsotsos JK. 40 years of cognitive architectures: core cognitive abilities and
practical applications. Artificial Intelligence Review. 2020 January;53(1):17-94.
11. Atreides K, Kelley DJ, Masi U. Methodologies and Milestones for the Development of an
Ethical Seed. In Brain-Inspired Cognitive Architectures for Artificial Intelligence: BICA*
AI 2020: Proceedings of the 11th Annual Meeting of the BICA Society 11 2021 (pp. 15-
23). Springer International Publishing.
12. Floridi L. AI as Agency without Intelligence: On ChatGPT, large language models, and
other generative models. Philosophy & Technology. 2023 March;36(1):15.
13. Azzopardi L. Cognitive biases in search: a review and reflection of cognitive biases in In-
formation Retrieval. In Proceedings of the 2021 conference on human information interac-
tion and retrieval 2021 March 14 (pp. 27-37).
14. Rozado D. The Political Biases of ChatGPT. Social Sciences. 2023 March 2;12(3):148.
15. Bommasani R, Hudson DA, Adeli E, Altman R, Arora S, von Arx S, Bernstein MS, Bohg
J, Bosselut A, Brunskill E, Brynjolfsson E. On the opportunities and risks of foundation
models. arXiv preprint arXiv:2108.07258. 2021 August 16.
16. Xiao Y, Wang WY. On hallucination and predictive uncertainty in conditional language
generation. arXiv preprint arXiv:2103.15025. 2021 March 28.
17. Frankfurt HG. On bullshit. Princeton University Press; 2005 December 31.
18. Jo A. The promise and peril of generative AI. Nature. 2023 February 9;614.
19. Baecker C, Alabbadi O, Yogiputra GP, Tien Dung N. Threats provided by artificial intelli-
gence that could disrupt the democratic system.
20. Kahneman D, Sibony O, Sunstein CR. Noise: A Flaw in Human Judgment. Little, Brown Spark; 2021.
21. Fu B, Wang S, Zhang J, Hou Z, Li J. Unravelling the complexity in achieving the 17 sus-
tainable-development goals. National Science Review. 2019 May 1;6(3):386-8.
22. Chow A. How ChatGPT Managed to Grow Faster Than TikTok or Instagram. Time Maga-
zine, February 8, 2023
23. Alston E. New! Try Zapier's ChatGPT plugin. Zapier, March 30, 2023
24. Bechara A, Damasio H, Damasio AR. Emotion, decision making and the orbitofrontal cor-
tex. Cerebral cortex. 2000 Mar 1;10(3):295-307.
25. Kahneman D, Tversky A. Prospect theory: An analysis of decision under risk. In Handbook
of the fundamentals of financial decision making: Part I 2013 (pp. 99-127).
26. Norn.ai, https://norn.ai/wp-content/uploads/2022/10/Norn-Supplemental-Materials-v1.1.pdf, last accessed March 30, 2023
27. Woolley AW, Aggarwal I, Malone TW. Collective intelligence and group performance.
Current Directions in Psychological Science. 2015 December;24(6):420-4.
28. De Cremer D, Kasparov G. AI should augment human intelligence, not replace it. Harvard
Business Review. 2021 March 18;18.
29. Laakasuo M, Herzon V, Perander S, Drosinou M, Sundvall J, Palomäki J, Visala A. Socio-
cognitive biases in folk AI ethics and risk discourse. AI and Ethics. 2021 Novem-
ber;1(4):593-610.
30. Bietti E. From ethics Washing to ethics bashing: a moral philosophy view on tech ethics.
Journal of Social Computing. 2021 September;2(3):266-83.
31. de Freitas Netto SV, Sobral MF, Ribeiro AR, Soares GR. Concepts and forms of green-
washing: A systematic review. Environmental Sciences Europe. 2020 December;32(1):1-2.
32. Atreides K. Philosophy 2.0: Applying Collective Intelligence Systems and Iterative De-
grees of Scientific Validation. FILOZOFIA I NAUKA. 2022:49.
33. Orlowski J. The Social Dilemma. A Netflix Original documentary.
34. Thomas RL, Uminsky D. Reliance on metrics is a fundamental challenge for AI. Patterns.
2022 May 13;3(5):100476.
35. McBride S. Microsoft to buy LinkedIn for $26.2 billion in its largest deal. Reuters. June
13, 2016
36. Weinstein P. Why Microsoft Is Willing to Pay So Much for GitHub. Harvard Business
Review, June 06, 2018
37. Mullainathan S, Thaler RH. Behavioral economics.
38. Glaese A, McAleese N, Trębacz M, Aslanides J, Firoiu V, Ewalds T, Rauh M, Weidinger
L, Chadwick M, Thacker P, Campbell-Gillingham L. Improving alignment of dialogue
agents via targeted human judgements. arXiv preprint arXiv:2209.14375. 2022 September
28.
39. Haidt J, Lukianoff G. The coddling of the American mind: How good intentions and bad
ideas are setting up a generation for failure. Penguin UK; 2018 September 4.
40. Haynes GA. Testing the boundaries of the choice overload phenomenon: The effect of
number of options and time pressure on decision difficulty and satisfaction. Psychology &
Marketing. 2009 Mar;26(3):204-12.
41. Coughlin T., 175 Zettabytes By 2025. Forbes, November 27, 2018
42. Bennett WL, Livingston S, editors. The disinformation age. Cambridge University Press;
2020 Oct 15.
43. Ionos, Google search results: the evolution of the SERPs, November 27, 2022
44. Gusenbauer M. Google Scholar to overshadow them all? Comparing the sizes of 12 aca-
demic search engines and bibliographic databases. Scientometrics. 2019 Jan
15;118(1):177-214.
45. Coulter M., Bensinger G., Alphabet shares dive after Google AI chatbot Bard flubs answer
in ad. Reuters, February 9, 2023
46. Roose K., A Conversation With Bing’s Chatbot Left Me Deeply Unsettled. The New York
Times, February, 16, 2023
47. Armstrong S, Sandberg A, Bostrom N. Thinking inside the box: Controlling and using an
oracle AI. Minds and Machines. 2012 Nov;22:299-324.
48. Kahneman D. Thinking, fast and slow. 2017 December 1.
49. Hattenstone S. Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it
drives us insane’, The Guardian, Thu 23 Mar 2023
The article discusses the recent advancements in artificial intelligence (AI) and the development of large language models (LLMs) such as ChatGPT. The article argues that these LLMs can process texts with extraordinary success and often in a way that is indistinguishable from human output, while lacking any intelligence, understanding or cognitive ability. It also highlights the limitations of these LLMs, such as their brittleness (susceptibility to catastrophic failure), unreliability (false or made-up information), and the occasional inability to make elementary logical inferences or deal with simple mathematics. The article concludes that LLMs, represent a decoupling of agency and intelligence. While extremely powerful and potentially very useful, they should not be relied upon for complex reasoning or crucial information, but could be used to gain a deeper understanding of a text’s content and context, rather than as a replacement for human input. The best author is neither an LLM nor a human being, but a human being using an LLM proficiently and insightfully.