Chapter

The totalitarian threat

Authors:
Bryan Caplan

Abstract

During the twentieth century, many nations – including Russia, Germany, and China – lived under extraordinarily brutal and oppressive governments. Over 100 million civilians died at the hands of these governments, but only a small fraction of their brutality and oppression was necessary to retain power. The main function of the brutality and oppression, rather, was to radically change human behaviour: to transform normal human beings, with their selfish concerns, into willing servants of their rulers. The goals and methods of these governments were so extreme that they were often described – by friend and foe alike – as ‘total’ or ‘totalitarian’ (Gregor, 2000).

The connection between totalitarian goals and totalitarian methods is straightforward. People do not want to radically change their behaviour. Making them change requires credible threats of harsh punishment – and the main way to make such threats credible is to carry them out on a massive scale. Furthermore, even if people believe your threats, some will resist anyway or seem likely to foment resistance later on. Indeed, some are simply unable to change: an aristocrat cannot choose to have proletarian origins, or a Jew to be an Aryan. Handling these recalcitrant cases requires special prisons to isolate dangerous elements, or mass murder to eliminate them.

Totalitarian regimes have many structural characteristics in common. Richard Pipes gives a standard inventory: ‘[A]n official all-embracing ideology; a single party of the elect headed by a “leader” and dominating the state; police terror; the ruling party’s control of the means of communication and the armed forces; central command of the economy’. All of these naturally flow from the goal of remaking human nature. The official ideology is the rationale for radical change; it must be ‘all-embracing’ – that is, suppress competing ideologies and values – to prevent people from being side-tracked by conflicting goals. The leader is needed to create and interpret the official ideology, and control of the means of communication to disseminate it. The party comprises the ‘early adopters’ – the people who claim to have ‘seen the light’ and want to make it a reality.

... Longtermists sometimes worry about totalitarian risk: a totalitarian government locking in its power over the long term. Bryan Caplan, for example, argues that future technological developments (such as AI-driven surveillance and life extension) might allow dictators to cement their power for a very long time (Caplan 2011). Alternatively, an Artificial General Intelligence might become all (or very) powerful, such that the goals or values it receives at inception might get locked in long-term (MacAskill 2022: ch. ...
... For example, Porter and Gibbons (2022) consider how an appreciation of the longtermist priority of mitigating extinction (or other catastrophic) risks might lead parties behind a Rawlsian veil of ignorance to endorse different principles of justice than Rawls himself derived. However, some worry that centralizing political authority raises totalitarian risk (Caplan 2011). ... (1790) provided a conservative political argument for concern for the future, which has been taken up by Ord (2020: 49-52). Hans ...
Chapter
Full-text available
We set out longtermist political philosophy as a research field by exploring the case for, and the implications of, 'institutional longtermism': the view that, when evaluating institutions, we should give significant weight to their very long-term effects. We begin by arguing that the standard case for longtermism may be more robust when applied to institutions than to individual actions or policies, both because institutions have large, broad, and long-term effects, and because institutional longtermism can plausibly sidestep various objections to individual longtermism. We then address points of contact between longtermism and some central values of mainstream political philosophy, focusing on justice, equality, freedom, legitimacy, and democracy. While each value initially seems to conflict with institutional longtermism, we find that these conflicts are less clear-cut upon closer inspection, and that some political values might even provide independent support for institutional longtermism. We end with a grab bag of related questions that we lack space to explore here.
... For relevant discussion, see Ord (2020: 167) and Bostrom and Ćirković (2008); see also Caplan (2008). Naturally, we are and should be interested in mitigating risks of all sorts. ...
Article
Full-text available
Rawls famously argues that the parties in the original position would agree upon the two principles of justice. Among other things, these principles guarantee equal political liberty—that is, democracy—as a requirement of justice. We argue on the contrary that the parties have reason to reject this requirement. As we show, by Rawls’ own lights, the parties would be greatly concerned to mitigate existential risk. But it is doubtful whether democracy always minimizes such risk. Indeed, no one currently knows which political systems would. Consequently, the parties—and we ourselves—have reason to reject democracy as a requirement of justice in favor of political experimentalism, a general approach to political justice which rules in at least some non-democratic political systems which might minimize existential risk.
... These are scenarios where aligned AGI is used by its operator(s) to establish stable, "sustainable", totalitarianism [20], with the AGI operator in charge as a dictator (or with a group of dictators). Aligned AGI could help such a dictator to eliminate threats and limits to their grip on power that today's AI systems do not yet allow authoritarian rulers to eliminate [21]. ...
Article
Full-text available
A transition to a world with artificial general intelligence (AGI) may occur within the next few decades. This transition may give rise to catastrophic risks from misaligned AGI, which have received a significant amount of attention, deservedly. Here I argue that AGI systems that are intent-aligned—they always try to do what their operators want them to do—would also create catastrophic risks, mainly due to the power that they concentrate on their operators. With time, that power would almost certainly be catastrophically exploited, potentially resulting in human extinction or permanent dystopia. I suggest that liberal democracies, if they decide to allow the development of AGI, may react to this threat by letting AGI take shape as an intergenerational social project, resulting in an arrangement where AGI is not intent-aligned but symbiotic with humans. I provide some tentative ideas on what the resulting arrangement may look like and consider what speaks for and what against aiming for intent-aligned AGI as an intermediate step.
Article
Studies of existential dangers to humanity have so far been rare. The present publication attempts to outline the basic contours of this research field. To this end, methodological problems in engaging with existential dangers are discussed and the perspectives of neighbouring disciplines are drawn upon. A preliminary inventory comprises 20 existential dangers, including sudden-onset catastrophes and those that unfold through spirals of decline, as well as rather speculative dangers and those with a broad empirical basis. A set of criteria for evaluating existential dangers is proposed, based on avoidability, extent of damage, probability, and uncertainty. The criteria of avoidability and uncertainty are identified as central and are tested in a survey of a selected circle of experts. As a result, five types of existential dangers can be distinguished: they range from anthropogenic dangers with high to medium avoidability and medium to high uncertainty, through ecological existential dangers and geo-dangers, to two types of cosmic dangers with very low avoidability and, in the case of Type II, a highly speculative character. Since the normative imperative is to rule out any existential danger to humanity, a purely probabilistic approach is inadmissible. Ranking existential dangers by urgency is necessary but not sufficient. All of the existential dangers call for substantial further research as well as political action.
Article
Full-text available
Rapid advancements in genome editing, particularly with CRISPR-Cas9, have brought long-promised medical breakthroughs to fruition, but have also accelerated ethically fraught applications. To develop adequate ethical safeguards and effective governance, many endorse public engagement as an essential part of the response. This paper tests that confidence by examining the public engagement approach with regard to an emerging existential risk that the rapid development of genome editing, combined with similarly rapidly growing socio-political polarization, poses to liberal democracy. While this argument echoes Maxwell Mehlman's specter of a genetically enhanced "genobility" destroying the basis of liberal democracy, I outline how this new concern is more plausible, more immediate, and possibly far more intractable a problem than the one Mehlman was considering. The problem is exacerbated when we consider how the perception of genome editing's potential, rather than its actual capabilities, may be affected by, and in turn may worsen, this rising socio-political polarization. Given the confidence in the positive role of public engagement with respect to the technology involved here, I evaluate its effectiveness, arguing that certain forms of engagement may inadvertently make things worse, whereas stronger deliberative approaches hold promise but face significant, potentially insurmountable, barriers, at least for now.
Article
This article argues that conceptualization through the long-term view strengthens the case for education for deliberative democracy. This is due to two key factors. First, education for deliberative democracy has novel potential in helping curb the negative effects of political polarization, which, when analyzed through longtermism, can be identified as an important existential risk factor. Second, education for deliberative democracy enables societies to defuse the threat of a value lock-in, and in doing so to keep their cognitive space open to enable increased flexibility in dealing with new challenges that will arise in the future. Consequently, this article further argues that education for deliberative democracy as an education initiative can be normatively justified but acknowledges that there are still theoretical and practical hurdles to overcome, and thus calls for more research into developing a mature, pedagogically sound program of education for deliberative democracy.
Preprint
Full-text available
Future pandemics could arise from several sources, notably Emerging Infectious Diseases (EID) and lab leaks from High Containment Biological Laboratories (HCBL). Recent advances in infectious disease research, information technology, and biotechnology provide building blocks to reduce pandemic risk if deployed intelligently. However, the global nature of infectious diseases, the distribution of HCBLs, and the increasing complexity of transmission dynamics due to travel networks make it difficult to determine how best to deploy mitigation efforts. A better understanding of the risk landscape posed by EID and HCBL lab leaks could improve risk reduction efforts. This paper develops a country-level spatial network Susceptible-Infected-Removed (SIR) model, based on global travel network data and relative risk measures of potential origin sources (EID and lab leaks from Biological Safety Level 3+ and 4 labs), to explore expected infections over the first 30 days of a pandemic. Model outputs indicate that, for both EID and lab leaks, India, the US, and China are most impacted at day 30. For EID, expected infections shift from countries with high EID origin potential at day 10 to the US, India, and China, while for lab leaks the US and India start with high lab leak potential. Subject to model uncertainties and limitations, the results indicate that several large, wealthy countries are influential in pandemic risk from both EID and lab leaks, marking them as high-leverage points for mitigation efforts.
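The country-level network SIR approach described in this abstract can be illustrated with a minimal sketch. The country list, travel matrix, seeding, and rate parameters below are invented placeholders rather than the paper's data, and the travel step only moves infected individuals; this is a toy illustration of the modelling idea, not the authors' implementation.

import numpy as np

# Minimal sketch of a country-level network SIR model (discrete time).
# Countries, travel matrix, seeding, and rates are illustrative placeholders.
countries = ["A", "B", "C"]
N = np.array([1.0e8, 3.0e7, 5.0e7])      # population sizes
travel = np.array([                       # daily travellers from row country to column country
    [0.0, 2.0e4, 1.0e4],
    [1.0e4, 0.0, 5.0e3],
    [2.0e4, 5.0e3, 0.0],
])
beta, gamma = 0.3, 0.1                    # transmission and recovery rates per day

S, I, R = N.copy(), np.zeros(3), np.zeros(3)
I[0], S[0] = 100.0, S[0] - 100.0          # seed an outbreak in country A

for day in range(30):
    # Local SIR dynamics within each country
    new_inf = beta * S * I / N
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

    # Travel step: infected travellers move between countries in proportion
    # to prevalence (movement of susceptible/recovered people is ignored for brevity)
    frac_out = travel / N[:, None]        # fraction of country i travelling to country j each day
    I_moved = frac_out * I[:, None]       # infected travellers from i to j
    I = I - I_moved.sum(axis=1) + I_moved.sum(axis=0)

# Expected infections at day 30 in each country
print({c: round(float(i)) for c, i in zip(countries, I)})

In the paper's setting, the travel matrix would come from global travel network data and the seeding would be weighted by relative risk measures of EID origin potential or lab-leak potential, rather than fixed by hand as in this sketch.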
Article
The notion of existential catastrophe is increasingly appealed to in discussion of risk management around emerging technologies, but it is not completely clear what this notion amounts to. Here, I provide an opinionated survey of the space of plausibly useful definitions of existential catastrophe. Inter alia, I discuss: whether to define existential catastrophe in ex post or ex ante terms, whether an ex ante definition should be in terms of loss of expected value or loss of potential, and what kind of probabilities should be involved in any appeal to expected value.
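As a purely illustrative gloss, in our own notation rather than the article's, the "loss of expected value" reading of an ex ante definition can be rendered roughly as:

\[
e \text{ is an existential catastrophe} \iff \mathbb{E}[V \mid e] \le (1 - k)\,\mathbb{E}[V],
\]

where V is the total value of humanity's long-run future, k is some large threshold fraction, and the expectation is taken with respect to some specified kind of probability (for example, objective chance or evidential probability). Which probabilities to use, and whether "loss of potential" should replace loss of expected value, are precisely the choice points the survey examines.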
Article
Full-text available
Mass-casualty terrorism and terrorism involving unconventional weapons have received extensive academic and policy attention, yet few academics have considered the broader question of whether such behaviours could pose a plausible risk to humanity’s survival or continued flourishing. Despite several terrorist and other violent non-state actors having evinced an interest in causing existential harm to humanity, their ambition has historically vastly outweighed their capability. Nonetheless, three pathways to existential harm exist: existential attack, existential spoilers and systemic harm. Each pathway varies in its risk dynamics considerably. Although an existential attack is plausible, it would require extraordinary levels of terrorist capability. Conversely, modest terrorist capabilities might be sufficient to spoil risk mitigation measures or cause systemic harm, but such actions would only result in existential harm under highly contingent circumstances. Overall, we conclude that the likelihood of terrorism causing existential harm is extremely low, at least in the near to medium term, but it is theoretically possible for terrorists to intentionally destroy humanity.
Chapter
This chapter considers how totalitarianism might become an existential threat to humanity in two different ways: by contributing to another risk, such as a Great Power Conflict, or by increasing vulnerability to other risks, such as climate change. The possibility of a "lock-in" of a specific set of values by a totalitarian government is also considered a contributor to the overall level of existential risk. The possible future use of emerging technologies in propaganda and surveillance is identified as a factor likely to increase the risks posed by totalitarianism. The chapter then discusses tensions between the need for coordinated global responses to threats to humanity and the possibility that the same structures could enable totalitarian regimes. It concludes with a discussion of critical questions for future research on totalitarianism, peace, and conflict.
Keywords: Totalitarianism; Authoritarianism; Propaganda; Surveillance; Global response; Key questions
Article
Full-text available
This paper relates evidence from the COVID‐19 pandemic to the concept of pandemic refuges, as developed in literature on global catastrophic risk. In this literature, a refuge is a place or facility designed to keep a portion of the population alive during extreme global catastrophes. COVID‐19 is not the most extreme pandemic scenario, but it is nonetheless a very severe global event, and it therefore provides an important source of evidence. Through the first 2 years of the COVID‐19 pandemic, several political jurisdictions have achieved low spread of COVID‐19 via isolation from the rest of the world and can therefore classify as pandemic refuges. Their suppression and elimination of COVID‐19 demonstrates the viability of pandemic refuges as a risk management measure. Whereas prior research emphasizes island nations as pandemic refuges, this paper uses case studies of China and Western Australia to show that other types of jurisdictions can also successfully function as pandemic refuges. The paper also refines the concept of pandemic refuges and discusses implications for future pandemics.
Article
Full-text available
There is a rapidly developing literature on risks that threaten the whole of humanity, or a large part of it. Discussion is increasingly turning to how such risks can be governed. This paper arises from a study of those involved in the governance of risks from emerging technologies, examining perceptions of global catastrophic risk within the relevant global policymaking community. Those who took part were either civil servants working for the UK government, the U.S. Congress, the United Nations, and the European Commission, or cognate members of civil society groups and the private sector. Analysis of the interviews identified four major themes: Scepticism; Realism; Influence; and Governance outside of Government. These themes provide evidence for the value of conceptualising the governance of global catastrophic risk as a unified challenge. Furthermore, they highlight the range of agents involved in the governance of emerging technology and give reason to value reforms carried out sub-nationally.
Chapter
There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed goals or motivation systems. Oracle AIs (OAI), confined AIs that can only answer questions, are one particular approach to this problem. However, even Oracles are not particularly safe: humans are still vulnerable to traps, social engineering, or simply becoming dependent on the OAI. But OAIs are still strictly safer than general AIs, and there are many extra layers of precaution we can add on top of them. This paper looks at some of these and analyses their strengths and weaknesses.
Chapter
In this chapter we review the evidence for and against three claims: that (1) there is a substantial chance we will create human-level AI before 2100, that (2) if human-level AI is created, there is a good chance vastly superhuman AI will follow via an “intelligence explosion,” and that (3) an uncontrolled intelligence explosion could destroy everything we value, but a controlled intelligence explosion would benefit humanity enormously if we can achieve it. We conclude with recommendations for increasing the odds of a controlled intelligence explosion relative to an uncontrolled intelligence explosion.
Article
Full-text available
There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in the world except by answering questions. Even this narrow approach presents considerable challenges. In this paper, we analyse and critique various methods of controlling the AI. In general, an Oracle AI might be safer than an unrestricted AI, but it still remains potentially dangerous.