Article

# Biotechnology and the lifetime of technical civilizations


## Abstract

The number of people able to end Earth's technical civilization has heretofore been small. Emerging dual-use technologies, such as biotechnology, may give similar power to thousands or millions of individuals. To quantitatively investigate the ramifications of such a marked shift on the survival of both terrestrial and extraterrestrial technical civilizations, this paper presents a two-parameter model for civilizational lifespans, i.e. the quantity $L$ in Drake's equation for the number of communicating extraterrestrial civilizations. One parameter characterizes the population lethality of a civilization's biotechnology and the other characterizes the civilization's psychosociology. $L$ is demonstrated to be less than the inverse of the product of these two parameters. Using empirical data from PubMed to inform the biotechnology parameter, the model predicts human civilization's median survival time as decades to centuries, even with optimistic psychosociological parameter values, thereby positioning biotechnology as a proximate threat to human civilization. For an ensemble of civilizations having some median calculated survival time, the model predicts that, after 80 times that duration, only one in $10^{24}$ civilizations will survive -- a tempo and degree of winnowing compatible with Hanson's "Great Filter." Thus, assuming that civilizations universally develop advanced biotechnology before they become vigorous interstellar colonizers, the model provides a resolution to the Fermi paradox.
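The winnowing rate quoted above is consistent with memoryless (exponential) survival: an ensemble with median lifetime $T$ retains a fraction $0.5^{t/T}$ after time $t$, so at $t = 80T$ only $0.5^{80} \approx 8 \times 10^{-25}$ of civilizations remain, fewer than one in $10^{24}$. A minimal check, assuming memoryless failure (the paper's two parameters are not modeled here):

```python
def surviving_fraction(elapsed_medians: float) -> float:
    """Fraction of an ensemble still alive after `elapsed_medians`
    median lifetimes, assuming memoryless (exponential) failure."""
    return 0.5 ** elapsed_medians

# At one median lifetime, half the ensemble survives by definition;
# at 80 median lifetimes the survivors number fewer than one in 10^24.
print(surviving_fraction(1))   # 0.5
print(surviving_fraction(80))  # ~8.3e-25
```

The $2^{80} \approx 1.2 \times 10^{24}$ halvings are what make the timescale compatible with a Great Filter: the winnowing is gradual per median lifetime but essentially total over 80 of them.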


... Another study [2] views self-annihilation as the answer to the Fermi Paradox [3]. Many possible causes of our self-annihilation, such as climate change and biotechnology [4], have been studied and modeled. However, due to the obvious lack of data regarding humanity's self-annihilation, assumptions vary widely across different studies and as such it is difficult to compare the effects of certain potential threats originating within our civilization relative to one another. ...
... Given the many ways in which humanity could bring about its own destruction, we first need to know which threats to prioritize. One study [4] highlights the dangers of proliferating and groundbreaking biotechnology, with humanity's survival time predicted to range from decades to centuries. The study concluded that due to the sheer growth in the number of individuals, institutions, and governments accessing biotechnology, this field poses a major threat in the near term. ...
... Furthermore, according to our assumptions, we can determine the probability function P(N). In the following sections, we shall delve deeper into our prediction model and pose an in-depth discussion of the parameters being used. ...
Preprint
Full-text available
Humanity’s path to avoiding extinction is a daunting and inevitable challenge which proves difficult to solve, partially due to the lack of data and evidence surrounding the concept. We aim to address this confusion by identifying the most dangerous threats to humanity, in hopes of providing a direction to approach this problem. Using a probabilistic model, we observed the effects of nuclear war, climate change, asteroid impacts, artificial intelligence and pandemics, which are the disasters most destructive to the length of human survival. We consider the starting point of the predicted average number of survival years as the present calendar year. Nuclear war, when sampling from an artificial normal distribution, results in an average human survival time of 60 years into the future starting from the present, before a civilization-ending disaster. While climate change results in an average human survival time of 193 years, the simulation based on impact from asteroids results in an average of 1754 years. Since the risks from asteroid impacts could be considered in the far future, it can be concluded that nuclear war, climate change, and pandemics are presently the most prominent threats to humanity. Additionally, the danger from superiority of artificial intelligence over humans, although abstract in its sense, is a factor of careful study and could also have wide ranging implications, impeding man’s advancements towards becoming a more advanced civilization.
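The sampling procedure described can be sketched as a Monte Carlo simulation in which each threat contributes a sampled arrival time and civilization ends at the earliest one. The distributions below are hypothetical stand-ins (the preprint's actual parameters are not reproduced here):

```python
import random

# Hypothetical per-threat survival-time distributions (mean, sd in years).
# These are illustrative assumptions, not the preprint's fitted values.
THREATS = {
    "nuclear war":     (60, 20),
    "climate change":  (193, 50),
    "asteroid impact": (1754, 400),
}

def sample_survival(rng: random.Random) -> float:
    """Civilization ends at the earliest of the sampled threat times."""
    return min(max(rng.gauss(mu, sd), 0.0) for mu, sd in THREATS.values())

rng = random.Random(42)
times = [sample_survival(rng) for _ in range(100_000)]
mean_survival = sum(times) / len(times)
```

With these stand-in parameters the mean is dominated by the nuclear-war distribution, echoing the preprint's conclusion that near-term anthropogenic threats, not asteroid impacts, set the expected survival horizon.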
... Additionally, human activity has unsettled the Earth's otherwise highly accommodative environment for supporting life, casting a dark shadow over the prospect of endlessly advancing technological innovation opening up an unlimited future for humanity to spread across space and time. Returning to the Drake Equation, recent modeling suggests that it is the lifespan of civilizations capable of interstellar communication, "L", which is the most influential among its seven variables [5], [10]. Taking this claim as stipulation, it follows immediately that the sub-factors comprising L must be identified and studied in detail if we are to maximize humanity's lifespan. ...
Preprint
Full-text available
Our Universe is a vast, tantalizing enigma - a mystery that has aroused humankind's innate curiosity for eons. Questions about alien lifeforms have thus far gone unanswered, even with the bounding advancements we have made in recent years. Coupled with logical assumptions and calculations such as those made by Dr. Frank Drake starting in the early 1960s, evidence in the millions should exist in our galaxy alone, and yet we've produced no clear affirmation in practice. So, where is everybody? In one sense, the seeming silence of the Universe past terra firma reveals layers of stubborn human limitation. Even as ambitious programs such as SETI aim to solve these knotty challenges, the results have turned up rather pessimistic possibilities. An existential disaster may lie in wait as our society advances exponentially towards space exploration, acting as the Great Filter: a phenomenon that wipes out civilizations before they can encounter each other, which may explain the cosmic silence. In this article, we propose several possible doomsday-type scenarios, including anthropogenic and natural hazards, both of which can be prevented with reforms in individual, institutional and intrinsic behaviors. We take into account multiple calamity candidates: nuclear warfare, pathogens and pandemics, artificial intelligence, asteroid and comet impacts, and climate change. Each of these categories has various influences but lacks critical adjustment to accommodate its high risk. We have long ignored the quickly encroaching Great Filter, even as it threatens to consume us entirely, especially as our rate of progress correlates directly to the severity of our fall. This indicates a necessary period of introspection, followed by appropriate refinements to properly approach our predicament, and see our way through it.
... At the travel velocity of Voyager 2 (17 km s −1 relative to the Sun) it would take 75,000 yr to reach the nearest star, and a billion years to cross the Galaxy. Such long times exceed the lifetime of plausible probe designs, and likely the lifetime, L, of most species (Schenkel 1999;Rubin 2001;Sotos 2019). Thus, for our scheme to remain within some boundaries considered sensible today, we require that probes travel between stars on timescales of perhaps 1000 yr or less. ...
... As one attempt among many, some authors argue that technological civilizations do not exist long enough to leave their home planet [33], [34]. Vakoch sees the reason in the fear of an invasion by aliens [35]. ...
Article
Full-text available
May, A. (2021c): History and future of life on Earth - a synthesis of natural sciences and theology. - Dialogo, vol. 8 (1): p. 233-251; Constanta, Romania. "A synthesis of research results of modern natural sciences and fundamental statements of the Christian faith is attempted. The creation of the universe is addressed. Four important events in the history of the Earth as well as the diversity of living beings are shortly discussed. There are good reasons to believe that the universe was created by a transcendent superior being, which we call God, and that this superior being intervened in evolution and Earth history to promote the development of intelligent life. Furthermore, it can be concluded that intelligent life is very rare in the universe. This is the explanation for the “Fermi paradox”. Intelligent life on planet Earth has cosmic significance. The overabundance of this universe inspires the hope for participating in the fulfilled eternity of the Creator in transcendence. Prehistoric humans had long had hope for life after biological death. While scientific speculation about the end of the universe prophesies scenarios of destruction, the Christian faith says that humanity is destined to be united with Jesus Christ. Furthermore, all evolution will be completed with the Creator in transcendence. Then the whole of creation will “obtain the freedom of the glory of the children of God”. From the first primitive living cell, an abundance of the most diverse living beings has evolved. Comparably, humanity has differentiated into a plethora of different cultures. This entire abundance will find its unification and fulfilment in transcendence with the Creator of the universe, without its diversity being erased."
... "humanity has no experience of contact with civilized extraterrestrials, compared to their potentially high likelihood of existence" [4]. In many cases, such an abuse of advanced technology is motivated by conflicts in societies, but eliminating all conflict is impossible. ...
Article
Full-text available
In a human society with emergent technology, the destructive actions of some pose a danger to the survival of all of humankind, increasing the need to maintain peace by overcoming universal conflicts. However, human society has not yet achieved complete global peacekeeping. Fortunately, a new possibility for peacekeeping among human societies using the appropriate interventions of an advanced system will be available in the near future. To achieve this goal, an artificial intelligence (AI) system must operate continuously and stably (condition 1) and have an intervention method for maintaining peace among human societies based on a common value (condition 2). However, as a premise, it is necessary to have a minimum common value upon which all of human society can agree (condition 3). In this study, an AI system to achieve condition 1 was investigated. This system was designed as a group of distributed intelligent agents (IAs) to ensure robust and rapid operation. Even if common goals are shared among all IAs, each autonomous IA acts on each local value to adapt quickly to each environment that it faces. Thus, conflicts between IAs are inevitable, and this situation sometimes interferes with the achievement of commonly shared goals. Even so, they can maintain peace within their own societies if all the dispersed IAs think that all other IAs aim for socially acceptable goals. However, communication channel problems, comprehension problems, and computational complexity problems are barriers to realization. This problem can be overcome by introducing an appropriate goal-management system in the case of computer-based IAs. Then, an IA society could achieve its goals peacefully, efficiently, and consistently. Therefore, condition 1 will be achievable. In contrast, humans are restricted by their biological nature and tend to interact with others similar to themselves, so the eradication of conflicts is more difficult.
... As early as 1961, a study suggested [50] that the progress of science and technology will inevitably lead to complete destruction and biological degeneration. This is further supported by many previous studies arguing that self-annihilation of humans is highly possible through various scenarios (e.g., [51]), including but not limited to war, climate change [52], and the development of biotechnology [53]. Additionally, studies have also discussed the impact of strong sustainability constraints on the lifetime of civilization [54,55]. ...
Article
Full-text available
In the field of astrobiology, the precise location, prevalence, and age of potential extraterrestrial intelligence (ETI) have not been explicitly explored. Here, we address these inquiries using an empirical galactic simulation model to analyze the spatial–temporal variations and the prevalence of potential ETI within the Galaxy. This model estimates the occurrence of ETI, providing guidance on where to look for intelligent life in the Search for ETI (SETI) with a set of criteria, including well-established astrophysical properties of the Milky Way. Further, typically overlooked factors such as the process of abiogenesis, different evolutionary timescales, and potential self-annihilation are incorporated to explore the growth propensity of ETI. We examine three major parameters: (1) the likelihood rate of abiogenesis (λ_A); (2) evolutionary timescales (T_(evo)); and (3) probability of self-annihilation of complex life (P_(ann)). We found P_(ann) to be the most influential parameter determining the quantity and age of galactic intelligent life. Our model simulation also identified a peak location for ETI at an annular region approximately 4 kpc from the galactic center around 8 billion years (Gyrs), with complex life decreasing temporally and spatially from the peak point, asserting a high likelihood of intelligent life in the galactic inner disk. The simulated age distributions also suggest that most of the intelligent life in our galaxy are young, thus making observation or detection difficult.
... At the travel velocity of Voyager 2 (17 km s −1 relative to the sun) it would take 75,000 yrs to reach the nearest star, and a billion years to cross the galaxy. Such long times exceed the lifetime of plausible probe designs, and likely the lifetime L of most species (Schenkel 1999;Rubin 2001;Sotos 2019). Thus, for our scheme to remain within some boundaries considered sensible today, we require that probes travel between stars on timescales of perhaps 1,000 yrs or less. ...
Preprint
It has recently been suggested in this journal by Benford (2019) that "Lurkers" in the form of interstellar exploration probes could be present in the solar system. Similarly, extraterrestrial intelligence could send long-lived probes to many other stellar systems, to report back science and surveillance. If probes and planets with technological species exist in more than a handful of systems in our galaxy, it is beneficial to use a coordinated communication scheme. Due to the inverse square law, data rates decrease strongly for direct connections over long distances. The network bandwidth could be increased by orders of magnitude if repeater stations (nodes) are used in an optimized fashion. This introduction to a series of papers makes the assumptions of the communication scheme explicit. Subsequent papers will discuss technical aspects such as transmitters, repeaters, wavelengths, and power levels. The overall purpose is to gain insight into the physical characteristics of an interstellar communication network, allowing us to describe the most likely sizes and locations of nodes and probes.
... As early as 1961, Hoerner (1961) suggests that the progress of science and technology will inevitably lead to complete destruction and biological degeneration, similar to the proposal by Sagan and Shklovskii (1966). This is further supported by many previous studies arguing that self-annihilation of humans is highly possible via various scenarios (e.g., Nick, 2002;Webb, 2011), including but not limited to war, climate change (Billings, 2018), and the development of biotechnology (Sotos, 2019). ...
Preprint
Full-text available
In the field of Astrobiology, the precise location, prevalence and age of potential extraterrestrial intelligence (ETI) have not been explicitly explored. Here, we address these inquiries using an empirical galactic simulation model to analyze the spatial-temporal variations and the prevalence of potential ETI within the Galaxy. This model estimates the occurrence of ETI, providing guidance on where to look for intelligent life in the Search for ETI (SETI) with a set of criteria, including well-established astrophysical properties of the Milky Way. Further, typically overlooked factors such as the process of abiogenesis, different evolutionary timescales and potential self-annihilation are incorporated to explore the growth propensity of ETI. We examine three major parameters: 1) the likelihood rate of abiogenesis (λ_A); 2) evolutionary timescales (T_evo); and 3) probability of self-annihilation of complex life (P_ann). We found P_ann to be the most influential parameter determining the quantity and age of galactic intelligent life. Our model simulation also identified a peak location for ETI at an annular region approximately 4 kpc from the Galactic center around 8 billion years (Gyrs), with complex life decreasing temporally and spatially from the peak point, asserting a high likelihood of intelligent life in the galactic inner disk. The simulated age distributions also suggest that most of the intelligent life in our galaxy are young, thus making observation or detection difficult.
Article
The concept of a rapid spread of self-replicating interstellar probes (SRPs) throughout the Milky Way adds considerable strength to Fermi's Paradox. A single civilization creating a single SRP is sufficient for a fleet of SRPs to grow and explore the entire Galaxy on timescales much shorter than the age of the Earth – so why do we see no signs of such probes? One solution to this Paradox suggests that self-replicating probes eventually undergo replication errors and evolve into predator-prey populations, reducing the total number of probes and removing them from our view. I apply Lotka-Volterra models of predator-prey competition to interstellar probes navigating a network of stars in the Galactic Habitable Zone to investigate this scenario. I find that depending on the local growth mode of both populations and the flow of predators/prey between stars, there are many stable solutions with relatively large numbers of prey probes inhabiting the Milky Way. The solutions can exhibit the classic oscillatory pattern of Lotka-Volterra systems, but this depends sensitively on the input parameters. Typically, local and global equilibria are established with prey sometimes outnumbering the predators. Accordingly, we find this solution to Fermi's Paradox does not reduce the probe population sufficiently to be viable.
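The predator-prey dynamics invoked above follow the standard Lotka-Volterra equations, dx/dt = a*x - b*x*y and dy/dt = d*x*y - g*y. A minimal forward-Euler sketch (parameters and initial populations are illustrative assumptions, not those of the cited study):

```python
def lotka_volterra(prey, pred, alpha=1.0, beta=0.1, delta=0.075, gamma=1.5,
                   dt=0.001, steps=10_000):
    """Integrate the Lotka-Volterra predator-prey ODEs with forward Euler."""
    for _ in range(steps):
        dprey = alpha * prey - beta * prey * pred   # prey growth minus predation
        dpred = delta * prey * pred - gamma * pred  # predator growth minus death
        prey += dprey * dt
        pred += dpred * dt
    return prey, pred

# Both populations persist, orbiting the equilibrium (gamma/delta, alpha/beta).
prey, pred = lotka_volterra(10.0, 5.0)
```

The bounded, oscillatory orbits of this system are what underlie the abstract's finding: prey probes can coexist with predators indefinitely rather than being driven to extinction or invisibility.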
Preprint
The Drake equation has been used many times to estimate the number of observable civilizations in the Galaxy. However, the uncertainty of the outcome is so great that any individual result is of limited use, as predictions can range from a handful of observable civilizations in the observable universe to tens of millions per Milky Way-sized galaxy. A statistical investigation shows that the Drake equation, despite its uncertainties, delivers robust predictions of the likelihood that the prevalent form of intelligence in the universe is artificial rather than biological. The likelihood of artificial intelligence far exceeds the likelihood of biological intelligence in all cases investigated. This conclusion is contingent upon a limited number of plausible assumptions. The significance of this outcome in explaining the Fermi paradox is discussed.
Article
In this work we address the problem of estimating the probabilities of causal contacts between civilizations in the Galaxy. We make no assumptions regarding the origin and evolution of intelligent life. We simply assume a network of causally connected nodes. These nodes refer somehow to intelligent agents with the capacity of receiving and emitting electromagnetic signals. Here we present a three-parametric statistical Monte Carlo model of the network in a simplified sketch of the Galaxy. Our goal, using Monte Carlo simulations, is to explore the parameter space and analyse the probabilities of causal contacts. We find that the odds to make a contact over decades of monitoring are low for most models, except for those of a galaxy densely populated with long-standing civilizations. We also find that the probability of causal contacts increases with the lifetime of civilizations more significantly than with the number of active civilizations. We show that the maximum probability of making a contact occurs when a civilization discovers the required communication technology.
Article
Context. Interest in searches for extraterrestrial civilizations (ETCs) has been boosted in recent decades by the discovery of thousands of exoplanets. Aims. We turn to the classification of ETCs for new considerations that may help to design better strategies for searching for ETCs. Methods. This study is based on analogies with our own biological, historical, technological, and scientific development. We took a basic taxonomic approach to ETCs and investigated the implications of the new classification on ETC evolution and observational patterns. Finally, we used the quantitative scheme of Kardashev and considered its implications on the searches for ETCs as a counter example to our qualitative classification. Results. We propose a classification based on the abilities of ETCs to modify and integrate with their environments: Class 0 uses the environment as it is, Class 1 modifies the environment to fit its needs, Class 2 modifies itself to fit the environment, and a Class 3 ETC is fully integrated with the environment. Combined with the classical Kardashev scale, our scheme forms a two-dimensional method for interpreting ETC properties. Conclusions. The new framework makes it obvious that the available energy is not a unique measure of ETC progress: it may not even correlate with how well that energy is used. The possibility for progress without increased energy consumption implies a lower detectability, so in principle the existence of a Kardashev Type III ETC in the Milky Way cannot be ruled out. This reasoning weakens the Fermi paradox, allowing for the existence of advanced, yet not energy hungry, low-detectability ETCs. The integration of ETCs with the environment will make it impossible to tell technosignatures and natural phenomena apart. 
Therefore, the most likely opportunity for SETI searches to find advanced ETCs is to look for beacons specifically set up by them for young civilizations like ours (whether they would want to do that remains a matter of speculation). The other SETI window of opportunity is to search for ETCs at a technological level similar to ours. To rephrase the famous saying of Arthur Clarke, sufficiently advanced civilizations are indistinguishable from nature.
Article
The Drake equation has been used many times to estimate the number of observable civilizations in the galaxy. However, the uncertainty of the outcome is so great that any individual result is of limited use, as predictions can range from a handful of observable civilizations in the observable universe to tens of millions per Milky Way-sized galaxy. A statistical investigation shows that the Drake equation, despite its uncertainties, delivers robust predictions of the likelihood that the prevalent form of intelligence in the universe is artificial rather than biological. The likelihood of artificial intelligence far exceeds the likelihood of biological intelligence in all cases investigated. This conclusion is contingent upon a limited number of plausible assumptions. The significance of this outcome for the Fermi paradox is discussed.
Article
Space colonization is humankind's best bet for long-term survival. This makes the expected moral value of space colonization immense. However, colonizing space also creates risks — risks whose potential harm could easily overshadow all the benefits of humankind's long-term future. In this article, I present a preliminary overview of some major risks of space colonization: Prioritization risks, aberration risks, and conflict risks. Each of these risk types contains risks that can create enormous disvalue; in some cases orders of magnitude more disvalue than all the potential positive value humankind could have. From a (weakly) negative, suffering-focused utilitarian view, we therefore have the obligation to mitigate space colonization-related risks and make space colonization as safe as possible. In order to do so, we need to start working on real-world space colonization governance. Given the near total lack of progress in the domain of space governance in recent decades, however, it is uncertain whether meaningful space colonization governance can be established in the near future, and before it is too late.
Preprint
Full-text available
A foundational model has been developed based on trends built from empirical data of space exploration and computing power through the first six-plus decades of the Space Age which projects earliest possible launch dates for human-crewed missions from cis-lunar space to selected Solar System and interstellar destinations. The model uses computational power, expressed as transistors per microprocessor, as a key broadly limiting factor for deep space missions’ reach and complexity. The goal of this analysis is to provide a projected timeframe for humanity to become a multi-world species through off-world colonization, and in so doing all but guarantee the long-term survival of the human race from natural and human-caused calamities that could befall life on Earth. Beginning with the development and deployment of the first nuclear weapons near the end of World War II, humanity entered a ‘Window of Peril’ which will not be safely closed until robust off-world colonies become a reality. Our findings suggest the first human-crewed missions to land on Mars, selected Asteroid Belt objects, and selected moons of Jupiter and Saturn can occur before the end of the 21st century. Launches of human-crewed interstellar missions to exoplanet destinations within roughly 40 lightyears of the Solar System are seen as possible during the 23rd century and launch of intragalactic missions by the end of the 24th century. An aggressive and sustained space exploration program, which includes colonization, is thus seen as critical to the long-term survival of the human race.
Preprint
Humankind faces many existential threats, but has limited resources to mitigate them. Choosing how and when to deploy those resources is, therefore, a fateful decision. Here, I analyze the priority for allocating resources to mitigate the risk of superintelligences. Part I observes that a superintelligence unconnected to the outside world (de-efferented) carries no threat, and that any threat from a harmful superintelligence derives from the peripheral systems to which it is connected, e.g., nuclear weapons, biotechnology, etc. Because existentially-threatening peripheral systems already exist and are controlled by humans, the initial effects of a superintelligence would merely add to the existing human-derived risk. This additive risk can be quantified and, with specific assumptions, is shown to decrease with the square of the number of humans having the capability to collapse civilization. Part II proposes that biotechnology ranks high in risk among peripheral systems because, according to all indications, many humans already have the technological capability to engineer harmful microbes having pandemic spread. Progress in biomedicine and computing will proliferate this threat. "Savant" software that is not generally superintelligent will underpin much of this progress, thereby becoming the software responsible for the highest and most imminent existential risk -- ahead of hypothetical risk from superintelligences. The analysis concludes that resources should be preferentially applied to mitigating the risk of peripheral systems and savant software. Concerns about superintelligence are at most secondary, and possibly superfluous.
Article
Purpose This paper provides a detailed survey of the greatest dangers facing humanity this century. It argues that there are three broad classes of risks – the “Great Challenges” – that deserve our immediate attention, namely, environmental degradation, which includes climate change and global biodiversity loss; the distribution of unprecedented destructive capabilities across society by dual-use emerging technologies; and value-misaligned algorithms that exceed human-level intelligence in every cognitive domain. After examining each of these challenges, the paper then outlines a handful of additional issues that are relevant to understanding our existential predicament and could complicate attempts to overcome the Great Challenges. The central aim of this paper is to constitute an authoritative resource, insofar as this is possible in a scholarly journal, for scholars who are working on or interested in existential risks. In the author’s view, this is precisely the sort of big-picture analysis that humanity needs more of, if we wish to navigate the obstacle course of existential dangers before us. Design/methodology/approach Comprehensive literature survey that culminates in a novel theoretical framework for thinking about global-scale risks. Findings If humanity wishes to survive and prosper in the coming centuries, then we must overcome three Great Challenges, each of which is sufficient to cause a significant loss of expected value in the future. Originality/value The Great Challenges framework offers a novel scheme that highlights the most pressing global-scale risks to human survival and prosperity. The author argues that the “big-picture” approach of this paper exemplifies the sort of scholarship that humanity needs more of to properly understand the various existential hazards that are unique to the twenty-first century.
Article
Full-text available
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a $10^{30}$ multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.
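The $10^{30}$ multiplier traces to Landauer's bound: irreversibly erasing one bit costs at least $k_B T \ln 2$, so achievable computation per joule scales as $1/T$. A toy calculation (the temperatures below are illustrative assumptions chosen to reproduce the quoted ratio, not the paper's exact values):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def bits_per_joule(temperature_k: float) -> float:
    """Landauer limit: maximum bit erasures per joule at temperature T."""
    return 1.0 / (K_B * temperature_k * math.log(2))

# Present-day CMB temperature vs. a hypothetical far-future ambient
# temperature 30 orders of magnitude colder (assumed for illustration).
T_now, T_future = 3.0, 3e-30
multiplier = bits_per_joule(T_future) / bits_per_joule(T_now)  # ~1e30
```

Because $k_B$ and $\ln 2$ cancel, the multiplier is simply the temperature ratio, which is why waiting for a colder cosmic era can, in principle, buy so much extra computation per unit of stored energy.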
Article
Full-text available
The 1977-1978 influenza epidemic was probably not a natural event, as the genetic sequence of the virus was nearly identical to the sequences of decades-old strains. While there are several hypotheses that could explain its origin, the possibility that the 1977 epidemic resulted from a laboratory accident has recently gained popularity in discussions about the biosafety risks of gain-of-function (GOF) influenza virus research, as an argument for why this research should not be performed. There is now a moratorium in the United States on funding GOF research while the benefits and risks, including the potential for accident, are analyzed. Given the importance of this historical epidemic to ongoing policy debates, we revisit the evidence that the 1977 epidemic was not natural and examine three potential origins: a laboratory accident, a live-vaccine trial escape, or deliberate release as a biological weapon. Based on available evidence, the 1977 strain was indeed too closely matched to decades-old strains to likely be a natural occurrence. While the origin of the outbreak cannot be conclusively determined without additional evidence, there are very plausible alternatives to the laboratory accident hypothesis, diminishing the relevance of the 1977 experience to the modern GOF debate. Copyright © 2015 Rozo and Gronvall.
Article
Full-text available
Highly pathogenic avian influenza A/H5N1 virus can cause morbidity and mortality in humans but thus far has not acquired the ability to be transmitted by aerosol or respiratory droplet ("airborne transmission") between humans. To address the concern that the virus could acquire this ability under natural conditions, we genetically modified A/H5N1 virus by site-directed mutagenesis and subsequent serial passage in ferrets. The genetically modified A/H5N1 virus acquired mutations during passage in ferrets, ultimately becoming airborne transmissible in ferrets. None of the recipient ferrets died after airborne infection with the mutant A/H5N1 viruses. Four amino acid substitutions in the host receptor-binding protein hemagglutinin, and one in the polymerase complex protein basic polymerase 2, were consistently present in airborne-transmitted viruses. The transmissible viruses were sensitive to the antiviral drug oseltamivir and reacted well with antisera raised against H5 influenza vaccine strains. Thus, avian A/H5N1 influenza viruses can acquire the capacity for airborne transmission between mammals without recombination in an intermediate host and therefore constitute a risk for human pandemic influenza.
Article
Full-text available
The Fermi paradox is the discrepancy between the strong likelihood of alien intelligent life emerging (under a wide variety of assumptions) and the absence of any visible evidence for such emergence. In this paper, we extend the Fermi paradox not only to life in this galaxy but to other galaxies as well. We do this by demonstrating that travelling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilisation, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to. Given certain technological assumptions, such as improved automation, the tasks of constructing Dyson spheres, designing replicating probes, and launching them at distant galaxies become quite feasible. We extensively analyse the dynamics of such a project, including issues of deceleration and collision with particles in space. Using similar methods, we show that civilisations in millions of galaxies could have reached us by now. This results in a considerable sharpening of the Fermi paradox.
Article
Full-text available
In the cycle of existence of all natural systems (possessing an excess of free energy with respect to the environment – stars, living organisms, social systems, etc.), four universal stages can be distinguished: growth, internal development, stationary state, and ageing (Kompanichenko, Futures, 1994, 26/5). In the context of this approach, human civilization, which originated 10,000 years ago, is going through the natural cycle of its development. An analogy is drawn between two active systems at different levels of hierarchy: a human (constructed of 60 trillion autonomous living systems – cells) and the human community (consisting of 6 billion autonomous living systems – people). In the cycle of the existence of a human, each of the four stages accounts for roughly 25% of the lifespan: growth 0-18 years, internal development and reaching maturity 18-36 years, stationary state 36-54 years, ageing 54-72 years. Humankind is now approaching the limits of its growth and thus corresponds to a person aged 16-17. We can therefore assume that during the 10,000 years of its existence human civilization has passed about 25% of its cycle of development, leaving roughly 30,000 years of normative existence ahead (middle estimate). Actual existence could range between 300 years and 3 million years, depending on the reasonable, conscious approach of humankind to its future. According to the Drake equation, with the normative estimate L = 30,000 years, there should exist at least several thousand intelligent civilizations in our Galaxy.
Article
Full-text available
We extend the concept that life is an informational phenomenon, at every level of organisation, from molecules to the global ecological system. According to this thesis: (a) living is information processing, in which memory is maintained by both molecular states and ecological states as well as the more obvious nucleic acid coding; (b) this information processing has one overall function: to perpetuate itself; and (c) the processing method is filtration (cognition) of, and synthesis of, information at lower levels to appear at higher levels in complex systems (emergence). We show how information patterns are united by the creation of mutual context, generating persistent consequences, to result in 'functional information'. This constructive process forms arbitrarily large complexes of information, the combined effects of which include the functions of life. Molecules and simple organisms have already been measured in terms of functional information content; we show how quantification may be extended to each level of organisation up to the ecological. In terms of a computer analogy, life is both the data and the program, and its biochemical structure is the way the information is embodied. This idea supports the seamless integration of life at all scales with the physical universe. The innovation reported here is essentially to integrate these ideas, basing information on the 'general definition' of information rather than simply the statistics of information, thereby explaining how functional information operates throughout life.
Article
Full-text available
We present the full target list and prioritization algorithm developed for use by the microwave search for technological signals at the SETI Institute. We have included the Catalog of Nearby Habitable Stellar Systems (HabCat, described in Paper I), all of the nearest 100 stars and 14 old open clusters. This is further augmented by a subset of the Tycho-2 catalog based on reduced proper motions, and this larger catalog should routinely provide at least three target stars within the large primary field of view of the Allen Telescope Array. The algorithm for prioritizing objects in the full target list includes scoring based on the subset category of each target (i.e., HabCat, cluster, Tycho-2, or nearest 100), its distance (if known), and its proximity to the Sun on the color-magnitude diagram.
Article
Full-text available
Highly pathogenic avian H5N1 influenza A viruses occasionally infect humans, but currently do not transmit efficiently among humans. The viral haemagglutinin (HA) protein is a known host-range determinant as it mediates virus binding to host-specific cellular receptors. Here we assess the molecular changes in HA that would allow a virus possessing subtype H5 HA to be transmissible among mammals. We identified a reassortant H5 HA/H1N1 virus-comprising H5 HA (from an H5N1 virus) with four mutations and the remaining seven gene segments from a 2009 pandemic H1N1 virus-that was capable of droplet transmission in a ferret model. The transmissible H5 reassortant virus preferentially recognized human-type receptors, replicated efficiently in ferrets, caused lung lesions and weight loss, but was not highly pathogenic and did not cause mortality. These results indicate that H5 HA can convert to an HA that supports efficient viral transmission in mammals; however, we do not know whether the four mutations in the H5 HA identified here would render a wholly avian H5N1 virus transmissible. The genetic origin of the remaining seven viral gene segments may also critically contribute to transmissibility in mammals. Nevertheless, as H5N1 viruses continue to evolve and infect humans, receptor-binding variants of H5N1 viruses with pandemic potential, including avian-human reassortant viruses as tested here, may emerge. Our findings emphasize the need to prepare for potential pandemics caused by influenza viruses possessing H5 HA, and will help individuals conducting surveillance in regions with circulating H5N1 viruses to recognize key residues that predict the pandemic potential of isolates, which will inform the development, production and distribution of effective countermeasures.
Article
Full-text available
SETI is a comparatively new branch of scientific research that began only in 1959. The goal of SETI is to ascertain whether alien civilizations exist in the universe, how far from us they exist, and possibly how much more advanced than us they may be.
Article
Full-text available
Most known extrasolar planets (exoplanets) have been discovered using the radial velocity or transit methods. Both are biased towards planets that are relatively close to their parent stars, and studies find that around 17-30% (refs 4, 5) of solar-like stars host a planet. Gravitational microlensing, on the other hand, probes planets that are further away from their stars. Recently, a population of planets that are unbound or very far from their stars was discovered by microlensing. These planets are at least as numerous as the stars in the Milky Way. Here we report a statistical analysis of microlensing data (gathered in 2002-07) that reveals the fraction of bound planets 0.5-10 AU (Sun-Earth distance) from their stars. We find that $17^{+6}_{-9}\%$ of stars host Jupiter-mass planets (0.3-10 $M_J$, where $M_J = 318\,M_\oplus$ and $M_\oplus$ is Earth's mass). Cool Neptunes (10-30 $M_\oplus$) and super-Earths (5-10 $M_\oplus$) are even more common: their respective abundances per star are $52^{+22}_{-29}\%$ and $62^{+35}_{-37}\%$. We conclude that stars are orbited by planets as a rule, rather than the exception.
Article
Full-text available
In 1977, H1N1 influenza A virus reappeared after a 20-year absence. Genetic analysis indicated that this strain was missing decades of nucleotide sequence evolution, suggesting an accidental release of a frozen laboratory strain into the general population. Recently, this strain and its descendants were included in an analysis attempting to date the origin of pandemic influenza virus without accounting for the missing decades of evolution. Here, we investigated the effect of using viral isolates with biologically unrealistic sampling dates on estimates of divergence dates. Not accounting for missing sequence evolution produced biased results and increased the variance of date estimates of the most recent common ancestor of the re-emergent lineages and across the entire phylogeny. Reanalysis of the H1N1 sequences excluding isolates with unrealistic sampling dates indicates that the 1977 re-emergent lineage was circulating for approximately one year before detection, making it difficult to determine the geographic source of reintroduction. We suggest that a new method is needed to account for viral isolates with unrealistic sampling dates.
Article
Full-text available
Genetic resistance to clinical mousepox (ectromelia virus) varies among inbred laboratory mice and is characterized by an effective natural killer (NK) response and the early onset of a strong CD8+ cytotoxic T-lymphocyte (CTL) response in resistant mice. We have investigated the influence of virus-expressed mouse interleukin-4 (IL-4) on the cell-mediated response during infection. It was observed that expression of IL-4 by a thymidine kinase-positive ectromelia virus suppressed cytolytic responses of NK and CTL and the expression of gamma interferon by the latter. Genetically resistant mice infected with the IL-4-expressing virus developed symptoms of acute mousepox accompanied by high mortality, similar to the disease seen when genetically sensitive mice are infected with the virulent Moscow strain. Strikingly, infection of recently immunized genetically resistant mice with the virus expressing IL-4 also resulted in significant mortality due to fulminant mousepox. These data therefore suggest that virus-encoded IL-4 not only suppresses primary antiviral cell-mediated immune responses but also can inhibit the expression of immune memory responses.
Article
Full-text available
The search for extraterrestrial intelligence (SETI) has been heavily influenced by solutions to the Drake Equation, which returns an integer value for the number of communicating civilisations resident in the Milky Way, and by the Fermi Paradox, glibly stated as: "If they are there, where are they?". Both rely on using average values of key parameters, such as the mean signal lifetime of a communicating civilisation. A more accurate answer must take into account the distribution of stellar, planetary and biological attributes in the galaxy, as well as the stochastic nature of evolution itself. This paper outlines a method of Monte Carlo realisation which does this, and hence allows an estimation of the distribution of key parameters in SETI, as well as allowing a quantification of their errors (and the level of ignorance therein). Furthermore, it provides a means for competing theories of life and intelligence to be compared quantitatively.
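The Monte Carlo realisation method described above can be illustrated with a toy sketch: draw each Drake-equation factor from an assumed distribution and examine the resulting distribution of $N$ rather than a single point estimate. The ranges below are placeholder assumptions for illustration, not the cited paper's distributions.

```python
import random

# Toy Monte Carlo realisation of the Drake equation.
# N = R* . fp . ne . fl . fi . fc . L  (all ranges are assumed placeholders)
def sample_N(rng):
    R_star = rng.uniform(1.0, 3.0)    # star formation rate [stars/yr]
    f_p    = rng.uniform(0.2, 1.0)    # fraction of stars with planets
    n_e    = rng.uniform(0.5, 2.0)    # habitable planets per planetary system
    f_l    = rng.uniform(0.0, 1.0)    # fraction of those developing life
    f_i    = rng.uniform(0.0, 1.0)    # fraction developing intelligence
    f_c    = rng.uniform(0.0, 1.0)    # fraction that communicate
    L      = 10 ** rng.uniform(2, 6)  # signal lifetime [yr], log-uniform
    return R_star * f_p * n_e * f_l * f_i * f_c * L

rng = random.Random(42)  # fixed seed for reproducibility
samples = sorted(sample_N(rng) for _ in range(100_000))
median = samples[len(samples) // 2]
print(f"median N over realisations: {median:.1f}")
```

The payoff of this approach is the full spread of `samples` (percentiles, tails), which quantifies the error bars that a single plugged-in estimate of $N$ hides.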
Book
In this compelling book, leading scientists and historians explore the Drake Equation, which guides modern astrobiology's search for life beyond Earth. First used in 1961 as the organising framework for a conference in Green Bank, West Virginia, it uses seven factors to estimate the number of extraterrestrial civilisations in our galaxy. Using the equation primarily as a heuristic device, this engaging text examines the astronomical, biological, and cultural factors that determine the abundance or rarity of life beyond Earth and provides a thematic history of the search for extraterrestrial life. Logically structured to analyse each of the factors in turn, and offering commentary and critique of the equation as a whole, contemporary astrobiological research is placed in a historical context. Each factor is explored over two chapters, discussing the pre-conference thinking and a modern analysis, to enable postgraduates and researchers to better assess the assumptions that guide their research.
Article
In this paper we address the cosmic frequency of technological species. Recent advances in exoplanet studies provide strong constraints on all astrophysical terms in the Drake Equation. Using these and modifying the form and intent of the Drake equation we show that we can set a firm lower bound on the probability that one or more additional technological species have evolved anywhere and at any time in the history of the observable Universe. We find that as long as the probability that a habitable zone planet develops a technological species is larger than ~$10^{-24}$, then humanity is not the only time technological intelligence has evolved. This constraint has important scientific and philosophical consequences.
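Taken at face value, the quoted bound is just the reciprocal of the number of habitable-zone planets in the history of the observable Universe: if there are $N$ such planets, humanity is unlikely to be unique whenever the per-planet probability of a technological species exceeds ~$1/N$. The planet count below is an assumed round number chosen to reproduce the abstract's ~$10^{-24}$, not a figure taken from the paper.

```python
# Illustrative inversion of the quoted bound (assumed round numbers,
# not the paper's exact inputs).
n_hz_planets = 1e24               # assumed habitable-zone planet count
p_threshold = 1.0 / n_hz_planets  # per-planet probability above which
                                  # humanity is likely not unique
print(f"threshold ~ {p_threshold:.0e}")
```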
Article
The waiting time for answers may be greater than the longevity of the technical state of mind.
Article
We proffer a contemporary solution to the so-called Fermi Paradox, which is concerned with conflict between Copernicanism and the apparent paucity of evidence for intelligent alien civilizations. In particular, we argue that every community of organisms that reaches its space-faring age will (1) almost immediately use its rocket-building computers to reverse-engineer its genetic chemistry and (2) self-destruct when some individual uses said technology to design an omnicidal pathogen. We discuss some of the possible approaches to prevention with regard to Homo sapiens' vulnerability to bioterrorism, particularly on a short-term basis.
Book
A global catastrophic risk is one with the potential to wreak death and destruction on a global scale. In human history, wars and plagues have done so on more than one occasion, and misguided ideologies and totalitarian regimes have darkened an entire era or a region. Advances in technology are adding dangers of a new kind. It could happen again. In Global Catastrophic Risks 25 leading experts look at the gravest risks facing humanity in the 21st century, including asteroid impacts, gamma-ray bursts, Earth-based natural catastrophes, nuclear war, terrorism, global warming, biological weapons, totalitarianism, advanced nanotechnology, general artificial intelligence, and social collapse. The book also addresses over-arching issues - policy responses and methods for predicting and managing catastrophes. This is invaluable reading for anyone interested in the big issues of our time; for students focusing on science, society, technology, and public policy; and for academics, policy-makers, and professionals working in these acutely important fields.
Article
The L factor in the Drake equation is widely understood to account for most of the variance in estimates of the number of extraterrestrial intelligences that might be contacted by the search for extraterrestrial intelligence (SETI). It is also among the hardest to quantify. An examination of discussions of the L factor in the popular and technical SETI literature suggests that attempts to estimate L involve a variety of potentially conflicting assumptions about civilizational lifespan that reflect hopes and fears about the human future.
Article
Monte Carlo calculations of the expansion of space-faring civilizations are presented for a wide range of values of the population growth coefficient (α) and emigration coefficient (γ). Even for the very low values proposed by Newman and Sagan (α = $10^{-4}$ per year; γ = $10^{-8}$ per year) the migration wavefront expands at $1.4 \times 10^{-5}$ pc per year. Even with this low expansion velocity, such a civilization would fill the Galaxy in about $10^9$ years. Filling times of the order of 60 million years seem probable. The wavefront velocity is approximated by $v \approx (\bar{r}/\bar{d})\,v_s$, where $\bar{r}$ is the average radial distance traveled, $\bar{d}$ the average distance traveled, and $v_s$ the ship speed. This approximation was derived by Newman.
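As a quick consistency check on the quoted figures (a back-of-envelope calculation, not the paper's simulation), dividing a galactic-scale distance by the quoted wavefront speed of $1.4 \times 10^{-5}$ pc per year gives a fill time of order a billion years. The galactic radius below is an assumed round number.

```python
# Fill-time consistency check (galactic radius is an assumed round number).
wavefront_speed = 1.4e-5    # pc per year, from the abstract
galactic_radius = 15_000.0  # pc, assumed order-of-magnitude galactic scale
fill_time = galactic_radius / wavefront_speed  # ~1.07e9 years
print(f"fill time ~ {fill_time:.2e} years")
```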
Article
Attempts to explain away the Fermi Paradox often led to pessimistic extrapolations of the Human Analogy. It has been argued that most civilisations, reaching a technological level, are likely to destroy themselves and that therefore the factor L in Drake's equation should be a short period of time. The shortness of ETI's lifespan would help to explain why we haven't been detected and why we still haven't discovered evidence of ETI's existence. This paper rejects this pessimism and holds that our human society is slowly but successfully coping with our crisis syndromes. Underlying tendencies are moving humankind towards a higher peaceful organisational order. Self-extinction is ruled out. The proper interpretation of the Human Analogy argument therefore suggests that advanced ETI also will have progressed towards a superior political and ethical order. Its longevity is likely to be indefinite. Contact with a benign ETI would be highly beneficial for us. On the other hand, since we study primitive micro-organisms and molecules in space, it is only plausible to suppose that ETI would be interested in our species and civilisation.
Article
The singularity vs. the plurality of inhabited worlds in the universe is debated. Attention is given to astrophysical constraints on the evolution of intelligent species and to motivations for interstellar communication and exploration. It is argued that it is plausible that there is only one inhabited planet in the universe.
Article
For over 40 years the formalism known as the Drake equation has helped guide speculation about the likelihood of intelligent extraterrestrial life contacting us. Since the equation was formulated there have been significant advances in astronomy and astrophysics, sufficient to merit a review of the significance of the Drake equation. The equation itself is a series of terms which, when combined, allow an informed discussion of the likelihood of contact with an alien intelligence. However, whilst it has a mathematical form (i.e. a series of terms multiplied together to give an overall probability) it is best understood not as an equation in the strictly mathematical sense. Some of the terms have a physically quantifiable, numerically based meaning (e.g. obtainable from astronomy) and some are more social in content in that they describe the behaviour and evolution of societies and thus are more social science in nature and not truly estimable without observation of a set of societies. Initially, almost all the terms had to be estimated based on informed guesswork or belief. However, in the intervening period since the early 1960s, many of the a priori scientific terms which were themselves initially so uncertain as to require estimation by guesswork or belief are now, or will soon be, directly measurable from current or planned astronomical projects. This leaves the non-scientific terms as a distinct class of their own, still subject to analysis only by discussion. Thus observational astronomy has nearly caught up with parts of the Drake equation and will soon quantify the purely physical science parts of the equation. The social parts (concerning intelligent societies, etc.) are still a priori unknowable. In addition, the growth of the subject called astrobiology (i.e. the study of life in the Universe) has developed so fast that communicating with intelligent life is now increasingly seen as just one small part of a much larger discipline.
The knowledge as to whether there is life per se (apart from on Earth) in our galactic neighbourhood may be obtainable in the near future directly from observation. Such knowledge will have a profound impact on mankind and will be obtained without the form of communication envisaged by the Drake equation.
Article
A new paradigm is needed for industrial civilization, because neither the traditional theory of exponential industrial growth nor the more recent steady-state hypothesis can satisfactorily explain historical data. As a basis for the paradigm, the long sweep of human history is divided into three phases: (1) pre-industrial, (2) industrial, and (3) de-industrial. This essay focuses on the second, or industrial, phase. The paradigm is embodied in four theories. The first theory states that industrial civilization can be graphed over time by energy-use per person in the shape of a single pulse waveform. The second theory is derived from a well-established principle of human ecology. It defines a set of necessary conditions for the advance, stagnation and decline of industrial civilization in terms of world total energy-use and world total population. Next, the subject of governing is analyzed in terms of ten requirements for system control. The third theory is derived from this analysis. It relates the size, or complexity, of a society over time to the average energy-use per person in that society. Historical population and energy-use data and other considerations are used as the basis for the fourth theory. This, a predictive theory, states that the life-expectancy of industrial civilization is less than 100 years.
Article
This paper introduces “computer viruses” and examines their potential for causing widespread damage to computer systems. Basic theoretical results are presented, and the infeasibility of viral defense in large classes of systems is shown. Defensive schemes are presented and several experiments are described.
Article
Emergence of resistance is a major concern in influenza antiviral treatment and prophylaxis. Combination antiviral therapy might overcome this problem. Here, we estimate that all possible single mutants and a sizeable fraction of double mutants are generated during an uncomplicated influenza infection. While most of them may sustain a fitness cost, some variants may confer drug resistance and be selected during therapy. We argue that a triple combination regimen would markedly reduce the risk of antiviral resistance emergence in seasonal and pandemic influenza viruses, especially in seriously ill or immunocompromised hosts.
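The claim that every single mutant, but only a sizeable fraction of double mutants, arises during an infection can be made plausible with back-of-envelope numbers. The mutation rate and genome yield below are illustrative assumptions, not the cited paper's fitted values.

```python
# Expected number of viral genomes carrying specific mutations during one
# infection (parameter values are illustrative assumptions, not the paper's).
mu = 2e-5          # per-site mutation rate per genome replication (assumed)
n_genomes = 1e10   # total genomes produced over the infection (assumed)

single_mutant = mu * n_genomes       # copies carrying one given point mutation
double_mutant = mu ** 2 * n_genomes  # copies carrying a given pair of mutations

# ~2e5 copies of any given single mutant (so all single mutants arise),
# but only a handful of any particular double mutant.
print(single_mutant, double_mutant)
```

With three drugs, a resistant virus would need three specific mutations at once, which under these assumptions has expectation far below one per infection; that is the intuition behind the proposed triple combination regimen.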
Article
Twelve key events leading up to the emergence of the current pandemic swine-origin influenza A (H1N1) virus are reviewed.
Article
The probability is analyzed that intelligent civilizations capable of interstellar communication exist in the galaxy. Drake's (1960) equation for the prevalence of communicative civilization is used in the calculations, and attempts are made to place limits on the search range that must be covered to contact other civilizations, the longevity of the communicative phase of such civilizations, and the possible number of two-way exchanges between civilizations in contact with each other. The minimum estimates indicate that some 100,000 civilizations probably coexist within several tens of astronomical units of each other and that some 1,000,000 probably coexist within 10 light years of each other. Attempts to detect coherent signals characteristic of intelligent life are briefly noted, including Projects Ozma and Cyclops as well as some Soviet attempts. Recently proposed American and Soviet programs for interstellar communication are outlined.
Article
If extraterrestrial intelligent beings exist and have reached a high level of technical development, one by-product of their energy metabolism is likely to be the large-scale conversion of starlight into far-infrared radiation. It is proposed that a search for sources of infrared radiation should accompany the recently initiated search for interstellar radio communications.
Discussion of Space Science Board, National Academy of Sciences Conference on Extraterrestrial Intelligent Life
• F D Drake
F. D. Drake, "Discussion of Space Science Board, National Academy of Sciences Conference on Extraterrestrial Intelligent Life," November 1961. Green Bank, West Virginia.
The life-expectancy of industrial civilization
• R C Duncan
R. C. Duncan, "The life-expectancy of industrial civilization," in System Dynamics '91: Proceedings of the 1991 International System Dynamics Conference, Bangkok, Thailand, August 27 through 30, 1991, pp. 173-181, 1991.
Average lifetime of an intelligent civilization estimated on its global cycle
• V N Kompanichenko
V. N. Kompanichenko, "Average lifetime of an intelligent civilization estimated on its global cycle," in Bioastronomy 99: A New Era in the Search for Life, vol. 213 of Astronomical Society of the Pacific Conference Series, pp. 437-440, 2000.
Plagues and Peoples. Garden City, NY: Anchor
• W H Mcneill
W. H. McNeill, Plagues and Peoples. Garden City, NY: Anchor, 1976. Pages 94, 113ff, 152, 180, 186.
mpmath: a Python library for arbitrary-precision floating-point arithmetic (version 0.19)
• F Johansson
F. Johansson et al., mpmath: a Python library for arbitrary-precision floating-point arithmetic (version 0.19), June 2014. http://mpmath.org/.
Mushroom: The Story of the A-bomb Kid
• J A Phillips
J. A. Phillips, Mushroom: The Story of the A-bomb Kid. New York: Morrow, 1978.