Article

Cosmological natural selection as the explanation for the complexity of the universe

Abstract

A critical review is given of the theory of cosmological natural selection. The successes of the theory are described, and a number of published criticisms are answered. An observational test is described which could falsify the theory.


... HTUM offers a comprehensive model that redefines our understanding of the universe's structure and dynamics by addressing these challenges and providing a detailed mathematical framework [144,145]. The model's ability to integrate various aspects of cosmology, quantum mechanics, and higher-dimensional interactions makes it a promising candidate for further exploration and refinement [146,147]. ...
... By continuing to investigate the singularity and its role in HTUM, scientists can gain new insights into the nature of reality, the interconnectedness of all matter and energy, and the fundamental principles that govern the universe [146]. This research could revolutionize our understanding of the cosmos and our place within it. ...
... This boundary is where the deterministic laws of classical physics meet the probabilistic nature of quantum mechanics [250]. The event horizon is not static; it is a dynamic, evolving interface that reflects the continuous transformation and interconnectedness of the universe [146]. ...
Preprint
Full-text available
The Hyper-Torus Universe Model (HTUM) is a novel framework that unifies quantum mechanics, cosmology, and consciousness, proposing that the universe is a higher-dimensional hyper-torus containing all possible states of existence. This paper explores the fundamental concepts and implications of HTUM, which suggests that the universe is a quantum system in which all possible outcomes are inherently connected, with consciousness playing a crucial role in actualizing reality. HTUM addresses critical challenges in modern physics, such as the nature of quantum entanglement, the origin of the universe, and the relationship between mind and matter. By introducing concepts like singularity, quantum entanglement at a cosmic scale, and the self-actualization of the universe, HTUM provides a comprehensive framework for understanding the fundamental nature of reality. This paper discusses the mathematical formulation of HTUM, its implications for quantum mechanics and cosmology, and its potential to bridge the gap between science and philosophy. HTUM represents a significant shift in our understanding of the universe and our place within it, inviting further research and exploration into the nature of reality and consciousness.
... By continuing to investigate the singularity and its role in the HTUM, scientists can gain new insights into the nature of reality, the interconnectedness of all matter and energy, and the fundamental principles that govern the universe [93]. This research could revolutionize our understanding of the cosmos and our place within it. ...
... This boundary is where the deterministic laws of classical physics meet the probabilistic nature of quantum mechanics [98]. The event horizon is not static; it is a dynamic, evolving interface that reflects the continuous transformation and interconnectedness of the universe [93]. ...
... • Unified Framework: By integrating the principles of the HTUM, we can develop a more comprehensive framework that unifies general relativity and quantum mechanics [93]. This could lead to a deeper understanding of the nature of event horizons and the behavior of black holes. ...
Preprint
Full-text available
The Hyper-Torus Universe Model (HTUM) is a novel framework that unifies quantum mechanics, cosmology, and consciousness, proposing that the universe is a higher-dimensional hyper-torus containing all possible states of existence. This paper explores the fundamental concepts and implications of the HTUM, which suggests that the universe is a quantum system in which all possible outcomes are inherently connected, with consciousness playing a crucial role in actualizing reality. The HTUM addresses critical challenges in modern physics, such as the nature of quantum entanglement, the origin of the universe, and the relationship between mind and matter. By introducing concepts like singularity, quantum entanglement at a cosmic scale, and the self-actualization of the universe, the HTUM provides a comprehensive framework for understanding the fundamental nature of reality. This paper discusses the mathematical formulation of the HTUM, its implications for quantum mechanics and cosmology, and its potential to bridge the gap between science and philosophy. The philosophical implications of the HTUM are also examined, including its impact on free will, determinism, and the mind-matter relationship. The HTUM represents a significant shift in our understanding of the universe and our place within it, inviting further research and exploration into the nature of reality and consciousness.
... Our large, long-lived universe with a hierarchy of complexity from the sub-atomic to the galactic is the result of particular values of these parameters. Physical theories do not offer an explanation of these parameters [2][3][4]. The masses and charges of elementary particles are free in the standard model, and many solutions to the equations of string theory appear valid [5][6][7]. ...
... Smolin introduced the idea that a process of natural selection at the cosmological scale selected the values of the physical parameters specifying our universe [2,17-19], thereby providing a causal mechanism for the observed fine-tuning. Smolin's theory builds on the established idea that singularities produced by stars in one universe inflate to become offspring universes. ...
... Critics of Smolin's hypothesis argue that the parameters do not appear to maximize reproduction through stars [24][25][26][27] (see also Smolin's counter arguments [2]). Following Smolin's work, others have adopted the perspective that our physical parameters result from an evolutionary process, but conjecture instead that the process has selected for life rather than stars [27][28][29][30][31][32][33][34]. ...
Article
Full-text available
If the parameters defining the physics of our universe departed from their present values, the observed rich structure and complexity would not be supported. This article considers whether similar fine-tuning of parameters applies to technology. The anthropic principle is one means of explaining the observed values of the parameters. This principle constrains physical theories to allow for our existence, yet the principle does not apply to the existence of technology. Cosmological natural selection has been proposed as an alternative to anthropic reasoning. Within this framework, fine-tuning results from selection of universes capable of prolific reproduction. It was originally proposed that reproduction occurs through singularities resulting from supernovae, and subsequently argued that life may facilitate the production of the singularities that become offspring universes. Here I argue technology is necessary for production of singularities by living beings, and ask whether the physics of our universe has been selected to simultaneously enable stars, intelligent life, and technology capable of creating progeny. Specific technologies appear implausibly equipped to perform tasks necessary for production of singularities, potentially indicating fine-tuning through cosmological natural selection. These technologies include silicon electronics, superconductors, and the cryogenic infrastructure enabled by the thermodynamic properties of liquid helium. Numerical studies are proposed to determine regions of physical parameter space in which the constraints of stars, life, and technology are simultaneously satisfied. If this overlapping parameter range is small, we should be surprised that physics allows technology to exist alongside us. The tests do not call for new astrophysical or cosmological observations. Only computer simulations of well-understood condensed matter systems are required.
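The numerical studies proposed in this abstract reduce, in the simplest caricature, to estimating the measure of an overlap region in parameter space. Below is a minimal Monte Carlo sketch of that idea; the 2-D parameter space and all three constraint windows are entirely hypothetical stand-ins invented for illustration, not values from the paper, whose real constraints would come from condensed-matter and astrophysical simulation.

```python
import random

# Hypothetical constraint windows over a 2-D stand-in (a, b) for
# physical parameter space.  All numbers are illustrative only.
def stars_ok(a, b):
    return 0.2 < a < 0.8                      # toy stellar-stability window

def life_ok(a, b):
    return 0.3 < a < 0.7 and b < 0.6          # toy habitability window

def tech_ok(a, b):
    return 0.35 < a < 0.65 and 0.2 < b < 0.5  # toy technology window

def estimate_overlap(n=100_000, seed=0):
    """Monte Carlo estimate of the fraction of parameter space in
    which all three toy constraints hold simultaneously."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        a, b = rng.random(), rng.random()
        if stars_ok(a, b) and life_ok(a, b) and tech_ok(a, b):
            hits += 1
    return hits / n

if __name__ == "__main__":
    print(f"overlap fraction ~ {estimate_overlap():.3f}")
```

If the estimated overlap fraction turns out to be small relative to the individually allowed windows, the coincidence the article asks about becomes a quantifiable surprise rather than a qualitative one.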
... In the VCRIS model, physical and informational processes that change unpredictably in successive replication cycles, to generate, maintain, and manage Variety, are in fundamental tension and opposition with physical and informational processes that change predictably in successive replication cycles, and thus generate, maintain, and manage Convergence. V and C are the first two terms in the VCRIS model, as these two oppositional processes are proposed as "root perspectives" in any model of physical and informational change in autopoietic systems, including our universe itself, if it is a replicating and adaptive system, as various theorists have proposed (Smolin 1992, 1997, 2004; Vaas 1998; Vidal 2010; Price 2017). Standard evolutionary theory offers no model of this fundamental opposition, of the inheritance and tension between two classes of informational-physical initiating parameters (evo and devo) at every scale at which replication occurs, including gene, epigene, organism, group, niche, environment, and universe. ...
Chapter
Full-text available
This paper offers a general systems definition of the phrase “evolutionary development” and an introduction to its application to autopoietic (self-reproducing) complex systems, including the universe as a system. Evolutionary development, evo devo or ED, is a term that can be used by philosophers, scientists, historians, and others as a replacement for the more general term “evolution,” whenever a scholar thinks experimental, selectionist, stochastic, and contingent or “evolutionary” processes, and also convergent, statistically deterministic (probabilistically predictable), or “developmental” processes, including replication, may be simultaneously contributing to selection and adaptation in any complex system, including the universe as a system. Like living systems, our universe broadly exhibits both contingent and deterministic components, in all historical epochs and at all levels of scale. It has a definite birth and it is inevitably senescing toward heat death. The idea that we live in an “evo devo universe,” one that has self-organized over past replications both to generate multilocal evolutionary variation (experimental diversity) and to convergently develop and pass to future generations selected aspects of its accumulated complexity (“intelligence”), is an obvious hypothesis. Yet today, few cosmologists or physicists, even among theorists of universal replication and the multiverse, have entertained the hypothesis that our universe may be both evolving and developing (engaging in both unpredictable experimentation and goal-driven, teleological, directional change and a replicative life cycle), as in living systems. Our models of universal replication, like Lee Smolin’s cosmological natural selection (CNS), do not yet use the concept of universal development, or refer to development literature.
I will argue that some variety of evo devo universe models must emerge in coming years, including models of CNS with Intelligence (CNSI), which explore the ways emergent intelligence can be expected to constrain and direct “natural” selection, as it does in living systems. Evo devo models are one of several early approaches to an Extended Evolutionary Synthesis (EES), one that explores adaptation in both living and nonliving replicators. They have much to offer as a general approach to adaptive complexity, and may be required to understand several important phenomena under current research, including galaxy formation, the origin of life, the fine-tuned universe hypothesis, possible Earthlike and life fecundity in astrobiology, convergent evolution, the future of artificial intelligence, and our own apparent history of unreasonably smooth and resilient acceleration of both total and “leading edge” adapted complexity and intelligence growth, even under frequent and occasionally extreme past catastrophic selection events. If they are to become better validated in living systems and in nonliving adaptive replicators, including stars, prebiotic chemistry, and the universe as a system, they will require both better simulation capacity and advances in a variety of theories, which I shall briefly review.
... That is, black hole structure or geometry may be a subset of the cosmic geometry. This idea may be worth pursuing [21,22]. ...
... From the recent research collected above, it is possible to say that the universe may have been born inside a black hole, and that the black holes in our own cosmos might be birthing new universes of their own. Based on the cosmological natural selection (CNS) scheme, black holes may represent the primordial mechanism responsible for the observed cosmic reproduction within a multiverse [21,22]. With reference to the widely accepted big bang, in the universe there is no centre, no preferred direction, and no rotation. ...
... From relations (22,41,42), Boltzmann's constant and Wien's displacement constant can be interrelated with the elementary charge in the following way. ...
Article
Considering 'black hole geometry' as the 'eternal cosmic geometry' and assuming 'constant light speed rotation' throughout the cosmic evolution, at any time the currently believed cosmic 'critical density' can be shown to be the cosmic black hole's eternal 'volume density'. Thinking in this way and based on Mach's principle, the 'distant cosmic background' can be quantified in terms of 'Hubble volume' and 'Hubble mass'. To proceed further, the observed cosmic redshift can be reinterpreted as an index of a 'cosmological' light emission mechanism. By considering the characteristic mass unit
... For example, if there is only one universe, then the weak anthropic principle explains why it is complex given that it is observed, but does not explain why it is complex and observed rather than simple and unobserved (see Ref. [11] for a detailed review). Consequently, an alternative explanation has been proposed by Smolin [12][13][14]: the fundamental constants of nature might have been literally fine-tuned, by a process of cosmological natural selection (CNS). Specifically, Smolin [12][13][14] suggests that there is a population of universes, the "multiverse", in which individual universes vary in their fundamental constants, and give birth to offspring universes via the formation of black holes, with some fidelity of transmission of fundamental constants between parent and offspring. ...
... Consequently, an alternative explanation has been proposed by Smolin [12][13][14]: the fundamental constants of nature might have been literally fine-tuned, by a process of cosmological natural selection (CNS). Specifically, Smolin [12][13][14] suggests that there is a population of universes, the "multiverse", in which individual universes vary in their fundamental constants, and give birth to offspring universes via the formation of black holes, with some fidelity of transmission of fundamental constants between parent and offspring. Thus, those universes that are more likely to form black holes leave more descendant universes than their counterparts, resulting in successive generations of universes being better adapted for black-hole formation. ...
... This idea relies on several important assumptions, all of which are controversial. First, it is key to the ideas of Smolin [12][13][14] that the endpoint of black-hole formation is actually a new universe, rather than simply a quantum-mechanical state that will decay over time and ultimately disappear through Hawking radiation. In the context of the AdS/CFT correspondence (a surprising isomorphism between d-dimensional gauge theory and (d+1)-dimensional gravity [15]), the formation and subsequent decay of a black hole occurs as a regular and unitary procedure within the dual field theory. ...
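The selection dynamics described in these excerpts (heritable constants, reproduction via black holes, differential descendant counts) can be caricatured in a few lines of code. The toy model below is mine, not Smolin's: each "universe" carries one heritable constant, its fecundity (expected black-hole count) peaks at a hypothetical optimum of 0.5, and offspring inherit the constant with small mutations, so the population mean drifts toward the fecundity peak over generations.

```python
import random

# Toy cosmological-natural-selection sketch: one heritable "constant"
# x per universe, with reproductive output (black-hole count) peaking
# at a hypothetical optimum x* = 0.5.  Entirely illustrative numbers.

def fecundity(x):
    # Quadratic peak at x = 0.5; zero outside the viable range.
    return max(0.0, 1.0 - 4.0 * (x - 0.5) ** 2)

def evolve(pop, generations, rng, mutation_sd=0.02):
    for _ in range(generations):
        # Parents are sampled in proportion to their fecundity.
        weights = [fecundity(x) for x in pop]
        parents = rng.choices(pop, weights=weights, k=len(pop))
        # Offspring inherit the parental constant with a small mutation.
        pop = [min(1.0, max(0.0, x + rng.gauss(0.0, mutation_sd)))
               for x in parents]
    return pop

rng = random.Random(1)
population = [rng.random() for _ in range(500)]  # initial random constants
population = evolve(population, 50, rng)
mean_x = sum(population) / len(population)
print(f"mean constant after selection: {mean_x:.2f}")
```

After a few dozen generations the population mean sits near the fecundity peak, which is the sense in which later generations appear "better adapted for black-hole formation" in the excerpt above.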
Article
The cosmological natural selection (CNS) hypothesis holds that the fundamental constants of nature have been fine-tuned by an evolutionary process in which universes produce daughter universes via the formation of black holes. Here, we formulate the CNS hypothesis using standard mathematical tools of evolutionary biology. Specifically, we capture the dynamics of CNS using Price's equation, and we capture the adaptive purpose of the universe using an optimization program. We establish mathematical correspondences between the dynamics and optimization formalisms, confirming that CNS acts according to a formal design objective, with successive generations of universes appearing designed to produce black holes. © 2013 Wiley Periodicals, Inc. Complexity 18: 48–56, 2013
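For reference, the Price equation invoked in this abstract takes the following standard textbook form (with z_i the heritable trait value, here a fundamental constant of universe i, and w_i its reproductive output; this is the generic form, not the paper's specific notation):

```latex
\Delta \bar{z} \;=\;
\underbrace{\frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}}}_{\text{selection}}
\;+\;
\underbrace{\frac{\operatorname{E}\!\left(w_i \, \Delta z_i\right)}{\bar{w}}}_{\text{transmission bias}}
```

The first term captures selection (universes whose constants covary with black-hole output shift the mean), and the second captures transmission bias (systematic change of constants between parent and offspring universes).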
... The predictions are these: (a) In an alternative view to the 'anthropic principle' in the eternal inflation scenario on the landscape of the multiverse, Smolin proposes the theory of 'cosmological natural selection (CNS)' [5], which offers as one of its clear-cut falsifiable predictions: "Find a neutron star whose mass appreciably exceeds M_max^BB. Then it will count against the CNS scenario" [5,6]. This prediction follows from the requirement of the theory that the number of black holes in this universe be maximized without changing the parameters of the Standard Model. ...
... Here, while accepting a rich landscape of the kind string theory suggests, the CNS scenario differs from the anthropic principle with eternal inflation (which states that the universe is eternally inflating, endlessly spawning 'pocket universes') in that it relies on the maximized formation of black holes. Among the variety of conditions for forming many black holes, one concerns us here, namely that the upper mass limit of neutron stars be as low as possible [6]. We are not in a position, nor is it our objective, to address the basic questions associated with the scenario, such as how black hole singularities bounce and reproduce stupendous numbers of new universes, how to resolve the information loss in the reproduction, how the size of a black hole relates to the number or size of universes, and whether the CNS is an alternative or superior to the anthropic principle with eternal inflation. The dispute, with pros and cons on both sides, will rage on [8], and on it we have nothing to say. ...
Preprint
It is argued that a well measured double neutron star binary in which the two neutron stars are more than 4% different from each other in mass or a massive neutron star with mass M > 2 M_sun would put in serious doubt or simply falsify the following chain of predictions: (1) nearly vanishing vector meson mass at chiral restoration, (2) kaon condensation at a density n ~ 3 n_0, (3) the Brown-Bethe maximum neutron star mass M_max ~ 1.5 M_sun and (4) Smolin's `Cosmological Natural Selection' hypothesis.
... Falsification tests for spacetime cosmology are few because of the difficulty of obtaining observational results. To his credit, Smolin (2004) provided a falsification test for his Cosmological Natural Selection (CNS) Theory. Muller (2016) also took a stance and predicted that LIGO data would support his own particular 'now' cosmological theory. ...
... The CNS theory makes a few falsifiable predictions. One is that neutron stars can be no heavier than twice the solar mass (Smolin, 2004). Important for our purposes is that their theory also implies that the experiential phenomenon of the lower level FOT (including motion) is 'real' (personal communication, 2016). ...
Article
One difficulty with spacetime theories is the absence of falsification tests. Because the observer is part of quantum mechanics (QM) and therefore an obligatory part of cosmological theories that incorporate QM, the neurological sciences have a legitimate role. We examined six different QM-based cosmological theories. They carry implications about various aspects of the flow of time (FOT; the past/present/future experience, including subjective dynamism). First, a distinction is made between the FOT and the physical passage of time (POT), the occurrence of a new physical event at the periphery of an expanding universe. Four theories suggesting a real POT are the Evolving Block Universe, the Spacetime Dynamics Theory, Causal Set Cosmology, and Muller's cosmology. However, they retain the past/present components of the Block Universe and imply that the upper-level FOT (the experience that the 'present' is unique) is illusory. A falsification experiment is proposed utilizing Hartle's proposed Information Gathering and Utilizing Systems (IGUSes), which have different experiences of the 'present'. A virtual-reality IGUS which can experientially navigate between past and present is suggested. It could confirm their implication that the experience of past/present events is not unique (i.e., it is a variable cognition). Barbour's timeless theory involves relative configuration spaces which contain enough information to provide an illusory dynamic scene. The experience of dynamism within that scene is said to be an illusion. A second falsification experiment is proposed to remove dynamism, suggesting it is an irrelevant experience. It involves intermittent stimulation of the newly discovered consciousness center (the claustrum) to create brief intervals of unconsciousness. Doing this precludes the recently discovered 'happening' percept, which can account for the entirety of dynamism (the lower-level FOT).
The negative implication for Temporal Naturalism in the Cosmological Natural Selection theory is briefly discussed.
... More generally, Charles Darwin's fundamental idea (before and independently of his elaborated doctrine) that biological reality is permanently in natural movement, in a flow of renewal, accompanied by accidental mutations, some of which lead to radical improvement of species, has in its popularity finally eliminated from the scientific horizons all "theological" interest à la Johannes Kepler (Wolfenstein, 2003) in Why?, Thanks to what?, For what purpose?, and Who? (Kepler, 1619). Thus, for example, the only ambition of Optimality Theory (Prince & Smolensky, 1993/2002/2004) was, and remains, to introduce and investigate some natural constraints on the linguistic flow of languages, the flow supposed to bring our languages from speechless vocality or manual nothing to their modern splendor. ...
... Design cannot precede evolution and therefore cannot underlie the universe. (p. 5) And many, many, too many have tried to be faithful to this condemnation of design creativity to work out accidentally, as it were: (1) biology (Dawkins, 1986), cosmology (Smolin, 2004), behavioral psychology (Crawford & Krebs, 1998), and linguistics (Pinker & Bloom, 1990); (2) all progress of the sciences at large (Weinberg, 2001); and, even more radically, (3) all intellectual endeavors and failures of humanity (Dennett, 2006), if not (4) the very existence in, and ultimately of, the Universe (Dawkins, 2005). ...
Article
Full-text available
One of the most natural approaches to the problem of the origins of natural languages is the study of hidden intelligent "communications" emanating from their historical forms. The history of the Semitic languages is especially meaningful in this sense. One discovers, in particular, that BH (Biblical Hebrew), the best preserved fossil of the Semitic protolanguage, is primarily a verbal language, with an average verse of the Hebrew Bible containing no less than three verbs and with the greater part of its vocabulary representing morphological derivations from verbal roots, almost entirely triliteral - a feature BH shares with all Semitic and a few other Afro-Asiatic languages. For classical linguists, more than a hundred years ago, it was surprising to discover that the verbal system of BH is, as we say today, optimal from the point of view of Information Theory and that its formal topological morphology is semantically meaningful. These and other basic features of BH reflect, in our opinion, the original design of the Semitic protolanguage and suggest the indispensability of IIH - the Inspirational Intelligence Hypothesis, our main topic - for understanding the origins of natural languages. Our project is of a vertical nature with respect to time, in contrast to the horizontal linguistic approaches that vastly dominate today. Language is one of the hallmarks of the human species - an important part of what makes us human. Yet, despite a staggering growth in our scientific knowledge about the origin of life, the universe and (almost) everything else that we have seen fit to ponder, we know comparatively little about how our unique ability for language originated and evolved into the complex linguistic systems we use today. Why might this be?
... The predictions are these: (a) In an alternative view to the 'anthropic principle' in the eternal inflation scenario on the landscape of the multiverse, Smolin proposes the theory of 'cosmological natural selection (CNS)' [5], which makes, as one of its clear-cut falsifiable predictions: "Find a neutron star whose mass appreciably exceeds M_max^BB. Then it will count against the CNS scenario" [5, 6]. This prediction follows from the requirement of the theory that the number of black holes in this universe be maximized without changing the parameters of the Standard Model. ...
... Here, while accepting a rich landscape of the kind string theory suggests, the CNS scenario differs from the anthropic principle with eternal inflation – which states that the universe is eternally inflating, endlessly spawning 'pocket universes' – in that it relies on the maximized formation of black holes. Among the variety of conditions for forming many black holes, one concerns us here: that the upper mass limit of neutron stars be as low as possible [6]. We are not in a position, nor is it our objective, to address the basic questions associated with the scenario, such as how black hole singularities bounce and reproduce stupendous numbers of new universes, how to resolve the information loss in the reproduction, how the size of a black hole relates to the number or size of universes, and whether CNS is an alternative or superior to the anthropic principle with eternal inflation. The dispute, with pros and cons for both, will rage on [8], on which we have nothing to say. ...
Article
Full-text available
It is argued that a well-measured double neutron-star binary in which the two neutron stars are more than 4% different from each other in mass, or a massive neutron star with mass M ≳ 2 M⊙, would put in serious doubt or simply falsify the following chain of predictions: (1) a nearly vanishing vector meson mass at chiral restoration, (2) kaon condensation at a density n ≈ 3n₀, (3) the Brown-Bethe maximum neutron-star mass M_max ≈ 1.5 M⊙, and (4) Smolin's "cosmological natural selection" hypothesis.
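The falsification criteria in this abstract reduce to simple arithmetic on the two measured masses. A minimal sketch of that test, with thresholds taken from the abstract; the function name and the choice of the lighter star as denominator for the 4% comparison are our own illustrative assumptions:

```python
# Hedged sketch of the falsification test described in the abstract.
# Thresholds come from the text; helper names are illustrative only.

BROWN_BETHE_MMAX = 1.5     # Brown-Bethe maximum neutron-star mass (solar masses)
FALSIFYING_MASS = 2.0      # a star at ~2 solar masses or above would falsify the chain
MASS_DIFF_FRACTION = 0.04  # >4% mass difference within a double neutron-star binary

def challenges_prediction_chain(m1, m2):
    """Return True if a well-measured binary (masses in solar masses)
    would put the chain of predictions (1)-(4) in serious doubt."""
    heavy_star = max(m1, m2) >= FALSIFYING_MASS
    big_difference = abs(m1 - m2) / min(m1, m2) > MASS_DIFF_FRACTION
    return heavy_star or big_difference

# Masses like those of the double pulsar PSR J0737-3039 (~1.34 and ~1.25
# solar masses) differ by roughly 7%, so by this criterion they would
# already challenge the chain:
print(challenges_prediction_chain(1.34, 1.25))  # True (difference ~7% > 4%)
print(challenges_prediction_chain(1.35, 1.33))  # False
```

Note that the abstract leaves the denominator of the "4% different" comparison unspecified; using the heavier star instead changes the boundary only slightly.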
... A necessary feature of the hierarchy is the acceptance that selection and evolution do act at levels above those of genealogical units, the highest of which are species, without the necessity of replication, but with the possibilities of persistence, "re-production" and recurrence. It is difficult to speculate how far the principle of selection extends, but there is at least the proposal that fine-tuned fundamental parameters of the Standard Model are the result of selection, as in the hypothesis of Cosmological Natural Selection (Smolin, 2004). Emergent variation, system selection, and evolution likely represent universal principles and historical mechanisms that not only explain temporal change and stability on multiple timescales but, through multi-level feedbacks, unite multiple levels of material organization and complexity. ...
Article
Full-text available
The Phanerozoic fossil record can be organized as a nested set of persistent paleoecological units, ranging from paleocommunities to Sepkoski’s Evolutionary Faunas. This paper argues that the basis for ecological persistence on geological timescales is rooted in the robustness of ecological communities, that is, the resistance and resilience of communities when perturbed by the environment. Here I present the Ecological Functional Networks Hypothesis (EFNH) that proposes that networks of species functions, or Ecological Functional Networks (EFNs), underlie ecological stasis and persistence, and that EFNs are both subject to selection and evolve. An EFN varies if the species composition and hence functional structures of its constituent communities vary, and EFNs may differ from each other based on the robustness of those constituent communities, numerical representation, and biogeographic distribution. That variation is subject to selection acting on EFN community composition, and determines both the persistence of an EFN and the differential persistence among multiple EFNs. Selection pressures on EFNs in turn exert top-down influence on species evolution and extinction. Evidence is presented to both establish the reality of EFNs in the fossil record, for example, community structures that persist even as species composition changes, and the selection of EFNs, which is apparent during and after episodes of severe biotic turnover such as mass extinctions. Finally, tests are suggested that make the EFNH falsifiable, including testing the correlation between EFNs or EFN emergent traits and geological persistence, and using models of paleocommunity dynamics to examine the relationship between community or EFN robustness and geological persistence. The tests should be applied broadly throughout the Phanerozoic and diverse environments. 
The EFNH is part of a growing body of hypotheses that address the selection, evolution and persistence of non-reproducing systems, including ecosystems and entire biospheres, and addresses those concepts on geological timescales.
... Selection at higher levels feeds back to lower levels, including species, influencing their evolutionary trajectories and leading to differential system survival and greater persistence [Bouchard, 2008]. The nested hierarchy extends upward both materially and conceptually, because the organization, selection and evolution of systems are extendable to entire planetary systems, as in the Gaia hypothesis [Doolittle, 2014; Lenton et al., 2018], and ultimately to fine-tuned fundamental parameters of the Standard Model, as in the hypothesis of Cosmological Natural Selection [Smolin, 2004]. Emergent variation, system selection, and evolution likely represent a universal principle and historical mechanism that not only explains temporal change and stability on multiple timescales but, through multi-level feedbacks, unites multiple levels of material organization and complexity. ...
Preprint
The Phanerozoic fossil record can be organized as a nested set of persistent paleoecological units, ranging from paleocommunities to the Evolutionary Faunas. This paper argues that the basis for ecological persistence on geological timescales is rooted in the robustness of ecological communities, that is, the resistance and resilience of communities when perturbed by the environment. Here I present the Ecological Functional Networks Hypothesis (EFNH) that proposes that networks of species functions, or Ecological Functional Networks (EFNs), underlie ecological stasis and persistence, and that EFNs are both subject to selection and evolve. An EFN varies if the species composition and hence functional structures of its constituent communities vary, and EFNs may differ from each other based on the robustness of those constituent communities, numerical representation, and biogeographic distribution. That variation is subject to selection acting on EFN community composition, and determines both the persistence of an EFN and the differential persistence among multiple EFNs. Selection pressures on EFNs in turn exert top-down influence on species evolution and extinction. Evidence is presented to both establish the reality of EFNs in the fossil record, for example community structures that persist even as species composition changes, and the selection of EFNs, which is apparent during and after episodes of severe biotic turnover such as mass extinctions. Finally, tests are suggested that make the EFNH falsifiable, including testing the correlation between EFNs or EFN emergent traits and geological persistence, and using models of paleocommunity dynamics to examine the relationship between community or EFN robustness and geological persistence. The tests should be applied broadly throughout the Phanerozoic and diverse environments. 
The EFNH is part of a growing body of hypotheses that address multilevel selection and evolution of non-reproducing systems, including ecosystems and entire biospheres, and addresses those concepts on geological timescales.
... Multiverse theories include the idea that our universe is infinite with every possibility realised in some part of it, eternal inflation with an infinite number of pocket universes, and the many-worlds interpretation of quantum mechanics (see Tegmark 2003; Page 2008). Another idea is that black holes within universes may spawn offspring universes (Smolin 1997; Smolin 2004; Vaas 2003). The multiverse is a substantial area for investigation and analysis in theology and philosophy as well as in physics. ...
Preprint
The Fermi paradox concerns the possible existence of advanced extraterrestrial civilizations. The Kardashev scale envisages advanced civilizations that may control and consume energy up to a galactic scale. Postulated extensions of the scale consider advanced entities that are powerful enough to create new universes. In an eternal existence of multiple universes, the emergence of such an entity may be inevitable. Such an advanced entity could perhaps be of sufficient greatness to be God. The scientific and theological feasibility of this idea of God is discussed. A possible advantage of viewing God in this way is that the supernatural might be dispensed with. However, from the viewpoint of religious practice, it is not clear whether such a God would be acceptable to major religions.
... In the VCRIS model, physical and informational processes that change unpredictably in successive replication cycles, to generate, maintain, and manage Variety, are in tension and opposition with physical and informational processes that change predictably in successive replication cycles, and thus generate, maintain, and manage Convergence. V and C are the first two terms in the VCRIS model, as these two oppositional processes are proposed as "root perspectives" in any model of physical and informational change in replicating complex adaptive systems, including our universe itself, if it is a replicating and adaptive system, as various theorists have proposed (Smolin 1992, 1997, 2004; Vaas 1998; Vidal 2010; Price 2017). Standard evolutionary theory offers no model of this fundamental opposition at any of the scales at which replication occurs, including gene, epigene, organism, group, niche, environment, and universe. ...
Chapter
Full-text available
This paper offers a systems definition of the phrase evolutionary development (evo devo, ED), and a few examples of generic evolutionary and developmental processes in autopoietic (self-reproducing) systems. It introduces a toy conceptual model, the VCRIS evo devo model of natural selection, exploring autopoietic selection in both evolutionary and developmental terms. It includes an empirical observation, the 95/5 rule, generalized from observations in evo-devo biology, to offer a preliminary sketch of the dynamical interaction of evolutionary and developmental processes in living replicators. Autopoietic models may be applied both to living systems and to nonliving adaptive replicators at many scales, even to the universe as a complex system, if it is a replicator in the multiverse. Evo devo models offer potentially fundamental dynamical and informational ways to understand autopoietic systems. If such models are to be validated in living systems, and generalized to nonliving autopoietic systems, they will require significant advances in both simulation and theory in coming years.
... By approaching the problem of the unification of the theories of modern science with triplewise (rather than pairwise) methodology I present an original quantum foundations concept of (what appears to be) our feedback-reinforced Quantum Deep Learning Universe (or Triuniverse). Given its broad reach, this concept draws on very diverse scientific works including those by Aldridge, Hodgson & Knudsen [4], Armstrong et al. [5], Back, Fogel & Michalewicz [6], Basu, Pollack & Roy [7], Barvinsky [8], Bianchi et al. [9], Bickhard & Campbell [10], Campbell [11], Carroll [12], Champagnat, Ferrière & Méléard [13], Chastain, Livnat, Papadimitriou & Vazirani [14], Dawkins [15], Deutsch & Jozsa [16], Dobzhansky [17], Eldredge [18], Flores Martinez [19], Fogel [20], Georgiev & Georgiev [21], Gray, He & Lukas [22], Han & Kim [23], Hartshorne [24], Hormozi et al. [25], Kiefer [26], Koutnik, Cuccu, Schmidhuber & Gomez [27], Last [28], LeCun, Bengio & Hinton [29], Peldán [30] [31], Marolf [32], Meusburger & Noui [33], Narain et al. [34], Nielsen & Chuang [35], Penrose & Jorgensen [36], Pitkänen [37], Powell & Mariscal [38], Rickles [39], Rovelli & Smolin [40], Smolin [41], Schmidhuber [42], Smart [43], Smolin [44]- [46], Sterelny, Smith & Dickison [47], Thielmann [48], Valiant [49], Vidal [50] [51], Watson [52], Watson et al. [53], Watson and Szathmáry [54], Watson et al. [55], Wiebe et al. [3], Wheeler [56] [57] and Zurek [58]- [60]. ...
Article
Full-text available
An original quantum foundations concept of a deep learning computational Universe is introduced. The fundamental information of the Universe (or Triuniverse) is postulated to evolve about itself in a Red, Green and Blue (RGB) tricoloured stable self-mutuality in three information processing loops. The colour is a non-optical information label. The information processing loops form a feedback-reinforced deep learning macrocycle with trefoil knot topology. Fundamental information processing is driven by ψ -Epistemic Drive, the Natural appetite for information selected for advantageous knowledge. From its substrate of Mathematics, the knotted information processing loops determine emergent Physics and thence the evolution of superemergent Life (biological and artificial intelligence). RGB-tricoloured information is processed in sequence in an Elemental feedback loop (R), then an Operational feedback loop (G), then a Structural feedback loop (B) and back to an Elemental feedback loop (R), and so on around the trefoil in deep learning macrocycles. It is postulated that hierarchical information correspondence from Mathematics through Physics to Life is mapped and conserved within each colour. The substrate of Mathematics has RGB-tricoloured feedback loops which are respectively Algebra (R), Algorithms (G) and Geometry (B). In Mathematics, the trefoil macrocycle is Algebraic Algorithmic Geometry and its correlation system is a Tensor Neural Knot Network enabling Qutrit Entanglement. Emergent Physics has corresponding RGB-tricoloured feedback loops of Quantum Mechanics (R), Quantum Deep Learning (G) and Quantum Geometrodynamics (B). In Physics, the trefoil macrocycle is Quantum Intelligent Geometrodynamics and its correlation system is Quantum Darwinism. Super-emergent Life has corresponding RGB-tricoloured loops of Variation (R), Selection (G) and Heredity (B). 
In the evolution of Life, the trefoil macrocycle is Variational Selective Heredity and its correlation ecosystem is Darwin’s ecologically “Entangled Bank”.
... While one could personify natural selection as an intelligent agent, we would also need to account for cosmological structures, which cannot as easily be ascribed to the powers of Darwinian natural selection (though some have argued that this is also the case, e.g., (Smolin, 2004)). However, invoking natural selection as an intelligent agent acting across all space and time (wise, omnipresent and eternal) brings us remarkably close to the Charybdis of classical theism. ...
Conference Paper
“Can machines think?” When faced with this “meaningless” question, Alan Turing suggested we ask a different, more precise question: can a machine reliably fool a human interviewer into believing the machine is human? To answer this question, Turing outlined what came to be known as the Turing Test for artificial intelligence, namely, an imitation game where machines and humans interacted from remote locations and human judges had to distinguish between the human and machine participants. According to the test, machines that consistently fool human judges are to be viewed as intelligent. While popular culture champions the Turing Test as a scientific procedure for detecting artificial intelligence, doing so raises significant issues. First, a simple argument establishes the equivalence of the Turing Test to intelligent design methodology in several fundamental respects. Constructed with similar goals, shared assumptions and identical observational models, both projects attempt to detect intelligent agents through the examination of generated artifacts of uncertain origin. Second, if the Turing Test rests on scientifically defensible assumptions then design inferences become possible and cannot, in general, be wholly unscientific. Third, if passing the Turing Test reliably indicates intelligence, this implies the likely existence of a designing intelligence in nature.
... Design cannot precede evolution and therefore cannot underlie the universe." This is how, after being stampeded by the dominating mode into this most desperately arid, uninhabitable intellectual desert - which the authors were condemned to cross, as it were, on foot - they came eventually to believe that, today, the natural selection epistemological matrix, with its specific reductionist, overreaching interpretations - (i) of all past, current, and future contributions to biology [10], to cosmology [47], to behavioral psychology [8], to linguistics [44], (ii) of all progress of the sciences at large [50], and even more radically, (iii) of all intellectual endeavors and failures [12] of humanity, if not (iv) of the very existence in, and ultimately of, the Universe [11] - has become both too rarefied and fanciful, too rigid and dogmatic to provide a universal explanatory basis for, and too heavily ideologically charged to serve as the main theoretical desk for, the admittance into scientific circulation of new facts, problems, conjectures, and theories from both the natural and the cognitive sciences. This is why - taking into account the widespread and apparently voluntary acceptance by the scientific community at large of the natural selection paradigm as the ultimate theory of everything - we have chosen to delineate our vision of the unhealthy pervasiveness and deficiencies of the methodological (ab)uses of this paradigm, and thus to prepare the ground for the detailed presentation of our controversial, to say the least, study, its inspirational sources, motivations, methods and results. ...
Research
Full-text available
The present paper studies some implications of the well-known but almost universally disregarded tight combinatorial morphological-semantic structure of the verbal system of Biblical Hebrew, to show that this linguistic fossil testifies to the existence of a now extinct Proto-Language whose extremely tight verbal organization and meaningful architecture made it both structurally strikingly similar and expressively vastly superior to humanly designed Assembler languages, – an absolutely novel, paradoxical phenomenon, never before and nowhere else observed and apparently incompatible with the basic tenets of modern linguistic natural selection theories and, at the very least, crying out for new explanatory linguistic paradigms.
... Authors have published their concepts on black hole cosmology in many online journals [1-13]. In this paper, by highlighting the basic shortcomings of Friedmann cosmology [14], an attempt is made to review the model of black hole cosmology [15-28] in terms of cosmic redshift, CMBR redshift, cosmic growth index, cosmic growth rate and cosmic age. The basic shortcomings of modern cosmology can be expressed as follows. ...
Article
Throughout cosmic evolution, the currently believed cosmic 'critical density' can be shown to be a default result of the 'positively curved' light-speed rotating black hole universe 'volume density'. As there is no observational or experimental evidence for Friedmann's second assumption, the density classification scheme of Friedmann cosmology must be reviewed at a fundamental level and can possibly be relinquished. The observed cosmic redshift can be reinterpreted as an index of a 'cosmological' thermodynamic light emission mechanism. Clearly speaking, during cosmic evolution, at any time in the past, the photon energy emitted by a hydrogen atom was always inversely proportional to the cosmic temperature. Thus past light emitted from an older galaxy's excited hydrogen atoms will show a redshift with reference to current laboratory data. Note that there will be no change in the energy of the emitted photon during its journey from the distant galaxy to the observer. In no way does 'redshift' seem to be connected with 'galaxy receding'. By considering the 'Stoney mass' as the initial mass of the baby cosmic black hole, past and current physical and thermal parameters (like angular velocity, growth rate, age, redshift, thermal energy density and matter density) of the cosmic black hole can be understood. For a cosmic temperature of 3000 K, the obtained redshift is 1100. From now on, the CMBR temperature can be called the 'Cosmic Black Hole's Thermal Radiation' temperature, expressed as the 'CBHTR' temperature. The current cosmic black hole is growing at a rate of 14.66 km/sec in a decelerating mode. The uncertainty relation and all other microscopic physical constants play a crucial role in understanding the halt of the present cosmic expansion.
In view of the confirmed zero rate of change in the inverse of the fine structure ratio (from ground-based laboratory experimental results), the zero rate of change in the current CMBR temperature (from satellite data) and the zero rate of change in the current Hubble constant (from satellite data), it can be suggested that the current cosmic expansion is almost saturated and at present there is no significant cosmic acceleration.
... Design cannot precede evolution and therefore cannot underlie the universe." And many, many, too many have tried to be faithful to this condemnation of the Design creativity to work out accidentally, as it were: 1. biology [9], cosmology [37], behavioral psychology [7], linguistics [36]; 2. all progress of the sciences at large [40], and even more radically, 3. all intellectual endeavors and failures [11] of humanity, if not 4. the very existence in, and ultimately of, the Universe [10]. §3. ...
Article
Full-text available
One of the most natural approaches to the problem of the origins of natural languages is the study of hidden intelligent "communications" emanating from their historical forms. The history of the Semitic languages is especially meaningful in this sense. One discovers, in particular, that Biblical Hebrew, BH, the best preserved fossil of the Semitic protolanguage, is primarily a verbal language, with an average verse of the Hebrew Bible containing no less than three verbs and with the greater part of its vocabulary representing morphological derivations from verbal roots, almost entirely triliteral – a feature BH shares with all Semitic and a few other Afro-Asiatic languages. For classical linguists, more than a hundred years ago, it was surprising to discover that the verbal system of BH is, as we say today, optimal from the point of view of Information Theory and that its formal topological morphology is semantically meaningful.
... Design cannot precede evolution and therefore cannot underlie the universe." This is how, after being stampeded by the dominating mode into this most desperately arid, uninhabitable intellectual desert - which the authors were condemned to cross, as it were, on foot - they came eventually to believe that, today, the natural selection epistemological matrix, with its specific reductionist, overreaching interpretations - (i) of all past, current, and future contributions to biology [10], to cosmology [47], to behavioral psychology [8], to linguistics [44], (ii) of all progress of the sciences at large [50], and even more radically, (iii) of all intellectual endeavors and failures [12] of humanity, if not (iv) of the very existence in, and ultimately of, the Universe [11] - has become both too rarefied and fanciful, too rigid and dogmatic to provide a universal explanatory basis for, and too heavily ideologically charged to serve as the main theoretical desk for, the admittance into scientific circulation of new facts, problems, conjectures, and theories from both the natural and the cognitive sciences. This is why - taking into account the widespread and apparently voluntary acceptance by the scientific community at large of the natural selection paradigm as the ultimate theory of everything - we have chosen to delineate our vision of the unhealthy pervasiveness and deficiencies of the methodological (ab)uses of this paradigm, and thus to prepare the ground for the detailed presentation of our controversial, to say the least, study, its inspirational sources, motivations, methods and results. ...
Article
Full-text available
Scientific enterprise is part and parcel of the contemporaneous general human cultural and, even more generally, existential endeavor. Thus, the fundamental (for us) notion of evolution, in the modern sense of this characteristically Occidental term, appeared in the 19th century, with its all-pervading, irreversible cultural and technological change and existential turmoil. Similarly, a formerly relatively recherché word, emergence, became a widely used scientific term only in the 20th century, with its cultural, economical, political, and national sagas of emergence and destruction played against a background of the universe emerging from the Big Bang and disappearing into its black holes, if not into its ultimate Big Collapse. Today, the rules of engagement in scientific emergence-evolution games, steadily spreading from the natural to the cognitive sciences and beyond, are dominated by the 19th-century concept of natural selection, which has inverted the time-arrow of the classical creationist dogma, with its rarely spelled out pessimistic implication that life is moving from the highest biological organization to an entropic chaos. In its turn, natural selection's excessively contagious, "do-it-yourself" optimism might ultimately turn out to be its undoing: the natural selection conjecture, when transposed to such fields as linguistics from the strictly biological scene, with its times of engagement ranging from at most a hundred years of life expectancy for an individual organism to at least millions and even billions of years for evolutionary processes to bring this or that organism into existence, becomes for the first time verifiable and even falsifiable.
The present paper studies some implications of the well-known but almost universally disregarded tight combinatorial morphological-semantic structure of the verbal system of Biblical Hebrew, to show that this linguistic fossil testifies to the existence of a now extinct Proto-Language whose extremely tight verbal organization and meaningful architecture made it both structurally strikingly similar and expressively vastly superior to humanly designed Assembler languages - an absolutely novel, paradoxical phenomenon, never before and nowhere else observed and apparently incompatible with the basic tenets of modern linguistic natural selection theories and, at the very least, crying out for new explanatory linguistic paradigms.
... It could, therefore, serve as a powerful frame of reference for future research. In fact, recent discoveries have revealed that galaxies also evolve; they seem to be shaped by their surroundings, and form through hierarchical processes such as the merging of smaller galaxies to form bigger ones [52-54], as well as by possible reproductive fission events. Self-organization is not just a biological phenomenon but, in its dealing with the nature of space and time, it has validity also for fundamental physics. ...
Article
Full-text available
From a structural standpoint, living organisms are organized like a nest of Russian matryoshka dolls, in which structures are buried within one another. From a temporal point of view, this type of organization is the result of a history comprised of a set of time backcloths which have accompanied the passage of living matter from its origins up to the present day. The aim of the present paper is to indicate a possible course of this 'passage through time', and suggest how today's complexity has been reached by living organisms. This investigation will employ three conceptual tools, namely the Mosaic, Self-Similarity Logic, and the Biological Attraction principles. Self-Similarity Logic indicates the self-consistency by which elements of a living system interact, irrespective of the spatiotemporal level under consideration. The term Mosaic indicates how, from the same set of elements assembled according to different patterns, it is possible to arrive at completely different constructions: hence, each system becomes endowed with different emergent properties. The Biological Attraction principle states that there is an inherent drive for association and merging of compatible elements at all levels of biological complexity. By analogy with the gravitation law in physics, biological attraction is based on the evidence that each living organism creates an attractive field around itself. This field acts as a sphere of influence that actively attracts similar fields of other biological systems, thereby modifying salient features of the interacting organisms. Three specific organizational levels of living matter, namely the molecular, cellular, and supracellular levels, have been considered in order to analyse and illustrate the interpretative as well as the predictive roles of each of these three explanatory principles.
... 2 Note that this result is different from the suggestion [14] that new universes are created inside black holes, which would require the black hole singularity to be resolved in such a way as to connect a big crunch with a big bang. Evidence such as [15,16,17] contraindicates the hypothesis of a crunch-bang inside black holes. ...
Article
Full-text available
We present a mechanism for catalyzed vacuum bubble production obtained by combining moduli stabilization with a generalized attractor phenomenon in which moduli are sourced by compact objects. This leads straightforwardly to a class of examples in which the Hawking decay process for black holes unveils a bubble of a different vacuum from the ambient one, generalizing the new endpoint for Hawking evaporation discovered recently by Horowitz. Catalyzed vacuum bubble production can occur for both charged and uncharged bodies, including Schwarzschild black holes for which massive particles produced in the Hawking process can trigger vacuum decay. We briefly discuss applications of this process to the population and stability of metastable vacua.
... The latter did, however (for the sake of wider readability), omit many of the details that were thought noteworthy by subsequent authors. It is for this reason, as well as for its purely historical interest, that the original version reproduced here has been directly cited in a variety of relatively ancient [5, 6, 7, 8, 9], and also more recent [10, 11, 12, 13, 14, 15, 16], publications. The purpose of this belated transcription is therefore to make the omitted details more generally accessible via electronic archiving. ...
Article
This is the first part of a survey whose ultimate purpose is to clarify the significance of the famous coincidence between the Hubble age of the universe and a certain combination of microphysical parameters. In this part the way is prepared by a discussion of the manner in which familiar local phenomena depend qualitatively and, in order of magnitude, quantitatively on the fundamental parameters of microphysics. In order to keep the account concise while remaining self-contained, only the barest essentials of the standard nuclear physical and astrophysical calculations involved are given. Only six of the fundamental parameters play a dominant part, namely the coupling constants of the strong, electromagnetic, and gravitational forces, and the mass ratios of the proton, neutron, electron and pi-meson. Attention is drawn to the important consequences of three coincidental relationships between these parameters. It is shown that most of the principal limiting masses of astrophysics arise (in fundamental units) simply as the reciprocal of the gravitational fine structure constant, with relatively small adjustment factors. The dividing point between red dwarf and blue giant stars turns out to be an exception: this division occurs within the range of the main sequence stars only as a consequence of the rather exotic coincidence that the ninth power of the electromagnetic fine structure constant is roughly equal to the square root of the gravitational fine structure constant.
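The "rather exotic coincidence" quoted at the end of this abstract is easy to check numerically. The sketch below uses standard CODATA-style constant values and assumes the conventional definition of the gravitational fine structure constant in terms of the proton mass.

```python
# Quick arithmetic check of the quoted coincidence: the ninth power of
# the electromagnetic fine structure constant is roughly the square
# root of the gravitational fine structure constant (defined here, as
# is conventional, with the proton mass).
G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34                   # reduced Planck constant, J s
c = 2.998e8                        # speed of light, m/s
m_p = 1.673e-27                    # proton mass, kg

alpha = 1 / 137.036                # electromagnetic fine structure constant
alpha_G = G * m_p**2 / (hbar * c)  # gravitational fine structure constant

print(f"alpha^9       = {alpha**9:.2e}")
print(f"sqrt(alpha_G) = {alpha_G**0.5:.2e}")
```

Both numbers come out around 10⁻¹⁹ to 10⁻²⁰, agreeing to within an order-unity factor, which is the sense of "roughly equal" intended.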
Article
The phenomenon of augmented gravity on the scale of galaxies, conventionally attributed to dark matter halos, is shown to possibly result from the incremental growth of galactic masses and radii over time. This approach elucidates the cosmological origins of the acceleration scale a₀ ≈ cH₀/2π ≈ 10⁻¹⁰ m s⁻² at which galaxy rotation curves deviate from Keplerian behavior, with no need for new particles or modifications to the laws of gravity, i.e., it constitutes a new explanatory path beyond Cold Dark Matter (CDM) and Modified Newtonian Dynamics (MOND). Once one formally equates the energy density of the universe to the critical value (ρ = ρc) and the cosmic age to the reciprocal of the Hubble parameter (t = H⁻¹), independently of the epoch of observation, the result is the Zero-Energy condition for the cosmic fluid’s equation of state, with key repercussions for the study of dark energy since the observables can be explained in the absence of a cosmological constant. Furthermore, this mass-energy evolution framework is able to reconcile the success of CDM models in describing structure assembly at z ≲ 6 with the unexpected discovery of massive objects at z ≳ 10. Models that feature a strong coupling between cosmic time and energy are favored by this analysis.
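The quoted acceleration scale can be sanity-checked in a few lines. The Hubble constant used below (70 km/s/Mpc) is an assumed fiducial value, not one taken from the paper.

```python
# Rough check of the acceleration scale a0 ~ c*H0/(2*pi).
# H0 = 70 km/s/Mpc is an assumed fiducial value, not from the paper.
import math

c = 2.998e8                      # speed of light, m/s
Mpc = 3.086e22                   # metres per megaparsec
H0 = 70e3 / Mpc                  # Hubble parameter, 1/s
a0 = c * H0 / (2 * math.pi)
print(f"a0 = {a0:.2e} m/s^2")    # close to the quoted ~1e-10 m/s^2
```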
Chapter
Much of contemporary atheism is stuck in Plato’s cave. This is also Nietzsche’s cave, which is filled with the shadows of God. And while twilight atheists don’t believe in God, they still believe in his shadows. They are cultural theists who endorse theistic if-then chains which bind valuable things to God. Twilight atheists agree with theists that if there is no God, then: there is no objective meaning to life; there is no objective morality; there is no cosmic meaning or purpose; there is no modal or mathematical objectivity; there is no life after death; and there are no deities and no things with any divine attributes. Twilight atheism is nihilism. Atheistic Platonists seek to overcome both theism and twilight atheism. We aim to construct new cultures which depend neither on theism nor on its nihilistic negation.
Article
The most public-facing forms of contemporary Darwinism happily promote its worldview ambitions. Popular works, by the likes of Richard Dawkins, deflect associations with eugenics and social Darwinism, but also extend the reach of Darwinism beyond biology into social policy, politics, and ethics. Critics of the enterprise fall into two categories. Advocates of Intelligent Design and secular philosophers (like Mary Midgley and Thomas Nagel) recognise it as a worldview and argue against its implications. Scholars in the rhetoric of science or science communication, however, typically take the view that Darwinism isn't a worldview, but a scientific theory, which has been improperly embellished by some; they uphold the distinction between is and ought and argue that science is restricted to the former. This prompts an is-ought problem on another level. I catalogue the ways in which Darwinism plainly is a worldview and why commentators' beliefs that it ought not to be distorts their analysis. Hence, it is their own worldview that precludes them from accepting Darwinism's worldview implications.
Chapter
From things in the universe, Dawkins turns his attention to the universe itself. Our universe is extremely complex and therefore extremely improbable. It appears to be finely tuned for the internal evolution of complex things. Dawkins repeatedly declares that our universe is beautiful and rationally organized. Some explanation is needed for our universe and its cosmic features. Dawkins considers several possible explanations. He rejects the God hypothesis and the simulation hypothesis. Dawkins considers various multiverse hypotheses from physics. The eternal inflation hypothesis and fecund universe hypothesis suffer from fatal problems. But if the Dawkinsian logic of possibility applies to organisms, then it applies to universes too. There is a library of possible universes. Adopting some Leibnizian ideas, it is the modal library. Its books can be sorted into ranks by complexity. Hence they can be ranked by intrinsic value. Possible universes are books on the slopes of the cosmic Mount Improbable.
Chapter
Complexity and life as we know it depend crucially on the laws and constants of nature as well as the boundary conditions, which seem at least partly “fine-tuned.” That deserves an explanation: Why are they the way they are? This essay discusses and systematizes the main options for answering these foundational questions. Fine-tuning might just be an illusion, or a result of irreducible chance, or nonexistent because nature could not have been otherwise (which might be shown within a fundamental theory if some constants or laws could be reduced to boundary conditions or boundary conditions to laws), or it might be a product of selection: either observational selection (weak anthropic principle) within a vast multiverse of many different realizations of physical parameters, or a kind of cosmological natural selection making the measured parameter values quite likely within a multiverse of many different values, or even a teleological or intentional selection or a coevolutionary development, depending on a more or less goal-directed participatory contribution of life and intelligence. In contrast to observational selection, which is not predictive, an observer-independent selection mechanism must generate unequal reproduction rates of universes, a peaked probability distribution, or another kind of differential frequency, resulting in a stronger explanatory power. The hypothesis of Cosmological Artificial Selection (CAS) even suggests that our universe may be a vast computer simulation or could have been created and transcended by one. If so, this would be a far-reaching answer – within a naturalistic framework! – of fundamental questions such as: Why did the big bang and fine-tunings occur, what is the role of intelligence in the universe, and how can it escape cosmic doomsday? 
This essay critically discusses some of the premises and implications of CAS and related problems, both with the proposal itself and its possible physical realization: Does CAS deserve to be considered as a convincing explanation of cosmic fine-tuning? Is life incidental, or does CAS revalue it? And are life and intelligence ultimately doomed, or might CAS rescue them?
Article
This essay is an attempt to explain why Charles Sanders Peirce’s evolutionary metaphysics would not have seemed strange to its original 1890s audience. Building on the pioneering work of Andrew Reynolds, I will excavate the scientific context of Peirce’s Monist articles—in particular “The Law of Mind” and “Man’s Glassy Essence,” both published in 1892—focusing on the relationship between protoplasm, evolution, and consciousness. I argue that Peirce’s discussions should be understood in the context of contemporary evolutionary and physiological speculations, many of which were featured in late-1880s issues of Open Court, sister journal to the Monist.
Article
Full-text available
The standard model of physics is built on the fundamental constants of nature, but it does not provide an explanation for their values, nor require their constancy over space and time. Here we set a limit on a possible cosmological variation of the proton-to-electron mass ratio μ by comparing transitions in methanol observed in the early universe with those measured in the laboratory. From radio-astronomical observations of PKS1830-211, we deduced a constraint of Δμ/μ = (0.0 ± 1.0) × 10⁻⁷ at redshift z = 0.89, corresponding to a look-back time of 7 billion years. This is consistent with a null result.
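The quoted look-back time of 7 billion years at z = 0.89 can be reproduced with a short numerical integration. The flat ΛCDM parameters below (H0 = 70 km/s/Mpc, Ωm = 0.3) are assumed fiducials, since the abstract does not state which cosmology was used.

```python
# Sanity check of the quoted ~7 Gyr look-back time to z = 0.89, assuming
# a flat LCDM cosmology with H0 = 70 km/s/Mpc and Omega_m = 0.3
# (fiducial values assumed here; the abstract does not state them).
import math

H0 = 70e3 / 3.086e22             # Hubble constant, 1/s
Om, Ol = 0.3, 0.7                # matter and dark-energy density parameters

def E(z):                        # dimensionless expansion rate H(z)/H0
    return math.sqrt(Om * (1 + z)**3 + Ol)

# look-back time t = integral_0^z dz' / [(1+z') H(z')], midpoint rule
z_max, n = 0.89, 10000
dz = z_max / n
t = sum(dz / ((1 + (i + 0.5) * dz) * H0 * E((i + 0.5) * dz)) for i in range(n))

Gyr = 3.156e16                   # seconds per gigayear
print(f"look-back time to z=0.89: {t / Gyr:.1f} Gyr")
```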
Article
Full-text available
The influence of the Maxwell field on a static, asymptotically flat and spherically-symmetric Gauss-Bonnet black hole is considered. Numerical computations suggest that if the charge increases beyond a critical value, the inner determinant singularity is replaced by an inner singular horizon.
Article
This brief note, written for non-specialists, aims to draw an introductory overview of the multiverse issue.
Article
Full-text available
The author has analyzed the influence of dust content, UV radiation, primary cosmic ray ionization rate, and metal abundance upon the conditions for small cool clouds condensation. The hypothesis that, in isolated galaxies, the star formation rate is self-regulated in such a way that it maintains Pmax close to the ISM gas pressure P (where Pmax is the gas pressure at the marginal state of stability for the transition from warm gas to small cool clouds) has been tested for a sample of isolated and interacting galaxies. For the former, it appears that P ≅ Pmax. This result suggests that the UV radiation regulates the star formation and that the dust content plays an important role in determining the rate of star formation.
Article
Full-text available
In a simple one-zone model of mass exchange between three components (stars, clouds, and diffuse gas), a self-regulating mechanism based on the sensitivity of the condensation of small cool clouds upon the radiation density in the 912-1100 A band is presently included. This mechanism is capable of affecting the large-scale structure of the galaxies due to the fact that it acts at a large scale in a very short time. Even in the most favorable models for the production of nonlinear oscillations, the inclusion of this mechanism of self-regulation leads, in many cases, to the progressive damping of the oscillations.
Article
Full-text available
The assumption that the pressure in the interstellar medium is derived from energetic processes associated with star formation and with the older stellar population is shown to lead to a law of star formation which is applicable to all disk galaxies. This states that the rate of star formation per unit total mass is linearly related to the ratio of gas to total surface densities. The star formation rate decreases exponentially in a given galaxy, but the gas content declines to a finite limit. The solution is characterized by two natural time scales, a gas depletion time scale, tau0, and an equipartition time scale, tau1, which is the inverse of the specific rate of star formation at which the pressure produced by young stars in the interstellar medium matches the pressure generated by the older population. Real galaxies appear to obey this law of star formation and it is found that tau0 = (2-4) x 10 to the 9th yr, tau1/tau sub 0 = 0.005 (+0.002, -0.003) gives the best fit to the data. In galaxies in which the rate of star formation determines the pressure of the disk medium, it is also shown that the rate of star formation is directly proportional to the product of the total mass and the mean surface density of gas.
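A minimal toy version of such a star-formation law, with the SFR per unit mass linear in the gas fraction, shows the two behaviours the abstract describes: an exponentially declining SFR and a gas content that declines to a finite limit. All parameter values below are illustrative assumptions, not fitted values from the paper.

```python
# Illustrative toy star-formation law: the SFR per unit total mass is
# linear in the gas fraction mu, psi = (mu - mu_inf) / tau0, so the SFR
# decays exponentially while the gas fraction tends to a finite floor
# mu_inf. All numbers here are made up for illustration.
import math

tau0 = 3.0e9          # gas depletion time scale, yr (paper quotes 2-4 Gyr)
mu_inf = 0.05         # assumed residual gas fraction (illustrative)
mu0 = 0.9             # assumed initial gas fraction (illustrative)

def mu(t):            # closed-form solution of d(mu)/dt = -(mu - mu_inf)/tau0
    return mu_inf + (mu0 - mu_inf) * math.exp(-t / tau0)

def sfr(t):           # SFR per unit total mass, 1/yr
    return (mu(t) - mu_inf) / tau0

for t in (0.0, 5e9, 1e10):
    print(f"t = {t:.0e} yr: gas fraction {mu(t):.3f}, SFR {sfr(t):.2e} /yr")
```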
Article
Full-text available
On the assumption that two regulating mechanisms act simultaneously, the radial dependence in the Galactic plane of the star formation rate (SFR) and the value of various variables related to the state of the interstellar medium are calculated in an effort to prove that the warm gas in the Galactic plane is in a state close to that of marginal stability for the warm gas - small clouds transition in the isobaric mode. This hypothesis makes it possible to predict the radiation density in the 912-1100 A band, the thermal pressure, and the temperature and fractional ionization of both. The mechanism of self-regulation is described. Results are presented for the radial dependence in the Galactic disk of: the self-regulated state of the warm gas and small cloud cores, the self-regulated star formation rate, and the regulated state of the molecular phase. The dependence of the model results on the input data is discussed.
Article
Full-text available
A sample of active dwarf galaxies and a sample of Sc and irregular galaxies are analyzed based on the assumption that the stellar formation rate is self-regulated in such a way that it maintains the critical pressure P(max) near the gas pressure P(g) of the interstellar medium, where P(max) is the pressure of the marginal state of stability for the transition warm gas to small clouds. The stellar formation rate Psi(PP), consistent with the condition P(max) = P(g), can be expressed as a simple analytical function of global galactic parameters. When it is assumed that the dust temperature, the gas metallicity, and the galactic diameter remain constant, it is found that Psi(PP) is reduced to the law of Schmidt with an exponent value of 1.43. Specifically, for the Galaxy, it is found that Psi(PP) = about 10 solar masses/yr, which is reasonably consistent with previous values.
Article
Full-text available
Cygnus X-2 is one of the brightest and longest known X-ray sources. We present high-resolution optical spectroscopy of Cyg X-2 obtained over 4 yr, which gives an improved mass function of 0.69 ± 0.03 M☉ (1σ). In addition, we resolve the rotationally broadened absorption features of the secondary star for the first time, deriving a rotation speed of v sin i = 34.2 ± 2.5 km s⁻¹ (1σ), which leads to a mass ratio of q = Mc/MX = 0.34 ± 0.04 (1σ, assuming a tidally locked and Roche lobe-filling secondary). Hence, with the lack of X-ray eclipses (i.e., i ≲ 73°) we can set firm 95% confidence lower limits to the neutron star mass of MX > 1.27 M☉ and to the companion star mass of Mc > 0.39 M☉. However, by additionally requiring that the companion must exceed 0.75 M☉ (as required theoretically to produce a steady low-mass X-ray binary), then MX > 1.88 M☉ and i < 61° (95% confidence lower and upper limit, respectively), thereby making Cyg X-2 the highest mass neutron star measured to date. If confirmed, this would set significant constraints on the equation of state of nuclear matter.
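The quoted neutron-star mass limit can be roughly reproduced from the standard binary mass-function relation f = MX sin³i / (1 + q)². The sketch below plugs in 1σ-low values of f and q with the maximum eclipse-free inclination, which is a simplified stand-in for the paper's 95%-confidence bookkeeping.

```python
# Rough check of the lower limit on the neutron-star mass in Cyg X-2,
# using the standard mass-function relation
#   f = M_X * sin(i)**3 / (1 + q)**2  =>  M_X = f * (1 + q)**2 / sin(i)**3
# with 1-sigma-low f and q and the no-eclipse limit i < 73 deg
# (a simplified treatment of the paper's confidence limits).
import math

f = 0.69 - 0.03                  # mass function, Msun (1-sigma low)
q = 0.34 - 0.04                  # mass ratio Mc/MX (1-sigma low)
i = math.radians(73.0)           # maximum inclination without eclipses

M_X = f * (1 + q)**2 / math.sin(i)**3
print(f"M_X > {M_X:.2f} Msun")   # close to the quoted limit of 1.27 Msun
```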
Article
Full-text available
Star-formation (SF) processes occurring on the scale of giant molecular clouds (10⁶ solar masses and 10²⁰ cm) or smaller are discussed, reviewing the results of recent theoretical and observational investigations. Topics examined include the origin of stellar masses; bimodal SF; initial mass functions; binary stars, bound clusters, and hierarchical fragmentation; and the efficiency of SF. The properties of molecular clouds and the origin of substructures in molecular clumps are explored in detail, and consideration is given to gravitational collapse and protostars, bipolar outflows from young stellar objects, visible young stellar objects, and the implications for binary-star and planetary-system formation.
Article
Full-text available
We construct equilibrium sequences of rotating neutron stars in general relativity. We compare results for 14 nuclear matter equations of state. We determine a number of important physical parameters for such stars, including the maximum mass and maximum spin rate. The stability of the configurations to quasi-radial perturbations is assessed. We employ a numerical scheme particularly well suited to handle rapid rotation and large departures from spherical symmetry. We provide an extensive tabulation of models for future reference. Two classes of evolutionary sequences of fixed baryon rest mass and entropy are explored: normal sequences, which behave very much like Newtonian sequences, and supramassive sequences, which exist for neutron stars solely because of general relativistic effects. Adiabatic dissipation of energy and angular momentum causes a star to evolve in quasi-stationary fashion along an evolutionary sequence. Supramassive sequences have masses exceeding the maximum mass of a nonrotating neutron star. A supramassive star evolves toward eventual catastrophic collapse to a black hole. Prior to collapse, the star actually spins up as it loses angular momentum, an effect that may provide an observable precursor to gravitational collapse to a black hole.
Chapter
This is the first book devoted to Bose–Einstein condensation (BEC) as an interdisciplinary subject, covering atomic and molecular physics, laser physics, low temperature physics and astrophysics. It contains 18 authoritative review articles on experimental and theoretical research in BEC and associated phenomena. Bose–Einstein condensation is a phase transition in which a macroscopic number of particles all go into the same quantum state. It has been known for some time that this phenomenon gives rise to superfluidity in liquid helium but recent research has focused on the search for BEC in other condensed matter systems, such as excitons, spin-polarised hydrogen, laser-cooled atoms, high-temperature superconductors and subatomic matter. This unique book gives an in-depth report on progress in this field and suggests promising research topics for the future. It will be of interest to graduate students and research workers in condensed matter, low temperature, atomic and laser physics.
Article
The following topics were dealt with: Galactic molecular cloud distribution, external galaxies, molecular cloud structure, photon dominated regions, interstellar chemistry, star formation in molecular clouds, molecular outflows, and new instrumentation.
Book
The volume consists of up-to-date reviews and a selection of contributed papers on subjects including the structure and physical properties of molecular clouds, their role in the star formation process, their dust and chemical properties, molecular cloud surveys of the Milky Way, cloud evolution, problems in cloud mass determinations (a panel discussion and review), the CO properties of external galaxies, nuclei of galaxies as revealed by molecular observations, and galactic spiral structure as reflected by molecular cloud distributions. The abstracts of poster papers on these topics presented at the conference are also included. This book is both a valuable reference and a compendium of current knowledge in this field. It should be of special interest to all students and researchers who work on the physics of star formation, the interstellar medium, molecular clouds and galactic structure.
Article
Recent investigation (Parravano, 1987) shows that the diffuse phases of the ISM condense mainly by the transition from warm gas to small cool clouds (WG-->SC). The author introduces a new hypothesis that the star formation rate (SFR) in isolated galaxies is self-regulated in such a way that it maintains Pmax close to the ISM gas pressure. Here Pmax is the gas pressure at the marginal state of stability for the transition WG-->SC. This hypothesis leads to a relation between global galactic parameters which appears to be applicable to various morphological groups of isolated galaxies.
Article
We review estimates of the mass of the compact core in SN 1987A and conclude that the most accurate determination can be obtained from the known value of approximately 0.075 solar mass of Ni production in the explosion. With binding energy correction, this gives an upper limit of gravitational mass of approximately 1.56 solar mass, slightly larger than Brown & Bethe's previous estimate of approximately 1.5 solar mass. Observation by the Oriented Scintillation Spectrometer Experiment (OSSE) of the ratio of gamma-rays from Co-57 and Co-56 indicates that neutron-rich material from the inner regions does not reach the mass cut by convection or Rayleigh-Taylor instability. Arguments that the core of SN 1987A went into a black hole are reviewed. If one accepts this to be true, then the maximum compact core mass gives an upper limit on neutron star masses of (MNS)max approximately equals 1.56 solar mass (gravitational), in rough agreement with the previous result of Brown & Bethe.
Article
We point out that stars within a fairly large range of masses, roughly 18-30 Msun, can both explode as supernovae (and give off neutrinos as observed in SN 1987A) and then go into black holes. The masses of the black holes so formed are only slightly above 1.5 Msun, the maximum mass for compact cores, according to our arguments. These masses are substantially smaller than those of the best candidates for black holes observed thus far. We discuss the possibility that SN 1987A produced one of these small black holes. We review the requirements that observed nucleosynthesis puts on theoretical nucleosynthesis. In particular, the empirical value of ΔY/ΔZ = 4 ± 1.3 gives a sensitive determination of the mass at which nucleosynthesis must be cut off, by heavier stars collapsing into black holes without returning matter to the Galaxy, because otherwise heavy stars would produce chiefly metals and unduly lower the calculated value of ΔY/ΔZ. We arrive at a value of Mcutoff = 25 ± 5 Msun for the mass at which nucleosynthesis must be cut off. We show that the introduction of kaon condensation sufficiently softens the equation of state of dense matter, so that compact cores of mass greater than Mmax ≃ 1.5 Msun will not be stable. For the first ≳12 s, however, compact cores up to ~1.84 Msun are stabilized; this gives the nucleosynthesis mass cutoff as ~30 Msun, at the upper end of our limit from nucleosynthetic demands. Our soft equation of state extends black hole production to stars of lower mass than previously estimated, and, therefore, increases the estimated number of black holes by an order of magnitude or more, to ~10⁹ in the Galaxy.
Article
We follow the work of Chevalier, Houck, Blondin, and Park on hypercritical spherical accretion onto compact objects, applying their work to the case of the compact object and remnant formed in SN 1987A. We begin by motivating the Bondi spherical accretion theory, obtaining an expression for the mass accretion rate. We take SN 1987A parameters from Woosley and Bethe to evaluate the expression for this particular case and find hypercritical accretion on the order of 10⁴ times the Eddington rate. The Eddington rate can be (greatly) exceeded because neutrinos carry off the energy. In this situation photons within a certain distance from the compact object are trapped; we derive an expression for this trapping radius, which decreases in time. For the case that the compact object is a neutron star, even though photons are trapped, neutrinos can still escape and carry off accretion energy, allowing for self-consistent solutions for hypercritical accretion. We use the work of Dicus on neutrino cooling to derive an expression for the shock radius, that is, the distance of the accretion shock front from the neutron star. The shock radius increases with time, so that at some critical time the shock radius equals the trapping radius. We find this critical time to be about 0.6 yr. After this time the luminosity in photons should increase to the Eddington limit, 3.8 × 10³⁸ ergs/s. For the case that the compact object is a black hole, only the internal energy produced by the p dV work on the infalling matter outside of the trapping radius can be radiated. This would result in a luminosity of approximately 10³⁴-10³⁵ ergs/s. The observed light curve of SN 1987A is explained by radioactive decays with a current luminosity of a few times 10³⁶ ergs/s. The expected contribution from spherical accretion onto a neutron star is clearly not present, while the expected contribution for a black hole would be too small to detect.
Our considerations thus support the hypothesis that the compact object formed in SN 1987A is a black hole rather than a neutron star.
Article
Observations of interstellar molecular clouds in the mm and submm bands and the techniques and instruments used to obtain them are discussed in reviews and reports. Sections are devoted to molecular-cloud physics, individual molecular-cloud regions, external galaxies, theoretical models, molecular-cloud chemistry, and instrumentation. Particular attention is given to the role of small telescopes in Galactic molecular surveys, the structure of star-forming regions, molecular outflows, the magnetic field of molecular cloud B1, a complete CO survey of the LMC, C II line emission from spiral galaxies, the interaction between a hypersonic jet and the ISM, a focal-plane imaging array for 3-mm astronomical spectroscopy, the Swedish submm telescope at ESO, and the receivers of the Cologne University 3-m radio telescope.
Article
Book
Book
Lee Smolin offers a new theory of the universe that is at once elegant, comprehensive, and radically different from anything proposed before. Smolin posits that a process of self-organization like that of biological evolution shapes the universe, as it develops and eventually reproduces through black holes, each of which may result in a new big bang and a new universe. Natural selection may guide the appearance of the laws of physics, favoring those universes which best reproduce. The result would be a cosmology according to which life is a natural consequence of the fundamental principles on which the universe has been built, and a science that would give us a picture of the universe in which, as the author writes, "the occurrence of novelty, indeed the perpetual birth of novelty, can be understood." Smolin is one of the leading cosmologists at work today, and he writes with an expertise and force of argument that will command attention throughout the world of physics. But it is the humanity and sharp clarity of his prose that offers access for the layperson to the mind-bending space at the forefront of today's physics.
Article
The Life of the Cosmos. LEE SMOLIN. Oxford University Press, New York, 1997. viii, 358 pp., illus. $30. ISBN 0-19-510837-x. Published in the UK by Weidenfeld and Nicolson; £20, ISBN 0-297-81727-2.
Article
The time variation of the interstellar medium regulated by supernova remnants is studied by considering the interchange processes among six components: a warm gas, a general ambient gas, a hot gas, small clouds, molecular clouds and giant molecular clouds. Two interesting results are found. One is that the interstellar medium does not always attain a steady state such as a two- or three-phase model, but may exhibit a cyclic phase-change like a limit cycle or go to a runaway state. The other is that bursts of star formation from molecular clouds are expected if the gravitational instability of a cloud ensemble is considered.
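The cyclic phase changes described here can be caricatured with a two-component predator-prey-style exchange between cloud mass and star-formation activity. This is an analogy only: the paper uses six ISM components and different physics, and all rates below are made-up illustrative values.

```python
# Toy two-component exchange illustrating cyclic (limit-cycle-like)
# behaviour: cloud mass c grows and is consumed by star-formation
# activity s, which in turn feeds on clouds and decays. All rates are
# set to 1 for illustration; this is not the paper's six-phase model.
dt, steps = 0.0005, 40000        # forward Euler over ~3 oscillation periods
c, s = 1.5, 1.0                  # cloud mass and SF activity (arbitrary units)
c_min = c_max = c
for _ in range(steps):
    dc = c * (1.0 - s)           # clouds grow, and are consumed where SF is active
    ds = s * (c - 1.0)           # SF activity feeds on clouds, then decays
    c, s = c + dc * dt, s + ds * dt
    c_min, c_max = min(c_min, c), max(c_max, c)
print(f"cloud mass cycles between {c_min:.2f} and {c_max:.2f}")
```

The system orbits the equilibrium (c, s) = (1, 1) rather than settling into a steady state, which is the qualitative point of the abstract.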
Article
The physics of a superdense star are considered along with pulsars, supernovae, quasi-stellar objects, the traditional three tests of relativity, relativistic effects in planetary and lunar motions, the expanding universe, the evolving universe, the significance of the microwave background, questions of galaxy formation, the mystery of the missing mass, the cosmic background radiation, and aspects regarding 'noncanonical' models and the very early universe. Topics examined in connection with black holes are related to a collapse of a spherically symmetrical cloud of dust, the Kruskal diagram, rotation and the Kerr geometry, the final outcome of the collapse of a rotating body, and the search for black holes and their effects. The angular distribution of gravitational radiation is discussed together with the sources of gravitational radiation and the seismic response of the earth to the gravitational radiation in the one-Hertz region.
Article
It is shown that observations of the relative abundances of nitrogen and sulfur in H II regions of disk galaxies favor the possibility of producing primary nitrogen in intermediate-mass stars. The constancy of the ratio of nitrogen to sulfur for low-luminosity disk galaxies and the rise with luminosity for more luminous galaxies may be interpreted as a consequence of supernova-driven winds efficiently removing stellar ejecta from galaxies with binding energy per unit mass below a critical value, shown here to be around 100 km/s, while more massive galaxies can enrich their gas through successive generations of stars. The effect of large gas loss from shallow potential wells is also manifest in the relation between mean gas metallicity, measured by the mean oxygen abundance in H II regions, and absolute magnitude of the parent galaxy.
Article
A model for the chemical evolution of a disk galaxy is presented. The model reproduces all the basic parameters of our own Galaxy, including, for the first time, the correlation between metallicity and height above the Galactic plane. One version of this model had been used earlier to study the effect of heavy element loss from spiral galaxies on the chemical evolution of the intergalactic medium. Here, the primary emphasis is on investigations of the star formation history. Varying the basic parameters of the model - the mass and radius of the galaxy - and incorporating a delay in the onset of star formation make it possible to model different scenarios for the evolution of a galaxy's luminosity. There is a phase of enhanced star formation in the early stages of evolution of massive galaxies. Taking into account the absorption of optical radiation by dust grains makes it possible to explain certain observed characteristics of the luminosity evolution, in particular, the absence of bursts in optical luminosity in disk galaxies at the initial period of intense star formation. The possibility of reconstructing the star-formation history and galactic luminosity from observations of galaxies at different redshifts z is discussed. If selection effects are accurately taken into account, the model makes it possible to construct specific evolutionary scenarios that are consistent with the observations. Factors responsible for the deviation of the evolution of the average luminosity from the luminosity history in individual galaxies are discussed.
Article
The basic features of galaxies, stars, planets and the everyday world are essentially determined by a few microphysical constants and by the effects of gravitation. Many interrelations between different scales that at first sight seem surprising are straightforward consequences of simple physical arguments. But several aspects of our Universe—some of which seem to be prerequisites for the evolution of any form of life—depend rather delicately on apparent ‘coincidences’ among the physical constants.
Article
A new type of explanatory mechanism is proposed to account for the fact that many of the dimensionless numbers which characterize particle physics and cosmology take unnatural values. It is proposed that all final singularities 'bounce' or tunnel to initial singularities of new universes at which point the dimensionless parameters of the standard models of particle physics and cosmology undergo small random changes. This speculative hypothesis, plus the conventional physics of gravitational collapse, together comprise a mechanism for natural selection, in which those choices of parameters that lead to universes that produce the most black holes during their lifetime are selected for. If our Universe is a typical member of the ensemble that results from many generations of such reproducing universes then it follows that the parameters of our present Universe are near a local maximum of the number of black holes produced per universe. Thus, modifications of the parameters of particle physics and cosmology from their present values should tend to decrease the number of black holes in the universe. Three possible examples of this mechanism are described.
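The selection dynamics sketched above can be illustrated with a toy hill-climbing simulation. Everything here is illustrative: `black_hole_yield` is a hypothetical stand-in for the (unknown) number of black holes produced as a function of a single dimensionless parameter, with its peak placed arbitrarily at x = 3.0; the real fitness landscape is over many parameters and is not known in closed form.

```python
import random

def black_hole_yield(x):
    # Hypothetical single-peaked "fitness": black holes produced per
    # universe as a function of one dimensionless parameter x.
    # The peak at x = 3.0 is purely illustrative.
    return 1.0 / (1.0 + (x - 3.0) ** 2)

def evolve(generations=20_000, step=0.05, seed=1):
    """Toy selection dynamics: each bounce perturbs the parameter by a
    small random amount, and lineages with a higher black-hole yield
    out-reproduce the rest, so the dominant lineage climbs toward a
    local maximum of the yield."""
    rng = random.Random(seed)
    x = 0.0  # ancestral parameter, far from the optimum
    for _ in range(generations):
        candidate = x + rng.gauss(0.0, step)
        if black_hole_yield(candidate) > black_hole_yield(x):
            x = candidate  # fitter daughter lineage takes over
    return x

print(evolve())  # settles close to the illustrative maximum at x = 3.0
```

Under these assumptions the dominant lineage drifts to a local maximum of the yield, which mirrors the paper's testable claim: small changes in any parameter of a typical member universe should decrease the number of black holes produced.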
Article
Calculations of the evolution of the interstellar medium (ISM) in a one-zone model are presented. The purpose is to study the influence of different processes on the evolution of the ISM and the star-formation rate by applying a detailed description of the stars and the ISM as well as their interactions. Different processes and time-scales are taken into account: stellar evolutionary time-scales and nucleosynthesis; stellar mass loss and energy release to the ISM by means of both supernovae and stellar winds; and a multi-component ISM with phase transitions by means of condensation, evaporation, and ionization, as well as metal-dependent heating and cooling of the different phases. Moreover, we allow for intrinsic heating of the cool star-forming clouds. The results show that, in addition to the heating by supernovae that provides the global star-formation regulation mechanism, young stars also act locally within their star-forming sites. Open-box models that allow inflow of all existing gas phases with similar physical states, but restrict the outflow to the hot phase only, successfully explain the dilution of metallicity and the large fluctuations of the star-formation rate, of the volume-filling factors, and of the amounts of the different gas phases.
Article
The internal structure of spacetime inside a black hole is investigated on the assumption that some limiting curvature exists. It is shown that the Schwarzschild metric inside a black hole can be attached to the de Sitter one at some spacelike junction surface, which may represent a short transition layer. The possible fate of the de Sitter space which arises in the interior of a black hole in this model is discussed.
Article
It is proposed that the dense matter formed in the collapse of large stars goes strange while still in the nucleon-meson (broken chiral symmetry) phase through kaon condensation. The K− meson energy is lowered, with increasing density, by the attractive vector mean field originating from the dense nucleonic matter. Once the K− energy comes down to the electron chemical potential μe, which increases with increasing density, the electrons change into K− mesons through the reaction e− → K− + ν. This is estimated to occur at a density ϱc ∼ 3ϱ0, where ϱ0 is nuclear matter density. Above the density ϱc, the K− mesons condense into a zero-momentum state. Roughly as many protons as neutrons are present, because the charge carried by the former can be neutralized by the kaons. As a result, the compact remnant of the collapse is a nuclear matter, or "nucleon", star, rather than a neutron star. With the inclusion of kaon condensation, the equation of state of dense matter is softened, resulting in a maximum stable compact object mass of Mmax ∼ 1.5 M⊙. It is shown, however, that cores with masses in the range ∼ 1.5–1.8 M⊙ can be stable for long enough to produce nucleosynthesis and to return matter to the galaxy before they later collapse into black holes.
Article
We propose a model describing two globally interacting universes containing particles with positive and negative energies respectively. It is shown that the elementary particle physics inside each of the universes remains the same as if there were no other universe with particles with another sign of energy, whereas the effective cosmological constant in each of the universes vanishes automatically.
Article
It is argued that if the present vacuum energy density ϱv exceeds some extremely small critical value ϱc (ϱc ∼ 10⁻¹⁰⁷ g cm⁻³ for chaotic inflation in the theory), then the lifetime of mankind in the inflationary universe should be finite, even though the universe as a whole will exist without end. A possible way to justify the anthropic principle in the context of the baby universe theory, and to apply it to the evaluation of the masses of elementary particles, of their coupling constants, and of the vacuum energy density, is also discussed.
Article
We show that one may expect a charged kaon condensate to form in matter at several times nuclear density. The condensate is driven to a large extent by the "sigma-term interaction" between mesons and baryons, a symmetry-breaking effect proportional to the strange quark mass. Using the SU(3) × SU(3) chiral Lagrangian to model meson-baryon interactions, we show that as the density increases to three or four times nuclear density, baryonic matter acquires a strangeness-to-baryon ratio approaching unity. The relevance of kaon condensation as a route to strange quark matter is discussed.
Article
An extended chaotic inflation scenario is proposed. In this scenario the values of the effective gravitational constant in different parts of the universe may differ from each other. Depending on the choice of a particular model, the value of the gravitational constant in our part of the universe either can be expressed through other coupling constants in the theory or can be determined with the help of anthropic considerations. In some models the weakness of the gravitational interactions may be related to the duration of inflation.
Article
It is proposed that the energy and momentum injected from OB associations can act as the regulators of high-mass star formation throughout the disk of a spiral galaxy. The mechanism, interacting blast waves driven by supernovae and stellar winds, leads to a generalized Schmidt law for the star formation rate and could explain in a natural way the galactic gradients of metals and star formation rate. The formation of massive stars is determined by the average ambient density in the disk while low-mass star formation is regulated by the density in molecular clouds. That is, there are two modes of star formation, but the physical mechanism is the same in both cases.
Article
Assuming that star formation regions are supported against gravity by winds from low-mass young objects, the stellar birthrate obtained for winds interacting in the momentum-conserving stage correlates with the molecular gas density n of the parent fragment as n to the 13/8 power or n to the 5/8 power, for rates per unit volume or per unit mass, respectively. Birthrates derived from protostellar rotationally driven winds are in good agreement with the observed star production in the cloud B18. With the aid of observed Taurus-Auriga complex properties, the present model is extrapolated to the Galaxy as a whole, yielding a predicted average rate for the Milky Way that is in good agreement with standard estimates based on observations of the solar neighborhood.
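The two quoted exponents are mutually consistent: since the mass density of the fragment scales as n, the rate per unit mass is the rate per unit volume divided by n, so the exponents differ by exactly 1 (13/8 − 5/8 = 1). A minimal numerical check, where `rate_ratio` is a hypothetical helper, not anything from the cited paper:

```python
# Scaling relations quoted in the abstract: birthrate per unit volume
# scales as n**(13/8) and per unit mass as n**(5/8). The exponents differ
# by exactly 1 because the mass density itself scales as n.

def rate_ratio(n2, n1, exponent):
    """Factor by which the birthrate changes when density goes n1 -> n2."""
    return (n2 / n1) ** exponent

# Doubling the molecular gas density:
per_volume = rate_ratio(2.0, 1.0, 13 / 8)
per_mass = rate_ratio(2.0, 1.0, 5 / 8)

# The two factors differ by exactly the density ratio n2/n1 = 2.
assert abs(per_volume / per_mass - 2.0) < 1e-12
print(round(per_volume, 2), round(per_mass, 2))
```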
Article
Analytic models for the evolution of disk galaxies are presented, placing special emphasis on the radial properties. These models are straightforward extensions of the original Schmidt (1959, 1963) models, with a dependence of star formation rate on gas density. The models provide successful descriptions of several measures of galactic disk evolution, including solar neighborhood chemical evolution, the presence and amplitude of metallicity and color gradients in disk galaxies, and the global rates of star formation in disk galaxies, and aid in the understanding of the apparent connection between young and old stellar populations in spiral galaxies.
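A Schmidt law of this kind can be sketched as a closed-box gas-depletion model, dΣ/dt = −c Σ^n. The coefficient, exponent, units, and time step below are purely illustrative, not the paper's calibrated values:

```python
def evolve_gas(sigma0=10.0, n=1.4, c=0.05, dt=0.01, t_end=10.0):
    """Closed-box Schmidt-law evolution: dSigma/dt = -c * Sigma**n.
    Simple forward-Euler integration; all parameters are illustrative."""
    sigma = sigma0
    history = []
    t = 0.0
    while t < t_end:
        sfr = c * sigma ** n   # Schmidt law: SFR density scales as Sigma**n
        sigma -= sfr * dt      # all formed stellar mass is locked up
        t += dt
        history.append(sigma)
    return history

track = evolve_gas()
# Gas is consumed monotonically, fastest when its density is highest,
# which is what produces the early phase of enhanced star formation.
assert all(a > b for a, b in zip(track, track[1:]))
```

Because n > 1, the depletion is front-loaded: the star formation rate declines steeply at early times and flattens later, the qualitative behavior the Schmidt-type disk models above rely on.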
Article
The pre-main sequence X-ray emitting stars observed in molecular clouds appear to provide the bulk of the ionization. Newly forming stars therefore control the coupling of the magnetic field to the cloud. Since this coupling itself is believed to be responsible for the rate of cloud collapse, it is suggested that there is a natural feedback mechanism, involving observed X-rays, which is capable of regulating molecular cloud evolution and the rate of star formation.
Article
Understanding how stars like the sun formed constitutes one of the principal challenges confronting modern astrophysics. In recent years, advances in observational technology, particularly at infrared and millimeter wavelengths, have produced an avalanche of critical data and unexpected discoveries about the process of star formation, which is blocked from external view at optical and shorter wavelengths by an obscuring blanket of interstellar dust. Fueled by this new knowledge, a comprehensive empirical picture of stellar genesis is beginning to emerge, laying the foundations for a coherent theory of the birth of sunlike stars.
Article
The implications of a cosmological scenario which explains the values of the parameters of the standard models of elementary particle physics and cosmology are discussed. In this scenario these parameters are set by a process analogous to natural selection which follows naturally from the assumption that the singularities in black holes are removed by quantum effects leading to the creation of new expanding regions of the universe. The suggestion of J. A. Wheeler that the parameters change randomly at such events, leads naturally to the conjecture that the parameters have been selected for values that extremize the production of black holes. This leads directly to a prediction, which is that small changes in any of the parameters should lead to a decrease in the number of black holes produced by the universe. Thus, in this case a hypothesis about particle physics and quantum gravity may be refuted or verified by a combination of astrophysical observation and theory. This paper reports ...
Article
PSR J1518+4904 is a recently discovered 40.9 ms pulsar in an 8.6 day, moderately eccentric orbit. We have measured pulse arrival times for this pulsar over 1.4 yr at several radio frequencies, from which we have derived high-precision rotational, astrometric, and orbital parameters. The upper limit for the period derivative of the pulsar, Ṗ < 4 × 10⁻²⁰, gives a characteristic age of at least 1.6 × 10¹⁰ yr, among the highest known. We find the orbit to be precessing at a rate of 0.0111 ± 0.0002° yr⁻¹, which yields a total system mass (pulsar plus companion) of 2.62 ± 0.07 M⊙ according to general relativity. Further analysis of the orbital parameters yields a firm upper limit of 1.75 M⊙ on the pulsar mass and constrains the companion mass to the range 0.9 to 2.7 M⊙. These masses, together with the sizable orbital eccentricity and other evidence, strongly suggest that the companion is a second neutron star.
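The quoted age limit follows from the standard spin-down (characteristic) age τ = P / (2Ṗ), using only the numbers given in the abstract; since Ṗ enters in the denominator, an upper limit on Ṗ gives a lower limit on the age:

```python
P = 40.9e-3          # pulse period in seconds
Pdot_max = 4e-20     # upper limit on the period derivative (s/s, dimensionless)
SECONDS_PER_YEAR = 3.156e7

# Characteristic (spin-down) age tau = P / (2 * Pdot); the upper limit
# on Pdot therefore gives a lower limit on the age.
tau_min_yr = P / (2 * Pdot_max) / SECONDS_PER_YEAR
print(f"{tau_min_yr:.2e} yr")  # ≈ 1.6e10 yr, matching the quoted lower bound
```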
Article
We attempt to understand the fate of spacelike gravitational singularities in string theory via the quantum stress tensor for string matter in a fixed background. We first approximate the singularity with a homogeneous anisotropic background and review the minisuperspace equations describing the evolution of the scale factors and the dilaton. We then review and discuss the behavior of large strings in such models. In a simple model which expands isotropically for a finite period of time we compute the number density of strings produced by quantum pair production and find that this number, and thus the stress tensor, becomes infinite when the Hubble volume of the expansion exceeds the string scale, in a manner reminiscent of the Hagedorn transition. Based on this calculation we argue that either the region near the singularity undergoes a phase transition when the density reaches the order of a string mass per string volume, or that the backreaction of the produced string matter dramatically modifies the geometry. Comment: Some minor errors corrected; version to be published in Class. Quant. Grav.; standard LaTeX, uses epsf.tex, 44 pages, 4 figures