
More is different: Broken symmetry and the nature of the hierarchical structure of science

E:CO 2014 16(3): 117-134 | 117
More is dierent
More is dierent: Broken symmetry
and the nature of the hierarchical
structure of science
P.W. Anderson (with an introduction by Jeffrey A. Goldstein)
Reduction, construction, and emergence in P. W. Anderson’s “More is different”
The central task of theoretical physics in our time is no longer to write down the
ultimate equations but rather to catalogue and understand emergent behavior in its
many guises…
—Laughlin and Pines (2000)
P. W. Anderson’s classic paper was selected for republishing in this issue for several reasons. First, because it presages several of the major constructs underlying the contemporary study of complex systems. Second, because the central focus of the paper is on what later became known as “emergence”, one of the dominant themes of our classic papers. Third, because Anderson was one of the founders of the Santa Fe Institute in the mid-eighties, where his earlier formulations of ideas like spin glasses, complex optimization, simulated annealing, evolution on rugged landscapes, collective excitations, and spontaneous symmetry breaking were taken up by researchers coming from diverse disciplines (see the very informative interview of Anderson conducted by Alexei Kojevnikov, 1999, 2000; and Anderson’s webpage at Princeton University where, as a nonagenarian, he is still active as a professor emeritus: Anderson, n.d.).

Classic Paper Section

Anderson, P.W. (1972). “More is different: Broken symmetry and the nature of the hierarchical structure of science,” Science, ISSN 0036-8075, 177(4047): 393-396. Reproduced by kind permission.
Although Anderson’s ideas on emergence had long been a rich source of inspiration for me, the more I read in preparing this introduction, the more startled I became by the vast breadth, depth, and prescience of his work. The era of neo-emergentism which we are passing through now—Emergence: Complexity & Organization being an emblem of the kind of themes explored in contemporary work on complex systems—has been marked by a movement away from the more speculative character of proto- and mid-phase emergentism; a shift largely made possible by innovative tools of empirical research and experimental design, as well as an impressive array of sophisticated mathematical and computational perspectives. Anderson’s paper “More is different” can be viewed as a primer of what a large part of the study of complex systems would later include.
That his early work extends beyond partisan issues in solid state physics can be appreciated by considering the fact that Anderson’s championing of the notion of spontaneous symmetry breaking (SSB), offered in this paper as a general “mechanism” for the processes of emergence, had originally been offered by Anderson to Peter Higgs and other progenitors of the now notorious Higgs particle as an explanation of the mechanism by which the Higgs particle acquires mass. In fact, many physicists have urged that the name of this momentous discovery at CERN be changed to the “Anderson-Higgs” mechanism, an appellation that can already be found in many influential papers in the field (see Moffat, 2014). Although the idea of SSB has the advantage of being a way of accounting for processes of emergence in a theoretical realm all-too-often neglecting the whole issue of process (see Goldstein, 2013b, 2014), it is not an idea coming without a certain measure of obscurity and even ambiguity as to what it ultimately amounts to explanatorily, a topic I will say more about below.
It is worthwhile to recognize that Anderson’s paper was written within the context of an ongoing, and at the time vituperative, debate between particle physicists, on the one hand, with their highly effective Standard Model of the so-called fundamental forces (weak, strong, electro-magnetic, on up to a final unified “theory of everything”) and their mostly negative attitude towards emergence in the past, and solid state or condensed matter physicists, on the other hand, whose investigations into phenomena such as phase transitions, superconductivity, ferromagnetism, and so on required the introduction of constructs and methods pertaining to higher scale dynamics, organizing principles, and emergent collectivities. Two of the chief antagonists in this conceptual battle have been the Nobel Laureate particle physicist Steven Weinberg, known for his work on the unification of the electro-magnetic and the weak forces, and Anderson, who of course is another Nobel Prize winning physicist (on this dispute see Silberstein, 2009). This clash shows itself in this classic paper through Anderson’s attack on strident reductionism, of which Weinberg has long been a vigorous proponent, along with Victor Weisskopf, whose reductionist stance involving extensive and intensive explanatory strategies Anderson takes on in his paper.
Later, Anderson (2001) looked back and saw those early times as when he had become fed up with the denigration he believed the particle physicists were aiming at solid state and other physicists. He pointed out that most physicists were conducting research in what he considered a harder field than particle physics; a “frontier between the mysterious and the understood: the frontier of complexity…where the watchword is not reductionism but emergence.” Of course I don’t know the inside story, but the more I have read about this acrimony the more I see how much ordinary old ego has been involved in these so-called theoretical debates in physics. Perhaps there is a touch of envy on both sides; for example, the SSB notion that Anderson put forward very early had a big influence on Weinberg’s later work. And then there’s the rancorous story of how Anderson opposed Weinberg’s plea for the building of a super-duper collider in Texas before the Large Hadron Collider was built at CERN, with Anderson’s side winning out, of course.
In his classic paper, Anderson did not then, nor does he now, completely renounce reductionism as such, as if he were calling for an embrace of some kind of “holism”. Instead his criticism is of the totalizing type, which he describes through his notion of the “constructionist hypothesis”: “The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe”. Strident reductionists should give fealty to the constructionist hypothesis since they hold that what the reductionist approach discovers about the foundational dynamics, formulated as foundational equations (e.g., through the formats of canonical Lagrangians and Hamiltonians), is what’s ultimately causing the system under scrutiny to be and act as it does. From such a perspective, if you know the foundational equations you certainly should be able to reconstruct the whole thing from the foundation up.

For Anderson, though, the constructionist hypothesis does not and cannot live up to its promise since the reduction on which it is based has not included the equally fundamental fact that “entirely new properties” arise at each new level of complexity and scale (scale here is not simply a synonym for level of resolution). It is these
“entirely new properties”, which later would be termed “emergent properties”, which require the introduction of a new formulation, the order parameter, the metric of the new order expressed in the emergent properties. The notion of an order parameter goes back to the great Russian physicists Lev Landau and Vitaly Ginzburg in their “phenomenological” models of such emergent phenomena as superconductivity. Later, Hermann Haken, and to a lesser extent Ilya Prigogine, took up the mantle of the order parameter in the Synergetics approach and the far-from-equilibrium approach to understanding collective or coherent phenomena (dissipative or partly ordered structures being one of the prototypes of emergence). The very fact of needing a new variable like an order parameter to formulate these collective phenomena shows that there was something seriously lacking in a strictly reductionist strategy.
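The Landau–Ginzburg role of the order parameter can be made concrete with a minimal numerical sketch. The quartic free energy below is the standard textbook form of the Landau model; the coefficients and the critical temperature are arbitrary illustrative choices, not values from Landau or Ginzburg:

```python
import math

def equilibrium_order_parameter(T, Tc=1.0, a=1.0, b=1.0):
    """Minimize the Landau free energy F(psi) = a*(T - Tc)*psi**2 + b*psi**4.

    Above Tc the only minimum is psi = 0 (the symmetric state); below Tc
    two degenerate minima appear at psi = +/- sqrt(a*(Tc - T) / (2*b)),
    and the system must 'choose' one: the symmetry breaks spontaneously.
    """
    if T >= Tc:
        return 0.0
    return math.sqrt(a * (Tc - T) / (2 * b))

# The order parameter switches on continuously at the critical point:
print(equilibrium_order_parameter(1.5))   # above Tc: 0.0
print(equilibrium_order_parameter(0.5))   # below Tc: 0.5
```

The point of the sketch is exactly Goldstein’s: the new variable psi is not present in any micro-level description; it is a measure of the emergent collective order itself.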
By the way, it seems that Anderson was not particularly fond of the work of
Prigogine, a sentiment that he was not bashful to frequently and publicly announce.
According to Anderson (cited in Hartman, 2000), although Prigogine’s dissipative structures did indeed consist of new patterns, Anderson held such patterns to be only temporary modifications that did not possess enough permanence to account for enduring emergent phenomena. Furthermore, Anderson doubted that in the far-from-equilibrium conditions within which dissipative structures arise, there was a well-defined function behaving as an order parameter. One can surmise, however, that besides genuine scientific disagreement, one might see not a little ego involved here as well: they both received Nobel Prizes in 1977, Anderson in physics and Prigogine in chemistry; some have held that it was not uncommon for Prigogine to strut his ideas around as solving deep issues in quantum mechanics, cosmology, and metaphysics without a great deal of experimental support for their generality; and Anderson was well-known as a curmudgeon whose blunt hammer could land on subjects about which he did not know that much.
Superconductivity and spontaneous symmetry-breaking
Anderson’s push for emergence and allied concepts for explaining collective phenomena has had a deep and lasting effect in many fields where complexity constructs have been applicable. Usually, this application of emergence also includes the theoretical framework of spontaneous symmetry breaking as the account of how emergence comes about. In the BCS theory of superconductivity of 1957, which led to the award of a Nobel Prize to its three theorists—John Bardeen, Leon Cooper, and Robert Schrieffer—one does find a micro-level theory on the emergence of the radically novel feature of resistance-free electrical current in a metal taken to a very low temperature, an unexpected micro-level pairing of electrons (called Cooper Pairing). However, the
Cooper Pairing effect on the micro-level is only a doorway to the generation of a macro-level collective action, a macro-level “quantum wave”, a surprising collectivity that seems to go against what the foundational equations would suggest, since electrons are supposed to repel and not attract each other because they have the same charge.
This puzzle was eventually solved after thirty years of intense investigation by some of the world’s most eminent physicists and chemists who already knew most of the foundational equations that would be operative, and one of the keys to this solution would be SSB. Taking a complicated and convoluted story and boiling it down intentionally to a simplistic model: first, the metal has to have a unique constitution whereby at normal temperatures it is not a good conductor of electricity. This necessity means that the mechanism at work in superconductivity cannot be explained as an extension of an already existing metallic property. Second, very low temperatures are required, eventually realized as a necessary way that the expected thermal noise in the system could be diminished to the extent that allowed something unique to take place. Then, once this noise was low enough, electrons could pair up, against expectations, through the intermediary actions of phonons, quasi-particles or “collective excitations” composed of vibrations of the atoms of the metallic lattice structure. As the thermal noise decreased, the electrons could be attracted to the phonons, which played the role of a kind of marriage broker passing the attraction on to another electron. Of course, there is a lot more going on here concerning the role of the phonons and the forming of layers of electrical flow and the expulsion of the magnetic field and so forth. What is important for appreciating the role of SSB is that the collective “quantum wave” of the electron pairs became a kind of order parameter representing a breaking of the original gauge symmetry, a feat accomplished by the low temperature.
In two quite illuminating but difficult papers on SSB in emergent superconductivity, the philosopher of science Margaret Morrison (2006, 2012) has called attention to how it is indeed tempting to interpret the Cooper-pairing scenario as an example of an effective reductive explanation. Yet, Morrison goes on to demonstrate how a close scrutiny of the foundational equations with their accompanying gauge symmetry simply cannot predict the ensuing emergent novelty. One reason has to do with the property of universality, which refers to how a replacement with a completely different metal, with its different micro-level, i.e., foundational, composition, can yield the same phenomenon of superconductivity (for more on universality and this view of emergence, see Batterman, 2005).
According to Morrison, because Cooper pairs only manifest at the required critical low temperature, their presence demonstrates that the system has undergone a phase transition, which calls into play a novel order parameter (and, I might add, not just more criticalizations of any control parameters) measuring the amplitude of this emergent macro-level collective quantum wave. She argues that it is possible to derive the exact, emergent properties of the superconductive phenomena by empirical measurement of the ostensive phenomena and not through appeal to the foundational Lagrangian and its symmetry.
It is a general principle in physics that systems seek out their most stable state. A simple example is the instability of a pencil balanced on its end. A very slight movement of the table underneath will perturb this instability so that the pencil falls down and assumes the much more stable condition of lying horizontally on the table. It would take an appreciably stronger jolt to get the horizontally lying pencil to move and maybe fall off the table. In a superconducting metal at low temperatures, the normal, unpaired flow of electrons becomes an unstable energy condition. The symmetry of the original equations is still being conformed with, but the symmetric state is now unstable. The more stable state is for the macroscopic wave function of the collective electrons to form. It has been said that the context of the appearance of this emergent phenomenon at the low temperature “solves” its governing equations by assuming a more stable dynamics. The stability of the asymmetric emergent phenomenon in the system’s new context trumps the unstable but symmetric state the equations would seem to prescribe. As the prominent quantum field theorist and historian of physics Silvan Schweber (cited in Mainwood: 115) put it, Anderson’s main message with his use of SSB was that:

... it is not enough to know the ‘fundamental’ laws at a given level. It is the solutions to equations, not the equations themselves, that provide a mathematical description of the physical phenomena. ‘Emergence’ refers to properties of the solutions, in particular the properties that are not readily apparent from the equations.
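Schweber’s equations-versus-solutions point can be illustrated with the standard double-well toy potential, a generic textbook example rather than anything from Anderson’s paper; the step size and iteration count below are arbitrary choices:

```python
def V(x):
    """A symmetric double-well potential: V(-x) == V(x) for every x."""
    return -x**2 / 2 + x**4 / 4

def dV(x):
    """Derivative of V; dV/dx = 0 at x = 0 and at x = +/- 1."""
    return -x + x**3

def descend(x, step=0.01, iters=10_000):
    """Crude gradient descent: follow the slope down to a stable state."""
    for _ in range(iters):
        x -= step * dV(x)
    return x

# The symmetric point x = 0 solves dV/dx = 0 but is unstable: any tiny
# perturbation (the pencil nudged on the table) slides down to x = +1
# or x = -1, and each of those stable solutions breaks the x -> -x
# symmetry that the potential itself never loses.
left, right = descend(-0.001), descend(+0.001)
print(round(left, 6), round(right, 6))   # -1.0 1.0
```

The equation is symmetric at every temperature of the story; it is only the stable solutions that are not, which is exactly the sense in which the symmetry is “hidden” rather than destroyed.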
The micro-level explanation utilizing foundational equations with their original symmetry is not wrong—rather, the symmetry is “hidden” behind the appearances of the observed non-symmetrical but stable condition. Thus, at higher temperatures, the stable states show the symmetry of the Hamiltonian, e.g., with no regular spatial arrangements (special spatial arrangements break the symmetry—compare a circle with a circle in which a pod is forming, aimed in some preferred direction). To be sure, the construct of SSB has struck not only a few eminent physicists as somehow fishy in the sense of its peculiar ability to explain ad hoc via the difference between equations and their solutions and an appeal to hidden and revealed symmetries. We’ll come back to this seemingly explanatory legerdemain later in this introduction.
A helpful discussion of what is going on in the case of this bi-fold explanatory strategy, contrasting equation and solution, can be found in a very accessible paper (Laughlin & Pines, 2000) authored by Robert Laughlin, another Nobel Laureate, and David Pines, Anderson’s eminent nonagenarian colleague and Santa Fe Institute pioneer (see references for the webpage of the Institute for Complex Adaptive Matter founded by Pines). They distinguish between knowing the rules operative in the actual, manifested domain of the emergent phenomena (calling such collectivities “protectorates” because of their independence from micro-level fluctuations) and knowing the rules of the foundational equations. A close examination of the emergent protectorates reveals they are governed by emergent rules which cannot be determined by the foundational equations. Rather, one needs experiment and measurement, and the way experiment and measurement reveal the emergent context and its new asymmetric rules. Furthermore, according to Laughlin and Pines, the emergent protectorates require “higher organizing principles” which are not discernible at the level of the foundational equations and are consequently typically downplayed by reductionist scientists:
The fact that the essential role played by higher organizing principles in determining emergent behavior continues to be disavowed by so many physical scientists is a poignant comment on the nature of modern science. To solid-state physicists and chemists, who are schooled in quantum mechanics and deal with it every day in the context of unpredictable electronic phenomena such as organogels, Kondo insulators, or cuprate (high temperature) superconductivity, the existence of these principles is so obvious that it is a cliché not discussed in polite company. However, to other kinds of scientist the idea is considered dangerous and ludicrous, for it is fundamentally at odds with the reductionist beliefs central to much of physics. But the safety that comes from acknowledging only the facts one likes is fundamentally incompatible with science… (Laughlin & Pines, 2000: 30).
Finally, concerning the unpredictability of higher-level emergent collective phenomena from foundational equations, Gu et al. (2008) offer a very sophisticated updating of Anderson’s early work (using the Ising model formulations of collective phenomena) within the context of findings from mathematical logic and computational complexity theory concerning undecidability in formal systems (in a previous paper, Goldstein, 2014, I tried to show a related way of linking undecidability, uncomputability, and the emergent gap). The Gu et al. paper demonstrated how, in the field of solid state physics, for systems manifesting emergent collectivity that is observable macroscopically, “the defining properties or behaviors cannot be deduced from first principles …[so that] from knowledge of the lattice Hamiltonian … any macroscopic law that governs these quantities must be logically independent of the fundamental interactions” (on a closely related connection of physics with undecidability see Moore).
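The Ising setting that Gu et al. work in can be toyed with directly. Below is a minimal Metropolis simulation of the 2D Ising model — a standard textbook sketch, not Gu et al.’s construction; the lattice size, sweep count, and seed are arbitrary choices. The update rule is purely local, yet a macroscopic order parameter, the magnetization, appears below the critical temperature and washes out above it:

```python
import math
import random

def ising_magnetization(T, L=16, sweeps=500, seed=1):
    """Metropolis simulation of the 2D Ising model (units J = kB = 1).

    Returns the magnetization per spin after `sweeps` lattice sweeps.
    Each step flips one spin according to a purely local energy rule,
    yet below the critical temperature (Tc ~ 2.27) the spins lock into
    a collective ordered state.
    """
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]              # start fully ordered
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # Energy cost of flipping spin (i, j), periodic boundaries:
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                  spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] *= -1
    return sum(map(sum, spins)) / (L * L)

print(abs(ising_magnetization(T=1.0)))   # ordered phase: close to 1
print(abs(ising_magnetization(T=5.0)))   # disordered phase: close to 0
```

Nothing in the local flip rule mentions a system-wide magnetization; the collective quantity only shows up when one measures the whole lattice, which is Gu et al.’s macroscopic observable in miniature.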
The process of emergence, SSB, and the need for contextual exploration
Recently (Goldstein, 2013a, 2013b, 2014), I have called attention to what I see as a troubling lack of inquiry into how emergence works, i.e., the mechanisms, processes, and operations possessing the requisite potency for generating emergent phenomena with their unique and radical properties. This trend can be noticed not only in critiques of the idea, where it might be expected because emergence itself is denied, but also, somewhat surprisingly, in strong endorsements of the idea. For instance, in off-handed remarks about emergence happening on a higher level out of interactions on a lower level, not much of interest at all is said. Such a deficiency can damage the credibility of the idea, which I think has happened in the case of the questionable and strikingly lightweight co-optations of the idea by the particle physics and cosmology community.
Anderson, though, does offer a “mechanism” or process by which emergents emerge, namely, spontaneous symmetry breaking, as described above in the emergence of superconductivity. To better appreciate what SSB can offer to an understanding of the processes of emergence, it needs to be distinguished from other types of symmetry breaking. One type, sometimes called “explicit” symmetry breaking (see Anderson, 1984), involves adding a symmetry-breaking term to the fundamental equations, e.g., a term representing an operation that leads to a particular spatial direction, thereby breaking an initial state in which no special spatial direction is chosen. Related is the kind of symmetry breaking associated with bifurcation in a dynamical system, occurring when a bifurcation parameter reaches a certain threshold, e.g., a bifurcation parameter which leads to a criticalization so that symmetry breaks in the so-called “one hump” (quadratic) maps studied by May and Feigenbaum, and which inspired so many aspiring complexity buffs in the nineteen eighties. It has not been uncommon for approaches to emergence to include the arising of new attractors at bifurcation thresholds.
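The bifurcation-type symmetry breaking in the “one hump” quadratic maps is easy to exhibit numerically. A minimal sketch of the logistic map x → r·x·(1 − x); the starting point and iteration counts are arbitrary choices:

```python
def logistic_attractor(r, x0=0.2, transient=1000, keep=8):
    """Iterate the logistic map x -> r*x*(1 - x), discard the transient,
    and return the distinct values the orbit settles onto (rounded)."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = set()
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.add(round(x, 6))
    return sorted(orbit)

# As the control parameter r passes the threshold at r = 3, the single
# fixed-point attractor splits into a period-2 cycle: a new attractor
# arising at a bifurcation threshold.
print(logistic_attractor(2.8))   # one value: a single fixed point
print(logistic_attractor(3.2))   # two values: the period-2 attractor
```

Below the threshold every long-run orbit is indistinguishable from its time-shifted self; past it, the orbit alternates between two values, and that temporal symmetry is broken.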
However, the SSB that Anderson is known for, and that he talks about in this classic paper, is a stranger beast. Above we came across such descriptors as “hidden” and “apparent”, equation versus solution, gauge symmetry versus actual ostensive manifestations of emergent phenomena, stable versus unstable, and others. Of course, there is nothing particularly misleading about such ways of interpreting SSB and thereby how emergence works. But what have we really added to our store of understanding through these metaphors? The problem is not that we are resorting to metaphors, since all explanations at some point employ metaphors: a “nucleus” is a metaphor, “electric current” is a metaphor, “string” is a metaphor. Instead, the problem is more that something has occurred and yet one cannot quite get a conceptual grasp of it, and what is offered instead is something as vague and non-forthcoming as “hide and seek.” In an important sense the supposed symmetry of the foundational equations has to break since the apparent symmetry was nothing more than superficial anyway: “a way for nature to say, ‘your theory of symmetry is wrong in the first place’”, and this is related to a kind of epistemological filtering (this way of putting it is thanks to Kurt Richardson, Managing Editor of E:CO and a complexity-oriented physicist himself). I’ve had the sense of a kind of conceptual sham being perpetrated, though not intentionally, but rather out of ignorance and a failure to see that at some point mathematical predictability has to be seen as not the essential fundamental nature of nature.
I propose thinking about the difference between symmetry and the ensuing broken symmetry as like the shift in “aspect seeing” of the famous “duck-rabbit” drawing appealed to by Wittgenstein in the Philosophical Investigations. From one perspective, the drawing appears as a duck’s bill with the duck’s eye facing to the left, while from the other perspective, the drawing shifts: the duck’s bill is now transposed into a rabbit’s long ears, with the rabbit’s eye (the same eye but direction switched) facing to the right. Does this mean it is all totally arbitrary as to which aspect one is seeing? I suggest no, since the sequence of the aspect-seeing manifests the temporal unfolding of the phase direction from symmetry to symmetry breaking. One starts, say, with the duck and assumes there is the same duck mirrored looking in the opposite direction. But then the aspect shifts and one sees the rabbit, thereby realizing there is another perspective; but to see this perspective demands that one break the symmetry of the presumed mirror image of the duck. In a sense, this new aspect can only be discerned through the recognition of the entirely new properties which is occasioned by experiment, measurement, and the subsequent new context. I believe Anderson would go along with this since he remarks in this classic paper that at some point even symmetry breaking needs to be overtaken by considerations of increasing complexity, since it is complexity involved with entirely new properties that is revealed as we ascend the hierarchy of the special sciences. Each of these new sciences investigates new levels with their own new foundational stories, which underlie the fact that “basically new types of behavior can result.”
Much of this, in my opinion, has to do with the explanatory gap that nearly all accounts of emergence claim for emergence. This is the gap of predictability, deducibility, computability. In terms of SSB this is the gap of the symmetry breaking, the gap of not being able to go from the foundation to the emergent outcome, as I have pointed out in Goldstein (2014). In fact, upon first coming across it, it can evoke a sense of trickery or sham: it is postulated to have occurred and yet one cannot quite see how. This sentiment is related to how SSB talks about a hidden symmetry associated with the foundational formulation. At high temperatures, the stable state is the one with the foundational equations’ symmetries. The strange thing is that the foundational equations, and the symmetries associated with them, remain even with the transition occurring at the very low temperature. At that point, however, for the system to remain stable (which seems to be the stronger pull) these symmetries break, but breaking doesn’t mean they are deleted.
The self-transcending construction hypothesis
Anderson, not content to leave his paper with just the negative message of decrying reductionist science, also emphasizes that “at some point we have to stop talking about decreasing symmetry and start calling it increasing complication… with increasing complication at each stage, we go on up the hierarchy of sciences”. Today, of course, we are more likely to use the term “complexity” for what he meant by “complication”. Furthermore, he shifts the concept of fundamental from referring only to one fundamental set of laws at the level of the tiniest microscopic scale to the recognition that each level has its own set of foundational dynamics and behaviors, and thereby laws of the new dynamics at that new level.
I propose following this sentiment by relooking at his constructionist hypothesis, inverting it and adding to what we normally take as constructions the crucial aspects of the entirely new properties discovered at each new level. In previous work, I have termed these emergence-generating constructions “self-transcending constructions”, the qualification “self-transcending” indicating the unique nature of this kind of construction: they must be capable of producing the requisite emergent novelty at each new level. For example, superconductivity, as an actual real-world phenomenon, results from self-transcending constructional processes. In this perspective, the symmetry breaking of course remains, but the emphasis instead is on how the foundational equations are combined with the numerous other factors facilitating the radically new properties of the ultra-cold metal, the outcome being a self-transcending construction. Most important is that observation, context, and measurement are coupled with whatever can be gleaned from foundational equations. Insight into what will need to be included in the formulation of the self-transcending construction will require, in each instance of experiment that discovers “new laws, concepts, and generalizations”, “inspiration and creativity”.
If emergence eludes explanation via strict reductionism, then the specific ways such systems elude reduction make up the crucial factors that must be added to how the self-transcending constructional process works. This is a call to envision constructional processes that somehow or other manage to incorporate what reduction, on its downward trajectory to the foundation, has left out: that is, the entirely new properties at each level of scale or complexity. This very different type of construction would need to be able to, de facto, contain those operations, processes, and constraints able to construct emergent phenomena with their radically novel properties. Emergence is quite different from ordinary change and the ordinary novelty that results from ordinary change. That is why its construction needs to be radically different from ordinary change processes.
Since processes of construction are all about the building-up of structure, pattern, organization, and ordering, self-transcending constructions consist of what possesses the potency for the building-up of novel structure, pattern, organization, and ordering. This implies that processes with this potency must have a capacity for continually taking extant structure and subjecting it to operations which transform this structure so that it now has “entirely new properties.” This seems a tall order and, as I have tried to indicate in past work, our imaginations have not been shaped to easily accept its possibility. That is why in previous papers I have offered examples from mathematics that specifically demonstrate the kinds of operations of which self-transcending constructions consist. That was meant to pry open the imagination, not to suggest that the self-transcending constructions at work in the instances of emergence all around us must conform to such obscure mathematics. Instead, it is a call to attend to what Laughlin and Pines say in the opening quote to this paper: that it is time to shift our attention away from the foundational equations and focus on observation and context.
In fact, the challenge facing the conceptualization of a self-transcending construction contains something akin to a paradox (as I’ve said before, a “flirtation with paradox” and not an embrace): what is being constructed via operations on substrates at their own level must at the same time transcend the level of these substrates, since the emergent phenomena as “protectorates” being constructed are independent of the lower, more microscopic level. These emergent protectorates possess what can be taken as the opposite characteristic to that found, e.g., in chaotic attractors with their sensitivity to initial conditions: emergent protectorates must be in an important sense insensitive to initial or micro-level conditions or they would not be capable of enduring. One response to this challenge is that even though emergent phenomena are built out of lower level substrates, Ganeri (2011) has pointed out the key operation of transformation, i.e., the substrates or parts are so transformed that the resulting emergent is constituted by what are effectively radically novel parts no longer tethered as before to their roots on the original level. The independence of emergent protectorates doesn’t happen by magic or creation ex nihilo but by the action of self-transcending constructional operations on phenomena below.
By the way, certain readers might find of interest the working out of similar ideas in a very different mathematical framework, that of category theory (Ehresmann & Vanbremeersch, 2007). There emergence can be likened to the mathematical object of a “category” on which four prototypes of construction operate: absorption, elimination, binding, and classification (or association). The self-transcending construction leading to the emergent phenomenon of the category is understood as a “complexification” whose formation has been the result of so much intermingling that the substrates of the formation “cannot be untangled” (Ehresmann & Vanbremeersch, 2007: Chapter 4 especially).
The anti-reductionist stance described by Anderson in this classic paper as well as in his later work is obviously not some uninformed and poorly thought-out gibberish condemning science, of which one unfortunately finds too much these days, even among those who should definitely know better. Rather, it is an attempt, on the part of a celebrated Nobel Laureate and coming out of his seven decades of research and theorizing, to lay out serious limitations in the thought-numbing variety of strict reductionism. Furthermore, the position taken in support of the idea of emergence and its application in the sciences did not emanate from any preconceived scientific or philosophical commitment to the idea: Anderson has intimated in interviews that he had not even known of the word "emergence" before he wrote this classic (which is why one cannot find the word used in the paper). It was only later, after his paper became better known and complexity/emergentist acolytes contacted him, inviting him to conferences and to contribute papers, that he felt himself drawn to a complexity perspective. It was only a dozen years later that the Santa Fe Institute was founded
for complexity-oriented research and theorizing, and Anderson was one of its chief intellectual architects.
One can read about Anderson's own work as well as the history of the SFI in many places. Here, I would like to very strongly recommend one of his recent works, More and Different: Notes from a Thoughtful Curmudgeon (Anderson, 2011), a fun and enlightening read replete with popular writings, essays, anecdotes, book reviews, and so on, covering an incredible and prolific career and life. Anderson was never one to be bashful in expressing his viewpoints, a curmudgeon not afraid to call out when the Emperor is truly not wearing any clothes. Without even knowing it, without even having to struggle with difficult technical ideas, after reading this book I felt that an enormous amount of information was mysteriously transmitted to my brain. I think it had to do with the book being a peek into the thinking of a truly great thinker.
Anderson, P.W. (1984). Basic Notions of Condensed Matter Physics, ISBN 9780201328301.
Anderson, P.W. (1995). "Physics: The opening up to complexity," Proceedings of the National Academy of Sciences, ISSN 1091-6490, 92: 6653-6654.
Anderson, P.W. (2001). "More is different: One more time," in N.-P. Ong and R. Bhatt (eds.), More is Different: Fifty Years of Condensed Matter Physics, ISBN 9780691088662, pp. 1-8.
Anderson, P.W. (2001). "Science: A 'dappled world' or a 'seamless web'?" Studies in the History and Philosophy of Modern Physics, ISSN 1355-2198, 32(3): 487-494.
Anderson, P.W. (2011). More and Different: Notes from a Thoughtful Curmudgeon, ISBN
Anderson, P.W. (n.d.). Anderson's personal webpage at Princeton University.
Batterman, R.W. (2005). The Devil in the Details: Asymptotic Reasoning in Explanation, Reduction, and Emergence, ISBN 9780195314885.
Ehresmann, A.C. and Vanbremeersch, J.P. (2007). Memory Evolutive Systems: Hierarchy, Emergence, Cognition, ISBN 9780444522443.
Ganeri, J. (2011). "Emergentisms, ancient and modern," Mind, ISSN 0026-4423, 120(479): 671-
Goldstein, J. (2013a). "Re-imagining emergence, Part 2," Emergence: Complexity & Organization, ISSN 1521-3250, 15(3): 121-138.
Goldstein, J. (2013b). "Re-imagining emergence: Part 1," Emergence: Complexity & Organization, ISSN 1521-3250, 15(2): 78-104.
Goldstein, J. (2014). "Re-imagining emergence, Part 3: Uncomputability, transformation, and self-transcending constructions," Emergence: Complexity & Organization, ISSN 1521-3250, 16(2): 116-176.
Gu, M., Weedbrook, C., Perales, A., and Nielsen, M.A. (2008). "More really is different," http://
Hartman, H. (2000). "Symmetry breaking and the origins of life," in Y. Bar-Yam (ed.), Unifying Themes in Complex Systems: Proceedings of the International Conference on Complex Systems, ISBN 9780738200491, pp. 248-257.
Institute for Complex Adaptive Matter,
Kojevnikov, A. (1999, 2000). "Interview with Dr. Philip Anderson: Oral history transcript" (conducted at the Princeton Physics Department Building, March 30, 1999, May 30, 1999, November 23, 1999, and June 29, 2000).
Laughlin, R. and Pines, D. (2000). "The theory of everything," Proceedings of the National Academy of Sciences, ISSN 1091-6490, 97(1): 28-31.
Mainwood, P. (2006). Is More Different? Emergent Properties in Physics, Doctoral Dissertation, University of Oxford, Oxford, England.
Moffat, J. (2014). Cracking the Particle Code of the Universe, ISBN 9780199915521.
Moore, C. (1990). "Unpredictability and undecidability in dynamical systems," Physical Review Letters, ISSN 0031-9007, 64(20): 2354-2357.
Morrison, M. (2006). "Emergence, reduction, and theoretical principles: Rethinking fundamentalism," Philosophy of Science, ISSN 0031-8248, 73: 876-887.
Morrison, M. (2012). "Emergent physics and micro-ontology," Philosophy of Science, ISSN 0031-8248, 79: 141-166.
Silberstein, M. (2009). "When super-theories collide: A brief history of the emergence/reduction battles between particle physics and condensed matter theory," Conference: Integrated History and Philosophy of Science, South Bend, Indiana: University of Notre Dame.