
Performance-oriented Architecture – An integrated discourse and theoretical framework for architectural design and sustainability towards non-discrete and non-anthropocentric architectures

PhD Thesis
School of Construction Management and Engineering
Michael U. Hensel
05 June 2012
The interaction of humankind with the natural world begets their mutual becoming:
transformation arises from the agency that imbues and entwines them. As the impact
of human intervention increases and accelerates the transformation of the natural
environment, the question arises whether their perceived diametrical opposition
continues to be useful in locating an integrated and complex approach to
architectural design and sustainability. Could an intensively context-embedded
architecture be in the service of the natural environment by interlinking its inherent
and latent agency with that of the natural environment? And if such a performance-oriented architecture is possible, how may it be thought of? This thesis focuses on formulating an integrated and overarching theoretical framework for a performance-oriented architecture. It explores the concepts of a non-discrete and non-anthropocentric architecture that opens itself out to the natural environment and
seeks to locate in the consequentially evolving extended threshold a heterogeneous
space that offers varied and sustainable provisions for human use and local
ecosystems. Four main efforts underlie this endeavour:
1. Critical examination of relevant architectural theories, practices and works,
paralleled, wherever useful, by a historical account of the developments that
led up to these theories and practices. This effort is geared towards
formulating core concepts and traits of a performance-oriented architecture.
2. Research by design efforts that inform the development of the traits of
performance-oriented architecture.
3. The development of an overarching inclusive and integrated theoretical
framework for performance-oriented architecture.
4. Discussion of ways in which architecture can serve as an extended interface
between the man-made and natural environments.
I confirm that this is my own work and the use of all material from other sources has
been properly and fully acknowledged.
Michael U. Hensel, 05 June 2011
My wholehearted gratitude belongs to my supervisor Prof. Emeritus Dr. George
Jeronimidis for supporting the latitude that this PhD thesis required and to my
examiners Prof. Dr. David Leatherbarrow and Prof. Emeritus Dr. Derek Clements-
Croome, as well as to my readers Dr. Pavel Hladik, Dr. Martina Keitsch, and
Prof. Dr. Birger Sevaldson, for immensely insightful commentary. I am grateful for
the DTA grant given by the University of Reading, which made it possible for me to
undertake this PhD.
I wish to express my heartfelt appreciation to the numerous companions and
collaborators on the road towards performance-oriented architecture. To name but a
few, I wish to thank my long-term companions at the OCEAN Design Research
Association and at the Sustainable Environment Association (SEA), as well as my
friends and colleagues Ludo Grooteman, Prof. Dr. Christopher Hight, Sabine Kraft,
Jeffrey Turko, and Prof. Dr. Julian Vincent for their generous support and prolific
collaboration. My sincere gratitude belongs to my committed students at the Oslo
School of Architecture and Design, the Architectural Association, the Izmir University
of Economy, the Rice School of Architecture, the University of Technology in Sydney,
and numerous other schools, who have massively contributed to this work through
their immensely dedicated individual and collective efforts.
Finally, my deepest gratitude belongs to my wife Defne Sunguroğlu Hensel for her
absolutely dedicated contributions, and unwavering support and patience and to my
mother and late father who made everything possible for me. Thanks also to Jojo
and Donny for joyful interruptions that helped maintain sense and sanity when the
road ahead seemed exceedingly steep.
Contents
1. Introduction: The task at hand
2. The road(s) to performance-oriented architecture
2.1 The rise of the sciences - in particular biology - and their impact on architecture
2.2 The rise of the notion of environment
2.3 The rise of systems-theory and systems-thinking and their impact on architecture
2.4 The rise of the notion of performance in the humanities, arts and sciences
2.5 The rise of the notion of performance in architecture
2.6 Discourses and research by design en route to performance-oriented architecture
2.7 Recent developments of design methods and research by design
3. Towards non-discrete architectures
4. Towards non-anthropocentric architectures
5. Traits of performance-oriented architecture
5.1 Reconciling dialectics
5.2 The local physical environment: local climate and micro-climate
5.3 The role of material performance
5.4 The active boundary, the articulated envelope and heterogeneous environments
5.5 The extended threshold
5.6 2nd degree auxiliarity - supplementary architectures
5.7 1st degree auxiliarity - embedded architectures
5.8 Multiple grounds and settlement patterns
6. The built environment as a repository of knowledge and different modes of knowledge production in architecture
7. The road(s) ahead
8. Context of research
9. Bibliography
1. Introduction: The task at hand
‘Above all we must remember that nothing that exists or comes into being,
lasts or passes, can be thought of as entirely isolated, entirely unadulterated.
One thing is always permeated, accompanied, covered, or enveloped by
another; it produces effects and endures them. And when so many things
work through one another, where are we to find the insight and discover what
governs and what serves, what leads the way and what follows?’
(Goethe 1988 [1825]: 145-46)
‘One can start from the idea that the world is filled not, in the first instance,
with facts and observations, but with agency. The world, I want to say, is
continually doing things, things that bear upon us as forces upon material
beings.’ (Pickering 1995: 6)
‘The environment must be organised so that its own regeneration and
reconstruction does not constantly disrupt its performance.’
(Alexander 1964: 3)
This thesis pursues the formulation of an integrated theoretical framework for
architectural design entitled performance-oriented architecture. What is meant by
‘integrated’ is [i] that the different currently existing and in part seemingly opposing
approaches to performance in architecture are reconciled and worked into a coherent
theoretical framework, and [ii] that the interdisciplinary character of performance-
oriented architecture is recognized and coalesced into a unified approach.
The thesis commences with this introduction that seeks to make the case for a
coalesced theoretical framework for performance-oriented architecture and that
introduces the scope of currently existing approaches to performance in architecture.
The following part examines several historical developments that have played a key
role in informing architecture and its relation to performance in particular (chapter 2).
This is followed by an elaboration of the underlying core concepts of [i] an intensely
context-embedded non-discrete architecture that draws away from a fixation with the
idiosyncratic object (chapter 3); and, [ii] a non-anthropocentric architecture that
caters not only for humans, but interfaces intensely with natural systems and makes
multi-species provisions (chapter 4). Subsequently general and specific traits of
performance-oriented architecture are set forth and developed (chapter 5). In so
doing one task was to reconcile the entrenched form and function separation that
continues to constrain the coalescing of the ongoing debate on performance in
architecture. The following part focuses on discussing the built environment as a
repository of knowledge and examines how architectural knowledge is constructed in
different modes (chapter 6). Finally the thesis is concluded by a critical evaluation of
the approach and an outline of further areas of research (chapter 7).
The task at hand required drawing from a number of disciplines in search of an
architecture that is in the service of the natural environment. The intent of the thesis
is, however, to deliver a theoretical framework for architectural design and its focus is
on architecture. Emphasis is therefore placed on the spatial and material
organisation of architecture and its interaction with the environment as the locus for
provision, participation, performance and sustainability potential. The aim was to
arrive at an approach that is relevant to everyday architecture and that can inform the
formulation of policies that are required preconditions to its further development and
implementation in practice.
As it seems, architectural discourse has over recent decades become both
increasingly diverse and fragmented. The beginning of this development cannot be
assigned to any singular event or time. Numerous social, cultural and economic
developments may have played their own particular role in this development
throughout the previous century and recent decades: the emergence of the
modern movement in architecture and its division into functionalist and structuralist
factions, the founding of CIAM (International Congress of Modern Architecture) in 1928 and its eventual disbanding in 1959, the growing emphasis on critical approaches since the
1960s and the associated increase in journals and schools of architecture dedicated
to the purpose, the differences in local resources, the accelerating economic down-
and upturns since the energy- and oil-crisis in the early 1970s, the turbo-capitalism of
the late 1990s and early 2000s, the economic downturn of the recent decade, to
name but a few. Numerous themes were explored in architecture during the previous
two decades: architecture and politics, architecture and feminism, the diagram,
landscape urbanism, geometry and topology, computation, fabrication, technology,
biology, emergence, performance, sustainability, and a whole range of overtly formal
topics, etc. The themes of topic related journals in architecture over the last two
decades, such as AD, ANY Journal, Arch+, Assemblage, Praxis, etc., give evidence
of this development.
Presently there seems to exist no discernible leading architectural discourse in spite
of certain recurring themes. This may either indicate that the discipline has matured
to the point where simply parallel alternative choices are at hand when required, or,
alternatively, that architecture has become oversaturated with discourse and that the
production of discourses has become an endeavour of its own due to an accelerated
desire for uniqueness and idiosyncrasy in the production of designs and theories
alike. Be that as it may, today there exist numerous specialist discourses in
architecture that focus on somewhat isolated topics. It may be argued that this
development mirrors what is taking place in other disciplines where accelerated
specialisation takes place to such an extent that general overviews are becoming
increasingly difficult due to the rate of research and dissemination in each field of
specialisation, such as genetics in biology. However, there exists no
equivalent dominating research-environment in architecture yet that would stimulate
focused and methodical specialisation in a similar way or rate as in the sciences.
It would seem obvious that integrative approaches in architecture must now begin to
coalesce specialist discourses for the sake of a concerted effort towards [i]
establishing what constitutes current core knowledge in architecture at this point in
time; and, [ii] improving upon the built environment and its impact on our world.
This is especially necessary due to current tasks of architecture becoming
progressively complex in the context of a growing scope of questions pertaining to
social and environmental sustainability and the increasing pressure of the built
environment onto the natural environment exerted by a rapidly growing world population and the associated decline of the natural environment and its ecosystems.
In the context of the wide range of architectural discourses today, performance is one of the discernible and longer-lasting notions that has emerged, or, more precisely,
reappeared. When such a notion arises and potentially enables the charting of a new
approach it presents the opportunity to reconsider both core knowledge in
architecture and positions related to previous discourses. In relation to the pursuit of
the notion of performance such opportunities were, however, chiefly missed in recent
years, due to the insistent upholding of entrenched dialectics such as ‘form’ and
‘function’ that belonged to preceding oppositional discourses. Due to such persistent
dialectics there exist a range of different approaches to the notion of performance in
architecture. Presently six more or less distinct and partially related positions can be identified.
The first position emerged from an interest in representation, symbolism and
meaning in architecture from the late 1960s onwards through the efforts of Charles
Jencks (Jencks & Baird 1969, Jencks 1977) and others. Charles Jencks posited that
‘Radical Eclecticism is multivalent, as against so much Modern architecture: it pulls
together different kinds of meaning, which appeal to opposite faculties of the mind
and the body, so that they interrelate and modify each other.’ (Jencks 1978 [1977]:
132) However, Jencks’ pursuit of Postmodernism in architecture swiftly attracted
broad criticism in the wake of a diversifying critical discourse in architecture. Kenneth
Frampton argued that ‘the arts have continued to gravitate, if not towards
entertainment, then certainly towards commodity and in the case of that which
Charles Jencks has since classified as Post-Modern Architecture towards pure
technique and scenography’. (Frampton 1983: 19) Moreover, Andreas Huyssens
criticised that ‘postmodernist avant-garde is not only the end game of avant-
gardism. It also represents the fragmentation and decline of critical adversary
culture’. (Huyssens 1981) Jeffrey Kipnis pursued a two-pronged criticism, positing
that ‘Post-modernism’s critique of the politics of erasure / replacement and emphasis
on recombination have also led to its greatest abuse, for it has enabled a reactionary
discourse that re-establishes traditional hierarchies and supports received systems of
power’ and ‘post-modern collage is an extensive practice wholly dependent on
effecting incoherent contradictions within and against a dominant frame. As it
becomes the prevailing institutional practice, it loses both its contradictory force and
its affirmative incoherence. Rather than destabilising an existing context, it operates
more and more to inscribe its own institutional space. The only form collage
produces, therefore, is the form of collage.’ (Kipnis 1993: 42)
In the wake of the re-emerging interest in performance in architecture Charles
Jencks’ approach resurfaced by way of an issue of the AD journal entitled ‘Radical
Post-Modernism’, which reissues Jencks’ predilection both for the ‘radical’ and ‘Postmodernism’ (Jencks & FAT 2012). In the introduction to the issue Jencks posits
three core concepts that underpin the notion of ‘Radical Post-modernism’. Jencks
defined these as follows:
‘Communication, and its attendant qualities - metaphor, iconography, symbolism, image, surface, narrative, irony - was one value that ties together the 1960s concerns and those of today. … formal tropes of today’s Post-Modernism obviously grew out of yesterday: complexity and contradiction, ornament and multiple articulation, collage and juxtaposition, layering and ambiguity, multivalence and double coding. Social content is the third concern that underlies our common definition of radical, framed in several ways.’ (Jencks et al. 2011: 15)
This reworked approach to Post-Modernism, however, appears not to have
taken on the scope of criticism that had arisen since its first appearance in the late
1960s and early 1970s. Instead the argument presented in ‘Radical Post-Modernism’
seems to focus on detecting or showcasing the stated features of Post-Modernist
architecture in a broad range of projects (some of which were ironically designed by
architects who joined the vanguard of fervent critique of post-modernist collage in the
1990s), so as to derive a claim of sustained relevance. Moreover, ‘radical
eclecticism’ is upon architecture once again in different guises, driven by the divorce
of form from structure, envelope from interior, etc. Sylvia Lavin, for instance,
proposed the notion of the ‘free skin’ that is ‘free from formal and expressive
obligations to the interior and is free to develop its own qualities and performance
criteria’ (Lavin 2012: 25). This position embraces eclecticism by way of contrasting
the different elements that constitute a given architecture and thus continues
forcefully the predisposition that favours division over integration. Ultimately Lavin’s
call for ‘techniques of cunning, scenography, special effects, theatre and energy’
(Lavin 2012: 21) brings this approach full circle back to Frampton’s critique of
postmodernist architects’ exclusive focus on technique and scenography.
The second and third positions relate largely to the entrenched form and function
dialectic that has been dominant in its various guises in architectural discourse since
the 1930s. The related approaches to performance can thus chiefly be divided into
formal approaches on the one hand, and functional approaches on the other. This
division frequently coincides with the related ‘art and science’ dialectic in
architecture. The formal approach tends to emphasise the ‘artistic’ aspect in
architecture, while the functional emphasis is frequently associated with science,
and, more specifically, engineering. The former frequently criticize the latter for being too rigid, and the latter criticize the former for being too elusive. Numerous publications
on performance in architecture since the late 1960s give ample evidence of this
prevailing predisposition. (PA 1967, Kolarevic & Malkawi 2005, Grobman & Neuman
2008, 2012, etc.)
The first approach’s emphasis on eclecticism based on postmodern collage, and the second and third approaches’ insistence on engrained dialectics, render these evidently antithetical to approaches that emphasize integration. However,
the focal matter of each of these first three approaches is not incompatible with one
another and can be incorporated into an integrated approach.
The fourth position foregrounds the notion of event, which counters the planned relation between architectures and their use and emphasizes unplanned appropriations and
inadvertent latent capacities of architectures. In criticizing the form-function dialectic
Bernard Tschumi posited the importance of the relation between architecture and event:
‘There is no architecture without program, without action, without event. … architecture is never autonomous, never pure form, and, similarly, architecture is not a matter of style and cannot be reduced to a language … [the aim is] to reinstate the term function and, more particularly, to re-inscribe
the movement of bodies in space, together with the actions and events that
take place within the social and political realm of architecture [and to] refuse
the simplistic relation by which form follows function, or use, or
socioeconomics.’ (Tschumi 1994: 3-4)
From a different perspective Antoine Picon discussed
‘the capacity of architecture to become an event, to participate in a world
which is more and more often defined in terms of occurrences rather than as
a collection of objects and relations. In a penetrating essay published a few
years ago, the philosopher Paul Virilio rightly evokes the growing domination
of “what happens”. (Virilio 2002) As a performing art, or to be more accurate,
an art the productions of which are now supposed to perform at various
levels, from an ecological footprint to the realm of affects, architecture has
become a component of this domination’. (Picon 2012: 18)
The event related take on performance appears often as part of other approaches to
performance in architecture, and is frequently appropriated to sustain arguments that
more often than not seek to sustain exclusively formalistic and/or effect-related
stances and some form of scenographic neo-eclecticism. The notion of event is,
however, one key element that performance-oriented architecture needs to address
and integrate.
The fifth approach was formulated by David Leatherbarrow (Leatherbarrow 2009) as
a series of critiques of prevailing approaches to performance in architecture and
focuses on the relation between planned and unplanned performances, between
performance, place and purpose, and a building and its larger context or
‘topography’. Leatherbarrow argued that ‘a physicalistic understanding of architecture
is inadequate to a building’s requirement with respect to human praxis and
misconceived if taken to be wholly adequate to the architectural task’.
(Leatherbarrow 2009: 14-15) Moreover he reasoned ‘against the various ways of
conceiving the building as self-sustained and internally defined product of design’,
while also pointing out that ‘the city (the concrete embodiment of common culture) is
not something that single designs can form, shape, construct, or achieve - only condition and approximate’. (Leatherbarrow 2009: 15) Leatherbarrow’s approach is
both striking and unique in the way it seeks to straddle the complexities that arise
from planned and unplanned conditions that architectures encounter, participate in,
seek to provide and are modified by across different scales. In so doing this
approach makes clear that disassociation of these complex relations is inadequate
and, instead, must be considered inseparable. Thus Leatherbarrow delivers an
erudite outline for an integrated approach that can be seen to incorporate a particular
take on the notion of event by way of architectures’ participation in unplanned events.
The sixth and as yet less developed approach also emphasises the integration of
different aspects of performance into an overarching approach. Its beginnings can be
discerned in various commentaries since the early to mid 2000s. Kolarevic and
Malkawi, for instance, posited that the ‘emphasis on building performance is
influencing building design, its processes and practices, by blurring the distinction
between geometry and analysis, between appearance and performance’. (Kolarevic
and Malkawi 2005: 3) In this context David Leatherbarrow charted two different
characteristics of performance in architecture that in his view are inseparable:
‘The kind that can be exact and unfailing in its predictions of outcomes, and
the kind that anticipates what is likely, given the circumstantial contingencies
of built work. The first sort is technical and productive, the second contextual
and projective. There is no need to rank these two in a theory of architectural
performance; important instead is grasping their reciprocity and joint
necessity.’ (Leatherbarrow 2005: 18)
The consequential pursuit of an integrated notion of performance also invites the
consideration of relevant first principles. Chris Luebkeman, for instance, posited that:
‘Performance-based design is really about going back to basics and to first
principles, taking into account the experience one has gained over time as
well as field and laboratory observations about the non-linear behaviour of
elements and components. It is the combination of first principles with
experience and observations that is the fundamental potential of the design
philosophy. It places the design imperative back in the hands of the designer.
And, more importantly, it also places responsibility and accountability back
into the designer’s hands in a very obvious way. One can no longer hide
behind building codes.’ (Luebkeman 2003: 284-285)
Given these realisations the question arises as to why integrated approaches have
thus far not come to full fruition. The answer seems plain: performance-oriented
architecture requires an overarching and inclusive discourse and a detailed
theoretical framework together with integrated and instrumental concepts, design
strategies and methods. From this realization developed the objective of this thesis to
formulate a theoretical framework for an integrative approach towards a
performance-oriented architecture. In so doing it was the intention to strike a balance
between providing a tangible theoretical framework and instrumental concepts that
are at the same time adaptable according to context and circumstances so as to be
useful for everyday practice. This aim resonates with Martin Bechthold’s observation:
‘Despite its muddled attitude towards performance it is crucial to move
performance-thinking back to the core of the disciplinary consciousness.
What could be more timely at the age of a globally warming planet and
dwindling natural resources? Performance-based design should be here to
stay, less as an “ism”, but as an ethical obligation to the profession and to
society’. (Bechthold 2012: 52)
When extending the notion of performance-oriented architecture to address a broad
range of sustainability-related aspects the question arises as to whether the
perceived diametrical opposition between the man-made and the natural is of
continued use. Instead, it may be asked whether architecture could be in the service
of the natural environment by way of its inherent and latent agency and its interaction
with the natural environment and whether this approach could lead to a different
thinking about provisions made by architectures. It also yields the more general
question as to what may constitute architectural core knowledge in general and in
relation to a performance-oriented approach to architecture and suitable modes of
knowledge production.
2. The road(s) to performance-oriented architecture
Several momentous historical developments have played a key role in the
development of the proposed notion of performance-oriented architecture. These are:
[i] The impact of the development of the sciences, in particular biology, from the 17th
century onwards on the parallel development of architecture;
[ii] The rise of the notion of environment;
[iii] The rise of systems-theory and systems-thinking during the 20th century and its
impact on architecture;
[iv] The ‘performative turn’ in the humanities and arts and the ‘performative idiom’ in
the sciences during the second half of the 20th century;
[v] The rise of the notion of performance in architecture;
[vi] The development of specific lineages within critical discourse coupled with
research by design from the 1990s onwards.
[vii] Specific developments of design methods and research by design over recent decades.
These developments are examined in the following part.
2.1 The rise of the sciences - in particular biology - and their impact on architecture
‘In biology concepts play a far greater role in theory formation than do laws.
The two major contributions to a new theory in the life sciences are the
discovery of new facts (observations) and the development of new concepts.’
(Mayr 1997: 62)
The development of the natural sciences from the 17th century onwards had in
various ways a fundamental impact on the development of architecture as a discipline.
The emerging approaches to scientific language, concepts, methods, systematics
and taxonomy, individually and collectively informed and transformed architecture.
Prominent among the important developments of the sciences were Michel
Eyquem de Montaigne’s (1533-1592) hugely influential Essais published in 1580, the
elaboration of method by René Descartes (1596-1650) as published in Discourse on
Method (Discours de la méthode pour bien conduire sa raison, et chercher la vérité
dans les sciences) in 1637, as well as David Hume’s (1711-1776) A Treatise of
Human Nature: Being an Attempt to introduce the experimental method of Reasoning
into Moral Subjects (1740) and An Enquiry concerning Human Understanding (1748).
The works of illustrious polymaths like Blaise Pascal (1623-1662), who wrote in defence of scientific method, Christiaan Huygens (1629-1695), Antonie Philips van
Leeuwenhoek (1632-1723), Robert Hooke (1635-1703), Sir Isaac Newton (1642-
1727), Gottfried Wilhelm Leibniz (1646-1716), Maria Sibylla Merian (1647-1717),
Lamarck (1744-1829), Johann Wolfgang von Goethe (1749-1832), Georges Cuvier
(1769-1832), Alexander von Humboldt (1769-1859), and many others advanced the
development of science at an immense rate. Likewise the development of
philosophical thought asserted enormous influence such as Immanuel Kant’s (1724-
1804) Critique of Pure Reason (Kritik der reinen Vernunft) published in 1781, and
Gotthold Ephraim Lessing’s (1729-1781) notion of approach-dependent truth.
Charles-Louis de Secondat, Baron de Montesquieu’s (1689-1755) The Spirit of the Laws, published in 1748, had a significant impact on architecture: in it he reasoned for ‘differences in morals, customs, and taste of various nations by way of scientific explanation’ (Hvattum 2004: 37), and he held the conviction that ‘only
through a careful observation and analysis of the particular conditions governing a
nation - its climate, topography, and geology - could a “natural” explanation of its
character be reached’. Therefore ‘”nature” and “natural principles” had come to be
seen as a particularising principle, capable of explaining not everything’s uniformity,
but rather everything’s difference’. (Hvattum 2004: 38) This new emphasis on
difference necessitated the development of a methodological language of description
particular to different fields of inquiry and an instrumental systematisation.
During this period a ‘language of science’ began to take shape that shifted scientific
language ‘from the vernacular to the technical’ (Crosland 2006) and was correlated
with fundamental efforts towards systematisation. In 1751 the first edition of Denis
Diderot’s (1713-1784) and Jean-Baptiste le Rond d’Alembert’s (1717-1783)
Encyclopaedia or a Systematic Dictionary of the Sciences, Arts and Crafts
(Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers) was
published. In parallel to the developing encyclopaedic approach numerous discipline
specific efforts took shape that aimed at systematising and developing nomenclature
and taxonomies. In 1735 Carl Nilsson Linnæus (Carl von Linné) (1707-1778) published the first edition of his Systema Naturæ, which established the Linnaean biological classification
taxonomy. Antoine Lavoisier (1743-1794) published his Method of Chemical
Nomenclature (Méthode de nomenclature chimique) in 1787, which contained a
systematic chemical nomenclature. Two years later, in 1789, he published the
Elementary Treatise on Chemistry (Traité élémentaire de chimie), which is widely
considered the first modern chemical textbook. In 1803 a particularly remarkable
accomplishment was achieved when Luke Howard (1772-1864) published his Essay
on the Modifications of Clouds, which set forth a nomenclature system for clouds.
Through his early interests in botany Howard was familiar with the Linnaean
approach to classification, which he utilized successfully in his contribution to meteorology.
Likewise a visual descriptive language evolved that was
encyclopaedic in character. Especially Maria Sibylla Merian’s (1647-1717) illustrated
books of studies of plants and the metamorphosis of insects, The Caterpillars'
Marvellous Transformation and Strange Floral Food (1679) and Metamorphosis
insectorum Surinamensium (1705) were of great importance for the study of
processes of living nature and the development of visual comparative methods.
[Fig. 01, p. 22]
Fig. 01 Plate 23 “Solanum mammosum” from Maria Sibylla Merian, (1705), Metamorphosis insectorum
Surinamensium. Source: Das Insektenbuch, Frankfurt a.M., Leipzig: Insel Verlag.
From the onset of its formation as a unified discipline during the late 18th and early
19th centuries, biology had a major influence on architecture. Several individuals are
credited with initiating the use of the term biology: the poet and scientist Johann
Wolfgang von Goethe used it in 1796, Friedrich Burdach published it in 1800, as did
Lamarck and Gottfried Reinhold Treviranus in his Biology or the Philosophy of Living
Nature (Biologie oder die Philosophie der Lebenden Natur) in 1802.
At the same time Georges Cuvier published Leçons d'anatomie comparée (1800), in
which he introduced comparative anatomy. Ernst Cassirer (1874-1945) commented
on Cuvier’s work that:
‘Cuvier’s system of types was no longer really concerned with single
characteristics: it was their relationship one to the other that was for him the
decisive and determining factor … The knowledge of a single form, if it is
really to penetrate the heart of the matter, always presupposes a knowledge
of the world of forms in its entirety. Systematic biology, therefore, as
understood by Cuvier, was no mere device of classification and arrangement
that can easily be apprehended, but a disclosure of the very framework of
nature herself.’ (Cassirer 1950: 131)
‘He wanted to identify the stable and enduring forms, and distinguish them
clearly one from the other.’ (Cassirer 1950: 132)
The Norwegian architectural theoretician Mari Hvattum examined in detail the
relation between these developments in the sciences and particular lineages of
development of architectural thought and inquiry. This involved in particular the
efforts of Marc-Antoine Laugier (1713-1769), Antoine-Chrysostome Quatremère de
Quincy (1755-1849), Jean-Nicolas-Louis Durand (1760-1834), and Gottfried Semper
(1803-1879). In his Essay on Architecture, published in 1753, Marc-Antoine Laugier
discussed the search for underlying universal principles in architecture that would
raise architecture from the arts to the sciences through the means of scientific ‘strict
reasoning’. This line of thinking was furthered by Quatremère de Quincy who, in
contrast to Laugier’s universal principles, and like Montesquieu with his notion of a
‘particularising principle, capable of explaining everything’s difference’ (Hvattum
2004: 38), ‘presented the origin of architecture as a historically and geographically
differentiating principle.’ (Hvattum 2004: 39) This in turn gave rise to the comparative
method in architecture. Jean-Nicolas-Louis Durand pursued the development of a
comparative matrix of the kind devised by his mentor Julien David Leroy (1724-
1803). In his book Recueil et parallèle des édifices de tout genre,
anciens et modernes published in 1799 he used the comparative method to survey
building types from different times and contexts, aiming to:
‘fulfil functional and representational demands with as much economy as
possible. His “type” was the geometric translation and optimization of these
demands. By systematically extracting and classifying such types from the
history of architecture, the Recueil was to provide students of architecture
with empirical standards for design.’ (Hvattum 2004: 120)
Gottfried Semper also aimed at developing a comparative method in architecture,
‘one that could replace Durand’s mechanistic typology with a new dynamic-organic
notion of type’. (Hvattum 2004: 123) The Dutch art historian and philosopher Caroline
van Eck elaborated that:
‘Gottfried Semper used notions founded in Cuvier’s classification of animals
as the basis of a typology of architectural form, which in its turn was to serve
as the basis both for a theory of invention and for a way of articulating the
non-literal meaning of the visual arts. Semper’s theories are the culmination
of philosophically oriented organicism, in which concerns for the invention
and interpretation of architecture are closely intertwined.’ (Caroline van Eck
1994: 214)
Semper as well as Carl Gottlieb Wilhelm Bötticher (1806-1889) pursued organicism
in search of an architectural style and ‘shared a great concern for the autonomy of
architecture, which led to an increasingly tectonic view of architecture as essentially
a way of creating and covering space. This view of architecture was supported by a
corresponding view of the relation between architecture and nature, in which the
imitation of the constructive methods of nature, and the way structure is represented
in nature’s forms, is the central consideration’. (Caroline van Eck 1994: 142)
As van Eck pointed out, the French architect and theorist Eugène Viollet-le-Duc
(1814-1879) also worked on the relation between architecture and living nature
‘which constituted the next stage of the development of scientific organicism - a
much more limited and completely secular approach, inspired by Cuvier’s work and
by recent developments in mechanics and other branches of physics.’ (Caroline van
Eck 1994: 215)
In setting forth morphology Johann Wolfgang von Goethe (1749-1832) postulated a
crucial distinction between Gestalt or structured form, and the process of Bildung or
formation, which changes structured form in a perpetual process. He stated ‘when
something has acquired a form it metamorphoses immediately into a new one’.
(Goethe 1817) Cassirer elucidated that:
‘For Goethe, too, the concept of type held a dominant position, and without it
he could not have proposed and developed his theory of metamorphosis …
He never renounced this idea of type, though he modified the version of
Cuvier and Candolle … He did not think geometrically or statically, but
dynamically throughout … Form belongs not only to space but to time as well,
and it must assert itself in the temporal.’ (Cassirer 1950: 138-139)
Goethe related the realisations that had grown out of his botanical studies and
evolved into morphology to architecture. As Caroline van Eck pointed out, Goethe,
like Schlegel, pursued ‘organicism as a strategy’, which ‘resulted in two extremely
important new insights: that architecture can be considered from the perspective of
formal development, and that architecture can share the autonomy of the living
organism, despite its functional character.’ (Caroline van Eck 1994: 125)
Goethe’s work on morphology caused an immense acceleration of related research,
documentation, and in particular illustration. Johann Baptist von Spix (1781-1826),
for instance, published in 1811 his History and Evaluation of all Systems in Zoology
according to their sequence of development from Aristotle until the present time
Beurtheilung aller Systeme in der Zoologie nach ihrer Entwicklungsfolge von
Aristoteles bis auf die gegenwärtige Zeit) and brought thousands of specimens back
from a journey to Brazil and worked extensively on their classification and illustration
until his early death in 1826. In 1904 the zoologist and professor of comparative
anatomy Ernst Haeckel (1834-1919) published his seminal and elaborately illustrated
Art forms of Nature (Kunstformen der Natur), which contained systematic and
comparative illustrations of organisms shown in meticulous detail. This work became
influential among Art Nouveau artists and architects. [Fig. 02, p. 27]
Fig. 02 Plate 8 “Discomedusae” from Ernst Haeckel, (1904), Kunstformen der Natur, Leipzig, Vienna:
Bibliographisches Institut. Source: Reprint, (1998), Munich, New York: Prestel.
In 1916 E. S. Russell published his influential Form and Function: A Contribution to
the History of Animal Morphology (Russell 1916), in which he discussed three
approaches to morphological thought:
‘The main currents of morphological thought are to my mind three - the
functional or synthetic, the formal or transcendental, and the materialistic or
disintegrative … The main battleground of these two opposing tendencies is
the problem of the relation of function to form. Is function the mechanical
result of form, or is form merely the manifestation of function or activity? What
is the essence of life - organisation or activity? In the course of this book I
have not hidden my own sympathy with the functional attitude. It appears to
me probable that more insight will be gained into the real nature of life and
organisation by concentrating on the active response of the animal, as
manifested both in behaviour and in morphogenesis … than by giving
attention exclusively to the historical aspect of structure, as is the custom of
"pure morphology." I believe we shall only make progress in this direction if
we frankly adopt the simple everyday conception of living things that they
are active, purposeful agents, not mere complicated aggregations of protein
and other substances.’ (Russell 1916: Preface)
One year later in 1917 D’Arcy Wentworth Thompson published On Growth and Form
(Thompson 1917). Thompson pursued an argument that postulated the role of
physical and mechanical conditions as a critical factor in the development of the
morphology of organisms and that criticised the emphasis on the primacy of
evolution as a determinant. In parallel to explicating through numerous examples the
correlation between morphology and physical conditions Thompson also developed a
simple and powerful descriptive graphic method for the chapter The Comparison of
related Forms, in which he used a grid as an overlay to illustrate the geometric
transformation that establishes the difference in the form of related animals.
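Thompson's grid method lends itself to a computational paraphrase. The sketch below is a minimal illustration of the idea, assuming a simple shear as the deforming map; the coordinates are hypothetical and not taken from Thompson's plates.

```python
# Minimal sketch of D'Arcy Thompson's method of transformed coordinates:
# landmark points of one form are carried by a smooth deformation of the
# underlying grid onto a related form. The shear map used here is a
# hypothetical illustration, not taken from Thompson's plates.

def shear(point, k):
    """Apply a horizontal shear x' = x + k*y, y' = y to a 2D point."""
    x, y = point
    return (x + k * y, y)

# A regular 3x3 grid of 'landmark' coordinates on the first form.
grid = [(x, y) for y in range(3) for x in range(3)]

# Deforming the whole grid (here with shear factor 0.5) expresses the
# difference between the two forms as a single transformation of the
# coordinate system rather than as point-by-point differences.
deformed = [shear(p, 0.5) for p in grid]

print(deformed)
```

The essential point is that one transformation of the grid, not many local adjustments, accounts for the difference between related forms.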
His work has remained influential in architecture and has been used to underpin
various approaches to morphology in architecture. Meanwhile rival approaches to
morphology continued to develop in the discipline of biology. These shared the
understanding that the structure of organisms constitutes an organised system, but
differed in the question as to how the structure of organisms is to be understood. One
of the approaches is based on the assumption that the parts of the body of an
organism are related in a functional sense, while the second posits that the build of
the body follows an underlying plan or schema and that the parts of the body are not
functionally, but, instead, structurally organised. Likewise, current morphological
approaches to architecture tend to either be form or function driven and in so doing
continue to mirror in some way the discussions in morphology in biology.
The biologists Walter Bock and Günther Wahlert pointed out that a part of a body can
display more than one functional potential, and that it therefore constitutes a form-
function-complex characterised by specific capacities that can, but need not
necessarily, be used. (Bock & Wahlert 1965) In the context of functional
morphological approaches an organism cannot plainly be seen as a functionally
organised system only, but instead, as a layer in a hierarchy of functional systems.
(Hertler 2005) This may be of operational use when considering aspects of multi-
functionality in architecture, but must obviously be integrated into a more overarching
and integrative approach to the relation between architecture and environment.
Therefore it is now necessary to reconsider the disciplinary affiliation of architecture
to biology in order to address aspects above and beyond form-function relations and
to inquire into wider questions of the relation of the human-made environment to the
abiotic and biotic environment, and to ecology more specifically.
The work of Frei Otto and the special research group SFB 230 focused on research
of morphology in both living and non-living systems to examine form-generation
processes in both domains so as to develop suitable analogies, methods and models
for architectural research and design (see for instance Otto 1988). These were
elaborated into a design method called form-finding in which material systems
acquire their optimal (structural) shape under extrinsic influence (see for instance
Otto & Rasch 1995). Form-finding as a design method thus utilises the active agency
and interaction between non-living systems and their environment. However, thus far
form-finding has in the main constituted a single-criterion optimisation process and
needs to be restated as a multi-criteria process that does not necessarily tend towards
optimal state. This relates also to questions of evolution in biology and to the way
architecture relates to evolution.
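The logic of single-criterion form-finding can be illustrated with a toy relaxation: a discrete chain with fixed ends settles under a uniform load into its equilibrium shape. This is a minimal sketch with illustrative parameters, not a reconstruction of Otto's physical experiments.

```python
# Minimal sketch of single-criterion form-finding: a discrete chain with
# fixed ends relaxes under a uniform downward load until internal forces
# balance -- the form is 'found' by the process, not drawn a priori.
# Parameters are illustrative assumptions, not values from Otto's work.

N = 11          # number of nodes (both ends fixed at height 0)
LOAD = 0.02     # uniform load term per interior node

y = [0.0] * N   # initial flat configuration

# Jacobi-style relaxation: each interior node moves to the average of its
# neighbours, lowered by the load, until the shape no longer changes.
for _ in range(5000):
    y = [0.0] + [(y[i - 1] + y[i + 1]) / 2 - LOAD for i in range(1, N - 1)] + [0.0]

print(min(y))   # sag at midspan
```

A multi-criteria restatement, as argued above, would evaluate several measures of the relaxed shape rather than letting a single equilibrium condition alone drive the process.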
Two protagonists of natural selection shaped the notion of evolution: Alfred Russel
Wallace (1823-1913) (On the Tendency of Varieties to depart indefinitely from the
Original Type, 1858) and Charles Darwin (1809-1882) (On the Origin of Species,
1859). Darwinian evolution relinquished the notion of fixed species that had prevailed
before and affected the discipline of biology so profoundly that the evolutionary
biologist and geneticist Theodosius Dobzhansky (1900-1975) posited that ‘nothing in
biology makes sense except in the light of evolution’ (Dobzhansky 1973).
Gregor Mendel (1822-1884) ushered in the discipline of genetics by demonstrating
the inheritance of traits in his famous experiments with peas. The Danish botanist,
plant physiologist and geneticist Wilhelm Johannsen (1857-1927) first proposed the
notion of ‘gene’ and later the notions of ‘genotype’ and ‘phenotype’. (Johannsen
1903) An organism’s ‘genotype’ is the complete inherited information embedded
within its genetic code, while its ‘phenotype’ unfolds from the interaction between the
genotype, environmental factors and random variation. The resulting phenotypic
variation is a key process in evolution by natural selection. ‘Phenotypic plasticity’
describes the extent to which an organism’s phenotype is shaped by its environment
rather than fixed by its genotype. High phenotypic plasticity entails a greater
environmental influence on the development of the phenotype, while low plasticity
entails a greater influence of the genotype.
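The genotype-environment interaction described above can be caricatured in a toy model in which a single plasticity parameter weights the environmental against the genotypic contribution. This is a purely illustrative sketch, not a biological simulation.

```python
# Toy model of phenotypic plasticity (purely illustrative, not a biological
# simulation): the phenotype is a weighted blend of a genotypic target and
# an environmental value, with the weight playing the role of plasticity.

def phenotype(genotype_value, environment_value, plasticity):
    """plasticity = 0: fully genotype-determined; 1: fully environment-driven."""
    return (1 - plasticity) * genotype_value + plasticity * environment_value

# Same genotype, two contrasting environments: a highly plastic organism
# diverges across environments, a weakly plastic one barely does.
rigid = [phenotype(10.0, e, 0.1) for e in (5.0, 15.0)]
plastic = [phenotype(10.0, e, 0.9) for e in (5.0, 15.0)]

print(rigid, plastic)
```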
How to employ concepts from evolutionary biology in architecture has been
discussed for a while and evolutionary processes in architectural design by
computational means have been experimented with for a number of decades (see for
instance Frazer 1995). These tend to follow two different trajectories, one that tends
towards a single optimised design output, and another that stakes out a wider search
space for solutions that pass a specified benchmark of criteria. Of interest in any
case are the recursive and iterative steps by which designs are evolved, as well as
the built-in pseudo-random factor that provides a computational equivalent to
mutation. This entails that the process will generate results that the designer might
either not have considered or have discarded before detailed analysis. The other
aspect of interest is that generative and analytical modes are often methodologically
linked in evolutionary design processes that are frequently utilised to ‘solve’ complex
design problems and to integrate multi-functional criteria.
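The second trajectory, retaining every candidate that passes a benchmark of several criteria rather than converging on one optimum, can be sketched as follows; the rectangular 'design' and its two criteria are hypothetical stand-ins, not drawn from any cited project.

```python
import random

# Minimal sketch of an evolutionary design search that stakes out a wider
# search space: instead of converging on a single optimum, every mutated
# candidate passing a multi-criteria benchmark enters an archive of
# acceptable designs. The rectangular 'design' and its two criteria
# (floor area and proportion) are hypothetical stand-ins.

random.seed(1)  # fixed seed so the pseudo-random run is repeatable

def mutate(design):
    """Pseudo-random mutation: jitter width and height."""
    w, h = design
    return (w + random.uniform(-0.5, 0.5), h + random.uniform(-0.5, 0.5))

def passes_benchmark(design):
    """Multi-criteria benchmark rather than a single objective."""
    w, h = design
    area, ratio = w * h, w / h
    return 40.0 <= area <= 60.0 and 0.5 <= ratio <= 2.0

archive, candidate = [], (7.0, 7.0)
for _ in range(200):
    candidate = mutate(candidate)
    if passes_benchmark(candidate):
        archive.append(candidate)

print(len(archive), "acceptable designs found")
```

The archive collects a family of admissible designs, which is what distinguishes this trajectory from a single-optimum search.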
A further disciplinary relation between architecture and biology concerns the
development of bionics, biomimetics, biomimicry and bio-inspired design. The term
bionics was coined by the American medical doctor and US Air Force colonel Jack E.
Steele (1924-2009). The term biomimetics is credited to the American biophysicist and
engineer Otto Herbert Schmitt (1913-1998). Biomimetics is understood as ‘the
abstraction of good design from nature’ (website of the Centre for Biomimetics at the
University of Reading) and frequently involves the analogous translation of principles
in living nature to mechanical engineering. The German architect Frei Otto and the
biologist Günter Helmke and their collaborators at the Institute of Lightweight
Structures in Germany conducted extensive morphological studies that related
biological form to architectural form. Current distinguished experts in the field of
biomimetics include Werner Nachtigall (Nachtigall 2002) in Germany, and Julian
Vincent (Vincent 1999, Vincent et al 2006) and George Jeronimidis in England.
The ‘Theory of Inventive Problem Solving’, better known by its Russian acronym TRIZ,
was developed by the Russian engineer Genrikh Saulovich Altshuller (1926-1998)
(Altshuller 1984, 1994, 1999). This theory is based on a detailed analysis of historical
patterns of invention across many different fields, with the aim of generalizing and
systematizing these patterns so as to make them available for problem solving.
BIOTRIZ is a version of this approach that does not commence from an analysis of
human invention patterns but, instead, from biological systems and their specific
functional adaptation to highly specific conditions (see for instance Vincent et al
2006). Here the aim is to utilize biological systems as a source to inform problem
solving, design and engineering. Research fellow Defne Sunguroğlu Hensel
examines in her PhD thesis entitled ‘Correlating Biology and Architecture in problem-
solving’ (to be submitted in December 2012) the use of BIOTRIZ, etc., in architectural
design.
From the above it is possible to conclude that past and present fascinations of
architecture with biology have served to derive metaphors, analogies, morphologies,
and formal and functional repertoires for architectural design and problem solving. In
these instances the problem that needed to be solved was defined by the
architectural design task itself.
However, the premise changes entirely when the natural environment becomes part
of the architectural design problem, as the fast expanding built environment
increasingly replaces the natural environment. The question that arises is whether
the prevailing perceived division between man-made and natural environment can be
upheld in the same manner as before. What happens when the natural
environment becomes increasingly part of the design problem? Due to this
development the concept of ecology is currently being re-examined and appropriated
at an increasing rate in architecture. However, it is doubtful that metaphors or
analogical modes will suffice on their own in addressing the task at hand. Instead it is
important to examine how the built environment can be integrated with the local
physical environment and ecosystems and promote biodiversity specific to particular
local ecosystems. This is a complex task that requires careful research and
elaboration as it necessitates scrutinizing and altering what constitutes architectural
core knowledge and repositioning the disciplinary affiliations between architecture
and biology.
2.2 The rise of the notion of environment
‘The notion of environment (milieu) is becoming a universal and required way
of capturing both the experience and the existence of living beings and we
could almost speak of it being a category of contemporary thought.’
(Canguilhem 1980 [1952]) (Translation by Graham Burchell)
Environment is one of the notions of fundamental importance in relation to
performance-oriented architecture. It is, however, a notion with greatly varying
definitions and implications and requires clarification in relation to the approach to
performance-oriented architecture elaborated here within.
The German physicist Rudolf Julius Emanuel Clausius (1822-1888) is frequently
credited for formulating the concepts of entropy and surroundings (later
environment). However true this might be in relation to his foundational works in
thermodynamics, there are also other key historical contributions to the notion of
environment.
Thomas Brandstetter and Karin Harrasser (Brandstetter & Harrasser 2010)
foregrounded two significant works in relation to the genealogy of the notions of
ambiance and milieu from the 1940s: Leo Spitzer’s “Milieu and Ambiance: An
Essay in Historical Semantics” (Spitzer 1942) and Georges Canguilhem’s lecture
from 1946-47 later published under the title “Le vivant et son milieu” (Canguilhem
1980 [1952]). Spitzer traced the development of the notion of ambiance from the
Greek periechon and Latin ambiens via the notion of medium to the modern notions
of ambiance and milieu. Canguilhem commences from the 18th century import of the
notion of environment from mechanics into biology.
Both cite Isaac Newton (1642-1727) who used the notion of medium to refer to ether
as the locus of gravitational force and Auguste Comte (1798-1857) who extended the
French term milieu to encompass not only the physical medium that surrounds an
organism, but to include the general scope of external conditions that are necessary
to support the existence of a specific organism.
Where they differ is in assessing the work of the biologist Jakob von Uexküll (1864-
1944) who examined how living beings perceive their environment subjectively.
Uexküll posited that:
‘All reality is subjective appearance. This must constitute the great,
fundamental admission even of biology. It is utterly vain to go seeking through
the world for causes that are independent of the subject; we always come up
against objects, which owe their constitution to the subject.
When we admit that objects owe their construction to a subject, we tread on
firm and ancient ground, especially prepared by Kant to bear the edifice of the
whole of natural science. Kant set the subject, man, over against objects, and
discovered the fundamental principles according to which objects are built up
by our minds.
The task of biology consists in expanding in two directions the results of
Kant’s investigations:
[i] by considering the part played by our body, and especially by our sense-
organs and central nervous system, and
[ii] by studying the relations of other subjects (animals) to objects.’
(von Uexküll, 1909: xv)
Consequentially Uexküll introduced a distinction between the general surrounding or
Umgebung and subjectively perceived environments or Umwelt (Uexküll 1909,
1926), and distinguished the Umwelt from the inner world or Innenwelt of an
organism. [Fig. 3]
Fig. 3 Diagram showing ‘the inner world (of an organism) is divided into two parts; one, which receives
the impressions, is turned towards the world-as-sensed, and the other, which distributes the effects, is
turned towards the world of action. Between mark-organ and action-organ lies the watershed of the
whole function-circle.’ (Uexküll 1926: 155) ‘In this way, the animal’s own action-rule fits in with the
indications stimulated from without.’ (Uexküll 1926: 157) Source: Uexküll 1926: p. 157.
The study of the relation of animals to their environments or umwelten led Uexküll to
posit that the reaction to perceived sensory data as signs entails that all organisms
are subjects. In so doing he gave rise to a field of study in biology entitled
biosemiotics, a term coined by the psychiatrist and semiotician Friedrich
Rothschild (1899-1995) (Rothschild, 1962). Kalevi Kull elaborated:
‘Biology has studied how organisms and living communities are built. But it is
no less important to understand what such living systems know, in a broad
sense; that is, what they remember (what agent-object sign relations are
biologically preserved), what they recognize (what distinction they are
capable and not capable of), what signs they explore (how they
communicate, make meaning and use signs) and so on. These questions are
all about how different living systems perceive the world, what experience
motivates what actions, based on those perceptions.’ (Else, 2010: 31)
This notion of the subjective perception of umwelt offers an interesting approach to
the notion of environment in that it involves the organism’s active agency and relates
to questions of agency elaborated in chapter 2.4 ‘The Rise of Performance in the
Humanities, Arts and Science’. As Brandstetter and Harrasser pointed out, Spitzer
was critical in his article from 1942 of a pronounced leaning to determinism in relation
to specific scientific notions of milieu and umwelt (Brandstetter & Harrasser 2010:
14). However, Georges Canguilhem argued that Uexküll’s notion of umwelt took
adequate account of the irreducible activity of life (Brandstetter & Harrasser 2010:
16) and maintained that:
‘if science is the work of a humanity rooted in life before being enlightened by
knowledge, if it is a fact in the world at the same time as a vision of the world,
then it sustains a permanent and necessary relation with perception. And
therefore man’s specific environment is not situated in the universal
environment like content in its container. A centre is never resolved into its
environment. A living being is not reducible to a meeting point of influences.
Whence the inadequacy of any biology which, through complete submission
to the spirit of the physicochemical sciences, would eliminate from its domain
every consideration of meaning. A meaning, from the biological and
psychological point of view, is an assessment of values in keeping with a
need. And for who experiences and lives it, need is an irreducible and thereby
absolute system of reference.’ (Canguilhem 1980 [1952]) (Translation by
Graham Burchell)
Considerations of perception and agency are therefore key to the notion of
environment in performance-oriented architecture. Moreover it is also of importance
to address environment in its breadth including the biophysical, social and built
environment and their interactions. For this to be possible it is necessary to have an
understanding of the concept of systems and their environment. The next chapter
focuses therefore on the rise of systems-theory and systems-thinking and their
impact on architecture.
It is noteworthy that biosemiotics may offer a significant approach to questions of
semiotics and symbolism in the man-made environment, not only in relation to the
more obvious aspect of species integration, but also as a way of addressing the
problems associated with the postmodern approach to questions of meaning and
representation in architecture as discussed in chapter 1.
2.3 The rise of systems-theory and systems-thinking and their impact on architecture
The rise of systems-theory and systems-thinking is of marked importance in the
sense that systems and their surroundings are an indispensable core concept for
performance-oriented architecture.
In general terms a system constitutes a set of interacting or interdependent elements
and relationships forming an integrated whole. A system has a boundary and
surroundings and the nature of the system boundary and its relation to its
surroundings determines whether a system is open or closed. The notion of system
can be traced back to the foundational works of Plato, Aristotle and Euclid. For
the purpose at hand it may, however, be permissible to forego an account of the
ancient and medieval development of the system concept.
Angelica Nuzzo, Professor of Philosophy at Brooklyn College, CUNY, pointed out
that in the 17th and 18th century the word ‘system’ was one of the most frequently
used words in the titles of philosophical works. (Nuzzo 2003) In tracing the historical
development of the notion of system during this period Nuzzo referred to the Swiss
mathematician, physicist and philosopher Johann Heinrich Lambert (1728-1777) and
his Fragments of a Systematology (Fragment einer Systematologie):
‘If one considers the definition of system “in its full extent, which it acquired by
and by”, one must admit that every whole must be termed system. And if
there exists a whole that cannot be called a system, then this happens
because this whole is not fully understood or because it is itself part of
another system.’ (Nuzzo 2003: 11) [my translation]
The philosophical reflection on the concept of system reached a high point in
Immanuel Kant’s (1724-1804) Critique of Pure Reason (Kritik der reinen Vernunft)
published in 1781. Nuzzo argued that while Kant emphasized the methodological
and epistemological function of system as a rational concept or idea, he also denied
its objective reality. For Kant the system fulfilled a methodological function in relation
to the ‘constitution and architecture of human knowledge, that is ontologically not
actualized in reality. Thus system is not an empirical entity, but a transcendental
concept for the understanding of objects.’ (Nuzzo 2003: 16) [my translation]
Furthermore Nuzzo posited that Kant’s Critique of Judgement (Kritik der Urteilskraft)
(1790) constituted a ground-breaking advancement for the development of the
concept of system in that
‘the inner teleology of the system structure, the reciprocal dependencies of all
elements of the organic totality that is inherent to the existence of the whole,
serves Kant to initiate a theoretical as well as practical scientific outlook on
organic nature and the “world in general”. This constitutes a specific heuristic
perspective of reflective judgement. The transcendental position, which
denies all ontological and empirical validity, is preserved in the Critique of
Judgement, yet system is used as if organisms are systems.’ (Nuzzo 2003:
17) [my translation]
(The ‘as if’ clause applies also to the way in which the concept of system and
environment is understood in the context of the proposed approach to performance-
oriented architecture.)
Initial developments of the system concept in the sciences are related to the
development of thermodynamics. In 1803, the year of Luke Howard’s publication On
the Modification of Clouds, the French mathematician, engineer and politician Lazare
Nicolas Marguerite Comte Carnot (1753-1823) published Fundamental Principles of
Equilibrium and Movement in which he examined the conservation of mechanical
energy and inferred that perpetual motion is not possible. In 1824 his son the French
military engineer and physicist Nicolas Léonard Sadi Carnot (1796-1832) made a
significant contribution to thermodynamics by way of the so-called Carnot cycle,
which constitutes the most efficient cycle for converting thermal energy into work.
The principles contained therein were further developed by his compatriot Benoît
Paul Émile Clapeyron (1799-1864) in his 1834 publication Driving Force of Heat, and
by the
German physicist Rudolf Julius Emanuel Clausius’ (1822-1888) On the mechanical
theory of heat, published in 1850. The latter is also credited for formulating the
concepts of entropy and surroundings (later environment) as mentioned in the
previous chapter. Their successive efforts led to the formulation of the second law of
thermodynamics.
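For reference, the Carnot bound and Clausius's entropy discussed in this paragraph take the following standard textbook forms:

```latex
% Carnot efficiency: the maximum fraction of heat convertible into work
% by any engine operating between a hot and a cold reservoir
\eta_{\mathrm{Carnot}} = 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}}

% Clausius's entropy: for a reversible exchange of heat \delta Q
% at absolute temperature T
\mathrm{d}S = \frac{\delta Q_{\mathrm{rev}}}{T}
```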
A further important contribution was Alexander Bogdanov’s Tektology: Universal
Organisation Science (Bogdanov 1922). The polymath Bogdanov aimed at
developing a general law of organisational principles that underlie all systems.
Leonid Abalkin, Director of the Russian Academy of Sciences explained:
‘Tektology was one of the first attempts to find and analyse general laws of
the development of nature, society and thinking from an organisational point
of view. Its birth was organically connected with the deep social perturbations
experienced by European societies during the first quarter of the 20th century,
and with the crisis in the natural sciences, which occurred a little earlier.
There was an urgent need to develop and apply a new type and style of
scientific thinking. This need could only be met through the application of
universal, encyclopaedic knowledge, which overcame the limitations of a
specialised approach … A qualitatively new feature of Tektology was that it
attempted to discover the general laws by which systems are established and
function … Bogdanov’s ideas and conclusions in many respects anticipated
the birth of systems theory and cybernetics, and became an integral part of
these new disciplines.’ (Abalkin 1998: 7-8)
Major figures associated with the development of Systems Theory include Ludwig
von Bertalanffy (1901-1972), who developed General System Theory (von
Bertalanffy 1945, 1950, 1951, 1968, etc.), William Ross Ashby (1903-1972) (Ashby
1946, 1947, 1962, etc.), Anatol Rapoport (1911-2007) (Rapoport 1986), Charles
West Churchman (1913-2004) (Churchman 1970), Niklas Luhmann (1927-1998),
and others.
In elaborating the aims of General System Theory von Bertalanffy stated that:
‘There exist models, principles, and laws that apply to generalized systems or
their subclasses, irrespective of their particular kind, the nature of their
component elements, and the relationships or "forces" between them. It
seems legitimate to ask for a theory, not of systems of a more or less special
kind, but of universal principles applying to systems in general.’ (von
Bertalanffy 1968: 32)
The more generally defined Systems Theory constitutes a field of transdisciplinary
study of systems and underlying principles that are applicable to all systems in all
areas of inquiry. In this context systems are frequently understood as self-regulating
by way of feedback.
Cybernetics involves the interdisciplinary study of regulatory systems. The field
began to develop in the 1940s. Major figures include Norbert Wiener (1894-1964)
(Wiener 1948), Warren Sturgis McCulloch (1898-1969), Arturo Rosenblueth Stearns
(1900-1970), Louis Couffignal (1902-1966), William Ross Ashby (1903-1972), Julian
Bigelow (1913-2003), Walter Harry Pitts Jr. (1923-1969), etc. The French
mathematician Louis Couffignal described cybernetics as ‘the art of ensuring the
efficacy of action’. (Couffignal 1956) In general the aim of cybernetics is to
understand and define the functions and processes of goal-based systems that
participate in causal sequences. This entails the circular sequence of action, sensing,
evaluation, and further action, aiming for efficiency and effectiveness. Cybernetics is
related to Systems Theory, Information Theory (concerned with the quantification of
information) (Shannon 1948), Control Theory (concerned with the behaviour of
dynamical systems), Systems Analysis (concerned with sets of interacting entities),
Operations Research (concerned with effective use of technology by organisations)
(Saaty 1959, 1961), and Systems Engineering (concerned with the design and
management of complex engineering projects and their associated life cycles).
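The circular sequence of action, sensing, evaluation and further action described above can be illustrated with a minimal closed-loop sketch. The `Room` class, the gain value and the target temperature below are hypothetical illustrations, not drawn from the cybernetics literature:

```python
def regulate(sense, act, target, steps=20):
    """Minimal cybernetic loop: sense the state, evaluate the error
    against a goal, act to reduce it, then sense again."""
    for _ in range(steps):
        state = sense()            # sensing
        error = target - state     # evaluation against the goal
        act(0.5 * error)           # corrective action (gain chosen arbitrarily)

# A hypothetical room whose temperature responds directly to the heating input.
class Room:
    def __init__(self, temp):
        self.temp = temp
    def sense(self):
        return self.temp
    def act(self, correction):
        self.temp += correction

room = Room(temp=14.0)
regulate(room.sense, room.act, target=21.0)
# the error halves on every pass, so the state converges on the goal
```

The point of the sketch is only the circularity: output feeds back into input, so regulation is achieved without the system ever being described exhaustively in advance.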
One contribution of note in this context is Thomas L. Saaty’s development of the
Analytical Hierarchy Process (AHP). AHP constitutes a decision-making approach
to large-scale, multi-criterion and multi-stakeholder problems (Saaty 1980).
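A minimal sketch of AHP’s core computation may be useful here: criteria are compared pairwise, and priority weights are derived from the comparison matrix. The sketch below uses the geometric-mean approximation rather than Saaty’s principal-eigenvector method, and the comparison values are hypothetical:

```python
from math import prod

def ahp_weights(matrix):
    """Approximate AHP priority weights from a pairwise comparison matrix
    using the geometric mean of each row (a common shortcut for
    Saaty's principal-eigenvector method)."""
    n = len(matrix)
    row_gm = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(row_gm)
    return [g / total for g in row_gm]

# Hypothetical comparisons of three design criteria, where entry [i][j]
# states how strongly criterion i is preferred over criterion j.
comparisons = [
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   3.0],
    [1/5.0, 1/3.0, 1.0],
]
weights = ahp_weights(comparisons)  # the strongest criterion receives the largest weight
```

The resulting weights sum to one and rank the criteria, which is what makes AHP applicable to multi-stakeholder problems: disputed judgements enter only as pairwise comparisons.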
Within Systems Theory there exists a distinct difference between the hard systems
and soft systems approaches. The former focuses on quantifiable variables, while
the latter concentrates on non-quantifiable ones. In consequence hard and soft
systems approaches vary greatly in their subject matter and methods. Hard Systems
approaches utilise mathematical modelling, statistical analysis, simulation and
optimisation methods. Soft Systems approaches utilise a variety of different methods
such as Action Research (Lewin 1946, Burns 2007, etc.), Soft Systems Methodology
(Davies & Ledington 1991, Checkland & Scholes 1990, Stringer 1999, Wilson 2001,
etc.) and Morphological Analysis (Zwicky 1969). The latter serves to study all
possible solutions to a multi-dimensional and non-quantified problem. A third and
decidedly interdisciplinary approach known as Evolutionary Systems developed from
the work of the Hungarian linguist and systems scientist Béla Bánáthy, who posited
evolutionary systems as open, complex systems, which evolve over time (see for
instance Bánáthy 1998).
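Zwicky’s Morphological Analysis mentioned above can be sketched computationally: every combination of discrete parameter options is enumerated and then filtered by a cross-consistency check. The parameters, options and exclusion rule below are hypothetical illustrations, not taken from Zwicky:

```python
from itertools import product

# Hypothetical design parameters and their discrete options
# (the Zwicky "morphological box").
parameters = {
    "envelope":    ["single skin", "double skin", "multiple envelopes"],
    "ventilation": ["natural", "mechanical", "hybrid"],
    "structure":   ["monolithic", "framed"],
}

def morphological_field(params, exclude=lambda combo: False):
    """Enumerate every combination of options, dropping those that a
    cross-consistency check marks as incompatible."""
    names = list(params)
    field = []
    for values in product(*params.values()):
        combo = dict(zip(names, values))
        if not exclude(combo):
            field.append(combo)
    return field

# Example cross-consistency rule: a monolithic structure is taken to be
# incompatible with multiple envelopes (an assumed constraint).
field = morphological_field(
    parameters,
    exclude=lambda c: c["structure"] == "monolithic"
                      and c["envelope"] == "multiple envelopes",
)
```

The method’s suitability for non-quantified problems is visible in the sketch: nothing is measured or optimised; the solution space is merely made explicit and pruned by qualitative consistency judgements.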
Systems thinking comprises a series of practices that embrace a holistic view and
understanding of how things influence one another in a specific manner. Over time it
fostered the rise of the notions of synergy, complexity, interdependence, hierarchy,
differentiation, self-organization, emergence, etc.
The rise of Systems Thinking in architecture is not so easy to pinpoint. A series of
historical developments preceded what is portrayed below, in part related to the
relation between the development of the sciences and architecture as discussed in
the previous chapters. Here it shall suffice to examine the more directly related
advances that had their distinct role in the line of developments towards a
performance-oriented architecture.
One such development originated with the tremendous disparities in views regarding
function and style in modern architecture in the early 1930s, in particular in relation to
the ‘International Style’ exhibition at MoMA in New York in 1932, curated by Philip
Johnson, Henry-Russell Hitchcock, and Alfred Barr. As Felicity Scott (2002) and
Joachim Krausse (2011) pointed out, the exhibition’s exclusive emphasis on style over
other attributes of modern architecture prompted a succinct reaction by those that felt
that this emphasis was inadequate and lacking. In response to the exhibition and its
consequences Buckminster Fuller, Frederick Kiesler, Knud Lönberg-Holm, Carl
Theodore Larson, and others, launched the Structural Study Associates group and
the revamped Shelter magazine (formerly T-Square magazine) as a decidedly critical
response to the exclusive emphasis on style. The new subtitle of the May issue of
Shelter in 1932, ‘A Correlating Medium for the Forces of Architecture’, made Fuller
and colleagues’ intention clear: focus was placed on the investigation of ‘the
reciprocity and interaction of forces between dynamic systems and their
“environments”, and thus to find form.’ (Krausse 2011: 47) Joachim Krausse
characterized Fuller’s approach as ‘authentic functionalism’ in which it is not the
machine, but instead the ‘recreative potential of life’ on which technical reproduction
and standardization is based. Pointedly Krausse labelled this take ‘systems approach
avant la lettre’. (Krausse 2011: 17) Moreover, Krausse recounts Kiesler’s biology
related theory of ‘Correalism’ as ‘the interactions between the human subject and
natural and biological environment’. (Krausse 2011: 47) Fuller and Kiesler’s
approach thus contained some of the nuclei of performance-oriented architecture by
foregrounding the interaction between dynamic systems.
In 1936 Larson and Lönberg-Holm co-authored an article for Architectural Review
entitled ‘Design for Environmental Control’ in which they stated that:
‘the introduction of industrial techniques makes possible a new design
integration which is needed for a more precise control of environmental forces …
it is not economical to air-condition forms that have evolved out of
requirements of natural ventilation … These varying and changeable forms of
energy constitute the material for design.’ (Larson & Lönberg-Holm 1936)
Joachim Krausse argued that through this emerging notion of environmental control the
SSA departed wholly from orthodox functionalism (Krausse 2011: 52).
Moreover, the experimental, research oriented and integrated approach of the group
began to gradually manifest as an educational approach. Larson, for instance,
initiated and directed the Architecture Research Laboratory (ARL) at the University of
Michigan in 1948, which pursued the integration of planning, design, research,
construction, and technology.
Christopher Alexander’s seminal ‘Notes on the Synthesis of Form’ (Alexander 1964)
constituted a significant effort to develop an approach to design rooted in systems
thinking, featuring all the key traits of the systems discussion of the time: fitness,
feedback, adaptation, hierarchical structures, and so on. Alexander argued that the
need for this approach had arisen from the shift from traditional ‘unself-conscious’
processes of design over generations, with small adaptations taking place all the
time and knowledge passed on from master to apprentice, towards today’s condition
where the complexity of design problems needs to be handled in a short time in a
‘self-conscious’ design process. Alexander stated vehemently that ‘cultural pressures
change so fast, any slow development of forms becomes impossible … Bewildered,
the form-maker stands alone … the intuitive resolution of contemporary design
problems simply lies beyond a single individual’s grasp.’ (Alexander 1964: 4-5) In his
pursuit to articulate a systems-based approach to design Alexander acknowledged
the need for functionality, but aimed to reposition it by stating that ‘we ought always to
design with a number of nested, overlapped form-context boundaries in mind.’
(Alexander 1964: 18) Therefore, in consequent rejection of singular cause-and-
effect relations that would best be answered by some kind of optimisation process,
Alexander called for a synthesised loose-fit relation:
‘A design problem is not an optimisation problem. In other words, it is not a
problem of meeting any one requirement in the best possible way … For most
requirements it is important only to satisfy them at a level which suffices to
prevent misfits between the form and the context, and to do this in the least
arbitrary manner possible.’ (Alexander 1964: 99)
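Alexander’s loose-fit criterion, satisfying each requirement at a level that merely suffices to prevent misfits rather than optimising any single one, can be caricatured in a few lines. The requirements and threshold values below are hypothetical:

```python
# Hypothetical requirements, each paired as (measured value, the level
# that suffices to prevent a "misfit" between form and context).
requirements = {
    "daylight":    (0.80, 0.60),
    "ventilation": (0.70, 0.50),
    "privacy":     (0.55, 0.50),
}

def misfits(reqs):
    """Return the requirements whose measured value falls below the
    sufficing level; no single variable is maximised."""
    return [name for name, (value, threshold) in reqs.items()
            if value < threshold]

unresolved = misfits(requirements)  # empty list: the loose fit is acceptable
```

The contrast with optimisation is the point: a form is acceptable as soon as every requirement clears its threshold, leaving the remaining freedom to be resolved ‘in the least arbitrary manner possible’.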
Eventually the search for a systematic design ‘logic’ and the intention to reduce
arbitrariness led Alexander to the formulation of a mathematical approach and thus
to operating with quantifiable variables. Later Alexander revised his theoretical and
practical approach, yet maintained a recognisable systems orientation in his work.
Interestingly, however, Alexander’s and also Marvin Manheim’s (Manheim 1966)
systems-theoretical approaches to design were recognised in the scientific community
that focused on systems theory and related fields. The field of hierarchical structures
research that developed in the wake of systems theory culminated in a symposium in
1968. In the proceedings Donna Wilson elaborated:
‘Alexander’s Notes on Synthesis gives a detailed discussion of both the
decomposition of a design problem and its recombination in solution. His
point that “design” is more than “selection” (which can be treated by computer
analysis) rests on the argument that for problems requiring “design” there
exist no adequate descriptions of a range of alternative solutions nor criteria
for evaluating these solutions in terms of the same descriptive symbolism.
Again we encounter the problem of parts and wholes discussed under
physics and biology.’ (Wilson 1969: 302)
In spite of these relations the direct influence of systems-theory and thinking in
architecture began to decline and made way instead for an increased emphasis on
standardisation. However, further developments of the Analytical Hierarchy Process
(AHP), TRIZ and BioTRIZ are promising and require closer examination regarding their
possible contribution to performance-oriented architecture.
2.4 The rise of the notion of performance in the humanities, arts and science
The fourth development of significance concerns the rise of the notion of performance
from the mid-20th Century onwards first in the humanities and social sciences and
later also in the arts and science in general. This development took shape during the
1940s and 1950s with an intellectual movement known as the ‘performative turn’: a
paradigm shift in the humanities and social sciences, with focus on theorizing
performance as a social and cultural element. This comprised efforts that [i]
focused on the elaboration of a dramaturgical paradigm to be applied to culture at
large and that facilitated the view of all culture as performance (see the works of
Kenneth Duva Burke, Victor Witter Turner, Erving Goffman, etc.), and [ii] the work of
the British Philosopher of language John L. Austin, who posited that speech
constitutes an active practice that can affect and transform realities. (Austin 1962)
Due to the ‘performative turn’ performance is today commonly understood as a
concept that serves as a heuristic approach to understanding human behaviour, and
is rooted in the hypothesis that all human practices are performed and are affected
by their specific context. In so doing these efforts paved the way for an
understanding of active human agency.
This was followed by the ‘performative turn’ in the arts. Fine arts, music, literature
and theatre all ‘tend to realise themselves through acts (performances)’, thus shifting
the emphasis from works to events that increasingly involve the ‘recipients, listeners,
spectators’. (Fischer-Lichte 2004: 29) Furthermore, Fischer-Lichte proposed that
Austin’s notion of the performative is not only applicable to speech, but that it can
also be applied to corporeal acts. This relates to the development of the
‘performance arts’ as situation-specific, action-emphasising and ephemeral artistic
presentations of a performer. It thus engages spatial and temporal aspects, as well
as the performer and a specific relation between performer and audience.
Subsequently the concept of performance also began to surface in the natural
sciences, technology studies, and in economic science. Andrew Pickering, Professor
of Sociology and Philosophy at the University of Exeter, charted a shift within the
sciences away from a ‘representational idiom’ and towards a ‘performative’ one,
proposing that:
‘Within an expanded conception of scientific culture - one that goes beyond
science-as-knowledge, to include material, social and temporal dimensions of
science … it becomes possible to imagine that science is not just about
representation … One can start from the idea that the world is filled not, in the
first instance, with facts and observations, but with agency. The world, I want
to say, is continually doing things, things that bear upon us not as observation
statements upon disembodied intellects but as forces upon material beings.’
(Pickering 1995: 5-6)
Pickering elaborated that ‘practice effects associations between multiple and often
heterogeneous cultural elements’, and that the production of knowledge operates
through practice, casting scientific practice as a means of doing things. (Pickering 1995:
95) In so doing Pickering paved the way for an understanding of active human
agency in the context of the sciences, and of the world being filled with and
intrinsically characterised by active agency.
The concept of agency in philosophy and sociology refers to the capacity of a person
or entity to act in the world. Michael E. Bratman, Professor of Philosophy at Stanford
University, elaborated human agency as follows:
‘We are purposive agents; but we adult humans in a broadly modern world
are more than that. We are reflective about our motivation. We form prior
plans and policies that organize our activity over time. And we see ourselves
as agents who persist over time and who begin, develop, and then complete
temporally extended activities and projects. Any reasonably complete theory
of human action will need, in some way, to advert to this trio of features: to
our reflectiveness, our planfulness, and our conception of our agency as
temporally extended … I will say that these are among the core features of
human agency.’ (Bratman 2007: 21-22)
Studies of human agency are generally characterized by differences in
understanding within and between disciplines, yet it is usually not contested as a
general concept. The concept of non-human agency, however, has remained to
some extent controversial. Actor-Network Theory as developed by Michel Callon,
Bruno Latour, John Law, and others is a social theory that postulates non-human
agency as one of its core features. Bruno Latour explained that:
‘If action is limited a priori to what “intentional”, “meaningful” humans do, it is
hard to see how a hammer, a basket, a door closer, a cat, a rug, a mug, a list,
or a tag could act. They might exist in the domain of “material” “causal”
relations, but not in the “reflexive” “symbolic” domain of social relations. By
contrast, if we stick to our decision to start from the controversies about
actors and agencies, then any thing that does modify a state of affairs by
making a difference is an actor or, if it has no figuration yet, an actant.
Thus, the questions to ask about any agent are simply the following: Does it
make a difference in the course of some other agent’s action or not? Is there
some trial that allows someone to detect this difference?’ (Latour 2005: 71)
Latour referred to such items as ‘participants in the course of action awaiting to be
given figuration’. (Latour 2005: 71) Moreover, Latour elaborated that such
participants can operate on the entire range from determining to serving human
actions and from full causality to none, and called for analysis ‘to account for the
durability and extension of any interaction’. (Latour 2005: 72) The proposed grading
of causality is of interest in that it can serve as a systematic approach to specific
aspects of performance-oriented architecture.
There exist several fundamental criticisms of Actor-Network Theory. One key
criticism focuses on the property of intentionality as a fundamental distinction
between humans and animals or objects. Activity theory, for instance, operates on
intentionality as a fundamental requirement and thus ascribes agency exclusively to
humans. In contrast the concept of agency in Actor-Network Theory is not predicated
on the postulation of intentionality, nor does it assign intentionality to non-human agents.
Recognizing non-human agency does not, however, necessitate the relinquishing of
concerns for human intentionality. The different forms of self-conscious and un-self-
conscious intentionality will need to be incorporated into an integral approach to
performance-oriented architecture. Likewise the proposed notion of non-
anthropocentric architecture does not entail the removal of human concerns, but,
instead, the recognition that exclusively human centred approaches may be
inadequate in addressing today’s problems of the built environment in a more holistic manner.
The above indicates that the general characteristics of the notion of performance that
are of use for the purpose at hand include: human and non-human agency,
interaction, context- and time-specificity, and discursivity.
2.5 The rise of the notion of performance in architecture
The 1960s witnessed one of the most complex systems-engineering projects ever:
the United States’ National Aeronautics and Space Administration’s (NASA) Apollo
human spaceflight program (1961-1972). The space-race and the Cold War related
construction of nuclear shelter bunkers necessitated the design of contained life- or
‘eco’-systems that required a far more complex approach to design and engineering
than ever before. In the midst of this context, in August 1967 the US-American
journal Progressive Architecture (P/A) dedicated an entire issue to the topic of
performance, entitled ‘Performance Design’. As points of origin for the architectural
approach it portrayed, P/A listed Systems Analysis, Systems Engineering and
Operations Research, all essentially hard systems oriented. Emphasis was therefore
placed on methods of addressing and modelling complex engineering problems,
which involved mathematical modelling towards optimisation and efficiency, and
generally the application of scientific method towards this end. Yet, while questions
outside of the scope of quantifiable variables such as aesthetics etc. were
recognized and discussed, and concerns with balancing quantitative and qualitative
measures and measurements were voiced, the shortcoming of the take portrayed by
P/A was the almost exclusive alignment with the hard systems approach. This
required that design problems had to be fully described a priori, and that the
approach was therefore mainly one of problem solving through methods that are not
equipped to account for unquantifiable variables. Therefore problem solving focused
mainly on questions of efficiency, effectiveness and optimisation. This constituted in
some ways a self-defeating process in that the increasing number of progressively
tighter standards that arose from the emphasis on efficiency and optimisation made a
systems approach increasingly redundant. [Fig. 4, p. 53]
Fig. 04 Cover of Progressive Architecture (P/A) journal from August 1967 themed “Performance Design”
At the same time, in the late 1960s, the critique of functionalism and hard systems
approaches in architecture branched out in a number of different directions.
Functionalism and Rationalism became the foci of architectural debates and
triggered in their wake numerous counter reactions: Neo-functionalism, Neo-
rationalism, Post-functionalism, and so on. From here two parallel reactions of note
began to take shape. One reaction was a succinct critique of program as a
deterministic approach to the relation between space and space use rooted in hard
systems approaches. The other reaction was based on the gradual rise of semiotics
in architecture and triggered the aim to locate performance in the meaning of
architecture, the symbolic, in signification and representation thus fostering the
ascent of Postmodernism in architecture. The latter was in turn criticised from the
late 1980s and early 1990s onwards as operating on a limited set of culturally
determined references, and thus a finite repertoire incapable of producing a new
architecture. The emphasis shifted instead to more abstract formal experimentation
and the restatement of criteria for a new architecture (see for instance Unger 1991
and Kipnis 1993), and the production of architectural effects (see for instance Kipnis 1997).
Eventually the last decade witnessed a gradual return of an explicit interest in the
relation between architecture and performance that involved also a revival of an
interest in function and a counter-reaction to exclusive formalism. Thus the
architectural community remained thoroughly split into those that continued to locate
performance in a quantifiable manner foregrounding function and functionality along
a hard systems approach, and those that continued Philip Johnson’s ‘style, nothing
but style’ argument and thus located performance in the formal aesthetic domain.
Typically the formally oriented architects criticise the functionalist approach as limiting
and insufficient to constitute true architecture, while functionally oriented
architects criticise the formal approach as superficial and gestural. The unfortunate
dilemma of this argument is the simplistic conflation of performance with either
function or form.
Yet, there exist also the beginnings of integrative approaches to performance-oriented
design as outlined in chapter 1. There exists no instantaneous solution to the
problem, since solving this predicament requires the formulation of an inclusive
overarching theoretical and methodological framework nurtured by extensive
research by design efforts. Such efforts have, however, gradually emerged over the
past two decades and the traits of an integrated performance-oriented architecture
can now begin to be formulated owing to these accumulative efforts.
2.6 Discourses and research by design en route to performance-oriented architecture
The notion of performance-oriented architecture is rooted in the broadening of
architectural discourse as part of architectural practice and education during the 20th
century. These developments were related to efforts such as the German Werkbund
(1907-34), CIAM - International Congresses of Modern Architecture (1928-59), ANY
(1990-2000), etc. A robust relation between practice, discourse and dissemination
began to emerge from these efforts that led to the appearance of critical architectural
journals such as Shelter (1930-32 with Buckminster Fuller as Editor), Arch+ (1967-
ongoing), Oppositions (1973-84), Assemblage published by MIT (1986-2000), ANY
Journal (1993-2000), Grey Room (2000-ongoing), Log (2003-ongoing), and so on. A
key effort to revitalize the discourse and dissemination effort of groups like CIAM was
launched by the ANYone Corporation, established in 1990 and directed by Cynthia
Davidson, with the purpose ‘to advance the knowledge and understanding of
architecture and its relationships to the general culture through international
conferences, public seminars, and publications that erode boundaries between
disciplines and cultures’ (ANYone Corporation website). Due to the involvement of
Peter Eisenman, Jeffrey Kipnis, and others, part of these discussions focused on
design techniques, rigour and instrumentality, and sought to lay out key features of a
‘new architecture’ (Unger 1991, Kipnis 1993) and the production of ‘architectural
effects’ (Kipnis 1997).
In parallel and related to these efforts a reciprocal development of a rigorous and
instrumental approach to architectural design and theory took shape. Rigour was
understood as decisive in relation to the application of new design techniques
according to specified rules so as to suspend and overcome inherited
preconceptions about architecture, as well as to provide an instrumental approach to
design and the production of architectural effects.
These efforts were characterized by an accompanying interest in various sciences
and scientific developments. This resulted from a double movement: on the one hand,
a continuous interest in science, re-emphasized among other developments by the
rise of systems thinking in architecture; on the other, the perception of science as a
medium of subversion by the ’68 generation. This new
interest placed emphasis on the notion of complexity (the return of systems theory in
a different guise) and a renewed particular interest in biology. Some architects
approached their interest into other disciplines from the outside - that is from an
architectural perspective in search of analogous processes and techniques for
generating new form, or, at the other end of this spectrum, productive concepts for
an integrated and interdisciplinary architectural discourse. The particular set of
biological concepts and related illustrations that received a great deal of attention at
the time included the morphological transformation diagrams of D’Arcy Thompson
(Thompson 1942), the epigenetic landscape of Conrad Waddington (Waddington
1940) and the morphogenetic diagrams of René Thom (Thom 1972) (for evidence
see Kwinter 1992a and 1992b, Lynn 1992, etc.) all concepts that foreground a
generative dynamic. In addition there emerged also an increasing interest in
biological notions, such as morphogenesis, adaptation, fitness, robustness, and
ecology, and, ultimately, into questions of performance in general. However, the
majority of these efforts were and still are geared exclusively towards the production
of architectural form or pattern.
Out of these discourses a more specific development ensued regarding formal
relations between context and design scheme. In this context Jeffrey Kipnis launched
a forceful critique of the predominant mode of collage as a ‘graft’ that produces
‘extensive incoherence and contradiction’ and operates on enhanced difference
between context and scheme, versus other forms of ‘grafts’ that ‘seek to produce
heterogeneity within an intensive cohesion’ (Kipnis 1993: 42). This reveals two
significant preferences: [i] heterogeneous instead of homogenous space (for an
account of the development of this preference see: Hensel, Hight, & Menges, 2009),
and, [ii] the development of specific types of relations between context and scheme.
Kipnis elaborated that ‘intensive coherence implies that the properties of certain
monolithic arrangements enable the architecture to enter into multiple and even
contradictory relationships’ and proposed a preference of a particular kind of relation
namely ‘shifting affiliations that resist to settle into stable alignments‘ (Kipnis 1993:
43). Moreover he stated that:
‘Affiliations are provisional, ad hoc links that are made with secondary
contingencies that exist within the site or extended context. Rather than
reinforcing the dominant modes of the site, therefore, affiliations amplify
suppressed or minor organisations that also operate within the site, thereby
re-configuring the context into a new coherence. Because they link disjoint,
stratified organisations into a coherent heterogeneity, the effect of such
affiliations is called “smoothing”.’ (Kipnis 1993: 45).
Such design strategies had the intention to overcome the hard-edged difference
between scheme and context that is characteristic of collage. If this difference is
‘smoothened’ through a specific set of relations between the scheme and its context
a new ‘coherence’ emerges that is ‘intensive’ because it relies on a mutual system of
organisation and the relationships that develop out of it. It can be argued that this was
a new beginning of a mode of embedding scheme and context into one another, one
that challenged the discreteness of architectures precisely because it subverts the
difference between project and context along their shared border and transforms the
latter into a zone of gradual transformation. Two arenas for elaborating such
strategies prevailed in the early 1990s: [i] architectural competitions and [ii] related
design-oriented studios in leading schools of architecture (both partly due to the
difficulty of gaining direct commissions at the tail-end of an economic downturn). Often
there existed a direct link between these two arenas as the architects that
experimented with design in their competition entries also taught experimental design
studios and often experimented with the same subject matter and projects in both
arenas. It can be argued that by virtue of the discourse-driven experimental nature of
the design inquiry, the focus on method and the production of actual effects, and the
need for detailed observation and analysis, research by design in architecture
gradually evolved and came into full swing.
In 1992 in the context of the newly established graduate design programme at the
Architectural Association in London, directed by Jeffrey Kipnis and Donald Bates, the
author and his peers were introduced to the developments elaborated above, which
were rooted in Jeffrey Kipnis’s seminal article ‘Towards a New Architecture’
(Kipnis 1993). In this article Kipnis elaborated five points or principles - in direct
response to an analogous attempt by Roberto Mangabeira Unger during the
ANYONE conference in 1990 (Unger 1991) - together with two divergent modes of
actualising these principles: firstly DeFormation, with emphasis on the articulation of
monolithic form and interstitial space, and, secondly, InFormation with
emphasis on questions of programme while de-emphasising form. It was immediately
obvious that the DeFormationist projects with their monolithic volumes, multiple
envelopes, and heterogeneous and interstitial spaces were located outside of the
canon of established architectural form (see for instance the Alexandria Library
Competition Entry in 1989 and the Nara Convention Hall Competition Entry in 1992
by Bahram Shirdel). Yet, these projects were mostly unbuilt and their detailed
articulation was difficult to grasp, in part due to their monolithic character and lack of
development into built architectures. It was not entirely clear whether these projects
just established a series of exotically shaped discrete architectures, or whether they
indeed entered into a different relation with their specific contexts. In the context of
the new graduate design programme the aim was therefore to pursue a detailed
development of DeFormationist architecture with an added goal of the decisive
dissolution of built form into a non-discrete tectonic landscape. If this were
possible it would entail that built form would no longer constitute a figure-ground
relation, but, instead, built mass and landscape would engage in the formation of a
heterogeneous and coherent fusion that could offer new forms of architectural
provisions. In this context the author and collaborators worked on the Spreebogen
project and produced several key items that both developed and portrayed the
project intentions:
[i] A programme and event map that contained information about activities,
circulation, landscape items and surfaces for programme and public appropriation,
assembly fields, time-specific plantation schemes and lighting systems, river
regulation and flooding areas, in short all systems that organise the site and its
specific provisions [Fig. 5, p. 61];
[ii] An axonometric that elaborated spatial transitions and degrees of interiority in
conjunction with landscape surfaces that make up the tectonic landscape together
with other spatial elements such as season-specific plantation instructions, etc. [Fig.
6, p. 61];
[iii] A conceptual model that indicated the fusion of landscape and built mass into one
another, using colour-coding for the various surface systems that make up the
tectonic landscape [Fig. 7, p. 62].
Fig. 5 Top: Spreebogen Program and Event Map
Fig. 6 Bottom: Spreebogen Tectonic Systems Axonometric
Fig. 7 Spreebogen Conceptual Model
However, it was not possible in the time of a single academic year to develop the
notion of a tectonic landscape into a fully defined architectural scheme, although the
foundation for a new series of experimentations and indeed the foundations of a new
research trajectory were laid. The significance of this design experiment lay not in its proximity to what has come to be termed ‘landscape urbanism’, but in its organisation of the various items and systems that would eventually culminate in an urban and architectural project seamlessly integrated into its existing context. This held the promise of redefining a heterogeneous spatial scheme based on expanded spatial transitions and the extension of the architectural threshold, in short, the ushering in of emerging traits of a performative architecture. Herein lay perhaps one of the greatest potentials with regard to Kipnis’s heralded emergence of new institutional forms, social formations, and architectural effects.
Ensuing developments in architectural discourse, practice and education did not broadly embrace this direction. It so clearly contradicted the aspiration of architects to distinguish their designs as discernibly individual, new and different, characteristics that are difficult to accomplish with a built fabric that is intensively fused with its context in the manner described above. A renewed interest in discrete
architectures surfaced that focused on ‘exotic’ form, groups of progressively varied
built volumes, on ‘flocks’, ‘swarms’, and ‘schools of fish’, as they were referred to at
the time. On an urban scale such designs have come to be known as ‘parametric
urbanism’. In such projects architects frequently sculpt only the exterior, whereas normative design solutions define the interior. The architectural project has thus gradually become one of a total exterior, by necessity ideally designed by one and the same practice in order to maintain a coherent appearance and to fulfil the criteria of similarity and variation of the ‘school of fish’ arrangement. Yet, numerous
experiments continued in promising directions that came to be of significance to the
development of performance-oriented architecture.
One key development concerns the notion of architectural effect. In examining Herzog & de Meuron’s Signal Box project, Jeffrey Kipnis highlighted the material effects emanating from the copper-strip skin laid over the actual climate envelope of the building (Kipnis 1997). Here Kipnis distinguished ornamentation from cosmetics,
characterising the former as discrete aesthetic entities and the latter as fields and as
atmospheric. This notion of fields and atmospherics resonates in some way with
Kwinter’s notion of micro- and macro-architectures elaborated above, due to how
these notions extend the focus beyond the physical envelope of architectures.
What comes to the fore is the notion of the performance envelope of a project: the
sum of all conditions arising from the interaction between the architecture and
context-specific circumstances. This significantly differs from Greg Lynn’s notion of the performance envelope as a ‘multi-type’ which ‘does not privilege a fixed type but instead a series of relationships or expressions between a range of potentials […] this concept of an envelope of potential from which either a single or series of instances can be taken’. (Lynn 1999: 13-14) Lynn described what are today widely known as ‘associative’ models, the form of which can be parametrically varied and informed by multiple inputs. In and of itself this notion tends only to lead back to ‘school of fish’
type designs. In the context of OCEAN, initially founded in 1994 as an international
network of collaborating architects, the performance envelope of an architecture was instead seen to comprise not only the form of the material envelope but also the conditions and effects generated by it, which arise from its interaction with
anticipated conditions and inadvertent contingencies. (Bettum & Hensel 2000) The
unbuilt Synthetic Landscape Pavilion by OCEAN, for instance, was designed
according to such considerations:
‘The pavilion’s performance becomes a function of the constituent systems that together form its performative envelope: material surfaces, differential climate spaces, activities and visitors […] While in the eyes of the conventional
designer the material envelope of a project is the final result of formation, the
process of ever-actualising the reaffiliation between activational and spatial
potentials forms the true envelope of a project’. (Bettum & Hensel 2000: 42)
OCEAN’s work is largely characterised by this understanding, and aims to fuse
architecture and landscape in an attempt to approach a detailed resolution of a
tectonic landscape. From 1995 onwards the work of Diploma Unit 5 at the
Architectural Association in London directed by Farshid Moussavi and Alejandro
Zaera-Polo pursued a similar agenda, developing and employing highly instrumental
design techniques in search of specific fusions of architecture, urbanism and
landscape that also characterised the work of their practice Foreign Office Architects.
From 1996 onwards Diploma Unit 4 at the Architectural Association, initially directed
by Ben Van Berkel and the author, developed the notion of a decidedly urban
performance envelope. Emphasis was placed on spatial transitions, incorporating
spatial, material and environmental gradients towards provisions for dynamic and
variable modes of multiple space use. This research was intensified in Diploma Unit
4 from 1999 onwards - directed by Ludo Grooteman and the author and
complemented by a program for interiorised intensive agricultural production in The
Netherlands. In this way it was possible to enrich the design experiments with a
varied and nuanced set of requirements for heterogeneous interior microclimates,
and the transitions between exterior and interior. Gradually an architecture took
shape that was grounded in the interaction between spatial and material
constituents, multiple use provisions and environmental dynamics. This cycle of
investigation eventually resulted in a final stage of research in Diploma Unit 4
directed by Achim Menges and the author - that focused on the development of a
comprehensive design logic to articulate material systems that are able to provide
performative capacity. (Hensel & Menges 2006, 2008) Similar intentions underpinned
the research in the Studio module of the Emergent Technologies and Design master
program at the Architectural Association, directed by the author from 2001 to 2009
(see Hensel 2010b).
As the discourse on performance in architecture resurfaced in the middle of the last
decade, numerous efforts ensued to map the scope of approaches. The initially more infrequent articles culminated in a series of readers and events, such as Branko Kolarevic and Ali Malkawi’s reader Performative Architecture: Beyond Instrumentality (Kolarevic & Malkawi 2005), the Performalism: Form and Performance in Digital Architecture exhibition at the Tel Aviv Museum of Art in 2008, curated by Yasha Grobman and Eran Neuman, the associated exhibition catalogue (Grobman & Neuman 2008) and the revised and extended book (Grobman & Neuman 2012). These efforts showed clearly the division between either predominantly formal
or functional approaches to the notion of performance. These efforts in mapping the field were paralleled by attempts to initiate more systematically detailed efforts, such as Farshid Moussavi’s The Function of Form (Moussavi 2009). David
Leatherbarrow’s approach in Architecture Oriented Otherwise (Leatherbarrow 2009)
offered a systematic and detailed analysis and elaboration of different types of
performance. Motivated by David Leatherbarrow’s systematic and multi-angled
analysis this thesis seeks to deliver what seemed to be missing: an integrated and
cross-scalar approach to performance-oriented architecture that is projective and
points towards an alternative approach to architecture and questions of sustainability
that can be relevant for the bulk of architectural design today. As argued in the introduction to this thesis, there now exists a need for an integrative discourse that at the same time critically reviews inherited biases and partialities that present obstacles to such a discourse. The introduction also offered an elaboration of
six approaches to performance in architecture today:
[i] The revived postmodern approach;
[ii] The predominantly formalist approach;
[iii] The predominantly functionalist approach;
[iv] The approach that foregrounds the notion of event that counters planned
relation between architectures and their use and emphasizes unplanned
appropriations and inadvertent latent capacities of architectures;
[v] The approach that seeks to integrate the participation of architecture in
authored and un-authored conditions;
[vi] The detailed integrative approach attempted in this thesis, largely based on [v].
The developments outlined in this chapter were paralleled by a particular set of advancements in design methods and research by design in general, which is discussed in the following chapter.
2.7 Recent developments of design methods and research by design
The following part constitutes an account of a specific trajectory of development of a
methodological approach to architectural design, which both paralleled and facilitated
the development of research by design as a mode of knowledge and design
production in architecture.
This development relates to and commences from the discourse on design
techniques, rigour and instrumentality, and the production of architectural effect.
Operative notions such as smoothing, folding, etc. related to the rigorous
implementation of instrumental design techniques for the purpose of the production
of innovative formal experiments, spatial organisation, institutional form and social
arrangements, and material effects (see for instance Kipnis, 1993). Rigour was
indispensable in relation to the implementation of new design techniques by means
of specified rules in order to derive specific effects associated with a respective
technique through a translational process into an architectural scheme. Design
technique was thus intended to be instrumental both with regards to specific ways of
execution and the production of architectural effects. The development of conceptual
approaches over the last two decades has in parts been linked to and driven by
methodological developments.
The early to mid-1990s witnessed the decisive transition to computational design.
The subsequent development with regards to design techniques focused on digital
methods, and more specifically digital animation techniques. These incorporated a
time-aspect in the production of form; an unfolding or becoming that encapsulated
some of the concurrent interest in the writings of philosophers such as Deleuze and
Bergson. dECOi, Greg Lynn, Stephen Perrella, Marcos Novak, NOX, OCEAN and others (see for instance Zellner, 1999) all utilised time-based, data-driven processes in evolving their schemes, which for a period of time remained rather elusive regarding their detailed material articulation. The difficult question associated with this
approach is which instance of an animation should be chosen to inform the final
design. Clearly this was one of the more intuitive aspects of the design process. At this
point the becoming aspect shifted to an argument about the specific qualities of the
chosen moment / key frame and its particular characteristics that distinguished it
from any other potential moments / key frames [Fig. 8].
Fig. 8 Design process diagram showing the translational process from a key-frame of a digital animation
as a graphic dataset to an architectural scheme. Diagram by the author.
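The key-frame selection described above can be illustrated with a minimal sketch; the deformation function, data and names below are hypothetical stand-ins, not the animation tools actually used at the time:

```python
# Minimal sketch of key-frame sampling from a time-based form-generation
# process: a profile is deformed over time by a simple rule, producing a
# sequence of key frames, and one frame is selected to inform the design.
# All data and functions here are illustrative placeholders.

def deform(profile, t):
    """Return the profile at time t under a simple time-varying deformation."""
    return [y + t * 0.1 * (i % 3) for i, y in enumerate(profile)]

def animate(profile, frames):
    """Produce the full key-frame sequence of the deformation."""
    return [deform(profile, t) for t in range(frames)]

def select_key_frame(sequence, index):
    """Choose one instance of the animation to carry forward
    into the architectural scheme."""
    return sequence[index]

profile = [0.0] * 6                # flat initial profile
sequence = animate(profile, 10)    # ten key frames
chosen = select_key_frame(sequence, 7)
```

The choice of index in `select_key_frame` is precisely the intuitive step discussed above: nothing in the process itself privileges frame 7 over any other.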
This unquestionable arbitrariness seemed to contradict the wish for the rigorous instrumental execution of what was desired to be a seamless process of translation that would retain all the specific characteristics and effects of the initial dataset. While this may or may not necessarily constitute a problem, it was here that two distinct approaches diverged from one another: one that insisted on foregrounding the aesthetic qualities of the design versus one that began to explore the potentials of interrelating analytical and generative processes with the aim of evaluating the different design iterations that evolve from this process. Evolutionary design processes were one significant development of the latter (see for instance Frazer 1995).
In this context it is of interest to examine several parallel developments in the
collaborative network OCEAN, as well as various teaching environments where
OCEAN members were involved from the mid 1990s onwards. These developments
originated in the early 1990s in the Graduate Design Program at the Architectural
Association directed initially by Jeffrey Kipnis and later in collaboration with Bahram
Shirdel, as discussed above. In the studio, translational processes were explored that had evolved in part in the context of Peter Eisenman’s work and Jeffrey Kipnis’s collaboration with Eisenman, and in part in Bahram Shirdel’s work. This involved for instance the concept and method of folding. The translational process was protracted, meticulous and hyper-detailed in order to secure a high level of rigour and
instrumentality in the design process [Fig. 9].
Fig. 9 Design process diagram showing the translational process from a graphic dataset to an
architectural scheme. Diagram by the author.
Due to the educational emphasis on design method, the studio projects, like the Spreebogen project discussed in the previous chapter (developed in the Graduate Design Program at the Architectural Association in 1992-93), remained overly abstract. The process had not matured enough to deliver detailed designs in the time
of a semester or academic year. Attention to context was mainly concentrated on two
aspects, firstly the relation between the new scheme and the existing urban context
along the perimeter of the scheme (inspired by the two corner elements of the Vitra
production hall in Weil am Rhein designed by Frank Gehry which ‘smoothen’ the
relation between the exotic form of his museum and the rectilinear production hall),
as well as the selective retaining of site features in an otherwise tabula rasa scheme
(inspired by OMA’s scheme for La Defense). All other systemic relations within the
scheme depended on the nuanced translation of the graphic underlay into an
architectural scheme and the maintenance of intensive coherence (Kipnis, 1993)
between the various systems that make up the scheme. This approach was new to Europe and originated within a specific circle of US-American academia and avant-garde practice. In due course several young European architects recognised its potential and appropriated it to European contexts. Diploma Unit 4 at the
Architectural Association, directed by Ben Van Berkel, Ludo Grooteman, and the
author deployed several of the techniques with the aim of satisfying a series of central design criteria, in part introduced by Jeffrey Kipnis. However, this was done in much
more direct contact with very specific urban and regional conditions, and with greater
emphasis on a detailed analysis of context. The means for this were graphic
mapping techniques that captured contextual conditions in a similar graphic language
as the graft or alternatively the diagrams deployed for the design process. This
constituted an instrumental way of injecting more context-specific data into the
design process and went hand in hand with the development of the systemic relation
between the deployed design techniques. The latter was accomplished by what was
termed the operational matrix, which had elements of an operational flow chart, but with
multiple entry points and multiple ways of sequencing the deployment of design
techniques, whereby different routes through the operational matrix could result in
fundamentally different outcomes.
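The operational matrix described above can be sketched, under loose assumptions, as a set of composable operations on a working dataset; the three ‘techniques’ below are invented placeholders, not the unit’s actual methods:

```python
# Sketch of an 'operational matrix': design techniques are composable
# operations on a working dataset, and different routes (entry points and
# orderings) through the matrix yield different outcomes.
# The three techniques below are stand-ins, not the studio's methods.

def graft(data):
    return data + [sum(data)]              # overlay an aggregate contextual datum

def fold(data):
    return data[::-1]                      # reorder / fold the dataset

def smooth(data):
    return [round(x * 0.5, 3) for x in data]   # soften all values

MATRIX = {'graft': graft, 'fold': fold, 'smooth': smooth}

def run_route(seed, route):
    """Apply a chosen sequence of techniques to a seed dataset."""
    data = list(seed)
    for name in route:
        data = MATRIX[name](data)
    return data

seed = [1.0, 2.0, 3.0]
route_a = run_route(seed, ['graft', 'fold'])           # one entry point / sequence
route_b = run_route(seed, ['fold', 'graft'])           # another route
route_c = run_route(seed, ['smooth', 'fold', 'graft']) # a third route
```

Because the operations do not commute, different routes through the matrix produce fundamentally different outcomes, as noted above.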
The interesting aspect in this development is the succinct move from ensuring
instrumentality and rigor within a specific design technique to instrumentality and
rigour in the combination and integration of design techniques, as well as
emphasising the importance of data collection and processing in the design process.
In parallel to this development the collaborative network OCEAN developed some of
the graphic design techniques introduced by Jeffrey Kipnis further. This involved
greater emphasis on the mapping of context-specific conditions (geographic,
demographic, climatic, etc.) and elaborating more systematically complex grafts that
could receive animated time-based information. In the context of Diploma Unit 4
additional emphasis was placed on the articulation of buildings that retained the
intricate systemic organisation of the various translational diagrams of the design
process. This was done to arrive sooner at an initial design scheme so as to utilise it
as a context for analysis to further elaborate the scheme, thus introducing a recursive logic to design based on the evaluation and further articulation of a given
scheme. This approach placed more emphasis on spatial relations between exterior,
interior and transitional spaces. In response to this Ludo Grooteman and the author
shifted emphasis in Diploma Unit 4 to designs that were geared towards complex
interiorised agricultural production in The Netherlands. In doing so it was possible to
enrich the design brief with a varied and nuanced set of requirements for
microclimates, and the transitions between them. This approach proved successful
with regards to the further development of instrumental design techniques that
explored the dynamic relation between the material articulation of a scheme and its
interaction with a given environment.
Likewise OCEAN utilised context-related data in their work (see for instance the Synthetic Landscape Phase 3: Bettum & Hensel, 2000) and placed greater emphasis on the potentials arising from the differentiation of interior climates, as the following examples demonstrate. OCEAN’s Time Capsule project (1998) was an entry to an
invited competition to design a time capsule for the lobby of the New York Times
building, to contain selected items that were to be preserved for a period of a thousand years to exemplify to future generations the technical and design sensibility of the late 20th century. Rather than selecting specific items and designing only one time capsule, OCEAN pursued a fundamentally different approach. A state-of-the-art
digital animation technique was utilised to wrap any number and shape of objects
into an intricately articulated envelope. The suggestion was to produce nine different
yet similar capsules, all with different content and consequently with different
resulting geometries. This was done to emphasise the shift from mass produced to
mass customised objects, which characterised the related fundamental shift in
design and technical sensibility at the end of the 20th century. The capsules were to
be placed in different locations in the Antarctic ice shelf. If the ice shelf were to melt due to local or global climate change, the capsules would be released into the ocean currents at different times and in different locations, increasing the chance that some capsules would survive the dramatic changes that a millennium might
witness. For this purpose the material form of the capsules had to withstand the ice
pressure and also be aquadynamic. Shaping the capsules therefore involved two
generative data-streams, one that operated from within by means of the shape of the
objects contained and their spatial organisation within the capsule, and one operating
from the outside by means of the external context and the resulting performative
requirements. The design thus resulted from these interacting sets of requirements.
The material solution required specialist expertise in metallurgy for the outer titanium
layer of the capsule and in advanced ceramics for the inner layer, coupled with
specialist expertise in climatology and oceanology. [Fig. 10, p. 74]
Fig. 10 A-Drift Time Capsules by OCEAN, 1998. Competition entry to the invited New York Times Time
Capsules. Top: stills from the digital animation that simulates the wrapping of a surface around object.
Bottom left: Digital model of a time capsule showing the inner ceramic capsules nested with the outer
titanium alloy capsule. Bottom right: Rapid prototype model (SLS) at 1/10 scale of a time capsule.
Source: Courtesy of OCEAN Design Research Association.
This approach was further developed for an architectural design. In 2001, a few months after the 9/11 attacks, fifty selected architects, including OCEAN, were invited by the Max Protetch Gallery in New York to develop designs for a new World Trade Centre, to be shown in an exhibition. OCEAN proposed a scheme for a new World
Centre for Human Concerns that would provide forms of representation for all
peoples, not only those represented by nation states. This required a formal
approach that was not already associated with existing forms of representation and
institutional forms and arrangements. OCEAN opted for wrapping a new volume
around the void of the former twin towers. This was done by deploying a similar
method as for the time capsules and resulted in a very deep plan of a very large
volume. Instead of attempting to bring daylight into the deep-plan arrangement, the possibility emerged of organising a 24-hour interior environment with constant night areas in the portions of the deep plan that are the darkest. In other words, a gradient of
conditions was utilised as the pre-condition of developing ideas for inhabiting such a
space. [Fig. 11, p. 76]
Fig. 11 World Centre for Human Affairs by OCEAN, 2002. Design for the ‘A New World Trade Centre’
exhibition at the Max Protetch Gallery in New York. Top: stills from the digital animation that served the
computational form-finding process. Bottom left: Rapid prototype model (SLS) at 1/10 scale of a time
capsule. Bottom right: Digital model showing the different tectonic systems of the scheme: envelopes,
floor slabs, external basket-type structure that also serves as vertical circulation system. Source:
Courtesy of OCEAN Design Research Association.
The design processes for the New York Times Capsules and the World Centre projects show how the use of animation techniques anticipated, to a considerable degree, the parametric associative modelling processes that are now widely in use. During the mid-2000s, first Diploma Unit 4 and later the studio work in the Emergent Technologies and Design master program at the Architectural Association began collaborating with
the developers of a parametric associative modelling package in its beta-stage in
order to assist the development of its functionality and to connect it to analytical
methods. This was with the aim to strengthen the connection between analytical and
generative methods in the design process. Parametric associative modelling is not in
and of itself generative. However, when set into a systemic approach of connecting
instrumental design techniques, such as in the operational matrix approach
described above, the generative function of the design process could be enabled by
means of iterative steps in a positive feedback set-up to promote change in the
design and the evaluation of set criteria through each iterative step.
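As a rough sketch, the iterative feedback set-up described above can be reduced to a generate-evaluate-adjust loop; the ‘shelter’ criterion and all parameters are illustrative assumptions rather than any actual analysis package:

```python
# Sketch of the recursive generate-and-evaluate set-up described above:
# a parametric step generates a configuration, an analytical step scores
# it against a set criterion, and the evaluation feeds back into the next
# iteration. The 'shelter' value and all parameters are illustrative
# placeholders, not an actual building-performance model.

def generate(depth, porosity):
    """Parametric associative step: derive a configuration from parameters."""
    return {'depth': depth, 'porosity': porosity}

def evaluate(config, target=0.6):
    """Analytical step: signed deviation of a notional 'shelter' value
    from a set target criterion."""
    shelter = config['depth'] * (1.0 - config['porosity'])
    return shelter - target

def design_loop(depth, porosity, steps=50, rate=0.5, tolerance=1e-3):
    """Feedback loop: each iteration adjusts the driving parameter so
    that the evaluated deviation shrinks over successive iterations."""
    for _ in range(steps):
        error = evaluate(generate(depth, porosity))
        if abs(error) < tolerance:
            break
        depth -= rate * error    # evaluation fed back into generation
    return depth, porosity

final_depth, final_porosity = design_loop(0.2, 0.5)
```

Each pass through the loop is one design iteration; the set criteria are checked at every step, in the spirit of the recursive process diagrammed in Fig. 12.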
The methodological set-up in performance-oriented design can in part be described
as a set of interrelated design techniques organised into an operational matrix and
with various embedded feedback loops. [Fig. 12, p. 78]
Fig. 12 Process diagram of a recursive design process geared towards performance-oriented
architecture. Diagram by the author.
If the process commences based on considerations pertaining to the material domain,
this often involves preliminary basic research based on form-finding experiments that
deploy material self-organisation in response to extrinsic influences. In so doing the
exchange between material and environment can inform the design process. This
form-finding approach is grounded in the works of Antoni Gaudí, Frei Otto, Heinz
Isler and others. However, it also clearly departs from single-criterion form-finding and can be applied to different scales of magnitude or material system hierarchies (see for instance Hensel & Menges 2006, 2008). In its most complex form, multiple-criteria form-finding takes place on multiple nested and interacting hierarchical and scalar levels. Associative computational modelling can be informed by data pertaining to extrinsic
influences. A plethora of research in and development of related computational
processes and methods are currently under development in many places (see for
instance the numerous related papers submitted to dedicated conferences and
symposia such as ACADIA and eCAADe).
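The form-finding tradition referenced here can be illustrated with a minimal computational analogue of the hanging models associated with Gaudí and Otto: a chain of particles relaxed under gravity between fixed supports. The set-up and parameters are generic illustrations, not any specific researcher’s tool:

```python
# Minimal computational form-finding sketch in the hanging-model tradition:
# a chain of particles under gravity, connected to its neighbours and fixed
# at both ends, is relaxed iteratively until it settles into a sagging,
# catenary-like curve. Stiffness, gravity and step counts are illustrative.

def relax_chain(n=11, span=10.0, gravity=-0.05, stiffness=0.5, steps=2000):
    xs = [span * i / (n - 1) for i in range(n)]
    ys = [0.0] * n                           # start from a flat chain
    for _ in range(steps):
        new_ys = ys[:]
        for i in range(1, n - 1):            # interior particles only; ends fixed
            spring = stiffness * (ys[i - 1] + ys[i + 1] - 2 * ys[i])
            new_ys[i] = ys[i] + spring + gravity
        ys = new_ys
    return xs, ys

xs, ys = relax_chain()
low_point = min(ys)   # the chain sags between its fixed supports
```

Multiple-criteria variants would layer further evaluations (structural, environmental, programmatic) onto such a relaxation process rather than letting a single load case settle the form.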
A recent example is the Diploma project by Joakim Hoen undertaken during the Fall
Semester 2011 at the Oslo School of Architecture under the supervision of the
author. The central aim of the project was to develop design strategies and computational methods for a multiple-envelope, non-standard coastal holiday home for Southern Norway that can be customised according to contextual conditions and customer requirements, and mass-fabricated. [Fig. 13, p. 80]
Fig. 13 Design for a non-standard mass-fabricated coastal holiday home for Southern Norway, 2011,
Diploma project Joakim Hoen at the Oslo School of Architecture and Design.
The detailed form of the terrain of a specific site and its particular wind conditions
served as environmental input into the design process and informed the articulation
of the outer screen-type envelope. [Fig. 14, p. 82] The intermediary space between
outer screen-type envelope and the inner climate envelope was dimensioned in
relation to the environmental modulation capacity of the outer envelope. [Fig. 15, p.
83] The latter concerned primarily the dissipation of horizontal loads and reduction of
climate impact on the inner envelope, as well as the deceleration of airflow velocity
from the exterior to the intermediary space to make it useable even during more
severe wind conditions. If thermal conditions are integrated into such research by design projects, it will become possible to extend the time during which such buildings can be free-running, as severe environmental impact can be reduced. Intermediary
space(s) can provide choice to the inhabitants. Such strategies have been in use in
historical buildings as the following analyses show. However, in order to regain and
update the required expertise for such designs research by design efforts and
detailed analysis of existing cases must be coupled and cross-inform one another.
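The dimensioning logic described for the outer screen envelope can be caricatured as a simple usability calculation; the porosity-based wind-reduction model, the comfort limit and the wind record below are assumed placeholders, not the project’s actual environmental analysis:

```python
# Illustrative sketch of relating the wind-modulating capacity of an outer
# screen envelope to the usability of the intermediary space behind it.
# The reduction model (proportional to porosity), the comfort limit and
# the wind record are assumptions for illustration only.

def sheltered_speed(wind_speed, porosity):
    """Approximate wind speed behind a porous screen: a fraction of the
    free wind proportional to screen porosity (placeholder model)."""
    return wind_speed * porosity

def usable(wind_speed, porosity, comfort_limit=5.0):
    """Is the intermediary space usable (below a notional comfort limit,
    in m/s) under the given exterior wind speed?"""
    return sheltered_speed(wind_speed, porosity) <= comfort_limit

# Site wind record (m/s), e.g. prepared from weather-station data:
site_winds = [3.0, 8.0, 12.0, 15.0, 20.0]

# Share of recorded conditions under which the space stays usable,
# compared for a dense and a more open screen:
dense_share = sum(usable(w, 0.3) for w in site_winds) / len(site_winds)
open_share  = sum(usable(w, 0.7) for w in site_winds) / len(site_winds)
```

The comparison makes the design trade-off explicit: a denser screen keeps the intermediary space usable over a larger share of the recorded wind conditions, which in turn informs how that space is dimensioned.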
Fig. 14 Process diagram of weather and terrain data retrieval from national weather stations and terrain
data servers and preparation of the data for the design process, 2011, Diploma project Joakim Hoen at
the Oslo School of Architecture and Design.
Fig. 15 Process diagram of the linked form-generation and environmental analysis process that
generates the outer screen-type envelope of the project, 2011, Diploma project Joakim Hoen at the Oslo
School of Architecture and Design.
3. Towards non-discrete architectures
Performance in architecture entails active agency - the capacity to act in the world -
that is context-specific participation in cultural, social and natural settings and
processes. Accordingly performance-oriented architecture is based on the
understanding that architectures unfold their performative capacity by being
embedded in nested orders of complexity and auxiliary to numerous conditions and
processes: such architectures are essentially non-discrete. This understanding is
aligned with Christopher Alexander’s statement that ‘we ought always really to
design with a number of nested, overlapped form-context boundaries in mind’
(Alexander 1964: 18) and resonates with aspects of Actor Network Theory that
locates agency in non-human entities and systems. (Latour 2005)
Pursuing this hypothesis requires a corresponding reconceptualization of the relation
between architectures and the environments they are set within on a spatial, material
and temporal level, thus problematizing exterior to interior relations, the associated
question of extended threshold conditions and the time-specific interaction with a
dynamic environment.
However, the majority of today’s designs develop in the exact opposite direction:
architectures are almost invariably perceived and designed as discrete objects.
Discreteness implies various kinds and degrees of disconnection from a given
context in order to stand out and arises from a number of predilections of
architectural practice, some of which are related to idiosyncratic or so-called
signature architectures, while others arise from more general trends in normative
practice and the industries involved in the making of the built environment.
It would seem that the notion of non-discreteness is antithetical to architectural
design for as long as it is primarily the objectness of architectures that is of central
importance to architects (and clients alike). This tendency is further enhanced by the
emphasis placed on idiosyncratic expression in highly design-oriented contemporary
architecture, which results in objects that celebrate discreteness as their core
feature. In such cases the emphasis has in recent years almost entirely been placed
on the ‘styling’ of the building envelope a kind of branding by means of shaping,
patterning and ornamentation. The resulting divorce of the logic of the building
envelope from the logic of interior is indeed celebrated by some: ‘The skin is free
from formal and expressive obligations to the interior […]’. (Lavin 2012: 25) Whatever
design process these projects follow, the reality of such schemes is that architects
sculpt the exterior, while the interior frequently consists of unrelated and often
quite normative solutions. Thus the idiosyncratic architectural project has
progressively become one of a sculptural total exterior. Context, however, seems not
to yield any tangible difference in such works. So-called signature architectures are
principally exchangeable, irrespective of differences in location, culture and climate.
Ironically the accelerating multiplication of the idiosyncratic converts it progressively
into a new generic: the spectacular is absorbed back into monotonous normality by
way of incessant replication. Such works bear little projective power relative to a
specific situation or context precisely because of their increasing exchangeability and
literal superficiality.
A more general trend also enhances discreteness in a significant way. Current
approaches in sustainable design that focus predominantly on technical solutions
tend to enhance the division of interior from exterior environments. Great efforts are
invested in the development of more efficient building insulation and technological
regulation of environmental exchange between interior and exterior. In this context it
is easily overlooked that technology-dominated solutions are a rather recent
phenomenon. From the 1960s onwards mechanical-electrical interior climate
modulation redefined the architectural boundary as a quasi-hermetic flattened one
that has progressively abandoned intermediary spaces as architectural means of
environmental provision and potential for adaptive habitation. This development
prompted Kenneth Frampton to diagnose that:
‘Modern building is now so universally conditioned by optimised technology
that the possibility of creating significant urban form has become extremely
limited … Today the practice of architecture seems so increasingly polarised
between, on the one hand, a so-called “high-tech” approach predicated
exclusively upon production and, on the other, the provision of a
“compensatory façade” to cover up the harsh realities of this universal
system’. (Frampton 1983: 17)
While interesting concepts have gradually emerged that operate on notions such as
free-running buildings, ‘adaptive models of thermal comfort’ (i.e. de Dear & Brager
1998), etc., detailed discussions of heterogeneous or gradient conditions are still
lacking. The majority of architects have at this stage not recognised the potential of
such developments or incorporated them into their work. Instead the general trend is
to favour pre-calculated technical solutions that secure swift planning approval.
Moreover, clients generally tend to expect that the maximum available footprint of a
project be defined as a fully climatized interior, minus the necessary thickness of the
climate envelope.
One of the most fundamental consequences of the prevailing dominance of
objectness and discreteness of architecture is that it is thereby locked into the
stringent dialectic of the natural versus the man-made. There is no option for a more
subtle and graded relation within which architecture could extensively participate in a
wide range of interlinked environmental and ecological processes outside of the
narrow bandwidth of technologically facilitated exchanges. Architects are therefore in
need of reconsidering their preoccupation with discreteness.
Yet, some works begin to point in a direction that might be conceptually considered
non-discrete architecture. In reference to the work of Bruno Taut and to his own work, the Japanese architect Kengo Kuma demanded in his book ‘Anti-object’ that
architects must ‘shun the stability, unity and aggregation known as the object’. (Kuma
2008: 120) Kuma elaborated that ‘making architecture into an object means
distinguishing between its inside and outside and erecting a mass called “inside” in
the midst of an “outside” (of which nature is one version)’. (Kuma 2008: 77) The
following four types of works present alternative strategies:
1. The oeuvre of the Brazilian architect Paulo Mendes da Rocha features a series of
projects that engage the ground in the formation of the architecture and thereby
extend the space of the project beyond its actual footprint. Examples include the
Brazilian Pavilion for the Osaka Expo 1970 [Fig. 16, p. 89] and MUBE - the Brazilian
Museum of Sculpture in São Paulo (1988) [Fig. 17, p. 90], both of which constitute in
some way constructed landscapes and are not actually immediately recognizable as
buildings with interior space. The latter is set into the constructed landscape and the
landscaped surface continues in an articulated manner over the ‘burrowed’ interior.
In both cases a visible structure frames the site to a greater or lesser extent. In the case of the Brazilian Pavilion this is a concrete slab forming a canopy, while in the case of
MUBE it is a very large beam. These light shelters captivatingly enhance the notion
of the constructed landscape and emphasize by virtue of a ‘compressed’ space the
horizontal extent of space the project engages. Interestingly MUBE was initially intended to combine a sculpture and an ecology museum and was thus planned from the outset as a landscape with integrated gardens, water-pools, etc., designed by the
Brazilian landscape designer Roberto Burle Marx. This kind of architecture can no
longer be described as a figure-ground relation. It renders space particularized yet continuous, and the architecture non-discrete.
Fig. 16 Brazilian Pavilion, Osaka Expo, Osaka, Japan, 1970, Paulo Mendes da Rocha.
Fig. 17 MUBE - Brazilian Museum of Sculpture, São Paulo, Brazil, 1988, Paulo Mendes da Rocha.
2. Elizabeth Diller and Ricardo Scofidio’s Blur building for the Swiss Expo 2002
located in Yverdon-les-Bains on Lake Neuchâtel was, according to the architects, conceived as an ‘anti-spectacle’ characterized by a building envelope that is entirely dissolved into a technology-generated cloud-shaped mist of water. The project
constitutes a dynamic climatic event that is affected by the climate around it rather
than a rigid material construct. In so doing it corresponds with Reyner Banham’s
notion of the campfire as a nomadic paradigm of spatial organization:
‘Societies who do not build substantial structures tend to group their activities around some central focus – a water hole, a shade tree, a fire, a great teacher – and inhabit a space whose external boundaries are vague, adjustable according to functional need, and rarely regular. The output of heat and light from a campfire is effectively zoned in concentric rings, brightest and hottest close to the fire, coolest and darkest away from it – but at the same
time, the distribution of heat is biased by the wind so that the concentric
zoning is interrupted by other considerations of comfort or need.’ (Banham
1969: 19-20)
The physicality of the building incorporates, and more than other architectures consists of, the microclimate it creates, and in so doing engages an extreme version of non-discreteness.
3. François Roche and Stéphanie Lavaux’s Spidernethewood project in Nîmes,
France (2007), utilizes the dense growing vegetation to mask the massing of the
building together with its elevations, but maintains a spatial labyrinth by employing
nets to constrain the growing natural vegetation. The spatial labyrinth is designed as
a continuum across the interior-exterior threshold. It would seem that François Roche and Stéphanie Lavaux responded to Kengo Kuma’s assertion that ‘if we are to achieve
more open spaces, we must aim for a wilderness rather than a garden’ (Kuma 2008:
112). Yet, although Kuma advocated giving up ‘paths that are determined by their
designers’ (Kuma 2008: 112) the Spidernethewood project clearly features
determined paths, thresholds and spatial continuities. It does so, however, in a
labyrinthine convoluted manner that erases any perception of determined sequence
and suggests instead a maze carved within barely restrained vegetation. And
although the scheme does have clear separations between an inside and an outside, it doubles this separation: the net-restrained interior carved out of the vegetation (which is at the same time its exterior) continues into the carved interior of the barely perceivable actual building mass. [Fig. 18, p. 93; Fig. 19, p. 94; Fig. 20, p. 95]
Fig. 18 Spidernethewood, Nîmes, France, 2007, R&Sie(n): Site-plan showing the project in the context
of the dense natural vegetation. Source: Courtesy of the Architect.
Fig. 19 Spidernethewood, Nîmes, France, 2007, R&Sie(n): Axonometrics showing the nets that
constrain the growth of the natural vegetation. Source: Courtesy of the Architect.
Fig. 20 Spidernethewood, Nîmes, France, 2007, R&Sie(n): various views of the extended threshold
conditions of the project. Source: Courtesy of the Architect.
4. The fourth type of project spreads the boundaries and thresholds that would define the discrete object into a series of layers. This approach coincides with what Jeffrey Kipnis referred to as the box-in-box section (Kipnis 1993: 44), which constitutes degrees of interiority. However, here it is important to distinguish between two different types of projects: those that feature a continuous outer envelope and those that do not.
Examples of the former include, for instance, Jean Nouvel’s unbuilt scheme for the
New National Theatre in Tokyo (1986) [Fig. 21 and 22, p. 97] and Bahram Shirdel’s
unbuilt entry for the Nara Convention Centre competition (1993). The latter include,
for instance, Bernard Tschumi’s Le Fresnoy Art Centre in Tourcoing, France (1991-
97) and Steven Holl’s unbuilt entry to the Palazzo del Cinema Venice competition
(1990) [Fig. 23 and 24]. The first type of project maintains a strong emphasis on the
objectness of the scheme: it is the outer envelope that is perceived from the exterior
that also clearly divides interior from exterior, in spite of the degrees of interiority
experienced in the interior. The second type of project distributes the threshold by
not pursuing a full enclosure with the outer layer: the exterior extends beyond the first
layer of the multiplied envelope. The cited projects nevertheless end up emphasizing
the objectness of the scheme as recognizable signature buildings of the architect.
They point, however, in an interesting direction that implies a further distribution of
the exterior to interior transition by increasing the number of layers that are partially
or fully open with interstitial modulated microclimates that can constitute a varied or
gradient environment.
Fig. 21 New National Theatre Competition Entry, Tokyo, Japan, 1986, Jean Nouvel: elevation showing
the closed envelope of the scheme. Source: Courtesy of the Architect.
Fig. 22 New National Theatre Competition Entry, Tokyo, Japan, 1986, Jean Nouvel: section showing the
box-in-box section of the scheme. Source: Courtesy of the Architect.
Fig. 23 Palazzo del Cinema Competition Entry, Venice, Italy, 1990, Steven Holl: competition model.
Source: Courtesy of the Architect.
Fig. 24 Palazzo del Cinema Competition Entry, Venice, Italy, 1990, Steven Holl: plans and sections
showing the box-in-box sectional arrangement and the articulation of the perimeter threshold. Source:
Courtesy of the Architect.
From these examples it is possible to extract a first set of principles towards an
intensively embedded non-discrete architecture:
[i] Architectures can be embedded in a continuous landscape with gradual transitions from exterior to interior. Through a committed engagement with landscape, architectures can be embedded in lithospheric (pedospheric), hydrospheric and biotic processes.
[ii] Architectures are always already participating in atmospheric processes and the production of heterogeneous microclimates. However, the interaction between architectures and (local) climate can be strategized in a much more complex and nuanced manner.
[iii] Expanding upon points [i] and [ii], architectures can participate in the production of a dynamic continuous space and environment that consists of and / or provides for local ecosystems.
[iv] Architectures can provide distributed thresholds that articulate heterogeneous
spatial and environmental conditions to make versatile provisions for habitation and
ecological processes.
As these examples show, the pursuit of non-discrete architectures requires no denial of formal concerns; yet such architectures cannot thrive on exclusively formalistic concerns if they are to be designed in consideration of their ‘participation in authored and un-authored conditions’ (Leatherbarrow 2011).
Therefore it is of interest to examine the notion of non-discrete architectures relative
to the object / subject relation. Architectures can engage the subject above and
beyond practical purposes and can accomplish this by emphasis on being
outstanding (spectacular) and/or by emphasis on unfamiliarity. In this context
Umberto Eco’s seminal ‘The Open Work’ (Eco 1989 [1962]) is of interest, in which he
elaborated a kind of work of art that invests part of the action in the spectator. Such
‘open work’ or ‘work in movement’, as Eco called it, is characterised by a deliberate
ambiguity of meaning and seeks to avoid conventional forms of expression and
prescribed interpretation. According to Eco, ‘open works’ must leave the
arrangement of some of their constituents to the public or to chance, hence giving
these works a ‘field of possible orders’ rather than a single fixed one. The subject can
move freely within this field of possibilities. At the same time, Eco points out that this
does not imply a comprehensive laissez-faire and amorphousness. Instead, there
needs to exist a guiding directive from the designer that structures the field of
possibilities in some way for the subject. Eco elaborated that:
‘(1) “open” works, insofar as they are in movement, are characterised by the
invitation to make the work together with the author and that (2) on a wider
level (as a subgenus in the species “work in movement”) there exist works
which, though organically completed, are “open” to a continuous generation
of internal relations which the addressee must uncover and select in his act of
perceiving the totality of incoming stimuli. (3) Every work of art, even though it
is produced by following an explicit or implicit poetics of necessity, is
effectively open to a virtually unlimited range of possible readings, each of
which causes the work to acquire new vitality in terms of particular taste, or
perspective, or personal performance.’ (Eco 1989 [1962]: 21)
Open works are thus predicated on the active agency of the subject. The question is how generative ambiguity can be devised. The author of a work could, for instance, accomplish ambiguity by saturating a work with meaning or by foregrounding displacements of meaning. This bears, firstly, the problem of the cultural specificity of meaning, which limits experiences of this kind to preferred subjects, and secondly, if Kipnis’s aforementioned critique of post-modern collage stands, such ambiguity empties itself out through repetition and the increasing replacement of the dominant frame upon which such collages rely. A second option is the reduction of meaning through forms of abstraction, which might also be culturally specific. If, however, as a result the subject needs to actively engage the built environment in order to discover, through curious participation, actual and latent provisions or new potentials, an interesting situation arises. (Hensel 2003)
Jeffrey Kipnis’s notions of ‘blankness’ and ‘pointing’ in architecture are of interest in
relation to the concept of open works. According to Kipnis blankness implies ‘the
suppression of quotation or reference through the erasure of decoration and
ornament to include canonic form and type. By avoiding formal or figurative
reference, architecture can engage in unexpected formal and semiotic affiliations
without entering into fixed alignments’ (Kipnis 1993: 43). Pointing implies that
‘architecture must be projective, i.e., it must point to the emergence of new social
arrangements and to the construction of new institutional forms. In order to
accomplish this, the building must have a point, i.e., project a transformation of a
prevailing political context’ (Kipnis 1993: 43). ‘Blankness’ and ‘pointing’ extend Eco’s notion of open works in an interesting way by suggesting that architecture can structure a field of possibility and, in so doing, can point towards ‘new social arrangements’ and ‘institutional form’. However, blankness, too, becomes canonical form through repetition. Thus the problem may be restated not as one of blankness versus meaning, but instead as one of the articulation of division, or, more specifically, of the seams between elements and the thresholds between spaces.
The aforementioned Brazilian Pavilion, designed by Paulo Mendes da Rocha for the Osaka World Expo in 1970, is worth re-examining in this context. The project constitutes a constructed undulating landscape roofed over by a canopy that mirrors the undulation of the ground. Typical for da Rocha is the construction of
abstract landform architecture that frames and condenses the expanse of exterior
space, and particularises space locally. A small ramp leads to the required utilities
below ground, the only actual interior space of the project. Openings in the canopy
enabled shifting bands of sunlight to animate the constructed landscape surface. The
local particularisation of an otherwise continuous space by way of engaging the
spatial and material articulation of the project and its interaction with the environment
can thus render a project non-discrete. However, attention must also be placed on
the way in which the project meets its context. In the case of the Brazilian Pavilion this entails the articulation of the surface of the constructed landscape in relation to that beyond it.
For the sake of the argument we may consider three different versions of Paulo
Mendes da Rocha’s Brazilian pavilion: the first version clearly demarcates the extent
of the constructed landscape by changing materiality and coloration of the ground
surface at the border of the plot, the second version continues the surface material
and coloration of the context throughout the project, and the third version features a
gradual change in materiality and coloration of the ground surface from the plot
border towards the particularised space. These three versions would be perceived in
quite different ways. The first version emphasizes the discreteness and objectness of
the scheme in spite of the existence of a continuous space, based on perception of a
hard edge of the unfamiliar object placed against a familiar context in a typical
collage manner. The second version operates on the perception of some kind of
unfamiliar contraction of the familiar context. The third version operates on the
perception of a gradual movement from the familiar to the unfamiliar. Each of these
perceptions would be fundamentally different. The first adheres to the logic of
discreteness, while the second and the third erode this perception and offer two
distinct versions of non-discrete architecture, while at the same time maintaining the
key element of the unfamiliar.
Consequently the question arises whether all types of buildings can or should be embedded, non-discrete architectures. Embeddedness does not per se imply that
buildings should no longer feature discrete spaces of any description. Architectures
may feature box-in-box-sections in which discrete pockets of space can be
embedded within more continuous space, such as in Steven Holl’s Palazzo del
Cinema or R&Sie(n)’s Spidernethewood project. This indicates that hard thresholds
can very well exist in schemes that are at the same time seamlessly embedded
within their context and that feature gradients or extended thresholds. Law courts or
prisons, for instance, can feature an extended threshold as an interface with the
surrounding context, while at the same time featuring clearly demarcated security
zones, cells, etc. Perhaps for some institutional buildings the critical rethinking of the
perimeter threshold will yield some new thoughts relative to their institutional
purpose, pattern of use and provisions. Perhaps it may be argued that monuments
need to stand out and be more discrete than other constructions and architectures.
This depends, however, entirely on the approach to the notion of monument. In the
work of Kengo Kuma there are clear attempts to articulate embedded monuments
that draw away from an articulation as a discrete object (see for instance the scheme
for the Electronic Memorial Space in Takasaki, Japan, 1997-98). Recent land-art
inspired projects and atmospheric works such as Olafur Eliasson’s recent projects or
Diller and Scofidio’s Blur building begin to erode the prevailing object-oriented
perspective as to what outstanding and discrete might mean. Such projects show that identity does not necessarily depend on discreteness, but perhaps more on experience that can be organised across a gradient. However, one must
proceed carefully when examining the implications of applying the concept of non-
discreteness to different types of architectures.
4. Towards non-anthropocentric architectures
‘The altered environmental conditions of today can no longer be mastered with the architectural resources of the past … The relationship between
biology and building is now in need of clarification due to real and practical
exigencies. The problem of environment has never before been such a threat
to existence. In effect, it is a biological problem.’ (Otto, 1971: 7)
‘Overpopulation, the destruction of the environment, and the malaise of the
inner cities cannot be solved by technological advances, nor by literature or
history, but ultimately only by measures that are based on an understanding
of the biological roots of these problems.’ (Mayr, 1997: xix)
For a number of decades numerous politicians, biologists and architects have stated
the need for a biological approach to some of the most pressing problems arising
from the extensive impact of humankind on the natural environment. The ‘Report of
the World Commission on Environment and Development: Our Common Future’,
commonly known as the ‘Brundtland report’, elaborated, within a broad range of sustainability concerns, the fundamental necessity of preserving the abiotic and biotic environment and its associated processes:
‘Important are the vital life processes carried out by nature, including
stabilization of climate, protection of watersheds and soil, preservation of
nurseries and breeding grounds, and so on. Conserving these processes
cannot be divorced from conserving the individual species within natural
ecosystems. Managing species and ecosystems together is clearly the most
rational way to approach the problem.’ (Report of the World Commission on
Environment and Development: Our Common Future 1987)
Yet, how are natural processes and ecosystems to be maintained? And what kind of
disciplinary affiliation between architecture and biology is needed in order to tackle
the complexity of the problems arising from the interaction between the human-made
and the natural environment? To confront this problematic it is now necessary to
forego exclusively formalistic, metaphoric and analogical modes of relating the two
disciplines and their subject matter and, instead, to focus all efforts on the question
as to how the built environment can be in the service of the natural environment. For
this a much more direct approach is needed that focuses on how architectures can progressively be thought of as being embedded in natural processes and how they may provide for ecosystems by way of mediating the interaction between the abiotic and
biotic environment. This requires clarification and integration of core concepts in
architecture and biology so as to inform the integrated spatial and material
organisation of architecture and its interaction with the physical environments
towards the production of heterogeneous provisions that can help sustain
ecosystems and biodiversity.
Engaging architecture in the service of the natural environment concerns questions of ecology, a sub-discipline of biology set forth by the German biologist Ernst Haeckel (Haeckel 1866) that concerns the relationship between living organisms and their
environment. Today ecology comprises studies across a wide range of spatial and
temporal scales concerning life processes and adaptation, distribution and
abundance of organisms, the relation of material and energetic processes to living
communities, function and development of ecosystems, and the role of biodiversity in
ecosystem functioning.
The sum of all ecosystems constitutes the biosphere, a concept devised by the geologist Eduard Suess, who also coined the notions of hydrosphere and lithosphere. (Suess 1883) Dickinson and Murphy elaborated that the biosphere ‘is located at the
junction of the three terrestrial “spheres” or shells around the planet: the atmosphere,
hydrosphere and lithosphere’, yet ‘the dynamic nature of the physical environment is
not the only reason why ecosystems are dynamic. Organisms must react to the
challenges and opportunities of the physical environment as well as interact with
other organisms’. (Dickinson & Murphy 2007: 6-7) An ecosystem is generally defined as a ‘community of living organisms together with the physical processes that occur within an environment’. (Pullin 2002) Ecosystems constitute hierarchical systems of perpetually interacting agents that accumulate into complex integrated wholes, which
are characterized by emergent non-reducible properties. Ecosystems generate
biophysical feedback between living and non-living domains and are sustained by
biodiversity. The latter indicates the extent of genetic, taxonomic, and ecological
diversity over all spatial and temporal scales. (Harper & Hawksworth 1995) Naeem,
Loreau and Inchausti pointed out that:
‘Through the collective metabolic and growth activities of its trillions of
organisms, Earth’s biota moves hundreds of thousands of tons of elements
and compounds between the hydrosphere, atmosphere, and lithosphere
every year. It is this biogeochemical activity that determines soil fertility, air
and water quality, and the habitability of ecosystems, biomes and Earth itself … While the functional significance of Earth’s biota to ecosystem or Earth-system functioning is well established, the significance of Earth’s biodiversity has remained unknown until today.’ (Naeem, Loreau & Inchausti 2002: 3)
While research into the significance of biodiversity in ecosystem functioning
continues, numerous industries, including pharmaceutical and agrochemical
industries, agriculture, forestry and so on, have begun to incorporate biodiversity
considerations into their operation. (See for instance: Wrigley et al. 2000; McNeely & Scherr 2002; Bunnell & Dunsworth 2009)
Interlinked with the notion of biodiversity is geodiversity, which concerns the diversity
of earth materials, forms and processes that constitute and shape the abiotic
environment. This involves water, soil, sediments and minerals, geomorphology and
geological processes. It is generally thought that geology exerts a strong influence
on biodiversity. (Gray 2004) This comprises also the pedospheric regime, which
concerns soil and soil formation. The latter is of particular importance for the biotic
linkages and interactions between aboveground and belowground communities
(Bardgett & Wardle 2010). Moreover, the interaction between the biotic and abiotic
environment entails biogeochemical cycles (carbon, nitrogen, oxygen, phosphorus,
sulphur and water cycles) that need to be considered.
Today ecologists and experts in environmental studies work with different kinds of models to simulate different aspects of ecosystems, abiotic processes, or the linkages between the two. Often, however, significant underlying concepts are
used in an ambiguous manner and require clarification. Zoologist Michael Kearney
pointed out that ‘fundamental ecological concepts including “habitat”, “environment”
and “niche” lack rigorous and consistent definitions (Haskell 1940, Whittaker et al.
1973)’. (Kearney 2006: 186)
Habitat is typically defined as ‘the locality, site and particular type of local
environment occupied by an organism’. (Lincoln, Boxshall, Clark, 1998: 132)
Environment is typically defined as ‘the complex of biotic, climatic, edaphic
(pertaining to, or influenced by, the nature of the soil) and other conditions, which
comprise the immediate habitat of an organism; the physical, chemical and biological
surroundings of an organism at any given time’. (Lincoln, Boxshall, Clark, 1998: 101)
The concept of the niche is generally defined as ‘The ecological role of a species in a
community; conceptualised as the multidimensional space, of which the coordinates
are the various parameters representing the condition of existence of the species;
sometimes used loosely as an equivalent of microhabitat in the sense of the physical
space occupied by a species’. (Lincoln, Boxshall, Clark, 1998: 201) Moreover, it is
necessary to distinguish between the notions of fundamental niche and realised
niche. A fundamental niche is defined as ‘The entire multidimensional space that
represents the total range of conditions within which an organism can function and
which it could occupy in the absence of competitors or other interacting species’. (Lincoln, Boxshall, Clark, 1998: 121) A realised niche is defined as ‘that part of the
fundamental niche actually occupied by a species in the presence of competitive or
interactive species’. (Lincoln, Boxshall, Clark, 1998: 256) These definitions describe
a successively tighter defined space of interaction and can be useful in the
communication between biologists and architects in defining what kind of parameters
are concerned at various levels of specificity, as well as developing integrated
modelling towards a built environment in the service of the natural environment.
In order to gain an understanding as to how the built environment might interact with
the natural environment, it is of use to consider current approaches that are specific
to other types of human-dominated environments and their relation to the natural
environment. One interesting field in this context is contemporary agroecosystems
management, a field in which experts have begun to hypothesise ‘that under
conditions of global change, complex agricultural systems are more dependable in
production and more sustainable in terms of resource conservation than simple
ones’. (Vandermeer et al. 1998: 4) In this context ‘complex systems’ refer to multi-
species agroecosystems, while ‘simple systems’ refer to those tending towards
monoculture. In relation to complex systems Vandermeer et al. distinguish between [i] planned biodiversity, [ii] associated biodiversity and [iii] associated components, and elaborated that:
‘The planned biodiversity will give rise to an associated biodiversity, the host
of weeds and beneficial plants that arrive independently of the farmer’s plans,
the soil flora and fauna that may respond to particular crops planted, the
myriad arthropods that arrive on the farm, etc. Finally, the extra-planned
organic resources, plus the planned biodiversity plus the associated
biodiversity combine in a complicated fashion to produce the ultimate
agroecosystems function, its productivity and sustainability.’ (Vandermeer et
al. 1998: 6)
What is of interest for architecture is the immediacy of planned and associated
biodiversity in agroecological contexts where ‘multi-species cultivation clearly
necessitates biodiversity management on the plot-scale. It also, however, requires
consideration of its biogeographical context within the surrounding area, requiring
recognition of processes operating on various scales.’ (Vandermeer et al. 1998: 6)
Developments in this field can therefore deliver ways of managing the correlation between planned and evolving aspects of ecosystems and their associated biodiversity, and can also deliver general models for integrating the various scales involved in the interaction between human influence and natural processes. A useful inroad to
the formulation of a non-anthropocentric architecture does therefore not necessarily
involve highly detailed provisions for the entire scope of biodiversity of a given
ecosystem, but, instead, the integration of provisions for planned biodiversity that can
help sustain an associated biodiversity and, in turn, sustain the freely evolving one.
This approach may help in dealing with another complex problem. Commonly
ecosystem conservationists consider ecosystems to be in a state of dynamic
equilibrium, which tends to lead to the protection of species and ecosystems in the
condition in which they were found and described. The German zoologist, ecologist
and evolutionary scientist Josef Helmut Reichholf criticised this approach as not
sufficiently incorporating evolutionary processes and the possibility of natural
extinction, as maintained equilibrium entails rigid maintenance of the found condition,
and proposed instead the notion of stable disequilibria (Reichholf 2008).
Another considerable interdisciplinary effort of interest is the field of Urban Ecology,
which focuses on:
‘The study of ecosystems that include humans living in cities and urbanising
landscapes. It is an emerging, interdisciplinary field that aims to understand
how human and ecological processes can co-exist in human-dominated
systems and help societies with their efforts to become more sustainable.
Because of its interdisciplinary nature and unique focus on humans and
natural systems, the term “urban ecology” has been used variously to
describe the study of humans in cities, of nature in cities, and the coupled
relationship between humans and nature. Each of these areas is contributing
to our understanding of urban ecosystems and must be understood to fully
grasp the science of Urban Ecology.’ (Marzluff et al. 2008)
For the field of urban ecology the steep task is to come to an understanding of the
complexities involved at larger scales and to study the impact of mosaics of
heterogeneous and discontinuous spaces from the periphery to the centre of cities.
The involved conceptualisation, analysis, evaluation and incorporation of the
understandings that arise from urban ecology research into urban design can either
develop along an anthropocentric trajectory or along a non-anthropocentric one.
While the former seems more likely the latter might constitute a powerful alternative.
Experiments on an urban and regional scale are risky in that the result can be
disastrous on a large scale, while experimentation towards multi-species
environments on the scale of one or a few buildings may equally go wrong but could
contain the consequences of potentially negative outcomes. Experiments geared
towards non-anthropocentric architectures could therefore be locally conducted on a
building scale and carefully increased in size for as long as the observed results are
deemed positive. Evidently, such experiments would entail an intensely
interdisciplinary approach that necessitates the involvement of architects,
climatologists and micro-climatologists, geologists, botanists, zoologists, ecologists,
and also urban ecologists and agroecosystems experts. The concept of non-discrete
architectures might provide the extended threshold or interface that is required to
negotiate multi-species provisions, including of course those for human inhabitants.
5. Traits of performance-oriented architecture
The concept of non-discrete embedded architectures raises questions as to which
conditions architectures are to engage, how architectures should partake
in specific settings, and how to instrumentalise this understanding. This involves
questions of context-specificity and response and ties profoundly into different kinds
of questions of sustainability. Paul Reitan, Professor Emeritus of Geological
Sciences at the University of Buffalo, elaborated that:
‘Successfully sustainable human societies must be as attuned as possible
to their local and regional environments, their geo-ecological support
systems; lifestyles must be adapted to the ecosystems in which societies live
and which support them with cultures, practices, economic systems, and
governing policies each adjusted to fit their area, not a single dominant
culture or way of living spread across the globe. This would be a world of
multiple, diverse societies with their numbers also adjusted to what regional
geo-ecological support systems can sustain.’ (Reitan 2005: 77)
For architecture this implies the significance of complex context-specific relations
across a range of scales and in particular geo-ecological conditions and in
consequence local difference in architectural response. The emphasis on context-
specificity does not, however, simply imply a return to Kenneth Frampton’s ‘Critical
Regionalism’ (Frampton 1983), as Frampton’s approach does not necessarily
encourage non-discrete or non-anthropocentric architectures. Sanford Kwinter,
Professor of Theory and Criticism at the Harvard University Graduate School of
Design, offered a useful conceptual repositioning regarding what architectures are
and what they do:
‘Thus the object, be it a building, a compound site, or an entire urban matrix,
insofar as such unities continue to exist at all as functional terms, would be
defined now not by how it appears, but rather by practices: those it partakes
of and those that take place within it. On this reconception, the unitariness of
the object would necessarily vanish deflected not into a single but doubly
articulated field (relations, by definition, never correspond with objects). What
comes to the fore are, on the one hand, those relations that are smaller than
the object, that saturate it and compose it, the “micro-architectures” for lack of
a happier term, and on the other, those relations or systems that are greater
or more extensive than the object, that comprehend or envelope it, those
“macro-architectures” of which the “object”, or the level of organisation
corresponding to the object is but a relay member or part.’ (Kwinter 2001: 14)
While Reitan’s proposition indicates what kind of relations might be involved, the
question as to what kind of micro- and macro-architectures might need to be
considered in each case and to which extent requires careful attention and will
benefit from a systems approach to define the extent of relevant interactions to be
included in architectural design considerations and processes. This becomes even
more profound when considering the immensity of the transformation of Earth’s
biosphere due to human intervention: experts now posit that we have entered a new
geological age dominated by human intervention. Nobel-prize laureate and chemist
Paul Crutzen argued that our geological time period should be termed
‘Anthropocene’, as ‘human activity is now affecting the Earth so profoundly that we
are entering into a new epoch’. (Holmes 2009: 32) This view alerts to the fact that
human actions are not simply adding up, but might have passed a critical threshold.
When consequences are so far-reaching, the problem is what to include in
considerations that inform an appropriate architectural response to this level of
complexity. It is a classical problematic known to systems-thinkers as the ‘boundary
problem’. It involves what is included in or excluded from architectural design
considerations and requires knowledge also of those aspects that are to be
excluded. Where the boundary is drawn is of key significance, as this will influence
how a given problem is understood and dealt with. (Churchman 1970, Midgley 2000)
As Werner Ulrich pointed out ‘the meaning and the validity of professional
propositions always depend on boundary judgments as to what “facts” (observations)
and “norms” (valuation standards) are to be considered relevant and what others are
to be left out or considered less important’. (Ulrich 2002: 41) For the task at hand this
implies the involvement of an interdisciplinary scope of experts who continually
redefine boundaries according to circumstances. A valuable clue as to how to address the
problem of complex relations vis-à-vis questions of sustainability was offered by Pim
Martens, professor and chair of Sustainable Development at Maastricht University,
who suggested that:
‘A new research paradigm is needed that is better able to reflect the
complexity and the multidimensional character of sustainable development.
The new paradigm, referred to as sustainability science, must be able to
encompass different magnitudes of scales (of time, space, and function),
multiple balances (dynamics), multiple actors (interests) and multiple failures
(systemic faults).’ (Martens 2006: 38)
In so doing he calls for a new research paradigm for ‘Sustainability Science’. Clark
explained that ‘fundamental properties of the complex, adaptive human-environment
systems are the heart of sustainability science.’ (Clark 2007) Performance-
oriented architecture can benefit from a disciplinary affiliation with sustainability
science in that human-environment systems are shared core interests.
One of the core areas of architectural production is the combined spatial and material
organisation of a project by which architecture makes provisions. Capacity for active
agency on a range of scales is inherent to the domains of spatial and material
organisation. As these two domains are interdependent it is of use to view them as a
combined spatial and material organisation complex. This complex interacts with the
local environment: it receives stimuli from the environment and modulates it in turn.
The locally modulated environment is an integral part of the spatial organisation of
architectures and can have a supporting or diminishing effect on local eco-systems
and cultural patterns. Therefore it seems suitable to state the four domains of agency
as: [i] local communities (biotic factors and interactions), [ii] the local physical
environment (abiotic processes and interactions), and the [iii] spatial and [iv] material
organisation complex. [Fig. 25, p. 116] It is then necessary to define the different
traits and scale-ranges in which the spatial and material organisation complex can be
thought of and instrumentalised. The following part focuses on elaborating specific
traits of performance-oriented architecture, progressing upwards in scale.
Fig. 25 The four domains of agency of the proposed approach to performance-oriented architecture: [i]
the local physical environment, [ii] local communities, [iii and iv] the spatial and material organisation
complex incorporating the proposed specific traits of performance-oriented architecture. Diagram by the
author.
[Diagram elements: the Spatial and Material Organisation Complex; Local Communities (biotic factors
and interactions); the Local Physical Environment (abiotic factors and interactions); mediated and
unmediated feedback between these domains; the traits Material Performance, The Active Boundary,
The Articulated Envelope, The Extended Threshold, Multiple Grounds, 1st Degree Auxiliarity, 2nd
Degree Auxiliarity and Settlement Pattern and Processes; all within the Biophysical Environment.]
5.1 Reconciling dialectics
The notion of performance-oriented design raises the question of causality and
control. Embedded architectures will be entangled in complex multi-level interactions
that make it difficult to decide what kind of multi-way cause-and-effect relations
are to be taken into consideration, while at the same time allowing for contingent
influences. Typically the complexity of a given problem is ‘reduced’ for the sake of
understanding, intelligibility and instrumentalisation. Architecture too employs
reductionism for such purposes. However, the often-resulting artificial dichotomies
tend to stubbornly persist in separating architectural discourses into oppositional
camps.
One unquestionably iconic artificial dichotomy in architecture divides form from
function. The debate and disagreement on their relation has divided architects since
the 1930s (Blundell Jones 2003) and in some ways even before, as can be gleaned
from the different positions that were informed by different takes in comparative
anatomy and morphology in biology. The form-function dialectic constitutes a
profound problem for an integrated take on performance-oriented architecture as it
continues to divide architects into factions with either predominately formal or
functional predilections. What makes matters additionally difficult is the frequent
general conflation of the notions of function, purpose, use and program and the
notions of function and performance. It is therefore necessary to reconcile the form
and function dialectic and to provide an unambiguous definition of the related notions.
To solve this problem it is proposed to shift the definition of function from the building
scale to materials, material systems and building elements, so as to describe the way
in which these fulfil their tasks and affect conditions. The notion of program is shifted
away from sets of activities assigned to spaces towards the participation of
architectures in systems to which they are auxiliary. This relates to the
concept of 1st degree Auxiliarity as elaborated further below. The notion of space use
is maintained as the relation between spaces and activities, but with emphasis on the
fact that architecture can only make provisions towards habitation and space use. In
consequence the pursuit of single space use is relinquished. According to these
definitions it is possible to state function, program and provision for habitation, as
some of the particular subsets of performance.
5.2 The local physical environment - local climate and microclimate
The biophysical environment encompasses the natural and the built environment.
It combines both biotic and abiotic components. Ecosystems generate biophysical
feedback between living and non-living domains. Architecture in the service of the
natural environment, and, more specifically, local ecosystems, needs to engage the
local physical environment. For the argument pursued in the following chapters it is
of use to elaborate some of the basic aspects of climate and microclimate to locate
spatial and temporal scales of magnitude of interaction with the spatial and material
organisation complex.
Architecture interacts with and affects various spatial and temporal scales of
atmospheric processes. However, of the very large range of scales of atmospheric
phenomena only a specific portion is of immediate relevance. As Tim R. Oke pointed
out, the influence of the surface of the Earth is on a spatial scale limited to the
troposphere, the lowest 10 kilometres of the atmosphere, while on the timescale of
24 hours this influence is limited to the atmospheric boundary layer, which can vary
in height between 100 meters and 2000 meters depending on surface generated
mixing. (Oke 1987: 3) Oke elaborated the related climatic strata upwards in scale: [i]
the laminar boundary layer, ‘which is in direct contact with the surface(s), the non-
turbulent layer, at most a few millimetres thick, that adheres to all surfaces and
establishes a buffer between the surface and the more freely diffusive environment
above’; [ii] the roughness layer that extends above the surface and objects about one
to three times their height or spacing and that is ‘highly irregular being strongly
affected by the nature of the individual roughness features’; [iii] the turbulent surface
layer, up to 50 metres high, that features ‘intense small-scale turbulence generated
by the surface roughness and convection’ (Oke 1987: 6). Moreover, the vertical
extent of these strata is dynamically affected by the atmospheric boundary layer that
is characterised by turbulence ‘generated by frictional drag as the atmosphere
moves across the rough and rigid surface of the Earth, and the ‘bubbling-up’ of air
parcels from the heated surface’ (Oke 1987: 4-5).
A further useful distinction concerns micro- and macroclimate. Rosenberg, Blad and
Verma elaborated as follows:
‘Microclimate is the climate near the ground, that is, the climate in which plants
and animals live. It differs from the macroclimate, which prevails above the first
few meters over the ground, primarily in the rate at which changes occur with
elevation and with time. Whether the surface is bare or vegetated, the greatest
diurnal range in temperature experienced at any level occurs there. Temperature
changes drastically in the first few tens of millimetres from the surface into the
soil or into the air. Changes in humidity with elevation are greatest near the
surface. Very large quantities of energy are exchanged at the surface in the
processes of evaporation and condensation. Wind speed decreases markedly as
the surface is approached and its momentum is transferred to it. Thus it is the
great range in environmental conditions near the surface and the rate of these
changes with time and elevation that makes the microclimate so different from
the climate just a few metres above, where atmospheric mixing processes are
much more active and the climate is both more moderate and more stable.’
(Rosenberg, Blad & Verma 1983: 1)
Leaving very tall buildings aside, it is therefore the laminar boundary layer, the
roughness layer and the turbulent surface layer, as well as the micro-climate that are
of immediate relevance for the bulk of architecture’s interaction with the atmosphere.
5.3 The role of material performance
Materials are defined by their specific composition and structure from which their
properties arise. While some material properties are relatively constant others vary
due to their interaction with independent variables, such as temperature, ambient
humidity, etc. Varying material properties are thus ‘indicative of the energy stimuli
that every material must respond to’. (Addington and Schodek 2005: 39) Material
properties in interaction with the specific environment within which a material is
located yield material behaviour, such as dimensional variability due to, for instance,
temperature changes or changes in ambient humidity. In turn material behaviour can
also affect its surroundings. Materials can absorb or reflect thermal energy and give
stored thermal energy off to the environment. Hygroscopic materials such as wood
can absorb moisture from the environment or yield it back, ‘thereby attaining a
moisture-content which is in equilibrium with the water vapour pressure of the
surrounding atmosphere’. (Dinwoodie 2000: 49) Material behaviour can be put to task
and constitutes the potential of material performance. [Fig. 26]
Fig. 26 Material performance capacity arises from its properties in interaction with extrinsic influences.
Diagram by the author.
[Diagram layers, top to bottom: Material Assembly Performance Capacity; Material Assembly Logic;
Material Performance Capacity; Material Behaviour; Material Properties; Material Composition and
Structure.]
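The hygroscopic exchange described above can be sketched as a simple first-order relaxation of wood moisture content towards the equilibrium value set by the surrounding atmosphere. The following sketch is illustrative only; the equilibrium value and time constant are assumed placeholder figures, as real sorption behaviour depends on species, density and element geometry.

```python
import math

def moisture_content(t_hours, mc_initial, mc_equilibrium, tau_hours):
    """First-order sketch of hygroscopic exchange: the moisture content
    of a wood element relaxes exponentially towards the equilibrium
    moisture content set by the water vapour pressure of the surrounding
    air (cf. Dinwoodie 2000). tau_hours is an illustrative time constant;
    real values depend on species, density and element thickness."""
    return mc_equilibrium + (mc_initial - mc_equilibrium) * math.exp(-t_hours / tau_hours)

# Illustrative run: a board at 18% moisture content drying towards an
# assumed 12% equilibrium with an assumed time constant of 24 hours.
for t in (0, 24, 72):
    print(round(moisture_content(t, 18.0, 12.0, 24.0), 2))
```

The sketch shows only the direction of the exchange: the gap between the current and the equilibrium moisture content drives the exchange, and the exchange slows as equilibrium is approached.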
While variable material properties and behaviour present an important opportunity for
performance-oriented architecture, there exists a profound obstruction in
current practice and industry. With the increase in standardisation, tight tolerances
and stringent liability in the building industry, explicit and variable material behaviour
and associated variable dimensionality are generally deemed negative characteristics.
Constrained by stringent standards, architects principally seek to prevent or neutralise
the effects of variable material behaviour already at the scale of the chosen material,
building component or assembly method, so as to avoid cumulative effects of material
behaviour. Divergence from defined standards and pre-calculated solutions generally
requires costly tests and proof for the architect and a lengthy process towards
permission of use. Only a few practices can generally afford this course of action, and
more often than not such efforts focus on narrowly framed applied research rather
than the extensive basic research that is required for a critical repositioning of the
prevailing approach to variable material behaviour and dimensionality. And so the
question arises in which context the necessary depth and breadth of inquiry and
empirical knowledge production could take place. It would seem that such activities
could currently best be undertaken in dedicated architectural research centres.
However, funding and resourcing largely basic research in the current research grant
environment is also not an easy task. One way forward is sustained research by
design that can bridge between basic research undertakings and testing by way of
full-scale experiments within the target context.
To illustrate the potential of material behaviour it is useful to discuss a specific one at
some level of detail. Wood, as a biologically grown material, is one of the new
hallmarks and clichés of material sustainability in architecture. However, essential
characteristics of the material are deemed undesirable: its internal differentiation,
which results from growth-related variables, and its resultant behaviour. If one
considers, for instance, how wood may be utilised with regards to its hygroscopic
behaviour, one also needs to take into consideration all properties and
characteristics that affect its response to moisture disequilibria, i.e. its species-
specific density, anisotropy, porosity and cellular differentiation. It is, however, the
internal differentiation of wood that comes into conflict with the prevailing
considerations concerning standardisation, tolerances and liability. This explains why
the preferred mode of working with wood is moving in the direction of
homogenising its behaviour by cutting or chipping it into smaller elements that
are laminated together.
When pursuing an alternative path it is necessary to consider how wood comes to be
the way it is. The internal structure of wood depends on the circumstances under
which a tree from which the wood is harvested has grown. Various recent
publications give evidence of increased interest in the internal differentiation of wood
in relation to environment (Schweingruber 2007, Zobel & Buijtenen 1989) and the
technical innovation potentials associated with the anisotropic character of wood
(Wagenführ 2008). This implies that considerations regarding environment must be
twofold, first with regards to its impact on the internal differentiation of wood in its
growth phase, and second, the two-way exchange between the harvested wood in a
designed assembly and the environment in which it is placed. The numerous
variables related to the growth process can thus become a matter of design
consideration, as can the resultant material behaviour based on its differentiation
and heterogeneity.
Already at a material scale it is of use to employ a systems approach to adequately
meet the complexities involved in a performance-oriented approach to a material. If
the properties of a material and its related behaviour are foregrounded this will have
repercussions along the entire supply and demand chain. In order to develop a
suitable approach the Research Centre for Architecture and Tectonics at the Oslo
School of Architecture and Design pursues ‘Holistic and Integrated Wood Research’.
This involves the detailed mapping of related existing research and the sustainability
aspects involved in and between specific parts of the supply and demand chain. As
industrial forestation is increasingly in need of emphasising biodiversity instead of
monoculture (Bunnell & Dunsworth 2009) and architects seek a much broader
range of available wood species and products, there exists a promising match of
interests. However, the question arises as to how to reposition the intermediary parts
of the wood industry regarding questions of wood sorting, treatment and machining.
Likewise policy makers will need to rethink the role and extent of existing and future
standards and tolerances in material behaviour. This can only be accomplished in a
concerted effort that involves all stakeholders and involves the detailed mapping of
related knowledge in the still existing traditional use of wood craftsmanship in
architecture and boatbuilding in Norway, in particular regarding detailed knowledge
of wood properties and behaviour and a related detailed logic of wood sorting,
storage, tooling and fabrication.
Any change in the way wood properties and behaviour may be instrumentalised will
involve different timelines. Clearly the change in Norwegian forestry from the current
predominant monocultures of spruce and pine to a biodiverse industrial forestry will
require decades, even centuries. Changes in tooling and machining will take a
number of years. The production of knowledge through research by design
experimentation can, however, commence with immediate effect and may need to
focus on two aspects: [i] the development of reliable data and [ii] the production of
intellectual tools and sensibilities in education to provide architects and craftsmen
with the required knowledge and skills.
Industrial Design master student Linn Tale Haugen [C:8] (Haugen 2010) examined
the seedpod of a Flamboyant Tree (Delonix regia) regarding its material make-up
and resulting self-shaping tendencies induced by hygroscopic behaviour. The
seedpod is characterised by a layering of material with different fibre directions. The
angle of rotation of the fibres in the layers and the thickness of the layers determine
the degree of warping of the two parts of the seedpod as a result of moisture-loss
induced shrinkage. The warping serves the purpose of separating the two parts of
the seedpod and releasing the seeds. Based on this observation Haugen re-
examined lamination rules for form-stable laminates. Timber laminates are generally
composed of an odd number of layers, since this locks the warping directions of the
different layers into a form-stable configuration. As the warping is determined by the
fibre-direction, the specific rotation of the layers is key to accomplishing form-
stability. Likewise, however, this offers the opportunity to devise non-form-stable
laminates that exploit the hygroscopic behaviour of the material. In a laminate with an
even number of layers, the fibre-direction of the various layers can be utilised to warp
the laminate in a controlled way. In addition, wood species and cuts with lesser or
greater hygroscopic behaviour determine the degree of warping. This then delivers
control over the direction and extent of what becomes controlled warping.
Specific single or double curvature of laminates can be attained by way of fibre
direction in the different layers and the related directions of swelling and shrinkage in
moisturizing and drying the wood. It is then no longer necessary to derive such
curved elements by means of machining, such as routing, which results in a large
amount of off-cuts or sawdust, or, alternatively, the costly production of moulds. After
numerous experiments with different types of wood Haugen decided on using beech
veneer due to its elasticity and related ability to warp without cracking. Subsequently
she undertook a large number of experiments to arrive at pre-specified curvatures of
the laminate. This was initially done with continuous layers, that is to say one fibre-
direction per layer, and subsequently with layers consisting of rotated patches to gain
more surface area and more curvature variation in the laminate. The self-shaping
process is to some extent reversible as long as the material remains untreated.
Alternatively the laminate can be fixed in the warped shape by sealing the surface. In
her master dissertation Haugen also demonstrated various product design related
uses, including a screen-wall and a lampshade that respond to changes in the
ambient humidity. [Fig. 27, p. 127; Fig. 28, p. 128; Fig. 29, p. 129]
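The principle of controlled warping in a two-ply laminate can be approximated with Timoshenko's classical bimetal strip formula, reading the differential hygroscopic shrinkage between layers with crossed fibre directions as a differential free strain. The following sketch is illustrative only; the strain and thickness values are assumed for illustration and do not reproduce Haugen's measured data.

```python
def bilayer_curvature(delta_strain, t1, t2, e1, e2):
    """Curvature (1/m) of a two-layer strip under a differential free
    strain delta_strain between the layers, after Timoshenko's classical
    bimetal formula. Here the differential strain stands in for hygroscopic
    shrinkage across versus along the grain in a two-ply veneer laminate;
    t1, t2 are layer thicknesses (m), e1, e2 elastic moduli."""
    m = t1 / t2
    n = e1 / e2
    h = t1 + t2
    return (6.0 * delta_strain * (1.0 + m) ** 2) / (
        h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    )

# Illustrative case: two 1 mm plies of equal stiffness and an assumed
# 1% differential shrinkage. For equal layers the formula reduces to
# curvature = 3 * delta_strain / (2 * total_thickness).
kappa = bilayer_curvature(0.01, 0.001, 0.001, 1.0, 1.0)
print(kappa)  # curvature in 1/m; radius of curvature is 1/kappa
```

The sketch makes the design levers explicit: increasing the differential shrinkage (species, cut, fibre rotation) increases curvature, while increasing laminate thickness reduces it.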
Fig. 27 Experiments with non-form-stable laminates served to accomplish specific double-curvature.
Studio project, 2009, Linn Tale Haugen at the Oslo School of Architecture and Design.
Fig. 28 Systematic experiments with non-form-stable laminates serve to extract reliable empirical data.
Diploma project, 2010, Linn Tale Haugen at the Oslo School of Architecture and Design.
Fig. 29 Non-form-stable laminate screen-wall. Diploma project, 2010, Linn Tale Haugen at the Oslo
School of Architecture and Design.
Master students Wing Yi Hui and Lap Ming Wong [C:8] also utilised the hygroscopic
behaviour of wood but for a different purpose. Their goal was to work towards the
structural use of 0.75 mm thin pine veneer in a structural web. An initial series of
experiments served to establish the relationship between cut, fibre-direction,
moisture content and the extent to which the rectangular veneer elements could be
bent and twisted without cracking. In a following series of experiments the elements
were configured into assemblies, with each element bent and twisted at high
moisture content during the assembly process. They then examined where cracks
occurred in the drying process due to the way the elements shrank in relation to one
another. This eventually enabled a controlled process of assembling elements with
high moisture content that in the process of drying and shrinking increased the
tension in the assembly without cracking. This process of post-stressing due to
drying increased the structural capacity of the resultant structural web. This capacity
was demonstrated in a full-scale construction of a small pavilion for the Oslo
Architectural Triennial in 2010. [Fig. 30, Fig. 31, Fig. 32, p. 131]
Fig. 30 Construction information derived from physical and computational form-finding. Pavilion for the
Oslo Architectural Triennial 2010, Studio project, 2010, Wing Yi Hui and Lap Ming Wong at the Oslo
School of Architecture and Design.
Fig. 31 Pavilion for the Oslo Architectural Triennial 2010, Studio project, 2010, Wing Yi Hui and Lap
Ming Wong at the Oslo School of Architecture and Design.
Fig. 32 Local post-stressing of the structural web is possible by saturating the pine veneer strips with
moisture and increasing the contact area of neighbouring strips that are glued together. Studio
project, 2010, Wing Yi Hui and Lap Ming Wong at the Oslo School of Architecture and Design.
The combination of basic research and research by design experiments geared
towards the production of reliable knowledge and re-skilling need not necessarily be
constrained to wood or anisotropic materials in general. What is important is that the
self-organization of the material under investigation in relation to specific extrinsic
influences is made operational. The correspondence between biological precedent
and designed wood product in Linn Tale Haugen’s work highlights one particular
trend of learning from biological materials. The analysis of the seedpod of the
Flamboyant Tree regarding its material composition, structure, properties and
behaviour in response to extrinsic stimuli is of great use for harnessing material
performance. Substantial research into biological materials exists (Vincent
1990, Nachtigall 2002, etc.) but is not sufficiently recognised by architects as a source
for innovation.
Material performance and its capacity to engage and affect local microclimate are of
interest for performance-oriented architecture. Considerations as to the microclimatic
conditions of a relatively undisturbed site or the desired conditions to provide for
existing or desired local communities can inform the choice of materials and their
exposure and orientation to climatic conditions and the resulting thermal and airflow
conditions. In the case of materials with hygroscopic behaviour this can also involve
the humidity regime close to the material surface. Therefore studies of the range of
micro-climatic modulation capacity of materials in a specific context are of great
interest. [Fig. 33]
Fig. 33 Wooden building elements giving off water vapour in order to attain moisture equilibrium with
the surrounding environment. Photography: Defne Sunguroğlu Hensel, 2010.
5.4 The active boundary, the articulated envelope and heterogeneous space
The architectural boundary is generally understood as a material partition: a floor,
wall or ceiling that separates adjacent spaces or interior from exterior, while a
threshold is understood as a zone between outside and inside, or one space and
another, that connects and divides at the same time. Throughout architectural history
and across different cultures and climate zones, the articulation of the architectural
boundary and threshold varied greatly, together with their symbolic connotation and
functional specificity, engaging the environment and offering a broad range of
different degrees of connection, openness and closeness, and provisions for habitation.
Developments associated with industrialisation contributed significantly to narrowing
down this spectrum, in particular by articulating the building envelope as a firm division
between exterior and interior, together with the technical climatization, standardisation
and homogenisation of interior environments. The advent of mechanical-electrical
interior climate modulation was paralleled and enhanced by the attempt to devise
closed ecological systems for spaceflight programmes and the design of cold war
bunkers, at their height in the 1960s. Together these developments drove the
material boundary towards a quasi-hermetic division between exterior and interior.
Reyner Banham anchored this development in architectural theory by way of his
seminal book ‘The Architecture of the Well-tempered Environment’ (Banham 1969).
In this book appeared a diagram of a tent (Banham, 1984 [1969]: 18) that displayed
the tent membrane as a hermetic enclosure that ‘deflects’ moisture, airflow or
thermal radiation. Thus the diagram gives no evidence of the conditions that arise out
of the obvious and unavoidable degree of permeability of any tent membrane: a
significant but deliberate error that idealised a desired condition that found its apex in
the Banham bubble.
Banham showed the tent diagram together with one of a campfire to illustrate what
he thought of as two historically different modes of organising space, a ‘western’ one
that operates through partitioning of spaces by means of physical boundaries, and, a
‘nomadic’ one characterised by the vague boundaries of gradient conditions such as
heat and light, exemplified by the campfire around which people organise themselves
according to preference of exposure and social hierarchy. The stance behind these
diagrams still dominates architecture today. It would seem that not only the tent
diagram was fundamentally inaccurate, but so too was Banham’s proposed dialectical
division between the two modes of spatial organisation, since both modes together
characterised the architectures of many pre-industrial cultures.
Many other attempts to theorize the role of the envelope in orchestrating the relation
between architecture and environment have been pursued since. The German
philosopher Peter Sloterdijk, for instance, located the key advance in considering
environment in architecture in the 19th Century, when in his view the then newly
emerging hothouses or glasshouses in Great Britain aimed for the first time at the
provision of interior conditions that differed dramatically from the local environments
they were placed within, so as to provide suitable conditions for alien plant species.
This development initiated, according to Sloterdijk, not only the notion of environment
in architecture, but also ‘a new politics of trans-human symbiosis’. (Sloterdijk 2005:
944) Sloterdijk elaborated that:
‘Only gradually did nineteenth-century minds grasp the paradigmatic
significance of constructing glasshouses. Such edifices took into account that
organisms and climate zones reference each other as it were a priori and that
the random uprooting of organisms to plant them elsewhere could only occur
if the climatic conditions were transposed along with them... It bears
considering that it was the afore-mentioned exercise of granting plants
hospitality that first created the conditions under which it became possible to
formulate a concept of environment. I can of course forgo providing any
detailed explanation of how and why the concept of ‘environment’ as coined
by biologist Jacob von Uexküll in 1909 (in his book Umwelt und Innenwelt der
Tiere, second edition, 1921) was one of those twentieth-century innovations
in logic that was to have the greatest impact. Not only do large stretches of
modern biology depend on it but also both ecology as a whole and systems
theory. If post-Uexküll the talk was of ‘environment’, then this meant thinking
not just of the natural habitat of exotic animals and plants but also of the
procedures for the technical reproduction of that habitat in alien
surroundings.’ (Sloterdijk 2005: 944-945).
Sloterdijk further theorized that the invention of bent glass and the prefabrication of
standardised elements were the key contributions to this endeavour. Yet herein also
lies the predicament: standardisation enforced its own logic onto the construction,
specifically repetition in structure and symmetrical volumes of the glasshouses,
which made it difficult to employ these architectures to their full capacity for modulating
interior environments by architectural means in response to seasonal differences, the
path and angle of the sun, prevailing wind and weather directions and so on, and
necessitated instead mechanical means for this purpose. The problem also extends
to the different requirements of plants that are native to very different climate zones:
it was difficult to provide heterogeneous environmental conditions within one building
envelope and so different glasshouses were built for species of more or less the
same climate zone. If one compares these buildings, there is not as much difference
in the architecture as one might expect. Instead the difference lies again in the
mechanical modulation of the respective interior environments. In order to control the
latter it was then also necessary that the impact of all undesired exterior conditions
were eliminated - instead of putting them to use - thus enhancing the separation of
the interior from the exterior. And, although Sloterdijk pointed out that ‘we
encounter the materialisation of a new view of building by virtue of which climatic
factors were taken into account in the very structure made’ (Sloterdijk 2005: 945) and
that this understanding continued in modern architecture, the contradictions and
related shortcomings prevailed too. This is not to say that there were no interesting
developments in subsequent architectures, yet, the contradictions were in the
majority of projects reinforced instead of repositioned and solved. However, some
developments of interest existed that are worth mentioning in this context.
The Open Air School Movement began to take shape in the early 1900s in Germany,
with works of note also emerging in the Netherlands and in France. The movement
focused on the design of schools, often for health-impaired pre-tubercular children,
and delivered some interesting examples that foregrounded the use of exterior space
and often also the modulation of the environment. Jan Duiker and Bernard Bijvoet’s open-air
school in Amsterdam, completed in 1928, operated mainly on the principle of
maximizing the interface between interior and exterior. The four-storey building
contained a combination of indoor and roofed-over outdoor classrooms connected by
moveable parts of the building envelope and material transparency. A particularly
interesting example with regards to environmental modulation is the Open Air School
in Suresnes in France designed by Beaudoin and Lods and completed in 1935. This
school featured south-facing freestanding classrooms aligned along a massive and
opaque north wall with sufficient thermal mass, while the other three sides consisted
of foldable glass elements that could be entirely retracted, thus exposing the
classroom to the exterior climate. A special floor heating system made it possible to
ensure a suitable temperature at seating height while at the same time providing the
maximum of oxygen-rich fresh air (see for instance Dudek 2000). David
Leatherbarrow referred to this approach as the ‘device paradigm’, in which the action
of the building is located in mechanically moveable parts, in the above cases as
integral part of the envelope, to enable adjustment. The range of adjustability is key
to ‘the modification and mediation of the environment in its widest sense, from
climate to human behaviour.’ (Leatherbarrow 2005: 12)
Generally, however, the division of homogenised interior environments from the
exterior environment accelerated. A peak was reached with the development of the
so-called office landscapes, a 1950s movement in organising large open-plan offices,
pioneered by the Quickborner Team for Planning and Organisation, which ironically
intended to provide a more humane office environment. Office landscapes or
Bürolandschaften constituted vast open-plan offices in which clusters of
workstations were arranged according to anticipated workflow. It was argued that a
homogeneous interior environment minimised any visual, aural or tactile distractions,
thus optimising the workflow, and a corresponding set of rules for environmental
homogenisation was laid down accordingly.
These developments further intensified in the 1960s. Tight regulations for
homogeneous interior environments soon followed. These were based on
statistical averages and aimed at comfort and safety. The statistical averages were
often based on rather predictable stereotypes, and only a minimum of variation was
considered in terms of degree of clothing or degree of movement relative to activity.
It is hard to imagine that a specific person will have the same comfort requirements
at different times and in different circumstances, and so the very basis of the
preference for homogeneous interior environments seems flawed.
Interior climatization rapidly became both a status symbol and increasingly
affordable, and from the 1970s onwards it replaced traditional architectural means
of environmental modulation throughout large parts of the world. When eventually
questions of sustainability began to surface these were invariably connected to the
question of ‘power-based solutions’, in such a manner as to reinforce the role of
technology and interior/exterior division. ‘Low’ and ‘Zero’ energy efforts do not
necessarily indicate a shift away from energy-based technology, but instead towards
the least energy-consuming technologies. This constitutes today’s prevailing
approach in the more developed and energy dependent countries across all climate
zones, from the hot to the cold and from the humid to the dry regions of the world. In
order to facilitate this approach the architectural boundary continues to be
predominantly a dividing element that is largely passively resistant and technology
constitutes the means of active exchange.
There are, however, also a number of promising approaches, including ‘free-
running buildings’ and the ‘adaptive approach to thermal comfort’. Free-running
buildings are not heated or cooled, either in general or during particular seasons. In
temperate climates, for instance, many buildings are not cooled or heated during the
summer months. While the terminology features for the first time in regulations (see
for instance the European Standard EN 15251: Allowing for thermal comfort in free-
running buildings), the principle itself is not new. This notion applies in general to pre-
industrial buildings. There exists, though, a fundamental difference between how
climate-specific free-running buildings were thought of in the past and how they are thought of today. The
‘adaptive approach to thermal comfort