The Power of Systems: How Policy Sciences Opened Up the Cold War World
Abstract
This book introduces readers to one of the best-kept secrets of the Cold War: the International Institute of Applied Systems Analysis, an international think tank established by the U.S. and Soviet governments to advance scientific collaboration. From 1972 until the late 1980s, IIASA in Austria was one of the very few permanent platforms where policy scientists from both sides of the Cold War divide could work together to articulate and solve world problems. This think tank was a rare zone of freedom, communication, and negotiation, where leading Soviet scientists could try out their innovative ideas, benefit from access to Western literature, and develop social networks, thus paving the way for some of the key science and policy breakthroughs of the twentieth century. Ambitious diplomatic, scientific, and organizational strategies were employed to make this arena for cooperation work for global change. Under the umbrella of the systems approach, East-West scientists co-produced computer simulations of the long-term world future and the anthropogenic impact on the environment, using global modeling to explore the possible effects of climate change and nuclear winter. Their concern with global issues also became a vehicle for transformation inside the Soviet Union. The book shows how computer modeling, cybernetics, and the systems approach challenged Soviet governance by undermining the linear notions of control on which it was based and by creating new objects and techniques of government.
... While the centers could be critiqued for focusing on commodity crops rather than fruits and vegetables, they have produced international public goods as well as helped build national research capacities (Thornton et al., 2022). Another international research center was IIASA, the International Institute of Applied Systems Analysis, which was established jointly by the United States and Soviet Union in Austria in 1972, and focused on solving global problems with systems-based approaches; as political sociologist Egle Rindzeviciute shows, this East-West cooperation helped develop global governance as an intellectual and socio-technical project (Rindzeviciute, 2016). The model of international research centers, if applied to climate response, could involve producing regional knowledge on mitigation, adaptation, carbon removal, and solar geoengineering as well. ...
Solar geoengineering, or reflecting incoming sunlight to cool the planet, has been viewed by international relations and governance scholars as an approach that could exacerbate conflict. It has not been examined through the framework of environmental peacebuilding, which examines how and when environmental challenges can lead to cooperation rather than conflict. This article argues that scholars should treat the link between solar geoengineering and conflict as a hypothesis rather than a given, and evenly examine both hypotheses: that solar geoengineering could lead to conflict, and that it could lead to peace. The article examines scenarios in which geoengineering may lead to negative peace—peace defined as the absence of conflict—and then applies a theoretical framework developed by environmental peacebuilding scholars to look at how solar geoengineering could relate to three trajectories of environmental peacebuilding. A peace lens for solar geoengineering matters for research and policy right now, because focusing narrowly on conflict in both research and policy might miss opportunities to understand and further scenarios for environmental peacebuilding. The paper concludes with suggestions for how research program managers, funders, and policymakers could incorporate environmental peacebuilding aims into their work.
... IIASA's lasting impact and legacy lies in the provision of a sometimes contested but often innovative environment for the collaborative coproduction of common problems. A depoliticised systems approach allowed for international collaboration, mutual learning and varieties of boundary transgressions, in which disciplinary perspectives, trainings and subjectivities were made explicit and sometimes put aside in order to generate novel responses to the challenges of late modern societies (Rindzevičiūtė 2016). As a result of these collaborations, numerous novel interdisciplinary and multilateral perspectives emerged at IIASA – among other places – that broadened the scope of questions to be dealt with on a scientific basis: especially as there was often no exchange or joint problematisation at a political level on issues such as transboundary pollution, the challenges of technological change and associated risks and the problem of sustainable development. ...
The notion of »the problematic« has changed its meaning within the history of power and knowledge since the early 20th century, leading up to today's performative, neocybernetic fascination with generalized management ideas and technocratic models of science. This book explores central scenes, conceptual elaborations, and practical affiliations of what historically has been called »the problem« or »the problematic«. By way of considering modes of problematization as modes of inhabitation, intervention, and transformation, the contributions map its current conceptual-political uses as well as onto-epistemological challenges. Thus, »problematization« is positioned as a critical concept that links, often in intricate ways, several currents from speculative philosophy to the formation of interdisciplinary fields. The »problematic«, as it turns out, has been the source of change in philosophy and the sciences all along.
This chapter examines the role of economics and economists in the Soviet Union. Stalinist policies resulted in a particular model of relations between the social sciences and the state that severely limited the scientific autonomy and professional agency of economists. Despite opportunities for expansion in economic research and education, particularly after World War II, economics remained subordinate to party ideology and academic administration, where it played a predominantly technical role. The postwar resurgence of economics as a political science, driven by heightened Cold War competition and a focus on science and technology for economic and social progress, led to the emergence of new institutions and reformist identities within economics and facilitated dialogue between Western and socialist economists. We also examine how this movement fostered a distinct technocratic mindset and reformist ethos among economists, who sought to introduce various tools to improve existing planning and management practices, yet struggled to balance loyalty to the state with structural constraints.
While the USSR stood at the forefront of the global climate change agenda, Russia has forsaken much of this legacy, becoming engulfed in climate scepticism furthered by the oil and gas lobby, which, over the years, has become inseparable from the state itself. Hard-pressed by the imminence of the introduction of carbon-based tariffs on its key export products by leading world economies, Russia, in 2021, pledged to achieve carbon neutrality by 2060 and implement a system of carbon tariffs within a few years. However, the wave of Western sanctions that followed Russia’s military operation in Ukraine in 2022 has made Russia’s breakthrough climate commitments of the previous year largely untenable, as the country extensively depends on the importation of corresponding equipment and technologies from the West. Moscow has now been turning towards non-Western actors such as the BRICS countries for cooperation in decarbonization efforts. In view of the above, this chapter reviews the evolution of Soviet and Russian environmental policies, emphasizing the strategic significance of Russia in the global climate mitigation effort and its historic involvement in the sphere; analyses internal and external factors that have influenced the climate agenda in the country; and stresses that geopolitical, economic, and capacity considerations will continue to shape the climate calculations of the Kremlin, as there appears to be insufficient concern among both Russia’s elite and the masses regarding the real implications of anthropogenic climate change.
This paper explores the political uses of images generated by Earth System science. It argues that images of possible climate futures, maps of potential worlds of heatwaves and wildfires, are made legible to policymakers by an alliance with a class of climate-economy models that associate scientific estimates of climate impacts with a prescribed international policy and technology mix. While environmental models have successfully mobilized policymakers in the past by providing images of “planetary scenarios” accompanying different emissions pathways, with climate change a political actor outside the administrative state is required to overcome the entrenchment of fossil capital. The paper suggests such actors are empowered not by the rhetoric of scenario modeling but by the emerging practice of “planetary sensing,” where activists and stakeholders directly mobilize the planetary images generated by Earth System science as they work to evacuate prisons, track pollutants, and repair pipelines.
Abstract
The International Institute for Applied Systems Analysis (IIASA) was founded in 1972 in Laxenburg near Vienna and was, by its own account, the world's first research institute at which scientists from East and West were to work together on solving global or transnational problems. After describing the founding of IIASA, this contribution turns to the analytical methods employed there and, through the history of these methods, traces the interplay between organizational form and form of knowledge that shaped everyday research at IIASA during the first two decades of its existence. Owing to its specific orientation and organizational form, IIASA offered a supportive environment for developing an innovative approach to the epistemological, methodological, and knowledge-political problems that a globally oriented study of the planet and its inhabitants confronted in the second half of the twentieth century.
This article argues that the Cold War-era battle between information and uncertainty is a critical origin point for contemporary social theory-informed, data-intensive projects of the US national security state. Beginning in the 1950s, international relations experts and government officials turned to digital computing to help make decisions under the unavoidable pressures of geopolitical uncertainty. By the 1970s, their data banks of political knowledge and novel statistical tools purported to forecast political unrest long before an unaided human could. These efforts sparked a new epistemology of political knowledge, one that is now common in data science, in which designers and users prioritize correlation over causality and the instrumental management of problems over scholarly understanding or explanation. Far from a historical curiosity, this history is a warning. The sensibilities of Cold War technopolitical projects are continually rematerialized in contemporary computational security projects. Left unchallenged, their durability will continue to increase in tandem with the national security state's continued investment in computational social scientific projects for geopolitical management.
In the 1960s, creativity became an important category for the Soviet state. Soviet educators and policy makers came to define creativity as problem solving in the service of Soviet automation. At the same time, the introduction of cybernetics, information theory and methods of artificial intelligence (AI) to psychology enabled Soviet researchers to perform quantitative studies of human cognition. The state concern with creative thinking and the cyberneticization of Soviet psychology allowed for the first quantitative studies of human problem solving. These shifts in Soviet society and scientific communities created fertile ground for the creation of Lev Landa's algo-heuristic theory (AHT), a pedagogical method of cultivating rule-bound creativity relying on tools and instruments developed and perfected in information theory and AI research. Drawing on scholarship in the history of algorithmic rationality, the Cold War discourse on creativity as a corporate imperative, and the place of cybernetics-inflected methods in the welfare domain, this article analyses the AHT as a rule-based instrument for making creative thinking accessible to the lay mind.
Non-technical summary
This article uses water to examine how the relationships of ethics to science are modified through the pursuit of Earth stewardship. Earth stewardship is often defined as the use of science to actively shape social–ecological relations by enhancing resilience. The changing relations of science to values are explored by considering how ideas of resilience operate to translate different ways of knowing water into the framework of Earth stewardship. This is not a neutral process, and Earth stewardship requires careful appraisal to ensure other ways of knowing water are not oppressed.
Technical summary
Scientific disclosures of anthropogenic impacts on the Earth system – the Anthropocene – increasingly come with ethical diagnoses for value transformation and, often, Earth stewardship. This article examines the changing relationship of science to values in calls for Earth stewardship with special attention to water resilience. The article begins by situating recent efforts to reconceptualize human–water relations in view of anthropogenic impacts on the global water system. It then traces some of the ways that Earth stewardship has been articulated, especially as a framework supporting the use of science to actively shape social–ecological relations by enhancing resilience. The shift in relations of ethics and science entailed by Earth stewardship is placed in historical context before the issues of water resilience are examined. Resilience, and critiques of it, are then discussed for how they operate to translate different ways of knowing water into the framework of Earth stewardship. The ethical stakes of such translations are a core concern of the conclusion. Rather than reducing different ways of knowing water to those amenable to the framework of Earth stewardship, the article advances a pluralized approach as needed to respect multiple practices for knowing and relating to water – and resilience.
Social media summary
Water resilience is key to Earth stewardship; Jeremy Schmidt examines how it changes relations of science and ethics.
This article analyses the intellectual and institutional development of the artificial-intelligence (AI) research programme within the Soviet Academy of Sciences from the 1970s to the 1980s. Considering the places and ideas from which it borrowed, I contextualize its goals and projects as part of a larger technoscientific movement aimed at rationalizing Soviet governance, and unpack shared epistemological and cultural assumptions. By tracing their origins to debates accompanying the introduction of cybernetics into Soviet intellectual and political life in the 1950s and early 1960s, I show how Soviet conceptions of ‘thinking machines’ interacted with dialectical materialism and communist socio-technical imaginaries of governance and control. The programme of ‘situational management’ developed by Dmitry Pospelov helps explain the resulting conception of AI as control systems aimed at solving complex tasks that cannot be fully formalized and therefore require new modelling methods to represent real-world situations. This specific orientation can be understood, on the one hand, as a research programme competing with systems analysis and economic cybernetics to rationalize Soviet management, and, on the other hand, as a field trying to demarcate itself from a purely statistical or mathematical approach to modelling cognitive processes.
This article studies successive Soviet and Russian government positions on climate change between the late 1980s and the Putin era. It thereby bridges a gap between expanding research on both the role of the Soviet Union in climate change science and diplomacy and on Russian climate change policy after the turn of the millennium. While far-reaching late Soviet plans for decisive participation in the groundbreaking Rio Earth Summit contrasted with the lack of priority accorded to it by Russia during a period of political and economic turmoil, this article argues that there was, before and after 1991, a remarkable continuity of real concern in government about anthropogenic climate change and its negative consequences, not least for the Soviet Union and Russia. This continuity of concern took form in 1989 and lasted for a decade. In contrast to the misleading picture presented to outside observers, notably by the highly visible Yuri Izrael’ and some of the Russian delegations at international climate conferences in the 1990s, a neglect of anthropogenic climate change and its dangers for Russia took hold in the Russian government only after Vladimir Putin came to power. A renewed official recognition of the dangers of anthropogenic climate change materialized only with the 2009 Climate Doctrine. However, until recently this recognition remained half-hearted in comparison with the clear government positions of the late 1980s and the 1990s.
The Cambridge History of the Polar Regions is a landmark collection drawing together the history of the Arctic and Antarctica from the earliest times to the present. Structured as a series of thematic chapters, an international team of scholars offer a range of perspectives from environmental history, the history of science and exploration, cultural history, and the more traditional approaches of political, social, economic, and imperial history. The volume considers the centrality of Indigenous experience and the urgent need to build action in the present on a thorough understanding of the past. Using historical research based on methods ranging from archives and print culture to archaeology and oral histories, these essays provide fresh analyses of the discovery of Antarctica, the disappearance of Sir John Franklin, the fate of the Norse colony in Greenland, the origins of the Antarctic Treaty, and much more. This is an invaluable resource for anyone interested in the history of our planet.
From the beginning, a science-based approach to questions of the future and – more precisely – thinking in alternative futures was in latent conflict with the official ideology of the German Democratic Republic, according to which East German society (and indeed, the whole of humankind) was heading towards a communist future. During the 1960s, however, prognostics – the socialist type of futures studies – fitted well into the ambition of political leaders to foster economic development by promoting scientific-technological progress and adopting new management systems of the national economy. Prognostics was to a certain extent institutionalized and in part acquired a cybernetic underpinning, but ideological constraints on knowledge never vanished. Moreover, prognostics had to distinguish itself clearly from “late-capitalist” futurology. With the reorientation of politics after Walter Ulbricht lost power, prognostics was cut back, as was its cybernetic underpinning. As the official belief in the communist future eroded during the 1980s, there was no longer any room for governmental foresight. Futures thinking was taken up by the dissident movement.
This article explores the practice of pre-empting controversy as an example of the wicked problem of cultural participation in the digital media. Drawing on science and technology studies (STS), research into the history of cybernetics, artificial intelligence (AI), and policy studies, it argues that the ongoing digital transformation and the expansion of the algorithmic public sphere does not solve but amplifies the problem of cultural participation, challenging the ‘participatory turn’ in cultural policy, defined as cultural policy’s re-orientation to encourage participation of different stakeholders at different stages of policymaking. This process is analysed through two cases: the postponing of a retrospective exhibition of the painter Philip Guston in the United States and the pre-emptive ban of a public art project centred on a monument for the Soviet Lithuanian writer Petras Cvirka in Lithuania. In both cases, risk management through pre-emption backfired and revealed the lack of institutional preparedness to foresee and deal with the digital social.
This chapter focuses on the sixth Pugwash conference, held in Moscow in late 1960. It argues, first, that this conference witnessed a decisive shift in terms of the quality of the dialogue about disarmament between US and Soviet scientists, giving rise to a novel form of technopolitical communication. Second, these exchanges carried new political weight because those around the Pugwash table included scientists close to the White House and the Kremlin. As a result, western perceptions of the Pugwash organization changed: US and UK governments came to see it as relevant to their interests. The Moscow conference also led to delegates from the US and the USSR creating a bilateral East–West Study group on disarmament, based on the Pugwash model but operating outside it. Overall, the chapter argues that this conference created an important platform for Pugwash and its scientists within the international realm of informal Track II diplomacy. Keywords: Techno-political communication; Arms control; General and complete disarmament; Soviet-American Disarmament Study Group (SADS); Joseph Rotblat; Aleksandr Topchiev; Jerome Wiesner; Solly Zuckerman
An emergent polar futurism characterizes the contemporary built space of climate science in Antarctica, inaugurated in large part by the British Antarctic Survey's cutting-edge Halley VI research base. This article analyzes the spatial form, design, and use of Halley VI as well as the rhetoric surrounding it, seeing in Halley VI an expression of a particular “socio-technical imaginary” that implicitly gestures toward a tendential integration of climate science and global logistics. Alongside claims toward fostering a comfortable, communal life among its inhabitants, the imaginary embedded in Halley VI is one where climate research is subsumed within capital's broader aims to facilitate stable logistical movements and infrastructural durability amid chaotic, volatile conditions, a subsumption that bears in particular on the knowledge workers who inhabit the base. What a reading of the base's layout, interior, and lived-in uses exposes, the paper claims, is an implicit portending of a growing proletarianization of sensual experience and knowledge work among residents at the base, increasingly displaced as they are from the subjective core of the base's operations. This reading both extends and complicates recent calls in polar geographies to attend to speculative figurations of Antarctic futures, channeling Halley VI's polar futurism through structural determinants drawn out of literatures critically dealing with design, the history of systems sciences, and theorizations of ongoing restructurings of contemporary labor. The article suggests then that imaginaries of Anthropocenic futures such as those embedded in Halley VI's polar futurism might serve at once as speculative-projective tools and implicit sites for carrying out critiques of tensions and pernicious trends that underlie such Anthropocenic speculation.
With technologies like machine learning and data analytics being deployed as privileged means to improve how contemporary bureaucracies work, many governments around the world have turned to artificial intelligence as a tool of statecraft. In that context, our paper uses Canada as a critical case to investigate the relationship between ideals of good government and good technology. We do so through not one, but two Trudeaus—celebrity Prime Minister Justin Trudeau (2015—…) and his equally famous father, former Prime Minister Pierre Elliott Trudeau (1968–1979, 1980–1984). Both shared an interest in new ideas and practices of intelligent government and artificial intelligence. Influenced by Marshall McLuhan and his media theory, Pierre Elliott Trudeau deployed new communication technologies to restore centralized control in an otherwise decentralized state. Partly successful, he left his son with an informationally inclined political legacy, which decades later animated Justin Trudeau's own turn toward Big Data and artificial intelligence. Compared with one another, these two visions for both government and artificial intelligence illustrate the broader tensions between cybernetic and neoliberal approaches to government, which inform how new technologies are conceived of, and adopted, as political ones. As this article argues, Canada offers a paradigmatic case for how artificial intelligence is as much shaped by theories of government as by investments and innovations in computing research, which together delimit the contours of intelligence by defining which technical systems, people, and organizations come to be recognized as its privileged bearers.
This article analyses the EU’s Stability and Growth Pact (SGP) to challenge interpretations of neoliberalism as an international project. The fiscal rules of the SGP are a paradigmatic example of how neoliberalism uses constitutional techniques to put limits on national democracy. These rules, however, have never worked as intended with adherence having been the exception rather than the norm. Although scholars readily admit neoliberal rules misfire in practice, conceptualisations of neoliberalism have remained largely unscathed. In contrast, this article argues that techniques of budgetary planning have had a more crucial impact on neoliberal fiscal governance than legal rules. In the case of the SGP, supranational actors have been empowered not by their capacity to put constitutional limits on public expenditure, but by analysing and intervening in the purposes and uses of public finance through managerial techniques of budgetary planning. In making this argument, I argue that neoliberal rules matter to international fiscal governance only through their failure. Instead, supranational institutions have been empowered in the neoliberal era through a managerial reformatting of governance.
This article shows how, in the middle years of perestroika (1987–1988), the Estonian reform movement (mainly consisting of scholars and experts) used technoscientific vocabulary to strengthen their quest for the republic’s ‘territorial self-management’. To this end, the reformists drew on concepts borrowed from disciplines such as cybernetics, systems theory, future studies and management. The article investigates the transfer of two particular concepts, ‘future scenario’ and ‘self-regulation’, from the scientific field to the Estonian political arena in 1987 and their role in pursuing the ‘self-manageable’ status for the republic. The process culminated in November 1988 with the adoption of the Declaration of Sovereignty of the Estonian SSR. This development served as a catalyst for the adoption of similar declarations by other republics, which opened the way to the dissolution of the Soviet Union.
In the decades after 1945, the future gained unprecedented prominence as an object of scientific anticipation and state planning in both capitalist and socialist countries of the Cold War world. In Poland, future studies or futurology emerged in the course of the 1960s in reaction to Western intellectual trends, the post-Stalinist political Thaw, as well as the domestic socio-economic situation. Polish futurology turned out to be one of the most productive, institutionally and personally stable research collectives when compared with other socialist countries. This research community generated various approaches to the problem of how to anticipate the unknown future. This chapter examines three of them: making the future an object of knowledge; subjecting it to conscious (political) control; imagining alternatives to the status quo. Re-examining these historical examples of anticipatory knowledge provides a mirror to discuss our current efforts at predicting and controlling the future.
In this article, we seek to open up for critical debate disciplinary narratives that center the “synthesis” qualities of geographic thought. Proponents of Geography often emphasize its integrative, synthesis approach to human–environment relations to underline its value to interdisciplinary research initiatives addressing critical real-world issues such as climate change. But there are multiple styles of knowledge synthesis at work within academia and beyond, and they have contradictory ethical and epistemological effects. More specifically, synthesis is on the rise, but it is not Geography’s synthesis-as-understanding. Rather, an increasingly dominant cybernetic sociotechnical imaginary is installing a specific notion of synthesis—“synthesis-as-solution”—into universities, transforming both the production of knowledge and the institutional management and technological manifestation of that production. This cybernetic sociotechnical imaginary constrains research ethically and epistemologically to reduce knowledge to the synthesizable information flows and continuous innovation that characterize cybernetic control. In this context, non-conforming research—that is, research that disrupts or disdains such smooth synthesis—risks being labeled unprofessional, unimportant, and obsolescent and marginalized institutionally. Geographic disciplinary narratives that unreflexively celebrate synthesis thus risk producing a paradoxical future for Geography, one in which more space for different modes of knowledge production is created, but the type of difference recognized and affirmed is severely constrained. There is a pressing need for geographers to pay more attention to the practices and contexts in which we create disciplinary narratives because, like the content of our knowledge production, they can either challenge or reinforce a cybernetic sociotechnical imaginary.
Climate models are what governments, experts and societies base their decisions about future climate action on. To show how different models were used to explain climatic changes and to project future climates before the emergence of a global consensus on the validity of general circulation models, this article focuses on the attempt of Soviet climatologists and their government to push for their climate model to be acknowledged by the international climate science community. It argues that Soviet climate sciences as well as their interpretations of the climate of the twenty-first century were products of the Cold War, and that the systematic lack of access to high-speed computers forced Soviet climatologists to use simpler climate reconstructions as analogues, with far-reaching consequences for climate sciences in post-Soviet Russia. By juxtaposing the history of Soviet climate modelling with the early history of the Intergovernmental Panel on Climate Change, which rejected the Soviet model, the article sheds light on the relationship of science and politics. The findings are based on archival and print material as well as on interviews.
This article contributes to the study of governmentalities of the late twentieth century with regard to the proliferation of computers and information technology. Based on archival materials and oral history interviews, this study reconstructs and analyses the story of Franco–Soviet cooperation in the field of economics from the late 1950s to the 1980s, which was initially motivated by a common interest in promoting planning methods, but was later recast as a dialogue dominated by technical issues of information processing and communication and ultimately became part of a commercial strategy to support the French and Eastern European computer industries.
This essay studies the narrative self-positioning of Science Studies in the German Democratic Republic during the 1980s. Drawing on archival material on the foundation of the Council for Marxist-Leninist Science Studies at the Academy of Sciences in East Berlin in March 1988, it analyses how boundaries between Science Studies as a free-standing discipline and several other fields were construed and crossed at the same time, and how (scientific) authority was claimed from the intermediate position of an external insider. Not only did Science Studies engage with their subject – the sciences – but also with the politics of the Socialist Party, with the institution of the Academy, and with (industrial) production. After a formative institutional phase that spanned the 1970s, Science Studies made efforts to centralize their work during the 1980s, to bind themselves closer to the state and scientific institutions, and to distinguish themselves from them at the same time.
Recently, certain members of the scientific community have framed anthropogenic climate change as an invitation to reimagine the practice of science. These calls to reinvent science coalesce around the notion of usable knowledge, signaling the need to ensure that research will serve the needs of those impacted by climate change. But how novel is this concept? A historical analysis reveals that the goal of usability is haunted by Euro-American conceptions of instrumental knowledge dating back to the nineteenth century. Even as climate research institutions have embraced the radical epistemic ideal of usability over the past 40 years, they have clung to older definitions of research that are at odds with its anti-individualist implications.
Starting in the 1950s, computer programs for simulating cognitive processes and intelligent behaviour were the hallmark of Good Old-Fashioned Artificial Intelligence and ‘cognitivist’ cognitive science. This article examines a somewhat neglected case of simulation pursued by one of the founding fathers of simulation methodology, Herbert A. Simon. In the 1970s and 1980s, Simon had repeated contacts with Marxist countries and scientists, in the context of which he advanced the idea that cognitivism could be used as a framework for simulating dialectical materialism. Simon's idea was, in particular, to represent dialectical processes through a ‘symbolic’ version of dialectical logic. This article explores the context of Simon's interaction with Marxist countries—China and the USSR—and also assesses the outcome of the simulation. The difficulty with simulating distinctive features of dialectical materialism is read in light of the underlying assumptions of cognitivism and, ultimately, in light of the attempt to tame a rival world view.
The chapter explores the history of knowledge, practices, and technologies associated with the development of scientometrics and its main tools, the Science Citation Index, science indicators and impact factors in the 1970s. The chapter shows how imagined and real features of Soviet science and society, such as the centralization of research, employment security and the labour of the Soviet users of Western technologies, who often worked by hand or using simple mechanical devices, shaped the development of this computer-based data-analytic tool in the places of its origin in the West. The history of scientometrics is a complex story of creative appropriation and modification of knowledge and technologies in the context of transnational, East–West interactions and encounters.
This chapter examines the role of Cold War exchanges in the human sciences that shaped Soviet research on programmed instruction during the 1960s and 1970s. Pioneered by psychologist Lev Landa, this field sought to describe human learning in terms of logical structures and offered the theoretical foundation for the development of special teaching computers. Landa’s research had distinct resemblances to the quantitative orientation in American behavioral sciences of that time. His approach also made use of the conception of bounded human rationality produced in the US by mathematically inclined fields of social science. Nonetheless, while drawing on Western approaches to programmed instruction and cybernetics, Landa also adapted those to Soviet political, social, and ideological contexts.
As a global phenomenon, the Cold War had a profound influence on international relations, society, culture, and the sciences, including the social sciences. Recent historiographical developments suggest the need for close attention to transnational dimensions of Cold War social science. Adopting a transnational lens brings into focus movements, exchanges, and interactions that have often received at best marginal consideration. With this in mind, the present volume concentrates on three main lines of investigation: exploring important factors that enabled transnational movements and exchanges in the social sciences during the Cold War; analyzing how transnationalism shaped social science work in various Cold War-inflected contexts; and exploring how transnational exchanges across different Cold War settings inspired debate over fundamental questions concerning the nature and meaning of the social sciences. Along the way, this volume urges us to rethink certain fundamental points about how we should understand—and thus study—the Cold War itself.
This chapter takes a closer look at the conceptual roots of the Anthropocene. Tracing the history of the Anthropocene concept helps in explaining how certain political imaginaries have found their way into new forms of Anthropocene governance. Using the approach of genealogy, it holds that the production of discourses is inherently tied to forms of political power. A genealogical perspective asks questions such as ‘How did it become possible to conceive of the Earth as an interlinked system and of humanity as a geological actor in the first place?’ In four sections, it first briefly summarizes the debate on the origins and historical predecessors of the Anthropocene concept, then introduces Foucault’s concept of genealogy and outlines an analytical framework to operationalize it. The third section illustrates how to use this approach in practice by taking the Whole Earth movement as an example. The concluding section summarizes the findings and discusses their relevance for International Relations.
Development of water resources and socio-economic development heavily depend on each other. Thus, environmental issues started to appear more prominently in the economics debate worldwide in the 1950s. In this era, the main objective was to maximise economic returns from water resources. However, effective coordination of water goes beyond economics, taking into account political, social and ecological dynamics in the complex interactions between stakeholders in order to prevent present and future conflicts related to water. Systems analysis (SA) models emerged in response to this need and allow for a holistic approach to a variety of water management issues, from regional planning and river basin management to water quality, flooding and drought management, and sectoral water allocation. The goal of this chapter is to provide a comprehensive introduction to the systems analysis (SA) modelling approach in water management and the evolution of SA models as decision support tools since the 1950s in response to contemporary water challenges. Following a brief conceptualisation, an extensive literature review showcases the major methodological trajectory and practical contributions of the field. The potential of SA research to provide policy-relevant solutions to major water-related problems in the Global South, focusing on the case of Brazil, is further discussed in the concluding sections.
This article is focused on the economic works of the Soviet machine-learning pioneer Emmanuil Braverman, who published, during the 1970s, a series of papers introducing disequilibrium fixed-price models of the Soviet economy. This highly original theory, developed independently from the Western analyses of disequilibria, proposed rationing mechanisms capable, under some conditions, of bringing a system to the state of equilibrium. However, in a fixed-price economy, equilibria are not necessarily optimal or effective; therefore specific observational and analytic procedures aiming at bringing a system to a better state had to be invented. Braverman interpreted this analytic framework as a “qualitative system of control” of the Soviet economy representing a sort of third-way solution between neoclassical models of spontaneous coordination of autonomous agents and theories of optimal planning. This innovative approach, very different from the styles of reasoning in mathematical economics of his time, was grounded in his work on pattern recognition and informed by a cybernetic vision of control as information processing and communication in complex systems.
This paper focuses on official Soviet attitudes towards ‘ecological crisis’ and the rhetoric developed to address it. It analyses in particular the discussions in the Soviet Union that followed the publication of the Club of Rome report Limits to Growth (1972). It contributes to a better understanding of the debate around resource scarcity within the framework of the so-called ‘ecological crisis’ as it was conceptualized from the late 1960s to the mid-1970s. It is based on the analysis of writings by the Soviet geophysicist Evgenii Fedorov (1910–81), who was among the few Soviet members of the Club of Rome and thus had direct access to contemporary Western scholarship. The paper explores how such rhetoric accepted and reconceptualized the notion of crisis for use in both domestic and international environmental politics and the associated advancement of technology as the most effective remedy against resource scarcity. Fedorov largely built his ideas on Soviet Marxism and Vladimir Vernadsky’s concepts, which preceded the current notion of the Anthropocene. In addition, his experience in nuclear projects and weather modification research – both more or less successful technocratic projects – gave him some assurance of the power of technology. The paper also provides some comparison with views of the problem from the other side of the Iron Curtain through a discussion of the thoughts of the left-wing American environmentalist Barry Commoner (1917–2012), which had been popularized for the Soviet public by Fedorov.
Cybernetics saturates the humanities. Norbert Wiener’s movement gave vocabulary and hardware to developments all across the early digital era, and still does so today to those who seek to interpret it. Even while the Macy Conferences were still taking place in the early 1950s, talk of feedback and information and pattern had spread to popular culture – and to Europe. The new science created a shared language and culture for surpassing political and intellectual ideas that could be relegated to a pre-computing tradition, and it refracted or channelled currents developing in fields from manufacturing to human physiology. It produced conceptions of the political world, as well as new forms of historical consciousness. It offered frameworks for structuralist thought, but also for policies regarding manufacturing and technology, international relations, and governmental decision-making. But the rising sense of the breadth, importance, and even shock of cybernetics long remained understudied, even as its intellectual assemblages continued to, well, relay. In devices and the so-called ‘digital humanities’, a refracted legacy of cybernetics is also visible. From mainframes to category-frameworks, cybernetics is everywhere in our material and intellectual worlds, even as the name and its meaning have faded. To the extent that cybernetics permeates the human sciences and our culture at large, it remains opaque – an only partially visible legacy often deemed too complex to form a simple object of historical narrative. This special issue on cybernetics in the human sciences outlines the history and stakes of cybernetics, as well as the possibilities of returning to it today.
Rather than assume a unitary cybernetics, I ask how its disunity mattered to the history of the human sciences in the United States from about 1940 to 1980. I compare the work of four prominent social scientists – Herbert Simon, George Miller, Karl Deutsch, and Talcott Parsons – who created cybernetic models in psychology, economics, political science, and sociology with the work of anthropologist Gregory Bateson, and relate their interpretations of cybernetics to those of such well-known cyberneticians as Norbert Wiener, Warren McCulloch, W. Ross Ashby, and Heinz von Foerster. I argue that viewing cybernetics through the lens of disunity – asking what was at stake in choosing a specific cybernetic model – shows the complexity of the relationship between first-order cybernetics and the postwar human sciences, and helps us rethink the history of second-order cybernetics.
This article situates the emergence of cybernetic concepts in postwar French thought within a longer history of struggles surrounding the technocratic reform of French universities, including Marcel Mauss’s failed efforts to establish a large-scale centre for social-scientific research with support from the Rockefeller Foundation, the intellectual and administrative endeavours of Claude Lévi-Strauss during the 1940s and 1950s, and the rise of communications research in connection with the Centre d’Études des Communications de Masse (CECMAS). Although semioticians and poststructuralists used cybernetic discourse critically and ironically, I argue that their embrace of a ‘textocratic’ perspective – that is, a theory of power and epistemology as tied to technical inscription – sustained elements of the technocratic reasoning dating back to these 1920s efforts to reform French universities.
In the wake of Stalin's death, many Soviet scientists saw the opportunity to promote their methods as tools for the engineering of economic prosperity in the socialist state. The mathematician Leonid Kantorovich (1912–1986) was a key activist in academic politics that led to the increasing acceptance of what emerged as a new scientific persona in the Soviet Union. Rather than thinking of his work in terms of success or failure, we propose to see his career as exemplifying a distinct form of scholarship, that of the partisan technocrat, characteristic of the Soviet system of knowledge production. In his confrontation with the class of orthodox economists, many factors were at work, including Kantorovich's cautious character and his allies in the Academy of Sciences. Drawing on archival and oral sources, we demonstrate how Kantorovich, throughout his career, negotiated the relations between mathematics and economics, reinterpreted political and ideological frames, and reshaped the balance of power in the Soviet academic landscape.
This chapter describes the trajectories of the two techniques of prognosis beyond RAND. Together with other former RAND colleagues, Olaf Helmer and Theodore J. Gordon founded the Institute for the Future (IFTF) and carried out Delphi studies on problems beyond military interests. And with the support of Paul Kecskemeti, Lincoln P. Bloomfield from MIT’s Center for International Studies (CENIS) began to use political gaming as a method of both research and teaching.
The article analyzes the reception of the idea of convergence in Soviet economics from the 1960s to the end of the 1980s. It is predominantly concerned with convergence theory as a policy idea that inspired perestroika. Its central question is: Under the conditions of an authoritarian regime, how could an imported policy idea that bluntly contradicted official ideology reach a degree of dissemination and (among a specific stratum of the elite) popularity that would later turn it into a central pillar of reform policy? An important finding is that the idea of convergence united the Soviet “people of the sixties” and some Western “progressive” intellectuals who together formed a transregional epistemic community that only for a short period of time, at the end of the 1980s, gained political influence.
This article explores the political effects of the development of systems analysis as a form of “infrastructural knowledge”—that is, as a form of knowledge concerned with infrastructure, and an infrastructure of knowledge—that contributed to internal dissensus among scientific experts in the Soviet Union. Systems expertise is largely missing from existing work on the history of Soviet infrastructure. The article analyzes the development of governmental, managerial, and industrial applications of systems analysis in the Soviet context, as well as the transfer of Soviet systems expertise to developing countries. It argues that systems analysis constitutes a form of infrastructural knowledge that enabled Soviet scientists to criticize governmental policies, particularly large-scale, top-down infrastructure projects. This critique is interpreted as an expression of a new normativity about what constitutes good governance; it became particularly salient when Soviet scientists were facing infrastructural projects in the global South. Systems analysis, in this way, constituted an important intellectual resource for endogenous liberalization of the authoritarian regime.
Yurii Yaremenko was one of the late great theorists of the Soviet planning system. His theory, as presented first in censored and self-censored form in his major monograph, Structural Changes in the Socialist Economy (1981), describes the planned economy as composed of groups of technologically differentiated industries, ordered by their priority for receiving scarce high-quality goods. The forced development of the economy is its qualitative differentiation, which over time creates inherent structural imperatives for large-scale reorderings of that priority hierarchy, lest the phenomena of structural transformation become pathological. This account is supplemented by post-Soviet published interviews and by the author’s own interviews with Yaremenko’s associates. They reveal what Yaremenko’s theory left unsaid: that the disintegration of the late Soviet state into a multitude of competing, self-reproducing “administrative monsters,” the most powerful being the military industries, distorted industrial structure, degraded civilian life, and ultimately made reform impossible.
There is a rich literature on the emergence of new public management in the 1980s, yet surprisingly little about the historical and social lineages of this movement. The scholarship on public management generally suggests that it was born out of the neoliberal critique of the state. The public sector would thus have borrowed corporate practices concerned with performance in order to instil market-like competition and make efficiency gains. This article challenges this reading by showing that concerns with performance management emerged instead from new planning technologies developed in the US military sector. I argue that these planning practices, initially developed at the RAND Corporation, would radically transform governance by changing the way in which decision makers consider data about performance and use it to develop strategies or policies. I then explore the impact of this new approach on both corporate and public governance. I show how these ideas were translated for business studies and public administration in order to radically transform both fields and ‘make them more scientific’. As I show, this process contributed directly to the rise of what came to be called public management and provided new planning tools that radically transformed how we think about governance.