Technology and Culture 40.4 (1999) 861-865
The 1980s saw the National Museum of American History's exhibitions Field to Factory and A More Perfect Union emphasize national storytelling with powerful, thought-provoking images and messages. A celebratory style of exhibition was evolving into one that evoked and reflected the complexities and controversies of the American past. Despite the quarrels that later raged around the National Air and Space Museum's Enola Gay exhibit and that tempered subsequent exhibition planning, the National Museum of American History has continued this tradition with Between a Rock and a Hard Place: A History of American Sweatshops, 1820-present. Its opening label, signed by museum director Spencer Crew and associate director of curatorial affairs Lonnie Bunch, tells visitors in no uncertain terms: "Why do museums mount this kind of exhibit? History museums are educational institutions that strive to make the American past accessible, useful and meaningful to the millions who view their exhibitions, read their catalogs and participate in public programs. . . . History museums interpret difficult, unpleasant, or controversial episodes not out of any desire to embarrass, be unpatriotic, or cause pain, but out of a responsibility to convey a fuller, more inclusive history. By examining incidents ripe with complexities and ambiguities, museums hope to stimulate greater understanding of the historical forces and choices that shaped America."
A virtual version of Between a Rock and a Hard Place appears on the World Wide Web at www.si.edu/organiza/museums/nmah/ve/sweatshops/. Here is what the physical exhibition looks and feels like.
The Frank A. Taylor Gallery is an enclosed space in the museum just to the right of the Constitution Avenue entrance. At the entrance to Between a Rock and a Hard Place a large sign carries the exhibit's title; colorful clothing hangs in the entryway, giving the impression that you are entering a closet (tidier than most). The visitor turns right and follows a chronological path through the exhibition in a lopsided circle. United States Secretary of Labor Alexis Herman's words opposing sweatshop conditions appear on the second wall, but the display begins with traditional images of sweatshops and their workers. Sewing machines of all sorts interrupt the panels of quotes from workers and employers and enlarged photomurals of workers. Reproduction clothing adds texture to the visuals. The first half of the exhibition recounts a story, familiar to Technology and Culture readers, of working women, immigrants, long hours, apartment rooms crowded with pieceworkers, and the rise of labor unions to combat the inhumane sweatshop conditions.
As I entered the exhibit I looked forward to learning more about the infamous New York Triangle Shirtwaist Fire and perhaps encountering my heroine, Emma Goldman. Alas, there was no Emma, and you could miss the Triangle Fire unless you knew to look for it. In one design element a ten-foot by ten-foot outline of a sweatshop appears on the floor in the midst of displays on nineteenth-century immigration patterns and changes in dressmaking. The outline indicates the workroom's door, windows, worktables, sewing machines, dress dummies. It seems intended to help visitors appreciate the spaces where sweated workers labored. But one gains no sense of the crowded, noisy, smelly conditions; it is simply there, and most visitors I observed walked around it (as I did myself).
Beyond this floor display, after a quick reference to workers and their important contributions to the war effort in World War II, one encounters a chain-link fence topped with barbed wire. The fence sets off objects relating to the 1995 El Monte incident in southern California, which involved the virtual imprisonment of Thai clothing workers behind such a fence, and it underlines the fact that sweatshops remain a current social problem. For me the most powerful object in the entire exhibition is a drawing of the El Monte site on a plain sheet of legal notepaper (fig. 1). This hand-drawn map was used by federal agents to plan their raid and free the El Monte workers. Other elements of the El Monte story include a modern sewing machine and worktable, personal toiletry items sold to workers from the company store -- which served...
This article considers the cultural and social dynamics surrounding the water supply in Istanbul. It focuses on the period from the last two decades of the nineteenth century to the mid-twentieth century, roughly the span between the introduction of the central waterworks and the closure of the Ottoman mains and public fountains. Examining the interaction of these two systems, the article shows that supply by centralized waterworks and supply via fountains existed side by side for more than six decades, making this dual system a characteristic feature of Istanbul's water supply for many years. The conflict between rival technological solutions was especially pronounced, and during the period under consideration the story of water supply is to a large extent one of competition between fountains and taps. Within this framework, the article examines how previously free water became a priced commodity, and how the centralized supply reinforced structures of social inequality.
Technology and Culture 43.4 (2002) 668-692
Summer camps may seem like an odd place to study the relationship between technology and culture. After all, organized camps—the industry's term for overnight camps attended by children without their parents—were initially established in the 1880s and 1890s as antidotes to overcivilization, a condition exacerbated by the technologies of comfort and convenience that had blossomed in the second half of the nineteenth century. Although camps with distinct goals sprang up in subsequent decades, most sought to develop in campers both the manual skills and the habit of hard work that would allow them to sever their dependence on modern technology and regain the self-reliance of their forefathers. Touched by the antiquarian impulse that also informed the Arts and Crafts movement and other late Victorian enthusiasms that T. J. Jackson Lears has identified with antimodernism, organized camping maintained a back-to-basics disdain for technology throughout much of the twentieth century. Specialized camps have emerged in recent decades to give campers intensive exposure to technology, but for many camp organizers the phrase "computer camp" remains an oxymoron.
Yet if camp organizers deliberately decided not to adopt all of the technological innovations available to them (a culturally significant act in its own right), they did use technology selectively, especially in connection with food preparation. Well into the 1930s (and particularly during malnutrition scares in the 1910s), rebuilding the health of urban-dwelling children was one of the explicit benefits of the camp experience, and most camps prided themselves on their ample, wholesome food. The challenge of providing three meals a day for a small army of campers was substantial, prompting many summer camps to make a dining hall with a properly equipped kitchen the first permanent building to grace the camp property. From this perspective, then, it is no more strange for summer camps to employ available kitchen technologies than it would be for colleges, hospitals, or prisons to do so.
A great deal of scholarly attention has been devoted to kitchen technologies in private dwellings, but we know very little about the growing numbers of kitchens that operated outside the home in the late nineteenth and early twentieth centuries. Molly Berger has considered the physical evolution of hotel kitchens, demonstrating their role in creating new standards of refinement by insulating genteel diners from the sounds and smells associated with food preparation. Harvey Levenstein and Dolores Hayden have investigated aspects of public kitchens established at the end of the nineteenth century, first in Boston and later in other cities (especially in connection with social settlements such as Chicago's Hull House), to reform the eating habits of the working poor. These not only provided cheap and nutritious meals (at least according to contemporary scientific standards), but cooking took place in open view in order to encourage customers to emulate slow-cooking techniques at home. But where neither the quality of the food nor the character of the dining experience was the primary focus, in institutions in which meals were simply an infrastructural adjunct to educational, medical, penal, or religious activities, kitchens have largely been ignored. These kitchens were technological systems as necessary as those that provided light, heat, and sanitation, responding primarily to practical imperatives and contributing little to the larger mission of the institutions they served.
Or were they? Can institutional kitchens tell us more about cultural priorities than we have assumed they can? Have we missed something by considering the kitchen in isolation from the other spaces and activities involved in producing and consuming meals—social rituals brimming with cultural meaning? Have we perhaps hampered our ability to interpret institutional kitchens by focusing too closely on what happens within their four walls? Architectural historians have been guilty of such lapses, even when studying domestic kitchens, as Elizabeth C. Cromley has pointed out. Her concept of "the food axis," the system of activity arenas devoted to food storage, meal preparation, eating, and cleanup, is a very useful reminder to look beyond the walls of the kitchen and to take seriously the interconnections between seemingly disparate alimentary tasks.
In the case of American summer camps, a consideration of the...
This paper explores the complex reflexive relationships among technologies associated with tuberculosis care and treatment: the fresh air cure, surgical collapse therapy, architecture, and chemotherapy. We review the architectural histories of the Royal Edward Laurentian Hospital (now the Montreal Chest Institute) to track important transformations of treatment environments. We recount how the rest-cure prevalent at the beginning of the twentieth century started a tradition, lasting until the age of antibiotics, in which architectural settings were deployed as physical agents of treatment. A technology in this sense, then, is a set of resource-using practices marshaled to eradicate the disease. We argue that the endurance of specialized tuberculosis architecture, with its porches, balconies, and sunning galleries, provided crucial material and spatial continuity for therapy, even after chemotherapy's successes augured the end of dedicated tuberculosis hospitals and sanatoria.
From 1906 to 1910, many miners in the United States died in mine explosions. Scientific studies conducted by geologists demonstrated that coal dust was highly explosive. Preventive measures, supported by the Bureau of Mines, were then adopted in order to reduce the loss of both resources and lives.
Technology and Culture 47.2 (2006) 286-310
At first glance, the Elmo-workstool seems unremarkable: it is just a chair (fig. 1). Developed between the world wars in the Elmowerk, the small electrical motors division of Siemens-Schuckertwerke, the Elmo-workstool has several features that are now standard in chairs marketed as "ergonomic," including its broad, scooped seat and its adjustable, curved backrest. The potential of the Elmo-workstool to increase worker efficiency and lessen fatigue earned it a place in an exhibition on work spaces and seating that opened in Berlin in 1929 and toured Germany during 1931–32. This display did more than demonstrate specific ways to enhance physical comfort and support, however. It also illustrated an industrial need for regularity and uniformity in human motions and signaled the creation of a pathology of movements that did not fit industrial patterns. To combat this new pathology, the exhibition offered a solution: workers would be rendered efficient through a mechanical form of discipline in which their individual movements would be constrained, much as were those of a machine. The methods on display therefore went beyond a simple choreography of allowable motions to include physical structures that supported, directed, and constrained workers into regular and uniform movements.
The point, as the reviewer for a prominent journal of labor health put it, was to eliminate "all motion not directly necessary for work." Thus the exhibition and the Elmo-workstool exemplify a push toward worker control in the name of efficiency—a push, according to early critics, that effectively reduced workers to the role of cogs in the industrial machine. In a related study, Richard Lindstrom maintains that in spite of these motion studies and other efficiency measures, workers remained irreducibly individual. However, this case study suggests that there may well be no need to choose between these two interpretations. For in the worker seating exhibition of 1929–32, motion study was indeed intended to eliminate the individuality of workers' motions. The techniques on display were designed to overcome the complexities inherent in people's bodies and personalities.
"Work Seats and Work Tables" opened in May 1929 at the Labor Protection Museum in Charlottenburg, Berlin. A contribution to Germany's controversial rationalization movement, the exhibition was created and sponsored by the German Bureau of Economy and Efficiency (Reichskuratorium für Wirtschaftlichkeit, or RKW) and its Committee for Economical Production, in conjunction with the Reich Labor Ministry and the German Society for Industrial Hygiene. For a modest entry fee of thirty pfennigs, visitors could study full-scale installations of efficient seating and working arrangements currently in use, and they could also examine the details of the medical research on which they were based. The exhibition was so successful that in May of 1931 its sponsors unveiled a traveling version, scheduling stops at Karlsruhe, Kaiserslautern, Nuremberg, and Leipzig, and—given the favorable reviews in the labor, engineering, and medical press—soliciting invitations from other prospective hosts.
The RKW itself began modestly in 1921, charged with extending the standardization measures that had been adopted during World War I. In 1925, it accepted a more ambitious mandate: to remake the German economy through rational management and industrial cooperation, with the hope of ameliorating the inequalities and the possibility of social strife that worried many of Germany's policy makers and social observers. Funded by the government and supported in-kind by its own industrial members, the RKW possessed no regulatory powers. Still, by 1929 the RKW had become one of Germany's most visible economic institutions, widely known as a clearinghouse for information and advice on all aspects of rationalization. Its ties to the electrical industry were particularly strong: Carl Friedrich von Siemens served as chair of the RKW for its first ten years, and his successor, Carl Köttgen, was also the general-director of Siemens-Schuckertwerke, the company's...
Technology and Culture 46.3 (2005) 513-540
On 5 December 1943, General Dwight D. Eisenhower, commander of the Allied forces in North Africa and Italy, sent an urgent secret radiogram to Washington regarding the War Department's refusal to ship to North Africa a large quantity of a secret new chemical for the control of louse-borne typhus. Arguing that "typhus fever is [an] actual threat to military personnel in Italy at this time," he stated that "seventeen tons of [DDT] concentrate are total requirements for this theater for Civil and Military [authorities]." Then he demanded "reconsideration of [Washington's] decision to deny shipment of equivalent 10 tons concentrate and that [instead the] shipment be made on highest priority," and emphasized the seriousness of the situation by reminding his superiors that the "season of [typhus] prevalence commences early in January."
Impressed by its commanding general's argument, the department immediately reversed its position, and speeded shipments of the precious DDT for use in the Mediterranean. DDT's astounding effectiveness led Eisenhower to send another secret message a month later, insisting that "it is considered of utmost importance that additional five tons DDT 100% be made available for A[llied] M[ilitary] G[overnment] use. This must reach this theater in late February without fail." Unknown as an insecticide before World War II, DDT had become critical to the operations of the Allied army. How and why did that happen?
The beginning of the DDT era coincides with the shift from what one leading public health historian terms the "unrepentently experimental" public health strategies of the League of Nations Health Organization and the Rockefeller Foundation in the 1920s and 1930s to what William H. McNeill has called the "triumph of administrative rationality" that succeeded it in the mid-twentieth century. The experimental approach, emerging from late-nineteenth- and early-twentieth-century research at the institutes of Louis Pasteur, Robert Koch, Paul Ehrlich, and other luminaries, gave enormous confidence to public health workers. They had reason to hope that if they studied a disease in the laboratory, identified its human pathologies, and either created a vaccine or identified a disease's nonhuman vectors of communicability, it could be controlled. This laboratory focus underpinned post–World War I public health activities, particularly with regard to diseases carried by insects. The awarding of a Nobel Prize in medicine to Charles Nicolle in 1928 for his discovery, through research at the Pasteur Institute in Tunis, that typhus is transmitted by the human body louse, certified the validity of the experimental model. The laboratory, home of the "microbe hunters" made famous by Paul de Kruif, was where diseases were to be conquered.
But in the 1940s DDT became the key to a radical change in developing insect-borne disease strategies: the attack on disease now engaged the laboratory in support of fieldwork, where critical trials took place. The development of DDT, which could kill or control insects cheaply and with simple technologies, has been recognized as one of the great triumphs of the World War II era. This powerful insecticide made possible a series of attempts to eradicate insects known to be carriers of human disease and others perceived to be economically harmful.
DDT came onto the wartime scene rather suddenly. In 1939, searching for a means to protect woolens from moth infestations, Paul Müller, working for the J. R. Geigy company of Basel, came across DDT, a chemical synthesized years before but thought to be unremarkable. He found that it killed an amazingly broad spectrum of insects on contact and had other important characteristics, such as environmental stability and a lack of odor. It was also cheap to manufacture. Geigy took out a patent on DDT in 1940 and had it in production by 1942, when it was used to stop an infestation of Colorado beetles in Switzerland's valuable potato crop.
DDT's first significant public health use was to control typhus, but it was quickly employed to fight other diseases as well. Most important, the experimental malaria control programs of the first half...
This historical analysis examines the forces that shaped the collection and use of geographical data in Tanzania’s Rufiji Basin. Hoping to develop irrigated agriculture, colonial engineers surveyed the basin’s social and environmental landscapes and weighed the costs of damming. Following World War II, international experts interested in hydropower development conducted more geographical studies. While their studies offered limited information on stream flow, political and economic pressures led to their acceptance over those of colonial engineers. By illustrating how international institutions select what is accepted as knowledge and how such knowledge is used, the case highlights the politics of hydropower development in Africa. Based on archival work and fieldwork conducted in Tanzania and Sweden, this article argues that the shifting of the setting of knowledge construction from the basin to distant planning offices did not lead to projects based on better scientific knowledge, but set the stage for Tanzania’s current electricity problems.
As Brian Woods and Nick Watson note, for millions of people the wheelchair has been a significant site of technological innovation over the last hundred years, yet few historians or sociologists have paid it much attention. With "In Pursuit of Standardization: The British Ministry of Health's Model 8F Wheelchair, 1948–1962," Woods and Watson aim to rectify that situation, at least in part. The article surveys "the effects of the interplay between the state, medical professionals, and disabled people—both as individuals and in organized groups—on wheelchair innovation in Britain during the 1950s." Although it is "primarily a sociotechnical history of the emergence of a standardized wheelchair" that proceeds from a starting point in the social construction of technology approach, the authors are aware that a focus on "the circulation of power veils the processes of structural exclusion." They draw on the "social model of disability, first developed in Britain as a political tool to explain disability in social terms and later refined . . . as a sociological theory" to complement SCOT and shed light on "the relationships between wheelchair developments and the structural exclusion of disabled people."
The pacemaker, designed to stimulate and pace the heart's pumping action, appeared in the United States at the end of the 1950s. Since 1952, the techniques for implanting the device have changed fundamentally, and physicians no longer use it only in emergencies but also to permanently regulate chronic cardiac disorders.
Technology and Culture 40 Supplement (1999) 1-4
The following compilation is the thirty-fifth annual bibliography of current publications in the history of technology. Previous bibliographies in this series have appeared in Technology and Culture since 1964. The reader is also referred to the fifth publication of the SHOT Monograph Series, Eugene S. Ferguson's Bibliography of the History of Technology (Cambridge, Mass., 1968).
This year's compilation marks a few changes for the bibliography. First, it combines coverage for two years of publication in the history of technology, 1996 and 1997. The advantage of combining these years is that it catches the bibliography up to current publication and thus increases the timeliness of future issues of the Current Bibliography. By combining these two years, we are also offering the largest Current Bibliography yet, with more than 3,300 entries. The disadvantage of squeezing two years of bibliography into one year of work is reduced coverage of the literature, though this will be only a one-time occurrence.
Second, the change in publishers of Technology and Culture provided impetus for a review of the format of this bibliography, as well as a significant and long overdue upgrade in the software used to produce it. As a result, the bibliography has acquired a new look in some respects. The most visible change is in the use of numbered references, which simplifies the referencing scheme from the name and subject indexes. The bibliography itself continues to be organized by subject classifications; fortunately, the bibliography's own Y2K problem remains a few years off, when it will be time to reorganize the chronological divisions to reflect the advent of a new century.
Last, the team of contributors to the bibliography has changed, as well. After many years of service, Stephen Cutcliffe is stepping back from his role as contributor. As regular users of the Current Bibliography remember, Stephen served as editor of the bibliography before me, and he has been a great help throughout my years as editor. His annual stack of contributions will be missed, though I will not be surprised if he cannot resist submitting the occasional reference.
The bibliography has been available to scholars in electronic form since March 1992 on the Research Libraries Information Network (RLIN) as part of the HST (History of Science and Technology) file. It provides on-line access to Current Bibliography in the History of Technology for the years 1987 through 1997, and it is possible to query it from your personal computer using a variety of indexes, assuming that you have a modem or access to the Internet. Members of the Society for the History of Technology now have free access to this database; for instructions, please consult the SHOT homepage <http://shot.press.jhu.edu> on the World Wide Web. For more information on personal or institutional RLIN accounts, please contact the RLIN Information Center at 800-537-7546 (U.S. and Canada) or consult the homepage of the Research Libraries Group <http://www.rlg.org>. The Current Bibliography of the History of Science and the Bibliografia Italiana di Storia della Scienza are also available in the HST file, and it is planned that more bibliographic files in the history of science, technology and medicine will be added to the file. Users of the HST file can obtain from me a copy of the working thesaurus for the Current Bibliography in the History of Technology, which should prove useful in working with this database.
I would like to thank the other contributors to the bibliography: Guillaume de Syon, Katalin Harkányi, Patrick Harshbarger, and Ian Winship. Readers willing to scan a selected set of journals or keep track of publications in one of the subfields of the history of technology should contact me, as additional contributors are welcome and needed. I hope that a few members of the Society for the History of Technology will step forward to increase the number of contributors. Improved coverage of Eastern European, Asian, and Latin American publications remains a desideratum, and help in these areas from correspondents would be welcome. Institutions or individuals aware of a journal or publication series that has been neglected in recent years are encouraged...
Technology and Culture 43 Supplement (2002) 1-4
The following compilation is the thirty-seventh annual bibliography of current publications in the history of technology. Previous bibliographies in this series have appeared in Technology and Culture since 1964. The reader is also referred to the fifth publication of the SHOT Monograph Series, Eugene S. Ferguson's Bibliography of the History of Technology (Cambridge, Mass.: SHOT and MIT Press, 1968).
This is the last Current Bibliography to appear in printed form. Henceforth this bibliography will be available chiefly in electronic form through the History of Science, Technology and Medicine database, which has been hosted since March 1992 on the Research Libraries Information Network (RLIN). The HSTM file provides on-line access to citations from the Current Bibliography in the History of Technology since the 1987 volume. Many academic institutions and libraries subscribe to this database, and individual members of the Society for the History of Technology also have free access to it; for instructions, please consult the SHOT homepage hosted by the Johns Hopkins University Press on the World Wide Web <http://www.press.jhu.edu/associations/shot/hstlink.htm>. For others, information on personal or institutional RLIN accounts is available through the RLIN Information Center at 800-537-7546 (U.S. and Canada) or the homepage of the Research Libraries Group <http://www.rlg.org>. The Current Bibliography of the History of Science, the Bibliografia Italiana di Storia della Scienza and the Wellcome Bibliography of the History of Medicine are also represented in the HSTM file. Users of the database can obtain from me a copy of the subject thesaurus for the Current Bibliography in the History of Technology. We intend to accommodate readers who prefer to browse the bibliography on paper with a printable version that can be downloaded via the Technology and Culture website; details will be forthcoming via the SHOT Newsletter.
I would like to thank the contributors to the bibliography: Guillaume de Syon, Michael Friedewald, Katalin Harkányi, and Ian Winship. Readers willing to scan a selected set of journals or keep track of publications in one of the subfields of the history of technology should contact me, as additional contributors are welcome and needed. I hope that a few members of the Society for the History of Technology will step forward to increase the number of contributors. Improved coverage of Eastern European, Asian, and Latin American publications remains a desideratum, and help in these areas from correspondents would be welcome. Institutions or individuals aware of a journal or publication series that has been neglected in recent years are encouraged to help me track down these elusive publications and ensure their coverage in future bibliographies. I also thank those who have sent me individual publications, offprints, journal issues, or citations over the past year. This bibliography can only provide a comprehensive picture of published research in the history of technology if those who publish will take a bit of time to submit offprints or full citations to me at the History of Science and Technology Collections, Stanford University Libraries, Stanford, California 94305-6004. If you are the editor or publisher of books in the history of technology or of a journal that is not being covered in the bibliography, please write to me to make sure that the bibliography includes your publication(s). I welcome bibliographic information sent via electronic mail (firstname.lastname@example.org) or fax (650-725-1068).
I would like to thank a few people who have contributed behind the scenes. Kimuli Kasara, Carlo Magno Flores Moya, and Andrea Snavely assisted with bibliographic verification and data entry, as well as journal scanning; their work was supported by the Ellen Poyet Endowment, which funds an assistantship in the history of science and technology in the Stanford University Libraries. I am also grateful to Joe Schultz of Technology and Culture for his editorial work, guidance, and patience, and to Heidi Beck for copyediting the bibliography. Finally, I would like again to express my gratitude to the users of this bibliography, particularly those who have made the effort to send me comments, criticisms, and compliments.
Some of the most important watersheds in human history have been associated with new applications of technology in everyday life: the shift from stone to metal tools, the transition from hunting and gathering to settled agriculture, the substitution of steam power for human and animal energy. Today we are in the early stages of an epochal shift that will prove as momentous as those other great transformations. This time around, however, the new techniques and technologies are not being applied to reinventing our tools, our methods of food production, our means of manufacturing. Rather, it is we ourselves who are being refashioned. We are applying our ingenuity to the challenge of redesigning our own physical and mental capabilities. Technologies of human enhancement are developing, ever more rapidly, along three major fronts: pharmaceuticals, prosthetics/informatics, and genetics. Though advances in each of these three domains are generally distinct from those in the other two, their collective impact on human bodies and minds has already begun to manifest itself, raising profound questions about what it means to be human. Over the coming decades, these technologies will reach into our lives with increasing force. It is likely that they will shake the ethical and social foundations on which contemporary civilization rests.
One fascinating feature of this phenomenon is how much it all sounds like science fiction. The bionic woman, the clone armies, the intelligent robot, the genetic mutant superhero: these images all form part of contemporary culture. And yet, this link with science fiction is potentially misleading. Precisely because we associate human enhancement with the often bizarre futuristic worlds of novels and movies, we tend to dismiss the evidence steadily accumulating around us. Technologies of human enhancement are incrementally becoming a reality in today's society, but we don't connect the dots. Each new breakthrough in genetics, robotics, prosthetics, neuroscience, nanotechnology, psychopharmacology, brain-machine interfaces, and similar fields is seen as an isolated, remarkable event occurring in an otherwise unaltered landscape. What we miss, with this fragmentary perspective, is the importance of all these developments, taken together.
The technological watersheds of the past came about gradually, building over centuries. People and social systems had time to adapt. Over time they developed new values, new norms and habits, to accommodate the transformed material conditions. This time around, however, the radical innovations are coming upon us suddenly, in a matter of decades. Contemporary society is unprepared for the dramatic and destabilizing changes it is about to experience, down this road on which it is already advancing at an accelerating pace.
Let me begin with two brief stories. They are, in a sense, Promethean parables, tales of the human aspiration to rise above earthly limits. But they are also anti-Promethean, in that both begin with tragedy and end on a cautiously hopeful note.
In 1997, a fifty-three-year-old man named Johnny Ray had a massive stroke while talking on the telephone. When he woke up several weeks later, he found himself in a condition so awful that most of us would have a hard time imagining it. It is called "locked-in" syndrome: you are still you, but you have lost all motor control over your body. You can hear and understand what people say around you, but you cannot respond. You have thoughts and feelings but cannot express them. You cannot scream in frustration or despair; you can only lie there. The only way Johnny Ray could communicate was by blinking his eyelids.
In March 1998 two neurologists at Emory University and Georgia Tech inserted a wireless implant into the motor cortex of Ray's brain. The implant transmitted electrical impulses from Ray's neurons to a nearby computer, which interpreted the patterns of brain activity and translated them into cursor movements on a video display. After several weeks of training, Ray was able to think "up" and thereby will the cursor to move upward onscreen. After several more months, he was able to manipulate the cursor with sufficient dexterity to type messages. By that point, the brain-computer interface had become so natural to him that using it seemed almost effortless. When the doctors asked him what it felt like...
Technology and Culture 46.3 (2005) 541-560
On the afternoon of 10 July 1893, Captain James Fitzpatrick of the Chicago Fire Department received a call to put out a fire in the ice plant at the World's Columbian Exposition. Fitzpatrick had gone to fight small blazes in the same building twice before, both caused by a design defect in the plant's smokestack. To obscure the stack from the view of fairgoers, the architect had encased it in a 225-foot wooden tower that stood five feet taller than the stack itself. The architect's plan had called for an iron "thimble" to be installed atop the smokestack to prevent particles and debris from igniting the tower. But the Hercules Iron Works of Aurora, Illinois, the company that constructed the fair's ice-making machinery and the building that contained it, never installed the thimble. The fire of 10 July started when flames from soft, greasy-burning coal used to fire the boiler below ignited soot in the upper reaches of the smokestack.
When Fitzpatrick arrived at the scene, he assumed that the conflagration resembled the ones he had faced earlier. As he had done on these other occasions, he ordered his men to use their ladders to climb the outside of the tower and fight the fire where it had started. "We'll put this blaze out in a minute," he said at the time. But this time the fire had already spread from the top of the smokestack to the building below. Fitzpatrick and his men alighted onto a balcony approximately fifteen feet below the flames. Shortly thereafter, there was an explosion in the building below them, sealing off their escape path. According to one eyewitness:
The sight fascinated while it sickened, and the situation was made more awful from the fact that thousands of Fair visitors were looking on, and as each person tumbled to a horrible death a simultaneous murmur of horror escaped from throats for fully a half mile in every direction.
The blaze killed seventeen people, including Captain Fitzpatrick and eight other firemen (fig. 1). It left nineteen people, including five firemen, seriously injured.
The ensuing investigation focused on the smokestack. Ultimate responsibility for this fatal flaw was never determined. However, the investigation did indicate that, had the building not exploded, the firemen would not have been trapped on the balcony and could have put the fire out easily, just as they had twice before. What, then, caused the explosion? The answer to that question was...
The technologies of classroom instruction have received scant attention from historians of technology. Yet schools, in particular science classrooms, abound with laboratory and instructional apparatus that play an important role in how students (and ultimately members of the public at large) learn about how science and scientists generate knowledge in a given disciplinary field. This article uses the post-Sputnik reforms in United States high school biology education as a case study to examine the way that the materials and technologies of biology teaching shaped ideas about the epistemology of life-science research during the cold war. It compares the experimental vision and materials produced by the Biological Sciences Curriculum Study, a National Science Foundation-funded, scientist-led reform project, with the materials developed by Ward's Natural Science Establishment, a longstanding scientific supply company with deep roots in the more established, natural history epistemological tradition.
This paper explores a regulatory campaign to promote access to antibiotics in the United States during the 1950s, and explains it as a reaction to prewar deprivation. It tracks a decade-long attempt to prevent the drug industry from replicating a perceived pattern of big business behavior blamed for underconsumption. The Depression-era Temporary National Economic Committee (TNEC) had attributed low consumption to the artificially high prices associated with the excess profits, excessive marketing, and restrictive patents of large companies. In the postwar years a group of TNEC veterans (including Walton Hamilton, Irene Till, and John Blair) campaigned to protect the drug market from these vices: through an FTC inquiry that led to a judicial investigation, and through the Kefauver hearings in Congress. This campaign culminated in the radical increase of FDA powers in 1962, albeit triggered by the thalidomide scare. Ironically, the problems of underconsumption were given institutional teeth just as the novel problem of the overconsumption of antibiotics was becoming serious.
Technology and Culture 40.4 (1999) 723-745
On 25 February 1940, an officer with the San Francisco police department's homicide detail reported a "rather suspicious business" operating in the city. At 126 Jackson Street sat an old, three-story rooming house, recently leased by Dr. Henri F. St. Pierre of the Dermic Laboratories. As Assistant Special Agent J. W. Williams later described the scene, "women had been seen entering the place from the Jackson Street side at various times of the day, subsequently leaving by . . . an alley at the rear of the building. Following the arrival of the women, cars would arrive with a man carrying a case resembling . . . a doctor's kit. They would also enter the building for a short time, come out, and drive away. . . ." At first sight, the medical kit, the furtive departures, and the seedy locale all signaled to Williams that St. Pierre was running a "new abortion parlor." As it turned out, however, "the so-called 'Dr.'" was offering a somewhat different service to these women: the removal of their unwanted body hair through prolonged exposure to X rays.
At the time of Williams's writing the practice of removing hair with x-radiation was thoroughly disreputable -- if not illegal -- in most of North America. By 1940, health officials and X-ray clients had long since realized that intense doses of ionizing radiation not only removed hair, but also led to other, dangerous physiological changes. Articles lambasting the potentially lethal practice had been appearing regularly in medical journals and popular magazines since the early 1920s, and these articles only became more graphic with the passage of time. In 1947, an article in the Journal of the American Medical Association described in gruesome detail dozens of cases of cancer resulting from depilatory applications of x-radiation. In 1970, a team of researchers found that more than 35 percent of all radiation-induced cancer in women could be traced to X-ray hair removal. In 1989, two Canadian physicians suggested a new name for the widespread pattern of scarring, ulceration, cancer, and death that affected former epilation clients: "North American Hiroshima maiden syndrome."
Although physicians might now liken X-ray hair removal to international atomic attack, the analogy obscures several of the most crucial aspects of the history of this technology. To begin, unlike the famous "Hiroshima Maidens" (twenty-five young bomb survivors brought to the U.S. for plastic surgery in 1955), epilation clients were anything but the unsuspecting targets of foreign military action. As Williams's 1940 description of back-alley hair removal makes clear, these individuals were not "passive victims" but willing participants in the diffusion and persistence of a controversial technology. Moreover, unlike the atomic explosions that punctuated the summer of 1945, X-ray hair removal was not a dramatic aberration in the history of technology. Rather, from its first use among professional physicians in 1898 through its slow demise in the late 1940s, X-ray epilation enjoyed nearly fifty years of continuous practice. One 1947 investigation concluded that thousands of Americans visited a single X-ray hair removal company, the Tricho Sales Corporation. Since Tricho was just one of dozens of similar X-ray epilation companies in operation in the 1920s and 1930s, one can conclude that tens of thousands -- if not hundreds of thousands -- of other American women also irradiated themselves in order to remove unwanted body hair.
Rather than focusing on the eventual physiological impacts of X-ray epilation, a tragic story already told in meticulous detail by medical reports, this article explores the circumstances in which prolonged, repeated self-irradiation seemed appealing to its myriad users and promoters. Understanding the practice's allure requires consideration of two questions: first, why did so many early-twentieth-century American women wish to remove their body hair? Second, why would some women choose the X ray over other available hair removal technologies? The answers to these questions, we will see, quickly lead from X rays and hair to larger problems of race, sex, and science in the interwar period.
The manipulation of human body hair has a long and rich history. Like teeth and...
This article examines efforts to establish germ-free animals as ideal laboratory animals, tracing the development of germ-free technology by James Reyniers (1908-67) and Philip Trexler (1911-) at the University of Notre Dame. Despite capturing the scientific imagination between 1942 and the late 1950s, germ-free animals never became the generic tools that Reyniers hoped. This article shows how Reyniers failed to establish germ-free animals because the tension between standardization and the need for novelty was not successfully managed. However, Trexler adapted techniques of producing germ-free life to produce more successful Specific Pathogen Free (SPF) animals. Now so ubiquitous as to be invisible, SPF animals succeeded where germ-free animals failed because they embodied a standard that could be reconfigured to suit local research agendas, while also remaining highly defined and capable of representing natural forms of life. SPF animals became the ideal laboratory animal, used around the world.
This article discusses the processes typically underlying the Indian government's technological choices in the mid-1950s, with a case study of the pharmaceutical industry. It argues that questions of the future development of India's pharmaceutical industry were shaped by debates over placing it in the public or private sector, and over securing finance from the government's own budget, from transnational corporations, or through Soviet aid. A close scrutiny of the trajectory of these debates reveals how the highly contested conception of the required scope of the production process finally emerged. This scope then determined why, when faced with an offer from the USSR for an integrated pharmaceutical complex that would also manufacture dye intermediates, and an offer from the German conglomerate Bayer for a standalone plant producing chemical intermediates for both drugs and dyes, the Indian government decided to accept the Bayer proposal.
This article considers the role of technical representations in the building of one of the most significant civil engineering projects of the mid-nineteenth century, London's main drainage system, designed and overseen by the engineer Joseph Bazalgette. It explores the ways in which the contract, composed of engineering drawings and an accompanying specification, mediated the relationship between Bazalgette and his most important ally, the contractor. The article also pays close attention to the variety of audiences beyond the contractor to which these documents were directed, including those who authorized and funded the project, those parties directly affected by construction, and the wider public. The result is a fuller picture of the social context in which the main drainage project was constructed and of the crucial role played by the contract in mediating social relations of many kinds, a perspective that is absent in the existing literature on the subject.
Technology and Culture 40.1 (1999) 47-73
In 1944 A. L. G. Rees, a twenty-eight-year-old research and development scientist at Philips Electrical in England, answered a summons from Australia's central research agency to return to his homeland and establish a new laboratory there. His chemical physics laboratory in Melbourne would boast the latest in technology for investigating molecular structure: an infrared spectroscope, X-ray diffraction apparatus, and also -- thanks to the top-level government action required by war production priorities in the United States -- arguably the world's most sophisticated commercially manufactured scientific instrument at the time, the brand-new "Universal" electron microscope (EMU) model from the Radio Corporation of America (RCA). By special arrangement, Rees's slow trip back to Australia in mid-1944 featured visits to many key industrial and academic laboratories in Britain and the United States that had built or purchased the sort of technology that Rees would be using, including RCA Laboratories itself and numerous electron microscopy research groups in the United States. On these laboratory visits, Rees took careful note of the procedures and ideas to which he was exposed, learning how to operate his RCA microscope and the other scientific instruments. Rees, of course, had only done what efficient technology transfer demanded. But he brought back to Australia more than hardware and mere operating instructions: he also brought back "software" for putting his transferred instrument technology to use in scientific experiments, to draw a rough-and-ready distinction between material and intellectual technique. This sort of technology transfer, in which cutting-edge instrumentation migrates from one scientific culture to another, offers a chance to study not only how cultural context is implicit in the design of apparatus, as with typical cases of technology transfer, but also how that context may influence the research work in which the apparatus is employed.
This article retraces the replication and adaptation of biological electron microscopy in isolated Australian conditions during the immediate postwar years, a period when Rees's instrument was the only electron microscope on the continent. For purposes of this study, one can set the hardware to one side as a constant, since the same commercial electron microscope was owned by the Australian and American labs. This circumstance helps focus attention on what I am calling the software of research, providing a natural experiment in the history of science, in which the tyranny of distance isolates a scientific culture after initial inoculation with technique, thus simplifying the study of its growth. The strategy resembles that of Ludwik Fleck's study of bacteriology research under the even more insular conditions of a concentration camp. This article discusses three early Australian research programs in biological electron microscopy, each manifesting a different mix of transferred and indigenous elements of practice. One line of research showed particular originality, involving locally invented material and intellectual techniques (both hardware and software, so to speak) blended with what was imported. Another research program remained largely conventional in its use of transferred material technique but more innovative in its intellectual technique, or software. The last program showed a remarkably faithful replication of technique, on both hardware and software levels, transferred from abroad. Indeed, this last line of research reflected such a successful transplantation of foreign experimental practice that it independently duplicated an artifactual "discovery," that is, a blunder being committed simultaneously in the lab that originally served as the source of technique. 
The transferred software of electron microscopy -- the intellectual side of experimental practice imported along with the hardware -- comes especially to the fore in accounting for the duplication of findings and interpretations in this third line of research, since the faulty conclusions follow from the evidence only in combination with distinctive interpretive practices also derived from the source.
Experimental techniques, and especially instruments, stand at the interface of the history of science and the history of technology. Historians of technology have long appreciated and documented the way that technology transferred without modification from one cultural context to a substantially different one often functions differently, or fails to function altogether. Historians of science, though usually ready to acknowledge the power of new instruments and experimental techniques to...
As Wendell Berry once wrote, “how we eat determines, to a considerable extent, how the world is used.”1 In this deceptively simple statement, the doyen of neo-agrarianism neatly summarized why we should all take a keen interest in and responsibility for the way we produce, distribute, and consume our food. On one level, of course, the reasons for doing so are obvious. As “foodie” journalists and high-profile academics frequently remind us,2 careless eating invites a variety of negative physiological repercussions, ranging from obesity and heart disease to food poisoning, endocrine disruption, and cancer. Yet public concerns over the effects of careless eating reach well beyond health issues. Berry and other advocates of sustainable agriculture maintain that careless eating has played a key role in the relentless industrialization of our food system by creating a sustained and frequently unwitting demand for highly processed foods, factory-farmed meats, genetically modified crops, and blemish-free produce shipped year-round over immense distances. In turn, sustainable agriculturalists argue, the industrial system that has allowed these foods to become a central part of the American diet has incurred a whole array of ecological, social, economic, geopolitical, cultural, moral, and aesthetic costs.
As these costs—especially those associated with food-borne illness and the profligate use of fossil fuel—have become increasingly apparent in the last few years, we have seen a spike in demand not only for organic food, but for local food as well.3 In fact, “locally grown” has begun to compete with “organically grown” as the label of choice among environmentally and socially conscious consumers, particularly now that so much organic food is grown in industrial-scaled monocultures far from the places it is consumed.4 Proponents of local food argue that eating locally allows access to a greater variety of fresher and more nutritious food, enhances the ecological and aesthetic integrity of local landscapes, strengthens regional economies, reduces fossil-fuel consumption, and allows consumers to see firsthand how their food is produced. Personal inspections of this sort are particularly important, they argue, in light of the federal regulatory system’s recurring failure to ensure the quality and safety of our food supply. The logic of local food, which links ethical responsibility to geographical proximity, has gained national attention in recent years, helped along by extensive media attention and a host of new “locavore” organizations touting the virtues of “100-mile diets” and other strategies for minimizing “food miles” and maximizing awareness of our respective “foodsheds.”5
While the recent surge of interest in eating locally may seem like just another short-lived food fad, those familiar with the history of the modern sustainable agriculture movement know that local food is hardly a new cause. Indeed, localism has been a defining goal of sustainable agriculture since the movement’s inception in the 1960s.6 What is new, however, is the contentious debate that the push for localism has sparked among proponents of sustainable agriculture. At the center of this debate is Michael Pollan, whose 2006 bestseller The Omnivore’s Dilemma: A Natural History of Four Meals has contributed substantially to the recent burst of enthusiasm for local food. As this essay will demonstrate, critical reactions to Pollan’s localist argument, particularly from scholars who are either self-identified members of the sustainable agriculture movement or sympathetic to its goals, provide historians with a unique opportunity to examine a central issue in the evolving national debate over the way we eat and, by implication, “the way we use the world.”
Historians of technology in particular should find this debate of interest. What is the history of technology, after all, if not the history of “how the world is used”? Moreover, the debate over local food bears directly on matters of abiding interest to historians of technology, including the environmental and social impacts of industrialization, the role of the state in regulating and promoting particular technologies, the decentralist resistance to modern technological systems, and the role of consumers and nonprofit organizations in shaping the direction of technological development.7 On a more concrete level, the proponents of local food raise serious questions about the specific “hardware” and techniques integral to...
Technology and Culture 40.4 (1999) 746-769
A crowd lined up early on 1 March 1915, waiting for their first glimpse inside the new Fordyce bathhouse in Hot Springs, Arkansas. The resort town had been abuzz for weeks with speculation about Samuel Fordyce's elaborate renovations. As they waited, hopeful patrons undoubtedly admired its Neoclassical facade and wondered if the much-advertised, modern therapeutic equipment inside would be equally impressive. When the doors to the four-story facility finally opened, however, visitors did not immediately ascend the stairs from the lobby to the bathing rooms above. They began their tour underground at the Fordyce Spring, the bathhouse's powerful water source, which excavators had unearthed during the renovation. There they crowded around a narrow plate-glass window and peered into a small, tiled room to watch a pale blue stream emerge from the earth below and filter into a large steaming reservoir above. The water reached the surface only briefly; just as it bubbled up to the top, an elaborate system of pipes whisked it upstairs and out of view.
This subterranean tour seems an odd way for a Fordyce bathhouse visit to begin. With its plethora of machines and modern treatments, both heavily advertised in promotional materials, the remodeled Fordyce enticed its visitors with a utopian world of marble and metal. Yet most bathers preferred to see the water source below before venturing upstairs to use its modern applications. According to the town's tourist guidebook, many made repeated visits to the spring throughout their spa treatments in order to watch as it "bubble[d] from the ground in its native state." Such comments are striking. Why, when the building itself was advertised as a harbinger of modernity, would this basement spring so capture people's attention?
At the Fordyce, as in many early-twentieth-century baths, nature mediated technological innovation. Cultural historians have argued that advertisements and art between 1910 and 1930 often "domesticated" modern products for Americans by placing them on a common trajectory with images of a natural or primitive past. Presumably this connection allowed viewers to feel that although perhaps the cultural signposts had changed in the modern age, the results were improved models of the past rather than totally new environments. The combination of modern machines and natural symbols in the design and function of the Fordyce bath makes it an ideal vehicle for extending this analysis beyond aesthetic symbols to direct bodily experiences. Thousands of middle- and upper-class Americans visited the Fordyce's renovated facility between 1914 and its 1940 decline, most augmenting bathing with numerous treatments whose mechanical ingenuity mirrored their increasingly ordered working worlds. The Fordyce prided itself on innovation and efficiency, both primary goals of the managerial revolution and the white-collar regimentation altering the lives of America's growing male middle class. Yet, as the primacy of the underground spring suggests, the attraction of modern spas was not based on technological advances alone. The spas served as a cauldron of premodern symbols and technological innovation in which a unique "cure" for the modern age was created, one affirming the ultimate cooperation of machine, man, and nature.
Before the late 1860s, bathing at Hot Springs meant communing with nature. Native Americans had used the springs for centuries before white explorers discovered the site in 1804. As late as 1832, when the land was set up as a federal preserve, the bathing experience remained largely identical to what it had been three hundred years earlier. Most visitors either hiked in or came by horseback, camped in simple shelters, and soaked in mineral waters and mud to relieve aches and cure illnesses. In the late 1830s, entrepreneurs arrived in Hot Springs and built the first bathhouses, which consisted of brush and log cabins placed over excavations in the rocks. Here visitors could, for a small fee, find respite from sweltering summers and chilling winters while taking the waters. Yet despite the waters' virtues, nineteenth-century visitors often focused more on the treacherous hike in to the springs and the "primitive" cabins than on the bathing experience. One visitor reported...
Trans fats became part of the American food system due to a complex interplay among activism, industrial technology, and nutritional science. Some manufacturers began using partially hydrogenated oils, which contain trans fats, in the early twentieth century. Medical authorities began framing saturated fats as unhealthy in the 1950s. In the 1980s, activist organizations, including the Center for Science in the Public Interest, condemned food corporations' use of saturated fats and endorsed trans fats as an acceptable alternative. Nearly all targeted corporations responded by replacing saturated fats with trans fats, which fit easily into their existing products. Trans fats thus became the perfect solution to the political problem of saturated fats and to the technical problem of what to use in their place. Activists helped precipitate technological change, but by 1994, trans fats were no longer regarded as a solution. Instead, they became regarded as a new nutritional problem.
Alongside the large-scale clearance of buildings and land in the post-World War II United States emerged a body of children's "bulldozer books" that reflected and endorsed this work. While scholars and adult nonfiction authors have typically treated the bulldozer as a symbol of protest rather than a subject of inquiry, postwar children's book writers and illustrators placed the machine front and center. Through their books, they helped a younger generation make sense of the world around them as their environment underwent massive physical destruction for suburban, highway, and urban renewal construction. They also naturalized real-world clearance processes by promoting this work as technological progress, attempting to involve readers in clearance, and putting a friendly, masculine, all-American face on the machines and their operators. In so doing, the books and their authors advanced a "culture of clearance": the ideology, technology, policy, and practice of clearing the landscape of its natural and built environment.
The March 1963 issue of Consumer Bulletin included a four-page article titled "How to grow a better lawn," the lead paragraph of which assured readers that "one does not have to be an expert or spend large sums of money to have a good lawn. It is necessary, however, to follow certain established practices in the construction and maintenance of any lawn." These two assertions may have struck readers, as I suspect they would strike lawngrowers today, as somewhat contradictory. Given the list of established practices that followed--"the construction of the lawn base, with proper grading, drainage, and preparation of the seedbed; selection of the type of grass and spreading of the seed; and maintenance, including fertilizing, mowing, and control of weeds"--it is difficult to imagine how the homeowner could have accomplished all of this without large sums of money or expertise. In fact, building lawns in the manner described by Consumer Bulletin required tremendous amounts of both. Recognizing these established practices in lawn construction and maintenance as a technological system allows us to better understand the persistence of this grassy landscape in America.
Monasteries were major contributors to the preservation of ancient knowledge about, as well as innovation in, hydraulic technology during the western Middle Ages. The form of monasticism adopted by the Carthusians combined eremitic isolation with limited communal life, and thus required that water be provided to individual cells as well as to other locations in the monastery. The Carthusian house of Bourgfontaine (Aisne), founded in the fourteenth century in northern France, featured technologically sophisticated water management, the topography of its site requiring a siphon-powered system. An elaborate series of surviving water tunnels led to a large springhouse and aqueduct that in turn ran 500 meters to the charterhouse. Study of archival and pictorial sources, as well as comparison with other excavated and surviving Carthusian houses in Europe, permits us to understand the larger context of the contributions made by Bourgfontaine to hydraulic technology.
While many studies have examined American nation-building or modernization campaigns from diplomatic or strategic perspectives during the Second Indochina War, few have considered how pre-existing social, technological, and environmental relationships often determined a given project’s chances of success or failure. This essay examines American nation-building projects—especially reclamation and settlement programs—in the Plain of Reeds, a vast wetland area about fifty kilometers southwest of Ho Chi Minh City that since the French conquest of Vietnam in the 1860s had been a site for rebel movements and an important target for reclamation. When American advisors arrived in 1954, they encountered not only a deeply embedded insurgency with major bases in the area but also war-damaged infrastructure and engineering agencies and private contractors still deeply influenced by colonial arrangements. This colonial mold on social and environmental relationships constrained successive American programs and played a key role in the success or failure of specific efforts on the ground.
This article investigates the category of waste and its ideological function within Victorian political ecology. It seeks to draw out the connections between conceptions of nature, understandings of technology, and political economy in mid-Victorian capitalist ideology. It does so through a detailed reading of the corpus of one Victorian writer and commentator on technological subjects, Peter Lund Simmonds. Simmonds is interesting both as an everyday producer of knowledge about science and technology, and because he explicitly draws on the category of waste as a condition of possibility for technological progress and civilization. Ultimately he is indicative of the continuing strength of cornucopian ideas of nature among ideologues of capitalist improvement in the mid-Victorian period, which suggests the limited metropolitan influence of any emerging conservationism or "green imperialism."
Technology and Culture 40.4 (1999) 770-796
It was a glorious day for environmentalists when the federal government sharply restricted the use of the insecticide dichlorodiphenyltrichloroethane, or DDT. Federal regulators prohibited "general outdoor application" of DDT in the United States because of apprehensions about the insecticide's effect on wildlife and "the balance of nature." The policy stated that, while it may be "necessary to ignore these considerations" in other parts of the world, "in the United States such considerations cannot be neglected." Although concern for wildlife protection motivated federal regulators, they also knew about DDT's potential impact on human health, including its potential to cause cancer and its ability to pass from mother to baby in milk. The sweeping policy required anyone -- citizens, companies, government agencies -- to get permission from a committee of experts before they could spray DDT from the air inside the United States. The committee, which would include representatives from the Fish and Wildlife Service, the Public Health Service, and other federal agencies, would grant permission only in rare circumstances.
Contrast that view with DDT's reputation in January 1945, when the chief of preventive medicine for the United States Army announced that DDT would be "the War's greatest contribution to the future health of the world." Upon DDT's release to civilians in August 1945, public health officials, farmers, and homeowners snapped up the wonder chemical to kill insects that caused disease, attacked crops, or created a nuisance. In 1948, DDT developer Paul Müller received the Nobel Prize in physiology or medicine.
Many scholars have pointed out that DDT's trajectory from hero in 1945 to pariah in 1972 illustrates sea changes in American values, science, and politics. In an influential study of environmentalism, Samuel P. Hays argued that government experts did not know about or look for ways that DDT might harm people or wildlife until well after its introduction. "Early governmental concern with pesticides," Hays wrote, "had been confined to their efficiency, that is, whether they killed pests as effectively as manufacturers claimed. Not until the 1960s did concern extend to their effects on people and on the environment." These new, broader criteria led scientists to discover unforeseen, long-term problems with insecticides. Regulation of DDT and other toxic substances then came about because outsiders pressured governmental insiders. As Hays contended, "the public sought to work out control strategies, to determine and set acceptable exposure limits, and to devise methods of containment when the social institutions could not." Experts and activists forced the U.S. Environmental Protection Agency to ban most uses of DDT in the United States in 1972.
A body of work from other scholars suggests a new, related hypothesis: because DDT entered the United States as a military technology, and because the armed forces have been among the biggest polluters in the country, we should not be surprised if they ignored potential environmental damage when developing DDT. The plausibility of that hypothesis grows when we note that military-industrial complexes, including those involving the army and pesticide makers, created some of the most polluted sites in the United States. At Colorado's Rocky Mountain Arsenal, manufacturing of chemical weapons (by the army) and pesticides (by Shell, which rented space at the arsenal) combined to create a vast toxic waste dump.
Persuasive as they appear to be, these views of DDT's history have problems. The policy outlined in the first paragraph of this article came not from the well-known 1972 ban on DDT but from an overlooked 1945 policy. The army, along with the Public Health Service, issued the policy, not the Environmental Protection Agency.
According to the received interpretation sketched above, the federal government should not have issued the 1945 policy. At that time, no one should have worried about the impact of pesticides on wildlife and human health. Even if wildlife and public health agencies had worried about such issues, they should not have been powerful enough to override agencies committed to "producer" values. Even if agencies agreed on the desirability of restricting technology for environmental reasons, the lack of enabling legislation should have stymied regulation. Even if legislation had permitted regulation, the...
This article presents a historical review of the policy issues, the interactions among governmental units, and the economic, geographical, and technological constraints that accompanied the haphazard attempts to resolve the problem of sewage pollution control in the Chicago metropolitan area. Special attention is focused on the activities of the Sanitary District of Chicago from its creation in 1889 until the issuance of a significant Supreme Court decree in 1930, which detailed the responsibilities and time limits the city had to follow in completing its sewage treatment program.