Mustard Gas and American Race-Based Human Experimentation in World War II

To read the full-text of this research, you can request a copy directly from the author.


During World War II, scientists funded by the United States government conducted mustard gas experiments on 60,000 American soldiers as part of military preparation for potential chemical warfare. One aspect of the chemical warfare research program on mustard gas involved race-based human experimentation. In at least nine research projects conducted during the 1940s, scientists investigated how so-called racial differences affected the impact of mustard gas exposure on the bodies of soldiers. Building on cultural beliefs about “race,” these studies took place at military bases and universities, which became sites of racialized human experimentation.




... Thus, specific strategies to communicate with marginalized populations experiencing harm, taking into consideration past and current mistrust toward the government, are often needed.[65][66] [Note 65: We suggest that important factors for understanding mistrust in health authorities include 1) which population is at risk (in Israel, the lower socioeconomic level and, in the US, the mostly white middle-upper class) and 2) a failure to communicate and effectively alert former patients about the adverse effects (Bavli and Steel 2015). Note 66: For example, unethical experiments on African Americans (Brandt 1978; Smith 2008).] The paternalistic and non-transparent approach by the Ministry of Health in Israel toward disadvantaged groups experiencing the adverse effects of radiation treatment (Davidovitch and Margalit 2008), and the hostility it created toward health authorities (Bavli and Steel 2015), demonstrate the negative consequences that can result when these factors are not considered. ...
Full-text available
This dissertation is composed of three papers investigating the historical, ethical, and social aspects of public health errors. The first paper explores how US health authorities responded to the discovery of the late health effects of radiation treatment. It describes how efforts by Michael Reese Hospital in Chicago to locate former patients, and the media attention these efforts attracted, led to a national campaign to warn those who underwent radiation treatment during childhood. The second paper examines Health Canada’s ethical obligation to revise misleading information that appeared in the OxyContin product monograph. It shows that mounting evidence of addiction and misuse of the drug did not lead the agency to change the drug’s mistaken monograph. The third paper provides a new conception of public health errors, with utility for scholars who study policy errors as well as for public health actors interested in preventing them.
... These progressive and proactive research ethics policies, however, would soon fall by the wayside, ignored in the face of unbounded Nazi medical research and experimentation. Though there were many other cases of ethically questionable medical experimentation on human subjects in the early 20th century and up through World War II, namely in malarial treatments (Masterson, 2014, p. 162), the beginning of the Tuskegee syphilis study (Tuskegee Timeline, 2016), wartime starvation (Baker & Keramidas, 2013), and chemical warfare studies (Smith, 2008), a significant amount of research ethics attention is paid to Nazi experimentation and the post-war Nuremberg Trials. Widespread media and global attention on the unsettling potential and costs of the kind of unadulterated experimentation undertaken by the Nazi regime spurred the worldwide policy community to establish legal and ethical guidelines to keep pace with scientific advancement. ...
Respect for persons is a cornerstone value for any conception of research ethics--though how to best realize respect in practice is an ongoing question. In the late 19th and early 20th centuries, "informed consent" emerged as a particular way to operationalize respect in medical and behavioral research contexts. Today, informed consent has been challenged by increasingly advanced networked information and communication technologies (ICTs) and the massive amounts of data they produce--challenges that have led many researchers and private companies to abandon informed consent as untenable or infeasible online. Against any easy dismissal, we aim to recover insights from the history of informed consent as it developed from the late 19th century to today. With a particular focus on the United States policy context, we show how informed consent is not a fixed or monolithic concept that should be abandoned in view of new data-intensive and technological practices, but rather it is a mechanism that has always been fluid--it has constantly evolved alongside the specific contexts and practices it is intended to regulate. Building on this insight, we articulate some specific challenges and lessons from the history of informed consent that stand to benefit current discussions of informed consent and research ethics in the context of data science and Internet industry research.
The article analyzes comics from Marvel Comics published between the second half of 2001 and the end of 2004. Using a mixed-methods approach (quantitative and qualitative research), the criterion for the analysis is the appearance of, and references to, the socio-political dilemma of the conflict between ensuring national security and respecting civil liberties. This was one of the most important topics of debate for American society, politics, and law after the terrorist attacks of 9/11. The analysis aims to determine how many comics dealt with this post-9/11 dilemma, and to categorize individual stories, using Jonathan Culler’s concept of over-interpretation, as either supporting the priority of national security or advocating the importance of civil liberties. With these data, it is possible to determine the political tone of individual comics, as well as to establish the views of the artists working for Marvel Comics and their attitudes toward the policy pursued by the administration of President George W. Bush. The article draws on studies conducted as part of doctoral research for the unpublished dissertation Terrorism, politics, and civil liberties in the American comics after September 11, 2001, based on an analysis of comics from Marvel Comics, DC Comics, and Image Comics.
Looking back, clinical research has seen numerous ethical breaches, with many infamous violations committed in its name. The most notable include the Tuskegee Syphilis Study (the Public Health Service Syphilis Study), the Nazi human experiments, the Milgram study, the thalidomide tragedy, the Monster Study (Tudor Study), the sulfanilamide tragedy, and the PATH clinical trials, to name a few. This chapter expounds on these incidents, the history of clinical trials, and the gradual harmonization of regulations with internationally accepted practices. It also covers milestone works and the steps taken to prevent such ethical breaches, and presents the role of the institutional review board as a core component of human subject protection under the Health and Human Services regulations.
People living with medically unexplained symptoms (MUS) often have poor quality of life and health outcomes. Many struggle to engage with and trust in healthcare systems. This qualitative study examined how experiences with institutions influence perceptions of medical care for MUS by applying the theoretical framework of institutional betrayal to narratives of U.S. military Veterans living with Gulf War Illness (GWI). Institutional betrayal refers to situations in which the institutions people depend upon for safety and well-being cause them harm. Experiences of institutional betrayal both during active military service and when first seeking treatment appeared to shape perceptions of healthcare in this sample. Veterans expressed the belief that the military failed to protect them from environmental exposures. Veterans’ concerns regarding subsequent quality of health care were intrinsically linked to a belief that, despite official documentation to the contrary, the predominant paradigm of both the U.S. Department of Defense and the U.S. Department of Veterans Affairs (VA) is that GWI does not exist. Veterans reported that providers are not adequately trained in the treatment of GWI and do not believe Veterans’ descriptions of their illness. Veterans reported taking up self-advocacy, doing their own research on their condition, and resigning themselves to decrease engagement with VA healthcare or seek non-VA care. The study’s findings suggest institutional-level factors have a profound impact on perceptions of care and the patient-provider relationship. Future research and policy aimed at improving healthcare for people living with MUS should consider the concept of institutional betrayal.
Conference Paper
Teargas has followed a markedly different trajectory to its chemical weapons (CW) counterparts over the twentieth century. While the Geneva Protocol of 1925 and the 1993 Chemical Weapons Convention prohibited chemical agents as means of warfare, from the early interwar period teargas gained legitimacy as a technology for domestic policing across the world. Moreover, this role in domestic riot control later became a means for some states to justify its use in military operations. This PhD therefore asks: how did teargas, in the case of British policy, become associated with riot control and policing in the twentieth century, yet prohibited as a means of warfare? Drawing from key concepts in STS and related social sciences, I argue that we can take the technical characteristics of ‘teargas’ (its ‘non-lethality’ or low toxicity) as being co-produced with its social role as a crowd control agent. Furthermore, I argue that by doing so we gain insight into how the ‘non-lethal’ status of teargas was situated within a ‘civilising’ governmentality in Britain. This governmentality both legitimated, and was legitimated by, the authority of scientific expertise. The thesis makes this argument by tracing a historical sociology of teargas in Britain and the empire from 1925 to 1965. Using declassified records from the UK National Archives and sources from newspaper archives, it examines three significant moments in Britain’s construction of teargas as a domestic technology. The first addresses the initial transition from military to colonial policing contexts that teargas made in British policy during the interwar period; the second focuses on Britain’s first use of teargas on populations within the UK during civil defence gas tests during WWII; the third traces the widespread use of teargas throughout the empire from WWII until 1965, examining the emergence of CS gas with the conception of riot control later in this period. 
Ultimately, I contend that CS, the ‘teargas’ of our contemporary moment, emerged from a sociotechnical imaginary of non-lethal chemical control grounded in ‘civilising’ modes of techno-politics.
Fifty years ago, Henry Beecher warned about serious problems with human-subjects research in the United States and exhorted researchers to reform. Research regulations proliferated in the ensuing decades, but new policies and procedures have not resolved every dilemma.
Polio provocation has concerned health professionals for nearly a century. Before an effective polio vaccine was licensed in 1955, evidence that certain paediatric injections could precipitate a polio infection and severe forms of paralysis informed medical debates, experiments and shifts in public health policy. This article explores how the theory was received and approached in the United States and the consequences of its protracted resolution. It contends that although medical professionals sought to maximise health benefits for American citizens, varying conceptions of what constituted an appropriate balance of risk inspired diverse health policy outcomes.
This article explores the various human experiments that were conducted on British army personnel during the Second World War. While some historical work has focused on trials at the Porton Down facility, this paper will start by placing these in the context of the wider range of research projects that were conducted using British troops in the Second World War. It will then consider the question of why conscript soldiers participated in trials. Comparative studies have focused on the ethics of human experimentation in military contexts, but this article argues that ethical considerations were only part of the story. Using the oral testimonies of those who were involved in this type of research, it considers how military culture, material incentives and sentiments of national duty all influenced soldiers’ participation in human trials.
The militarization of Alaska during and after World War II created an extraordinary set of new facilities. But it also reshaped the imaginative role of Alaska as a hostile environment, where an antagonistic form of nature could be defeated with the appropriate combination of technology and training. One of the crucial sites for this reformulation was the Arctic Aeromedical Laboratory, based at Ladd Air Force Base in Fairbanks. In the first two decades of the Cold War, its employees conducted numerous experiments on acclimatization and survival. The laboratory is now best known for an infamous set of tests involving the application of radioactive tracers to indigenous Alaskans--experiments publicized by post-Cold War panels established to evaluate the tragic history of atomic-era human subject research. But little else has been written about the laboratory's relationship with the populations and landscapes that it targeted for study. This essay presents the laboratory as critical to Alaska's history and the history of the Cold War sciences. A consideration of the laboratory's various projects also reveals a consistent fascination with race. Alaskan Natives were enrolled in experiments because their bodies were understood to hold clues to the mysteries of northern nature. A scientific solution would aid American military campaigns not only in Alaska, but in cold climates everywhere.
On September 23, 1949, President Harry Truman announced that the Soviet Union had successfully detonated an atomic bomb. The news that the Soviet Union had done this came as little surprise to a number of American scientists and to some members of the intelligence community who had predicted that the Soviets would quickly acquire this advanced weapons technology. But for many Americans this news was disturbing. Truman’s announcement was taken up by, among others, a young Baptist evangelist named Billy Graham. Opening a tent revival in Los Angeles just two days after the President’s report, Graham preached how the news of the Soviet bomb test had “startled the world” and launched an “arms race unprecedented in the history of the world.” President Truman, he informed his listeners, said that we “must be prepared for any eventuality at any hour….” Perhaps even more ominously he asked the crowd, “Do you know the area that is marked out for the enemy’s first atomic bomb? New York! Secondly, Chicago; and thirdly, the city of Los Angeles!” It was not only evangelical preachers who foresaw catastrophic implications from a growing arsenal of atomic weapons.
In 1946, Tom Brock spent part of his summer dumping mustard gas bombs off a barge into the Atlantic Ocean. Brock was a civilian employed by the United States Army Transport Service in Charleston, South Carolina. His job was to dispose of surplus bombs and drums filled with mustard gas. Sulphur mustard, commonly called “mustard gas,” can take several forms: a liquid, a solid, or a vapour. Mustard gas, named for its mustard-like color and smell, is a vesicant that is toxic to humans and causes blistering and burns, affecting the lungs, eyes, and skin. Brock recalled that he and the soldiers enjoyed watching the occasional bomb explode as it sank into the water. “We thought it was fun,” explained Brock. “I was 18 or 19 years old. We weren’t scared. We didn’t fear any explosive. We thought we were immortal.” Later that summer he was required to guard a barge of bombs that were leaking mustard gas, which looked to him like hot molasses. Due to the known health risks, Brock was told to wear a protective suit and gas mask. However, it was a hot day, so he loosened the straps around his legs. As a result, enormous blisters developed, swelling out like a balloon from his toes to his knees. His summer job was no longer fun as he experienced firsthand the health hazards of exposure to mustard gas.
In 2008, Susan L. Smith published “Mustard Gas and American Race-Based Human Experimentation in World War II.” Research, undertaken by the US Army, attempted to quantify the effect of mustard gas (actually a volatile liquid) and other chemical agents on people from different racial groups. This was based on the idea that different races would respond differently to the toxins, and in particular that this would be evident through dermal reaction. In other words, different skin color might mean different skin constitution. Some of the testing seemed reasonable, since new chemicals and equipment had been developed since 1919, and the racial issue added another dimension to the research. On closer examination, however, the testing was primarily based on old chemical agents such as mustard gas, Lewisite and phosgene, and thus the extent of the testing seemed scientifically and medically unnecessary. The chemical agents had been developed, tested, and used in battle; the wounded had been treated and the dead subjected to detailed pathological study. The major combatants in World War I had all committed extensive scientific resources to the study of these agents, looking at both offensive and defensive aspects of their use, including toxicity testing. The U.S. Chemical Warfare Service (CWS) had been formed in 1918 specifically to deal with issues such as toxicity tests, so why was the U.S. Army revisiting the subject of chemical weapons testing during World War II?
J. Bryden, Deadly Allies: Canada’s Secret War, 1937-1947 (Toronto: McClelland & Stewart, 1989).

K. Freeman, “The Unfought Chemical War,” Bulletin of the Atomic Scientists 47, no. 10 (December 1991): 30-39. In 1943 the U.S. began mustard gas testing on human subjects. At least 2,500 men were tested in gas chambers, 1,000 men in field tests, and the rest of the 60,000 with patch tests and drop tests. Id. (Freeman); see also Pechura and Rall, eds., supra note 1, at 10.

J. H. Jones, Bad Blood: The Tuskegee Syphilis Experiment (New York: Free Press, 1981). Id., at 4-5, 64-66, 388.