History of Nuclear Medicine and Molecular Imaging


Abstract

Nuclear medicine's roots are the investigations of the uses of natural and artificial radioactive isotopes as tools for fundamental life science, physiology, and clinical medicine. Shortly after the discovery of radioactivity in 1896 and of radium in 1898, curiosity regarding the physiology and possible therapeutic potentials of radium led to the first human injections of radium in 1913, followed by a series of animal and human studies with natural and artificially produced radionuclides, leading to the definition of the radiotracer method in 1923 and the first human physiology experiment in 1927. During the mid-1930s, accelerators and neutron generators were invented, and the discovery of the positron was added to those of the electron, alpha particle, and neutron, along with the first artificial radioactivity and the discovery of carbon-11 in 1934.
1.01 History of Nuclear Medicine and Molecular Imaging
TF Budinger, University of California Berkeley, Berkeley, CA, USA; Lawrence Berkeley National Laboratory, Berkeley, CA, USA; National Academy of Engineering, Washington, DC, USA
T Jones, PET Research Advisory Company, Cheshire, UK
© 2014 Elsevier B.V. All rights reserved.
1.01.1 Introduction
1.01.2 Discoveries of the Early 1900s That Underpin Nuclear Medicine
1.01.3 Earliest Radiation Detection Systems
    Photographic Film
    Electroscope
    Crookes Tube
    Wilson Cloud Chamber
    Geiger–Müller Counter
    Ionization and Wire Chamber Detectors
1.01.4 Contemporary Photon Detectors
    Photoelectron Multiplier Tubes
    Silicon Photomultiplier Detectors
1.01.5 Scintillation Detector Materials
    First Scintillation Materials
    Contemporary Scintillation Materials
    Semiconductor Radiation Detector Systems
    The Scintillation Well Counter
1.01.6 Two-Dimensional Gamma Scanners and Cameras
    Collimators, Lenses, and Methods of Focusing
1.01.7 Three-Dimensional Imaging
    Planar or Longitudinal Tomography Nuclear Medicine Methods
    Single-Photon Emission Tomography
    Positron Coincidence Scanners and Cameras
    TOF Positron Tomography
    Reconstruction Tomography from Detected Emission Data
1.01.8 Image Processing and Data Analysis
    Role of Image Processing in Nuclear Medicine
    Kinetic Modeling
    Equilibrium Imaging
1.01.9 Radionuclide Production
    Neutron Generators
    Cyclotrons and Linear Accelerators
    Radionuclide Generators
    Post-WWII Radionuclide Distribution
    Commercialization of Radiotracers (Radiopharmaceuticals)
1.01.10 Radiotracer Syntheses Instrumentation
1.01.11 Hazards and Absorbed Radiation Doses
1.01.12 Selected Applications
    Radionuclides for Therapy
    Earliest Human Subject Experiments with Radionuclides
    Cancer Metastasis Detection Radiotracers
    Brain Blood Flow and Tumor Detection
    Brain Metabolism and Neurochemistry
    Lung Ventilation, Perfusion, and Cancer Detection
    Liver and Pancreas Function
    Kidneys and Adrenal Glands
    Heart Blood Flow and Metabolism
        Radionuclide imaging of left ventricular function
        Thallium redistribution phenomenon
        Mismatch between FDG uptake and thallium-201 accumulation
    Bone and Bone Marrow Function
1.01.13 Molecular Imaging, Born in Mid-1990s
Comprehensive Biomedical Physics 1
1.01.14 Short History of Organizational Nuclear Medicine and Molecular Imaging
1.01.15 Future Expectations
Appendix A Major Steps in the Chronology of Nuclear Medicine and Nuclear Molecular Imaging
References
Glossary
3D reconstruction The method and result of determining the spatial distribution of a parameter from projections of parameters to detectors at multiple positions external to the three-dimensional object. Parameters can be radionuclide concentrations, optical fluorescence, and electron density (attenuation coefficient).
Activation analysis Method of determining the identity
of elements in an object from emissions stimulated by
exposure to neutrons usually from a reactor or other
source of neutrons.
Alpha particle The particle associated with the decay of radium, polonium, and many other elements of high atomic number. It is the same as the nucleus of helium, having a charge of +2.
Annihilation In this chapter, the term refers to the disappearance of a positron and an electron when the two particles join together as positronium (lifetime about 125 ps) and then disappear as two gamma photons, each with an energy of 511 keV, in accord with E = mc².
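As an illustrative aside (not part of the original glossary), the 511 keV figure follows directly from the electron rest mass; a minimal check in Python:

```python
# Illustrative check (not from the chapter): the 511 keV annihilation
# photon energy equals the electron rest energy E = m_e * c^2.
M_E = 9.1093837015e-31   # electron mass, kg (CODATA 2018)
C = 2.99792458e8         # speed of light, m/s (exact)
EV = 1.602176634e-19     # joules per electronvolt (exact)

rest_energy_keV = M_E * C**2 / EV / 1e3
print(round(rest_energy_keV, 1))  # 511.0
```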
Becquerel One disintegration per second is the unit for the
rate of decay of radioactivity. The unit is named for
Becquerel, who discovered radioactivity in 1896.
Beta particle An electron ejected from the nucleus during
radioactive decay. It has a negative charge.
Bioluminescence The light emitted during a chemical
reaction in biological entities such as microbes, insects,
and foraminifera of the ocean. The gene for the
enzyme luciferase can be transfected into microbes or
cells and these injected into animals. When the substrate
such as luciferin is injected, luminescence from the
animal gives evidence of the presence of the microbes
or cells.
Compton camera Camera in which two position-sensitive detectors measure the angle and energies of each incoming photon that interacts with the two detectors.
When a gamma photon interacts with material, it
scatters at an angle related to its energy loss. If the angle
detected by the positional information on the first
detector relative to the position on the second detector is
measured, and the energies are known, the origin of the
source will lie somewhere on a defined cone. Acquisition
of many angles gives data that allow reconstruction of the
source image.
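The cone geometry can be made concrete with the Compton scattering relation; the sketch below (example energies are ours, not from the chapter) computes the cone half-angle from the energy deposited in the first detector:

```python
import math

# Sketch (assumed example values): the Compton formula relates the
# scattering angle theta to the energy E1 deposited in the first detector:
#   cos(theta) = 1 - m_e*c^2 * (1/(E - E1) - 1/E),  energies in keV.
ME_C2 = 511.0  # electron rest energy, keV

def cone_half_angle_deg(e_incident, e_deposited):
    e_scattered = e_incident - e_deposited  # energy of the scattered photon
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_incident)
    return math.degrees(math.acos(cos_theta))

# A 140 keV photon depositing 30 keV in the first detector:
print(round(cone_half_angle_deg(140.0, 30.0), 1))  # 89.7
```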
Curie The unit for the number of disintegrations per second from 1 g of radium, 3.7 × 10¹⁰. The commonly used dose for nuclear medicine is between 1 and 10 mCi, or 3.7 × 10⁷ to 3.7 × 10⁸ Bq.
Cyclotron Instrument that accelerates nuclear particles such as protons or deuterons to collide with a target element so that another element or isotope is produced (e.g., fluorine-18 produced from protons interacting with oxygen-18).
Disintegration The natural decay of a radionuclide as it changes from one element or isotope to another. The disintegration rate is measured in disintegrations per second (becquerels).
Half-life The time for one-half of the radiation activity to disappear through radioactive disintegration. The model for decay is a simple exponential (e^(−λt)), where the decay constant λ is 0.693 (ln 2) divided by the half-life.
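The exponential model in this definition can be sketched as follows (the function name is ours; technetium-99m's roughly 6 h half-life serves as the example):

```python
import math

# Sketch of the decay law: the fraction of activity remaining after
# time t is exp(-lambda * t), with lambda = ln(2) / T_half (~0.693/T_half).
def fraction_remaining(t, half_life):
    lam = math.log(2) / half_life
    return math.exp(-lam * t)

# After two half-lives (12 h for technetium-99m, T_half ~ 6 h),
# one quarter of the activity remains.
print(round(fraction_remaining(12.0, 6.0), 3))  # 0.25
```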
Longitudinal tomography Method of imaging a plane
through the subject using differential motion of the detector
and the source or in the case of positron longitudinal
tomography by electronically selecting coincidence lines
that arise from the selected plane (cf. planar tomography).
Neutron A neutral particle present in the nucleus of all elements except ordinary hydrogen (¹H). This particle has an abundance equal to or greater than the number of protons in most elements. When there are more protons than neutrons, a proton becomes a neutron with the discharge of a positron (e.g., carbon-11 has six protons and five neutrons and thus decays to boron-11). A free neutron has a half-life of about 15 min.
Neutron reactor When a fissile element such as uranium-235 is placed in proximity to many other uranium atoms, the abundance of emitted neutrons results in fission of nuclei, producing more neutrons and many radioactive fission products. The heat associated with this energy loss is used for power reactors. The neutrons are also used to produce artificial radioactivity, such as iodine-131 and the universally used molybdenum-99.
Optical fluorescence Many substances emit visual-spectrum photons when stimulated by photons of somewhat higher energies than the emitted photons (e.g., starch and many ultraviolet-stimulated tissues).
PET Positron emission tomography is an imaging method
whereby the occurrence of two annihilation photons from
the positron–electron interaction is detected by coincidence
timing by detectors surrounding the object. The positron-
emitting radionuclide is on a line between the opposing
detectors. Multiple detectors are mounted in planar, ring, or
cylinder geometries.
Photon General term for light, x-rays, and gamma rays, wherein the type of photon is distinguished by its energy (visual-spectrum photons are a few electron volts, x-rays are 1000 eV and greater, and gamma photons range from 100,000 to over 1,000,000 eV).
Planar tomography Earliest form of tomography that
achieved a focal plane by the differential motion of a source
(x-ray) and a detector.
Positron The positively charged particle of the same mass as the negatively charged electron. The positron interacts with electrons in any environment, resulting in conversion of the two masses to energy in the form of two 511 keV annihilation photons that travel at 180° one from the other.
Ra–Be The combination of radium with the element beryllium that results in the emission of a neutron. This is the result of the alpha particle interacting with the beryllium nucleus.
Radionuclide A generally accepted term for a radioactive isotope; it distinguishes the radioactive element from the chemicals in which the element exists (e.g., radiotracer or radiopharmaceutical).
Radionuclide generator Device that generates a radionuclide, which in general can be a cyclotron, a neutron reactor, or a reaction between two isotopes, but is more commonly a system whose decay product is the radionuclide of interest (e.g., molybdenum-99 decaying to technetium-99m).
Radiopharmaceutical Term used when a specific pharmaceutical such as albumin is labeled with a radionuclide. Radiotracer denotes the same concept.
Radiotracer Denotes a chemical combination of a radionuclide attached to a molecule such as an amino acid, antibody, or sugar.
SPECT Single-photon emission computed tomography
is distinguished from gamma camera or scanner
methods that give projection images and distinguished
from PET.
Time of flight Method of improving the statistical value of positron tomography by determining the time difference in the arrival of annihilation gamma photons at opposing detectors to give an estimate (x) of the position of the specific positron emission event that occurred along the line between the two opposing detectors.
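The position estimate x in this definition follows directly from the photon speed; a minimal sketch (the numbers are illustrative, not from the chapter):

```python
# Sketch of the TOF estimate: a time difference dt between the two
# detectors places the annihilation event x = c * dt / 2 away from the
# midpoint of the line of response.
C = 2.99792458e8  # speed of light, m/s

def tof_offset_cm(dt_ps):
    dt_s = dt_ps * 1e-12        # picoseconds -> seconds
    return C * dt_s / 2.0 * 100.0  # metres -> centimetres

# A 500 ps timing difference localizes the event to within ~7.5 cm:
print(round(tof_offset_cm(500.0), 1))  # 7.5
```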
Transaxial tomography A form of tomography involving a
section of the object that is transverse to the axis of rotation
or the plane of detectors in the case of a ring.
Transfection viral vector A method of transporting a gene
into a cell using a virus as the transporting agent.
1.01.1 Introduction
Nuclear medicine developments date from the early 1900s
when the first intravenous injections of radioactivity used
radium salts in 1913 to explore potentials for disease treatment
(Proescher, 1913) and to observe the appearance of radon and
radium in excreta (Seil et al., 1915). These were followed by
extensive human and animal studies that were the underpinning of the ‘tracer principle’ pioneered by Georg Charles de Hevesy (1923). The chronology of events leading to these and
subsequent developments of nuclear medicine and molecular
imaging with radiotracers is the focus of this chapter. Two
methods are used to present the history: a topic-based history
and a date-based chronology. The main body of the text of this
chapter presents the developments in instrumentation, analysis, radionuclide chemistry, and molecular-genetic imaging as separate sections. Citations are given for events in the text along with citations for all of the events listed in Appendix A. Appendix A is a 13-panel chronology of 154 events with 42 illustrations that are referenced in the text. These give a synopsis of
discoveries, innovations, and applications by using a color code
to distinguish an event as a discovery (white), an invention/innovation (blue), or an application (red). Most of the applications of modern nuclear medicine are the imaging of a specific molecule or of a specific molecular process, as shown in
Figure 1.
The distinction between a nuclear medicine application
and molecular imaging lies mostly in whether a genetically
engineered process is being measured for molecular imaging.
Thus, a separate section gives the chronology of nuclear
medicine’s role in molecular imaging. Another section of this
chapter gives the chronology of some applications for specific
organs or systems of clinical medicine. The final section gives
the history of societies and convening groups for nuclear medicine and molecular imaging.
We assigned credit for discoveries to individuals who made
observations and demonstrated knowledge of the meaning or
impact of these discoveries. Dates of patents did not authenticate an event for which there was alternative evidence for the
original discovery, invention, or application (e.g., invention of
the photomultiplier tube, tomography, and bone and lung scanning). The criterion for inclusion of applications was that the
introduced procedures have led to broad applications in animal and human physiology and have had significant impact in
clinical medicine. Innovations that are not in current practice
but whose impact led to current instrumentation and clinical
methods are also included.
Therapeutic applications involving the use of injected radionuclides are in the domain of nuclear medicine; thus, applications such as phosphorus-32 therapy for blood diseases and iodine-131 therapy for hyperthyroidism and cancer are naturally included in our chronology. Externally applied radiation, whose earliest history is discussed in Section 1.01.7, predated the internal use of radionuclides; the use of modern ionizing radiation in radiotherapy is presented in Volume VII of this series.
An overview of current clinical and animal noninvasive imaging methods is shown in Figure 2. Major distinguishing features of nuclear medicine techniques relative to x-ray imaging and magnetic resonance imaging (MRI) are their sensitivity and the fact that nuclear medicine measures radionuclide concentrations in tissue, whereas, for the most part, the other methods measure local tissue properties, including composition and flow. The detected events in x-ray imaging represent the amount of attenuation of a photon beam passing through the body, and the detected signal in MRI is from nuclei such as hydrogen and other elements with appropriate magnetic spin properties (e.g., carbon-13, oxygen-17, fluorine-19, and phosphorus-31). An important attribute of magnetic resonance is its ability to detect the concentration of specific molecules through spectroscopy, as well as chemical exchange rates using specialized methods. But the sensitivity of nuclear medicine far exceeds that of other techniques
and this is its major benefit. Radiotracers can detect proteins at concentrations on the order of 10⁻¹² mol/L. For example, positron emission tomography (PET) imaging systems can measure extrastriatal dopamine receptors at a concentration of about 10⁻¹⁰ mol/L (Abi-Dargham et al., 1999). The threshold for MRI detection using gadolinium contrast agents at contemporary fields is about 10⁻⁵ mol/L (Nunn et al., 1997), and that for magnetic resonance spectroscopy is about 10⁻³ mol/L at clinical magnetic field strengths (Rothman et al., 1993). Even with hyperpolarization, magnetic resonance studies are limited to evaluation of fluxes, and it can be shown that even with an increase in polarization of 10,000, the tracer principle is violated in some applications. The richness of radioisotope imaging methods can be expected to expand in the future, as the physical and chemical limits have not been reached.
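To make the sensitivity argument concrete, a back-of-envelope count of labeled molecules in a voxel (all numbers are assumed for illustration, not taken from the chapter):

```python
# Illustration (assumed numbers): even at a picomolar tracer
# concentration, a small PET voxel contains millions of labeled molecules.
AVOGADRO = 6.02214076e23  # molecules per mole

def molecules_per_voxel(conc_mol_per_liter, voxel_mm3):
    voxel_liters = voxel_mm3 * 1e-6  # 1 mm^3 = 1e-6 L
    return conc_mol_per_liter * voxel_liters * AVOGADRO

# An 8 mm^3 voxel at 1e-12 mol/L:
print(f"{molecules_per_voxel(1e-12, 8.0):.2e}")  # 4.82e+06
```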
Figure 2 Biomedical imaging technologies, including gamma camera imaging, Cherenkov imaging, projection imaging, optical (fluorescent) imaging, and optoacoustic imaging.
Figure 1 Mechanisms for noninvasive detection of molecules using radionuclide-labeled ligands, substrates for bioluminescence, and fluorescent probes: a surface receptor traps the ligand; the ligand is trapped in the cytosol (e.g., by phosphorylation); the ligand is transported into the cell by a cell-specific surface protein; a reporter gene makes an enzyme (Tk) that traps the probe; luciferase acts on luciferin; an enzyme modifies a self-quenching fluorescent probe; and proteins (enzymes, antigens, antibodies) are targeted at the cell surface.
In addition to the illustrated series of events in Appendix A,
Appendix B gives a table of relevant Nobel Prizes.
1.01.2 Discoveries of the Early 1900s That Underpin
Nuclear Medicine
Major discoveries in the 10 years following Roentgen’s discovery of the x-ray phenomenon in 1895 include (1) the discovery
of natural radioactivity from uranium salts by Becquerel a few
months after Roentgen’s announcement (Becquerel, 1896)
(Figure A-1); (2) the discovery of electrons (‘cathode rays’)
(Thomson, 1897); (3) the discovery of two natural radioactive
elements, polonium and radium (Curie et al., 1898); (4) the
discovery of gamma rays (Rutherford and Soddy, 1902); and
(5) the discovery of transmutation of elements induced by
alpha particle bombardment (Rutherford, 1919). The first
intravenous injections of radioactivity used radium salts in
1913 and 1915 to explore potentials for disease treatment
(Proescher, 1913) and to observe the appearance of radon
and radium in excreta (Seil et al., 1915). After the First World
War, the next two important events in biological applications
were the following: (1) Georg Charles von Hevesy (Figure
A-2.4) used radioactive lead to explore uptake in plants,
which led to the tracer principle (von Hevesy, 1923) and (2)
the first human tracer study of blood circulation kinetics was
by injecting in the vein of one arm the lead and bismuth decay
products of radium to measure the arm-to-arm transit using a
Wilson cloud chamber as the detector (Blumgart and Yens,
1927) (Figure A-2.5).
Two notable instrumentation inventions as well as the
discovery of the neutron and positron occurred in 1932 and
1933. The first was the invention of the cyclotron in Berkeley,
CA (Figure A-2.6), by Lawrence and Livingston (1932). The
second was the linear accelerator that demonstrated the means
for accelerating charged particles and the use of charged particles to fragment stable elements into subatomic particles (Cockcroft and Walton, 1932; Wideröe, 1928). The principal
physical discoveries in 1932 were the identification of a penetrating particle from alpha radiation interacting with beryllium as a neutron (Chadwick, 1932) and the discovery of the positron using the Wilson cloud chamber and the detection of magnetic deflection of ion tracks (Anderson, 1933).
The transmutation work of Rutherford (1919) using alpha
particles interacting with light elements produced protons, but in fact, radioactive isotopes were also produced, though not recognized until 1934. This recognition was an essential discovery of Irène Curie and Frédéric Joliot, who noted a lingering radioactivity of elements exposed to alpha particles after the source, polonium, was removed (Curie and Joliot, 1934). These discoveries of the ability of alpha particles to generate radioactive isotopes led Enrico Fermi a few months later to demonstrate the role of the neutron in creating artificial radioactivity (Fermi, 1934) (Appendix A-3.7). A few years earlier, Leo Szilard
invented the concept of a neutron reactor for generating chemical reactions by neutrons interacting with a variety of elements. He hypothesized the concept of a nuclear chain reaction and filed a patent in 1934 in England, only 2 years after the neutron had been discovered. He did not propose fission as the mechanism for his chain reaction, since the fission reaction had not yet been discovered. The concept of a fission reactor did not surface until a few years later, and the first reactor was realized in Chicago in 1942 by a group led by Fermi.
Radioactive isotopes (radionuclides) entered the field of biology and medicine principally through the use of accelerators such as the 27 in. Berkeley cyclotron and, to a lesser extent, reactors. The early discoveries were motivated by physician scientists who in effect prescribed an iron and an iodine radionuclide to be produced by Berkeley nuclear chemists. In the mid-1930s, a physician from Rochester, NY, asked Jack Livingood, a colleague of Glenn Seaborg in Berkeley, for a radioactive isotope of iron. Thus, after bombarding iron with deuterons followed by chemical extractions, the 45-day half-life iron-59 was discovered and thereafter produced for hemoglobin medical science studies. In 1938, Robley Evans of MIT
had gathered a team of physicians and physicists who first examined the use of iodine-128 produced by neutron bombardment of ethyl iodide in a small Ra–Be reactor (Figure A-3.7). That group demonstrated the dynamics of iodine uptake in animal models of thyroid disease (Hertz et al., 1938). Motivated by the Boston rabbit studies but frustrated by the short half-life (25 min) of iodine-128, Joseph Hamilton of Berkeley asked Glenn Seaborg if he could find an iodine isotope with a half-life of ‘Oh, about one week’ (Seaborg, personal communication). Seaborg and Livingood produced the first iodine-131 (half-life 8 days) (Livingood and Seaborg, 1938), which became and still is one of the most used radionuclides for both diagnosis and therapy throughout the world (Figure A-3.9). Neutron capture by natural sulfur led to the production of phosphorus-32, which became the first internal therapy radionuclide delivered to patients with blood diseases (Hamilton and Lawrence, 1939).
The text that follows highlights significant instrumentation
and radionuclide applications that followed these discoveries
and inventions.
1.01.3 Earliest Radiation Detection Systems

Photographic Film
The process that is basic to photographic film uses silver halide emulsions (Talbot, 1847). Talbot’s innovations were based on a discovery made 100 years earlier that silver nitrate (AgNO3) darkens when exposed to light. This process was one of the essential tools that allowed recognition of the existence of gamma
and x-ray photons. Imaging radionuclide distributions using
autoradiography as a detector differs from the use of gas chambers or scintillator materials, where the radiation causes electronically detected ionization or production of light, respectively. In autoradiography, each crystal of silver halide (size from 0.02 to 3.0 µm) is embedded in the photographic emulsion, similar to conventional negative film, and acts as an independent detector surrounded by its capsule of gelatin. When a charged particle passes through or very near the crystal, the resulting chemical reaction gives a latent signal that persists throughout the exposure period, which varies from a few hours to weeks depending on the radionuclide and target concentration. Photographic development gives a permanent image. Spatial resolution is a few microns and volume resolution is less than 100 µm³, with very little
background for ex vivo studies where sample washing is used
to remove radionuclides from the background.
It is commonly believed that the first autoradiograph was the emulsion exposed to radium salts that led Becquerel to recognize radioactivity in 1896. But 38 years before Becquerel’s correct interpretation of the darkening of the photographic plate, Claude Félix Abel Niépce de Saint-Victor (1858) noted darkening of silver halide emulsions from uranium nitrate and uranium tartrate salts. Unlike Becquerel’s, these observations were not preceded by the identification of x-rays in 1895; thus, the mechanism for the darkening was not understood.
Autoradiography was more formally introduced as a radiation detector for biological studies and medical science research by Lacassagne and Lattes in 1924, when they published organ distribution studies after injection of polonium in an animal. The radiation in this case was alpha particles. Methods of whole-body autoradiography applicable to rodents were introduced in the mid-1950s (Ullberg, 1954), enabling biodistribution imaging of radionuclides and radiopharmaceuticals. The earliest molecular imaging using silver halide emulsions was studies of enzyme distributions in mammalian cells using a tritium-labeled enzyme blocker of acetylcholinesterase (e.g., ³H-fluorophosphate) and Kodak silver halide emulsion plates (Ostrowski and Barnard, 1961). Ex vivo tomography using autoradiographic techniques became an important tool for molecular imaging studies about 10 years later, with studies such as the brain distribution in animals of injected ³⁵S-chlorpromazine (Lindquist and Ullberg, 1972) and of tritium- and ¹¹C-labeled natural and pharmaceutical compounds.

Electroscope
Observations of electricity were described in detail in a Latin-language book authored by William Gilbert and published in 1600 (cf. English translation 1893). At the center of this description was the first electroscope, which was later used to demonstrate ionization from x-rays (Villard, 1908). If two thin gold leaves are suspended side by side from a thin wire in a glass jar and an electric charge is transferred down the wire (for example, by touching the wire with amber on whose surface an electron excess has been produced by rubbing), the leaves will repel each other. Then, if ionizing radiation enters the environment, the ions generated will neutralize the charges on the gold leaves and the leaves will relax. The movement of the gold leaves indicates the presence of ionizing radiation. This principle is the basis for some dosimeters and to this day remains an effective device for teaching electricity.

Crookes Tube
remains an effective device for teaching electricity. Crookes Tube
A partially evacuated glass tube (ca. 100 Pa) with an electrode
at one end (cathode) and another electrode near the other end
will glow on application of a high voltage (Appendix A-1.1).
The glow is due to the ionization of the low concentration of
molecules. This is the principle of a neon tube also known as a
Geissler tube. In the 1870s, this apparatus was evacuated to
0.1 Pa and the glow moved from within the tube to the glass at
the end (Crookes, 1878). The agents causing the glow were
called cathode rays in 1876, and in 1902, the agents were
determined to be photons that Roentgen used in the discovery
of x-rays in 1895. Wilson Cloud Chamber
The first position-sensitive device for particle track visualization was the Wilson cloud chamber, built in 1912 (Wilson, 1951). A cloud chamber is an enclosure containing a supersaturated vapor of water or alcohol. Radiation entering the chamber causes ionization, and these ions act as condensation loci around which tiny clouds are formed because the vapors are near a point of condensation. These nuclei leave tracks of the ionization. Light reflections from the liquid droplets provide visual evidence of the quantity and, to some extent, the type of particle causing the ionization. The Wilson cloud chamber was the first detector used in human radiotracer studies (Blumgart and Yens, 1927). It allowed measurement of the transit time of blood from one arm to the opposite arm in a human subject injected with radioactivity (Appendix A-2.5). This experiment was the first physiological quantification of human blood flow speed. Cloud chambers were used extensively in particle physics until the 1950s, when the bubble chamber was introduced. The cloud chamber enabled the discoveries of the positron in 1932 and the muon in 1936, both by Carl Anderson (Anderson, 1933).

Geiger–Müller Counter
In 1908, Geiger, working with Rutherford, developed a system for detection of ionizing radiation using a large voltage (e.g., 200 V) between an anode and a cathode enclosed in a rarefied gas container (e.g., ca. 0.1 atmosphere) (Rutherford and Geiger, 1908). When ionizing radiation enters the tube, a few gas molecules are ionized, creating positively charged ions and electrons. The electric field accelerates the ions and electrons toward the oppositely charged electrodes. These particles ionize other gas molecules through collisions as they are accelerated toward the electrode (anode), creating an avalanche of charged particles. Subsequently, a postdoctoral research assistant, Müller, added to this invention in 1928 by employing a much higher voltage, and with Geiger completed experiments showing that spurious pulses were not artifacts but cosmic particles, thus stimulating the cosmic particle research field as well as providing the most portable and convenient radiation detector, still in use 85 years later (Trenn, 1986).

Ionization and Wire Chamber Detectors
Wire networks embedded in low-pressure gas-filled chambers have provided valuable particle and high-energy photon imaging systems because the ionization of gas molecules renders them as electric charges whose positions are detected electronically. It was not until 1957 that noble gases were used with multiwire chambers (Charpak, 1957). Devices known as multiwire chambers have the advantage of 1 mm FWHM spatial resolution in three dimensions (Jeavons et al., 1975, 1999), but this is offset by very poor efficiency (e.g., less than 10% for radiotracer photons) and poor energy resolution, thus limiting nuclear medicine applications. High-pressure systems and even liquid noble elements (e.g., xenon) can overcome the efficiency limitations, but implementation was disappointing (Derenzo et al., 1971). Opposing wire chambers were investigated for PET applications in the late 1970s (Tam et al., 1980; Townsend et al., 1980) but were not pursued due to low sensitivity and poor energy resolution. Submillimeter 3D spatial resolution of radiotracers, including positron emitters, has been demonstrated for mice weighing about 30 g (Schäfers et al., 2005). Wire chambers do provide a component for Compton cameras and small animal imaging discussed in this volume (see Chapters 1.05 and 1.10).
1.01.4 Contemporary Photon Detectors

Photoelectron Multiplier Tubes
Photoelectron multiplier tubes (PMT) are evacuated glass tubes
containing a cascade of electrodes with voltage differences
adjusted to accelerate and amplify electrons released from a
phosphor that is coated on one end of the tube. Electrons are
released from this phosphor by the interaction of low-energy
photons from a scintillation crystal. Those low-energy photons
are generated by the interaction of gamma and x-ray photons in
a scintillation crystal (cf. Section 1.01.5) placed adjacent to the
phosphor. The electron amplification of 1 million or more
results in a current whose amplitude is proportional to the
number of scintillation photons produced in the scintillation
crystal and that number is proportional to the energy of each
incoming gamma or x-ray. These devices have become the principal detectors used in nuclear medicine imaging instruments.
The invention of the photomultiplier can now be attributed
to the Russian scientist/engineer Kubetsky (1930, 1937) but
has in the past been attributed to Zworykin et al. (1936), who published the concept in the United States after seeing the instrument in Kubetsky’s laboratory, which he had visited a year earlier (Lubsandorzhiev, 2006). Ten years before the ‘Kubetsky
tube’ as it was called in Russia, a patent having elements of the
idea was issued in the United States (Slepian, 1923).

Silicon Photomultiplier Detectors
A modern compact form of photomultiplier detectors is the
silicon photomultiplier (SiPM). The combination of silicon
photodiodes and Geiger avalanche detection dates from 1965
(Haitz, 1965) with the modern implementation consisting of
individual SiPMs comprising each pixel of an imaging array
(Roncali and Cherry, 2011). A silicon photodiode is a ‘PN’
solid-state device operated to accelerate the photoelectron pro-
duced by the incoming photon from a scintillation crystal in
order to generate an electric signal through avalanche of elec-
trons. The SiPM is an array of individual elements each with
dimensions of tens of micrometers. The arrays of avalanche
photodiodes (APDs) can have a density of 1000 mm
APD in a SiPM operates in Geiger mode and is coupled with the
others by a quenching resistor. Although the device works in
digital/switching mode, the SiPM is an analog device because all
the microcells are read in parallel making it possible to generate
signals within a dynamic range from a single photon to 1000
photons for just a single square millimeter area device. The SiPM
arrays have amplification and sensitivity bandwidth attributes
that can lead to a replacement of the PMT in many applications.
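The analog, parallel-readout behavior described above can be sketched with the standard exponential saturation model for a Geiger-mode microcell array; the microcell count and photon-detection efficiency below are assumed illustrative values.

```python
import math

def expected_fired_cells(n_photons: float, n_cells: int = 1000,
                         pde: float = 0.3) -> float:
    """Expected number of microcells that fire for a light pulse spread
    uniformly over the array; response is linear at low light and
    saturates at n_cells when many photons share the same cells."""
    return n_cells * (1.0 - math.exp(-n_photons * pde / n_cells))

expected_fired_cells(10)       # ~3 fired cells: nearly linear regime
expected_fired_cells(100000)   # ~1000 fired cells: fully saturated
```

The sum over microcells is what gives the device its dynamic range from a single photon up to roughly the number of cells per square millimeter.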
1.01.5 Scintillation Detector Materials First Scintillation Materials
The conversion of high-energy photons emitted from radionu-
clides to an electronic signal can be accomplished using a pho-
toelectron multiplier system as described earlier, but the
conversion of the incoming photon to an electron is very inef-
ficient because the probability of an interaction of the gamma or
x-ray with the thin phosphor is extremely low for the high
energies of useful radionuclides. Thus, a method of converting
the photon emissions to lower-energy photons was needed, and
this need was met with substances known as scintillators. Prof.
Hartmut Kallmann, even though suspended from academic
activities in the 1930s because of his opposition to the Third
Reich, was able to carry out experiments in what had been a
horse barn at the I.G. Farben chemical conglomerate, then in
Frankfurt, Germany. These experiments using naphthalene and
photographic material allowed detection and imaging of neu-
trons. Recognition that a PMT would be much more sensitive
than photographic films led to the discovery of the scintillator
photoelectron multiplier radiation detector published in 1947
(Broser and Kallmann, 1947). In late 1945, Kallmann
returned to Berlin to continue his former research. Despite a
lack of resources in postwar Berlin, Kallmann made use of
everyday items to conduct his research: concentrated lead
paint was used as a radioactive source, and melted-down
mothballs served as a source of naphthalene, which in turn became a
scintillator. He showed that the current from individual pulses
from a naphthalene–PMT was proportional to energy. A few
years later, while at New York University as professor of
physics, he discovered the liquid scintillation phenomenon that
enabled the field of liquid scintillation counting, particularly
useful in life sciences and radiopharmaceutical chemistry.
At the same time that Kallmann discovered the
naphthalene–photoelectron multiplier combination, Robert
Hofstadter showed that potassium iodide or sodium iodide
had much higher light output than that from naphthalene, but
he used photographic plates for recording the scintillations.
Hofstadter reported that it was Kallmann’s discovery of naph-
thalene as a scintillator that motivated him to explore other
materials. The idea that led to the use of thallium for activation
was the research of Frank Quinlan of General Electric laborato-
ries (Hofstadter, 1947). Hofstadter’s experiments with NaI(Tl)
were done while at Princeton University. NaI(Tl) coupled to a
PMT became the gamma-ray detector of choice for nearly all
applications in nuclear medicine. For his work in high-energy
physics, Hofstadter shared the 1961 Nobel Prize in physics.

Contemporary Scintillation Materials
Though NaI(Tl) can be produced in size ranges from a few
millimeters to plates of large diameters (e.g., 0.4 m dimen-
sions) sufficient for body imaging when multiple PMTs are
used, the conversion efficiency, long light decay time, and
poor energy resolution were not optimum for in vivo nuclear
medicine imaging. For over 60 years, there has been a continu-
ing search for crystal scintillators with the following seven
properties: high efficiency, fast rise time, short scintillation
pulses, good energy resolution, high photon production per
incoming photon, output photon wavelength that matches
current PMT photoelectron conversion phosphors, and practi-
cal production. Three principal crystals that have to some
extent replaced NaI(Tl) systems are bismuth germanate (BGO),
used initially in high-energy physics (Weber, 1982); lutetium
orthosilicate, a major scintillator currently used in PET
(Melcher, 1990); and the very fast scintillator lanthanum
bromide (LaBr3) (Van Loef et al., 2002), deployed in modern
time-of-flight (TOF) PET cameras (see Chapter 1.07). The
highest-energy-resolution scintillator, SrI2:Eu2+, was discovered
at Berkeley in 2011 with a resolution of 2.55% (Bizarri et al., 2011).

Semiconductor Radiation Detector Systems
In the 1970s, germanium was being used as a detector in gamma
spectrometers, and some attention was given to electronic detection
of photons using a large-diameter crystal (Glass et al., 1973),
but the low density of germanium and its expense were a
deterrent to this pursuit. Other solid-state detector materials
introduced in the late 1970s are HgI2 and CdZnTe (CZT). The
crystal properties of HgI2 are not favorable, and until the 2000s
the yields of CZT were very poor, so limited commercial
availability hindered its use; however, its energy resolution
favors background reduction, and the packaging of detection
electronics leads to compact imaging systems that have become
useful for clinical studies (Gullberg et al., 2010).

The Scintillation Well Counter
The combination of the PMT and the NaI(Tl) scintillation crystal
provided a means for calibration of radiation doses in clinics
throughout the world. It is a single crystal–photomultiplier
device that converts each gamma photon to an electronic current
pulse. An electronic counter provides the sought-for quantification
of radioactivity. H.O. Anger did not patent this universally
used device because, mistakenly, he did not think it
important (Anger H (1974) Personal communication.).
1.01.6 Two-Dimensional Gamma Scanners
and Cameras
Whereas the Geiger–Müller counter had already found efficacious
medical applications for the quantification of iodine
uptake by the thyroid in the late 1930s (Figure A-3.10), it was
not until the combination of the sodium iodide scintillator
crystal and the PMT that radionuclide imaging was facilitated
by scanning in the 1950s (Cassen et al., 1951) (Figure A-5.14).
Body rectilinear scanners became the main imaging instrument
of nuclear medicine using 131I and other radionuclides that
became available in the United States and Europe after World
War II (WWII). Shortly thereafter, the first gamma camera was
invented by Hal O. Anger and to this day is known as the Anger
camera (Figures A-6.18 and A-6.21). This device used a single
flat crystal of NaI(Tl), which transferred gamma rays from
the body into an image through a hexagonal packing of
photomultipliers (PMTs) and electronic analog circuits that
allowed calculation of the crystal position of scintillation
events by ratios of currents from the PMTs (Anger logic).
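Anger logic can be sketched as a signal-weighted centroid of the PMT currents; the PMT positions, light-spread width, and event location below are hypothetical numbers chosen only to illustrate the principle.

```python
import numpy as np

pmt_x = np.array([-30.0, -15.0, 0.0, 15.0, 30.0])   # hypothetical PMT centres (mm)

def anger_position(signals) -> float:
    """Estimate the scintillation x-position as the signal-weighted centroid
    of the PMT outputs; the summed signal itself is proportional to the
    energy deposited in the crystal."""
    s = np.asarray(signals, dtype=float)
    return float(np.dot(pmt_x, s) / s.sum())

# light sharing for an event at x0 = 7.5 mm, modelled as a Gaussian spread
x0, spread = 7.5, 12.0
signals = np.exp(-0.5 * ((pmt_x - x0) / spread) ** 2)
estimate = anger_position(signals)    # close to 7.5 mm
```

The centroid recovers the event position to a fraction of the PMT spacing, which is why a coarse array of tubes can localize events far more finely than the tube diameter.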
The Compton camera, first proposed for detection of neu-
tron sources in space (Pinkau, 1966) and then for gamma-ray
astronomy (Schonfelder et al., 1984), has not matured as a
nuclear medicine imaging instrument, yet the promise for orders
of magnitude increase in sensitivity over other devices has led to
its continuing development. Two detectors are involved in order
to compute the direction and energy of the incoming photon
from gamma-emitting sources (see Chapter 1.05).

Collimators, Lenses, and Methods of Focusing
A radiation source in the body sends photons in every
direction; thus without some method of focusing, the image
of a radiation source would be blurred, just as a small light
source inside a bottle of muddy water appears blurred. There are four focusing
methods: (1) Scan a small detector point by point over the
surface of the organ and represent the count rate as a number
or mark on pressure-sensitive paper or a cathode ray tube as
was done in the original display systems; (2) place a lead plate
with a single small hole (pinhole) between the body and the
detector system (photographic film or Anger camera) (Figure
A-6.18); (3) place an array of small holes in a regular pattern in
a lead plate (collimator) between the body and the area detec-
tor system; and (4) use the timing of photons arriving at
detectors surrounding the body; if these are the annihilation
photons from positron emitters, the simultaneous arrival of
the signal from two detectors surrounding the body allows
determination of the line through the body on which the
radiotracer lies – this is electronic collimation.
The lead pinhole and multihole collimators are employed
for single-photon emission computed tomography (SPECT).
Electronic timing for coincidence detection is used in PET. The
usual method is to use an array of regular holes or channels for
collimating gamma photons between the body and scintilla-
tion probes or cameras (Newell et al., 1952).
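The cost of absorptive collimation can be made concrete with Anger's approximate geometric-efficiency formula for a parallel-hole collimator; the hole diameter, effective length, and septal thickness below are typical low-energy values assumed for illustration, not dimensions taken from this chapter.

```python
def parallel_hole_efficiency(d: float, l_eff: float, t: float,
                             k: float = 0.24) -> float:
    """Anger's approximation g = (k d / l_eff)^2 * (d / (d + t))^2 for the
    fraction of emitted photons transmitted by a parallel-hole collimator;
    k is about 0.24 for hexagonal holes, d is the hole diameter, l_eff the
    effective hole length, t the septal thickness (same units throughout)."""
    return (k * d / l_eff) ** 2 * (d / (d + t)) ** 2

g = parallel_hole_efficiency(d=1.5, l_eff=24.0, t=0.2)   # on the order of 1e-4
```

With these assumed dimensions the transmitted fraction is a few parts in ten thousand, which is why collimated single-photon systems pay such a heavy sensitivity price compared with electronic collimation.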
In addition to the parallel hole and pinhole collimators, a
number of other geometries have been proposed for capturing
the positional information of radionuclide sources within the
body. Unconventional collimators or lens systems such as alter-
nating rings of a Fresnel zone plate (Mertz and Young, 1961)
and coded arrays have the advantage of sensitivity increases over
conventional methods, yet these approaches have not been fully
exploited for emission imaging. In the early 1970s, the use of a
series of lead rings to construct a Fresnel lens was suggested to
improve sensitivity (Barrett, 1972) and the implementation of
the Fresnel lens for longitudinal tomography with an Anger
camera was realized (Budinger and MacDonald, 1975). Other
methods of improving the sensitivity of planar detection sys-
tems included redundant and nonredundant patterns of pin-
holes. The detected data are thereafter decoded to form an image
of the projected emission.
For all detection systems, the general method for character-
ization and optimization is based on a general theory of image
formation that dates to the theory of the microscope (Abbe,
1873, see Chapter 4.09). This theory states that the spatial
frequencies comprising the object will be modulated (damp-
ened to various extents depending on the frequency) by the
lens between the image and the object. A measure of the ability
of a collimator/detector combination to transfer each of the
8History of Nuclear Medicine and Molecular Imaging
spatial frequencies is the modulation transfer function that was
introduced to nuclear medicine by Beck in 1964.
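For a collimator/detector whose point-spread function is roughly Gaussian, the modulation transfer function has a closed form; the FWHM below is an assumed value chosen only to show how fine spatial frequencies are dampened.

```python
import math

def mtf_gaussian(f: float, fwhm: float) -> float:
    """MTF of a Gaussian point-spread function: the normalized Fourier
    transform of exp(-x^2 / (2 sigma^2)) is exp(-2 pi^2 sigma^2 f^2),
    with f in cycles per mm and sigma = fwhm / 2.355."""
    sigma = fwhm / 2.355
    return math.exp(-2.0 * (math.pi * sigma * f) ** 2)

mtf_gaussian(0.0, fwhm=10.0)   # 1.0: zero frequency is passed unattenuated
mtf_gaussian(0.1, fwhm=10.0)   # ~0.03: detail near the resolution limit is strongly damped
```

Comparing such curves for candidate collimator/detector combinations is exactly the optimization role the modulation transfer function plays.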
Table 1 lists significant milestones in instrumentation and
collimator systems. The chronology of reconstruction methods
is given in Table 2.
Table 1 Milestones in radionuclide detection systems (cf. Table 2 for
reconstruction strategies)

1950 Radionuclide emission longitudinal tomography (Ziedses des Plantes, 1950)
1951 Instrumentation for 131I use in medical studies (Cassen et al., 1951)
1952 Multichannel collimators for scanning (Newell et al., 1952)
1952 Film recording Geiger–Müller tube scanning (Mayneord and Newbery, 1952)
1956 Scanning of positron-emitting isotopes in brain (Brownell and Sweet, 1956)
1958 Scintillation camera (Anger, 1958)
1960 Total body scanner for high-energy gamma rays (Laughlin et al., 1960)
1961 Positron tomograph 32-detector ring geometry (Rankowitz et al., 1961)
1961 Fresnel transformation of images (Mertz and Young, 1961)
1963 Positron scintillation Anger camera (Anger, 1963)
1966 Focused collimators for longitudinal tomography (McAfee et al., 1966)
1969 Rectilinear scanner with multiple plane analog optical readout (Anger, 1969a,b)
1971 Slant-hole collimator tomography (Muehllehner, 1971)
1972 Multicrystal positron camera (Burnham and Brownell, 1972)
1972 Positron longitudinal tomography (Brownell and Burnham, 1972)
1972 Single-photon section scanning ‘slice of life’ (Todd-Pokropek, 1972)
1975 PET 96 detector hexagon ring (Phelps et al., 1975; Ter-Pogossian et al., 1975)
1975 Fan-slotted rotating collimator for planar camera emission imaging (Keyes, 1975)
1976 Mark IV system for SPECT of the brain (Kuhl et al., 1976)
1976 PET system for the regional cerebral blood flow (Thompson et al., 1976)
1976 Design and performance of a PET transaxial tomograph (Hoffman et al., 1976)
1976 Circular ring PET (Cho et al., 1976)
1977 The Humongotron – a scintillation camera transaxial tomograph (Keyes et al., 1977)
1977 3D imaging of the myocardium with radionuclides (Budinger et al., 1977)
1978 Ring detector PET for the brain (Bohm et al., 1978)
1978 Seven-pinhole collimator (Vogel et al., 1979)
1979 Multiple focused collimator SPECT (Stoddart and Stoddart, 1979)
1979 PET 280 detector ring (Derenzo et al., 1979)
1980 PET longitudinal tomography with wire chambers (Tam et al., 1980; Townsend et al., 1980)
1982 Time-of-flight scanner (Ter-Pogossian et al., 1981, 1982)
1986 Block detector modules for PET (Casey and Nutt, 1986)
1987 PET 600 detector 2.3 mm resolution ring (Derenzo et al., 1987)
1995 Animal PET (Bloomfield et al., 1995)
1996 Position sensitive detectors-animal PET (Cherry et al., 1996)
2005 High-resolution animal SPECT (Beekman et al., 2005)
2008 Adaptive high-resolution SPECT (Barrett et al., 2008)
2012 Hybrid MR-PET-EEG at 3 and 9.4 T (Jon Shah et al., 2012)
Table 2 Milestones in reconstruction tomography strategies
(cf. Table 1 for detector systems)

1912 Crystallography from x-ray diffraction patterns (Laue et al., 1912)
1917 Radon transform and inverse Radon transform (Radon, 1917)
1936 x-Ray planigraphy (laminography) (Andrews, 1936)
1956 Astrophysical identification of microwave sources (Bracewell, 1956)
1961 x-Ray tomography by scanning axis of rotation (Oldendorf, 1961)
1963 Single-photon section scanner using optical back projection (Kuhl and Edwards, 1963)
1963 Mathematical theory of reconstruction tomography (Cormack, 1963)
1967 Fan-beam inversion in astronautical imaging (Bracewell and Riddle, 1967)
1968 Reconstruction from electron micrographs using Fourier methods (Crowther et al., 1970)
1968 Pinhole array-based tomography (Ables, 1968; Dicke, 1968)
1968 Digital acquisition and reorganization of tomographic data (Kuhl and Edwards, 1968)
1968 Reconstruction of 3D structures from electron micrographs (DeRosier and Klug, 1968)
1970 Arithmetic reconstruction technique (Gordon et al., 1970)
1971 Convolution tomography for x-ray CT (Ramachandran and Lakshminarayanan, 1971)
1971 Fourier transform for x-ray CT (Bates and Peters, 1971)
1972 Simultaneous iterative reconstruction technique (Schmidlin, 1972)
1972 Back projection of filtered projections in PET and CT (Chesler, 1972)
1972 Iterative least-squares reconstruction technique (Goitein, 1972)
1972 Fresnel shadowgram optical tomography (Barrett, 1972)
1973 x-Ray computerized tomography invention (Hounsfield, 1973)
1973 Image reconstruction from projections (rho filter) (Peters, 1973)
1974 PET reconstruction using Chebyshev polynomials (Marr, 1974)
1974 Fourier reconstruction algorithm (Shepp and Logan, 1974)
1974 Iterative reconstruction with attenuation correction (Budinger and Gullberg, 1974)
1975 Fresnel shadowgram computed tomography (Budinger and MacDonald, 1975)
1975 Time-modulated coded aperture (Koral et al., 1975)
1976 Convolution reconstruction of fan-beam projections (Dreike and Boyd, 1976)
1977 Reconstruction tomography statistics (Huesman, 1977)
1977 x-Ray tomography statistics (Chesler et al., 1977)
1977 Algorithm library for emission tomography (Huesman et al., 1977)
1977 ECG-gated list-mode SPECT (Budinger et al., 1977)
1977 Time-of-flight PET concept (see text)
1977 Maximum likelihood using the EM algorithm (Dempster et al., 1977)
1978 Cone-beam 3D reconstruction (Nalcioglu and Cho, 1978)
1978 ART-based attenuation correction (Chang, 1978)
1978 SNARK77, algorithm library for x-ray CT (Herman and Rowland, 1978)
1980 Fully three-dimensional PET reconstruction (Colsher, 1980)
1981 Time-of-flight reconstruction model (Tomitani, 1981)
1982 Maximum likelihood reconstruction (Shepp and Vardi, 1982)
1984 EM reconstruction algorithms for emission tomography (Lange and Carson, 1984)
1989 3D PET reconstruction (Townsend et al., 1989)
1993 Analytical strategies for the human observer in a detection system (Barrett et al., 1993)
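Several of the milestones in Table 2 center on back projection of filtered projections. The following is a minimal sketch of that idea, not any specific published algorithm: a single point source, analytic delta projections, a ramp filter applied via the FFT, and nearest-neighbour back projection, with all parameter values illustrative.

```python
import numpy as np

N, n_ang = 65, 180                       # odd grid so there is a centre pixel
angles = np.deg2rad(np.arange(n_ang))    # one projection per degree over 180°
src = (10.0, -7.0)                       # point source offset from centre (x, y)

s = np.arange(N) - N // 2                # detector (sinogram bin) coordinate
sino = np.zeros((n_ang, N))
for i, th in enumerate(angles):
    # projection of a point source is a delta at s = x cos(th) + y sin(th)
    j = int(round(src[0] * np.cos(th) + src[1] * np.sin(th))) + N // 2
    sino[i, j] = 1.0

# ramp-filter each projection in the frequency domain
ramp = np.abs(np.fft.fftfreq(N))
sino_f = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))

# back-project: each pixel accumulates the filtered projections it lies on
x, y = np.meshgrid(s, s)                 # x varies along columns, y along rows
img = np.zeros((N, N))
for i, th in enumerate(angles):
    j = np.round(x * np.cos(th) + y * np.sin(th)).astype(int) + N // 2
    img += sino_f[i][np.clip(j, 0, N - 1)]

peak = np.unravel_index(np.argmax(img), img.shape)
```

The reconstructed maximum lands at row 25 (y = −7) and column 42 (x = +10), i.e., at the simulated source; without the ramp filter the same back projection would produce the characteristic 1/r blur around it.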
1.01.7 Three-Dimensional Imaging
As nuclear medicine and molecular imaging using radionu-
clides involves remote detection of the superposition of
emission data from a three-dimensional volume, the two-
dimensional image is the superposition of all the activity
along lines or projection paths from the object to the
detector(s). The result is analogous to what one would see in
a microscope without focus. In the early years of nuclear med-
icine, a target organ such as the thyroid or brain tumor accu-
mulated a radionuclide such as 131I with little activity in the
surrounding tissues; thus, the image gave a diagnostic result.
But in general, projection images gave nonquantitative and
frequently ambiguous information. Thus, methods to see
inside the body were sought. Originally, these methods
employed motion parallax, optical and anaglyphic stereoscopy
using two gamma (Anger) camera views, and rotation or oscil-
lation of digitized Anger camera views from different angles.
But these methods, while sometimes helpful for the detection
of abnormalities, were not generally adopted when introduced
in the 1970s as they did not lead to diagnostic applications.
Thus, instruments and mathematical algorithms were devel-
oped over the past 60 years to provide quantitative informa-
tion on the spatial concentrations of radionuclides. The
methods fall under the term tomography. Three general cate-
gories are known: planar or longitudinal tomography, transax-
ial tomography, and 3D tomography. The latter two are
implemented by both SPECT and PET. The reason optical
systems cannot be employed is that, owing to their short
wavelength and neutral charge, gamma and x-ray photons
cannot be refracted or deflected, as light and electrons are
in optical and electron microscopy. In the following sections,
we outline developments in SPECT and PET followed by aspects of the history of
image reconstruction through mathematical methods usually
implemented by digital computers. Planar or Longitudinal Tomography Nuclear
Medicine Methods
Tomographic imaging of the distribution of radiotracers has used
a variety of strategies ranging from detectors arranged to collect
projections from multiple angles to single detectors with some
form of focusing using collimators (e.g., focused collimator,
multiple pinholes, Fresnel pattern collimator, and coded
apertures). The first invention of longitudinal tomography
was the use of differential motion of a detector and the body
being imaged using x-rays (Ziedses des Plantes, 1950, 1973).
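The differential-motion principle can be sketched in one dimension: as the detector view shifts, structures at different depths shift by different amounts, so re-aligning the views for one plane and summing focuses that plane while smearing the others. The depths, positions, and linear-shift model below are hypothetical values for illustration.

```python
import numpy as np

# Two point sources at different depths seen by a moving detector; the
# apparent shift of a structure is taken proportional to its depth.
n_bins, n_views = 64, 9
depth_a, depth_b = 1.0, 2.0
view_shifts = np.arange(n_views) - n_views // 2      # -4 .. +4

frames = np.zeros((n_views, n_bins))
for v, sh in enumerate(view_shifts):
    frames[v, 20 + int(round(sh * depth_a))] += 1.0  # source A (plane of interest)
    frames[v, 40 + int(round(sh * depth_b))] += 1.0  # source B (another plane)

# refocus on plane A: undo plane A's shift in every view, then sum
focused = np.zeros(n_bins)
for v, sh in enumerate(view_shifts):
    focused += np.roll(frames[v], -int(round(sh * depth_a)))

# source A stacks coherently (amplitude 9); source B is smeared out (amplitude 1)
```

This shift-and-add behavior, blurring rather than removing out-of-plane activity, is exactly the limitation the next paragraph contrasts with modern computed tomography.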
The idea dates to 1921 as documented by Ziedses des Plantes
who responded to an inquiry from the late William Oldendorf
(Oldendorf, 1980). Ziedses des Plantes stated:
I invented body-section radiography in 1921 during my first year of
medical studies at the University of Utrecht.... Then, one night,
I suddenly got the idea to simulate optical imaging by moving the
X-ray tube and the film during the exposure.... We made our first
body-section radiographs in 1931.
More details of x-ray longitudinal tomography beginnings
are given in Ziedses des Plantes collected works (Ziedses des
Plantes, 1973). The nuclear medicine embodiment of this idea
was the Anger tomographic scanner (Anger, 1969a, b).
The applications to radiation therapy led to target-specific
radiation delivery methods (e.g., conformational therapy)
(Takahashi, 1965).
The early methods using detector motion and focusing
systems approached the problem of defining slices or cuts
through the object by blurring the over- and underlying emis-
sions. A projection of radioactivity from imaging the heart
includes background activity from the lungs and chest muscles.
Methods to accomplish separation of emissions from the organ
of interest can be considered methods for focusing using lens
systems and apertures including collimators and combinations
of collimators and differential movement of detectors and the
radiation sources. But modern computerized tomography does
not blur the unwanted data; it extracts from the projections
those data belonging to specific planes.

Single-Photon Emission Tomography
The invention of single-photon tomography came through the
use of a rectilinear scanner that acquired information by
scanning at many different angles
around the body (Kuhl and Edwards, 1963,
Figure A-6.22). No commercial interest arose for manufacturing
the complicated gantry needed to acquire the tomographic data,
yet the work of Kuhl and Edwards motivated other designs and
is considered the first SPECT demonstration. Three SPECT
design activities arose in the 1969–73 period. In England,
Keeling and Todd-Pokropek (1969) developed a tomographic
system wherein two opposing collimated detectors scanned
back and forth on a gantry that rotated to different angles. This
system was popularized by the newspaper press as the ‘slice of
life’ (Figure A-7.25). In Japan, Tanaka and associates developed
a multicrystal tomographic system in the 1971–72
period (Tanaka et al., 1972). In Scotland, the Mallard group for
medical physics engineered a gantry that rotated while allowing
opposing collimated scintillator–photomultiplier detectors to
move back and forth, similar to the ‘slice of life’ (Bowley et al.,
1973). The introduction of x-ray computed tomography or
computer-assisted tomography (CAT) by Hounsfield occurred
at this time (Hounsfield, 1973), and this commercial introduc-
tion further encouraged nuclear medicine instrumentation
innovators to pursue tomography.
By 1974, tomographic data acquisition commenced
throughout the world by using the already well-known Anger
camera and either by rotating a patient in front of the camera or
by rotating one or two cameras around the patient. The princi-
pal motivation was to extract from the data an accurate repre-
sentation of radionuclide concentrations in organs without that
data being obscured by overlying and underlying radiotracer
photons from other organs or tissues. But because the attenua-
tion of photons prevented quantification of radionuclide activ-
ity, single-photon tomography was not universally accepted.
Furthermore, at the same time that the Anger camera-based
tomography gantries were being promoted, positron tomogra-
phy, whose early history is discussed later, moved from planar
opposing detector array systems to circular ring systems that
became a commercial product in 1979 (cf. Section ...).
Because the PET systems have a superior sensitivity and an easily
implemented method for correction of attenuation, the physical
attributes appeared to diminish the importance and future of
SPECT. But with solutions to the attenuation problem of SPECT
(Budinger and Gullberg, 1974) and the readily available
radionuclide from the 99Mo generator, SPECT became an
attractive alternative to PET, whose applications required
cyclotron-produced radionuclides and a few generators for
positron emitters (cf. Section ...). In addition, 99mTc, 111In,
and the many commercially produced radiotracers
(e.g., 99mTc bone scanning agents and 99mTc-DTPA (diethylene
triamine pentaacetate) aerosol) associated with these radionu-
clides encouraged development of SPECT systems including
algorithms to compensate for attenuation, dynamic data acqui-
sition, and methods for gated cardiac studies. By the 1980s, the
availability of commercial tomography gantries using Anger
scintillation cameras and the readily available radionuclides
relative to positron-emitting radionuclides led to widespread
use of SPECT. The SPECT systems also could be used for routine
planar emission imaging (e.g., bone scanning).
SPECT systems have a sensitivity much less than that of PET
systems as can be seen by the ratio of solid angles available from
a source to the detector. The available surface area is limited by
the amount of material in the collimator such that the
transmission is only about 10⁻⁴ in many cases (Anger, 1964). On the other
hand, PET has a sensitivity advantage of about 100, as no
absorptive collimator is needed: PET systems rely on electronic
collimation. The efficiency of detectors for conversion of incoming
photons to light photons or electronic signals is a key aspect of
sensitivity. This is particularly the case for PET: a gas chamber
detector system, or a material of low density and low energy-conversion
efficiency in the range of 10%, will have a PET coincidence efficiency
of only 1%, because detection is the product of the detection
efficiencies of the two opposing detectors. The sensitivity of
SPECT can be increased by surrounding the subject with as
many detectors as the geometry will allow. Three or four cam-
eras around the head will allow an improvement of sensitivity
by a corresponding factor of 3 or 4. With more than five planar
detectors, the sensitivity will decrease proportional to the dis-
tance squared between the sources and the detector for pinhole
systems but not for parallel-hole collimated systems. Though
the pinhole collimator has a sensitivity proportional to the
aperture and inversely proportional to the distance from the
source squared, it is possible to mount a cylinder collimator
with multiple pinholes around the source and achieve good
sensitivity for small sources such as small animals. A system of
focused and moving collimators was designed and produced in
the mid-1970s (Stoddart and Stoddart, 1979), but this system
was never adopted by the large commercial imaging industry.
Another high-resolution system for animal SPECT provides
0.5 mm resolution and good sensitivity but requires animal or
object motion (Beekman et al., 2005). To achieve dynamic
SPECT, the University of Arizona group created two systems,
FastSPECT and adaptive SPECT (Barrett et al., 2008). The
adaptive SPECT approach promises SPECT animal imaging with
0.2 mm resolution. Innovations in collimator systems were
introduced for dynamic SPECT and have achieved clinical status
(Gullberg et al., 2010).

Positron Coincidence Scanners and Cameras
The first physical description of the benefits of imaging radio-
nuclide distributions in the brain using positron emitters was a
determination of the improvement in detection sensitivity by
coincidence counting instead of the single-photon counting
that required a collimator and a single crystal–photomultiplier
(Wrenn et al., 1951). Wrenn et al.’s (1951) paper was pub-
lished the same year as the first clinical study that used the
coincidence system constructed by Gordon Brownell at the
Physics Research Laboratory at Massachusetts General
Hospital, Boston. Opposing scintillation detectors scanned a
patient for the detection of a brain tumor using 74As (a
positron emitter) by the neurosurgeon William Sweet (1951)
(Figure A-5.15). The success of the projection brain scanner
using a positron emitter motivated design of opposing detector
arrays that laid the foundation for longitudinal positron
tomography (Brownell and Sweet, 1953, 1956). In 1958,
human lung physiology was explored using 15O from a
cyclotron in England (Dyson et al., 1958) (Figure A-6.17).
In 1961, a group at Brookhaven National Laboratory devel-
oped a ring of 32 sodium iodide-based detectors that
surrounded the head (Rankowitz et al., 1961). This
device known by some as the ‘headshrinker’ is shown in
Figure A-6.20. In the 1960s and 1970s, Brownell with col-
leagues Aronow, Burnham, Chesler, Correia, Hoop, and
Cochavi developed a series of positron scanners with the tomo-
graphic implementation enabled by two 2D arrays for tomog-
raphy reported in 1970 but with a date of inception of 1968
(Figure A-7.24). As these were reported in the US Atomic
Energy Commission Record of Invention, no patent applica-
tions were allowed. Subsequent patents by others on positron
tomography were preceded by the Brownell et al. inventions as
to the general concept, though more recent patents do have
unique features not included in the 1968 disclosures in the
AEC records.
At Donner Laboratory of the University of California,
Berkeley, Hal Anger, who invented the gamma camera, devel-
oped a PET tomographic camera using two opposing gamma
or Anger cameras in a parallel arrangement similar to
Brownell’s first positron camera (PC I). The Anger PC, patented
in 1972, served hematology studies of the distribution of the
positron emitter iron-52 as shown in Figure A-7.23. This PC
used analog electronics to present focused planes of emission
data equivalent to a crude form of back projection of coinci-
dent lines corresponding to annihilations that arose from the
electronically selected plane. The concept of rotating the par-
allel arrays was deployed at Massachusetts General Hospital by
Brownell and coworkers in the 1972 development of the PC II
tomograph and was the first true PET instrument that could
acquire transaxial sections.
The next major innovation was the hexagonal detector array
(Figure A-9.30) developed at Washington University in Saint
Louis (Phelps et al., 1975; Ter-Pogossian et al., 1975). This
system was the prototype for the commercial scanner known
as PETT (Hoffman et al., 1979). The four next major develop-
ments were circular arrays of detectors that surrounded the
body to allow imaging of a transverse section without gaps
between detectors or the need for motion. Thompson and
coworkers in Canada remodeled the 1961 32-detector system
from Brookhaven National Laboratory in 1976. Cho and asso-
ciates at UC Los Angeles Radiological Research Laboratory
engineered a ring of scintillation detectors but with no subse-
quent subject experiments. Derenzo and associates at Donner
Laboratory of UC Berkeley developed a 280-crystal ring system
for dynamic data acquisition for brain and heart kinetic studies
(Derenzo et al., 1979). In Sweden, Bohm et al. (1978)
fabricated a PET detector ring system that enabled human
studies with short-lived tracers (e.g., 11C and 15O).
Since 1979 with the first commercial PET system develop-
ment by CTI, Inc., of Knoxville, TN, in collaboration with Hoff-
man and Phelps of UCLA, PET tomography development has
become a focus of industry with advances in resolution, volume
of body coverage, and sensitivity. Academic and government
research laboratories continue to explore innovations: the
first TOF system developed in 1982 (Ter-Pogossian et al., 1982),
increases in spatial resolution for human systems to 2.3 mm
FWHM (e.g., Derenzo et al., 1987), and hybrid systems such as
PET/CT (Beyer et al., 2000), PET/MRI (Shao et al., 1997) (Figure
A-12.38), and a combination of EEG, PET, and MRI (Jon Shah
et al., 2012). By 2013, there were over 3000 clinical PET systems.
Small-animal PET systems were developed in the mid-1990s at a
number of institutions (Bloomfield et al., 1995) and commer-
cially (Cherry et al., 1996).

TOF Positron Tomography
TOF PET was first investigated as a PhD thesis from Vanderbilt
University (Dunn, 1975). TOF was advanced as a solution to
limited statistics in nuclear medicine imaging in 1977
(Budinger, 1977) with more details of the advantages pre-
sented in the following years (Allemand et al., 1980). Instru-
ments were developed at Washington University (Ter-
Pogossian et al., 1982); at University of Texas, Houston
(Wong et al., 1984); and at the Commissariat à l’Énergie
Atomique – Laboratoire d’Électronique et de l’Informatique
[CEA-LETI] (Gariod et al., 1982; Mazoyer et al., 1990).
Chapter 1.07 of this volume is focused on TOF PET instrumen-
tation and reconstruction strategies. The principal value of TOF
PET is the potential improvement in sensitivity because the
statistical noise is significantly reduced by having the knowledge
of the region along a response line from which the annihilation
event arose. This improvement in terms of signal to noise
(square root of sensitivity) depends on the measurement accu-
racy of the time of arrival difference at the opposing detectors,
the speed of light, and the size of the body region being imaged.
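This dependence is commonly condensed into the rule of thumb gain ≈ 2D/(cΔt) for object diameter D and timing resolution Δt; a minimal numeric sketch (illustrative values, not from the source):

```python
# Rule-of-thumb TOF sensitivity gain: gain ~ 2 D / (c * dt), where D is the
# object diameter and dt the coincidence timing resolution. Illustrative only.
C_LIGHT = 3.0e8  # speed of light, m/s

def tof_gain(diameter_m, timing_res_s):
    """Approximate TOF sensitivity gain over a comparable non-TOF PET."""
    return 2.0 * diameter_m / (C_LIGHT * timing_res_s)

print(round(tof_gain(0.40, 500e-12), 1))  # ~5.3 for a 40 cm body at 500 ps
print(round(tof_gain(0.40, 325e-12), 1))  # ~8.2 at 325 ps
```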
The sensitivity improvement is theoretically proportional to two
times the object diameter divided by the product of the timing
resolution and the speed of light. Thus, for a 40 cm diameter
body and a timing capability of 500 ps (e.g., current commercial tomographs), the improvement is about 5, assuming the comparison is made to a non-TOF PET with similar scintillator crystal efficiency. The first systems put into operation for patient scans, built at Washington University, used cesium fluoride (CsF) and barium fluoride (BaF2) scintillation crystals. The performance characteristics (e.g., density and light output) were low compared to scintillation crystals of BGO and lutetium orthosilicate, and this resulted in a sensitivity deficit that was only partially compensated for by the sensitivity improvement of TOF. After a
gap of 20 years, TOF PET interest was renewed because of the availability of cerium-doped lanthanum bromide (LaBr3), which has good timing resolution, high efficiency, and high light output (Van Loef et al., 2002). The fastest system under construction in 2013 has a timing resolution of 325 ps that leads to
an eightfold improvement in sensitivity relative to conventional PET (W. Moses, 2013, personal communication). See Chapters 1.07 and 1.08.

Reconstruction Tomography from Detected Emission Data
The theme of detecting the internal structure of matter might
be dated to the work of crystallographers whose goal is to
determine the relative positions of atoms in a crystal. As Laue
realized that the wavelengths of x-rays were in the range of
expected atomic distances, he with his colleagues acquired the
first x-ray diffraction pattern from crystals (Friedrich et al.,
1912), and from these data taken at multiple angles, the inter-
nal structure of simple crystals could be obtained. This might
be considered the first solution of the inverse problem – that is, from projections of information from different angles, determine the composition of the object that led to those projections.
The major difference between tomography using transmit-
ted x-rays and that for nuclear medicine is the fact that in x-ray
tomography, the source strength and position are known and
the unknown is the electron density or material density, but in
nuclear medicine tomography, only the detected emissions
and direction of emissions are known. The unknowns are the
position and strength or concentration of sources. An addi-
tional unknown in nuclear medicine is the amount of attenu-
ation suffered by the emissions from the sources in the body.
SPECT and PET algorithms must calculate source strengths,
source positions, and attenuation – a much more difficult
problem than x-ray computed tomography.
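The asymmetry described above can be made concrete with a one-dimensional sketch (hypothetical numbers): transmission measures the attenuation line integral directly; single-photon emission is attenuated by an amount that depends on the unknown source depth; and a PET coincidence pair jointly traverses the whole line, so its attenuation factor is independent of where the annihilation occurred.

```python
import math

# One-dimensional illustration (hypothetical numbers) of how attenuation
# enters transmission versus emission measurements along one line.
dx = 1.0                                    # voxel size, cm
mu = [0.0, 0.095, 0.095, 0.095, 0.0]        # attenuation coefficients, 1/cm
f  = [0.0, 0.0,   10.0,  0.0,   0.0]        # emission source strengths

# Transmission: the detector measures exp(-line integral of mu) directly.
line_integral = sum(m * dx for m in mu)
transmitted_fraction = math.exp(-line_integral)

# Single-photon emission toward the right-hand detector: each source voxel
# is attenuated only by the material between it and the detector, so the
# attenuation suffered depends on the (unknown) source depth.
spect_signal = sum(
    f[i] * math.exp(-sum(mu[j] * dx for j in range(i + 1, len(mu))))
    for i in range(len(f)))

# PET coincidence: the two photons jointly traverse the whole line, so the
# factor exp(-total line integral) applies regardless of source position.
pet_signal = transmitted_fraction * sum(f)

print(transmitted_fraction, spect_signal, pet_signal)
```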
Before the introduction of computational methods into x-
ray radiology and radionuclide nuclear medicine imaging,
astrophysics investigators demonstrated success in applications
of what we now call Fourier transform methods (Bracewell,
1956). It is now generally accepted that the Radon transform
and its inverse represent the underpinning mathematical con-
structs for solving the inverse problem. But developments in
medical computerized tomography did not originate with the
Radon transform (Figure A-2.3). One approach to approximat-
ing the 3D distribution of a radionuclide in an object from
projections of the emitted photons is to merely back project the
observed data into a matrix using optical or digital computer
methods. This method was first used by Kuhl and Edwards
(1963) in their pioneering SPECT. Another solution to finding
the most likely 3D distribution of electron density for radiol-
ogy or nuclear medicine is based on arithmetic approximations
by matching the observed projections to projections computed
from estimates of the actual distribution (i.e., the first could be
the back projection image). Based on differences between the
measured projections and the computed projections from the
estimated distribution, another estimate of the actual distribu-
tion is made using arithmetic or multiplicative weighting factors. This was the approach known as the algebraic reconstruction technique (ART) (Gordon et al., 1970).
A similar mode of data manipulation was first used in phan-
tom and patient studies by Kuhl and Edwards (1968, 1973)
using an upgrade of their original SPECT tomograph. A form of
this iterative estimation process is to attempt a solution of
thousands of simultaneous equations where each projection
value is equal to the sum of all the picture elements along a ray
that projects to the observed value. If there are 100 projection
angles and an object of 100 × 100 picture elements, the number of unknowns in each equation varies from 140 (diagonal) to 1 (edge). The total number of equations is the sum of the projection bins over all angles. For a circular object with a diameter of 100 picture elements and 100 angles, the number of equations is 10 000. Closed-form
solutions were not possible, but an iterative strategy enabled
by the availability of digital computers in the 1970s allowed
Hounsfield to demonstrate the first x-ray CAT scanner. In the case of x-ray tomography, the computed parameter is the attenuation coefficient (essentially the electron density). There was an earlier
attempt by William Oldendorf to demonstrate a system that
would provide simple back projection using x-rays. He used a
moving detector and corresponding moving x-ray source on
each side of a rotating sample holder. This method indeed
would allow transverse section reconstruction of the back
projection image (Figure A-6.19), yet there was no recogni-
tion of the pioneering thinking that led to this invention
(Oldendorf, 1961).
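The iterative matching procedure described above can be illustrated with a toy sketch (not any historical implementation): recovering a 2 × 2 image from its row and column sums by repeatedly spreading each projection residual back along its ray.

```python
# Toy additive ART (algebraic reconstruction technique): recover a 2 x 2
# image from its row and column sums by spreading each projection residual
# back along its ray. Illustrative only.
true_img = [[1.0, 2.0],
            [3.0, 4.0]]

# Four rays: two row sums and two column sums, each listing its pixels.
rays = [[(0, 0), (0, 1)], [(1, 0), (1, 1)],
        [(0, 0), (1, 0)], [(0, 1), (1, 1)]]
measured = [sum(true_img[r][c] for r, c in ray) for ray in rays]

x = [[0.0, 0.0], [0.0, 0.0]]          # start from an empty estimate
for _ in range(50):                   # sweeps over all rays
    for ray, p in zip(rays, measured):
        estimate = sum(x[r][c] for r, c in ray)
        correction = (p - estimate) / len(ray)   # spread residual along ray
        for r, c in ray:
            x[r][c] += correction

print(x)  # converges to [[1.0, 2.0], [3.0, 4.0]]
```

Starting from an empty estimate, the additive update drives the projections of the estimate toward the measured values; for this tiny consistent system it settles on the original image.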
Many of the known mathematical strategies used to create
an image from multiple projections from multiple angles
were seen as too complex and even intractable using
the tools of analog electronics and analog computers prior
to the late 1960s. In the early 1970s, the character of
nuclear medicine as well as radiographic imaging changed
dramatically as small computer systems and efficient algo-
rithms for reconstruction became available (e.g., fast Fourier
transform software and hardware). The chronology listed in
Table 2 depicts significant innovations leading to current
tomographic methods that are discussed in detail under
Chapters 1.05 and 1.06.
Applied mathematical strategies to solve the inverse problem
blossomed in astrophysics, geophysics, structure analysis (e.g.,
electron microscopy), optics, and probably most of all in med-
ical sciences particularly radiology and nuclear medicine. The
definition and history of applications of the Radon transform is
found in Deans (1993), mathematical developments of x-ray
computed tomography are detailed in Herman (1980), and the
algorithms for nuclear medicine tomography are found in
Huesman et al. (1977). Contemporary methods for achieving
the optimum approximation to the true distribution of an image
parameter rely on maximum likelihood methods and expectation–maximization (EM) algorithms whose origin was in 1977 (Dempster et al., 1977).
A chronology of 42 developments is given in Table 2.
1.01.8 Image Processing and Data Analysis

Role of Image Processing in Nuclear Medicine
Early nuclear medicine images reflected the contrast between
areas of high radionuclide activity by the number of ink spots
on paper (Figure A-4.12). Some implementations were merely
a mechanical hammer applied to carbon paper. The tapper was
linked to a mechanical system that emulated the spatial posi-
tion of the detector. This approach was followed by cathode
ray tube-based methods wherein either persistent phosphors or
photographic recordings were used to capture the spatial image
of projected events. The very low statistics of nuclear medicine
methods led to implementation of image processing by smoothing and filtering techniques well known in electrical engineering signal processing. The objective was to ren-
der images with acceptable visual impact. In addition, nuclear
medicine researchers adopted and advanced techniques to
characterize the performance of imaging systems (Beck, 1964;
Beck and Redtung, 1988). Whereas projection images of, for
example, the thyroid involved only a few thousand events for
diagnostic delineation of abnormal iodine accumulation or
lack of accumulation, imaging other organs such as the brain,
heart, lung, liver, and kidney required 100 000 or more events,
and tomographic imaging required 500 000 events for reliable
statistics in each tomographic slice. These statistical aspects
received attention from a number of investigators whose goal
was to define the quantitative accuracy of 3D imaging and to
provide dynamic nuclear medicine studies (e.g., Budinger
et al., 1978; Huesman, 1977).
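These event-count requirements follow from Poisson counting statistics: a region containing N detected events carries a fractional rms uncertainty of about 1/√N, and tomographic reconstruction amplifies noise further. A minimal sketch with illustrative counts:

```python
import math

# Percent rms uncertainty of a region containing n Poisson-distributed
# events: 100 / sqrt(n). Illustrative event counts only.
def percent_uncertainty(n_events):
    return 100.0 / math.sqrt(n_events)

for n in (5_000, 100_000, 500_000):
    print(n, round(percent_uncertainty(n), 2))
```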
Evaluation of the capabilities of an imaging system to detect objects, and modeling of the detection process, was pioneered by Albert Rose of RCA, who showed the number of events needed to
detect an object of a given contrast in a given imaging field size.
The concepts of the receiver operating characteristic (ROC) method in electrical engineering signal detection were applied and
improved by Harrison Barrett who included the human
observer in the evaluation of imaging system performance
(Barrett et al., 1993).

Kinetic Modeling
The time-varying changes in detected radiation provide
the kinetic data for the determination of physiological phe-
nomena. From the time of Hevesy, the tracer principle has
been concerned with the measurement of kinetic processes.
Kinetic analyses require a temporal sequence of spatial con-
centrations of radiotracers. Mathematical models were devel-
oped into which the time versus activity data could be
inserted to derive biologically relevant rate constants and
volumes of exchange. Examples of such kinetic models for
mammalian/human biology are those defined in the 1960s
by Kety (1960) and Ziegler (1965) (see Chapter 1.12 on
kinetics by Cunningham and Welch). One of the simplest
models is that of the rate of flow using a single exponential, C(t) = C(0)e^(-kt), where k is a rate constant that represents flow in units of volume per time per volume of tissue. This model assumes
the internal activity is delivered as a bolus or ‘delta’ input
function as was done to image brain blood flow using
carotid artery injections of radionuclides (Ingvar and
Lassen, 1961). The rates of decline in detected events for
2D projections of brain activity give a measure of flow, and
it was those data obtained by the Danish and Swedish
investigators (Lassen et al., 1978) that produced the first
functional brain images (Figure A-10.33).
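A minimal sketch of extracting the rate constant from such a washout curve (synthetic, noise-free data with hypothetical values, not from the source):

```python
import math

# Mono-exponential washout C(t) = C0 * exp(-k t): after a 'delta' bolus
# input, the clearance rate constant k is the negative slope of ln C(t)
# versus t. Synthetic, noise-free data with hypothetical values.
k_true, c0 = 0.5, 100.0                     # k in 1/min
times = [0.5 * i for i in range(1, 13)]     # samples over 6 min
counts = [c0 * math.exp(-k_true * t) for t in times]

# Least-squares slope of ln(counts) against time
n = len(times)
mean_t = sum(times) / n
mean_y = sum(math.log(c) for c in counts) / n
slope = sum((t - mean_t) * (math.log(c) - mean_y)
            for t, c in zip(times, counts)) / sum((t - mean_t) ** 2 for t in times)
k_est = -slope
print(round(k_est, 3))  # recovers k_true = 0.5
```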
This simple exponential washout model works well if a
rapid, discrete, ‘delta’ input function of a tracer is administered
into the input route to a tissue followed by a measurement of
the subsequent clearance of tracer from the tissue. Using more
complicated kinetic models, this time course information
offers a means for deriving measurements of the rates and
volumes of exchange within the tissue. Confining the delivery
of tracer to the tissue of interest in this way results in minimal
interference from recirculating tracer since it becomes heavily
diluted in the tissues of the body and in some cases exhaled or
excreted. Other examples of such studies are the measurements
in the human brain of gray and white matter blood flow
and oxygen utilization using intracarotid artery injections of
xenon-133 (Hoedt-Rasmussen, 1965) and oxygen-15 (Ter-Pogossian et al., 1970).
Where it is not possible to give a selective supply of tracer
into the arterial input of the tissue of interest, the necessity of
intravenous administration brings with it the need to both
measure the time course of the resulting arterial input func-
tion and account for this in operating the kinetic model to
derive the rate constants and volumes of exchange. This
requires a mathematical deconvolution of the effects of the
time-varying input function that brings with it the challenge
of measuring the arterial time course of the tracer at the point
of entry into the tissue of interest as well as noise amplifica-
tion associated with the mathematical analysis. In early
nuclear medicine procedures, this limitation was
circumvented, for example, by using tracers of blood flow
that behaved like microspheres where an extraction by the
tissue is assumed to approach 100% either due to a very large
partition coefficient or by metabolic blockade of the egress of
the tracer from the tissue. Metabolic blockade has also been
used, for example, to image glucose utilization using the
analog of glucose, deoxyglucose, that when phosphorylated
is trapped or sequestered in the cell (Sokoloff et al., 1977), or
to infer cardiac blood flow using 13N-ammonia because the 13N-ammonia is trapped in the heart myocyte by enzymatic conversion to glutamine.

Equilibrium Imaging
Prior to the introduction of high-sensitivity PET, the Anger camera collimated to image 511 keV annihilation gamma rays and the early PET devices, which physically had to scan, were unable to record kinetic data with sufficient sensitivity or temporal fidelity to follow kinetic processes and derive functional image data. To circumvent these hurdles, a method was developed that involved continuous inhalation of 15O2. The
short physical half-life, the physiological flow through the
imaging field, and the metabolic sequestration result in a
dynamic equilibrium between blood infusion of this radio-
tracer and steady-state brain flow and metabolism. For brain
studies, the equilibrium is reached in 8–9 min. The low sensi-
tivity of imaging is overcome as sufficient data can be detected
over time to provide statistically significant images of the
distribution of 15O within cerebral tissues. To complement the oxygen utilization-dependent images obtained with 15O2, a steady-state cerebral perfusion-dependent image could be
recorded by continuously inhaling C15O2. This results in the 15O being rapidly transferred by carbonic anhydrase in the lung to H215O, thus providing a perfusion-dependent signal in tissue. It also provides a means for correcting for the recirculating labeled water present in the 15O2 image equilibrium. Steady-state mathematical models were subsequently developed for both the 15O2 and the C15O2 inhalation procedures (Jones et al., 1976). This procedure allows imaging of
cerebral blood flow, oxygen extraction fraction, and oxygen
utilization of the human brain (Jones and Rabiner, 2012). It
can be used for imaging lung ventilation with the gamma
camera during the steady-state inhalation of 81mKr (Fazio and Jones, 1975) (Figure A-8.29).
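The approach to this dynamic equilibrium can be sketched with a one-compartment balance, dA/dt = R − (λ + k)A, rising to A_ss = R/(λ + k) with time constant 1/(λ + k); this is an illustration under simplifying assumptions, not the published steady-state model. With decay alone, the 122 s half-life of oxygen-15 predicts about 95% of equilibrium after 9 min, consistent with the 8–9 min figure quoted above.

```python
import math

# One-compartment sketch: constant supply R of a short-lived tracer removed
# by radioactive decay (rate lam) and tissue clearance (rate k):
#   A(t) = A_ss * (1 - exp(-(lam + k) t)),  A_ss = R / (lam + k)
# Hypothetical parameters; decay-dominated case (k = 0) shown.
HALF_LIFE_O15 = 122.0                      # seconds
lam = math.log(2.0) / HALF_LIFE_O15

def activity(t_seconds, rate_in, k_clear=0.0):
    total = lam + k_clear
    return (rate_in / total) * (1.0 - math.exp(-total * t_seconds))

R = 1000.0                                 # arbitrary supply rate
a_ss = R / lam                             # equilibrium activity
frac_at_9_min = activity(9 * 60, R) / a_ss
print(round(frac_at_9_min, 3))  # ~0.95 of equilibrium after 9 min
```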
1.01.9 Radionuclide Production

Neutron Generators
An early generator of neutrons was a mixture of radium and
beryllium wherein the alpha particles from the decay of radium
interacted with beryllium to produce neutrons and carbon-12.
Alpha particles from polonium, plutonium, or americium could
be used as well. In fact, there are many methods for producing
neutrons, including the natural decay of 252Cf. But the yield of neutrons is only about one per 30 000 alpha particles. Some of the
earliest artificial radioactive tracers were produced in very low
quantities with the radium–beryllium neutron generator (e.g.,
I). The first fission reactor, developed in 1942, and its many
successors in the United States and elsewhere (e.g., Harwell,
England) had the capability to produce large quantities of
radiotracers, but these radionuclides were not made available
to the medical communities until after the end of WWII.

Cyclotrons and Linear Accelerators
The invention of the cyclotron (Figure A-2.6) by Lawrence in
1932 (Lawrence and Livingston, 1932) enabled the discovery
of the majority of radionuclides used in nuclear medicine and
in particular the generation of positron emitters listed in
Table 3. Not all positron-emitting radionuclides required
cyclotron-based accelerated particles for radionuclide produc-
tion. The Van de Graaff accelerator (Van de Graaff, 1931) and
the Cockcroft–Walton accelerator (Cockcroft and Walton, 1932) used electric potential differences to accelerate charged particles. The currents of particles from these devices are not as great as the currents achieved by cyclotrons. The linear accelerator, which uses time-varying electromagnetic fields rather than a static electric field, was invented some 8 years before the cyclotron (Ising, 1924/1928; Wideröe, 1928). In the late
1930s, a ‘37 in.’ cyclotron was developed at Berkeley Radiation
Laboratory on the University of California campus, and this
cyclotron along with a later-developed ‘60 in.’ cyclotron
enabled discovery of most of the artificial radioisotopes and
provided some of the first medically useful radionuclides (e.g., 131I and 59Fe). In 1955, the first cyclotron was installed at
Hammersmith Hospital near London for medical research and
clinical studies using short half-life radionuclides (Vonberg
and Fowler, 1963). Within 10 years, cyclotrons were installed
at Washington University Mallinckrodt Institute of Radiology,
at Massachusetts General Hospital in Boston, and at Sloan–
Kettering Cancer Institute in New York. By 2012, there were
300 cyclotrons around the world capable of producing medical radionuclides.

Photonuclear reactions are also a source of radionuclides.
Transmutation of elements through x-ray or gamma-ray pho-
tons has been known from experiments to determine the mass
of the neutron wherein Chadwick and Goldhaber excited deu-
terium to release a neutron using x-rays from thorium C
(Chadwick and Goldhaber, 1934). Since then, electron acceler-
ators have been useful for the production of radionuclides using
photonuclear reactions because photon energies exceeding
30 MeV are available from bremsstrahlung radiation. The high-
energy photons (gamma rays) interact with atomic nuclei and in
many elements result in the expulsion of one or more neutrons.
This reaction is designated as photonuclear (γ,n). Though the
cross section for the ejection of a neutron by a high-energy
photon (e.g., 30–50 MeV) is in the range of 10
barn), radionuclides such as
Gd, and
I have been made
available at facilities where cyclotrons were not readily available
(e.g., China). For example, 370 MBq (10 mCi) can be produced in 40 min from the bremsstrahlung of a 25 mA electron beam current (Nordell, 1984).
Whereas a major capability of low-energy particle accelera-
tors is to produce the short half-life positron emitters 11C, 13N, and 15O, most of the nuclear medicine applications have used the longer half-life radiotracers (e.g., 18F and 89Zr)
because these are made available from commercial suppliers
due to their longer half-life and therefore more convenient
shelf life and because the costs of a cyclotron installation at
medical clinics are prohibitive, being in excess of 2 million US
dollars. A major cost is the site preparation and shielding. To
solve this problem, new accelerator concepts were explored in the first decade of the 2000s, and of the alternative approaches (e.g.,
linacs, microtron, and superconducting cyclotron), the super-
conducting cyclotron operating at 5 T or more (Antaya and
Schultz, 2010) has the reduced costs that led to its prototype
production (Ionetix, Inc., San Francisco, CA) (Figure A-13.42).
The goal is to make available the short half-life radiotracers to
clinical sites and researchers throughout the world.

Radionuclide Generators
A radionuclide generator consists of an apparatus that allows
separation of the product of radionuclide decay from the source
radionuclide. Radioisotope generators make available short-
lived radioisotopes thousands of miles from the site of produc-
tion and provide a continuous source of short-lived radionu-
clides for clinical or research studies. The source is referred to as
the mother and the product of the decay is called the daughter.
The first system of this type was developed in 1926 by Gioacchino Failla, who obtained radon from its parent radium in a device called a ‘cow’ whose elution or ‘milking’ produced the daughter radionuclide, radon (Failla, 1926).
followed 25 years later by the tellurium-132/iodine-132 gener-
ator (Winsche et al., 1951), and thereafter, more than 100
generator systems have been developed. A few are listed in
Table 4 and those starred are in general clinical use today.
The most commonly used generator is reactor-produced
molybdenum-99 that has a half-life of 67 h and decays to
technetium-99m that has a half-life of 6 h. This generator was
first produced at Brookhaven National Laboratory (Tucker
et al., 1958).
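The buildup of daughter activity between elutions follows the Bateman relation; a sketch for the half-lives quoted above (67 h parent, 6 h daughter), ignoring the branching fraction of the molybdenum-99 decay:

```python
import math

# Daughter activity grown in a generator after elution at t = 0
# (Bateman relation, unit parent activity, branching ratio ignored):
#   A_d(t) = lam_d / (lam_d - lam_p) * (exp(-lam_p t) - exp(-lam_d t))
lam_p = math.log(2.0) / 67.0               # parent 99Mo, per hour
lam_d = math.log(2.0) / 6.0                # daughter 99mTc, per hour

def daughter_activity(t_hours):
    return lam_d / (lam_d - lam_p) * (
        math.exp(-lam_p * t_hours) - math.exp(-lam_d * t_hours))

# Time of maximum daughter activity: ln(lam_d / lam_p) / (lam_d - lam_p)
t_max = math.log(lam_d / lam_p) / (lam_d - lam_p)
print(round(t_max, 1))  # ~23 h
```

The daughter activity peaks roughly a day after elution, which is why such generators are typically milked on a daily schedule.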
Of the many milestones in the history of clinical nuclear
medicine, one of the most significant was the development of
99mTc, or technetium pertechnetate, whose gamma photons of
140 keV, half-life of 6 h, and availability from a simple precur-
sor parent molybdenum-99 (half-life of 67 h) made it the most
widespread radionuclide in clinical medicine from the late
1960s to present. The discovery of technetium (element 43)
was through the analysis of a sample of
Mo that had been
bombarded at the Berkeley cyclotron and was sent to Italy in
1937 (Perrier and Segrè, 1937). It was called technetium, from the Greek for artificial, as it was the first artificially produced element.
Powell Richards at Brookhaven National Laboratory in an
analysis of available radionuclides for nuclear medicine
noted that 99mTc had the ideal physical decay characteristics
for imaging (Richards, 1960); however, this radionuclide is not
easily incorporated into chemical compounds. For example, 34
years after the 99mTc discovery, its use was finally approved by the FDA for cardiac perfusion studies, for which the more expensive and less ideal 201Tl has been used since the mid-1970s.
In addition to the technetium-99m generator, there are
other important generators: rubidium-81/krypton-81m gener-
ator (Yano and Anger, 1968); xenon-123/iodine-123 generator
(Lambrecht et al., 1972); strontium-82/rubidium-82 generator
(Yano et al., 1977); xenon-122/iodine-122 generator (Richards
and Ku, 1979); germanium-68/gallium-68 generator (Gleason,
1960); and the zinc-62/copper-62 generator (Robinson et al.,
1980). Because of the difficulty in producing the mother radio-
nuclide such as xenon-123 for a xenon-123/iodine-123 gener-
ator, production and distribution systems were developed for
the daughter. But in many cases, it is not possible to have the
daughter product without an on-site generator such as the
strontium-82/rubidium-82 generator and the xenon-122/
iodine-122 generator because the half-life of the products is
short (e.g., rubidium-82 is 76 s and iodine-122 is 3.6 min).
The generators that yield radionuclides with half-lives of
minutes allow special applications of equilibrium imaging for
the quantification of ventilation or perfusion, as well as the
opportunity to do repeat studies.
Some of the daughter radionuclides are used in kits for
efficient labeling of cells and chemical ligands; others are
used directly for imaging (e.g., 81mKr for lung ventilation and 82Rb for heart blood flow distribution) (Table 4).

Post-WWII Radionuclide Distribution
From a focus on making atom bomb materials, in 1945, sci-
entists turned toward how to use the Manhattan Project’s
Table 3 Major radionuclides used in medicine and biology
Accelerator produced (β+)    Reactor produced (γ)
Arsenic-74 17.8 days Chromium-51 26.6 days
Cadmium-107 6.5 h Cobalt-60 5.27 years
Carbon-11 20 min Copper-67 61.8 h
Copper-62 9.7 min Gallium-67 78 h
Copper-64 12.7 h Gold-198 2.7 days
Fluorine-18 110 min Indium-111 2.8 days
Gallium-68 68 min Iodine-123 13 h
Iodine-122 3.6 min Iodine-125 60.1 days
Iodine-124 4.3 days Iodine-131 8 days
Iron-52 8.3 h Iron-59 44.5 days
Manganese-52 5.7 h Mercury-197 2.7 days
Nitrogen-13 10 min Mercury-203 47 days
Oxygen-15 2 min Sodium-24 15 h
Rubidium-82 1.2 min Technetium-99m 6 h
Sodium-22 2.6 years Thallium-201 3 days
Zinc-62 9 h Thallium-204 3.78 years
Zirconium-89 78 h Xenon-133 5.25 days
Autoradiography and biodistribution (β)
Calcium-45 162.7 days Sulfur-35 86.7 days
Carbon-14 5730 years Tritium 12.3 years
Phosphorus-32 14.3 days Yttrium-90 64 h
Strontium-89 50.6 days
isotope production facilities and methods for medical research.
In the spring of 1945, isotopes began being produced for
civilian use. But prior to 1945, radionuclides such as radium,
phosphorus-32, iron-59, and the iodines (127, 130, 131) were
being produced at reactor facilities and the Berkeley cyclotron
with some distributions to hospitals where medical applica-
tions pleaded for radionuclides that were found useful for
therapy. Most of the demand was for thyroid cancer and for
the treatment of hyperthyroidism. Facilities able to produce
these isotopes included the Oak Ridge reactor and the acceler-
ator at Berkeley. Through personal efforts by Paul C. Aebersold,
the chief of the Isotopes Branch of the Manhattan Engineering
District Division of Research, the Atomic Energy Commission
leadership authorized Oak Ridge National Laboratory to
implement an efficient public release of available radionu-
clides (Figure A-4.11). Oak Ridge and other reactors partici-
pated in production of radionuclides needed for medical
applications (e.g., Harwell, England supplied
I to sites in
Europe). The Oak Ridge program made 104 000 shipments between 1946 and 1957.

Commercialization of Radiotracers
The determination of the plasma blood volume in medical
situations of blood loss was enabled by the use of radioiodine
when it became available from Oak Ridge in 1945. The method
employed bovine serum and the paper demonstrating results
in 76 human subjects (Storaasli et al., 1950) led to commer-
cialization by Abbott Laboratories, a pharmaceutical manufac-
turer who named this agent radioiodinated serum albumin.
The success of radioiodine for cancer therapy, strontium radio-
nuclides for bone pain palliation,
99mTc-labeled red blood cell
kits for blood pool and heart imaging, and iodinated ligands
for brain and kidney function diagnoses led to a few commer-
cial enterprises in the 1970s. It was not until 18F-fluorodeoxyglucose PET (FDG-PET) was shown as an effective heart
and cancer agent that a viable industry was established. PET
became synonymous with FDG-PET worldwide by the 1990s.
As cyclotrons became more available at hospitals and the cost
of the target material H218O decreased, the local availability of
FDG led to even greater applications. A commercial market for
many radionuclides exists because local low-energy cyclotrons
do not have the proton or deuteron energy to produce some
important radionuclides for diagnosis and therapy, and in
some cases, neutron reactors are needed. By 2010, there were
over 65 pharmaceutical companies involved in radiopharma-
ceuticals for therapy and diagnostics. Some of the main corpo-
rations in the United States, Canada, and Europe are Bayer
HealthCare Pharmaceutical; Bracco Diagnostics, Inc.; GE
Healthcare Limited; GlaxoSmithKline Plc; IBA Group;
Lantheus Medical Imaging, Inc; Mallinckrodt; Nordion; and
PETNET Pharmaceuticals, Inc.
1.01.10 Radiotracer Syntheses

Instrumentation
The principal instrumentation innovations needed in order to
synthesize radiotracers involve the development of target
modules that could be placed in the beamlines of accelerators
and reactors. These targets required special machining, alloys,
and plumbing to provide target gases, removable solid
chemicals, and liquids as well as provision for cooling using
helium or water. Processing and organic synthesis require
robotic manipulators and expensive shielding enclosures.
The need to protect chemists from radiation exposure, ste-
rility requirements, and quality control led to the deployment
of automation techniques that receive as an input the radiotracer (e.g., 18F) and provide as an output the radiolabeled compound (e.g., 11C-methionine and 18F-fluorodeoxyglucose). The synthesis boxes are large and expensive and require varying degrees of maintenance. The evolution of
microfluidics and control systems led to successful develop-
ment of radiotracer ‘synthesis on a chip’ systems (Lee
et al., 2005).
1.01.11 Hazards and Absorbed Radiation Doses
From the earliest investigations of human applications of
radionuclides in therapy, diagnoses, energy production, and
nuclear weapons, there were concerns about the potentials for
harmful effects. The applications were pursued before the
development of the science of radiobiology (cf. Volume 8).
The first observations of harmful effects were made by Bec-
querel who noted skin inflammation to his torso from
radium carried in packets in his vest as well as by Pierre
Curie who with Becquerel studied effects to their skin from
radium exposure (Grigg, 1965). Most notable in the early
history of organized efforts to understand radiation health
effects was that of a dentist from Boston, United States, who
showed that excessive exposure to x-rays could be lethal to
mammals (Kathren, 1964). The formal organizations, the International Commission on Radiological Protection (ICRP) and
the National Council on Radiation Protection and Measure-
ments (NCRP), took on the responsibilities for various radi-
ation protection recommendations in 1928 and 1929,
respectively (Taylor, 1958).
The level of concern regarding harmful effects from ioniz-
ing radiation rose significantly after radium dial workers’ ill-
nesses were determined to be the result of internally ingested
radium from the use of their tongues and lips to dress dial
paint brushes. This observation led to the establishment of
general standards for safe handling of radioluminous materials
and to the establishment of permissible levels of body burdens
of radium (Evans, 1980; NBS, 1941). Skin burns and epilations
became well-known hazards particularly for radiologists and
radiotherapists (Kathren, 1962).
Table 4  Radionuclide generators from precursors

Parent          Half-life    Daughter        Half-life
Rubidium-81     4.7 h        Krypton-81m     13.0 s
Molybdenum-99   2.8 days     Technetium-99m  6.0 h
Zinc-62         9.1 h        Copper-62       9.8 min
Germanium-68    275.0 days   Gallium-68      68.0 min
Strontium-82    25.0 days    Rubidium-82     75.0 s
Tin-113         115 days     Indium-113m     1.7 h
Xenon-122       20.1 h       Iodine-122      3.5 min
During WWII, investigations of 235U and 239Pu for the fabrication of the nuclear bomb (the Manhattan Project) led to a concerted effort to determine permissible levels of radiation exposure, particularly from 239Pu. The injection of plutonium
in human subjects during the years 1945–47 was part of a
government-led effort to determine the metabolic fate and
toxicity of internal radiation emitters in workers accidentally
exposed to these materials while building the atomic bomb. In
a 1958 Los Alamos report, ‘Human Experiences with
Plutonium,’ Wright Langham and Payner Harris wrote that
the studies were “initiated ... to determine whether small
doses of plutonium were acutely toxic and to establish the
urinary excretion rate to provide a more accurate base line
from which to determine the body burden of exposed
workers.” In 1962, Langham and his coworkers wrote:
“Although no acute toxic effects were expected from such
small doses, clinical laboratory observations were carried out,
especially with regard to hematologic changes and liver and
kidney functions. No acute subjective or objective clinical
effects were observed.” In addition to plutonium, polonium,
used in the bomb’s initiator, was a radiation worker potential
hazard and was administered to five human subjects starting in
1943. Uranium (235U), used as the fuel in one type of atomic
bomb, was injected in six human subjects in 1946–47. These
studies involved very small doses and the subjects were
patients with serious cancer illnesses. They were required to
give informed consent to the extent they could understand the
possible implications. One of us (TFB) reviewed the doses
administered relative to current knowledge of known conse-
quences from alpha-emitting radionuclides and concluded
that the doses were indeed below the threshold of any expected
physiological harm. The majority of the injected activity would
be adsorbed to the bone after an initial excretion from the
kidneys. More details of these experiments are given in two
publications (Langham et al., 1962; Moss and Eckhardt, 1995).
In addition, quantitative studies of the distribution of
137Cs and other radionuclides in animals and
foods (e.g., milk) were undertaken to understand the hazards
of fallout from nuclear testing.
Analytical methods for quantification of absorbed doses
from ingested or injected radionuclides date from 1968 when
investigators examined the physics of photons and charged
particles (electrons and positrons) interacting with tissues in
order to calculate the amount of energy absorbed. General
limits of tissue tolerance were known from WWII era investi-
gations, but the absorbed fractions from radionuclides dis-
tributed in different organs were unknown. In addition, the
fraction of the radiation transmitted from one organ to other
organs of the body needed to be calculated and tabulated. By
the mid-1970s, a schema for calculation of the likely
absorbed doses to organs from an injected radioactive com-
pound was developed (Loevinger and Berman, 1976). There-
after, the concepts of accumulated activity in a source organ,
energy of each disintegration (including beta particles and
positrons), and the absorbed fractions of the emitted photons
(x-rays and gamma rays) were combined into the medical
internal radiation dose schema of the Society of Nuclear
Medicine. A primer was developed in the 1980s (Loevinger
et al., 1991). This schema and the evolution to modern
computer-assisted dose estimate software are presented as
Chapter 1.13 of this volume.
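The schema just described combines its ingredients in a simple sum: the dose to a target organ is the cumulated activity Ã in each source organ multiplied by the S value (mean absorbed dose to the target per unit cumulated activity in the source), summed over sources. A minimal sketch, with hypothetical Ã and S values rather than tabulated MIRD data:

```python
# Sketch of the MIRD-style dose sum: D(target) = sum_s A~(s) * S(target <- s).
# The organ names, activities (MBq*h), and S values (mGy per MBq*h) below are
# illustrative placeholders, not published MIRD tabulations.

def absorbed_dose(cumulated_activity_mbq_h, s_values_mgy_per_mbq_h):
    """Absorbed dose (mGy) to one target organ from all source organs."""
    return sum(a_tilde * s_values_mgy_per_mbq_h[organ]
               for organ, a_tilde in cumulated_activity_mbq_h.items())

a_tilde = {"liver": 120.0, "kidneys": 40.0}        # MBq*h (hypothetical)
s_to_liver = {"liver": 3.0e-2, "kidneys": 1.0e-3}  # mGy/(MBq*h) (hypothetical)
print(absorbed_dose(a_tilde, s_to_liver))
```

The separation of the biology (Ã, from kinetic measurements) from the physics (S, from absorbed-fraction tables) is what made the schema practical.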
1.01.12 Selected Applications

Radionuclides for Therapy
Notable historical events relative to radiotherapy are the first
external beam x-ray therapy in 1896 in Lyon, France (Case,
1958), only 7 months after the discovery of x-rays; small
patches of radium sealed in rubber or other suitable con-
tainers that were applied to skin diseases in 1901 (Danlos and
Block, 1901); and the first skin cancer treatments and proven
cure for basal cell carcinoma in Sweden (Lennmalm, 1900)
and Russia (Goldberg and London, 1903). The early history of
external beam therapy can be appreciated from the 520-page
textbook by Joseph Belot published in 1904 (Belot, 1904). To
overcome the limited photon flux from radium and radon,
60Co sources were introduced in the mid-1950s
(Mitchell, 1946). Topics of radiotherapy by external beams
and internally deposited radiation sources are covered in
Volume 7 of this series.
Advances in internal applications of radionuclides such as
radium and iridium date from 1904 when intratumoral
implantations were achieved using small glass tubes with a
thin filter of silver. The major innovation was the Dominici
tube (Dominici and Barcat, 1908). The next in vivo experiments
involved the first intravenous injections of radioactivity into
human subjects.
Dissolved radium salts were used to explore the potentials
for disease treatment (Proescher, 1913) and to observe the
appearance of radon and radium in excreta (Seil et al., 1915).
Twenty years later, intravenously injected phosphorus-32 from
the Berkeley cyclotron was used for treatment of leukemia and
polycythemia vera (excessive red cell development) with the
first injected P-32 in 1937 (Lawrence, 1965).
In 1941, the only therapy for debilitating pain for prostate
bone metastases was morphine. But through research on cal-
cium and strontium isotopes by Pecher, very specific localiza-
tion of these elements in bone metastases was discovered, and
this led to the use of strontium-89 as a therapy of choice for the
last 75 years. In the mid-1940s, radioiodine was used at Mas-
sachusetts General Hospital and the University of California,
Berkeley, for evaluation of thyroid diseases and treatment of
thyroid cancer (Evans, 1975; Hamilton and Lawrence, 1939).
The use of 131I spread throughout the United States and Europe
at the close of WWII (e.g., Winkler et al., 1946).

Earliest Human Subject Experiments with Radiotracers
Other than the exploratory studies on potentials for therapy
with injected radium salts mentioned earlier, the first applica-
tions of radioactivity and radioactively labeled substances (fre-
quently called radiopharmaceuticals) for scientific studies in
biology and medicine were pioneered by Georg von Hevesy
(Figure A-1.2), who in 1923 published his investigations on
the timing and distribution of lead (210Pb, or radium D) and
bismuth (210Bi, or radium E) in plants (Hevesy, 1923). Hevesy’s
experiments included the determination of body water with
stable deuterium, the determination of the rate of DNA syn-
thesis in liver and kidney cells using
32P, and other studies with
38 different radionuclides in mammals and 14 stable and
radioactive elements in human beings, which he described in
a 600-page monograph (Hevesy, 1948). Human cardiovascu-
lar circulation kinetics were first studied by intravenous injec-
tion of the natural radionuclides of lead and bismuth (Blumgart
and Yens, 1927). The radionuclides were injected in the right
arm and the time to transit to the left arm was measured using a
Wilson cloud chamber, perhaps the only time the cloud cham-
ber was used in human experiments (Figure A-3.8). The kinet-
ics of absorption from the gut to the blood circulation were
made by Hamilton, who ingested 24NaCl and measured the
time of arrival in the peripheral circulation using a Geiger–
Müller tube (Hamilton, 1937) as shown in Figure A-3.8.
During WWII, investigations of the physiology of carbon mon-
oxide and carbon dioxide relative to high-altitude exposure
were conducted on human subjects at Berkeley, CA, using
radiotracers (Tobias et al., 1945).
After WWII, other significant pioneering applications of
tremendous physiological and eventually clinical significance
include remote imaging of oxygen utilization in tissues, imag-
ing brain blood flow and metabolism, pulmonary ventilation
and perfusion, liver cell functions, heart circulation, kidney
function, and bone marrow activity. For example, oxygen
utilization was demonstrated by autoradiography of animal
tumors (Figure A-5.16) after the animals had been exposed to
oxygen-15 (Ter-Pogossian and Powers, 1958). Oxygen-15
(half-life is 2.06 min) was accumulated in animals for 20 min
while breathing radioactive air whose nitrogen-14 had been
bombarded with deuterons at the Washington University
cyclotron. Human studies of pulmonary function were first
done in England using oxygen-15 produced in a cyclotron
(Dyson et al., 1958) (Figure A-6.17). Uses of 75Se and the
positron emitters 15O and 18F were introduced before 1960,
and these, along with applications of 99mTc, 201Tl, and 82Rb
from the 1970s to the present, are highlighted under organ systems in the
succeeding text.

Cancer Metastasis Detection Radiotracers
Organ-specific cancer detection history is given in individual
sections (i.e., the brain, lungs, liver, pancreas, bone, and adre-
nals). This section gives a synopsis of agents applied to whole-
body surveys for metastasis detection. Of the many agents
developed for imaging solid tumors, the most notable in the
mid-1970s to the present are 18F-fluorodeoxyglucose, devel-
oped in 1976 by the Brookhaven National Laboratory team
with the synthesis published in 1978 (Ido et al., 1978), and the
first imaging of the annihilation photons from 18F-
deoxyglucose (Kuhl et al., 1977) using the SPECT system of
Kuhl and Edwards that evolved from a 1963 invention. This
was followed by the first PET imaging in 1979 on the commer-
cial system developed by Phelps and Hoffman with CTI, Inc.
(Phelps et al., 1979). Notable among the cancer imaging
agents is 11C-thymidine, developed by the University of
Washington group (Shields et al., 1984).
Breast cancer diagnostic and treatment response studies used
18F-fluorodeoxyglucose with general success for metastasis
detection, as a guide to treatment efficacy, and as a tool
for early modification of treatment strategies. Prostate cancer
diagnosis and dissemination evaluations were not successful
with 18F-fluorodeoxyglucose, but 11C-choline and related tracers
were valuable adjuncts to prostate cancer studies (Hara et al.,
1998) (Figure A-12.39). But these prostate cancer applications
require an on-site cyclotron. 18F analogs of choline accumulate
in the bladder, and this has diminished their usefulness.
Thus, the nuclear medicine evaluation has become a single-
photon study with the 111In-labeled monoclonal antibody Prosta-
Scint. Monoclonal antibody imaging, though pursued
since 1980, has not become a widely accepted clinical diagnos-
tic technique except in a few cancers. For example, 111In-
ProstaScint and, most recently, 89Zr-herceptin (trastuzumab)
have significant value in the detection of tumor-specific anti-
gens for prostate cancer (Petronis et al., 1998) and breast
cancer (Holland et al., 2010), respectively. Other pioneering
nuclear medicine applications to cancer detection and
treatment are cited in Section 1.01.13.

Brain Blood Flow and Tumor Detection
The study of global brain blood flow was pioneered by the use
of the nonradioactive tracer, nitrous oxide (Kety and Schmidt,
1948), and 7 years later by inhaled radioactive 85Kr gas with
jugular and femoral artery blood sampling for the quantification
of the time course of radioactivity determined from the transit of
blood from the lungs to the brain and periphery (Munck and
Lassen, 1955). External detection of 133Xe emissions after intra-
carotid artery injections allowed regional flow quantitation
using the rate of signal disappearance (washout rate) (Ingvar
and Lassen, 1961). Later imaging arrays were developed for
human brain studies of functional brain activity after carotid
injections of 133Xe (Lassen et al., 1978) (Figure A-10.33).
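The washout-rate quantification works because, after an intracarotid bolus, regional clearance of the inert gas is approximately monoexponential with rate constant k, and flow follows as lambda times k, where lambda is the tissue:blood partition coefficient. A minimal sketch with synthetic data (the partition coefficient of 1.0 ml/g is an assumed illustrative value, not from the chapter):

```python
import math

def flow_from_washout(t_min, counts, partition_ml_per_g=1.0):
    """Regional flow (ml/100 g/min) from a monoexponential washout curve.

    Fits a least-squares line to ln(counts) vs time; the negative slope
    is the clearance rate constant k (per min), and flow = lambda * k.
    """
    n = len(t_min)
    y = [math.log(c) for c in counts]
    tbar = sum(t_min) / n
    ybar = sum(y) / n
    slope = (sum((t - tbar) * (yi - ybar) for t, yi in zip(t_min, y))
             / sum((t - tbar) ** 2 for t in t_min))
    k = -slope                                # washout rate constant (1/min)
    return partition_ml_per_g * k * 100.0     # ml per 100 g per min

# synthetic washout with k = 0.5/min -> flow of about 50 ml/100 g/min
times = [0.0, 0.5, 1.0, 1.5, 2.0]
cts = [1000.0 * math.exp(-0.5 * t) for t in times]
print(flow_from_washout(times, cts))
```

Real curves contain fast (gray matter) and slow (white matter) components, so practical analyses fitted two exponentials or used the initial slope.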
PET imaging of brain blood flow became available with the
development of instruments having transaxial tomographic
capabilities (cf. Table 1). Since the late 1970s, H2 15O, 11C-
labeled amino acids, and 18F-fluorodeoxyglucose have become
important tools for studying brain physiology and for clinical
diagnoses. Brain blood flow studies using 15O compounds used
equilibrium imaging methods (Jones et al., 1976) and bolus
injection methods for flow, oxygen extraction, and vascular
volume (Herscovitch et al., 1983). An early application of
brain blood flow was the study of dementia (Frackowiak et al.,
1981). In 1982, the introduction of 123I-HIPDM for brain blood
flow (Kung et al., 1982) including tomography (Fazio et al.,
1984) led to widespread applications for stroke evaluation using
the Anger gamma camera. But the applications and commercial
availability were suppressed by the results of a still controversial
clinical trial that suggested surgical interventions were no better
than medical treatment. The lack of insurance reimbursement
for middle cerebral artery surgery diminished the interest in
stroke patient imaging at least in the United States, until the
advent of MRI methods 10 years later that gave mostly tissue
properties and net flow information. With the clinical use of
MRI, nuclear medicine uses declined even though imaging
before and after pharmacological stimulation informs the
potential efficacy of vascular surgery.
Brain tumor detection using Geiger–Müller detectors to mea-
sure radiotracer uptake by external detection used
131I-iodofluorescein (Moore, 1948); that dye, after intravenous
injection, had previously been used to locate tumors during
surgery at the University of Minnesota.
used radionuclides that did not enter normal brain tissues,
which were protected by the blood–brain barrier, but in tumors
and damaged brain tissues, there is either no barrier or the
barrier has been damaged (e.g., inflammation). Late in the
1950s and early 1960s, the two most used agents were 203Hg-
labeled chlormerodrin and 131I-albumin. Chlormerodrin was a
well-known diuretic and easily labeled with mercury, but the
mercury label had a half-life of 47 days, and though most was
excreted quickly by the kidneys, the residual accumulated radi-
ation dose was large and the photon energy of 280 keV was too
high for good spatial resolution with collimators known at that
time or even today. The radiation doses from 131I-labeled albu-
min were much smaller than from 203Hg-chlormerodrin, yet the
energy was still high. Detection methods used end-window
Geiger–Müller detectors and a bathing cap with inscribed circles
to guide repositioning of the detector for the acquisition of
sequential data from which uptake washout kinetics could be
determined in the 1950s (Planiol, 1965, 1995).
Even before this widespread use of GM tubes for brain
tumors, coincidence detection of the annihilation photons
from arsenic-74 in brain tumors was pioneered at Massachu-
setts General Hospital by a neurosurgeon with physicist
Gordon Brownell (Brownell and Sweet, 1953) with opposing
scintillation detectors, and this led to the first longitudinal
tomographic PET instrument built by Gordon Brownell
(Burnham and Brownell, 1972). Brain Metabolism and Neurochemistry
Though nuclear medicine brain blood flow and tumor imaging
has been replaced by other methods, oxygen metabolism, glu-
cose uptake, and neuroreceptor concentrations have become
targets for imaging and a unique niche for PET as well as
SPECT. The earliest successful labeling of neurochemical com-
pounds with carbon-11 was by chemists at Service Hospitalier
Frédéric Joliot in the Saclay center for nuclear research (Comar
et al., 1973). These syntheses, performed in 1973 and thereafter,
included the radiopharmaceuticals 11C-nicotine and the amino
acid 11C-methionine (Comar et al., 1976). The syntheses used
the readily implemented chemistry of 11C-methionine and
11C-formaldehyde. These radiotracers
enabled studies of brain neurochemistry (Raynaud et al., 1974;
Wagner et al., 1983) and diseases such as addiction (Fowler
et al., 1989; Volkow et al., 1993, 1996) (Figure A-12.37) and
Parkinson’s disease (Garnett et al., 1984).
Major insights to dementia diagnosis (Benson et al., 1983;
Friedland et al., 1985) (Figure A-11.35) were made using
18F-fluorodeoxyglucose and the tracer of brain glial cell activity associ-
ated with inflammation (Camsonne et al., 1984). The presence
of amyloid plaques in the brains of patients with dementia has
led to the development of a PET radiotracer with good affinity
for amyloid (Klunk et al., 2001). This compound required a
11C label, yet there was sufficient widespread interest in this
agent for dementia that international studies were launched.
An analog of this agent has been developed for 18F labeling, and
clinical trials commenced in recent years (Jagust et al., 2010).
A detailed history and perspective for the future is given by
Jones and Rabiner (2012).

Lung Ventilation, Perfusion, and Cancer Detection
Nuclear medicine clinical studies of the lungs involved methods
for diagnosing pulmonary embolus and methods for mapping
the ventilation spaces. Lung ventilation measurements with
external detectors first used 133Xe gas (Knipping et al., 1955),
and quantification by counting of photons from 15O over the
lungs was first reported in a proceedings in 1958 (Dyson et al., 1960).
Lung blood perfusion and ventilation were pioneered using
C15O2 (West and Dollery, 1960). These methods required
cyclotron production of the oxygen-15 tracer, and this limitation
led to the use of injected 131I-labeled aggregates of albumin
(Taplin et al., 1963a). These aggregates localized in blocked
small arteries of the lungs and allowed the diagnosis of pulmo-
nary embolism (Wagner et al., 1964). With the widespread
availability of 99mTc, lung ventilation studies became enabled
by inhalation of a 99mTc-DTPA aerosol (Taplin and Chopra,
1978). A landmark study of lung tumors was the first application
of 18F-deoxyglucose PET imaging to the detection of lung cancer
(Nolop et al., 1987).

Liver and Pancreas Function
For liver function studies, two classes of radiotracers have been
used. The first are those agents that become phagocytized by
Kupffer cells, and the second are radiotracers that are taken up
by the main metabolic liver cells (hepatocytes). These cells
ignore the colloid particle but have an avid uptake of amino
acids and other nutrients; 18F-fluorodeoxyglucose, however, is gen-
erally not trapped in hepatocytes because glucose-6-phosphatase
reverses the trapping phosphorylation. The first liver function tests used radioactive colloids
to show the activity of Kupffer cells (Dobson et al., 1949). The
activity of hepatic cells and hepatobiliary imaging was initiated
with 131I-rose bengal (Taplin et al., 1955), then 75Se-
selenomethionine (Blau and Manske, 1961), and later 99mTc-labeled
agents (Loberg et al., 1976). The earliest gamma camera success
in cancer detection in the liver used
99mTc-labeled sulfur colloid
to detect areas of decreased flow or uptake as the colloid is
trapped by functioning Kupffer cells of the liver (Harper
et al., 1965).
Human patient studies using radioactive
75Se as a substitute
for sulfur in methionine demonstrated that about 6% of an
injected dose of this compound could localize in the pancreas
2 h after administration (Blau and Manske, 1961). The diag-
nosis of cancer was dependent on the visualization of regions
of no or poor uptake, but the variability in anatomy and
uptake in the liver lobes hindered widespread applications.
This led to subtraction imaging using an agent that localized
only in the liver (e.g., 99mTc-sulfur colloid and 198Au
nanoparticles) along with the 75Se-selenomethionine to give
images with the liver subtracted from the data, ‘gold subtracted
pancreas image’ (Kaplan et al., 1966). These combined radio-
nuclide studies led to electronic data processing of multiple
energy isotopes that gave filtered images (McCready and
Cottrall, 1971). The search for a specific pancreas imaging
agent is still under way 40 years later.
The major physiological innovations were the investigation
of receptor–ligand interactions and the development and
kinetic evaluation of receptor-mediated binding of 99mTc-
galactosylneoglycoalbumin in the liver (Vera et al., 1984).

Kidneys and Adrenal Glands
Kidney function tests using
131I-labeled iodohippurate and
scintillation probes (Tubis et al., 1960) were successful in
detection of renal artery stenosis (Taplin et al., 1963b).
Gamma camera imaging of kidney function was initiated with
99mTc-DTPA a few years later (Richards and Atkins, 1967).
The earliest human and animal studies to image the adrenal
glands were made by the University of Michigan group, who used
131I-19-iodocholesterol with scanners and the Anger camera to
facilitate diagnosis of adrenal tumors (Beierwaltes et al., 1971).

Heart Blood Flow and Metabolism
As mentioned earlier, the earliest use of radiotracers for studies
of cardiovascular circulation was to measure the transit time of
blood from one arm to the opposite arm using daughters of
radium and a Wilson cloud chamber (Appendix A-2.5) for
detection (Blumgart and Yens, 1927). Another early applica-
tion of tracers for circulation studies was the determination of
the time of arrival to peripheral circulation of sodium ions after
oral ingestion of 24NaCl using the Geiger–Müller tube as a
detector (Hamilton, 1937). The distribution of cardiac output
in the anesthetized rodent using radiotracers pioneered the
concept of radioactive cations as ‘molecular microspheres,’ though
the term was not employed until about 20 years later.
After injection of 42KCl in the venous circulation, the
amount accumulating in various body organs was measured
by Geiger–Müller detector in order to determine the fractional
distribution of blood flow (Sapirstein, 1956). Flows to some
organs not influenced by anesthesia and permeability barriers
were accurate. The first noninvasive radionuclide studies that
showed promise of imaging myocardial blood flow in human
subjects used 86Rb (Love and Burch, 1957). Later, using the
positron emitter 84Rb, coronary flow assessment was made by
coincidence counting using opposing detector pairs on either
side of the chest (Bing et al., 1964). This was perhaps the
underpinning experiment that led to a major application of
nuclear medicine to the diagnosis of coronary artery disease as
inferred from regional decreases in accumulation of radionu-
clides in the heart muscle tissue.
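Sapirstein's "molecular microsphere" principle described above means that, shortly after injection, each organ's share of the trapped tracer approximates its share of the cardiac output. A minimal sketch with hypothetical organ counts (the organ names and values are illustrative, not measured data):

```python
# Fractional-distribution principle: for a cation that is extracted on the
# first pass, organ counts / total counts ~ organ blood flow / cardiac output.
# The counts below are hypothetical.

organ_counts = {"heart": 420, "kidneys": 2100, "brain": 1400, "rest": 6080}
total = sum(organ_counts.values())
flow_fraction = {organ: c / total for organ, c in organ_counts.items()}

print(flow_fraction["heart"])  # heart's estimated fraction of cardiac output
```

As the chapter notes, the approximation holds only for organs whose extraction is not altered by anesthesia or permeability barriers.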
From 1956 to the 1980s, the major radiotracers used for
myocardial blood flow assessment were radionuclides of potas-
sium and its analogs (i.e., 201Tl and 129Cs). These tracers
accumulate in the myocardium in propor-
tion to regional flow to the muscle. Because there is a very large
distribution volume for potassium and its analogs in the heart
cells, the amount that is imaged a few minutes after injection
can be used to infer the relative blood flow to different regions
of the heart. The principal agent for clinical studies became
thallium-201 (Lebowitz et al., 1973; Strauss et al., 1975;
Wackers et al., 1975), whose emissions are dominated by the
201Hg x-rays (70–80 keV), making up 94% of the disintegrations.
The first human heart studies with nitrogen-13 ammonia
(13NH3) were conducted at the University of Chicago (Harper
et al., 1973). Ammonia was produced by bombarding methane
with 8 MeV deuterons to form 13N from the (d,n) reaction
on the 12C of methane (Monahan et al., 1972). Harper and his
group visualized the positron annihilation radiation of 13NH3
in the myocardium of a human subject using a Nuclear Chi-
cago HP Anger camera and a 550 keV collimator. The reason
ammonia accumulates in the heart is that it is trapped by the
enzyme glutamine synthetase when it enters the myocyte. This
tracer has been used in clinical PET for the past 40 years.
In the mid-1980s and until the present, the most commonly
used radiotracer for heart studies has been 99mTc-sestamibi
(a coordination complex of 99mTc with methoxyisobutylisoni-
trile). This radiopharmaceutical accumulates in mitochondria
with relatively little washout. Its invention stems from the chem-
istry of a class of compounds studied at MIT in the mid-1980s
(Abrams et al., 1983) and approved by the FDA in 1994.
The positron-emitting radiopharmaceuticals used to trace
lipid and glucose metabolism of the heart include 11C-palmitic
acid (Schön et al., 1982; Weiss et al., 1976; Lipton and
Welch, 1971), 11C-acetate (Brown et al., 1987), and
18F-deoxyglucose (Phelps et al., 1980). Positron-emitting
myocardial flow agents include oxygen-15 in the form of water,
13N ammonia (first studied in the noncoincident mode by
Harper et al., 1973), and 82Rb (Budinger et al., 1979).
Commercial development for clinical heart studies in
2000–14 focused on 18F- and radioiodine-labeled compounds for PET studies
of myocardial perfusion and the sympathetic nervous system
status. These were motivated to some extent by the development
of PET–CT and the search for agents that do not need a local
cyclotron in addition to the 82Rb generator. Positron emitters
such as 13N and 15O require a local cyclotron, in contrast to the
technologies, radionuclide generators, and rapid synthesis
techniques that became commercially available after 2000.

Radionuclide imaging of left ventricular function
The contractile function of the left ventricle is usually measured
by the fraction of blood volume ejected by muscle contraction.
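Because counts from a blood-pool tracer are proportional to ventricular volume, the ejection fraction can be estimated from background-corrected counts at end-diastole (ED) and end-systole (ES) as EF = (ED − ES)/ED. A minimal sketch with hypothetical region-of-interest counts:

```python
def ejection_fraction(ed_counts, es_counts, background=0.0):
    """Count-based ejection fraction: (ED - ES) / ED after subtracting
    a background estimate from each count value."""
    ed = ed_counts - background
    es = es_counts - background
    return (ed - es) / ed

# hypothetical end-diastolic/end-systolic counts and background
print(ejection_fraction(12000, 7200, background=2000))  # -> 0.48
```

The background term matters in practice: counts from blood and tissue behind the ventricle inflate both ED and ES and would otherwise bias the ratio low.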
With Geiger–Müller tube detectors over the heart, it was possi-
ble to record the change in the number of counts from systole
to diastole using circulating radiotracers that stay in the blood
pool for at least a few minutes after intravenous injection. The
ratio of detected emissions at diastole to those at systole gave
the ejection fraction, which is of diagnostic importance as a
measure of heart function. Thereafter, methods of left ventric-
ular function studies of clinical value were developed in the
early 1970s using the Anger camera and
99mTc-labeled human
albumin that remained in the blood pool (e.g., Strauss et al.,
1971). When labeling of red blood cells (Eckelman et al.,
1971) became available through reliable commercial kits, the
methods of measuring ejection fraction became widespread,
and multiple-gated acquisition was the next innovation in
clinical cardiac function studies (Alpert et al., 1974). The
Anger camera image data were binned into multiple intervals
triggered by the R-wave of the patient’s ECG. A video display of
the beating heart enabled by this strategy allowed diagnosis of
abnormal wall motion. The transition from single-view imag-
ing of the cardiac cycle to multiple views, specialized
collimators, and finally SPECT (Chapter 1.03) and PET
occurred over the next few years to what is now known as
nuclear cardiology. Currently, the ventricular blood pool
examinations are secondary to myocardial muscle perfusion
studies where the left ventricle volume is determined from the
outline of the inner wall of the ventricle.

Thallium redistribution phenomenon
Images of a defect (diminished uptake of
201Tl) obtained
during treadmill exercise, used to amplify the difference
between flow to normal tissue and flow to the pathological
heart muscle, gave objective information of the position
and size of compromised heart muscle without the need for
coronary catheterization. In 1975, Gerald Pohost, then at
Massachusetts General Hospital, injected
201Tl while electri-
cally pacing the heart during coronary angiography. The
images, acquired an hour or more after injection, did not show
the perfusion defects in the regions expected from the catheter
angiogram. But by doing the 201Tl imaging much sooner after
the coronary angiography, the deficits expected from the ana-
tomical coronary studies did appear on the Anger camera
images. Some of these defects ‘filled in’ when patients returned
for a late image (Pohost G (1975) Personal communication.). In
the clinic, it was noticed that often, a defect appeared when
patients were stimulated by exercise during an injection, but if
the patient was called back for a follow-up image without injec-
tion, the defect disappeared. When this happened, a diagnosis of
‘reversible defect’ was made. The diagnosis was that there was
sufficient residual flow to conclude the heart defect could be
reversed by coronary surgery (Pohost et al., 1977). Animal stud-
ies with graded coronary flow reductions validated the stress/rest
approach (Pohost et al., 1981). Thus, 201Tl studies became stress/
rest-imaging studies with a single injection of 201Tl, because
there is usually enough circulating 201Tl to enable the tracer to
accumulate in the exercise-based defect given sufficient time.
Since the blood concentration of 99mTc-pertechnetate decreases
rapidly after the first injection done at exercise, a second injection
for the heart at rest is needed to evaluate the presence or absence
of some residual flow to the compromised region during rest.

Mismatch between FDG uptake and perfusion
The first paper on the subject of cardiac metabolism versus flow
was from UCLA (Marshall et al., 1983). That pioneering work led
to the widely used clinical diagnosis of flow–metabolism mis-
match wherein a regional decrease in perfusion, as inferred from a
relative decrease in 13N-ammonia retention, is not matched by a decrease
in FDG accumulation. The biochemistry and physiology behind
this mismatch are not resolved. Two biochemical factors to be
considered are the pentose shunt activity and the sustainability
of glutamine synthetase in the compromised myocardium.

Bone and Bone Marrow Function
The first animal distribution studies with radionuclides used
the beta emitter 32P, injected as sodium phosphate into rodents
in 1935 (Chiewitz and Hevesy, 1935). Amounts of 32P
obtained from neutron bombardment of sulfur-32 were small
as the neutrons were generated from a radium–beryllium neu-
tron source. When E.O. Lawrence heard of the needs Hevesy
had for 32P (14-day half-life), he produced millicurie amounts
at the Berkeley cyclotron and shipped this radionuclide by
regular mail to Hevesy periodically. During WWII, Charles
Pecher and Jacqueline Pecher studied the biological distribu-
tion of 89Sr and 45Ca in animals (Pecher, 1941). This work
suggested the use of radioactive strontium (notably 89Sr) for therapy of cancers residing
in bone and led to the successful application for cancer-related
bone pain palliation. That work also led to an understanding
of the biohazards from 90Sr and to a lesser extent 89Sr in fallout
from nuclear weapon testing as the kinetic analysis in 1941
showed the accumulation in maternal milk of animals. The
chemical and radiation toxicity of plutonium isotopes was
studied in animals and human subjects starting in 1945 as
part of the Manhattan project of the US nuclear weapons
program, and isotopes of plutonium were injected in human
patients for the purposes of evaluating cancer therapy potentials
as well as for understanding radiation dosimetry after human
exposure. Patients were studied through excreta and bone auto-
radiography after intravenous injections of small doses (Advisory
Committee on Human Radiation Experiments, 1995). The pos-
itron emitter 18F was used to study rat bone metabolism using
autoradiography as the quantitative detector (Wallace-Durbin,
1954). Human bone scanning followed many years later using
18F (French and McCready, 1967) and 99mTc-labeled phos-
phonic acids, with widespread applications using 99mTc-EHDP
(ethane-hydroxy-diphosphonate) (Subramanian and McAfee,
1971; Unterspann, 1976; Yano et al., 1972).
The availability of iron radionuclides gave physicians the
opportunity to study bone marrow functioning because the
accumulation of iron in red cells and red cell precursors in
the bone marrow allowed inferences regarding the kinetics of
blood production. But the high energies of the available iron
radionuclides require heavy collimation. Thus, multiple indi-
vidually collimated detectors were aimed at the principal
organs associated with blood production and turnover (i.e.,
marrow of the skull, long bones, sternum, liver, spleen, and
heart). This was done using single-crystal scintillation and
photomultiplier detectors placed over multiple sites in the
body (Huff et al., 1950). The positron emitter
52Fe allowed
studies of the anatomical distribution of active bone marrow as
shown in Figure A-7.23 (Anger and Van Dyke, 1964).
A frequently overlooked method of evaluation of
bone minerals as well as whole-body content of sodium,
potassium, and calcium is neutron activation wherein the
body is exposed to a low fluence of neutrons and the activated
elements are evaluated in a whole-body counter, constructed
with thick steel walls and ceiling (e.g., battleship steel plates) to
reduce background. The measurement of the total body con-
tent of some elements by neutron activation analysis was
shown to be feasible in the living human subjects by
Anderson et al. (1964).