Monte Carlo techniques in medical radiation physics.
ABSTRACT The author's main purpose is to review the techniques and applications of the Monte Carlo method in medical radiation physics since Raeside's review article in 1976. Emphasis is given to applications where photon and/or electron transport in matter is simulated. Some practical aspects of Monte Carlo practice, mainly related to random numbers and other computational details, are discussed in connection with common computing facilities available in hospital environments. Basic aspects of electron and photon transport are reviewed, followed by the presentation of the Monte Carlo codes widely available in the public domain. Applications in different areas of medical radiation physics, such as nuclear medicine, diagnostic x-rays, radiotherapy physics (including dosimetry), and radiation protection, and also microdosimetry and electron microscopy, are presented. Current and future trends in the field, like Inverse Monte Carlo methods, vectorization of codes and parallel-processor calculations, are also discussed.
Phys. Med. Biol., 1991, Vol. 36, No 7, 861-920. Printed in the UK
Review
Monte Carlo techniques in medical radiation physics
Pedro Andreo
Department of Radiation Physics, Karolinska Institute and University of Stockholm,
Box 60211, 104 01 Stockholm, Sweden
Received 31 August 1990, in final form 27 February 1991
Contents
1. Introduction  862
1.1. Scope of this review  862
2. The basics of the Monte Carlo practice  863
2.1. Random numbers and other computational details  863
2.2. Photon transport  866
2.3. Electron transport  868
2.3.1. Condensed history (‘macroscopic’) techniques  869
2.3.2. Detailed history (‘microscopic’) techniques  870
2.3.3. Variance reduction in electron transport  871
3. Macroscopic Monte Carlo codes in the public domain  875
4. Applications in medical radiation physics  881
4.1. Nuclear medicine  881
4.1.1. Detectors  881
4.1.2. Imaging correction techniques  882
4.1.3. Absorbed dose calculations  884
4.2. Diagnostic radiology  885
4.2.1. Detection systems  885
4.2.2. Determination of physical quantities in diagnostic radiology  886
4.2.3. Radiation protection aspects  886
4.3. Radiotherapy physics  887
4.3.1. Teletherapy sources and dosimetry equipment  888
4.3.2. In-phantom simulations  890
4.3.3. Treatment planning applications  891
4.3.4. Calculations in brachytherapy  893
4.4. Radiation protection  895
4.5. Applications based on microscopic Monte Carlo techniques  897
4.5.1. Electron microscopy  897
4.5.2. Radiation track structure and microdosimetry  898
5. Inverse Monte Carlo techniques  901
6. Vectorized and parallel Monte Carlo simulation  904
7. Conclusions  908
Acknowledgments  909
References  909

0031-9155/91/070861+60$03.50 © 1991 IOP Publishing Ltd
1. Introduction
Since the review article by Raeside (1976), where the principles of the Monte Carlo
method and its first applications in medical physics were described, the number of
publications in this field using the simulation of the transport of radiation continues
to increase. Following Nahum (1988a), practically one scientific article per issue
is now being published in Physics in Medicine and Biology, for instance, and a similar
rate can be observed in parallel journals. Several books with comprehensive reviews,
‘technical descriptions’ or proceedings from Monte Carlo courses have also been published
recently (Morin 1988, Jenkins et al 1988, Kase et al 1990).
At the time of the 1976 review article by Raeside most of the Monte Carlo work had
been developed at large research centers using mainframe computer systems. This was
the case with most of today’s well known Monte Carlo codes such as ETRAN (Berger
and Seltzer 1968), EGS (Ford and Nelson 1978), MCNP (Thompson 1979, based on
previous codes by Cashwell et al 1972, 1973), or MORSE (Straker et al 1976), some
of which will be discussed in this article. The basic references in the field of photon
and electron transport simulation were already available at that time (Cashwell and
Everett 1959, Berger 1963, etc).
A number of Monte Carlo codes were written during the 1970s for application to
medical physics, mainly radiotherapy physics. The codes developed by Patau (1972),
Nahum (1976) and Abou Mandour (1978) belong to this group, all of them developed
on mainframe computer systems and with a strong emphasis on the simulation
of electron transport. This aspect explains the weight given to data calculated with
ETRAN for comparisons, as the other codes mentioned were generally related to either
reactor neutron/photon physics (MCNP, MORSE) or highenergy physics (EGS). The
availability of minicomputers (mainly DEC PDP-11) in many medical institutions made
possible the development of ‘smaller’ Monte Carlo codes capable of simulating either
specific problems in radiotherapy physics with photon beams (Webb and Parker 1978,
Webb 1979), the transport of photons in Compton-scatter tissue densitometry (Battista
and Bronskill 1978) or the full electromagnetic cascade used to derive quantities
for electron dosimetry in water (Andreo 1980, 1981).
The power of many of the mainframe computer systems initially used for Monte
Carlo simulations is available today in many departments of hospital physics around
the world, sometimes even on the desk of a hospital physicist. At the same time,
some of the general purpose computer codes have become widely distributed through
institutions like the Radiation Shielding Information Center (RSIC) at Oak Ridge
National Laboratory in the USA or the Nuclear Energy Agency Data Bank (NEA) at
Gif-sur-Yvette (France) in Europe. The result is a broader group of scientists with
knowledge of Monte Carlo techniques; as a consequence, the range of applications for
the Monte Carlo method continues to increase.
1.1. Scope of this review
The main purpose of this work is to review the techniques and applications of the
Monte Carlo method in medical radiation physics since Raeside’s review article in
1976. Emphasis will be given to applications where photon and/or electron transport
in matter is simulated. Although the method has also been applied to neutrons and
heavy charged particles in radiotherapy physics (cf Büche and Przybilla 1981, Sisterson
et al 1989, Smith et al 1989), or in the simulation of their track structures (cf Paretzke
1987) the considerably smaller number of applications with these particles, and the
limited space for this review, exclude them from a detailed survey. The use of Monte
Carlo techniques in nonradiation medical physics also falls outside the scope of this
review.
Some practical aspects of Monte Carlo practice, mainly related to random numbers
and other computational details, will be discussed in connection with common
computing facilities available in hospital environments. Basic aspects of electron and
photon transport will be reviewed, followed by the presentation of the Monte Carlo
codes widely available in the public domain. Applications in different areas of medical
radiation physics, such as nuclear medicine, diagnostic x-rays, radiotherapy physics
(including dosimetry), and radiation protection, and also microdosimetry and electron
microscopy, will be presented. Current and future trends in the field, like Inverse Monte
Carlo methods, vectorization of codes and parallel-processor calculations, will also be
discussed.
2. The basics of the Monte Carlo practice
The general principles of the Monte Carlo method have been discussed by Raeside
(1976) and more recently by Turner et al (1985). They have also been introduced in
a number of other publications (cf Cashwell and Everett 1959, Shreider 1966, Carter
and Cashwell 1975, James 1980, Lund 1981) and their repetition will be avoided here
as much as possible. In what follows some emphasis is given to basic and practical
ideas of interest in medical physics.
2.1. Random numbers and other computational details
Random numbers will be treated first as they play a fundamental role in Monte Carlo
calculations. ‘Philosophical’ arguments on sets of ‘truly-random’, ‘pseudo-random’
and ‘quasi-random’ numbers will not be considered here. The basic ideas on random
number generators (RNGs) have been discussed in the initial references cited here;
updated (and more complete) reviews have been given by James (1980, 1990) and
Ehrman (1981). The classic by Knuth (1969) is still mandatory reading in the field of RNGs,
and Sowey (1972) has given about 300 references on random number generation and
testing covering the period 1927 to 1971.
Although some new trends in RNGs have been described recently, including ‘intelligent’
random number techniques (learning and biasing) such as those used in the
MCNP code (Booth 1988), Lehmer’s method is still the most commonly used for RNGs.
It is called the multiplicative-linear-congruential method. Given a modulus M, a multiplier
A, and a starting value ξ_0 (‘seed’), random numbers ξ_i are generated according
to

ξ_i = (A ξ_{i-1} + B) modulo M    (1)

where B is a constant. In multiplicative-linear-congruential generators M is usually
chosen to be 2^b, b being the number of bits in the integer representation of data in
the computer. Constants A and B are chosen to give a ‘well behaved’ RNG, which is
a difficult task. Congruential RNGs generally have a maximum repetition period of
length M/4 when M is a power of 2, or M − 1 if M is prime (Ehrman 1981), which
is long enough for most Monte Carlo simulations.
The length of the period of a RNG must be long enough to avoid repetitions in
the sequence of numbers used during the simulation process, as otherwise correlations
can be produced. There are simulations, however, where the set of independent
numbers needed might exceed the repetition period of the RNG being used. When
the generator is ‘well behaved’, even if the sequence of numbers is used more than
once, the probability of having more than one particle history starting in the same
position of the sequence of random numbers is practically negligible. This means that
when the end of the sequence is reached it will be started again during some of the
sampling procedures used along the simulation. The final effect will be equivalent to
initiating the sequence of random numbers using different ‘seeds’, but it should be
investigated for each computer/RNG/application configuration. The risk of repetition
can be decreased using RNGs with very long periods, such as the McGill University
package ‘Super-Duper’ in IBM assembler (cf James 1980, 1990) or RNG64 due to Bielajew
(1986), both of which produce periods as long as one might expect from a 64-bit
machine. The use of such RNGs however needs longer calculation times than the
generators previously reported and their choice must be based on a careful evaluation.
This author’s experience of the simulation of ionization chambers within a phantom,
where hundreds of millions of histories of 60Co photons were simulated using the EGS
code, did not show any change in the results, or in the extremely slow convergence towards
the low uncertainty being sought, when RNG64 or the standard generator in EGS (see
later) was used.
The RNG of congruential type which has probably received most attention in the
Monte Carlo literature is the infamous RANDU, distributed by IBM (1968) in computers
of their 360 and 370 series, where A = 65539, B = 0 and M = 2^31 (32-bit integer,
bit zero reserved for the sign, and a period of 2^31 − 1). The same RNG has also been
implemented by DEC in minicomputer-based operating systems with 16-bit integer
arithmetic (RT-11, RSX-11M, etc), where it is possible to call for modulus 2^32 using two
registers which allow the use of the modulus to any power by shifting. (For compatibility
between VAX-11 and PDP-11 FORTRAN, VAX/VMS offers the option of using this
generator also (DEC 1984).) The predicted period for DEC’s RANDU is 2^30 (in 32-bit
DEC machines bit 15 is the sign bit) and it produces output random numbers identical to
IBM’s generator. The wide availability of DEC PDP-11 machines in hospital physics
environments during the last 15 years has led to a widespread use of this generator
(cf Battista and Bronskill 1978, Bond et al 1978, Andreo 1980, etc).
A long period is not the only desirable property in a RNG and different tests
must be applied to verify the true randomness of the sequence. Knuth (1969) is still
the standard source of information on randomness tests, which Morin et al (1988) have
performed on 16-bit minicomputers for RNGs used in different applications of interest
in medical physics. Even if the generator RANDU passed the majority of tests, the
so-called ‘Marsaglia effect’, or how ‘random numbers fall mainly in the planes’ (Marsaglia
1968), has shown that this generator produces correlated triplets of random numbers
(cf Coldwell 1974). The use of techniques where n-tuples of random numbers are used for
decision-making or branching is common in transport simulation, which should warn
us against the use of such RNGs. An improvement of the properties of RANDU has
been obtained by changing the multiplier A from 65539 to 69069 (Marsaglia 1972),
which is the standard RNG used in VAX FORTRAN (MTH$RANDOM) with M = 2^32, and
in the SLAC RN1 generator. Another RNG commonly used today in medical physics
calculations is the SLAC RN6 generator, which has A = 663608941 and B = 0 and is
included in the EGS code. Both generators have similar properties regarding period
length and number of planes in 3-space, RN6 being faster, especially with the ‘in-line’
coding included in EGS which avoids subroutine calls (cf Ehrman 1981). Theory
predicts a period of about 10^9 for these two generators, but a repetition of 2, 3 and 4
consecutive numbers can be found in certain implementations after generating approximately
3.5 × 10^7 random numbers. In VAX FORTRAN the RNG generates numbers in
the interval 0 ≤ ξ < 1 (DEC 1984), which can yield obvious run-time errors when
expressions like ln(ξ), which are frequent in particle transport, are used. One can
alternatively use ln(1 − ξ), which is as random as ξ. SLAC RN6 on the other hand
generates numbers in the interval 0 ≤ ξ ≤ 1, and therefore a detailed condition to
avoid the generation of zeros should be included.
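The remedy suggested above can be made concrete in a short sketch (the function name is invented; any generator returning deviates in [0, 1), such as Python's standard one, will do):

```python
import math
import random

def path_length(mean_free_path, xi):
    # Sample s = -lambda * ln(1 - xi), as in equation (3) below.  For a
    # generator with 0 <= xi < 1 the argument of the logarithm stays in
    # (0, 1], so ln() is never evaluated at zero, unlike ln(xi) itself.
    return -mean_free_path * math.log(1.0 - xi)

# random.random() returns values in [0, 1), so 1 - xi lies in (0, 1]
s = path_length(2.0, random.random())
```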
A very interesting review of RNGs has been published recently by James (1990),
where it is recommended that ‘old-fashioned generators (like the multiplicative-linear-congruential
previously described) should be archived and made available only upon
special request for historical purposes or in situations where the user really wants a
bad generator’, and Fibonacci and shift register (also known as Tausworthe) generators
should be used instead. Furthermore, they should be implemented in the form of
subroutines returning an array of random numbers rather than a function returning
one number each time. Fibonacci RNGs are based on the series with the same name
(where each element is the sum of the two preceding elements) but use some arithmetic
or logical operation between two numbers which have occurred somewhere earlier in
the sequence, not necessarily the last two:

ξ_n = (ξ_{n-p} ⊕ ξ_{n-q}) modulo M    (2)

where ⊕ is a binary or logical operation and p and q are the lags (hence the name
of lagged Fibonacci sequences) defined such that p > q. Shift register RNGs are also
based on this formula but with M = 2; therefore only single bits are generated, which
are collected into words using a shift register. James (1990) has given the FORTRAN
code of the generators RANMAR and ACARRY of the lagged Fibonacci type, with
periods of 2^144 (≈2 × 10^43) and 2^570 (≈10^171) respectively (!), which are completely
portable, that is they give bit-identical results on all machines with at least 24-bit
mantissas in the floating-point representation. An ‘in-line’ version of RANMAR has
just become available for the EGS Monte Carlo system (Bielajew 1990a).
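A minimal additive lagged-Fibonacci generator along the lines of equation (2) might look as follows; the lags (p, q) = (55, 24) and the auxiliary LCG used to fill the initial lag table are illustrative choices only, and this is emphatically not the RANMAR or ACARRY code given by James (1990):

```python
from collections import deque

class LaggedFibonacci:
    """Additive lagged-Fibonacci RNG: x_n = (x_{n-p} + x_{n-q}) mod M."""

    def __init__(self, seed, p=55, q=24, m=2**32):
        self.p, self.q, self.m = p, q, m
        # Fill the initial lag table of p values with an auxiliary LCG stream.
        state = seed % m
        buf = []
        for _ in range(p):
            state = (69069 * state + 1) % m
            buf.append(state)
        self.buf = deque(buf, maxlen=p)   # oldest entry is buf[-p]

    def next_int(self):
        x = (self.buf[-self.p] + self.buf[-self.q]) % self.m
        self.buf.append(x)                # deque drops the oldest entry
        return x

    def fill(self, n):
        # Return an array of n uniform deviates in [0, 1), in the spirit of
        # James's advice to return arrays rather than single numbers.
        return [self.next_int() / self.m for _ in range(n)]

u = LaggedFibonacci(987654321).fill(100)
```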
A problem that has not received enough attention in certain Monte Carlo calculations
is that of truncation errors (different from round-off errors, characteristic of
computer hardware). It is well known that the use of single or double precision can
yield different results for certain Monte Carlo computations. This is the reason why
some Monte Carlo codes which are available for different word-length computers are
coded in double precision when they are used on 32-bit machines (for instance ITS,
cf Halbleib and Mehlhorn 1984). This might be of importance, for instance, during
the simulation of electron transport where extremely short pathlengths are sometimes
selected, especially in the proximity of boundaries. In those cases the use of double
precision for certain variables should not be excluded without careful verification; the
price of an increase in computation time may be well worth paying.
The treatment of uncertainties in Monte Carlo calculations deserves some additional
comments. The classification of uncertainties recommended by CIPM (1981)
for experimental methods may also be adopted in Monte Carlo procedures (Andreo
and Fransson 1989, Andreo 1990a). In such recommendations, the classification of
uncertainties commonly used, namely ‘random’ and ‘systematic’ uncertainties, have
been replaced by uncertainties of ‘category A’ (objective, evaluated using statistical
methods) and ‘category B’ (subjective, estimated by any other method). Uncertainties
in both categories are specified by standard deviations (or variances) which can
be combined using the general propagation law for uncertainties. However, whereas
the evaluation of variances in ‘category A’ is achieved using statistical procedures,
both in experiments and in Monte Carlo calculations, uncertainties in ‘category B’
(non-statistical methods) are more difficult to evaluate in Monte Carlo calculations
as multiple steps are involved (programming; tabulation of data, in many cases with
unknown uncertainties; etc). A conservative estimate is to make both categories of
uncertainties (A and B) equal and combine them in quadrature; a multiplicative factor
can then be used to obtain an ‘overall’ uncertainty (CIPM 1981).
To evaluate uncertainties of category A (i.e. with statistical methods), some Monte
Carlo codes divide the total number of histories to be simulated into a number of
batches, and statistical uncertainties are evaluated using the mean values of the variables
(tallies) scored in each batch. Typically, 10 batches are used in Monte Carlo
calculations, as with EGS user-codes (Rogers 1982) or ITS (Halbleib and Mehlhorn
1984), although they treat the restart of a calculation differently. Bond et al (1978)
have referred to 15-100 batches for each simulation. A different procedure consists
of scoring both the quantity of interest (cumulative tally) and its squared value (cumulative
squares of the tally) whenever a particle is transported a given pathlength
(cf Shreider 1966). Uncertainties can then be evaluated during the simulation of each
history (inside the main loop of the simulation), and the process is equivalent to assuming
that every history is a batch on its own. This is the method used, for instance,
in the codes MCNP (Soran 1980) or MCEF (Andreo 1980, 1981). As can be observed in
figure 1 for Gaussian distributed numbers, only when a very large number of batches
is considered in the first technique will the statistical evaluation approach the correct
estimation given by the second method.
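The two category-A procedures just described can be contrasted in a small sketch, with Gaussian distributed numbers standing in for real tallies (function names and the 10-batch choice are illustrative):

```python
import math
import random

def batch_uncertainty(samples, n_batches=10):
    # First procedure: standard error estimated from the means of n_batches
    # batches of equal size.
    n = len(samples) // n_batches
    means = [sum(samples[i * n:(i + 1) * n]) / n for i in range(n_batches)]
    grand = sum(means) / n_batches
    var = sum((m - grand) ** 2 for m in means) / (n_batches - 1)
    return math.sqrt(var / n_batches)

def history_uncertainty(samples):
    # Second procedure: cumulative tally and cumulative squared tally,
    # i.e. every history is treated as a batch on its own.
    n = len(samples)
    s = s2 = 0.0
    for x in samples:          # in a real code this runs inside the main loop
        s += x
        s2 += x * x
    mean = s / n
    var = (s2 / n - mean * mean) * n / (n - 1)
    return math.sqrt(var / n)

random.seed(1)
scores = [random.gauss(1.0, 0.2) for _ in range(10000)]
u_batch = batch_uncertainty(scores)
u_hist = history_uncertainty(scores)
```

With many samples the two estimates agree; with only a handful of batches the first estimate is itself noticeably noisy, which is the behaviour illustrated in figure 1.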
2.2. Photon transport
Most codes dealing mainly with photon transport assume that electrons generated
through different interactions are absorbed ‘on the spot’ and the simulation process
becomes therefore relatively simple. All the physical interactions of photons (or neutrons)
are completely simulated following the general techniques described by Raeside
(1976) or Turner et al (1985), for example, without much computational effort. From
the exponential attenuation distribution, the appropriate cumulative distribution can
be evaluated and the distance s between interactions in a medium (step length) is
determined by

s = -λ ln(1 - ξ)    (3)

λ being the mean free path at the photon energy at the beginning of the step and ξ a
random number, 0 ≤ ξ < 1. The type of interaction event occurring after the step s is
sampled from the appropriate relative probabilities p_i (ratios of single cross sections
to the total cross section), using their cumulative distribution function P_i. Another
random number selects the interaction event j(ξ) such that

P_{j-1} = Σ_{i=1}^{j-1} p_i ≤ ξ < Σ_{i=1}^{j} p_i = P_j    (4)
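Equations (3) and (4) translate directly into code; in this sketch the partial cross sections and event names are invented placeholders, not data for any real medium:

```python
import bisect
import math
import random

def sample_step(mean_free_path, rng=random.random):
    # Equation (3): s = -lambda * ln(1 - xi)
    return -mean_free_path * math.log(1.0 - rng())

def sample_interaction(partial_cross_sections, rng=random.random):
    # Equation (4): pick the event j with P_{j-1} <= xi < P_j, where the
    # P_j are cumulative sums of the relative probabilities p_i.
    total = sum(partial_cross_sections)
    cumulative, acc = [], 0.0
    for sigma in partial_cross_sections:
        acc += sigma / total
        cumulative.append(acc)
    j = bisect.bisect_right(cumulative, rng())
    # Clamp against floating-point round-off in the last cumulative sum.
    return min(j, len(partial_cross_sections) - 1)

# Hypothetical partial cross sections (arbitrary units) for, say,
# photoelectric, Compton and pair-production events at one photon energy.
events = ['photoelectric', 'compton', 'pair']
sigma = [0.2, 0.7, 0.1]
chosen = events[sample_interaction(sigma)]
```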
Figure 1. Statistical uncertainty estimates for Gaussian distributed numbers: the batch-based evaluation approaches the history-by-history estimate only for a large number of batches (graph not reproduced).
(1983, 1986), whereas a weighted sampling from the Thomson distribution has been
used by Chen et al (1980). The significance of electron binding corrections in the
scattering of low-energy photons has been investigated by Beernink et al (1983) and
Williamson et al (1984); they have concluded that neglecting these effects results in
a significant underestimation of scattering angles, which are of importance in Monte
Carlo calculations.
The small number of interactions that take place when photons traverse matter has
motivated the development of ‘variance reduction’ techniques to decrease uncertainties
that can be evaluated by statistical methods (i.e. category A). In such techniques
the ‘natural physics’ (and scoring procedures of tallies) is manipulated in a number
of different ways so as to increase the relative occurrence of certain events. Forced
interactions, importance sampling, Russian roulette, particle splitting, etc are commonly
used techniques, well described in many of the classic references in the field (cf
Cashwell and Everett 1959, Carter and Cashwell 1975, etc), which were also briefly
discussed by Raeside (1976). A collision density estimator to increase the efficiency of
small-angle scatter calculations in x-ray diagnostics has been developed by Persliden
and Alm Carlsson (1986); and Gardner et al (1987) have derived algorithms to force
scattered radiation into detectors of different shapes. An up-to-date review describing
the techniques in more common use today has been given by Bielajew and Rogers
(1988).
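Two of the techniques named above, particle splitting and Russian roulette, can be sketched generically as manipulations of statistical weights (the minimal dictionary representation of a particle is invented for illustration):

```python
import random

def split(particle, n):
    # Particle splitting: replace one particle by n copies, each carrying
    # 1/n of the original statistical weight, so tallies remain unbiased.
    w = particle['weight'] / n
    return [dict(particle, weight=w) for _ in range(n)]

def russian_roulette(particle, survival_prob):
    # Russian roulette: kill the particle with probability 1 - p; a
    # survivor's weight is multiplied by 1/p to keep the mean unchanged.
    if random.random() < survival_prob:
        return dict(particle, weight=particle['weight'] / survival_prob)
    return None   # particle terminated

p = {'energy': 1.25, 'weight': 1.0}
copies = split(p, 4)
```

Splitting is typically applied where particles are heading towards a region of interest, roulette where they are heading away from it; the weight bookkeeping is what keeps the ‘manipulated physics’ unbiased.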
It is interesting to note that some applications have been developed, mainly in
the area of radiotherapy physics, which can be considered as an important variance
reduction technique. In general they combine Monte Carlo and analytical techniques,
yielding results that would otherwise require extremely long CPU times for a direct
simulation. A simple application, where Monte Carlo calculated depth-doses for monoenergetic
photons have been used to produce data for bremsstrahlung spectra, has
been reported by Andreo and Nahum (1985) as equivalent to the direct Monte Carlo
simulation of millions of photons. A more elaborate approach consists of the convolution
of spatial absorbed dose distributions from monoenergetic photons to determine
dose distributions for radiotherapy treatment planning (see section 4.3.3), which yields
results corresponding to the direct simulation of billions of photon histories (cf Ahnesjö
et al 1987). The potential of this method has not yet been applied to some
of the problems which demand intensive variance reduction techniques, such as the
simulation of ionization chambers for radiotherapy dosimetry which are described in
section 4.3.1.
2.3. Electron transport
For the simulation of the complete electromagnetic cascade the inclusion of electron
transport adds a new dimension to the problem. In principle, the direct simulation of
all the physical interactions (sometimes referred to as ‘microscopic’ simulation) could
be used for electrons, as has been described for photons or neutrons. The only difficulty
in doing so is to keep track of all generations of electrons and photons produced
during successive interactions. Their transport has to be considered in a systematic
way until all the particles have been simulated. However, the very large number of
interactions that take place during electron slowing down (of the order of 10^4 collisions
in aluminium, from 0.5 MeV to 1 keV, cf Berger and Wang 1988) makes it unrealistic
to simulate all the physical interactions in the majority of applications. This aspect
has motivated the development of the so-called ‘condensed history’ or ‘macroscopic’
techniques (Sidei et al 1957, Schneider and Cormack 1959), where interactions are
grouped in different ways.
2.3.1. Condensed history (‘macroscopic’) techniques. The principles of the ‘condensed
history’ technique will be described following the classical review on Monte Carlo
charged particle transport by Berger (1963). Physical interactions of electrons are
classified into groups which provide a detailed ‘macroscopic picture’ of the physical
process:

s_0, s_1, s_2, ..., s_n, ...
E_0, E_1, E_2, ..., E_n, ...
u_0, u_1, u_2, ..., u_n, ...
r_0, r_1, r_2, ..., r_n, ...

where s_n is the distance travelled, E_n the energy, u_n the direction and r_n the position
of the electron. The transition from step n to n + 1 accounts for many interactions,
where multiple collision models, like multiple scattering or stopping-power theory, are
considered. The step size (distance travelled or energy loss between two steps) has to
be chosen in such a way that the total number of steps is kept as small as possible,
as computation time will be proportional to the number of steps. On the other hand,
the step size has to be such that multiple collision models for angular deflections and
energy losses are valid, that is, short pathlengths and many single interactions per step.
According to Berger (1963) the ‘condensed history’ technique can be classified into
two procedures:
(1) Class I, which groups all the interactions and uses a predetermined set of
pathlengths, the random sampling of interactions being performed at the end of the
step. The simplest choice is a constant pathlength. The disadvantage is that the
angular deflection increases from step to step as the electron energy decreases, which
demands further resizing in order to be able to use a multiple scattering theory.
Logarithmic spacing is a better selection, as angular deflections change little from step
to step. This is chosen so that, on average, the energy is reduced by a constant factor
k per step, that is, the fraction of energy lost per step is constant.
(2) Class II, or ‘mixed procedures’, which groups only minor collisions where energy
losses or deflections are small, but considers the individual sampling of major
events or ‘catastrophic’ collisions, where a large energy loss or deviation occurs (figure
2). Further details of the technique can be found in Berger (1963) and Andreo
(1985). An extension of Class II procedures has been introduced by Andreo and
Brahme (1984) to account for energy losses and deflections below the threshold of
catastrophic collisions, using restricted energy-loss straggling and restricted multiple
scattering during the classification between major and minor events.
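The logarithmic spacing of Class I schemes can be sketched in a few lines; the initial energy, transport cut-off and reduction factor k below are purely illustrative values:

```python
def energy_grid(e0, e_cut, k=0.95):
    # Class I logarithmic spacing: each step reduces the energy by a
    # constant factor k, so the fractional energy loss per step is fixed
    # and angular deflections change little from step to step.
    energies = [e0]
    while energies[-1] * k > e_cut:
        energies.append(energies[-1] * k)
    return energies

grid = energy_grid(10.0, 0.01)   # MeV; illustrative values only
```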
Class II schemes have some advantages over Class I procedures. In particular,
the initial state of knock-on electrons and bremsstrahlung photons is unambiguously
defined, angular deviations can be treated more accurately and the correlation between
energy loss and angular deflection is always conserved. On the other hand, Class I
schemes can include complete energy-loss straggling, which is inherently independent
of the electron transport cut-off (T_c) and the threshold energy of knock-on electrons
(T_δ). The dependence on T_δ is an important limitation of Class II schemes, as the
sampling of electron energy-loss straggling is limited to the interval [T_δ, 0.5T_0] ([T_δ, T_0]
Figure 2. Energy-distance diagram of the Class II procedure used to describe the electron (and photon) tracks shown in the upper part of the figure. Electron ‘catastrophic’ collisions occur at positions 1 (bremsstrahlung interaction), 2 and 3 (knock-on inelastic collisions), and 5 (elastic collision, no energy loss). The photon is scattered by a Compton interaction at position 4. All secondary electrons (bottom of lower figure) are followed down to the transport cut-off of the simulation, E_cut, where they are absorbed (stars). The broken descendent line from E_0 corresponds to a calculation of the electron energy loss based on the continuous-slowing-down approximation (CSDA).
for positrons), which might become critical during the treatment of low-energy electrons
(below 100 keV). To overcome this limitation, restricted Class II procedures (Andreo
and Brahme 1984) perform an additional sampling below Ts which makes energy
straggling independent of the threshold energy of knock-on electron production. The
technique needs to be developed further to include, for example, binding effects in
inelastic collisions.
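As an illustration of the restricted sampling interval [Ts, T0/2] just described, the following sketch draws knock-on electron energies by inverse-transform sampling from the leading 1/ε² term of the Møller cross section. This is a deliberate simplification (the full Møller expression contains further energy-dependent terms), and the function and parameter names are our own, not taken from any of the codes discussed in this review.

```python
import math
import random

def sample_knockon_energy(T0, Ts, rng=random):
    """Sample a knock-on electron energy in the Class II interval [Ts, T0/2].

    Uses only the leading 1/eps^2 term of the Moller cross section (a
    simplification; the full expression has additional terms).  Inverse-
    transform sampling: for p(eps) proportional to 1/eps^2 on [a, b],
    eps = 1 / (1/a - xi*(1/a - 1/b)) with xi uniform in [0, 1).
    """
    a, b = Ts, 0.5 * T0
    if a >= b:
        raise ValueError("threshold Ts must be below T0/2")
    xi = rng.random()
    return 1.0 / (1.0 / a - xi * (1.0 / a - 1.0 / b))
```

Because the density falls as 1/ε², most sampled transfers cluster just above Ts, which is why lowering Ts without restricted straggling (as in an unmodified Class II scheme) changes the simulated energy-loss distribution.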
The consideration of 'catastrophic' collisions in a Class II scheme is analogous
to the transport of photons, where all the interactions are considered individually.
For electrons a 'catastrophic' mean free path, λcat, may be defined, obtained from
the addition of single events, such as inelastic, λinel, elastic, λel, and bremsstrahlung
interactions, λbrem. The distance (step length) between major events is again given by
equation (3), i.e. s = -λcat ln(1 - ξ) (cf Andreo 1985). In fact, by changing the thresholds
of catastrophic events it would be possible to perform a continuous transition
from full grouping (macroscopic) to single-interaction (microscopic) simulation.
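The combination of the single-event mean free paths into λcat, and the step-length sampling of equation (3), can be sketched as follows (an illustrative Python fragment; the function names and unit mean free paths are our own choices):

```python
import math
import random

def catastrophic_mfp(lambda_inel, lambda_el, lambda_brem):
    """Combine single-event mean free paths into the 'catastrophic' one.

    Inverse mean free paths (interaction probabilities per unit length) add,
    so 1/lambda_cat = 1/lambda_inel + 1/lambda_el + 1/lambda_brem.
    """
    return 1.0 / (1.0 / lambda_inel + 1.0 / lambda_el + 1.0 / lambda_brem)

def sample_step(lambda_cat, rng=random):
    """Sample the distance to the next major event: s = -lambda_cat*ln(1 - xi)."""
    xi = rng.random()          # uniform random number in [0, 1)
    return -lambda_cat * math.log(1.0 - xi)
```

Raising the catastrophic thresholds lengthens λcat (fewer events are treated individually), which is the continuous transition between macroscopic and microscopic simulation mentioned above.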
2.9.2. Detailed history ('microscopic') techniques. Most existing charged-particle
Monte Carlo codes are based on the multiple-collision models for scattering and energy
loss previously mentioned. The majority of applications used in radiotherapy physics
(the main area of medical physics where high-energy electron transport has become of
importance) are based on these procedures. There are, however, important limitations
with such models, for instance when the geometrical regions (or distances to boundaries)
are very small, when dealing with very low-energy transfers (smaller than or close to
binding energies), or when the transport of low-energy electrons, below, say, 100 keV,
has to be simulated. In these situations either the number of collisions is not large
enough for a process to be considered multiple, or the physics behind the theory itself
is violated. In many of these cases the condensed-histories approach has to be abandoned
and the simulation of the transport has to be performed using single-interaction models.
This is the case with the majority of the Monte Carlo applications simulating track
structures, microdosimetry or electron microscopy.
As already mentioned, the microscopic Monte Carlo technique in electron transport
is identical to the approach used in photon (or neutron) simulation. In this sense,
the technique itself is simpler than the macroscopic model as it does not depend on
additional parameters governing the grouping and distance between collisions. However,
depending on the electron energy, the physics involved might become considerably
more complicated. There are still a number of poorly known cross sections at very
low energy and, in certain instances, the simulation in condensed media must rely on
experimental data extrapolated from gas-phase data, sometimes using Fano plots (σT
against ln T) to minimize the energy dependence.
The complexity of the techniques used in microscopic modelling varies considerably,
although a common approach is to neglect bremsstrahlung interactions. Simple
models used in early applications, in electron microscopy for example, have been
based on the simulation of all the elastic scattering events, calculating the step length
between consecutive collisions from the elastic mean free path. Energy losses were
determined from the Bethe theory of stopping power and, in some cases, an
approximation to account for energy-loss straggling has been included. An improvement to
this model has been to take inelastic collisions into account, the mean free path being
based on σel + σinel, which is the basic technique commonly used today. From this
point on, the main difference between existing microscopic Monte Carlo applications
lies mainly in the different theories, models or experimental data used to account for
the energy-loss single processes contributing to σinel (such as electronic ionization and
excitation), and refinements to treat atomic-shell structure (according to Paretzke (1987)
molecular changes become important at residual energies below, say, 10 eV). Other
differences, like the accuracy of the elastic cross sections (from screened Rutherford cross
sections to Mott partial-wave expansions), or the generation and transport of secondary
and higher-order generation electrons, can also be mentioned. Detailed reviews
of the different approaches used in electron microscopy and radiation track structure
(and microdosimetry) have been given by Kyser (1981) and Paretzke (1987) respectively.
An up-to-date review of cross sections at low energies for different materials
(mainly gases and liquid water) has been performed by Grosswendt (1988a).
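A minimal sketch of the early single-scattering model described above might look as follows. The `mfp_elastic` and `stopping_power` functions are caller-supplied model inputs, the isotropic deflection is only a placeholder for a real elastic angular distribution (screened Rutherford, Mott partial waves, ...), and bremsstrahlung is neglected, as in the simple models mentioned; none of this reproduces any particular published code.

```python
import math
import random

def simulate_track(T0, mfp_elastic, stopping_power, T_cut, rng=random):
    """Minimal single-scattering electron track (no bremsstrahlung).

    Every elastic collision is simulated individually: the step length is
    sampled from the elastic mean free path, and the energy loss along each
    step comes from a Bethe-like stopping power in the CSDA (no straggling).
    `mfp_elastic(T)` and `stopping_power(T)` are caller-supplied models.
    Returns the final position and residual energy (clamped at zero).
    """
    x = y = z = 0.0
    ux, uy, uz = 0.0, 0.0, 1.0          # initial direction: along +z
    T = T0
    while T > T_cut:
        # exponential step to the next elastic collision
        s = -mfp_elastic(T) * math.log(1.0 - rng.random())
        x += ux * s; y += uy * s; z += uz * s
        T -= stopping_power(T) * s       # CSDA energy loss along the step
        # placeholder deflection: isotropic in the lab frame
        # (NOT a real elastic angular distribution)
        cos_t = 2.0 * rng.random() - 1.0
        phi = 2.0 * math.pi * rng.random()
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        ux, uy, uz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
    return (x, y, z), max(T, 0.0)
```

The refinements discussed above (inelastic collisions in the mean free path, straggling, shell structure, better elastic cross sections) all slot into the marked places of this loop.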
2.9.3. Variance reduction in electron transport. Variance reduction techniques developed
for application to photon transport are, in principle, also applicable to electrons,
but in general the simulations do not involve events which are particularly rare. Apart
from correlated sampling techniques, where different particles are simulated using the
same sequence of random numbers, the techniques in common use are mostly based
on a careful study of the physics of the problem. 'Running parameters' used in the
simulation, such as the transport cutoff and the threshold energies for secondary electron
and photon production, become extremely important. In Class II schemes, for instance,
the threshold for the production of knock-on electrons, Ts, can be set equal to the energy
cutoff Tc for the transport (provided that energy-loss straggling is properly taken into
account), as it is time consuming to create a particle which will not be simulated.
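Correlated sampling, mentioned above, can be illustrated with a toy attenuation problem (entirely our own construction, not taken from any code in this review): the transmission through two slab thicknesses is estimated with the same sequence of random numbers, so that the statistical fluctuations largely cancel in the difference of the two estimates.

```python
import math
import random

def transmitted(depth, mu, xi):
    """1.0 if a sampled photon free path exceeds `depth` (attenuation mu)."""
    return 1.0 if -math.log(1.0 - xi) / mu > depth else 0.0

def diff_correlated(mu, d1, d2, n, seed=1):
    """Estimate the transmission difference with the SAME random sequence."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        xi = rng.random()                 # one xi drives both geometries
        total += transmitted(d1, mu, xi) - transmitted(d2, mu, xi)
    return total / n

def diff_independent(mu, d1, d2, n, seed=1):
    """Same estimate with independent sequences (higher variance)."""
    r1, r2 = random.Random(seed), random.Random(seed + 1)
    total = 0.0
    for _ in range(n):
        total += transmitted(d1, mu, r1.random()) - transmitted(d2, mu, r2.random())
    return total / n
```

Because the two indicator scores are positively correlated when driven by the same ξ, their difference fluctuates much less than with independent histories, which is the essence of the correlated sampling technique.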
Other techniques are directly related to the geometry of the problem. In general
Tc should be chosen such that an electron which has been slowed down to the cutoff
has a small probability of crossing to a different region. More details are involved in
the so-called 'range rejection' or 'electron trapping' techniques, where the history of an
electron without the energy to escape from a certain region, or alternatively to reach
another region, will be disregarded; this requires the determination of the electron
range and its comparison with the distance to the closest boundary (figure 3(a)). Less
time consuming is the inclusion of regions where the transport cutoff, Tc', is larger
than in the volume of interest, which can be considered as the simplest technique.
A 'virtual envelope' surrounding such a volume of interest can be considered, whose
dimensions are chosen such that an electron outside the envelope cannot reach the
volume of interest, and the additional transport cutoff Tc' is chosen on this basis
(figure 3(b)).
Figure 3. Variance reduction techniques in electron transport. (a) Range rejection or electron
trapping technique: an electron at (x,y,z)0 is requested to travel down to (x,y,z)1. Even if the
energy T0 is larger than the electron transport cutoff, Tc, in most cases the history can be
disregarded as the CSDA range r0 is shorter than the distance to the closest boundary, d. An
energy trapping Ttrap can be selected to check the condition r0 < d only when T0 < Ttrap.
(b) Virtual envelope technique: a region E is defined having a thickness t and the same electron
transport cutoff Tc as the volume of interest V. Electrons produced outside the region E ∪ V will
be able to reach V only if their CSDA range is longer than t. A cutoff Tc' (larger than Tc) can be
set outside E ∪ V such that the CSDA range associated with the electron energy Tc' is smaller
than t. The radiation yield should be investigated prior to the selection of Ttrap or Tc'.
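The range-rejection test of figure 3(a) reduces, in essence, to a comparison of the residual CSDA range with the distance to the closest boundary, gated by the trapping energy Ttrap. A schematic version (our own naming, with a caller-supplied range function) is:

```python
def range_rejection(T, T_trap, csda_range, distance_to_boundary):
    """Decide whether an electron history can be discarded (range rejection).

    Following figure 3(a): only when the energy is below a chosen trapping
    energy T_trap is the range test made; the history is dropped if the
    residual CSDA range is shorter than the distance to the closest
    boundary, so the electron cannot escape the current region.
    `csda_range(T)` is a caller-supplied lookup (e.g. tabulated CSDA
    ranges).  Bremsstrahlung escape is neglected by this test, so the
    radiation yield at T should be checked before relying on it.
    """
    if T >= T_trap:
        return False                 # too energetic: keep transporting
    return csda_range(T) < distance_to_boundary
```

Choosing Ttrap keeps the (comparatively cheap) range lookup from being performed for every energetic electron, while still trapping the slow ones that dominate the computing time.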
A similar region-by-region analysis can be performed to 'relax' the requirements
on the fraction of energy loss, the maximum allowable travelling distance or any other
run-time parameter used in general to achieve a more detailed transport through the
volumes of interest. In all cases it is important to realize that ignoring an electron
history involves excluding the probability of bremsstrahlung emission, which could
otherwise result in energy deposition in different regions. Therefore the radiation yield
at the different cutoff energies in the simulated medium should always be investigated
and kept reasonably low.
3. Macroscopic Monte Carlo codes in the public domain
Some of the Monte Carlo codes developed in large research centres are used today for
research in medical physics. This has been possible with the availability of considerable
computing power in hospital physics departments, together with a well-supported
distribution of the codes (through the RSIC at ORNL or the European NEA Data
Bank, for instance). Table 1 lists the Monte Carlo codes and systems cited by their
name throughout this review.
Two such codes, ETRAN and EGS, or results computed with them, have become
widespread and a brief discussion of their different approaches in dealing with the
simulation of radiation transport will be given here. Readers wishing to expand the
summary given here are referred to the so-called 'Erice book' (Jenkins et al 1988)
where the two codes are treated in depth. Other generally available Monte Carlo
codes like MORSE (cf Palta 1981) or OGRE (cf Nilsson and Brahme 1981) have also
been used in medical physics calculations, but their use has been less extensive than
the two codes which will be described.
The ETRAN Monte Carlo code, an acronym for electron transport (Berger and
Seltzer 1968), was originally developed at the National Bureau of Standards, USA,
to simulate the transport of electrons and photons with energies up to a few
MeV, and was extended later on for calculations at higher energies. The extension of
the (ETRAN-based) ITS system of codes to the multi-GeV region has been developed
recently (Miller 1989). Since the late 1960s, ETRAN has been used in calculations
related to the dosimetry of therapeutic beams, mainly electrons (Berger and Seltzer
1969, 1982; Berger et al 1975). It has provided most of the Monte Carlo results
included in the ICRU Report 35 on electron dosimetry (ICRU 1984a), and most of
today's electron dosimetry procedures are based on data computed with this code (cf
Andreo 1988a). ETRAN is a Class I code which emphasizes the physics of electron
transport, its main characteristics being the accurate treatment of electron multiple
scattering (using the theory of Goudsmit and Saunderson) and of bremsstrahlung
interactions (including cross sections differential in energy and angle). To take into
account low-energy transport, ETRAN includes characteristic x-rays from the K-shell
and Auger electrons after the emission of a photoelectron, but neglects coherent
scattering and binding corrections to incoherent scattering (cf Seltzer 1988), which have
been included in an updated version of the ITS system.
Discrepancies between calculations with ETRAN and other Monte Carlo codes (and
experiments) have been reported in the literature (Andreo 1980, 1981; Andreo and
Brahme 1981, 1984). Such discrepancies have been of importance in the determination
of quantities of interest in electron dosimetry, such as the variation of the mean
energy of primary electrons with depth, where the values calculated with ETRAN were
consistently higher than those obtained with the code MCEF (Andreo and Brahme
1981). It was suggested by these authors that the disagreements were caused by the
different treatment of energy-loss straggling in the respective Monte Carlo codes, as
ETRAN (Class I) performs a sampling from the Landau/Blunck-Leisegang distribution
whereas MCEF (Class II) combines restricted stopping powers with a sampling
from the Møller distribution. Discrepancies in electron depth-dose distributions,
compared with experiments and other Monte Carlo calculations, have also been shown
 Available from Pedro Andreo · May 17, 2014