Prediction of Respiratory Motion Using A
Statistical 4D Mean Motion Model
Jan Ehrhardt, René Werner, Alexander Schmidt-Richberg, and Heinz Handels
Department of Medical Informatics, University Medical Center Hamburg-Eppendorf, Germany
j.ehrhardt@uke.uni-hamburg.de
Abstract. In this paper we propose an approach to generate a 4D statistical model of respiratory lung motion based on thoracic 4D CT data of different patients. A symmetric diffeomorphic intensity-based registration technique is used to estimate subject-specific motion models and to establish inter-subject correspondence. The statistics on the diffeomorphic transformations are computed using the Log-Euclidean framework. We present methods to adapt the generated statistical 4D motion model to an unseen patient-specific lung geometry and to predict individual organ motion. The prediction is evaluated with respect to landmark and tumor motion. Mean absolute differences between model-based predicted landmark motion and corresponding breathing-induced landmark displacements as observed in the CT data sets are 3.3 ± 1.8 mm, considering motion between end expiration and end inspiration, if lung dynamics are not impaired by lung disorders.
The statistical respiratory motion model presented is capable of providing valuable prior knowledge in many fields of application. We present two examples of possible applications in the fields of radiation therapy and image-guided diagnosis.
1 Introduction
Respiration causes significant motion of thoracic and abdominal organs and thus is a source of inaccuracy in image-guided interventions and in image acquisition itself. Therefore, modeling and prediction of breathing motion have become an increasingly important issue in many fields of application, e.g. in radiation therapy [1].
Based on 4D images, motion estimation algorithms make it possible to determine patient-specific spatiotemporal information about movements and organ deformation during breathing. A variety of respiratory motion estimation approaches has been developed in recent years, ranging from simple analytical functions describing the motion, over landmark-, surface- or intensity-based registration techniques [2, 3], to biophysical models of the lung [4]. However, the computed motion models are based on individual 4D image data and their use is usually confined to motion analysis and prediction for an individual patient.
The key contribution of this article is the generation of a statistical 4D inter–
individual motion model of the lung. A symmetric diffeomorphic non–linear
intensity–based registration algorithm is used to estimate lung motion from a
set of 4D CT images from different patients acquired during free breathing. The
computed vector motion fields are transformed into a common coordinate sys-
tem and a 4D mean motion model (4D–MMM) of the respiratory lung motion
is extracted using the Log–Euclidean framework [5] to compute statistics on the
diffeomorphic transformations. Furthermore, methods are presented to adapt the
computed 4D–MMM to the patient’s anatomy in order to predict individual or-
gan motion without 4D image information. We perform a quantitative in-depth evaluation of the model-based prediction accuracy for intact and impaired lungs and show two possible applications of the 4D-MMM in the fields of radiation therapy and image-guided diagnosis.
Few works that deal with the development of statistical lung motion models
have been published. Some approaches exist for the generation of 3D lung at-
lases [6], or the geometry–based simulation of cardiac and respiratory motions
[7]. First steps towards an average lung motion model generated from different
patients were taken by Sundaram et al. [8], but their work focuses on 2D+t lung MR images and the adaptation of the breathing model to a given patient has not
been addressed. First methods for building inter–patient models of respiratory
motion and the utilization of the generated motion model for model–based pre-
diction of individual breathing motion were presented in [9] and [10]. This paper
is an extension of [10] with regard to the methodology and the quantitative eval-
uation. In [9] motion models were generated by applying a Principal Component
Analysis (PCA) to motion fields generated by a surface–based registration in a
population of inhale–exhale pairs of CT images. Our approach is different in all
aspects: the registration method, the solution of the correspondence problem,
the spatial transformation of motion fields, and the computation of statistics
of the motion fields. Furthermore, we present a detailed quantitative evaluation
of a model based prediction for intact and impaired lungs. This offers interest-
ing insights into the prediction accuracy to be expected depending on size and
position of lung tumors.
2 Method
The goal of our approach is to generate a statistical model of the respiratory lung motion based on a set of $N_p$ thoracic 4D CT image sequences. Each 4D image sequence is assumed to consist of $N_j$ 3D image volumes $I_{p,j}\colon \Omega \to \mathbb{R}$ ($\Omega \subset \mathbb{R}^3$), which are acquired at corresponding states of the breathing cycle. This correspondence is ensured by the applied 4D image reconstruction method [11] and therefore a temporal alignment of the patient data sets is not necessary.
Our method consists of three main steps: First, the subject-specific motion is estimated for each 4D image sequence by registering the 3D image frames. In a second step, an average shape and intensity model is generated from the CT images. In the last step, the average shape and intensity model is used as anatomical reference frame to match all subject-specific motion models and to build an average inter-subject model of the respiratory motion.
Image registration is required in all three steps. We use a non–linear, intensity–
based, diffeomorphic registration method as described in the next section. The
three steps to generate the statistical model of the respiratory motion are de-
tailed in Sect. 2.2. The utilization of the 4D-MMM for motion prediction is presented in Sect. 2.3.
2.1 Diffeomorphic image registration
Diffeomorphic mappings $\varphi\colon \Omega \to \Omega$ (with $\varphi \in \mathrm{Diff}(\Omega)$, $\Omega \subset \mathbb{R}^d$) guarantee that the topology of the transformed objects is preserved and are therefore used in computational anatomy to analyze and characterize the biological variability of human anatomy [12]. A practical approach for fast diffeomorphic image registration was recently proposed in [13] by constraining $\varphi$ to a subgroup of diffeomorphisms. Here, diffeomorphisms are parametrized by a stationary velocity field $v$, and the diffeomorphic transformation $\varphi$ is given by the solution of the stationary flow equation at time $t = 1$ [5]:
$$\frac{\partial}{\partial t}\,\phi(x,t) = v(\phi(x,t)) \quad \text{with} \quad \phi(x,0) = x. \qquad (1)$$
The solution of eq. (1) is given by the group exponential map $\varphi(x) = \phi(x,1) = \exp(v(x))$, and the significant advantage of this approach is that these exponentials can be computed very efficiently (see [5] for details).
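As an illustration of this step, the following minimal Python sketch approximates the group exponential of a stationary velocity field by scaling and squaring of displacement fields. It is not the authors' implementation; the grid handling (voxel units, nearest-neighbour boundary handling) and the number of squaring steps are illustrative assumptions.

```python
# Sketch: group exponential phi = exp(v) of a stationary velocity field,
# approximated by scaling and squaring of displacement fields (voxel units).
import numpy as np
from scipy.ndimage import map_coordinates


def compose_displacements(u, w):
    """Displacement of phi_u o phi_w, i.e. x -> x + w(x) + u(x + w(x)).

    u, w: arrays of shape (3, Z, Y, X) holding displacements in voxel units.
    """
    grid = np.indices(u.shape[1:], dtype=np.float64)       # identity grid
    coords = grid + w                                        # x + w(x)
    u_at_w = np.stack([map_coordinates(u[d], coords, order=1, mode="nearest")
                       for d in range(3)])                   # u(x + w(x))
    return w + u_at_w


def exp_velocity(v, n_steps=6):
    """Approximate exp(v): scale v down by 2**n_steps, then square n_steps times."""
    disp = v / (2.0 ** n_steps)
    for _ in range(n_steps):
        disp = compose_displacements(disp, disp)             # phi <- phi o phi
    return disp                                              # displacement field of exp(v)
```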
The problem of image registration can now be understood as finding a parametrizing velocity field $v$, so that the diffeomorphic transformation $\varphi = \exp(v)$ minimizes a distance $D$ between a reference image $I_0$ and the target image $I_j$ with respect to a desired smoothness $S$ of the transformation: $J[\varphi] = D[I_0, I_j; \varphi] + \alpha S[\varphi]$. Using $S[\varphi] = \int_\Omega \|\nabla v\|^2\, dx$ (with $\varphi = \exp(v)$) as regularization scheme, the following iterative registration algorithm can be derived:
Algorithm 1: Symmetric diffeomorphic registration
Set $v^0 = 0$, $\varphi = \varphi^{-1} = \mathrm{Id}$ and $k = 0$
repeat
  Compute the update step $u = \tfrac{1}{2}\left(f_{I_0,\, I_j \circ \varphi} - f_{I_j,\, I_0 \circ \varphi^{-1}}\right)$
  Update the velocity field and perform a diffusive regularization:
    $v^{k+1} = (\mathrm{Id} - \tau\alpha\Delta)^{-1}\left(v^k + \tau u\right) \qquad (2)$
  Calculate $\varphi = \exp(v^{k+1})$ and $\varphi^{-1} = \exp(-v^{k+1})$
  Let $k \leftarrow k + 1$
until $\|v^{k+1} - v^k\| < \epsilon$ or $k \geq K_{max}$
The update field $u$ is calculated in an inverse consistent form to assure source-to-target symmetry. The force term $f$ is related to $D$ and is chosen to be:
$$f_{I_0,\, I_j \circ \varphi}(x) = \frac{\left(I_0(x) - (I_j \circ \varphi)(x)\right)\,\nabla (I_j \circ \varphi)(x)}{\|\nabla (I_j \circ \varphi)(x)\|^2 + \kappa^2\left(I_0(x) - (I_j \circ \varphi)(x)\right)^2} \qquad (3)$$
with $\kappa^2$ being the reciprocal of the mean squared spacing. Eq. (2) performs the update of the velocity field $v$, where $\tau$ is the step width. The term $(\mathrm{Id} - \tau\alpha\Delta)^{-1}$ is related to the diffusive smoother $S$ and can be computed efficiently using additive operator splitting (AOS).
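A compact Python sketch of this iteration is given below. It reuses exp_velocity() from the sketch above, replaces the AOS-based diffusive regularization of eq. (2) by a simple Gaussian smoothing of the velocity field, and uses illustrative parameter values; it is meant to convey the structure of Algorithm 1, not the authors' implementation.

```python
# Sketch of Algorithm 1: symmetric diffeomorphic registration with a
# stationary velocity field (Gaussian smoothing stands in for the AOS solver).
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates


def warp(image, disp):
    """Resample image at x + disp(x) (trilinear interpolation)."""
    grid = np.indices(image.shape, dtype=np.float64)
    return map_coordinates(image, grid + disp, order=1, mode="nearest")


def force(i_fixed, i_moving_warped, kappa2):
    """Intensity-based force term of eq. (3)."""
    diff = i_fixed - i_moving_warped
    grad = np.array(np.gradient(i_moving_warped))            # shape (3, Z, Y, X)
    denom = np.sum(grad ** 2, axis=0) + kappa2 * diff ** 2
    return diff * grad / np.maximum(denom, 1e-12)


def register(i0, ij, tau=0.5, sigma=2.0, kappa2=1.0, eps=1e-3, k_max=100):
    """Return a stationary velocity field v such that exp(v) maps i0 onto ij."""
    v = np.zeros((3,) + i0.shape)
    for _ in range(k_max):
        phi, phi_inv = exp_velocity(v), exp_velocity(-v)
        # inverse consistent update: u = 1/2 (f_{I0, Ij o phi} - f_{Ij, I0 o phi^-1})
        u = 0.5 * (force(i0, warp(ij, phi), kappa2) - force(ij, warp(i0, phi_inv), kappa2))
        v_new = gaussian_filter(v + tau * u, sigma=(0.0, sigma, sigma, sigma))
        if np.linalg.norm(v_new - v) < eps:
            return v_new
        v = v_new
    return v
```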
We have chosen this diffeomorphic registration approach for three reasons: In the context of the motion model generation, it is important to ensure
that the calculated transformations are symmetric and diffeomorphic because of
the multiple usage of inverse transformations. The second reason is related to
runtime and memory requirements: due to the size of the 4D CT images, diffeomorphic registration algorithms using non-stationary vector fields, e.g. [14],
are not feasible. Third, the representation of diffeomorphic transformations by
stationary vector fields provides a simple way for computing statistics on diffeo-
morphisms via vectorial statistics on the velocity fields.
For a diffeomorphism $\varphi = \exp(v)$, we call the velocity field $v = \log(\varphi)$ the logarithm of $\varphi$. Remarkably, the logarithm $v = \log(\varphi)$ is a simple 3D vector field, and this allows us to perform vectorial statistics on diffeomorphisms while preserving the invertibility constraint [15]. Thus, the Log-Euclidean mean of diffeomorphisms is given by averaging the parametrizing velocity fields:
$$\bar{\varphi} = \exp\!\left(\frac{1}{N}\sum_i \log(\varphi_i)\right). \qquad (4)$$
The mean and the distance are inversion-invariant, since $\log(\varphi^{-1}) = -\log(\varphi)$. Even though the metric linked to this distance is not translation-invariant, it provides a powerful framework in which statistics can be computed more efficiently than in the Riemannian distance framework. For a more detailed introduction to the mathematics of the diffeomorphism group and the associated tangent space algebra, we refer to [5] and the references therein.
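Because the registration directly returns the velocity field $v = \log(\varphi)$, the Log-Euclidean mean of eq. (4) reduces to an arithmetic average of these fields followed by one exponentiation. A minimal sketch, reusing exp_velocity() from above:

```python
# Sketch of eq. (4): Log-Euclidean mean of diffeomorphisms given as velocity fields.
import numpy as np


def log_euclidean_mean(velocity_fields):
    """velocity_fields: list of arrays (3, Z, Y, X), with v_i = log(phi_i)."""
    v_mean = np.mean(np.stack(velocity_fields), axis=0)   # (1/N) sum_i log(phi_i)
    return exp_velocity(v_mean)                            # mean diffeomorphism
```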
2.2 Generation of a 4D mean motion model
In the first step, we estimate the intra-patient respiratory motion for each 4D image sequence by registering the 3D image frames. Let $I_{p,j}\colon \Omega \to \mathbb{R}$ ($\Omega \subset \mathbb{R}^3$) be the 3D volume of subject $p \in \{1,\dots,N_p\}$ acquired at respiratory state $j \in \{0,\dots,N_j-1\}$. Maximum inhale is chosen as reference breathing state and the diffeomorphic transformations $\varphi_{p,j}\colon \Omega \to \Omega$ are computed by registering the reference image $I_{p,0}$ with the target images $I_{p,j}$, $j \in \{1,\dots,N_j-1\}$. In order to handle discontinuities in the respiratory motion between pleura and rib cage, lung segmentation masks are used to restrict the registration to the lung region by computing the update field only inside the lung (see [3] for details).
In order to build a statistical model of respiratory motion, correspondence between different subjects has to be established, i.e. an anatomical reference frame is necessary. Therefore, the reference images $I_{p,0}$ for $p = 1,\dots,N_p$ are used to generate an average intensity and shape atlas $\bar{I}_0$ of the lung in the reference breathing state by the method described in [10]. This 3D atlas image $\bar{I}_0$ is now used as reference frame for the statistical lung motion model. Each patient-specific reference image $I_{p,0}$ is mapped to the average intensity and shape atlas $\bar{I}_0$ by an affine alignment and a subsequent diffeomorphic registration.
Let $\psi_p$ be the transformation between the reference image $I_{p,0}$ of subject $p$ and the atlas image $\bar{I}_0$. Since the intra-subject motion models $\varphi_{p,j}$ are defined in the anatomical spaces of $I_{p,0}$, we apply a coordinate transformation
$$\tilde{\varphi}_{p,j} = \psi_p \circ \varphi_{p,j} \circ \psi_p^{-1} \qquad (5)$$
to transfer the intra-subject deformations into the atlas coordinate space. Such a coordinate transformation accounts for the differences in the coordinate systems of subject and atlas due to misalignment and size/shape variation and eliminates subject-specific size, shape and orientation information in the deformation vectors. This enables the motion fields of each of the subjects to be compared directly, quantitatively and qualitatively, and the 4D-MMM is generated by calculating the Log-Euclidean mean $\bar{\varphi}_j$ of the mapped transformations for each breathing state $j$:
$$\bar{\varphi}_j = \exp\!\left(\frac{1}{N_p}\sum_p \log\left(\tilde{\varphi}_{p,j}\right)\right) = \exp\!\left(\frac{1}{N_p}\sum_p \log\left(\psi_p \circ \varphi_{p,j} \circ \psi_p^{-1}\right)\right). \qquad (6)$$
The method proposed in [16] was used to compute the logarithms $\log(\tilde{\varphi}_{p,j})$.
The resulting 4D-MMM consists of an average lung image $\bar{I}_0$ for a reference state of the breathing cycle, e.g. maximum inhalation, and a set of motion fields $\bar{\varphi}_j$ describing an average motion between the respiratory state $j$ and the reference state (Fig. 1).
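For illustration, the coordinate transformation of eq. (5) (and likewise the adaptation $\psi_s^{-1} \circ \bar{\varphi}_j \circ \psi_s$ used for prediction in Sect. 2.3) can be sketched as a composition of displacement fields. The sketch assumes that all fields have been resampled to a common grid in voxel units and reuses compose_displacements() from the first sketch; for eq. (6), the composed transformations would additionally have to be mapped back to velocity fields, e.g. with the algorithm of [16], before averaging.

```python
# Sketch of eq. (5): transfer a subject-specific motion field into atlas space
# via psi o phi o psi^{-1}, all transforms given as displacement fields.
def to_atlas_space(disp_phi, disp_psi, disp_psi_inv):
    inner = compose_displacements(disp_phi, disp_psi_inv)    # phi o psi^{-1}
    return compose_displacements(disp_psi, inner)             # psi o (phi o psi^{-1})
```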
2.3 Utilization of the 4D–MMM for individual motion prediction
The 4D-MMM generated in section 2.2 can be used to predict the respiratory lung motion of a subject $s$ even if no 4D image information is available. Presuming a 3D image $I_{s,0}$ acquired at the selected reference state of the breathing cycle is available, the 4D-MMM is adapted to the individual lung geometry of subject $s$ by registering the average lung atlas $\bar{I}_0$ with the 3D image $I_{s,0}$. The resulting transformation $\psi_s$ is used to apply the coordinate transformation of eq. (5) to the mean motion fields $\bar{\varphi}_j$ in order to obtain the model-based prediction of the subject-specific lung motion: $\hat{\varphi}_{s,j} = \psi_s^{-1} \circ \bar{\varphi}_j \circ \psi_s$.
Fig. 1. Visualization of the average lung model $\bar{I}_0$ (a) and the magnitude of the mean deformation $\bar{\varphi}_j$ between end inspiration and end expiration (b). The average deformation model shows a typical respiratory motion pattern. Different windowing and leveling functions are used to accentuate inner/outer lung structures.

However, two problems arise. First, the breathing motion of different individuals varies significantly in amplitude [1]. Therefore, motion prediction using the mean amplitude will produce unsatisfactory results. To account for subject-specific motion amplitudes, we propose to introduce additional information by providing the
required change in lung air content $\Delta V_{air}$. Even without 4D CT data, this information can be acquired by spirometry measurements. Thus, we search for a scaling factor $\lambda$ so that the air content of the transformed reference image $I_{s,0} \circ \lambda\hat{\varphi}_{s,j}^{-1}$ is close to the air content $V_{air}(I_{s,0}) + \Delta V_{air}$. In order to ensure that the scaled motion field is diffeomorphic, the scaling is performed in the Log-Euclidean framework. To determine the correct scaling factor $\lambda$, a binary search strategy is applied and the air content is computed using the method described in [17]. $\Delta V_{air}$ can be regarded as a parameter that describes the depth of respiration. In general, other measurements can also be used to calculate appropriate scaling factors, e.g. the amplitude of the diaphragm motion.
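The amplitude adaptation can be sketched as follows; air_content() stands for the method of [17] and is an assumed input, the search interval for $\lambda$ is illustrative, and warp()/exp_velocity() are the helpers from the sketches above.

```python
# Sketch: binary search for the scaling factor lambda so that the warped reference
# image reaches the target air content; scaling acts on the velocity field (log
# domain), so the scaled transform stays diffeomorphic.
def fit_scaling_factor(i_ref, v_inv_pred, air_content, delta_v_air, tol=1e-3, n_iter=30):
    """v_inv_pred: velocity field of the inverse predicted transform for phase j."""
    target = air_content(i_ref) + delta_v_air                 # V_air(I_s,0) + dV_air
    lo, hi = 0.0, 2.0                                          # assumed search interval
    lam = 1.0
    for _ in range(n_iter):
        lam = 0.5 * (lo + hi)
        warped = warp(i_ref, exp_velocity(lam * v_inv_pred))   # I_s,0 o exp(lam * log(phi^-1))
        current = air_content(warped)
        if abs(current - target) < tol:
            break
        # reference phase is maximum inhale, so air content is assumed to decrease
        # monotonically with lam (deeper exhalation)
        if current > target:
            lo = lam
        else:
            hi = lam
    return lam
```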
Further, a second problem arises when predicting individual breathing motion of lung cancer patients. Lung tumors will impair the atlas-patient registration because there is no corresponding structure in the atlas. This leads to distortions in $\psi_s$ near the tumor region and consequently the predicted motion fields $\hat{\varphi}_{s,j}$ are affected. Therefore, we decided to compute $\psi_s$ by registering lung segmentation masks of atlas and subject $s$ and by omitting the inner lung structures.
3 Results
To capture the respiratory motion of the lung, 18 4D CT images were acquired using a 16-slice CT scanner operating in cine mode. The scanning protocol and the optical-flow-based reconstruction method are described in [11]. The spatial resolution of the reconstructed 4D CT data sets is between 0.78 × 0.78 × 1.5 mm³ and 0.98 × 0.98 × 1.5 mm³. Each data set consists of 3D CT images at 10 to 14 preselected breathing phases. Due to computation time, in this study we use the following four phases of the breathing cycle: end inspiration (EI), 42% exhale (ME), end expiration (EE) and 42% inhale (MI). A clinical expert delineated the left and right lung and the lung tumors in the images.
Fig. 2. Result of the motion estimation by intra-patient registration (top row) and of the model-based motion prediction (bottom row) for patient 01. Visualization of the magnitude of the displacement field computed by intra-patient registration (top left) and of the displacement field predicted by the 4D mean motion model (bottom left). Right: contours at end inspiration (green), end expiration (yellow) and estimated/predicted contours at end expiration (red).
The aim of the model generation is to create a representation of the mean
healthy lung motion. In a dynamic MRI study by Plathow et al. [18], tumors with a diameter > 3 cm were shown to influence respiratory lung dynamics. According to their observations, we divide the lungs into two groups: lungs with intact dynamics and lungs with impaired motion. Lungs without or with only small tumors (volume < 14.1 cm³ or diameter < 3 cm) are defined as intact. Lungs with
large tumors or lungs affected by other diseases (e.g. emphysema) are defined as
impaired. According to this partitioning, we have 12 data sets with both lungs
intact and 6 data sets with at least one impaired lung. Only data sets with intact
lungs are used to generate the 4D–MMM.
3.1 Landmark–based evaluation
Due to the high effort of manual landmark identification, only 10 of the 18 data sets are used for the detailed quantitative landmark-based evaluation. Between 70 and 90 inner lung landmarks (prominent bifurcations of the bronchial tree and the vessel tree) were identified manually in the four breathing phases, about 3200 landmarks in total. An intraobserver variability of 0.9 ± 0.8 mm was
assessed by repeated landmark identification in all test data sets. The target registration error (TRE) was determined for a quantitative evaluation of the patient-specific registration method and the model-based prediction. The TRE $R^k_j$ is the difference between the motion of landmark $k$ estimated by $\varphi_j$ and the landmark motion as observed by the medical expert.

Table 1. Landmark motion amplitudes and target registration errors $R_{EE}$ for the patients considered (in mm). Values are averaged over all landmarks per lung. Lungs with impaired motion are indicated by a gray text color.

Data set   (Lung)   Landmark motion [mm]   Intra-patient registration TRE [mm]   Model-based prediction TRE [mm]
Patient01  left     4.99 ± 4.84            1.51 ± 1.31                           2.43 ± 1.64
Patient01  right    7.25 ± 4.47            1.41 ± 0.83                           3.97 ± 2.08
Patient02  left     7.09 ± 2.92            2.28 ± 1.73                           4.26 ± 1.28
Patient02  right    4.21 ± 1.75            1.16 ± 0.61                           3.82 ± 1.14
Patient03  left     6.15 ± 2.26            1.38 ± 0.73                           3.68 ± 1.31
Patient03  right    6.28 ± 2.01            1.78 ± 1.05                           3.72 ± 1.37
Patient04  left     6.65 ± 2.56            1.53 ± 0.93                           4.01 ± 1.60
Patient04  right    6.22 ± 3.52            1.44 ± 0.82                           2.28 ± 1.09
Patient05  left     5.77 ± 2.03            1.50 ± 0.80                           3.17 ± 1.34
Patient05  right    3.18 ± 3.36            1.29 ± 1.04                           3.47 ± 1.99
Patient06  left     9.67 ± 8.32            1.64 ± 1.42                           5.85 ± 2.65
Patient06  right    11.85 ± 7.08           1.60 ± 1.00                           4.88 ± 2.02
Patient07  left     8.22 ± 6.52            2.45 ± 2.22                           3.99 ± 1.79
Patient07  right    4.99 ± 6.65            1.49 ± 1.48                           3.35 ± 1.69
Patient08  left     5.78 ± 4.14            1.18 ± 0.57                           3.15 ± 1.70
Patient08  right    6.28 ± 5.63            1.25 ± 1.03                           3.11 ± 2.24
Patient09  left     7.43 ± 5.34            1.42 ± 1.22                           3.05 ± 1.39
Patient09  right    8.41 ± 5.22            1.67 ± 1.03                           4.94 ± 3.01
Patient10  left     7.63 ± 5.83            1.93 ± 2.10                           3.16 ± 2.29
Patient10  right    8.85 ± 6.76            1.76 ± 1.33                           5.12 ± 2.34
The mean landmark motion magnitude, i.e. the mean distance of corresponding landmarks, between EI and EE is 6.8 ± 5.4 mm (2.6 ± 1.6 mm between EI and ME and 5.0 ± 2.8 mm between EI and MI). The TRE of the intra-patient registration is a lower bound for the accuracy of the model-based prediction using the 4D-MMM. The average TRE $R_{EE}$ between the reference phase (EI) and EE for patients 01 to 10 (averaged over all landmarks and patients) is 1.6 ± 1.3 mm (1.5 ± 0.8 mm between EI and ME and 1.6 ± 0.9 mm between EI and MI). Details for all test data sets are shown in table 1.
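For reference, the TRE computation described above can be sketched as follows; landmark coordinates are assumed to be given in voxel units on the grid of the (estimated or predicted) displacement field, and map_coordinates stands in for trilinear sampling of that field.

```python
# Sketch: target registration error per landmark, i.e. the norm of the difference
# between the estimated displacement at the EI landmark position and the
# expert-observed landmark displacement.
import numpy as np
from scipy.ndimage import map_coordinates


def target_registration_errors(landmarks_ei, landmarks_j, disp_j):
    """landmarks_*: arrays (K, 3) in voxel coordinates; disp_j: field (3, Z, Y, X)."""
    estimated = np.stack([map_coordinates(disp_j[d], landmarks_ei.T, order=1)
                          for d in range(3)], axis=1)          # (K, 3) estimated motion
    observed = landmarks_j - landmarks_ei                       # expert-observed motion
    return np.linalg.norm(estimated - observed, axis=1)         # TRE per landmark
```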
For each of the 10 test data sets the 4D–MMM is used to predict landmark
motion as described in Sect. 2.3. If both lungs of the test data set are intact,
a leave–one–out strategy is applied to ensure that the patient data is not used
for the model generation. The change in lung air content $\Delta V_{air}$ needed for the computation of the scaling factor $\lambda$ was calculated from the CT images $I_{EI}$ and $I_{EE}$ for each lung side and each test data set. The same factor $\lambda$ was used to
scale the predicted motion fields $\hat{\varphi}_{EE}$, $\hat{\varphi}_{ME}$ and $\hat{\varphi}_{MI}$. Besides $\Delta V_{air}$, no 4D information is used for the model-based prediction.
In Fig. 2 the motion field predicted by the 4D-MMM is compared to the motion field computed by patient-specific registration. A good correspondence between the motion fields is visible, except in the right upper lobe where small deviations occur. The prediction accuracy is illustrated by overlaid contours.
The average TREs $R_{EE}$ are listed in table 1 for each of the test data sets and for both the patient-specific and the model-based motion estimation. Lungs with impaired motion are indicated by a gray text color. As table 1 shows, lungs with impaired motion generally have higher TREs for the model-based prediction than intact lungs. The average TRE $R_{EE}$ for intact lungs is 3.3 ± 1.8 mm, which is significantly lower (p < 0.01) than for lungs with impaired motion ($R_{EE}$ = 4.2 ± 2.2 mm). Significance is tested by applying a multilevel hierarchical model with the individual $R^k$ values nested within the patients (software: SPSS v.17); the data are logarithmized to ensure a normal distribution and the model is adjusted for landmark motion.
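The described test can also be reproduced outside SPSS; the following hedged sketch uses a linear mixed model in Python/statsmodels with log-transformed TREs grouped by patient and adjusted for landmark motion. The column names are illustrative assumptions, and the SPSS multilevel model may differ in detail.

```python
# Sketch: mixed model on log-transformed TREs with patients as grouping factor,
# adjusted for landmark motion (columns 'tre', 'motion', 'impaired', 'patient' assumed).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf


def test_impaired_effect(df: pd.DataFrame) -> float:
    df = df.assign(log_tre=np.log(df["tre"]))
    model = smf.mixedlm("log_tre ~ impaired + motion", df, groups=df["patient"])
    result = model.fit()
    return result.pvalues["impaired"]      # p-value for the impaired-vs-intact effect
```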
3.2 Model-based prediction of tumor motion
For a second evaluation of the model, we use expert-generated tumor segmenta-
tions in two breathing phases (EI and EE) of 9 patient data sets with solid lung
tumors. The 4D–MMM is transformed into the coordinate space of each test
data set (see Sect. 2.3) and then used to warp the expert–generated tumor seg-
mentation at maximum exhale towards maximum inhale. The distance between
the predicted tumor mass center and the center of the manual segmentation
was used to evaluate the accuracy of the model–based prediction. Correspond-
ing results are summarized in table 2. Large tumors with a diameter > 3 cm are
marked in the table as “large”.
Regarding table 2, the accuracy of the model-based predicted motion of the tumor mass center from EI to EE ranges from 0.66 mm to 7.38 mm. There is no significant correlation between the tumor motion amplitude and the accuracy of the model-based predicted mass center (r = 0.19, p > 0.15). Furthermore, it cannot be shown that the prediction accuracy for small tumors is significantly better than for large tumors (p > 0.4). In contrast, the model-based prediction accuracy for non-adherent tumors is significantly better than for tumors adhering to the chest wall or hilum (p < 0.05). In these cases the model presumes that the tumor moves like the surrounding lung tissue, whereas it rather moves like the adjacent non-lung structure (e.g. chest wall or hilum). Those tumors are tagged in the last column of table 2. Significance is tested by applying a linear mixed model (software: SPSS v.17) and the model is adjusted for tumor motion.
Table 2. Tumor size and motion amplitude, and the center distances between the manually segmented tumor and the predicted tumor position (see text for details).

Data set    (Lung)   Tumor size [cm³]   Tumor motion [mm]   Intra-patient registration TRE [mm]   Model-based prediction TRE [mm]   large   adhere
Patient 01  right    6.5                12.20               0.45                                  3.54
Patient 02  right    7.6                2.15                1.44                                  3.90                                      X
Patient 03  left     12.7               6.74                0.41                                  3.91                                      X
Patient 05  right    8.2                2.34                1.95                                  5.39                                      X
Patient 05  right    17.3               1.68                1.05                                  4.44                              X       X
Patient 06  left     3.4                19.78               2.12                                  6.87
Patient 06  right    128.2              13.78               0.97                                  2.99                              X
Patient 07  right    2.8                1.31                0.42                                  0.66
Patient 08  right    18.4               6.24                0.90                                  1.59                              X
Patient 09  right    88.9               8.35                0.29                                  5.33                              X       X
Patient 10  right    96.1               1.77                1.01                                  7.46                              X       X

4 Discussion

In this paper, we proposed a method to generate an inter-subject statistical model of the breathing motion of the lung, based on individual motion fields
extracted from 4D CT images. Methods to apply this model in order to predict
patient–specific breathing motion without knowledge of 4D image information
were presented. Ten 4D CT data sets were used to evaluate the accuracy of the
image–based motion field estimation and the model–based motion field predic-
tion. The intra–patient registration shows an average TRE in the order of the
voxel size, e.g. 1.6±1.3mm when considering motion between EI and EE. The
4D–MMM achieved an average prediction error (TRE) for the motion between
EI and EE of 3.3±1.8mm. Regarding that besides the calculated scaling factor
λno patient–specific motion information is used for the model–based prediction
and that the intra–patient registration as well as the atlas–patient registration
is error prone, we think this is a promising result. Thus we believe that a sta-
tistical respiratory motion model has the capability of providing valuable prior
knowledge in many fields of applications.
Since the statistical model represents intact respiratory dynamics, it was shown that the prediction precision is significantly lower for lungs affected by large tumors or lung disorders (4.2 ± 2.2 mm). These results indicate (at least for the 10 lung tumor patients considered) that large tumors considerably influence respiratory lung dynamics. This finding is in agreement with Plathow et al. [18]. In addition, we applied the 4D-MMM to predict patient-specific tumor motion. No correlation between prediction accuracy and tumor size or tumor motion amplitude could be detected (at least for our test data sets). We observed that tumors adhering to non-lung structures degrade local lung dynamics significantly and that the model-based prediction accuracy decreases in these cases.
To conclude this paper, we present two examples of possible applications of
the statistical respiratory motion model.
Application examples: The capability of the 4D-MMM to predict tumor motion for radiotherapy planning is illustrated for patient 01 as an example. This patient has a small tumor that is not adherent to another structure and a therapeutically
relevant tumor motion of 12.2 mm. An important measure for planning in 3D conformal radiotherapy is the internal target volume (ITV), which contains the complete range of motion of the tumor. For this patient, the ITV is calculated first from expert-defined tumor segmentations in the images acquired at EI, EE, ME and MI. In a second step, the expert segmentation at EI is warped to EE, ME and MI using the 4D-MMM and the ITV is calculated based on the warped results. The outlines of both ITVs are shown in Fig. 3(a).

Fig. 3. (a) Visualization of the internal target volume (ITV) of patient 01 in a coronal CT slice. The ITV was calculated from expert-defined tumor segmentations (yellow contour) and from tumor positions predicted by the average motion model (red contour). (b) Visualization of the difference between the lung motion estimated by patient-specific registration and the lung motion predicted by the 4D-MMM for patient 09. The left lung shows intact lung motion; the dynamics of the right lung are impaired by the large tumor. The contour of the tumor is shown in black.
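A minimal sketch of the model-based ITV construction described above, reusing warp() from the registration sketch and treating the tumor segmentations as binary masks on the CT grid (thresholding after interpolation is an implementation assumption):

```python
# Sketch: ITV as the voxel-wise union of the EI tumor mask and its warps to EE, ME, MI.
import numpy as np


def model_based_itv(tumor_mask_ei, predicted_disps):
    """tumor_mask_ei: binary array; predicted_disps: displacement fields for EE, ME, MI."""
    itv = tumor_mask_ei.astype(bool)
    for disp in predicted_disps:
        warped = warp(tumor_mask_ei.astype(np.float64), disp) > 0.5   # warp, re-binarize
        itv |= warped
    return itv
```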
A second example demonstrates that the 4D-MMM could be helpful from the perspective of image-guided diagnosis. Here, the motion pattern of an individual patient is compared to a "normal" motion represented by the 4D-MMM. To visualize the influence of a large tumor on the respiratory motion, the difference between the individual motion field computed by intra-patient registration and the motion field predicted by the 4D-MMM is shown in Fig. 3(b). The left lung shows differences of only about 3 mm, whereas the large differences to the intact lung motion indicate that the respiratory dynamics of the right lung are influenced by the large tumor.
Currently, the statistical motion model represents the average motion in the
training population. A main focus of our future work is to include the variability
of the motion into the model. Here, the Log–Euclidean framework provides a
suitable technique for more detailed inter–patient statistics.
References
1. Keall, P.J., Mageras, G., Balter, J.M., et al.: The management of respiratory motion in radiation oncology: report of AAPM Task Group 76. Med Phys 33(10) (2006) 3874–3900
2. Vik, T., Kabus, S., von Berg, J., Ens, K., Dries, S., Klinder, T., Lorenz, C.: Vali-
dation and comparison of registration methods for free-breathing 4D lung CT. In:
SPIE Medical Imaging 2008. Volume 6914., SPIE (2008) 2P
3. Werner, R., Ehrhardt, J., Schmidt-Richberg, A., Handels, H.: Validation and com-
parison of a biophysical modeling approach and non-linear registration for estima-
tion of lung motion fields in thoracic 4D CT data. In: SPIE Medical Imaging 2009:
Image Processing. Volume 7259. (2009) 0U–1–8
4. Werner, R., Ehrhardt, J., Schmidt, R., Handels, H.: Patient-specific finite element
modeling of respiratory lung motion using 4D CT image data. Med Phys 35(5)
(May 2009) 1500–1511
5. Arsigny, V.: Processing Data in Lie Groups: An Algebraic Approach. Application to Non-Linear Registration and Diffusion Tensor MRI. Thèse de sciences (PhD thesis), École Polytechnique (November 2006)
6. Li, B., Christensen, G.E., Hoffman, E.A., McLennan, G., Reinhardt, J.M.: Estab-
lishing a normative atlas of the human lung: intersubject warping and registration
of volumetric CT images. Acad Radiol 10 (2003) 255–265
7. Segars, W., Lalush, D., Tsui, B.: Modeling respiratory mechanics in the MCAT
and spline-based MCAT phantoms. IEEE Trans Nucl Sci 48(1) (Feb 2001) 89–97
8. Sundaram, T.A., Avants, B.B., Gee, J.C.: A dynamic model of average lung defor-
mation using capacity-based reparameterization and shape averaging of lung mr
images. In: MICCAI 2004, Springer (2004) 1000–1007
9. Klinder, T., Lorenz, C., Ostermann, J.: Respiratory motion modeling and estima-
tion. In: Workshop on Pulmonary Image Analysis, New York, USA (2008) 53–62
10. Ehrhardt, J., Werner, R., Schmidt-Richberg, A., Schulz, B., Handels, H.: Genera-
tion of a mean motion model of the lung using 4D CT data. In: Visual Computing
for Biomedicine, Delft, Eurographics Association (2008) 69–76
11. Ehrhardt, J., Werner, R., Säring, D., Frenzel, T., Lu, W., Low, D., Handels, H.:
An optical flow based method for improved reconstruction of 4D CT data sets
acquired during free breathing. Med Phys 34(2) (Feb 2007) 711–721
12. Miller, M.I.: Computational anatomy: shape, growth, and atrophy comparison via
diffeomorphisms. NeuroImage 23 Suppl 1 (2004) S19–S33
13. Vercauteren, T., Pennec, X., Perchant, A., Ayache, N.: Symmetric log-domain
diffeomorphic registration: a demons-based approach. In: Med Image Comput
Comput Assist Interv, MICCAI 2008, Springer (2008) 754–761
14. Beg, M.F., Miller, M.I., Trouve, A., Younes, L.: Computing large deformation
metric mappings via geodesic flows of diffeomorphisms. Int J Comp Vis 61(2)
(2005) 139–157
15. Arsigny, V., Commowick, O., Pennec, X., Ayache, N.: A log-Euclidean framework
for statistics on diffeomorphisms. In: MICCAI 2006, Springer (2006) 924–931
16. Bossa, M.N., Gasso, S.O.: A new algorithm for the computation of the group
logarithm of diffeomorphisms. In: MFCA 2008, New York, USA (2008)
17. Lu, W., Parikh, P.J., El Naqa, I.M., et al.: Quantitation of the reconstruction qual-
ity of a four-dimensional computed tomography process for lung cancer patients.
Med Phys 32 (2005) 890–901
18. Plathow, C., Fink, C., Ley, S., Puderbach, M., Eichinger, M., Zuna, I., Schmähl, A., Kauczor, H.: Measurement of tumor diameter-dependent mobility of lung tumors by dynamic MRI. Radiother Oncol 73 (2004) 349–354