A three-dimensional reconstruction algorithm for an inverse-geometry
volumetric CT system
Taly Gilat Schmidt,a) Rebecca Fahrig, and Norbert J. Pelcb)
Department of Radiology, Stanford University, Stanford, California 94305
(Received 11 December 2004; revised 14 August 2005; accepted for publication 19 August 2005;
published 14 October 2005)
An inverse-geometry volumetric computed tomography (IGCT) system has been proposed that is capable of rapidly acquiring sufficient data to reconstruct a thick volume in one circular scan. The system uses a large-area scanned source opposite a smaller detector. The source and detector have the same extent in the axial, or slice, direction, thus providing sufficient volumetric sampling and avoiding cone-beam artifacts. This paper describes a reconstruction algorithm for the IGCT system. The algorithm first rebins the acquired data into two-dimensional (2D) parallel-ray projections at multiple tilt and azimuthal angles, followed by a 3D filtered backprojection. The rebinning step is performed by gridding the data onto a Cartesian grid in a 4D projection space. We present a new method for correcting the gridding error caused by the finite and asymmetric sampling in the neighborhood of each output grid point in the projection space. The reconstruction algorithm was implemented and tested on simulated IGCT data. Results show that the gridding correction reduces the gridding errors to below one Hounsfield unit. With this correction, the reconstruction algorithm does not introduce significant artifacts or blurring when compared to images reconstructed from simulated 2D parallel-ray projections. We also present an investigation of the noise behavior of the method, which verifies that the proposed reconstruction algorithm utilizes cross-plane rays as efficiently as in-plane rays and can provide noise comparable to an in-plane parallel-ray geometry for the same number of photons. Simulations of a resolution test pattern and the modulation transfer function demonstrate that the IGCT system, using the proposed algorithm, is capable of 0.4 mm isotropic resolution. The successful implementation of the reconstruction algorithm is an important step in establishing feasibility of the IGCT system. © 2005 American Association of Physicists in Medicine. [DOI: 10.1118/1.2064827]
I. INTRODUCTION
Conventional computed tomography ?CT? systems are rap-
idly evolving to acquire increasingly thicker volumes per
circular rotation using multirow detectors or flat panel digital
detector technology. These volume CT approaches provide
several advantages over single slice acquisition, including
faster scan times, thinner slices, and reduced motion arti-
facts. The ability to scan an entire organ in one rotation could
have important clinical impact, for example, in perfusion
studies and other dynamic applications.
The increased volume thickness comes at the expense of
larger cone-beam angles. Because of the diverging x-ray
beam in the axial, or slice, direction, a circular scan cone-
beam acquisition does not acquire sufficient volumetric
data.1 Although approximate reconstruction algorithms are commonly used,2 the resulting artifacts can be significant for large cone angles. While exact reconstruction is possible for helical cone-beam scanning for certain pitch values,3–6 this paper focuses on sufficient volumetric acquisition in one circular scan.
We have previously proposed a volumetric CT system that can sufficiently sample a thick (on the order of several centimeters) volume in one fast circular scan.7 This inverse-geometry volumetric CT system (IGCT) uses a large-area
scanned source and an area detector with a smaller extent in
the transverse direction. The sampling is fanlike in the trans-
verse direction, and in the axial direction the source and
detector have the same extent, providing sufficient volumet-
ric coverage and avoiding cone-beam artifacts. In addition,
the smaller detector area may provide significant advantages
over conventional cone-beam systems with respect to cost
and detected scatter radiation.
Previous work studied the feasibility of the IGCT system
with respect to sampling and photon flux and found it possible to sample a 30-cm wide field of view (FOV) with 15-cm volume thickness in less than half of a second.7 In fact, the source scanning is sufficiently fast so that the scan time is limited by gantry speed rather than sampling. Another important feasibility question is whether the acquired IGCT data can be reconstructed accurately (from an artifact perspective) and efficiently (from a noise perspective). The purpose of this paper is to present a reconstruction algorithm for the IGCT system.
The data acquired by the IGCT geometry are very similar
to that from a multiring positron emission tomography (PET) geometry. Therefore a PET reconstruction algorithm can be used. As in a three-dimensional (3D) PET system, the IGCT
data consists of in-plane rays which connect each source row
to the opposed detector row, and cross-plane rays which con-
nect each source row to other detector rows. It is the in-plane
rays that ensure a sufficient dataset for accurate volumetric
reconstruction, while the cross-plane rays improve the
signal-to-noise ratio (SNR).
3234 Med. Phys. 32 (11), November 2005 0094-2405/2005/32(11)/3234/12/$22.50 © 2005 Am. Assoc. Phys. Med.
Numerous algorithms have been proposed for 3D PET.
One class of algorithms uses 3D filtered backprojection.8–10
The data are rebinned into 2D parallel-ray projections at
multiple tilt and view angles, and the central slice theorem is
used to derive appropriate filters in frequency space. The
filtered projections are then backprojected into the volume.
The IGCT reconstruction algorithm proposed in this paper
follows this 3D filtered backprojection approach. Although
this type of algorithm has been thoroughly studied for PET
imaging, the application to a CT system merits additional
research. CT produces images of higher spatial resolution
and lower noise than PET and therefore demands more ac-
curate reconstruction. Further, the process by which IGCT
data are converted for use by this type of algorithm has not been previously studied.
The paper begins with a brief description of the IGCT
system, followed by an overview of the theoretical founda-
tion of the reconstruction algorithm. The key difference be-
tween the IGCT and 3D PET geometries is the ray sampling,
which is accounted for during rebinning. Once the data are organized into 2D parallel-ray projections, the geometry is equivalent to that of rebinned 3D PET data, and the already established filters can be used. Therefore, we focus much of
our investigation on the rebinning algorithm and only briefly
review the filter design. Gridding is used to rebin the data.
We show that errors can arise due to the location of acquired
data samples relative to the output grid point, and we present
a new method for reducing this gridding error. The paper
then investigates the image artifact, resolution, and noise per-
formance of the algorithm through simulations. Finally, al-
ternative reconstruction methods are briefly discussed.
II. SYSTEM DESCRIPTION
The basic system geometry is illustrated in Fig. 1. The
IGCT system consists of a large-area scanned x-ray source
mounted on a CT gantry opposite a smaller array of fast
photon-counting detectors. During an acquisition, the elec-
tron beam is electromagnetically steered over a transmission
target, dwelling behind each of an array of collimator holes
which limit the resulting x rays to those that illuminate the
detector area. For each source position, the entire detector
array is read out, creating a 2D divergent projection of a
fraction of the field of view. The scanning of the source
positions is fast relative to the gantry rotation.
III. RECONSTRUCTION ALGORITHM
A. Rebinning
The goal of the rebinning algorithm is to estimate, from
the rays in the IGCT geometry, a full set of 2D parallel-ray
projections. The parallel-ray geometry is illustrated in Fig. 2.
We define the axis of rotation to be along the z axis, and axial
planes to be perpendicular to the axis of rotation. We assume
that a parallel-ray projection is formed by the set of rays
normal to a virtual planar detector. The rotation of the projection about the axis of rotation (i.e., the view angle) is defined as θ, while the rotation from the axis of rotation (i.e., the colatitude or tilt angle) is defined as φ. Parameters u and v represent the local coordinates within each projection (i.e., where a ray falls on the detector). For all projections, the u axis lies within an axial plane.
These four parameters, θ, φ, u, and v, can be calculated for each ray in the IGCT geometry. We define α to be the azimuthal angle of a ray (i.e., the angle about the z axis in the absence of gantry rotation). The parameters are illustrated in the context of the IGCT geometry in Fig. 3. A ray with α equal to zero and φ equal to π/2 is parallel to the x axis, and a ray with φ equal to zero is parallel to the z axis.
FIG. 1. Proposed IGCT geometry shown with the x-ray beam at one position
in the source array.
FIG. 2. 2D parallel-ray geometry to which the IGCT data are rebinned, illustrated using a virtual detector. θ is the projection view angle, φ is the colatitude angle, and u and v are the coordinates within the projection. For comparison, two virtual detectors are shown, one with φ equal to π/2 and one with a smaller value of φ.
FIG. 3. Four geometry parameters, α, φ, u, and v, shown for a ray in the IGCT geometry, where α is the azimuthal angle.
The parameters depend on the 3D locations of the source and detector element that define the ray and can be calculated using the following equations. The coordinates (s_x, s_y, s_z) define the location of the source spot before gantry rotation, where −s_x is the source-to-isocenter distance (SID). Similarly, each detector has coordinates (d_x, d_y, d_z) before gantry rotation, where d_x is the detector-to-isocenter distance (DID). Parameters α, φ, u, and v are independent of the gantry rotation and are calculated using the coordinates of the unrotated source and detector. Parameters α and u can be calculated by considering the projection of the ray onto the x-y plane:

α = arctan[(s_y − d_y)/(s_x − d_x)],   (1)

u = d_y cos(α) + d_x sin(α).   (2)

The total view angle θ depends both on α and the gantry rotation angle θ_gantry:

θ = α + θ_gantry.   (3)

The parameters φ and v can be calculated by considering the plane defined by the ray and the source column from which the ray originates:

φ = arctan[√((s_x − d_x)² + (s_y − d_y)²)/(s_z − d_z)],   (4)

v = d_z sin(φ) + [d_x cos(α) − d_y sin(α)]cos(φ).   (5)

In this formulation, the distance of the ray to isocenter is parametrized by the two perpendicular components u and v, which are equivalent to the parallel-ray detector coordinates shown in Fig. 2.
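As a sketch, the parameter calculations above can be coded directly from these equations; the function below is illustrative, and the quadrant-safe atan2 (in place of arctan) is an implementation choice not specified in the text:

```python
import math

def ray_parameters(sx, sy, sz, dx, dy, dz, theta_gantry=0.0):
    """Compute (alpha, phi, u, v, theta) for the ray joining a source spot
    (sx, sy, sz) to a detector element (dx, dy, dz), both given before
    gantry rotation, following Eqs. (1)-(5)."""
    alpha = math.atan2(sy - dy, sx - dx)               # azimuthal angle, Eq. (1)
    u = dy * math.cos(alpha) + dx * math.sin(alpha)    # Eq. (2)
    r = math.hypot(sx - dx, sy - dy)                   # transverse ray length
    phi = math.atan2(r, sz - dz)                       # colatitude, Eq. (4)
    v = dz * math.sin(phi) + (dx * math.cos(alpha)
                              - dy * math.sin(alpha)) * math.cos(phi)  # Eq. (5)
    theta = alpha + theta_gantry                       # total view angle, Eq. (3)
    return alpha, phi, u, v, theta
```

Consistent with the conventions above, a transverse ray through isocenter yields φ = π/2 with u = v = 0, and a ray parallel to the z axis yields φ = 0.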
The four parameters, θ, φ, u, and v, are sufficient for
reorganizing the IGCT data into 2D parallel-ray projections.
However, for a discrete implementation with regularly
sampled output 2D projections that are equally spaced in the
two angles, some form of interpolation must be used.
In order to better understand the rebinning algorithm, it is
helpful to visualize the data in projection space. For a 2D
reconstruction from 1D projections, such as those acquired
by conventional single slice CT systems, each ray is described by two parameters, the rotation angle θ and the radial distance to isocenter u. For these single slice CT systems, projection space is two dimensional with coordinate axes θ and u. Each ray in a 1D projection samples one point in the two-dimensional projection space, and a 1D parallel-ray projection, comprised of data at one θ value and a range of u values spanning the field of view, samples a horizontal line in projection space.
In the IGCT geometry, each ray is described by two
angles and two distances and is represented in a 4D projec-
tion space. Each ray samples one point in the 4D projection
space, but the sample points from all acquired rays are not
uniformly distributed. Rebinning the data to 2D parallel-ray
projections is equivalent to interpolating the nonuniform
samples onto a 4D Cartesian grid in projection space. The
problem of resampling nonuniform data onto a uniform grid
arises in many different fields and has been the subject of
much work. We are using a gridding approach11 in which
each acquired data point contributes to all output grid points
within some neighborhood. In this implementation, a bin
width is selected for each of the four projection space param-
eters, defining the 4D neighborhood of measured data points
used to estimate each grid point. Each data point in this bin is
weighted based on its 4D location with respect to the grid
point and a chosen 4D kernel shape. The interpolated value
at the grid point is the sum of the weighted data points,
normalized by the sum of weights for that point.
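In one dimension, the deposit-and-normalize procedure just described can be sketched as follows; the triangular kernel and the specific bin width are illustrative choices, not those used in the paper:

```python
import numpy as np

def grid_1d(x_samples, values, x_grid, half_width):
    """Deposit nonuniformly spaced samples onto a uniform grid.  Each sample
    contributes to every grid point within half_width, weighted by a
    triangular kernel, and each grid point is post-compensated by the sum
    of the deposited weights.  A 1D sketch of the 4D procedure."""
    x_grid = np.asarray(x_grid, dtype=float)
    acc = np.zeros(x_grid.shape)    # weighted sum of sample values
    wsum = np.zeros(x_grid.shape)   # sum of deposited kernel weights
    for x, f in zip(x_samples, values):
        d = np.abs(x_grid - x)
        k = np.where(d < half_width, 1.0 - d / half_width, 0.0)
        acc += k * f
        wsum += k
    out = np.full(x_grid.shape, np.nan)                # NaN where no samples fall
    np.divide(acc, wsum, out=out, where=wsum > 0)      # post-compensation
    return out
```

A locally constant input is reproduced exactly by this scheme, which is the unbiasedness property discussed in the next subsection.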
The important design parameters for the rebinning algorithm are the bin widths, kernel shape, and output grid sampling density. For application in magnetic resonance (MR)
reconstruction, the effect of each of these parameters on the
gridded data has been described in detail.12 Although most
medical imaging applications, including MR, apply gridding
in frequency space, the analysis in Ref. 12 is based on gen-
eral signal processing theory and is relevant for other appli-
cations. When gridding in projection space, special care must
be taken to properly combine rays that are physically close
yet separated in angle. For example, rays near θ = 2π must be considered when gridding data at θ = 0.
B. Rebinning error correction
One important step in the gridding algorithm is compen-
sation for the nonuniform and/or asymmetric location of the
acquired data points. That is, the estimated grid point value
should not be biased by the number or the distribution of
measured data points used in the estimation. Errors can occur
if the sampling is not accounted for properly.
The simplest method for performing this correction is
post-compensation, where the value at the output grid point
is normalized by the total sum of the deposited weights. After this normalization, and considering gridding of a 1D function f(x), the gridded value at a point x_o is

f̂(x_o) = Σ_i k_i f(x_i),   (6)

where f(x_i) is the ith input sample and k_i is the normalized kernel value for that sample. This method corrects for the number of data points that contribute to a grid point and gives an unbiased estimate if the data are locally constant. That is, if f(x_i) = f(x_o) for all i, Eq. (6) gives the correct answer since the sum of the k_i is one. However, consider the particular but relatively simple case where the input function is linear with slope G:

f(x) = f(x_o) + G(x − x_o).   (7)

Straightforward gridding yields

f̂(x_o) = Σ_i k_i [f(x_o) + G(x_i − x_o)],   (8)

which reduces to
f̂(x_o) = f(x_o) + G Σ_i k_i (x_i − x_o).   (9)

Since the desired value is f(x_o), the second term on the right-hand side of Eq. (9) is the gridding error ε:

ε = G Σ_i k_i (x_i − x_o).   (10)
If the kernel is even and the samples are symmetric about x_o,
the error is zero. In general, though, there is an error propor-
tional to the slope of the input function. In our implementa-
tion, we are gridding the projection measurement data.
Therefore, it is the gradient of the projection of the object
that determines the amount of error in the gridded value.
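A small numeric example illustrates the effect: for a linear profile sampled asymmetrically about a grid point, post-compensated gridding overshoots by exactly the error term of Eq. (10). The sample positions, slope, and kernel below are arbitrary illustrative values:

```python
# Linear profile f(x) = 10 + 3x, gridded at x_o = 0 from asymmetric samples.
x_o, G, f0 = 0.0, 3.0, 10.0
xs = [-0.1, 0.2, 0.3, 0.4]                 # more samples on the positive side
w = [1.0 - abs(x - x_o) for x in xs]       # triangular kernel, half-width 1
k = [wi / sum(w) for wi in w]              # normalized so the weights sum to 1
f_hat = sum(ki * (f0 + G * (xi - x_o)) for ki, xi in zip(k, xs))
eps = G * sum(ki * (xi - x_o) for ki, xi in zip(k, xs))   # error term of Eq. (10)
assert abs((f_hat - f0) - eps) < 1e-12     # the bias equals the Eq. (10) term
```

Here f_hat overestimates f(x_o) because the samples cluster on the side where the values are larger, which is precisely the bias that can become coherent across neighboring views.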
In addition, we have found that the error caused by the
linear term and the asymmetric sampling can be coherent in
adjacent gridded projection angles, causing an artifact to ac-
cumulate in the image. This can be understood by consider-
ing the distribution of data points about a particular grid
point. If the data points are asymmetrically distributed in the
radial direction, the interpolated value at the grid point will
be biased in the direction with more samples. For example, if
the projection measurements are higher on the side with
more samples, the gridding output may overestimate the cor-
rect value. The asymmetric sampling will likely bias a grid
point at a nearby radial location in the opposite direction
(note that the gain of the gridding process is unity). In our
system, each view samples data from a range of azimuthal
and radial positions. The radial sampling varies slowly with
azimuthal angle within each IGCT view, and repeats for each
gantry position. Since the overall trends of projections also
vary slowly with view angle, rebinned projections at nearby
azimuthal angles will contain similar errors. In other words,
the gridding error will vary rapidly in the radial direction and
slowly in the azimuthal direction, which is the type of error
to which CT is particularly sensitive.
A more sophisticated gridding approach preweights the
data by the inverse sampling density of the measurements.
That is, data from highly sampled regions are deemphasized
while data from sparsely sampled regions are emphasized by
the preweighting factors. For certain sampling patterns, such
as spiral sampling in MR, these density weights can be calculated analytically.13 Several other approaches, including
computational and iterative methods, have been proposed to
determine the weights for arbitrary sampling patterns.14–16
While preweighting should reduce errors, we note that Eq. (10) predicts residual errors even with uniform sampling density.
The uniform resampling algorithm (URS), which is optimal in the minimum norm least squares sense, and the block uniform resampling algorithm (BURS), a computationally feasible locally optimal gridding algorithm, have also been proposed.17 These algorithms indirectly incorporate the sampling pattern when estimating the grid points by formulating
the gridding problem as a linear set of equations and using
least-squares methods to solve for the values at the grid
points. These methods are sometimes ill-conditioned and
may be sensitive to noise or measurement errors. A regular-
ized version has also been proposed which provides stability
at the expense of accuracy.18
Most of the methods listed above were developed for
gridding in frequency space and are largely applied to MR
imaging. Gridding in projection space has slightly different challenges.19 Due to the ramp filter in CT reconstruction, errors that are high in frequency in the radial direction are greatly amplified. Also, the dynamic range (the range of reconstructed values divided by the noise level) of CT demands a higher signal-to-artifact level compared to MR or PET. For example, CT is sensitive to errors on the order of a few Hounsfield units (HU), where one HU is a change in signal that is one tenth of one percent of the attenuation of water, while the range of values may be 400% of the density of water.
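The HU definition quoted here is a simple linear rescaling of the linear attenuation coefficient, sketched below; the water attenuation value passed in any call is up to the user and is not from the paper:

```python
def hounsfield(mu, mu_water):
    """Map a linear attenuation coefficient to Hounsfield units: one HU is
    one tenth of one percent (0.1%) of the attenuation of water."""
    return 1000.0 * (mu - mu_water) / mu_water
```

Water maps to 0 HU and a vanishing attenuation coefficient (air) maps to −1000 HU, so a one-HU error is indeed a 0.1% change relative to water.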
Therefore, we propose a new gridding correction that is motivated by Eq. (10). We note that if the sum in the error term was zero, the grid point value would be correct (for this case) regardless of the slope. We modify each kernel value k_i by an amount which depends on the distance between the data point and grid point. We define the new kernel values,

k_new,i = k_i + β(x_i − x_o),   (11)

and solve for the value of β such that the sum in Eq. (10), and therefore the error, equals zero:

Σ_i [k_i + β(x_i − x_o)](x_i − x_o) = 0,   (12)

β = −Σ_i k_i (x_i − x_o) / Σ_i (x_i − x_o)².   (13)

By using the kernel values defined in Eqs. (11) and (13), the zero and first-order terms of the projection data are estimated correctly at the grid points. This local kernel correction strategy can be generalized to ensure that higher-order terms are correctly estimated, but since we only use the data in a small neighborhood about each grid point, the higher-order terms should be small. In addition, the higher-order terms are less likely to be similar in neighboring projections and should not lead to coherent errors.
Although the proposed correction does not explicitly
compute the measurement sampling density, the modified
kernel values in Eq. (11) can be thought of as compensating
for this as well as resymmetrizing the kernel based on the
distribution of data points. A post-compensation step to en-
sure that the total sum of weights at each grid point is one is
still required. The gridding correction can produce negative
kernel values which may cause the sum of the kernel values
at the grid point to be very small. This occurs when the
measured data points are clustered close together on one side
of the grid point. When the kernel value sum is very small,
the post-compensation step amplifies the contribution of
some data points and the noise. Therefore, a threshold is set
3237 Schmidt, Fahrig, and Pelc: 3D reconstruction for inverse-geometry volumetric CT3237
Medical Physics, Vol. 32, No. 11, November 2005
on the sum of the corrected kernel values. If the sum is
below the threshold, the original kernel values are used.
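The corrected one-dimensional kernel of Eqs. (11)-(13), together with the post-compensation and the threshold fallback just described, can be sketched as follows; the threshold value is an illustrative choice:

```python
import numpy as np

def corrected_kernel(x, x_o, k, wsum_min=0.05):
    """First-order gridding correction (Eqs. 11-13): shift each kernel value
    by beta*(x_i - x_o) so that sum(k_new*(x_i - x_o)) = 0, then fall back
    to the original kernel when the corrected weights nearly cancel."""
    d = np.asarray(x, dtype=float) - x_o
    k = np.asarray(k, dtype=float)
    beta = -np.sum(k * d) / np.sum(d * d)     # Eq. (13)
    k_new = k + beta * d                      # Eq. (11)
    if abs(k_new.sum()) < wsum_min:           # guard the post-compensation
        return k
    return k_new / k_new.sum()                # post-compensation: weights sum to 1
```

With these weights, a locally linear input is estimated exactly at the grid point, regardless of how asymmetrically the samples are distributed.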
This method can be easily extended to multiple dimensions. In the case of 2D gridding, the locally linear function is

f(x, y) = f(x_o, y_o) + (∂f/∂x)(x − x_o) + (∂f/∂y)(y − y_o).   (14)

The grid point value at (x_o, y_o) estimated from data points at (x_i, y_i) is

f̂(x_o, y_o) = Σ_i k_i f(x_i, y_i),   (15)

and the adjusted kernel values are defined by

k_new,i = k_i + β_x(x_i − x_o) + β_y(y_i − y_o),   (16)

where β_x and β_y are determined by solving the following system of equations:

β_x Σ_i (x_i − x_o)² + β_y Σ_i (x_i − x_o)(y_i − y_o) = −Σ_i k_i (x_i − x_o),

β_x Σ_i (x_i − x_o)(y_i − y_o) + β_y Σ_i (y_i − y_o)² = −Σ_i k_i (y_i − y_o).   (17)
The solution in Eq. (17) is not well-defined when the system of equations is ill-conditioned. This could be the case in sparsely sampled regions where there is an insufficient distribution of data points surrounding a grid point. This will cause the calculated β values to be very large, which may lead to unstable performance. A threshold on the allowed size of β can be set, and for grid points for which this threshold is exceeded, either the original kernel values can be used, or the region size used to estimate the grid point can be increased.
For our geometry, the gridding correction is applied in four dimensions, which requires solving the analogous system of four equations in four unknowns to ensure that the linear term is correctly estimated.
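A dimension-independent sketch of this correction solves the normal equations of Eq. (17) (a 2×2 system in 2D, 4×4 for the IGCT projection space) with the fallbacks described above; the threshold values are illustrative:

```python
import numpy as np

def corrected_kernel_nd(pts, pt_o, k, beta_max=50.0):
    """N-dimensional first-order gridding correction (Eqs. 16-17 in 2D; the
    same normal equations with four unknowns for the IGCT case).  Falls back
    to the original kernel for ill-conditioned neighborhoods."""
    d = np.asarray(pts, dtype=float) - np.asarray(pt_o, dtype=float)  # (n, dim)
    k = np.asarray(k, dtype=float)
    A = d.T @ d                      # sums of products of offsets (Eq. 17 LHS)
    b = -(k @ d)                     # right-hand sides: -sum(k_i * offsets)
    try:
        beta = np.linalg.solve(A, b)
    except np.linalg.LinAlgError:    # singular neighborhood: no correction
        return k
    if np.max(np.abs(beta)) > beta_max:   # threshold on the size of beta
        return k
    k_new = k + d @ beta             # Eq. (16), generalized
    s = k_new.sum()
    return k_new / s if abs(s) > 1e-6 else k   # post-compensation or fallback
```

As in the 1D case, any locally linear function of the coordinates is then estimated exactly at the grid point.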
C. Filtered backprojection
Once the data are organized into 2D parallel-ray projec-
tions, the central slice theorem can be used to design the
appropriate reconstruction filter. The theorem states that a 2D
parallel-ray projection of a 3D object samples the 3D Fourier
transform of the object along the plane that is perpendicular
to the projection direction and that passes through the origin.
Therefore the ensemble of parallel-ray projections samples the Fourier transform of the object, with some areas of frequency space sampled more than others.
The role of the reconstruction filters is to weight the fre-
quency content of each projection so that, when they are all
superimposed during backprojection, the 3D Fourier trans-
form of the object is properly reconstructed. One solution is
to define the filter applied to each projection to be the inverse
of the density of measurements in frequency space on the
plane sampled by that projection.
An analytical expression for this filter, known as the "Colsher" filter, has been previously derived8,10 and is stated without proof below. The derivation assumes 2D parallel-ray projections continuously and uniformly distributed between θ equal to zero and 2π and colatitude angle between φ_min and π/2, where φ_min is the colatitude angle of the most oblique projection. These assumptions are reasonable if the distance between adjacent projections is small in both angular directions. The density of measurements, stated without proof, is
D_φ(k_u, k_v) = (2M/(π k cos φ_min)) arcsin[cos(φ′)/sin(ψ)],

where

ψ = arccos[(k_v/k) sin φ],

φ′ = max(φ_min, π/2 − ψ),

k = √(k_u² + k_v²), k_u and k_v are the coordinates of the 2D Fourier transform of the projection, and M is the total number of projections.

The 2D filter for a parallel-ray projection at a colatitude angle φ is then given by

H_φ(k_u, k_v) = W(k)/D_φ(k_u, k_v),

where W(k) is a window function used to control the impulse response. Substituting the expression for D_φ, the resulting 2D filter is

H_φ(k_u, k_v) = (π k cos φ_min/2M) W(k)/arcsin[cos(φ′)/sin(ψ)].   (23)
As can be seen in Eq. (23), the filter depends on the colatitude angle φ but is the same for all view angles at that φ.
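A numerical sketch of a Colsher-type filter follows. It assumes the standard form in which the filter is proportional to k divided by arcsin(min(1, cos φ_min/sin ψ)), with ψ the colatitude of the frequency sample; the normalization constant and the window W(k) are omitted, so this is not a verbatim transcription of the paper's Eq. (23):

```python
import numpy as np

def colsher_filter(ku, kv, phi, phi_min):
    """Colsher-type 2D filter (no window, and only up to a global
    normalization constant) for a projection at colatitude phi.  The filter
    is k divided by the angular measure of projection planes through each
    frequency sample; near the z axis that measure saturates and the filter
    reduces to a plain ramp."""
    ku = np.asarray(ku, dtype=float)
    kv = np.asarray(kv, dtype=float)
    k = np.hypot(ku, kv)
    safe_k = np.where(k > 0, k, 1.0)
    cos_psi = np.abs(kv) * np.sin(phi) / safe_k        # |cos(psi)|
    sin_psi = np.sqrt(np.clip(1.0 - cos_psi ** 2, 0.0, 1.0))
    # cos(phi') = min(cos(phi_min), sin(psi)), so the clipped ratio below
    # equals cos(phi')/sin(psi) and reaches 1 near the z axis (ramp region)
    arc = np.arcsin(np.clip(np.cos(phi_min) / np.maximum(sin_psi, 1e-12),
                            0.0, 1.0))
    return np.where(k > 0, np.pi * k / (2.0 * np.maximum(arc, 1e-12)), 0.0)
```

In the limit of full angular coverage (φ_min = 0) the arcsin term is constant and the filter reduces to a pure ramp in k, as expected.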
The window function W(k) can be designed to recover some of the resolution lost during the rebinning step. The gridding algorithm convolves the input data with a 4D kernel, causing some apodization in frequency space. During the filtering step, the Fourier transform is performed in two spatial dimensions, u and v. Therefore, in these two dimensions, the blurring due to gridding can be undone by incorporating into the filter window the inverse of the Fourier transform of the gridding kernel. The blurring in the two angular dimensions cannot be reduced during the normal filtering step but could be deapodized in a separate step prior to backprojection.
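As a sketch of the deapodization idea in one spatial dimension, the window can divide out the Fourier transform of the gridding kernel; the triangular kernel, the flat base window, and the clamping floor below are illustrative choices:

```python
import numpy as np

def deapodized_window(freqs, kernel_half_width, base_window=None):
    """Filter window that undoes the apodization of a triangular gridding
    kernel in one spatial dimension by dividing a base window by the
    kernel's Fourier transform."""
    f = np.asarray(freqs, dtype=float)
    # FT of a unit-area triangle of half-width a is sinc^2(f*a), using
    # numpy's normalized convention sinc(x) = sin(pi x)/(pi x)
    apod = np.sinc(f * kernel_half_width) ** 2
    w = np.ones_like(f) if base_window is None else np.asarray(base_window, float)
    return w / np.maximum(apod, 0.05)   # floor avoids amplifying noise
```

The window is unity at dc and grows toward the band edge, boosting exactly the frequencies the gridding kernel suppressed.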
The filter in Eq. (23) is defined as a continuous function in frequency space. Implementing the filter discretely can introduce errors.