Article

Fuzzy clustering-based segmented attenuation correction in whole-body PET imaging

Authors:
  • French Naval Academy | Arts & Metiers Institute of Technology

Abstract

Segmented attenuation correction is now a widely accepted technique to reduce noise propagation from transmission scanning in positron emission tomography (PET). In this paper, we present a new method for segmenting transmission images in whole-body scanning. This reduces the noise in the correction maps while still correcting for the differing attenuation coefficients of specific tissues. Based on the fuzzy C-means (FCM) algorithm and preceded by a median filtering procedure, the method segments the PET transmission images into a given number of clusters to extract specific areas of differing attenuation such as air, the lungs and soft tissue. The reconstructed transmission image voxels are, therefore, segmented into populations of uniform attenuation based on knowledge of the human anatomy. The clustering procedure starts with an over-specified number of clusters, followed by a merging process to group clusters with similar properties (redundant clusters) and the removal of some undesired substructures using anatomical knowledge. The method is unsupervised and adaptive, and allows the classification of both pre- and post-injection transmission images, obtained using either coincident 68Ge or single-photon 137Cs sources, into the main tissue components in terms of attenuation coefficients. A high-quality transmission image of the scanner bed is obtained from a high-statistics scan and added to the transmission image. The segmented transmission images are then forward projected to generate attenuation correction factors to be used for the reconstruction of the corresponding emission scan. The technique has been tested on a chest phantom simulating the lungs, heart cavity and spine, the Rando–Alderson phantom, and whole-body clinical PET studies, showing a remarkable improvement in image quality and a clear reduction of noise propagation from transmission into emission data, allowing a reduction of the transmission scan duration. There was very good correlation (R² = 0.96) between maximum standardized uptake values (SUVs) in lung nodules measured on images reconstructed with measured and segmented attenuation correction, with a statistically significant decrease in SUV (17.03% ± 8.4%, P < 0.01) on the latter images, whereas no statistically significant difference in the average SUVs was observed. Finally, the potential of the FCM algorithm as a segmentation method, its limitations, and other prospective applications of the technique are discussed.
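As a point of reference, the pipeline described in the abstract (median pre-filtering, FCM clustering of transmission voxel intensities, hard assignment to tissue classes) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the synthetic input, the 3x3 filter size and the cluster count of 4 are assumptions.

# Minimal sketch (not the authors' code): median filtering followed by fuzzy
# C-means clustering of a PET transmission slice into attenuation classes.
import numpy as np
from scipy.ndimage import median_filter

def fcm(values, n_clusters, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy C-means on a 1D array of voxel intensities."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, values.size))
    u /= u.sum(axis=0)                                  # memberships sum to 1 per voxel
    for _ in range(n_iter):
        um = u ** m
        centroids = um @ values / um.sum(axis=1)        # centroid update
        d = np.abs(values[None, :] - centroids[:, None]) + 1e-12
        p = 2.0 / (m - 1.0)
        u = 1.0 / (d ** p * np.sum(d ** (-p), axis=0))  # membership update
    return centroids, u

# stand-in for a reconstructed transmission slice (mu-values in cm^-1)
tx_slice = np.random.default_rng(1).normal(0.096, 0.02, (128, 128))
filtered = median_filter(tx_slice, size=3)              # noise pre-filtering
centroids, u = fcm(filtered.ravel(), n_clusters=4)
labels = u.argmax(axis=0).reshape(tx_slice.shape)       # hard classification
# each cluster would then be assigned a known tissue attenuation coefficient

In the paper's scheme, the clustering would start with more clusters than tissue types, with similar clusters merged afterwards using anatomical knowledge.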


... Medical image segmentation is becoming an increasingly important image processing step for a number of clinical applications including: (a) identification of RoIs such as lesions to measure their volume and thus assess response to therapy; 5 (b) detection of the left ventricle (LV) cavity to determine the ejection fraction; 4,6 (c) volume visualization and quantification of organ uptake 9,10 or uptake defect of the tracer in the myocardium; 7 (d) study of motion or conduction abnormalities of the heart; 11 and (e) attenuation correction in emission tomographic imaging. 12,13 All subsequent interpretation tasks like feature extraction, object recognition, and classification depend largely on the quality of the segmentation output. The level to which the segmentation is carried depends on the problem being solved. ...
... The FCM algorithm consists of iterations alternating between Eqs. (12) and (13). This algorithm converges to either a local minimum or a saddle point of J_m. ...
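For reference, the standard FCM objective and the two alternating updates (which Eqs. (12) and (13) in the cited work presumably correspond to) take the following form, with N data points x_i, C centroids c_k, memberships u_ik and fuzziness exponent m > 1:

J_m = \sum_{i=1}^{N} \sum_{k=1}^{C} u_{ik}^{m} \, \lVert x_i - c_k \rVert^2, \qquad \text{subject to} \quad \sum_{k=1}^{C} u_{ik} = 1,

c_k = \frac{\sum_{i=1}^{N} u_{ik}^{m} x_i}{\sum_{i=1}^{N} u_{ik}^{m}}, \qquad u_{ik} = \left[ \sum_{j=1}^{C} \left( \frac{\lVert x_i - c_k \rVert}{\lVert x_i - c_j \rVert} \right)^{2/(m-1)} \right]^{-1}.

Each iteration decreases J_m, which explains the convergence to a local minimum or saddle point noted above.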
... The majority of segmentation methods used for attenuation correction fall into one of the following two classes (see chapter 6): histogram-based thresholding techniques [118][119][120] and fuzzy-clustering based segmentation techniques. 12,13 Other interesting approaches to segment noisy transmission data include the use of active contour models, 121 neural networks, 122 morphological segmentation 123 and hidden Markov modelling. 124 An alternative to segmentation of transmission images, with the goal of reducing noise in PET transmission measurements, is the use of coregistered segmented MRI data in functional brain imaging. ...
Chapter
It is gratifying to see in overview the progress that image segmentation has made in the last ten years, from operator-dependent manual delineation of structures, through simple thresholding and the use of classifiers and fuzzy clustering, to more recent atlas-guided approaches incorporating prior information. Recent developments have been enormous, particularly in the last ten years, with the main efforts striving to improve accuracy, precision, and computational speed through efficient implementation, in conjunction with decreasing the amount of operator interaction. The application of medical image segmentation is well established in research environments but is still limited in clinical settings to institutions with advanced physics and extensive computing support. As the above-mentioned challenges are met, and experience is gained, the implementation of validated techniques in commercial software packages will be useful to attract the interest of the clinical community and increase the popularity of these tools. It is expected that with the availability of computing power in the near future, more complex and ambitious computer-intensive segmentation algorithms will become clinically feasible.
... In [Esnault et al., 2007] ... The Fuzzy C-Means (FCM) clustering algorithm [Bezdek, 1981], a fuzzy version of the C-means clustering algorithm, has notably been applied to the segmentation of whole-body PET images [Zaidi et al., 2002]. Although the authors' goal was not to segment tumour lesions but to correct for attenuation, their work shows the potential interest of the FCM algorithm for segmentation in PET imaging. ...
... After this initialization step, the iterative process is carried out. As indicated in [Zaidi et al., 2002], it consists of updating the centroids and then computing the belief masses m_{V_i}^q({ω_k}) at each iteration q. Each centroid c_k is updated as follows: ...
... In this section, we evaluate our method by comparing it to methods already proposed in the literature for the segmentation of 18FDG PET images [Vauclin et al., 2010; Zaidi et al., 2002; Hatt et al., 2008], which we already selected in Chapter 2 (see Section 2.3), as well as for the segmentation of 18FMiso PET images [Rasey et al., 1996; Choi et al., 2010]. ...
Article
Multi-tracer Positron Emission Tomography (PET) functional imaging could have a prominent effect on the treatment of cancer by radiotherapy. PET images using 18Fluoro-Deoxy-Glucose (18FDG), 18F-Fluoro-L-Thymidine (18FLT) and 18Fluoro-Misonidazole (18FMiso) tracers are respectively indicators of glucose cell consumption, cell proliferation and hypoxia (cell lack of oxygen). The joint use of these three tracers could help us define sub-volumes leading to an adequate treatment. For this purpose, it is imperative to provide a medical tool for the segmentation and fusion of these images. PET images have the characteristic of being very noisy and having a low spatial resolution. These imperfections result in the presence of uncertain and imprecise information in the images, respectively. Our contribution consists in proposing a method, called EVEII for Evidential Voxel-based Estimation of Imperfect Information, based on belief function theory, to provide a reliable and accurate segmentation in the context of imperfect images. It also lies in proposing a method for the fusion of multi-tracer PET images. The study of EVEII on simulated images reveals that it is the best suited compared to other methods based on belief function theory, giving a good recognition rate of almost 100% of pixels when the signal-to-noise ratio is greater than 2.5. On physical PET phantoms, simulating the characteristics of 18FDG, 18FLT and 18FMiso PET images, the results show that our method gives a better estimation of the volumes of the spheres to segment compared to methods of the literature proposed for this purpose. On lowly and highly noisy phantoms respectively, the mean bias of volume estimation is only -0.27 and 3.89 mL, demonstrating its suitability for the PET image segmentation task. Finally, our method is applied to the segmentation of multi-tracer PET images of three patients. The results show that our method is well suited to the fusion of multi-tracer PET images, giving for this purpose a set of parametric images differentiating the different biological tissues.
... Thereby, [125] proposed the maximum entropy based fuzzy clustering algorithm (MEFC), which ensures maximum fairness in handling imprecise data and minimizes the bias introduced by the choice of membership function via the maximum entropy principle. In addition, the MEFC algorithm is considered robust compared to FCM, which is recognized to be sensitive to noise and outliers [215]. Compared to other fuzzy clustering approaches, the maximum entropy function also gives a clear physical meaning for clustering data. ...
... Compared to other fuzzy clustering approaches, the maximum entropy function also gives a clear physical meaning for clustering data. This means that data points closer to cluster centers will have higher memberships (representing higher entropy values) than data points that are far from cluster centers [215] (see Fig. 4.4). To represent the uncertainty of unlabeled multidimensional data, the maximum entropy inference (MEI) problem aims at assigning a membership grade (µ_ij) between [0, 1] to every data point, which avoids bias [125]. ...
... Issues and requirements - According to [215], the performance of the MEFC algorithm depends on the choice of the number of clusters and their initial centers. Therefore, it is required to properly initialize such parameters to achieve optimized solutions and avoid issues like inconsistency due to parameter initialization, which gives different results for different parameter inputs. ...
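For orientation (not taken from the cited works), maximum entropy clustering typically yields Gibbs-form memberships, which matches the behavior described above: points closer to a center receive higher membership, with a temperature-like parameter γ controlling the fuzziness:

u_{ik} = \frac{\exp\left(-\lVert x_i - c_k \rVert^2 / \gamma\right)}{\sum_{j=1}^{C} \exp\left(-\lVert x_i - c_j \rVert^2 / \gamma\right)}.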
Thesis
Full-text available
Prognostics & Health Management (PHM) aims at extending the life cycle of an engineering asset while reducing exploitation and maintenance costs. For this reason, prognostics is considered a key process with future capabilities. Indeed, accurate estimates of the Remaining Useful Life (RUL) of equipment enable defining further plans of action to increase safety, minimize downtime, and ensure mission completion and efficient production. Recent advances show that data-driven approaches (mainly from machine learning) are increasingly applied for fault prognostics. They can be seen as black-box models that learn the system behavior directly from Condition Monitoring (CM) data, use that knowledge to infer its current state and predict the future progression of failure. However, approximating the behavior of critical machinery is a challenging task that can result in poor prognostics. To understand some issues of data-driven prognostics modeling, consider the following points. 1) How to effectively process raw monitoring data to obtain suitable features that clearly reflect the evolution of degradation? 2) How to discriminate degradation states and define failure criteria (which can vary from case to case)? 3) How to be sure that learned models will be robust enough to show steady performance over uncertain inputs that deviate from learned experiences, and reliable enough to encounter unknown data (i.e., operating conditions, engineering variations, etc.)? 4) How to achieve ease of application under industrial constraints and requirements? Such issues constitute the problems addressed in this thesis and have led to the development of a novel approach beyond conventional methods of data-driven prognostics. The main contributions are as follows. - The data-processing step is improved by introducing a new approach for feature extraction using trigonometric and cumulative functions, where feature selection is based on three characteristics, i.e., monotonicity, trendability and predictability. The main idea of this development is to transform raw data into features that improve the accuracy of long-term predictions. - To account for robustness, reliability and applicability issues, a new prediction algorithm is proposed: the Summation Wavelet-Extreme Learning Machine (SW-ELM). SW-ELM ensures good prediction performance while reducing the learning time. An ensemble of SW-ELMs is also proposed to quantify uncertainty and improve the accuracy of estimates. - Prognostics performance is also enhanced thanks to the proposition of a new health assessment algorithm: the Subtractive-Maximum Entropy Fuzzy Clustering (S-MEFC). S-MEFC is an unsupervised classification approach which uses maximum entropy inference to represent the uncertainty of unlabeled multi-dimensional data and can automatically determine the number of states (clusters), i.e., without human assumption. - The final prognostics model is achieved by integrating SW-ELM and S-MEFC to show the evolution of machine degradation with simultaneous predictions and discrete state estimation. This scheme also enables dynamically setting failure thresholds and estimating the RUL of monitored machinery. The developments are validated on real data from three experimental platforms: PRONOSTIA FEMTO-ST (bearings test-bed), CNC SIMTech (machining cutters), C-MAPSS NASA (turbofan engines) and other benchmark data. Due to the realistic nature of the proposed RUL estimation strategy, quite promising results are achieved. However, the reliability of the prognostics model still needs to be improved, which is the main perspective of this work.
... Two approaches have been extensively tested for MRI-based attenuation correction of PET images, i.e., the atlas-based approach and the segmentation-based approach [15][16][17][18][19]. ...
... Bone segmentation is not possible in this method, and bone is assigned the attenuation value of soft tissue. In the segmentation-based method, differentiation between air and bone is not possible, hence quantification errors arise [17,19]. ...
... An additional advantage is that single-photon sources can use isotopes that emit photons at an energy different from the 511 keV of annihilation photons (e.g., 137Cs), thus allowing efficient implementation of postinjection TX scanning with reduced contamination of TX images by PET data. The technique has a major drawback, however, given that the TX data need to be normalized on a daily basis (90 minutes on the ECAT ART scanner (CTI/Siemens Medical Solutions, Knoxville, Tennessee) [55]) to a slab phantom scan to correct the acquired data for scatter and cross-section variation using a log-linear transformation of the attenuation factors [56]. Various strategies also have been suggested to reduce contamination of EM data by TX photons for simultaneous scanning and to reduce spillover of EM data into the TX energy window [47][48][49][50][51]. ...
... Clinically relevant segmentation algorithms were designed by balancing image quality against the required algorithmic complexity and resulting computational time [58]. Most TX image segmentation algorithms fall into one of the following two classes: classical histogram-based adaptive thresholding techniques [59][60][61] and fuzzy clustering-based approaches [55,62]. Adaptive thresholding-based techniques use the gray-level histogram counts to distinguish between regions. ...
Article
Molecular imaging using PET has evolved from a vigorous academic field into the clinical arena. Considerable advances have been made in the design of high-resolution standalone PET and combined PET/CT units dedicated to clinical whole-body scanning. Likewise, much worthwhile research focused on the development of quantitative imaging protocols incorporating accurate data correction techniques and sophisticated image reconstruction algorithms. Since its inception, photon attenuation in biological tissues has been identified as the most important physical degrading factor affecting PET image quality and quantitative accuracy. Various strategies have been devised to determine an accurate attenuation map to enable correction for nonlinear photon attenuation in whole-body PET studies. This article presents the physical and methodological basis of photon attenuation and summarizes state-of-the-art developments in algorithms used to derive the attenuation map aiming at accurate attenuation compensation of PET data. Future prospects, research trends, and challenges are identified, and directions for future research are discussed.
... When images contain different structures with contrasting intensities, thresholding provides a simple but effective means for obtaining segmentation. Generally, the thresholds are generated based on visual assessment of the resulting segmentation [3,4]. ...
... % apply thresholding process for each pixel in the volume
for i = 1 to x do
    for j = 1 to y do
        for k = 1 to z do
            if Pixelvalue ≤ Thresholdvalue then
                Pixelvalue ← 0
            end if
        end for
    end for
end for
(Algorithm 3: pseudocode for 3D thresholding)
... finally recompute the new cluster centers. ...
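For readers who want the same operation as working code, the triple loop above collapses to a single vectorized numpy statement. This is a sketch; the volume and threshold values are stand-ins.

# Vectorized numpy equivalent of the x*y*z loop in Algorithm 3:
# voxels at or below the threshold are zeroed in place.
import numpy as np

volume = np.random.rand(64, 64, 64)     # stand-in for a reconstructed volume
threshold = 0.2                         # illustrative cutoff
volume[volume <= threshold] = 0.0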
Article
Full-text available
3D volume segmentation is the process of partitioning voxels into 3D regions (subvolumes) that represent meaningful physical entities, making them easier to analyze and usable in future applications. Multiresolution Analysis (MRA) enables the preservation of an image according to certain levels of resolution or blurring. Because of this multiresolution quality, wavelets have been deployed in image compression, denoising, and classification. This paper focuses on the implementation of efficient medical volume segmentation techniques. Multiresolution analysis including 3D wavelet and ridgelet transforms has been used for feature extraction, which can be modeled using Hidden Markov Models (HMMs) to segment the volume slices. A comparison study has been carried out to evaluate 2D and 3D techniques, revealing that 3D methodologies can accurately detect the Region Of Interest (ROI). Automatic segmentation has been achieved using HMMs, where the ROI is detected accurately but suffers from a long computation time.
... Similar to the thresholding approaches, Fuzzy-C-Means (FCM) (Zaidi et al., 2002; Belhassen and Zaidi, 2010), Fuzzy Locally Adaptive Bayesian (FLAB) (Hatt et al., 2009), and iterative thresholding methods (ITM) (Jentzen et al., 2007) are extensively used in PET image segmentation because the boundary representation in these approaches is well suited for the fuzzy nature of the lesion boundaries. For instance, FLAB is based on a Bayesian statistical model where a finite number of fuzzy levels is used to label voxels within a ROI as belonging to more than two classes (i.e., in addition to background and foreground classes). ...
... Although our main interest in this study is to delineate structures jointly from anatomical and functional images, we also show how well the proposed method performs on PET images alone when compared to the PET image segmentation methods commonly used in the literature. For this purpose, we implemented fixed and adaptive thresholding methods (Otsu, 1979), ITM (Jentzen et al., 2007), FCM (Zaidi et al., 2002; Belhassen and Zaidi, 2010), FLAB (Hatt et al., 2009), and region growing methods (Li et al., 2008; Day et al., 2009), and we particularly optimized these methods for the segmentation of PET images. Fig. 7 demonstrates an example PET image slice and segmentation results from those methods as well as our proposed method. ...
Article
We present a novel method for the joint segmentation of anatomical and functional images. Our proposed methodology unifies the domains of anatomical and functional images, represents them in a product lattice, and performs simultaneous delineation of regions based on random walk image segmentation. Furthermore, we also propose a simple yet effective object/background seed localization method to make the proposed segmentation process fully automatic. Our study uses PET, PET-CT, MRI-PET, and fused MRI-PET-CT scans (77 studies in all) from 56 patients who had various lesions in different body regions. We validated the effectiveness of the proposed method on different PET phantoms as well as on clinical images with respect to the ground truth segmentation provided by clinicians. Experimental results indicate that the presented method is superior to the threshold and Bayesian methods commonly used in PET image segmentation, is more accurate and robust compared to the other PET-CT segmentation methods recently published in the literature, and is general in the sense of simultaneously segmenting multiple scans in real time with the high accuracy needed in routine clinical use.
... The segmentation itself is finally performed by assigning each data point (each voxel) to the cluster for which it has the highest degree of membership [35]. ...
... Moreover, this method is relatively sensitive to the choice of the various parameters and initial conditions, such as the cluster fuzziness degree m, the number of clusters c, or the initial values of the membership degrees u_ik. Finally, the algorithm presented above has the drawback of converging to a local minimum or a saddle point of J_m rather than to the desired global minimum [35,36]. ...
Thesis
Full-text available
The application of PET imaging in oncology, cardiology and neurology has steadily gained importance in recent years, at several levels: diagnosis, treatment planning and follow-up. However, tumour segmentation in PET images remains a major challenge and continues to be the subject of many scientific publications. The problems encountered include the low resolution of the device, the low contrast between tumours and tissues, the high level of noise in the images, and the great variability of the tumours encountered in terms of geometry, homogeneity and metabolic activity. Furthermore, the absence of a centralized database, combined with the lack of standardized procedures for evaluating the different segmentation methods, is a major obstacle to research progress in this field. In this work, a systematic study of the main PET image segmentation methods presented in the literature was carried out on pre-clinical µPET images. The methods studied include various thresholding methods, random forest and support vector machine classification methods, K-means, fuzzy C-means and FLAB clustering methods, as well as contour-based level set and watershed methods. To this end, image visualization and segmentation software was developed in C++ to define a common framework for their implementation. A rigorous methodology was then established to set the parameter values of the different methods and to evaluate and compare their predictive performance objectively. The evaluation itself was performed on two series of tumours imaged in small animals, each with distinct characteristics. The first series comprised 35 heterogeneous tumours, with volumes between 30 and 3000 mm³ and low metabolic activity (SBR < 2.50 for most). The second series comprised 10 relatively homogeneous tumours, with volumes between 4 and 270 mm³ and high metabolic activity (SBR > 2.0 for most). Each of these tumours was also manually segmented by at least one expert to provide supervision for the evaluation of the methods. This study notably demonstrated a significant drop in performance (Jaccard index < 50%) of most methods in the presence of highly heterogeneous, small (< 70 mm³) or weakly avid (SBR ≈ 1.0) tumours. Moreover, no method stood out from the others in terms of predictive performance for the tumours of the first series, but three candidate methods could be identified based on a trade-off between robustness and median performance, namely absolute fixed thresholding, adaptive thresholding and the fuzzy C-means method. For the second series of tumours, on the other hand, the random forest method proved particularly well suited. Finally, the established framework and the results obtained open up further research perspectives. Many challenges remain in PET image segmentation.
... The studies began with hard-partition clustering in this field, such as k-means [1-3] (also known as crisp c-means [3]), i.e., the ownership of one pattern to one cluster is definite, without any ambiguity. Then, benefiting from Zadeh's fuzzy-set theory [4,5], soft-partition clustering [6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42][43] emerged, such as classic fuzzy c-means (FCM) [3,6], where the memberships regarding one data instance to all underlying clusters are in the form of uncertainties (generally measured by probabilities [6,17,18] or possibilities [7-9]), i.e. fuzzy memberships. So far soft-partition clustering has triggered extensive research and the representative work can be reviewed from the following four aspects: (1) FCM's derivatives [6-14]. ...
... Nevertheless, these conditions are usually difficult to satisfy in reality. Particularly, new things frequently appear in the modern high-technology society, e.g., load balancing in distributed systems [41] and attenuation correction in medical imaging [42], and it is difficult to accumulate abundant, reliable data in the beginning phase of these new applications. Therefore, this issue strictly restricts the practicability of partition clustering, in both the hard-partition and soft-partition cases. ...
Article
Conventional soft-partition clustering approaches, such as fuzzy c-means (FCM), maximum entropy clustering (MEC), and fuzzy clustering by quadratic regularization (FC-QR), are usually incompetent in situations where the data are quite insufficient or heavily polluted by underlying noise or outliers. In order to address this challenge, the quadratic weights and Gini-Simpson diversity based fuzzy clustering model (QWGSD-FC) is first proposed as a basis of our work. Based on QWGSD-FC and inspired by transfer learning, two types of cross-domain, soft-partition clustering frameworks and their corresponding algorithms, referred to as type-I / type-II knowledge-transfer-oriented c-means (TI-KT-CM and TII-KT-CM), are subsequently presented. The primary contributions of our work are four-fold: 1) The QWGSD-FC model inherits most of the merits of FCM, MEC, and FC-QR. With weight factors in the form of quadratic memberships, similar to FCM, it can more effectively measure the total intra-cluster deviation than the linear form used in MEC and FC-QR. Meanwhile, via the Gini-Simpson diversity index, like Shannon entropy in MEC, and equivalent to the quadratic regularization in FC-QR, QWGSD-FC is prone to achieving unbiased probability assignments. 2) Owing to the reference knowledge from the source domain, both TI-KT-CM and TII-KT-CM demonstrate high clustering effectiveness as well as strong parameter robustness in the target domain. 3) TI-KT-CM refers merely to the historical cluster centroids, whereas TII-KT-CM simultaneously uses the historical cluster centroids and their associated fuzzy memberships as the reference. This indicates that TII-KT-CM features a more comprehensive knowledge learning capability than TI-KT-CM, and consequently exhibits better cross-domain clustering performance. 4) Neither the historical cluster centroids nor the associated fuzzy memberships involved in TI-KT-CM or TII-KT-CM can be inversely mapped into the raw data. This means that both TI-KT-CM and TII-KT-CM can work without disclosing the original data in the source domain, i.e., they offer good privacy protection for the source domain. In addition, convergence analyses of both TI-KT-CM and TII-KT-CM are conducted in our research. The experimental studies thoroughly evaluated and demonstrated our contributions on both synthetic and real-life data scenarios.
... Unstructured, random or statistical noise is one of the most disturbing factors in PET images, and refers to random variations within the image caused by random statistical variation in counting rate (Poisson counting noise), modulated by applied corrections and the reconstruction algorithm. Statistical noise in PET images is non-stationary implying that noise properties such as correlation and magnitude depend on the position within the image [124]. ...
... SNR or S/N is the ratio of the average of the VOI values (S) to the standard deviation of the VOI values across realizations (N) [144]. Definition IV. SNR or S/N is defined as the average pixel value (S) divided by the standard deviation of pixel intensities (N) within the outlined ROI [124]. Definition V. S/N is calculated pixel-wise in the frequency domain, in which signal is defined as the mean signal amplitude (S) and noise as the standard deviation at a pixel (N) [137]. Definition VI. S/N represents a measure of image quality in which signal is defined as the sum of squared values of the pixels within an outlined ROI identifying the object (S), and noise as the sum of squared pixel deviations from the mean within an outlined ROI covering the same structure in the image (N) [145]. Definition VII. ...
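As a concrete illustration of "Definition IV" above, the ROI-based SNR is simply the mean pixel value divided by the standard deviation inside the ROI. The image and mask below are synthetic stand-ins.

# Sketch of SNR per Definition IV: mean (S) over standard deviation (N)
# of pixel intensities within an outlined ROI.
import numpy as np

def snr_roi(image: np.ndarray, roi_mask: np.ndarray) -> float:
    vals = image[roi_mask]
    return float(vals.mean() / vals.std())

image = np.random.default_rng(0).normal(100.0, 10.0, (128, 128))
roi = np.zeros_like(image, dtype=bool)
roi[40:80, 40:80] = True
print(snr_roi(image, roi))   # close to 10 for this synthetic example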
... We constructed a new hyper-image utilizing both the metabolic and anatomic information, which means each voxel is represented by the SUV normalized to the SUVmax on PET images, the Hounsfield unit (HU) density values normalized to the maximum HU values on CT images, and the product of them. Then the constructed hyper-image can be divided into four regions using a fuzzy c-means (FCM) algorithm (Zaidi et al 2002, Belhassen et al 2010). ...
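A minimal sketch of the hyper-image construction described above, assuming co-registered PET SUV and CT HU volumes as inputs: each voxel gets a 3-channel feature (SUV/SUVmax, HU/HUmax, and their product) prior to FCM clustering.

# Sketch (not the cited paper's code) of a 3-channel hyper-image per voxel.
import numpy as np

def build_hyper_image(suv: np.ndarray, hu: np.ndarray) -> np.ndarray:
    s = suv / suv.max()
    h = hu / hu.max()     # simplification: HU can be negative in practice
    return np.stack([s, h, s * h], axis=-1)

suv = np.abs(np.random.default_rng(0).normal(2.0, 1.0, (64, 64)))   # stand-in PET
hu = np.random.default_rng(1).normal(40.0, 20.0, (64, 64))          # stand-in CT
features = build_hyper_image(suv, hu).reshape(-1, 3)                # one row per voxel
# `features` would then be partitioned into four regions with FCM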
Article
Full-text available
The aim of the study is to assess the staging value of tumor heterogeneity characterized by texture features and other commonly used semi-quantitative indices extracted from 18F-FDG PET images of cervical cancer (CC) patients. Forty-two patients suffering from CC at different stages were enrolled in this study. Firstly, we proposed a new tumor segmentation method by combining intensity and gradient field information in a level set framework. Secondly, fifty-four 3D texture features were studied in addition to SUVs (SUVmax, SUVmean, SUVpeak) and metabolic tumor volume (MTV). Through correlation analysis and receiver-operating-characteristic (ROC) curve analysis, some independent indices showed statistically significant differences between the early stage (ES, stages I and II) and the advanced stage (AS, stages III and IV). The tumors represented by those independent indices could then be automatically classified into ES and AS, and the most discriminative feature could be chosen. Finally, the robustness of the optimal index with respect to sampling schemes and the quality of the PET images was validated. Using the proposed segmentation method, the Dice similarity coefficient and Hausdorff distance were 91.78 ± 1.66% and 7.94 ± 1.99 mm, respectively. According to the correlation analysis, all fifty-eight indices could be divided into 20 groups. Six independent indices were selected for their highest areas under the ROC curves (AUROC) and showed significant differences between ES and AS (P < 0.05). Through automatic classification with the support vector machine (SVM) classifier, run percentage (RP) was the most discriminative index, with the highest accuracy (88.10%) and the largest AUROC (0.88). The Pearson correlation of RP under different sampling schemes is 0.9991 ± 0.0011. RP is a highly stable feature and is well correlated with tumor stage in CC, which suggests it could differentiate ES and AS with high accuracy.
... Several proposed solutions involve clustering techniques [8,[15][16][17][18][19][20]. For algorithms such as the classic k-means [21], it is important to know the number of clusters a priori to obtain an optimum result [8,16]; however, this information is often unknown and it is not easy to compare results obtained with different values of k [16]. ...
... Positron emission tomography (PET) volume analysis is vital for various clinical applications, including artefact reduction and removal, tumour quantification in staging (a process which analyses the development of tumours over time), and radiotherapy treatment planning [1,2]. PET has been progressively incorporated into the management of patients. ...
Article
Full-text available
The increasing number of imaging studies and the prevailing application of positron emission tomography (PET) in clinical oncology have led to a real need for efficient PET volume handling and the development of new volume analysis approaches to aid clinicians in clinical diagnosis, planning of treatment, and assessment of response to therapy. A novel automated system for oncological PET volume analysis is proposed in this work. The proposed intelligent system deploys two types of artificial neural networks (ANNs) for classifying PET volumes. The first methodology is a competitive neural network (CNN), whereas the second is based on a learning vector quantisation neural network (LVQNN). Furthermore, the Bayesian information criterion (BIC) is used in this system to assess the optimal number of classes for each PET data set and assist the ANN blocks in achieving accurate analysis by providing the best number of classes. The system evaluation was carried out using experimental phantom studies (NEMA IEC image quality body phantom), simulated PET studies using the Zubal phantom, and clinical studies representative of non-small cell lung cancer and pharyngolaryngeal squamous cell carcinoma. The proposed analysis methodology for clinical oncological PET data has shown promising results and can successfully classify and quantify malignant lesions.
... Analysing the volume data acquired from a PET scanner is very important for different clinical applications, including artefact reduction and removal, tumour quantification in staging (a process which analyses the development of tumours over time), and radiotherapy treatment planning [9,10]. Advanced high-performance analysis software will be useful in aiding clinicians in diagnosis and radiotherapy planning. ...
Article
Full-text available
An experimental study using an artificial neural network (ANN) is carried out to determine the optimal network architecture for the proposed positron emission tomography (PET) application. 55 experimental phantom datasets acquired under clinically realistic conditions with different 2-D and 3-D acquisition and image reconstruction parameters, along with 2 min, 3 min and 4 min scan times per bed, are used in this study. The best scanner parameters are determined based on the ANN evaluation of the proposed datasets. The analysis methodology for phantom PET data has shown promising results and can successfully classify and quantify malignant lesions in clinically realistic datasets.
... (Eq. (10)) It is assumed that the random vectors are independent and identically distributed. The a posteriori distribution of the complete parameters can be expressed as Eq. (11). Using identities from [23], it follows that Eq. (12) holds; therefore, the a posteriori distribution of the complete parameters equals the expression in Eq. (13). Since the log of the a posteriori distribution of the complete parameters is unavailable, we estimate it in the so-called E-step by computing the conditional expectation of the log of the a posteriori distribution of the complete parameters given the observed data and current parameter estimates. ...
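In generic EM notation (a sketch; the specific expressions lost from the excerpt above are not reconstructed), the E-step described here computes

Q(\theta \mid \theta^{(n)}) = \mathbb{E}_{z \sim p(z \mid y,\, \theta^{(n)})}\left[\, \log p(\theta \mid z) \,\right],

where y is the observed data, z the complete data, and θ^(n) the current parameter estimates; the subsequent M-step then maximizes Q over θ.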
Article
Full-text available
When transmission images are obtained using conventional reconstruction methods in stand-alone PET scanners, such as standard clinical PET, microPET, and dedicated brain scanners, the results may be noisy and/or inaccurate. For example, the popular penalized maximum-likelihood method effectively reduces noise, but it does not address the bias problem that results from the incorporation of a penalty function and contamination from emission data due to patient activity. In this paper, we present an algorithm that simultaneously reconstructs transmission images and performs a "soft" segmentation of voxels into the classes: air, patient bed, lung, soft tissue, and bone. It is through the segmentation step that the algorithm, which we refer to as the concurrent segmentation and estimation (CSE) algorithm, provides a means for incorporating accurate attenuation coefficients. The CSE algorithm is obtained by applying an expectation-maximization-like formulation to a certain maximum a posteriori objective function. This formulation enables us to show that the CSE algorithm monotonically increases the objective function. In experiments using real phantom and synthetic data, the CSE images produced attenuation correction factors and emission images that were more accurate than those obtained using a popular segmentation-based attenuation correction method, and the penalized maximum-likelihood and filtered backprojection methods.
... However, they are sensitive to noise because they are based on a gradient measure. Some methods based on probabilistic measures (Aristophanous et al., 2007; Hatt et al., 2009) and fuzzy measures (Zaidi et al., 2002; Dewalle-Vignion et al., 2011) have recently been proposed. Their advantage is the capability of dealing with noise and/or the partial volume effect, which is due to the low spatial resolution of the acquisition system and the post-filtering applied to PET images. ...
Article
PET imaging with the FluoroDesoxyGlucose (FDG) tracer is clinically used for the definition of Biological Target Volumes (BTVs) for radiotherapy. Recently, new tracers, such as FLuoroThymidine (FLT) or FluoroMisonidazol (FMiso), have been proposed. They provide complementary information for the definition of BTVs. Our work aims to fuse multi-tracer PET images to obtain a good BTV definition and to help the radiation oncologist in dose painting. Due to the noise and the partial volume effect, leading respectively to the presence of uncertainty and imprecision in PET images, the segmentation and fusion of PET images are difficult. In this paper, a framework based on Belief Function Theory (BFT) is proposed for the segmentation of the BTV from multi-tracer PET images. The first step is based on an extension of the Evidential C-Means (ECM) algorithm, taking advantage of neighboring voxels to deal with uncertainty and imprecision in each mono-tracer PET image. Then, imprecision and uncertainty are reduced, respectively, using prior knowledge related to defects in the acquisition system and neighborhood information. Finally, a multi-tracer PET image fusion is performed. The results are represented by a set of parametric maps that provide important information for dose painting. The performance is evaluated on PET phantoms and patient data with lung cancer. Quantitative results show good performance of our method compared with other methods.
... Various clustering analysis algorithms, such as hierarchical, squared-error-based and fuzzy clustering, have been used to tackle diverse problems such as exploratory pattern analysis, decision-making and machine learning by classifying patterns or feature vectors into groups of clusters in terms of similarity measures [4][5]. In medical imaging, clustering has been used to segment MR and PET images [6][7][8] and to generate parametric images [9][10][11][12], where parameter estimation is integrated with cluster analysis and estimates associated with the cluster centroid curve proportionally represent all voxels belonging to the cluster. The "hard" clustering techniques, such as K-means [11] and hierarchical clustering [12], assign instances to one specific cluster during iterative analysis. ...
Conference Paper
Full-text available
Functional imaging can provide quantitative functional parameters to aid early diagnosis. The low signal-to-noise ratio (SNR) in functional imaging, especially for single photon emission computed tomography, poses a challenge in generating voxel-wise parametric images due to unreliable or physiologically meaningless parameter estimates. Our aim was to systematically investigate the performance of our recently proposed adaptive fuzzy clustering (AFC) technique, which applies standard fuzzy clustering to sub-divided data. Monte Carlo simulations were performed to generate noisy dynamic SPECT data, with quantitative analysis of the fitting using the general linear least squares method (GLLS) and enhanced model-aided GLLS methods. The results show that AFC substantially improves computational efficiency and achieves reliability comparable to standard fuzzy clustering in estimating parametric images, but is prone to slight underestimation. Normalization of tissue time-activity curves may lead to severe overestimation for small structures when AFC is applied.
... Another category of methods produces attenuation maps by non-rigidly registering MR images to an atlas generated from population computed tomography (CT) and MR image pairs [14][15][16][17]. Machine learning and joint estimation methods have also been explored [18][19][20][21]. Recently, a UTE/multi-echo Dixon (mUTE) sequence-based method has been proposed to take full advantage of MR physics for AC [22]. ...
Article
Full-text available
Purpose: PET measures of amyloid and tau pathologies are powerful biomarkers for the diagnosis and monitoring of Alzheimer's disease (AD). Because cortical regions are close to bone, the quantitation accuracy of amyloid and tau PET imaging can be significantly influenced by errors of attenuation correction (AC). This work presents an MR-based AC method that combines deep learning with a novel ultrashort time-to-echo (UTE)/multi-echo Dixon (mUTE) sequence for amyloid and tau imaging.
Methods: Thirty-five subjects that underwent both 11C-PiB and 18F-MK6240 scans were included in this study. The proposed method was compared with the Dixon-based atlas method as well as magnetization-prepared rapid acquisition with gradient echo (MPRAGE)- or Dixon-based deep learning methods. The Dice coefficient and validation loss of the generated pseudo-CT images were used for comparison. PET error images regarding standardized uptake value ratio (SUVR) were quantified through regional and surface analysis to evaluate the final AC accuracy.
Results: The Dice coefficients of the deep learning methods based on MPRAGE, Dixon, and mUTE images were 0.84 (0.91), 0.84 (0.92), and 0.87 (0.94) for the whole-brain (above-eye) bone regions, respectively, higher than the 0.52 (0.64) of the atlas method. The regional SUVR error for the atlas method was around 6%, higher than the regional SUV error. The regional SUV and SUVR errors for all deep learning methods were below 2%, with the mUTE-based deep learning method performing best. As for the surface analysis, the atlas method showed the largest error (> 10%) near vertices inside the superior frontal, lateral occipital, superior parietal, and inferior temporal cortices. The mUTE-based deep learning method resulted in the smallest number of regions with error higher than 1%, with the largest error (> 5%) showing up near the inferior temporal and medial orbitofrontal cortices.
Conclusion: Deep learning with mUTE can generate accurate AC for amyloid and tau imaging in PET/MR.
... However, as the MR signal is not directly related to the photon attenuation coefficients and there is no simple transform that can convert an MR image into the attenuation map, AC for PET/MR still needs further investigations to unleash the quantitative merits of PET. For the past decades, various methods have been proposed to generate pseudo CT images based on MR images, through segmentation [2]- [12], atlas [13]- [21], joint emission and transmission estimation [22]- [26], multi-tissue per pixel-based MR sequence developments [27], [28], and machine learning [29]- [31] approaches. ...
Article
Full-text available
Attenuation correction (AC) is important for the quantitative merits of positron emission tomography (PET). However, attenuation coefficients cannot be derived from magnetic resonance (MR) images directly for PET/MR systems. In this work, we aimed to derive continuous AC maps from Dixon MR images without the requirement of MR and computed tomography (CT) image registration. To achieve this, a 3D generative adversarial network with both discriminative and cycle-consistency loss (Cycle-GAN) was developed. The modified 3D U-net was employed as the structure of the generative networks to generate the pseudo CT/MR images. The 3D patch-based discriminative networks were used to distinguish the generated pseudo CT/MR images from the true CT/MR images. To evaluate its performance, datasets from 32 patients were used in the experiment. The Dixon segmentation and atlas methods provided by the vendor and the convolutional neural network (CNN) method which utilized registered MR and CT images were employed as the reference methods. Dice coefficients of the pseudo-CT image and the regional quantification in the reconstructed PET images were compared. Results show that the Cycle-GAN framework can generate better AC compared to the Dixon segmentation and atlas methods, and shows comparable performance compared to the CNN method.
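The cycle-consistency idea in the abstract above, which removes the need for registered MR and CT pairs, can be sketched compactly. This is a minimal PyTorch-style illustration, not the paper's code; the generators G (MR to pseudo-CT) and F (CT to pseudo-MR) and the weight lam are assumptions.

# Sketch of the cycle-consistency term in Cycle-GAN style MR<->CT synthesis:
# both round trips are penalized with an L1 norm, so unpaired images suffice.
import torch

def cycle_consistency_loss(G, F, mr: torch.Tensor, ct: torch.Tensor,
                           lam: float = 10.0) -> torch.Tensor:
    recon_mr = F(G(mr))     # MR -> pseudo-CT -> reconstructed MR
    recon_ct = G(F(ct))     # CT -> pseudo-MR -> reconstructed CT
    return lam * (torch.mean(torch.abs(recon_mr - mr)) +
                  torch.mean(torch.abs(recon_ct - ct)))

During training this term is added to the adversarial losses from the patch-based discriminators described in the abstract.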
... Learning methods such as artificial neural networks (ANN), support vector machines (SVM), the k-means algorithm and the fuzzy C-means algorithm are efficient but require many computational steps and are sensitive to the variability of the PET radiotracer depending on the study protocol, for example scanner characteristics, injected radiotracer dose and the interval between radiotracer injection and the start of the exam [9,10]. In addition, supervised algorithms have limited application in PET imaging, unlike in the MRI or CT fields, due to the high heterogeneity that makes the recognition of stable features in the training set very difficult. ...
Conference Paper
Full-text available
Lesion volume delineation in Positron Emission Tomography images is challenging because of the low spatial resolution and high noise level. The aim of this work is the development of an operator-independent segmentation method for metabolic images. For this purpose, an algorithm for biological tumor volume delineation based on random walks on graphs has been used. Twenty-four cerebral tumors are segmented to evaluate the functional follow-up after Gamma Knife radiotherapy treatment. Experimental results show that the segmentation algorithm is accurate and has real-time performance. In addition, it can reflect metabolic changes useful for evaluating the radiotherapy response in treated patients.
... PET volume segmentation is vital for different applications, namely: to correct attenuation effects in PET datasets and to alleviate artefacts introduced through volume reconstruction using tissue component density association [1]; for tumour quantification in staging, a process which analyses the development of tumours over time; and to aid in radiotherapy treatment planning [2][3]. The utilisation of advanced high-performance segmentation approaches will be useful in aiding clinicians in diagnosis and radiotherapy planning. ...
Conference Paper
The increasing numbers of patient scans and the prevailing application of positron emission tomography (PET) in clinical oncology have led to a need for efficient PET volume handling and the development of new volume analysis approaches to aid clinicians in the diagnosis of disease and planning of treatment. A novel automated system for oncological PET volume segmentation is proposed in this paper. The proposed intelligent system uses a competitive neural network (CNN) and a learning vector quantisation neural network (LVQNN) for clustering and quantifying phantom and real PET volumes. The Bayesian information criterion (BIC) has been used in this system to assess the optimal number of clusters for each PET data set. An experimental study using phantom PET volumes was conducted for quantitative evaluation of the performance of the proposed segmentation algorithm. The analysis of the resulting segmentation of clinical oncological PET data confirms that this approach shows promise and can successfully segment patient lesions.
... On the other hand, for the detection and segmentation of lesions, intensity-based approaches have been developed. Indeed, the segmentation methods designed during the last ten years were mostly based on thresholding [2], region-growing [3], classification (FCM [4], FLAB [5]), watershed, or basic mathematical morphology pipelines (see [6] for a recent survey). In practice, such methods generally lead to interactive tools in clinical routine, where the expert user provides regions of interest (e.g., bounding boxes) and/or seeds, and tunes threshold values in order to delineate lesions in the chosen area(s). ...
Article
Full-text available
Positron Emission Tomography (PET) image segmentation is essential for detecting lesions and quantifying their metabolic activity. Due to the spatial and spectral properties of PET images, most methods rely on intensity-based strategies. Recent methods also propose to integrate anatomical priors to improve the segmentation process. In this article, we show how the hierarchical approaches proposed in mathematical morphology can efficiently handle these different strategies. Our contribution is twofold. First, we present the component-tree as a relevant data-structure for developing interactive, real-time, intensity-based segmentation of PET images. Second, we prove that thanks to the recent concept of shaping, we can efficiently involve a priori knowledge for lesion segmentation, while preserving the good properties of component-tree segmentation. Preliminary experiments on synthetic and real PET images of lymphoma demonstrate the relevance of our approach.
... We also study the overestimation of the actual regions, which optimizes 5 classes as proposed in [11]. Figure 8 shows the corresponding images with β = 10^-3 (as proposed by Figure 2). ...
Article
Full-text available
Segmentation is a widely accepted technique to reduce noise propagation from transmission scanning in positron emission tomography. The conventional routine is to perform reconstruction and segmentation sequentially. A smoothness penalty is also usually used to reduce noise, which can be imposed on both the ML and WLS estimators. In this paper we replace the smoothness penalty with a segmentation penalty that biases the object toward a piecewise-homogeneous reconstruction. Two updating algorithms are developed to solve the penalized ML and WLS estimates, which monotonically decrease the cost functions. Experimental results on both a simulated phantom and real clinical data demonstrate the effectiveness and efficiency of the proposed algorithms.
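In generic notation (a sketch, not the paper's exact formulation), both estimators minimize a penalized cost of the form

\Phi(\mu) = D(y, A\mu) + \beta \, R(\mu),

where D is the negative log-likelihood (ML) or weighted least-squares (WLS) data term for measurements y and system matrix A, β is the regularization weight, and R is the penalty. The paper's contribution is to replace the usual smoothness penalty R with a segmentation penalty that pulls μ toward piecewise-homogeneous attenuation values.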
... EM [12]: 0.44 ± 0.14; FCM [28]: 0.50 ± 0.08; Schaefer [29]: 0.43 ± 0.07 ...
Conference Paper
Full-text available
In this paper we explore the application of anomaly detection techniques to tumor voxel segmentation. The developed algorithms work on 3-point dynamic FDG-PET acquisitions and leverage the peculiar anaerobic metabolism that cancer cells exhibit over time. A few different global or local anomaly detectors are discussed, together with an investigation of two different algorithms aiming to estimate the statistical distribution of normal tissues. Finally, all the proposed algorithms are tested on a dataset of 9 patients, proving that anomaly detectors are able to outperform state-of-the-art techniques.
... where x_i is the feature vector at the ith location, c_k^(n) is the kth centroid at the nth iteration, and b is an exponent > 1. A variation of this method was applied by Zaidi et al. [23], in which the algorithm starts with an oversized number of clusters to avoid misidentification of conflicting regions. This is followed by a merging process to reach the desired or natural number of clusters according to a priori anatomical knowledge. ...
Article
Full-text available
Several methods have been proposed for the segmentation of ¹⁸F-FDG uptake in PET. In this study, we assessed the performance of four categories of ¹⁸F-FDG PET image segmentation techniques in pharyngolaryngeal squamous cell carcinoma using clinical studies where the surgical specimen served as the benchmark. Nine PET image segmentation techniques were compared including: five thresholding methods; the level set technique (active contour); the stochastic expectation-maximization approach; fuzzy clustering-based segmentation (FCM); and a variant of FCM, the spatial wavelet-based algorithm (FCM-SW) which incorporates spatial information during the segmentation process, thus allowing the handling of uptake in heterogeneous lesions. These algorithms were evaluated using clinical studies in which the segmentation results were compared to the 3-D biological tumour volume (BTV) defined by histology in PET images of seven patients with T3-T4 laryngeal squamous cell carcinoma who underwent a total laryngectomy. The macroscopic tumour specimens were collected "en bloc", frozen and cut into 1.7- to 2-mm thick slices, then digitized for use as reference. The clinical results suggested that four of the thresholding methods and expectation-maximization overestimated the average tumour volume, while a contrast-oriented thresholding method, the level set technique and the FCM-SW algorithm underestimated it, with the FCM-SW algorithm providing relatively the highest accuracy in terms of volume determination (-5.9 ± 11.9%) and overlap index. The mean overlap index varied between 0.27 and 0.54 for the different image segmentation techniques. The FCM-SW segmentation technique showed the best compromise in terms of 3-D overlap index and statistical analysis results with values of 0.54 (0.26-0.72) for the overlap index. The BTVs delineated using the FCM-SW segmentation technique were seemingly the most accurate and approximated closely the 3-D BTVs defined using the surgical specimens. Adaptive thresholding techniques need to be calibrated for each PET scanner and acquisition/processing protocol, and should not be used without optimization.
... Two methodologies for applying fuzzy logic in medical image segmentation have dominated the literature; the fuzzy c-means (FCM) approach [17], [18] and the Markovian approach [19], [15]. Chatzis et al. have proposed a hybrid FCM-MRF which combines the flexibility of the FCM model with the capacity to incorporate higher dimensional prior information of an MRF model [20]. ...
Article
Hyperpolarized MRI with 13C-labelled compounds is an emerging clinical technique allowing in vivo metabolic processes to be characterized non-invasively. Accurate quantification of 13C data, both for clinical and research purposes, typically relies on the use of region-of-interest analysis to detect and compare regions of altered metabolism. However, it is not clear how this should be determined from the five-dimensional data produced and most standard methodologies are unable to exploit the multidimensional nature of the data. Here we propose a solution to the novel problem of 13C image segmentation using a hybrid Markov random field model with continuous fuzzy logic. The algorithm fully utilizes the multi-dimensional data format in order to classify each voxel into one of six distinct classes based on its metabolic characteristics. Bayesian priors fully incorporate spatial, temporal and ratiometric contextual information whilst image contrast from multiple spectral dimensions are considered concurrently by using an analogy from color image segmentation. Performance of the algorithm is demonstrated on in silico data where the superiority of the approach over a reference thresholding method is consistently observed. Application to in vivo animal data from a pre-clinical subcutaneous tumor model illustrates the ability of the MRF algorithm to successfully detect tumor location whilst avoiding image artefacts. This work has the potential to assist the analysis of human hyperpolarized 13C data in the future.
... More details about segmentation methods based on thresholding approaches for PET images can be found in [34], [6], [91], [74], [14], [68]. Similar to the thresholding approaches, stochastic and learning-based approaches such as Fuzzy-C-Means (FCM) [90], [11], Fuzzy Locally Adaptive Bayesian (FLAB) [38], clustering [88] and mixture [1] techniques are extensively used in PET image segmentation [6]. This is because these approaches represent well the fuzzy nature of lesion boundaries, such as circular uptake regions [6]. ...
Article
Full-text available
Segmentation is one of the crucial steps in medical diagnosis applications. Accurate image segmentation plays an important role in the proper detection of disease, staging, diagnosis, radiotherapy treatment planning and monitoring. With advances in image segmentation techniques, joint segmentation of PET-CT images has received increasing attention in both the clinical and image processing fields. PET-CT imaging has become a standard method for tumor delineation and cancer assessment. Owing to the low spatial resolution of PET and the low contrast of CT images, automated segmentation of tumors in PET-CT images remains a well-known challenge. This paper describes and reviews four innovative methods used in the joint segmentation of functional and anatomical PET-CT images for tumor delineation. As background, state-of-the-art image segmentation methods are briefly reviewed and the fundamentals of PET and CT imaging are briefly explained. Further, the specific characteristics and limitations of the four joint segmentation methods are critically discussed.
... Another approach is based on image segmentation: the MR image is segmented into different tissue classes with the corresponding attenuation coefficients assigned to produce the attenuation map [69]- [76]. Various machine learning methods have also been proposed, including fuzzy c-means clustering [77], random forest [78], and Gaussian mixture regression [79]. For more details about these methods, readers are referred to previous review papers specifically about MR-based attenuation correction [80]- [84]. ...
Article
Full-text available
Machine learning has found unique applications in nuclear medicine from photon detection to quantitative image reconstruction. Although there have been impressive strides in detector development for time-of-flight positron emission tomography (PET), most detectors still make use of simple signal processing methods to extract the time and position information from the detector signals. Now, with the availability of fast waveform digitizers, machine learning techniques have been applied to estimate the position and arrival time of high-energy photons. In quantitative image reconstruction, machine learning has been used to estimate various correction factors, including scattered events and attenuation images, as well as to reduce statistical noise in reconstructed images. Here, machine learning either provides a faster alternative to an existing time-consuming computation, such as in the case of scatter estimation, or creates a data-driven approach to map an implicitly defined function, such as in the case of estimating the attenuation map for PET/MR scans. In this article, we will review the above-mentioned applications of machine learning in nuclear medicine.
... Segmentation-based methods try to segment the MRI image into the relevant tissue types for attenuation correction and assign a predefined attenuation coefficient to the segmented tissue types. 11 The possibility of automatic segmentation based on a standard T1-weighted MR image using fuzzy clustering was demonstrated by Zaidi et al. 12 Recently, our group has proposed a method that uses ultrashort echo time sequences to enable the distinction of bone and air. 13 Most of these methods have only been tested on limited clinical brain data. ...
Article
Accurate attenuation correction is important for PET quantification. Often, a segmented attenuation map is used, especially in MRI-based attenuation correction. As deriving the attenuation map from MRI images is difficult, different errors can be present in the segmented attenuation map. The goal of this paper is to determine the effect of these errors on quantification. The authors simulated the digital XCAT phantom using the GATE Monte Carlo simulation framework and a model of the Philips Gemini TF. A whole body scan was simulated, spanning an axial field of view of 70 cm. A total of fifteen lesions were placed in the lung, liver, spine, colon, prostate, and femur. The acquired data were reconstructed with a reference attenuation map and with different attenuation maps that were modified to reflect common segmentation errors. The quantitative difference between reconstructed images was evaluated. Segmentation into five tissue classes, namely cortical bone, spongeous bone, soft tissue, lung, and air, yielded errors below 5%. Large errors were caused by ignoring lung tissue (up to 45%) or cortical bone (up to 17%). The interpatient variability of lung attenuation coefficients can lead to errors of 10% and more. Up to 20% tissue misclassification from bone to soft tissue yielded errors below 5%. The same applies for up to 10% misclassification from lung to air. When using a segmented attenuation map, at least five different tissue types should be considered: cortical bone, spongeous bone, soft tissue, lung, and air. Furthermore, the interpatient variability of lung attenuation coefficients should be taken into account. Limited misclassification from bone to soft tissue and from lung to air is acceptable, as these do not lead to relevant errors.
Chapter
The physical basis of the attenuation phenomenon lies in the natural property that photons emitted by the radiopharmaceutical will interact with tissue and other materials as they pass through the body. For photon energies representative of those encountered in nuclear medicine (i.e., 68 to 80 keV for 201Tl up to 511 keV for positron emitters), photons emitted by radiopharmaceuticals can undergo photoelectric interactions in which the incident photon is completely absorbed. In other cases, the primary radionuclide photon interacts with loosely bound electrons in the surrounding material and is scattered. The trajectory of the scattered photon generally carries it in a different direction from that of the primary photon, and its energy can be lower than (in the case of incoherent scattering) or the same as (in the case of coherent scattering) that of the incident photon. It is worth emphasizing that for soft tissue (the most important constituent of the body), a moderately low-Z material, there are two distinct regions of single-interaction dominance: photoelectric below 20 keV and incoherent above. Moreover, at 511 keV in water more than 99.7% of scattering events are Compton (incoherent) interactions, and the number of interactions by photoelectric absorption or coherent scattering is negligible. Mathematically, the magnitude of photon transmission through an attenuating object can be expressed by the exponential equation I = I_0 exp(-∫ μ(x) dx), where I_0 is the unattenuated photon flux and μ(x) the linear attenuation coefficient along the photon path.
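As a worked illustration of this relation (our numbers; μ ≈ 0.096 cm⁻¹ is the commonly tabulated narrow-beam value for water at 511 keV), the transmitted fraction and the resulting attenuation correction factor for a few water-equivalent path lengths can be computed directly:

```python
import numpy as np

MU_WATER_511 = 0.096  # cm^-1, narrow-beam attenuation of water at 511 keV

for depth_cm in (5, 10, 20):
    transmission = np.exp(-MU_WATER_511 * depth_cm)   # I/I0 = exp(-mu * x)
    print(f"{depth_cm:2d} cm of water: I/I0 = {transmission:.3f}, "
          f"correction factor = {1.0 / transmission:.1f}")
```

For a coincidence pair in PET the relevant path is the total chord through the body, so at a typical 20 cm chest thickness the correction factor is already close to 7, which is why accurate attenuation correction is so critical in whole-body imaging.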
Conference Paper
Full-text available
Quantitative PET imaging requires a dynamic scan in order to measure the arterial input function and the tissue time-activity curves. By combining these two curves with adequate mathematical models it is possible to obtain useful physiological information such as metabolic rate, perfusion, receptor density, etc. Cluster analysis (CA) allows pixels having the same kinetics to be grouped. In this work the performance of two clustering algorithms was assessed. The user must supply a set of images acquired at different time points and the number of clusters. The correct number of clusters was chosen using a parsimony criterion. In order to test the CA method, real dynamic small-animal PET data were acquired. The image-derived arterial input function and myocardial FDG uptake were measured. Results showed that CA allows accurate tissue time-activity curves to be obtained without the need for manual region-of-interest delineation.
Conference Paper
We present a method for obtaining attenuation maps for use in emission computed tomography (ECT) using ultra-low-dose CT data (at 140 kVp, down to 10 mA). This is achieved using a recursive k-means clustering method, the output of which initializes successive parameter-less region-growing procedures. The method automatically produces templates corresponding to bone, lung, soft and dense tissue (muscle and fat). The segmentation of each tissue class from k-means clustering is used to compensate for the higher statistical noise variation seen at lower dose. The use of the region grower provides local contextual information that minimizes the impact of global noise. The templates were assigned appropriate linear attenuation coefficients and then convolved with the PET/SPECT system's PSF. This approach was applied to a dataset from an experimental anthropomorphic phantom exposed to systematically reduced CT dose, derived from an X-ray beam at 140 kVp with the current varied from 160 mA (full diagnostic dose) to 10 mA (ultra-low dose). Preliminary results show that, for the purpose of CT attenuation correction, the presented segmentation method makes it possible to produce attenuation maps at ultra-low dose with very low error compared with full diagnostic dose.
Article
Based on Bayes theory, Green introduced the maximum a posteriori (MAP) algorithm to obtain smoothed reconstructions in positron emission tomography. This algorithm is flexible and convenient for most penalties, but its convergence is hard to guarantee. Toward the same goal, Fessler penalized a weighted least squares (WLS) estimator with a quadratic penalty and solved it with the successive over-relaxation (SOR) algorithm; however, that algorithm is time-consuming and difficult to parallelize. Anderson proposed another WLS estimator for faster convergence, for which few regularization methods have been studied. For the three regularized estimators above, we develop three new expectation-maximization (EM) type algorithms. Unlike MAP and SOR, the proposed algorithms derive their update rules by minimizing auxiliary functions constructed at the previous iterations, which ensures that the cost functions decrease monotonically. Experimental results demonstrated the robustness and effectiveness of the proposed algorithms.
Conference Paper
Full-text available
We propose two forms of cluster-based priors for the maximum a posteriori (MAP) algorithm to improve PET image reconstruction quantitatively. Conventionally, most priors in MAP reconstruction use weighted differences between voxel intensities within a small localized spatial neighborhood, exploiting intensity similarities amongst adjacent voxels. It was hypothesized that by incorporating a larger collection of voxels with similar properties, the MAP approach has a greater ability to impose smoothness while preserving edges. We propose to use clustering techniques, applied to pre-reconstructed images, to define clustered neighborhoods of voxels with similar intensities. Two forms of cluster-based priors are proposed. The unweighted cluster-based prior (CP-U) applies a uniform weight to voxel value differences regardless of position within a cluster. The distance-weighted cluster-based prior (CP-W) applies different weights based on the distance between voxels within a cluster. The two forms, CP-U and CP-W, are implemented within MAP reconstruction. The fuzzy C-means (FCM) method is used to cluster the filtered backprojection (FBP) reconstructed image before MAP reconstruction. To evaluate the proposed priors, a mathematical brain phantom was used in analytic simulations to generate the projection data. We compare reconstructed images from the proposed cluster-based prior MAP algorithms with those from conventional MLEM and quadratic prior (QP) MAP algorithms, using regional bias (normalized mean squared error, NMSE) versus noise (normalized standard deviation, NSD) tradeoff curves. MAP reconstruction using cluster-based priors (CP-U-MAP and CP-W-MAP) dramatically improved the noise versus bias tradeoff when the number of clusters selected is equal to or larger than the true number of clusters within the image. However, CP-U-MAP may introduce some bias in a region that is wrongly clustered, e.g. when the number of selected clusters is smaller than the true number of clusters; this problem is largely avoided by CP-W-MAP reconstruction, which exhibits very robust quantitative performance.
Article
Purpose: Phase analysis of single photon emission computed tomography (SPECT) radionuclide angiography (RNA) has been investigated for its potential to predict the outcome of cardiac resynchronization therapy (CRT). However, phase analysis may be limited in its potential at predicting CRT outcome as valuable information may be lost by assuming that time-activity curves (TAC) follow a simple sinusoidal shape. A new method, cluster analysis, is proposed which directly evaluates the TACs and may lead to a better understanding of dyssynchrony patterns and CRT outcome. Cluster analysis algorithms were developed and optimized to maximize their ability to predict CRT response. Methods: Forty-nine patients (N = 27 with ischemic etiology) received a SPECT RNA scan as well as positron emission tomography (PET) perfusion and viability scans prior to undergoing CRT. A semiautomated algorithm sampled the left ventricular wall to produce 568 TACs from the SPECT RNA data. The TACs were then subjected to two different cluster analysis techniques, K-means and normal-average clustering, with several input metrics varied to determine the optimal settings for the prediction of CRT outcome. Each TAC was assigned to a cluster group based on the comparison criteria, and global and segmental cluster sizes and scores were used as measures of dyssynchrony to predict response to CRT. A repeated random twofold cross-validation technique was used to train and validate the cluster algorithm. Receiver operating characteristic (ROC) analysis was used to calculate the area under the curve (AUC) and compare results to those obtained with SPECT RNA phase analysis and PET scar size analysis methods. Results: Using the normal-average cluster analysis approach, the septal wall produced statistically significant results for predicting CRT response in the ischemic population (ROC AUC = 0.73; p < 0.05 vs. equal-chance ROC AUC = 0.50), with an optimal operating point of 71% sensitivity and 60% specificity. Cluster analysis results were similar to SPECT RNA phase analysis (ROC AUC = 0.78, p = 0.73 vs. cluster AUC; sensitivity/specificity = 59%/89%) and PET scar size analysis (ROC AUC = 0.73, p = 1.0 vs. cluster AUC; sensitivity/specificity = 76%/67%). Conclusions: A SPECT RNA cluster analysis algorithm was developed for the prediction of CRT outcome. Cluster analysis produced results equivalent to those obtained from Fourier and scar analysis.
Chapter
We present an efficient clustering method for detecting tumors in positron emission tomography (PET) of tumor-bearing small animals. We used an iterative threshold method to remove the background noise and then applied two clustering procedures in order. The first is an intensity-based clustering method to segment the tumor region; the second is connectivity-based clustering to remove false-positive regions from the segmented result. The tumor tissue appears bright in the image compared with surrounding normal tissue because of glucose uptake. Therefore, based on voxel intensity, we divided all elements of the image into several clusters (tumor, body, background) using an improved fuzzy c-means (FCM) clustering. Initializing FCM with the sorted mean of each cluster avoids poor local optima and reduces the computation time. However, not only the tumor tissue but also other organs such as the heart and bladder can have high intensity values because of glucose metabolism. Therefore, in order to separate the tumor from false-positive regions, we applied geometric clustering based on connectivity. The proposed segmentation method enables robust analysis of tumor growth with the aid of quantitative measurements such as tumor size or volume.
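The connectivity step described here is, in essence, a connected-component filter. A hedged sketch of one way to realize it with scipy.ndimage follows (this is not the chapter's exact geometric clustering; the function name, and selecting the surviving component by mean uptake, are our illustrative choices):

```python
import numpy as np
from scipy import ndimage

def keep_hottest_component(mask, volume):
    """Of all 3-D connected components in a binary tumor mask, keep only
    the one with the highest mean uptake; the remaining components are
    treated as false positives (e.g. physiological heart or bladder uptake)."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    means = ndimage.mean(volume, labels=labels, index=np.arange(1, n + 1))
    return labels == (1 + int(np.argmax(means)))
```

In practice the selection criterion could equally be component size or distance to a user-supplied seed point, depending on which false positives dominate.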
Article
Purpose: Many methods have been proposed for tumor segmentation from positron emission tomography images. Because of the increasingly important role that [(11)C]choline is playing in oncology and because no study has compared segmentation methods on this tracer, the authors assessed several segmentation algorithms on a [(11)C]choline test-retest dataset. Methods: Fixed and adaptive threshold-based methods, fuzzy C-means (FCM), Canny's edge detection method, the watershed transform, and the fuzzy locally adaptive Bayesian algorithm (FLAB) were used. Test-retest [(11)C]choline scans of nine patients with breast cancer were considered and the percent test-retest variability %VAR(TEST-RETEST) of tumor volume (TV) was employed to assess the results. The same methods were then applied to two denoised datasets generated by applying either a Gaussian filter or the wavelet transform. Results: The (semi)automated methods FCM, FLAB, and Canny emerged as the best ones in terms of TV reproducibility. For these methods, the %root mean square error %RMSE of %VAR(TEST-RETEST), defined as %RMSE = √(variance + mean²), was in the range 10%-21.2%, depending on the dataset and algorithm. Threshold-based methods gave TV estimates which were extremely variable, particularly on the unsmoothed data; their performance improved on the denoised datasets, whereas smoothing did not have a remarkable impact on the (semi)automated methods. TV variability was comparable to that of SUV(MAX) and SUV(MEAN) (range 14.7%-21.9% for %RMSE of %VAR(TEST-RETEST), after the exclusion of one outlier, 40%-43% when the outlier was included). Conclusions: The TV variability obtained with the best methods was similar to the one reported for TV in previous [(18)F]FDG and [(18)F]FLT studies and to the one of SUV(MAX)/SUV(MEAN) on the authors' [(11)C]choline dataset. The good reproducibility of [(11)C]choline TV warrants further studies to test whether TV could predict early response to treatment and survival, as for [(18)F]FDG, to complement/substitute the use of SUV(MAX) and SUV(MEAN).
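To make the reconstructed formula concrete, here is a hedged sketch of the test-retest computation, assuming the usual definition of percent variability as the volume difference divided by the pairwise mean (the function names and example volumes are ours and purely illustrative, not data from the study):

```python
import numpy as np

def pct_var(test, retest):
    """Percent test-retest variability: difference over the pairwise mean."""
    test, retest = np.asarray(test, float), np.asarray(retest, float)
    return 100.0 * (retest - test) / (0.5 * (test + retest))

def pct_rmse(values):
    """%RMSE = sqrt(variance + mean^2) of the %variability values."""
    v = np.asarray(values, float)
    return float(np.sqrt(v.var() + v.mean() ** 2))

# Illustrative tumor volumes (mL) from a hypothetical test-retest pair.
tv_test = np.array([12.1, 5.4, 20.3, 8.0, 15.2])
tv_retest = np.array([11.5, 6.0, 18.9, 8.6, 14.1])
print(f"%RMSE = {pct_rmse(pct_var(tv_test, tv_retest)):.1f}")
```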
Article
Classifying tumours in positron emission tomography (PET) imaging at an early stage of illness is important for radiotherapy planning, tumour diagnosis, and fast recovery. There are many techniques for analysing medical volumes, some of which have poor accuracy and require a lot of time for processing large medical volumes. Artificial intelligence (AI) technologies can provide better accuracy and save a considerable amount of time. This paper proposes an adaptive neuro-fuzzy inference system (ANFIS) for analysing 3D PET volumes. The ANFIS performance evaluation has been carried out using a confusion matrix, accuracy, and misclassification value. Two PET phantom data sets, a clinical PET volume of a non-small cell lung cancer patient, and PET volumes from seven patients with laryngeal tumours have been used in this study to evaluate the performance of the proposed approach. The proposed classification methodology for phantom and clinical oncological PET data has shown promising results and can successfully classify patient lesions.
Article
The Gaussian mixture is an important distribution for describing mixed data, in which each mixed component must be assumed to be normal. In some cases, however, the normal distribution cannot model the data adequately, so the Gaussian mixture is invalid. The generalized exponential distribution is a flexible density model which can describe uniform, Gaussian, Laplacian and other sub- and super-Gaussian unimodal densities. In this paper, the generalized exponential mixture model is studied, of which the Gaussian mixture and the Laplacian mixture are two special cases. Different distributions can also be mixed, such as a Laplacian with a Gaussian. In one-dimensional real space, an EM algorithm is developed to solve the mixture model; it can be conditionally generalized to n-dimensional real space. Numerical simulations and an image segmentation experiment demonstrate the feasibility and effectiveness of the proposed approach.
Conference Paper
In clinically applicable structural magnetic resonance images (MRI), bone and air have similarly low signal intensity, making the differentiation between them a very challenging task. MRI-based bone/air segmentation, however, is a critical step in some emerging applications, such as skull atlas building, MRI-based attenuation correction for positron emission tomography (PET), and MRI-based radiotherapy planning. In view of the availability of hybrid PET-MRI machines, we propose a voxel-wise classification method for bone/air segmentation. The method is based on random forest theory and on features extracted from structural MRI and attenuation-uncorrected PET. The Dice Similarity Score (DSC) between the segmentation result and the 'ground truth' obtained by thresholding computed tomography images was calculated for validation. Images from 10 subjects were used for validation, achieving a DSC of 0.83±0.08 and 0.98±0.01 for air and bone, respectively. The results suggest that structural MRI and uncorrected PET can be used to reliably differentiate between air and bone.
Article
This work presents a Monte Carlo simulation-based study which uses two adaptive neuro-fuzzy inference systems (ANFIS) for cross-talk compensation in simultaneous 99mTc/201Tl dual-radioisotope SPECT imaging. We compared two neuro-fuzzy systems, based on fuzzy c-means (FCM) and on subtractive (SUB) clustering. Our approach incorporates image acquisition in eight energy windows from 28 keV to 156 keV, including the two main photopeaks of 201Tl (77 keV ± 10%) and 99mTc (140 keV ± 10%). The Geant4 Application for Tomographic Emission (GATE) is used as the Monte Carlo simulator for studies of three cylindrical phantoms and a NURBS-based cardiac torso (NCAT) phantom. Three separate acquisitions, two single-isotope and one dual-isotope, were performed in this study. Cross-talk- and scatter-corrected projections are reconstructed by an iterative ordered-subsets expectation maximization (OSEM) algorithm which models the non-uniform attenuation in the projection/back-projection. The ANFIS-FCM/SUB structures are tuned to create three to sixteen fuzzy rules for modeling the photon cross-talk of the two radioisotopes. Applying seven to nine fuzzy rules yields the best overall improvement in contrast and bias. ANFIS-FCM was found to outperform, owing to its faster execution and generally more accurate results.
Article
In PET activation studies, linear changes in regional cerebral blood flow may be caused by subject interscan displacements rather than by changes in cognitive state. The aim of this study was to investigate the impact of these artifacts and to assess whether they can be removed by applying a scan-specific calculated attenuation correction (CAC) instead of the default measured attenuation correction (MAC). Two independent data sets were analyzed, one with large (data I) and one with small (data II) interscan displacements. After attenuation correction (CAC or MAC), data were analyzed using SPM99. Interscan displacement parameters (IDP), obtained during scan realignment, were included as additional regressors in the General Linear Model and their impact was assessed by variance statistics revealing the affected brain volume. For data I, this volume reduced dramatically from 579 to 12 cm³ (approximately 50-fold) at P_uncorr ≤ 0.001 and from 100 to 0 cm³ at P_corr ≤ 0.05 when CAC was applied instead of MAC. Surprisingly, for data II, applying CAC instead of MAC still resulted in a substantial (approximately 10-fold) reduction of the affected volume from 23 to 2 cm³ at P_uncorr ≤ 0.001. We conclude that interscan displacement-induced variance can be prevented by applying a (re)aligned attenuation correction scan (e.g., CAC). With MAC data, introducing IDP covariates is not an alternative since they model only this variance. Even in data with minor interscan displacements, applying a (re)aligned attenuation correction method (e.g., CAC) is superior to a nonaligned MAC with IDP covariates.
Article
Full-text available
One of the main sources of error for semi-quantitative analysis in positron emission tomography (PET) imaging, for diagnosis and patient follow-up as well as for flourishing new applications like image-guided radiotherapy, is the methodology used to define the volumes of interest in the functional images. This is explained by the poor image quality in emission tomography, resulting from noise and the blurring induced by partial volume effects, as well as by the variability of acquisition protocols, scanner models and image reconstruction procedures. The large number of methodologies proposed for the definition of a PET volume of interest does not help either. The majority of such approaches are based on deterministic binary thresholding, which is not robust to contrast variation and noise. In addition, these methodologies are usually unable to correctly handle heterogeneous uptake inside tumours. The objective of this thesis is to develop an automatic, robust, accurate and reproducible 3D image segmentation approach for determining the functional volumes of tumours of all sizes and shapes, whose activity distribution may be strongly heterogeneous. The approach we have developed is based on a statistical image segmentation framework combined with a fuzzy measure, which allows both the noisy and blurry properties of nuclear medicine images to be taken into account. It uses stochastic iterative parameter estimation and a locally adaptive model of each voxel and its neighbours for the estimation and segmentation. The developed approaches have been evaluated using a large array of datasets, comprising both simulated and real acquisitions of phantoms and tumours. The results obtained on phantom acquisitions validated the accuracy of the segmentation with respect to the size of the considered structures, down to 13 mm in diameter (about twice the spatial resolution of a typical PET scanner), as well as its robustness with respect to noise, contrast variation, acquisition parameters, scanner models and reconstruction algorithms. The performance of the developed algorithm is shown to be superior to that of reference thresholding methodologies. The results demonstrate the ability of the developed approach to accurately delineate tumours with complex shapes and activity distributions, for which the reference methodologies fail to generate coherent segmentation maps. The algorithm is also able to delineate multiple regions inside the tumour, whereas reference methodologies are usually binary only. Both the robustness and the accuracy results demonstrate that the proposed methodology may be used in a clinical context for diagnosis and patient follow-up, as well as for radiotherapy treatment planning and "dose painting", facilitating optimized dosimetry and potentially reducing the doses delivered to healthy tissues around the tumour and nearby organs. Studies evaluating the impact of the methodology on radiotherapy treatment planning have already started in a project exploring the potential of the algorithm, which has been patented.
Article
This paper presents a new fuzzy subspace clustering (FSC) method which finds subspaces as clusters, such that each point belongs to the nearest subspace with a certain weight or probability. We then propose two graph-regularized versions, in which two points are more likely to be assigned to the same cluster if they are spatially close or share the same labels. Of the two proposed graph regularizations, one encodes the weight (or probability) of a point being assigned to a cluster and the other encodes the projection coefficients of a point on a subspace. We develop iterative solutions for these methods by constructing a surrogate with a simple structure that monotonically decreases the cost function. The experimental results, using both synthetic and real-world databases, demonstrate the effectiveness and flexibility of the proposed methods.
Article
To assess the value of positron emission tomography (PET)/computed tomography (CT) with (18)F-choline and/or (11)C-acetate for detecting residual or recurrent tumour after radical prostatectomy (RP) in patients with a prostate-specific antigen (PSA) level of <1 ng/mL referred for adjuvant or salvage radiotherapy. In all, 22 PET/CT studies were performed, 11 with (18)F-choline (group A) and 11 with (11)C-acetate (group B), in 20 consecutive patients (two undergoing PET/CT scans with both tracers). The median (range) PSA level before PET/CT was 0.33 (0.08-0.76) ng/mL. Endorectal-coil magnetic resonance imaging (MRI) was used in 18 patients. Nineteen patients were eligible for evaluation of biochemical response after salvage radiotherapy. There was abnormal local tracer uptake in five and six patients in groups A and B, respectively. Except for a single positive obturator lymph node, there was no other site of metastasis. In the two patients evaluated with both tracers there was no pathological uptake. Endorectal MRI was locally positive in 15 of 18 patients; 12 of 19 responded with a marked decrease in PSA level (half or more from baseline) 6 months after salvage radiotherapy. Although (18)F-choline and (11)C-acetate PET/CT studies succeeded in detecting local residual or recurrent disease in about half the patients with PSA levels of <1 ng/mL after RP, these studies cannot yet be recommended as a standard diagnostic tool for early relapse or suspicion of subclinical minimally persistent disease after surgery. Endorectal MRI might be more helpful, especially in patients with a low likelihood of distant metastases. Nevertheless, further research with (18)F-choline and/or (11)C-acetate PET with optimal spatial resolution might be needed for patients with a high risk of distant relapse after RP even at low PSA values.
Article
Full-text available
A counterexample to the original incorrect convergence theorem for the fuzzy c-means (FCM) clustering algorithms (see J.C. Bezdek, IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-2, no. 1, pp. 1-8, 1980) is provided. This counterexample establishes the existence of saddle points of the FCM objective function at locations other than the geometric centroid of fuzzy c-partition space. Counterexamples previously discussed by W.T. Tucker (1987) are summarized. The correct theorem is stated without proof: every FCM iterate sequence converges, at least along a subsequence, to either a local minimum or a saddle point of the FCM objective function. Although Tucker's counterexamples and the corrected theory appear elsewhere, they are restated as a caution not to further propagate the original incorrect convergence statement.
Article
Full-text available
In a fuzzy clustering an object typically receives strictly positive memberships to all clusters, even when the object clearly belongs to one particular cluster. Consequently, each cluster's estimated center and scatter matrix are influenced by many objects that have small positive memberships to it. This effect may keep the fuzzy method from finding the true clusters. We analyze the cause and propose a remedy, which is a modification of the objective function and the corresponding algorithm. The resulting clustering has a high contrast in the sense that outlying and bridging objects remain fuzzy, whereas the other objects become crisp. The enhanced version of fuzzy k-means is illustrated with an example, as well as the enhanced version of the fuzzy minimum volume method.
Article
Full-text available
This work presents initial results from observer detection performance studies using the same volume visualization software tools that are used in clinical PET oncology imaging. Research into the FORE+OSEM and FORE+AWOSEM statistical image reconstruction methods tailored to whole-body 3D PET oncology imaging has indicated potential improvements in image SNR compared with currently used analytic reconstruction methods (FBP). To assess the resulting impact of these reconstruction methods on the performance of human observers in detecting and localizing tumors, we use a non-Monte Carlo technique to generate multiple statistically accurate realizations of 3D whole-body PET data, based on an extended MCAT phantom and with clinically realistic levels of statistical noise. For each realization, we add a fixed number of randomly located 1 cm diameter lesions whose contrast is varied among pre-calibrated values so that the range of true positive fractions is well sampled. The observer is told the number of tumors and, similar to the AFROC method, asked to localize all of them. The true positive fraction for the three algorithms (FBP, FORE+OSEM, FORE+AWOSEM) as a function of lesion contrast is calculated, although other protocols could be compared. A confidence level for each tumor is also recorded for incorporation into later AFROC analysis.
Article
Full-text available
A number of hard clustering algorithms have been shown to be derivable from the maximum likelihood principle. The only corresponding fuzzy algorithms are the well-known fuzzy k-means or fuzzy ISODATA of Dunn and its generalizations by Bezdek and by Gustafson and Kessel. The authors show how to generate two other fuzzy algorithms which are analogues of known hard algorithms: the minimization of the fuzzy determinant and of the product of fuzzy determinants. A comparison between the hard and fuzzy methods shows that the latter more often yield the global optimum, rather than merely a local optimum. This result and the comparison between the different algorithms, together with their specific domains of application, are illustrated by a few numerical examples.
Article
Full-text available
Poisson noise in transmission data can have a significant influence on the statistical uncertainty of PET measurements, particularly at low transmission count rates. In this paper, we investigate the effect of transmission data processing on noise and quantitative accuracy of reconstructed PET images. Differences in spatial resolution between emission and transmission measurements due to transmission data smoothing are shown to have a significant influence on quantitative accuracy and can lead to artifacts in the reconstructed image. In addition, the noise suppression of this technique is insufficient to greatly reduce transmission scan times. Based on these findings, improved strategies for processing count-limited transmission data have been developed, including a method using segmentation of attenuation images. Using this method, accurate attenuation correction can be performed using transmission scan times as low as 2 min without increasing noise in reconstructed PET images.
Article
Full-text available
In this study, we investigate the application of fuzzy clustering to the anatomical localization and quantitation of brain lesions in positron emission tomography (PET) images. The method is based on the fuzzy C-means (FCM) algorithm, which segments the PET image data points into a given number of clusters, each cluster being a homogeneous region of the brain (e.g. a tumor). Each feature vector is assigned to the cluster for which it has the highest membership degree. Once the FCM algorithm has assigned a label to a cluster, one may easily compute the corresponding spatial localization, area and perimeter. Studies concerning the evolution of a tumor after different treatments in two patients are presented.
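The geometric quantities mentioned here follow directly from the hard label map. A minimal 2-D sketch (our function; the pixel spacing is an assumed example value, and the perimeter is a crude boundary-pixel count rather than a sub-pixel estimate):

```python
import numpy as np
from scipy import ndimage

def cluster_geometry(labels, cluster_id, pixel_mm=(2.0, 2.0)):
    """Spatial localization (centroid), area and approximate perimeter of
    one FCM cluster in a 2-D label map."""
    mask = labels == cluster_id
    centroid = ndimage.center_of_mass(mask)          # (row, col) in pixels
    area = mask.sum() * pixel_mm[0] * pixel_mm[1]    # mm^2
    boundary = mask & ~ndimage.binary_erosion(mask)  # one-pixel-wide rim
    perimeter = boundary.sum() * pixel_mm[0]         # rough estimate, mm
    return centroid, area, perimeter
```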
Article
Full-text available
Attenuation correction in single-photon (SPET) and positron emission (PET) tomography is now accepted as a vital component for the production of artefact-free, quantitative data. The most accurate attenuation correction methods are based on measured transmission scans acquired before, during, or after the emission scan. Alternative methods use segmented images, assumed attenuation coefficients or consistency criteria to compensate for photon attenuation in reconstructed images. This review examines the methods of acquiring transmission scans in both SPET and PET and the manner in which these data are used. While attenuation correction gives an exact correction in PET, as opposed to an approximate one in SPET, the magnitude of the correction factors required in PET is far greater than in SPET. Transmission scans also have a number of other potential applications in emission tomography apart from attenuation correction, such as scatter correction, inter-study spatial co-registration and alignment, and motion detection and correction. The ability to acquire high-quality transmission data in a practical clinical protocol is now an essential part of the practice of nuclear medicine.
Article
Full-text available
In this work we demonstrate the proof of principle of CT-based attenuation correction of 3D positron emission tomography (PET) data by using scans of bone and soft tissue equivalent phantoms and scans of humans. This method of attenuation correction is intended for use in a single scanner that combines volume-imaging (3D) PET with x-ray computed tomography (CT) for the purpose of providing accurately registered anatomical localization of structures seen in the PET image. The goal of this work is to determine if we can perform attenuation correction of the PET emission data using accurately aligned CT attenuation information. We discuss possible methods of calculating the PET attenuation map at 511 keV based on CT transmission information acquired from 40 keV through 140 keV. Data were acquired on separate CT and PET scanners and were aligned using standard image registration procedures. Results are presented on three of the attenuation calculation methods: segmentation, scaling, and our proposed hybrid segmentation/scaling method. The results are compared with those using the standard 3D PET attenuation correction method as a gold standard. We demonstrate the efficacy of our proposed hybrid method for converting the CT attenuation map from an effective CT photon energy of 70 keV to the PET photon energy of 511 keV. We conclude that using CT information is a feasible way to obtain attenuation correction factors for 3D PET.
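The hybrid segmentation/scaling idea can be pictured as a bilinear conversion: CT voxels are split at a bone threshold, and each class is mapped to 511 keV with its own slope because bone and soft tissue scale differently with photon energy. The sketch below is a generic illustration under assumed, uncalibrated parameter values, not the calibration the paper derives from its 40-140 keV CT data:

```python
import numpy as np

MU_WATER_511 = 0.096  # cm^-1 at 511 keV

def hybrid_ct_to_mu511(hu, bone_threshold=300.0, bone_slope=0.5):
    """Bilinear CT -> 511 keV conversion. Below the bone threshold, mu is
    scaled linearly with HU as for water-like tissue; above it, a reduced
    slope reflects the different energy dependence of bone. Threshold and
    slope values here are illustrative placeholders."""
    hu = np.asarray(hu, dtype=float)
    soft = MU_WATER_511 * (1.0 + hu / 1000.0)
    bone = MU_WATER_511 * (1.0 + bone_threshold / 1000.0
                           + bone_slope * (hu - bone_threshold) / 1000.0)
    return np.clip(np.where(hu <= bone_threshold, soft, bone), 0.0, None)
```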
Article
Full-text available
Transmission scanning can be successfully performed with a 137Cs single-photon emitting point source for three-dimensional PET imaging. However, the attenuation coefficients provided by this method are underestimated because of the energy difference between 662- and 511-keV photons, as well as scatter and emission contamination when the transmission data are acquired after injection. The purpose of this study was to evaluate, from a clinical perspective, the relative benefits of various processing schemes to resolve these issues. Thirty-eight whole-body PET studies acquired with postinjection singles transmission scans were analyzed. The transmission images were processed and applied to the emission data for attenuation correction. Three processing techniques were compared: simple segmentation (SEG) of the transmission scan, emission contamination subtraction with scaling (ECS) of the resulting data to 511-keV attenuation coefficient values and a hybrid technique performing partial segmentation of some tissue densities on the ECS scan (THR). The corrected emission scans were blindly assessed for image noise, the presence of edge artifacts at the lung-soft-tissue interface and for overall diagnostic confidence using a semiquantitative scoring system. The count densities and the SDs in uniform structures were compared among the various techniques. The observations for each method were compared using a paired t test. The SEG technique produced images that were visually less noisy than the ECS method (P < 0.0001) and the THR technique, but at the expense of increased edge artifacts at the boundaries between the lungs and surrounding tissues. The THR technique failed to eliminate these artifacts compared with the ECS technique (P < 0.0001) but preserved the activity gradients in the hilar areas. The count densities (and thus, the standardized uptake values) were similar among the three techniques, but the SEG method tended to underestimate the activity in the lung fields and in chest tumors (slope = 0.79 and 0.94, respectively). For many clinical applications, SEG data remain an efficient method for processing 137Cs transmission scans. The ECS method produced noisier images than the other two techniques but did not introduce artifacts at the lung boundaries. The THR technique, more versatile in complex anatomic areas, allowed good preservation of density gradients in the lungs.
Article
Full-text available
A method is presented for fully automated detection of Multiple Sclerosis (MS) lesions in multispectral magnetic resonance (MR) imaging. Based on the Fuzzy C-Means (FCM) algorithm, the method starts with a segmentation of an MR image to extract an external CSF/lesions mask, preceded by a local image contrast enhancement procedure. This binary mask is then superimposed on the corresponding data set yielding an image containing only CSF structures and lesions. The FCM is then reapplied to this masked image to obtain a mask of lesions and some undesired substructures which are removed using anatomical knowledge. Any lesion size found to be less than an input bound is eliminated from consideration. Results are presented for test runs of the method on 10 patients. Finally, the potential of the method as well as its limitations are discussed.
Article
Full-text available
The availability of accurately aligned, whole-body anatomical (CT) and functional (PET) images could have a significant impact on diagnosing and staging malignant disease and on identifying and localizing metastases. Computer algorithms to align CT and PET images acquired on different scanners are generally successful for the brain, whereas image alignment in other regions of the body is more problematic. A combined PET/CT tomograph with the unique capability of acquiring accurately aligned functional and anatomical images for any part of the human body has been designed and built. The PET/CT scanner was developed as a combination of a Siemens Somatom AR.SP spiral CT and a partial-ring, rotating ECAT ART PET scanner. All components are mounted on a common rotational support within a single gantry. The PET and CT components can be operated either separately, or in combined mode. In combined mode, the CT images are used to correct the PET data for scatter and attenuation. Fully quantitative whole-body images are obtained for an axial extent of 100 cm in an imaging time of less than 1 h. When operated in PET mode alone, transmission scans are acquired with dual 137Cs sources. The scanner is fully operational and the combined device has been operated successfully in a clinical environment. Over 110 patients have been imaged, covering a range of different cancers, including lung, esophageal, head and neck, melanoma, lymphoma, pancreas, and renal cell. The aligned PET and CT images are used both for diagnosing and staging disease and for evaluating response to therapy. We report the first performance measurements from the scanner and present some illustrative clinical studies acquired in cancer patients. A combined PET and CT scanner is a practical and effective approach to acquiring co-registered anatomical and functional images in a single scanning session.
Article
Full-text available
Methods of quantitative emission computed tomography require compensation for linear photon attenuation. A current trend in single-photon emission computed tomography (SPECT) and positron emission tomography (PET) is to employ transmission scanning to reconstruct the attenuation map. Such an approach, however, considerably complicates both the scanner design and the data acquisition protocol. A dramatic simplification could be made if the attenuation map could be obtained directly from the emission projections, without the use of a transmission scan. This can be done by applying the consistency conditions that enable us to identify the operator of the problem and, thus, to reconstruct the attenuation map. In this paper, we propose a new approach based on the discrete consistency conditions. One of the main advantages of the suggested method over previously used continuous conditions is that it can easily be applied in various scanning configurations, including fully three-dimensional (3-D) data acquisition protocols. Also, it provides a stable numerical implementation, allowing us to avoid the crosstalk between the attenuation map and the source function. A computationally efficient algorithm is implemented by using the QR and Cholesky decompositions. Application of the algorithm to computer-generated and experimentally measured SPECT data is considered.
Article
Full-text available
Many functionals have been proposed for validation of partitions of object data produced by the fuzzy c-means (FCM) clustering algorithm. We examine the role a subtle but important parameter, the weighting exponent m of the FCM model, plays in determining the validity of FCM partitions. The functionals considered are the partition coefficient and entropy indexes of Bezdek, the Xie-Beni (1991) and extended Xie-Beni indexes, and the Fukuyama-Sugeno index (1989). Limit analysis indicates, and numerical experiments confirm, that the Fukuyama-Sugeno index is sensitive to both high and low values of m and may be unreliable because of this. Of the indexes tested, the Xie-Beni index provided the best response over a wide range of choices for the number of clusters (2-10) and for m from 1.01 to 7. Finally, our calculations suggest that the best choice for m is probably in the interval [1.5, 2.5], whose mean and midpoint, m = 2, have often been the preferred choice for many users of FCM.
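For reference, the Xie-Beni index that performed best here is the ratio of fuzzy within-cluster compactness to the minimum separation between centroids (lower is better). A minimal NumPy sketch, reusing the u (memberships) and c (centroids) conventions of the FCM sketch given earlier:

```python
import numpy as np

def xie_beni(x, u, c, m=2.0):
    """Xie-Beni validity index: fuzzy compactness divided by
    (n * minimum squared centroid separation); lower is better."""
    d2 = np.linalg.norm(x[:, None, :] - c[None, :, :], axis=2) ** 2
    compactness = np.sum((u ** m) * d2)
    cc = np.sum((c[:, None, :] - c[None, :, :]) ** 2, axis=2)
    separation = np.min(cc[~np.eye(len(c), dtype=bool)])
    return compactness / (x.shape[0] * separation)
```

Scanning the number of clusters (or m) and keeping the partition with the smallest index value is the usual way such a functional is put to work.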
Article
Full-text available
Attenuation correction is essential to PET imaging but often requires impractical acquisition times. Segmentation of short, noisier transmission scans has been proposed as a solution. We report that a 3D morphological tool, the watershed algorithm, is well adapted for segmenting even two-minute PET transmission images. The technique is non-iterative, fast and fully 3D. It inherently ensures class continuity and eliminates outliers. Pre-filtering the data induced smoother class edges, showing that a multi-resolution approach could be used to deal with the partial volume effect and excessive noise in the data. The algorithm was tested on two-minute scans of a torso phantom and on a human study.
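A marker-based watershed of this kind can be sketched in a few lines. The following 2-D, per-slice illustration (the paper's implementation is fully 3-D; the μ thresholds used to seed the basins are assumed example values) floods the gradient magnitude of a median-filtered transmission slice:

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def segment_transmission_slice(mu_slice, air_mu=0.02, tissue_mu=0.08):
    """Marker-based watershed on the gradient magnitude of a median-filtered
    transmission slice. Voxels well below/above the (illustrative) mu
    thresholds seed the air/lung and soft-tissue catchment basins."""
    smoothed = ndimage.median_filter(mu_slice, size=3)
    gy, gx = np.gradient(smoothed)
    gradient = np.hypot(gy, gx)                  # edges get high "altitude"
    markers = np.zeros(mu_slice.shape, dtype=int)
    markers[smoothed < air_mu] = 1               # air / lung seeds
    markers[smoothed > tissue_mu] = 2            # soft-tissue seeds
    return watershed(gradient, markers)          # flood basins from the seeds
```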
Article
Full-text available
As a preliminary step toward performing respiration compensated attenuation correction of respiratory-gated cardiac PET data, we acquired and automatically segmented respiratory-gated transmission data for a dog breathing on its own under gas anesthesia. Transmission data were acquired for 20 min on a CTI/Siemens ECAT EXACT HR (47-slice) scanner. Two respiratory gates were obtained using data from a pneumatic bellows placed around the dog's chest. For each respiratory gate, torso and lung surfaces were segmented automatically using a differential 3-D image edge detection algorithm. Three-dimensional visualizations showed that during inspiration the heart translated about 4 mm transversely and the diaphragm translated about 9 mm inferiorly. The observed respiratory motion of the canine heart and diaphragm suggests that respiration compensated attenuation correction may be necessary for accurate quantitation of high-resolution respiratory-gated human cardiac PET data. Our automated image segmentation results suggest that respiration compensated segmented attenuation correction may be possible using respiratory-gated transmission data obtained with as little as 3 min of acquisition time per gate
Article
A unified presentation of classical clustering algorithms is proposed both for the hard and fuzzy pattern classification problems. Based on two types of objective functions, a new method is presented and compared with the procedures of Dunn and Ruspini. In order to determine the best, or more natural number of fuzzy clusters, two coefficients that measure the “degree of non-fuzziness” of the partition are proposed. Numerous computational results are shown.
Article
Two fuzzy versions of the k-means optimal, least squared error partitioning problem are formulated for finite subsets X of a general inner product space. In both cases, the extremizing solutions are shown to be fixed points of a certain operator T on the class of fuzzy, k-partitions of X, and simple iteration of T provides an algorithm which has the descent property relative to the least squared error criterion function. In the first case, the range of T consists largely of ordinary (i.e. non-fuzzy) partitions of X and the associated iteration scheme is essentially the well known ISODATA process of Ball and Hall. However, in the second case, the range of T consists mainly of fuzzy partitions and the associated algorithm is new; when X consists of k compact well separated (CWS) clusters, Xi, this algorithm generates a limiting partition with membership functions which closely approximate the characteristic functions of the clusters Xi. However, when X is not the union of k CWS clusters, the limiting partition is truly fuzzy in the sense that the values of its component membership functions differ substantially from 0 or 1 over certain regions of X. Thus, unlike ISODATA, the “fuzzy” algorithm signals the presence or absence of CWS clusters in X. Furthermore, the fuzzy algorithm seems significantly less prone to the “cluster-splitting” tendency of ISODATA and may also be less easily diverted to uninteresting locally optimal partitions. Finally, for data sets X consisting of dense CWS clusters embedded in a diffuse background of strays, the structure of X is accurately reflected in the limiting partition generated by the fuzzy algorithm. Mathematical arguments and numerical results are offered in support of the foregoing assertions.
Article
The segmentation of medical images is one of the most important steps in the analysis and quantification of imaging data. However, partial volume artefacts make accurate tissue boundary definition difficult, particularly for images with lower resolution commonly used in nuclear medicine. In single-photon emission tomography (SPET) neuroreceptor studies, areas of specific binding are usually delineated by manually drawing regions of interest (ROIs), a time-consuming and subjective process. This paper applies the technique of fuzzy c-means clustering (FCM) to automatically segment dynamic neuroreceptor SPET images. Fuzzy clustering was tested using a realistic, computer-generated, dynamic SPET phantom derived from segmenting an MR image of an anthropomorphic brain phantom. Also, the utility of applying FCM to real clinical data was assessed by comparison against conventional ROI analysis of iodine-123 iodobenzamide (IBZM) binding to dopamine D2/D3 receptors in the brains of humans. In addition, a further test of the methodology was assessed by applying FCM segmentation to [123I]IDAM images (5-iodo-2-[[2-2-[(dimethylamino)methyl]phenyl]thio] benzyl alcohol) of serotonin transporters in non-human primates. In the simulated dynamic SPET phantom, over a wide range of counts and ratios of specific binding to background, FCM correlated very strongly with the true counts (correlation coefficient r² > 0.99, P < 0.0001). Similarly, FCM gave segmentation of the [123I]IBZM data comparable with manual ROI analysis, with the binding ratios derived from both methods significantly correlated (r² = 0.83, P < 0.0001). Fuzzy clustering is a powerful tool for the automatic, unsupervised segmentation of dynamic neuroreceptor SPET images. Where other automated techniques fail completely, and manual ROI definition would be highly subjective, FCM is capable of segmenting noisy images in a robust and repeatable manner.
Article
Whole-body fluorine-18 fluoro-2-d-deoxyglucose positron emission tomography (FDG-PET) is widely used in clinical centres for diagnosis, staging and therapy monitoring in oncology. Images are usually not corrected for attenuation since filtered backprojection (FBP) reconstruction methods require a 10 to 15-min transmission scan per bed position on most current PET devices equipped with germanium-68 rod transmission sources. Such an acquisition protocol would increase the total scanning time beyond acceptable limits. The aim of this work is to validate the use of iterative reconstruction methods, on both transmission and emission scans, in order to obtain a fully corrected whole-body study within a reasonable scanning time of 60 min. Five-minute emission and 3-min transmission scans are acquired at each of the seven bed positions. The transmission data are reconstructed with OSEM (ordered subsets expectation maximization) and the last iteration is reprojected to obtain consistent attenuation correction factors (ACFs). The emission image is then also reconstructed with OSEM, using the emission scan corrected for normalization, scatter and decay together with the set of consistent ACFs as inputs. The total processing time is about 35 min, which is acceptable in a clinical environment. The image quality, readability and accuracy of uptake quantification were assessed in 38 patients scanned for various malignancies. The sensitivity for tumour detection was the same for the non-attenuation-corrected (NAC-FBP) and the attenuation-corrected (AC-OSEM) images. The AC-OSEM images were less noisy and easier to interpret. The interobserver reproducibility was significantly increased when compared with non-corrected images (96.1% vs 81.1%, P<0.01). Standardized uptake values (SUVs) measured on images reconstructed with OSEM (AC-OSEM) and filtered backprojection (AC-FBP) were similar in all body regions except in the pelvic area, where SUVs were higher on AC-FBP images (mean increase 7.74%, P<0.01). Our results show that, when statistical reconstruction is applied to both transmission and emission data, high quality quantitative whole-body images are obtained within a reasonable scanning (60 min) and processing time, making it applicable in clinical practice.
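The multiplicative update behind OSEM is short enough to show. The sketch below is a generic ordered-subsets EM step on a toy dense system matrix; the subset scheme, matrix, and data are illustrative assumptions, not the scanner implementation used in the study.

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iter=5):
    """Ordered-subsets EM for y ~ Poisson(A @ x): the MLEM update restricted
    to one interleaved subset of the projection rows per sub-iteration."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for s in range(n_subsets):
            As, ys = A[s::n_subsets], y[s::n_subsets]       # one subset of LORs
            ratio = ys / np.fmax(As @ x, 1e-12)             # measured / predicted
            x *= (As.T @ ratio) / np.fmax(As.sum(axis=0), 1e-12)
    return x                                                # stays non-negative

rng = np.random.default_rng(0)
A = rng.random((60, 30))                    # toy system matrix (assumed)
x_true = rng.random(30) * 100
y = rng.poisson(A @ x_true).astype(float)   # Poisson projection data
x_hat = osem(A, y)
```

The same update reconstructs the transmission image; reprojecting that image then yields the consistent ACFs the abstract describes.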
Article
A local threshold technique for segmented attenuation correction has been developed for positron emission tomography using short (2-3 minute) post-injection transmission scans. The technique implements an optimal threshold method on localized histograms to obtain a pseudo-anatomic segmentation of the transmission images. Theoretical values of attenuation coefficients are assigned to the corresponding anatomic regions. Emission images are reconstructed using attenuation correction factors computed by forward-projecting the segmented transmission images. Phantoms and clinical cardiac images are studied using this technique. The technique corrects emission images with accuracy similar to that of the standard pre-injection method, reduces noise in the corrected emission images, and offers the potential for increased patient throughput by enabling shorter, post-injection data acquisition.
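A minimal sketch of the idea (an optimal histogram threshold applied locally, with theoretical attenuation coefficients assigned to the resulting regions) is given below. The Otsu-style criterion, window handling, and mu values are our assumptions, not necessarily the authors' exact implementation.

```python
import numpy as np

def otsu_threshold(vals, nbins=128):
    """Optimal histogram threshold: maximizes between-class variance."""
    hist, edges = np.histogram(vals, bins=nbins)
    p = hist / max(hist.sum(), 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                                  # class-0 probability mass
    cum1 = np.cumsum(p * centers)                      # cumulative first moment
    m0 = cum1 / np.fmax(w0, 1e-12)
    m1 = (cum1[-1] - cum1) / np.fmax(1 - w0, 1e-12)
    return centers[np.argmax(w0 * (1 - w0) * (m0 - m1) ** 2)]

MU = {"air": 0.0, "lung": 0.030, "soft": 0.096}        # theoretical values at 511 keV (assumed)

def segment_block(block):
    """Pseudo-anatomic mu assignment for one local window of a noisy TR image."""
    t_air = otsu_threshold(block)                      # air vs. body
    body = block[block > t_air]
    t_lung = otsu_threshold(body) if body.size > 16 else np.inf  # lung vs. soft tissue
    out = np.full(block.shape, MU["air"])
    out[(block > t_air) & (block <= t_lung)] = MU["lung"]
    out[block > t_lung] = MU["soft"]
    return out
```

Applied window by window over the transmission slice and then forward projected, the resulting mu-map yields the low-noise correction factors described above.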
Article
Many image segmentation techniques are available in the literature. Some of these techniques use only the gray level histogram, some use spatial details, while others use fuzzy set theoretic approaches. Most of these techniques are not suitable for noisy environments. Some work has been done using the Markov Random Field (MRF) model, which is robust to noise but computationally involved. Neural network architectures, which help to obtain the output in real time because of their parallel processing ability, have also been used for segmentation, and they work well even when the noise level is very high. The literature on color image segmentation is not as rich as that on gray-tone images. This paper critically reviews and summarizes some of these techniques. Attempts have been made to cover both fuzzy and non-fuzzy techniques, including color image segmentation and neural network based approaches. Adequate attention is paid to the segmentation of range images and magnetic resonance images. It also addresses the issue of quantitative evaluation of segmentation results.
Article
The authors present a fuzzy validity criterion based on a validity function which identifies compact and separate fuzzy c-partitions without assumptions as to the number of substructures inherent in the data. This function depends on the data set, the geometric distance measure, the distance between cluster centroids and, more importantly, on the fuzzy partition generated by any fuzzy algorithm used. The function is mathematically justified via its relationship to a well-defined hard clustering validity function, the separation index, for which the condition of uniqueness has already been established. The performance of this validity function compares favorably to that of several others. The application of this validity function to color image segmentation in a computer color vision system for recognition of IC wafer defects, which are otherwise impossible to detect using gray-scale image processing, is discussed.
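The criterion described here is the Xie-Beni index, a compactness-to-separation ratio S = sum_i sum_k u_ik^2 ||x_k - v_i||^2 / (n * min_{i!=j} ||v_i - v_j||^2): small S indicates a compact, well-separated fuzzy c-partition. A direct NumPy transcription (names are ours):

```python
import numpy as np

def xie_beni(X, V, U):
    """Validity index: fuzzy compactness over n times the minimum centroid separation."""
    d2 = ((X[None] - V[:, None]) ** 2).sum(axis=2)    # (c, n) squared sample-centroid dists
    compactness = (U ** 2 * d2).sum()
    sep = ((V[None] - V[:, None]) ** 2).sum(axis=2)   # pairwise squared centroid dists
    np.fill_diagonal(sep, np.inf)                     # ignore i == j
    return compactness / (len(X) * sep.min())
```

In practice one runs the clustering for a range of c and keeps the partition minimizing the index, which is how a validity function selects the number of substructures without prior assumptions.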
Article
Elimination of errors due to poor attenuation correction is an essential part of any quantitative single photon emission tomography (SPET) technique. Attenuation coefficients (μTc) for use in attenuation correction of SPET data were determined using technetium-99m and cobalt-57 flood sources and using topographical information obtained from computed tomography (CT) scans and magnetic resonance (MR) images. In patients with carcinoma of the bronchus, the mean attenuation coefficient for 99mTc was 0.096 cm-1 when determined across a transverse section of the thorax at the level of the tumour by means of a 57Co flood source (13 patients) and 0.093 and 0.074 cm-1 as determined from CT scans for points in the centre of the tumour and contralateral normal lung, respectively (21 patients). In 18 patients with breast tumours, the mean attenuation coefficient for 99mTc was 0.110 and 0.076 cm-1 when determined from MRI cross-sections for points in the centre of the tumour and normal contralateral lung, respectively. This indicates significant overcorrection for attenuation when the conventional value of 0.12 cm-1 is used. A value in the range 0.08-0.09 cm-1 would be more appropriate for SPET studies of the thorax. An alternative approach to quantitative region of interest (ROI) analysis is to perform attenuation correction appropriate to the centre of each ROI (using topographical information derived from CT or MRI) on non-attenuation-corrected reconstructions.
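The over-correction the authors report is easy to quantify: for a source at depth d, the correction factor scales as exp(mu * d), so the bias from using mu = 0.12 cm-1 where 0.09 cm-1 applies grows exponentially with depth. A small worked example (the 10 cm depth is an assumed figure):

```python
import numpy as np

d = 10.0                                       # source depth in cm (assumed)
bias = np.exp(0.12 * d) / np.exp(0.09 * d)     # ratio of the two correction factors
print(f"over-correction at {d:.0f} cm depth: {bias:.2f}x")   # ~1.35, i.e. ~35% too high
```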
Article
The measured attenuation correction technique is widely used in cardiac positron tomographic studies. However, the success of this technique is limited by the insufficient counting statistics achievable in practical transmission scan times and by scattered radiation in the transmission measurement, which leads to an underestimation of the attenuation coefficients. In this work, a segmented attenuation correction technique has been developed that uses artificial neural networks. The technique has been validated in phantoms and verified in human studies. The results indicate that attenuation coefficients measured in the segmented transmission image are accurate and reproducible. Activity concentrations measured in the reconstructed emission image can also be recovered accurately using this new technique. The accuracy of the technique is subject independent and insensitive to scatter contamination in the transmission data. The technique has the potential to reduce the transmission scan time, and satisfactory results are obtained if the transmission data contain about 400,000 true counts per plane. It can accurately predict the value of any attenuation coefficient in the range from air to water in a transmission image, with or without scatter correction.
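A heavily simplified sketch of the idea (regressing the true attenuation coefficient from noisy transmission values with a small neural network) is shown below using scikit-learn's MLPRegressor. The training data, noise model, and network size are our assumptions and bear no relation to the authors' architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
mu_true = rng.uniform(0.0, 0.096, 5000)          # air-to-water range, cm^-1
noisy = mu_true + rng.normal(0.0, 0.01, 5000)    # low-count transmission noise (assumed)

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(noisy[:, None], mu_true)                 # learn to map noisy values back to clean mu
print(net.predict([[0.05]]))                     # estimate near 0.05 cm^-1
```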
Article
In this paper a clustering technique is proposed for attenuation correction (AC) in positron emission tomography (PET). The method is unsupervised and adaptive with respect to counting statistics in the transmission (TR) images. The technique allows the classification of pre- or post-injection TR images into main tissue components in terms of attenuation coefficients. The classified TR images are then forward projected to generate new TR sinograms to be used for AC in the reconstruction of the corresponding emission (EM) data. The technique has been tested on phantoms and clinical data of brain, heart and whole-body PET studies. The method allows: (a) reduction of noise propagation from TR into EM images, (b) reduction of TR scanning to a few minutes (3 min) with maintenance of the quantitative accuracy (within 6%) of longer acquisition scans (15-20 min), (c) reduction of the radiation dose to the patient, (d) performance of quantitative whole-body studies.
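The last two steps of this pipeline (assigning theoretical coefficients to the classified TR image and forward projecting it to obtain correction factors) can be sketched with a toy rotate-and-sum projector. The labels, coefficients, pixel size, and geometry below are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import rotate

def radon(img, angles_deg):
    """Toy parallel-beam forward projector: rotate the image and sum columns."""
    return np.stack([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

MU = np.array([0.0, 0.030, 0.096])          # air / lung / soft tissue, cm^-1 (assumed)
labels = np.zeros((64, 64), dtype=int)      # classified TR image from the clustering step
labels[16:48, 16:48] = 2                    # toy "body"
labels[24:40, 20:30] = 1                    # toy "lung"

pixel_cm = 0.4                              # assumed pixel size
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
acf_sino = np.exp(radon(MU[labels], angles) * pixel_cm)   # ACF sinogram for the EM reconstruction
```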
Article
Standardised Uptake Values (SUVs) are widely used in positron emission tomography (PET) as a semi-quantitative index of fluorine-18 labelled fluorodeoxyglucose uptake. The objective of this study was to investigate any bias introduced in the calculation of SUVs as a result of employing ordered-subsets expectation maximisation (OSEM) image reconstruction and segmented attenuation correction (SAC). Variable emission and transmission time durations were investigated. Both a phantom and a clinical evaluation of the bias were carried out. The software implemented in the GE Advance PET scanner was used. Phantom studies simulating tumour imaging conditions were performed. Since a variable count rate may influence the results obtained using OSEM, similar acquisitions were performed at total count rates of 34 kcps and 12 kcps. Clinical data consisted of 100 patient studies. Emission datasets of 5 and 15 min duration were combined with 15-, 3-, 2- and 1-min transmission datasets for the reconstruction of both phantom and patient studies. Two SUVs were estimated using the average (SUVavg) and the maximum (SUVmax) count density from regions of interest placed well inside structures of interest. The percentage bias of these SUVs compared with the values obtained using a reference image was calculated. The reference image was considered to be the one produced by filtered back-projection (FBP) image reconstruction with measured attenuation correction using the 15-min emission and transmission datasets for each phantom and patient study. A bias of 5%-20% was found for the SUVavg and SUVmax in the case of FBP with SAC using variable transmission times. In the case of OSEM with SAC, the bias increased to 10%-30%. An overall increase of 5%-10% was observed with the use of SUVmax. The 5-min emission dataset led to an increase in the bias of 25%-100%, with the larger increase recorded for the SUVmax. The results suggest that OSEM and SAC with 3 and 2 min transmission may be reliably used to reduce the overall data acquisition time without compromising the accuracy of SUVs.
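For reference, the index under study is SUV = measured activity concentration / (injected dose / body weight); SUVavg and SUVmax differ only in the ROI statistic fed in. A minimal sketch with assumed numbers (taking tissue density as roughly 1 g/ml, so kBq/ml over MBq/kg is dimensionless):

```python
import numpy as np

def suv(conc_kbq_per_ml, injected_mbq, weight_kg):
    """SUV = tissue concentration / (injected dose / body weight).
    1 MBq/kg = 1 kBq/g ~ 1 kBq/ml for unit-density tissue, so the units cancel."""
    return conc_kbq_per_ml / (injected_mbq / weight_kg)

roi = np.array([4.1, 5.3, 6.8, 5.0])      # ROI voxel values in kBq/ml (assumed)
print("SUVavg:", suv(roi.mean(), injected_mbq=370, weight_kg=70))
print("SUVmax:", suv(roi.max(),  injected_mbq=370, weight_kg=70))
```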
Article
PET offers the possibility of quantitative measurements of tracer concentration in vivo. However, several issues must be considered in order to fully realise this potential. Whilst corrections for a number of background and physical phenomena need to be performed, the two most significant effects are photon attenuation in the patient and the contribution to the images of events arising from photons scattered in the patient and the gantry. The non-homogeneous distribution of attenuation within the thoracic cavity complicates the interpretation of PET images and precludes the application of simple scatter correction methods developed for homogeneous media. The development of more sophisticated techniques for the quantification of PET images is still required. Recent progress in 3D PET instrumentation and image reconstruction has created a need for a concise review of the relevance and accuracy of scatter correction strategies. Improved quantification of PET images remains an area of considerable research interest, and several research groups are concentrating their efforts on the development of more accurate scatter modelling and correction algorithms.
Conference Paper
Segmentation-based attenuation correction is now a widely accepted technique to reduce the noise contribution of measured attenuation correction. In this paper, we present a new method for segmenting transmission images in positron emission tomography. This reduces the noise in the correction maps while still correcting for the differing attenuation coefficients of specific tissues. Based on the Fuzzy C-Means (FCM) algorithm, the method segments the PET transmission images into a given number of clusters to extract specific areas of differing attenuation such as air, the lungs and soft tissue, preceded by a median filtering procedure. The reconstructed transmission image voxels are therefore segmented into populations of uniform attenuation based on the human anatomy. The clustering procedure starts with an over-specified number of clusters, followed by a merging process to group clusters with similar properties and to remove some undesired substructures using anatomical knowledge.
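The merging step (collapsing redundant clusters whose centroids are close in attenuation, before the anatomical cleanup) can be sketched greedily; the similarity tolerance and the data below are our assumptions, not the paper's criterion.

```python
import numpy as np

def merge_redundant(centroids, U, tol=0.005):
    """Merge clusters whose centroid attenuation values differ by less than
    `tol` cm^-1, folding their membership rows together."""
    order = np.argsort(centroids)
    cents, rows = [centroids[order[0]]], [U[order[0]].copy()]
    for i in order[1:]:
        if centroids[i] - cents[-1] < tol:            # redundant cluster: absorb it
            rows[-1] += U[i]
            cents[-1] = 0.5 * (cents[-1] + centroids[i])
        else:                                          # distinct tissue class: keep it
            cents.append(centroids[i])
            rows.append(U[i].copy())
    return np.array(cents), np.vstack(rows)

# an over-specified c = 5 run collapsing to three tissue classes
cents = np.array([0.001, 0.004, 0.031, 0.094, 0.097])
U = np.random.default_rng(0).random((5, 10))
merged_cents, merged_U = merge_redundant(cents, U)    # -> ~0.0025, 0.031, ~0.0955
```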