Gamal Fahmy

University of North Carolina at Greensboro, Greensboro, North Carolina, United States

Publications (35) · 12.68 Total Impact

  • ABSTRACT: Recent disasters have emphasized the significance of automated dental identification systems. Statistics show that 20% of the 9/11 victims identified in the first year were identified manually using dental records. Moreover, 75% of tsunami victims in Thailand were identified using dental records, compared to 0.5% identified using DNA. This paper addresses the first important problem of an Automated Dental Identification System (ADIS), which matches image features extracted from multiple dental radiographic records. A dental radiograph record of an individual usually consists of several radiographic films. Accurately segmenting these films from the records that contain them is an essential step toward extracting dental features and achieving a high level of automated postmortem identification. In this paper, we propose an automated approach to segmenting films from dental records. Challenges include variability in the background of the dental records, including its gray intensity and texture, and variation in the number of films and their dimensions. Our three-stage approach is based on concepts of thresholding, connectivity, and mathematical morphology. We show by experimental evidence that our approach achieves 92% accuracy, compared to 74% for previous work in the literature.
    Biometrics Symposium, 2008. BSYM '08; 10/2008
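The three-stage pipeline named in the abstract (thresholding, connectivity, mathematical morphology) can be sketched as below. The global mean threshold, 4-connectivity rule, and minimum-area cleanup are illustrative stand-ins, not the paper's actual parameters:

```python
import numpy as np

def segment_films(record, thresh=None):
    """Return bounding boxes (r0, c0, r1, c1) of candidate films."""
    if thresh is None:
        thresh = record.mean()              # stage 1: global threshold (illustrative)
    fg = record > thresh

    # stage 2: label 4-connected foreground components via flood fill
    labels = np.zeros(record.shape, dtype=int)
    n_rows, n_cols = record.shape
    current = 0
    for seed in zip(*np.nonzero(fg)):
        if labels[seed]:
            continue
        current += 1
        stack = [seed]
        while stack:
            r, c = stack.pop()
            if labels[r, c] or not fg[r, c]:
                continue
            labels[r, c] = current
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n_rows and 0 <= cc < n_cols:
                    stack.append((rr, cc))

    # stage 3: discard tiny components (a stand-in for morphological cleanup)
    boxes = []
    for k in range(1, current + 1):
        rs, cs = np.nonzero(labels == k)
        if rs.size >= 4:
            boxes.append((rs.min(), cs.min(), rs.max(), cs.max()))
    return boxes
```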
  • A.M. El-Attar, G. Fahmy
    ABSTRACT: Satellite and aerial imaging systems are located at high altitudes and are thus more vulnerable to soft errors than similar systems operating at sea level. This paper studies the effect of transient faults on microprocessor-based imaging systems and the ability of different watchdog timer systems to recover the system from failure. A new, improved watchdog timer design is introduced that solves the problems of both the standard and windowed watchdog timers. The watchdog timers are tested by injecting a fault while a processor is reading an image from RAM and sending it to the VGA RAM for display. This method is implemented on FPGA, and visually demonstrates the existence of fast watchdog resets, which cannot be detected by standard watchdog timers, and faulty resets, which occur undetected within the safe window of the windowed watchdog timers.
    Signal Processing and Information Technology, 2007 IEEE International Symposium on; 01/2008
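The fault classes at issue can be seen in a toy model of a windowed watchdog: a kick is legal only inside a window [open_at, timeout] after the previous one, so it catches both too-early kicks (fast resets, invisible to a standard watchdog) and missed kicks. The class and its timings are illustrative, not the paper's design:

```python
class WindowedWatchdog:
    """Toy windowed watchdog operating on logical timestamps."""

    def __init__(self, open_at, timeout):
        self.open_at = open_at      # window opens this long after the last kick
        self.timeout = timeout      # window closes (classic watchdog deadline)
        self.last_kick = 0.0
        self.resets = []            # (kind, time) log of triggered resets

    def kick(self, now):
        dt = now - self.last_kick
        if dt < self.open_at:
            # fast reset: a standard watchdog would silently accept this kick
            self.resets.append(("early", now))
        elif dt > self.timeout:
            # classic timeout: both watchdog types catch this
            self.resets.append(("late", now))
        self.last_kick = now
```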
  • G. Fahmy
    ABSTRACT: Super-resolution image construction has gained increased importance recently, due to the demand for resolution enhancement in many imaging applications, as it is far more efficient to capture images in a low-resolution environment. B-spline functions have long been utilized for signal representation, but only recently have they been used for signal interpolation and zooming; they are flexible and provide the best cost/quality trade-off. In this paper we present a super-resolution image construction algorithm in which the high frequencies and edges of the constructed high-resolution image are based solely on the B-spline signal representation. A mathematical explanation and derivation of the proposed B-spline prediction is given. Several texture images from the VisTex database have been used to test the proposed technique. Extensive simulation results, carried out on different classes of images, demonstrate the usefulness of the proposed approach.
    Signal Processing and Information Technology, 2007 IEEE International Symposium on; 01/2008
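As a sketch of how a cubic B-spline kernel drives zooming, the fragment below resamples a 1-D signal on a finer grid. It treats the samples directly as spline coefficients (true B-spline interpolation would prefilter them first), so this is the approximating variant, kept deliberately short; it is not the paper's prediction scheme:

```python
def bspline3(x):
    # cubic B-spline basis function (support [-2, 2])
    ax = abs(x)
    if ax < 1:
        return 2 / 3 - ax * ax + ax ** 3 / 2
    if ax < 2:
        return (2 - ax) ** 3 / 6
    return 0.0

def zoom1d(samples, factor):
    # evaluate the spline sum_k samples[k] * B3(t - k) on a grid
    # `factor` times finer; samples act as coefficients directly
    n = len(samples)
    out = []
    for j in range(n * factor):
        t = j / factor
        lo, hi = max(0, int(t) - 1), min(n, int(t) + 3)
        out.append(sum(samples[k] * bspline3(t - k) for k in range(lo, hi)))
    return out
```

Away from the boundaries the basis sums to 1 (partition of unity), so constant signals are preserved exactly.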
  • A.M. El-Attar, G. Fahmy
    ABSTRACT: Both standard and windowed watchdog timers were designed to detect flow faults and ensure the safe operation of the systems they supervise. This paper studies the effect of transient failures on microprocessors and uses two methods to compare the fault coverage of both watchdog timers. The first method injects a fault while a processor is reading an image from RAM and sending it to the VGA RAM for display. This method is implemented on FPGA, and visually demonstrates the existence of fast watchdog resets, which cannot be detected by standard watchdog timers, and faulty resets, which occur undetected within the safe window of the windowed watchdog timers. The second method is a simulation in which the fault coverage for each watchdog timer system is calculated; this simulation takes into consideration many factors that could affect the outcome of the comparison.
    Signal Processing and Communications, 2007. ICSPC 2007. IEEE International Conference on; 12/2007
  • G. Fahmy
    ABSTRACT: The iris has become an important biometric over the last decade, due to its uniqueness and richness of features. In this paper, a novel super-resolution and image registration technique for visual (non-infrared) iris images is presented. In the proposed technique, a 3-second, 90-frame full-face visual video is captured with a digital camera located 3 feet away from each subject. Iris images are segmented from the full-face images. A cross-correlation model is applied for the registration/alignment of full grayscale iris images. A high-resolution iris image, 4 times higher in size and resolution, is constructed from every 9 low-resolution images. This construction is based on an autoregressive signature model between consecutive low-resolution images, used to fill in the sub-pixels of the constructed high-resolution image. The process is then iterated until an iris image with 16 times higher resolution is constructed. Illustrative images are shown that demonstrate the effectiveness of the proposed technique.
    Signal Processing and Its Applications, 2007. ISSPA 2007. 9th International Symposium on; 03/2007
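A minimal version of the cross-correlation alignment step might look like this: an exhaustive integer-shift search maximizing normalized correlation. The sub-pixel filling and autoregressive stages of the paper are not reproduced, and the search range is an illustrative choice:

```python
import numpy as np

def register_shift(ref, img, max_shift=3):
    """Return the (dy, dx) that best aligns `img` onto `ref`."""
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            # normalized cross-correlation at this offset
            score = (ref * shifted).sum() / (
                np.linalg.norm(ref) * np.linalg.norm(shifted) + 1e-12)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```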
  • Source
    ABSTRACT: Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG-7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, no research has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human-visual-system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented.
    IEEE Transactions on Image Processing 07/2006; 15(6):1389-96. · 3.20 Impact Factor
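The role of phase alignment can be illustrated with a toy 1-D phase-congruency score (an illustration of the general idea only, not the paper's HVS model or its wavelet-domain measure): positions where the Fourier components' phases agree, such as impulses and edges, score near 1, while flat regions score low:

```python
import numpy as np

def phase_coherence(x):
    """Toy phase-congruency score in [0, 1] per sample position."""
    n = len(x)
    X = np.fft.rfft(np.asarray(x, float) - np.mean(x))   # drop the DC term
    total = np.abs(X).sum() + 1e-12
    k = np.arange(len(X))
    pc = np.empty(n)
    for t in range(n):
        # advance each component's phase to position t and measure alignment
        pc[t] = np.abs((X * np.exp(2j * np.pi * k * t / n)).sum()) / total
    return pc
```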
  • Source
    ABSTRACT: Automating the process of postmortem identification of individuals using dental records is receiving increased attention. Teeth segmentation from dental radiographic films is an essential step for achieving highly automated postmortem identification. In this paper, we offer a mathematical morphology approach to the problem of teeth segmentation. We also propose a grayscale contrast stretching transformation to improve the performance of teeth segmentation. We compare and contrast our approach with other approaches proposed in the literature on both theoretical and empirical grounds. The results show that, in addition to its capability of handling bitewing and periapical dental radiographic views, our approach exhibits the lowest failure rate among all approaches studied.
    IEEE Transactions on Information Forensics and Security 07/2006; · 1.90 Impact Factor
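A percentile-based version of a grayscale contrast stretching transformation might look like this; the percentile cutoffs are illustrative, and the paper's transformation may differ:

```python
import numpy as np

def contrast_stretch(img, lo=2, hi=98):
    """Linearly map the [lo, hi] percentile range onto [0, 255]."""
    a, b = np.percentile(img, [lo, hi])
    out = (img.astype(float) - a) * 255.0 / max(b - a, 1e-12)
    return np.clip(out, 0, 255).astype(np.uint8)   # clip outliers, quantize
```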
  • IEEE Trans. on Image Processing. 01/2006; 15(2):17.
  • ABSTRACT: Automating the process of postmortem (PM) identification of individuals using dental records is receiving increased attention. In developing a research prototype of an Automated Dental Identification System (ADIS), research teams from multiple institutions collaborated with forensic experts from the US Federal Bureau of Investigation Criminal Justice Information Services division to identify the functional requirements of ADIS. A multitude of digital image processing and pattern recognition techniques were developed to meet the requirements of the constituent components of ADIS. In this demo, we present a web-based environment called webADIS that integrates the ADIS components and provides a unified web-based interface. webADIS supports configuring the system into multiple possible realizations for some components, as well as alternative identification strategies.
    Proceedings of the 7th Annual International Conference on Digital Government Research, DG.O 2006, San Diego, California, USA, May 21-24, 2006; 01/2006
  • Source
    ABSTRACT: In this paper, we measure the effect of the lighting direction in facial images on the performance of two well-known face recognition algorithms, an appearance-based method and a facial-feature-based method. We collect hundreds to thousands of facial images of subjects with a fixed pose under different lighting conditions, through a unique facial acquisition laboratory designed specifically for this purpose. We then present a methodology for automatically detecting the lighting direction of different face images based on statistics derived from the image. We also detect whether there are glare regions in some lighting directions. Finally, based on our experiments with the acquired data, we determine the most reliable lighting direction, the one that leads to good-quality, high-performance facial images for both techniques.
    01/2006;
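The simplest instance of a statistic-based lighting-direction guess is a left/right brightness comparison; the paper derives its own image statistics, so the function and threshold below are purely illustrative:

```python
import numpy as np

def lighting_direction(face, tol=5.0):
    """Guess 'left', 'right', or 'frontal' from half-face mean brightness."""
    h, w = face.shape
    left = face[:, : w // 2].mean()
    right = face[:, w // 2 :].mean()
    if abs(left - right) < tol:    # `tol` is an arbitrary illustrative cutoff
        return "frontal"
    return "left" if left > right else "right"
```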
  • Source
    IEEE Transactions on Information Forensics and Security 01/2006; 1:178-189. · 1.90 Impact Factor
  • Source
    ABSTRACT: We describe and analyze the performance of a non-ideal iris recognition system. The system is designed to process non-ideal iris images in two steps: (i) estimation of the gaze direction and (ii) processing and encoding of the rotated iris image. We use two objective functions to estimate the gaze direction, the Hamming distance and Daugman's integro-differential operator, and determine the estimated angle by picking the value that optimizes the selected objective function. After the angle is estimated, the off-angle iris image undergoes geometric transformations involving the estimated angle and is further processed as if it were a frontal-view image. The encoding technique developed in this work is based on applying global independent component analysis (ICA) to masked iris images. We use two datasets, the CASIA dataset and a special dataset of off-angle iris images collected at WVU, to verify the performance of the encoding technique and the angle estimator, respectively. A series of receiver operating characteristics (ROCs) demonstrates various effects on the performance of the non-ideal iris-based recognition system implementing the global ICA encoding.
    Image Processing, 2005. ICIP 2005. IEEE International Conference on; 10/2005
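The Hamming-distance objective in step (i) is the standard masked iris-code score: the fraction of disagreeing bits over the jointly valid bits. A minimal sketch, with bit lists instead of packed codes and illustrative names:

```python
def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Masked Hamming distance in [0, 1]; lower means more similar."""
    valid = [ma and mb for ma, mb in zip(mask_a, mask_b)]
    if not any(valid):
        return 1.0          # nothing comparable: worst possible score
    diff = sum(1 for a, b, v in zip(code_a, code_b, valid) if v and a != b)
    return diff / sum(valid)
```

A gaze (or rotation) estimator of this kind would evaluate the score over a range of candidate angles and keep the minimizer.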
  • Source
    ABSTRACT: In this paper, we describe and analyze the performance of two iris-encoding techniques. The first is based on Principal Component Analysis (PCA); the second applies Principal Component Analysis followed by Independent Component Analysis (ICA). Both techniques are applied globally. PCA and ICA are two well-known methods used to process a variety of data. Though PCA has been used as a dimension-reducing preprocessing step for obtaining ICA components for the iris, it has never been analyzed in depth as an individual encoding method. In practice, PCA and ICA are known as methods that extract global and fine features, respectively. It is shown here that when PCA and ICA are used to encode iris images, one of the critical steps required to achieve good performance is compensation for the rotation effect. We further study the effect of varying the image resolution level on the performance of the two encoding methods. The major motivation for this study is the practical case in which images of the same or different irises taken at different distances have to be compared. The performance of the encoding techniques is analyzed using the CASIA dataset. The original images are non-ideal and thus require a sequence of preprocessing steps prior to application of the encoding methods. We plot a series of Receiver Operating Characteristics (ROCs) to demonstrate various effects on the performance of the iris-based recognition system implementing the PCA and ICA encoding techniques.
    Proc SPIE 03/2005;
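A global PCA encoder of the kind analyzed here can be sketched in a few lines: centre the vectorized iris images, keep the top-k principal directions via SVD, and project. The shapes and k are illustrative; the paper's preprocessing and matching stages are not reproduced:

```python
import numpy as np

def pca_encode(train, k):
    """train: (n_samples, n_pixels). Returns mean, basis, and per-image codes."""
    mean = train.mean(axis=0)
    X = train - mean                       # centre the vectorized images
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    basis = vt[:k]                         # top-k principal directions (k, n_pixels)
    return mean, basis, X @ basis.T        # codes: one k-vector per image
```

A new image x would be encoded as `(x - mean) @ basis.T`; an ICA stage, as in the second technique, would then be applied to these projections.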
  • Journal of Electronic Imaging 01/2005; 14:043018. · 1.06 Impact Factor
  • ABSTRACT: Over the last decade, perceptually based image compression has gained significant importance because it relies on Human Visual Perception (HVP) to measure reconstruction quality during compression, humans being the end users of images. Visual data perceived by humans can be characterized in terms of three parameters: the magnitude, phase, and orientation of the spatial frequency content. While existing perceptually based image compression techniques exploit the first parameter, the novel contribution of this paper is its focus on the use of phase data for perceptually based texture compression. An HVS-based texture characterization approach is applied to measure the phase coherence perceived by humans in the image. Images are then compressed further after removing the imperceptible phase redundancy. Finally, subjective tests are performed to measure the reconstruction quality of the proposed compression approach. The proposed compression algorithm has been applied in the JPEG2000 framework, and simulation results that demonstrate its efficiency are presented.
    Image Processing, 2004. ICIP '04. 2004 International Conference on; 11/2004
  • Source
    ABSTRACT: Law enforcement agencies have been exploiting biometric identifiers for decades as key tools in forensic identification. With the evolution of information technology and the huge volume of cases that need to be investigated by forensic specialists, it has become important to automate forensic identification systems. While antemortem (AM) identification, that is, identification prior to death, is usually possible through comparison of many biometric identifiers, postmortem (PM) identification, that is, identification after death, is impossible using behavioral biometrics (e.g. speech, gait). Moreover, under severe circumstances, such as those encountered in mass disasters (e.g. airplane crashes), or when identification is attempted more than a couple of weeks postmortem, most physiological biometrics cannot be employed for identification, because the soft tissues of the body decay to unidentifiable states. A postmortem biometric identifier therefore has to resist the early decay that affects body tissues. Because of their survivability and diversity, the best candidates for postmortem biometric identification are the dental features. In this paper we present an overview of an automated dental identification system for missing and unidentified persons. This dental identification system can be used by law enforcement and security agencies in both forensic and biometric identification. We also present techniques for dental segmentation of X-ray images; these techniques address the problems of identifying each individual tooth and extracting the contours of each tooth.
    Proc SPIE 08/2004;
  • Source
    ABSTRACT: Law enforcement agencies have been exploiting biometric identifiers for decades as key tools in forensic identification. A postmortem biometric identifier has to resist the early decay that affects body tissues. Because of their survivability and diversity, the best candidates for postmortem biometric identification are the dental features. In this paper, we present an overview of ADIS (Automated Dental Identification System). We also present a new fully automated algorithm for identifying people from dental X-ray images, as one of the ADIS components. The algorithm automatically archives antemortem (AM) dental photographs by extracting teeth shapes and storing them in a database. Given a postmortem (PM) dental image, the proposed algorithm retrieves the best matches from the database.
    Circuits and Systems, 2003 IEEE 46th Midwest Symposium on; 01/2004
  • ABSTRACT: In this paper, we propose a design for a novel lifting-based wavelet system that achieves the best trade-off between compression and classification performance. The proposed system is based on bi-orthogonal filters and can operate in a scalable compression framework. The trade-off point between compression and classification is determined by the system; however, the user can also fine-tune the relative performance using two controllers (one for compression and one for classification). Extensive simulations have been performed to demonstrate the compression and/or classification performance of our system in the context of the recent image compression standard, JPEG2000. Our simulation results show that the lifting-based kernels generated from the proposed system are capable of achieving superior compression performance compared to the default kernels adopted in the JPEG2000 standard (at a classification rate of 70%). The generated kernels can also achieve compression quality comparable to the JPEG2000 kernels whilst providing 99% classification performance. In other words, the proposed lifting-based system achieves the best trade-off between compression and classification performance at the compressed bit-stream level in the wavelet domain.
    J. Visual Communication and Image Representation. 01/2004; 15:145-162.
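For context, the reversible LeGall 5/3 kernel that JPEG 2000 adopts by default can be written as one predict step and one update step, which is the lifting structure such generated kernels share. Boundary handling below is simple index clamping, an illustrative choice:

```python
def lifting_53(x):
    """One level of the integer 5/3 lifting transform (even-length input)."""
    even, odd = list(x[::2]), list(x[1::2])
    m = len(even)
    # predict: detail = odd - floor((left_even + right_even) / 2)
    d = [odd[i] - (even[i] + even[min(i + 1, m - 1)]) // 2
         for i in range(len(odd))]
    # update: approximation = even + floor((left_d + right_d + 2) / 4)
    s = [even[i] + (d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) // 4
         for i in range(m)]
    return s, d

def inverse_53(s, d):
    """Undo update, then undo predict, then interleave the two streams."""
    m = len(s)
    even = [s[i] - (d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) // 4
            for i in range(m)]
    odd = [d[i] + (even[i] + even[min(i + 1, m - 1)]) // 2
           for i in range(len(d))]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out
```

Because each lifting step is subtracted back exactly in the inverse, the transform is perfectly reversible on integers, regardless of the rounding inside the predict and update expressions.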
  • Source
    ABSTRACT: We present a web-based environment for the Automated Dental Identification System (ADIS). This system is designed for the identification of missing, unidentified, and wanted persons based on dental characteristics. The web-based environment of ADIS allows users to browse different databases from remote locations, to digitally process query dental images in order to extract the distinctive features used for record matching, and finally to present to the forensic expert a short match list to be inspected.
    Proceedings of the 7th Annual International Conference on Digital Government Research, DG.O 2006, San Diego, California, USA, May 21-24, 2006; 01/2004
  • Source
    ABSTRACT: This paper addresses the problem of developing an automated system for postmortem identification using dental records. The Automated Dental Identification System (ADIS) can be used by law enforcement agencies to locate missing persons using databases of dental X-rays. Currently, this search and identification process is carried out manually, which makes it very time-consuming and unreliable. In this paper, we propose an architecture for ADIS, define the functionality of its components, and briefly describe some of the techniques used in realizing these components.
    Biometric Authentication, First International Conference, ICBA 2004, Hong Kong, China, July 15-17, 2004, Proceedings; 01/2004

Publication Stats

207 Citations
12.68 Total Impact Points

Institutions

  • 2008
    • University of North Carolina at Greensboro
      Greensboro, North Carolina, United States
  • 2007–2008
    • The German University in Cairo
      Al Qāhirah, Al Qāhirah, Egypt
  • 2004–2005
    • West Virginia University
      • Department of Computer Science & Electrical Engineering
      Morgantown, WV, United States
  • 2000–2004
    • Arizona State University
      • Center for Cognitive Ubiquitous Computing
      Phoenix, AZ, United States