James H. McClellan

Georgia Institute of Technology, Atlanta, Georgia, United States

Publications (250) · 317.37 Total Impact

  • Source
    Lingchen Zhu · Entao Liu · James H. McClellan
    ABSTRACT: Full waveform inversion (FWI) delivers high-resolution images of a subsurface medium model by iteratively minimizing the least-squares misfit between the observed and simulated seismic data. Due to the limited accuracy of the starting model and the inconsistency of the seismic waveform data, the FWI problem is inherently ill-posed, so regularization techniques are typically applied to obtain better models. FWI is also computationally expensive because modern seismic surveys cover very large areas of interest and collect massive volumes of data. The dimensionality of the problem and the heterogeneity of the medium both stress the need for faster algorithms and sparse regularization techniques to accelerate and improve imaging results. This paper reaches these goals by developing a compressive sensing approach for the FWI problem, in which the sparsity of model perturbations is exploited within learned dictionaries. Based on stochastic approximations, the dictionaries are updated iteratively to adapt to dynamic model perturbations. Meanwhile, the dictionaries are kept orthonormal so that the corresponding transform remains fast and compact without introducing extra computational overhead to FWI. Such sparsity regularization on model perturbations enables computation on randomly subsampled data and thus significantly reduces the cost. Compared with other approaches that employ sparsity constraints in the fixed curvelet transform domain, our approach achieves more robust inversion results with better model fit and visual quality.
    Full-text · Article · Nov 2015
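    The core sparsity-regularization step described above can be illustrated with a short sketch. This is not code from the paper; it assumes an orthonormal dictionary D (so its inverse is its transpose), a precomputed misfit gradient, and placeholder step and threshold values, and shows one proximal-gradient update of the model with soft-thresholding of the dictionary coefficients (Python/NumPy):
      import numpy as np

      def sparse_model_update(m, grad, D, step=1e-2, lam=1e-3):
          # Gradient step on the least-squares data misfit.
          m_new = m - step * grad
          # Analysis with the learned dictionary: D is orthonormal, so D^{-1} = D^T.
          c = D.T @ m_new
          # Soft-threshold the coefficients (proximal operator of the L1 sparsity penalty).
          c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
          # Synthesize the sparsity-regularized model perturbation.
          return D @ c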
  • Source
    Lingchen Zhu · Entao Liu · James H. McClellan
    ABSTRACT: Seismic data comprise many traces that provide a spatiotemporal sampling of the reflected wavefield. However, such information may suffer from ambient and random noise during acquisition, which can limit the use of seismic data for locating reservoirs. Traditionally, fixed transforms are used to separate the noise from the data by exploiting their different characteristics in a transform domain. However, their performance may not be satisfactory because they lack adaptability to changing data structures. We have developed a novel seismic data denoising method based on a parametric dictionary learning scheme. Unlike previous dictionary learning methods that had to learn unconstrained atoms, our method exploits the underlying sparse structure of the learned atoms over a base dictionary and significantly reduces the number of dictionary elements that need to be learned. By combining the advantages of multiscale representations with the power of dictionary learning, more degrees of freedom are provided to the sparse representation, so the characteristics of seismic data can be efficiently captured in sparse coefficients for denoising. The dictionary learning and denoising were performed on all overlapping patches of the given noisy seismic data, which kept the complexity low. Numerical experiments on synthetic seismic data indicated that our scheme achieved the best denoising performance in terms of peak signal-to-noise ratio and minimized visual distortion.
    Full-text · Article · Nov 2015 · Geophysics
  • Kyle R. Krueger · James H. McClellan · Waymond R. Scott
    ABSTRACT: Ground-penetrating radar (GPR) is used to image and detect subterranean objects, for example, in landmine detection. Although full 3-D inversion of GPR measurements is possible for simple algorithms such as backprojection, it is impractical when using more advanced algorithms that involve $\ell_1$-minimization. Many of the algorithms used for GPR imaging involve the storage, or online generation, of a huge dictionary matrix created from discretizing a high-dimensional nonlinear model. This parametric model includes all the target features that need to be extracted, including 3-D location, object orientation, and target type. As more parameters are added to the model, the dimensionality increases. If uniform sampling is done over high-dimensional parameter space, the size of the dictionary and the complexity of the inversion algorithms rapidly grow, exceeding the capability of real-time processors. This paper shows that strategic structuring of the dictionary, which takes advantage of translational invariance in the model, can reduce the dictionary storage by several orders of magnitude and exploit the fast Fourier transform for fast computation of previously highly impractical, bordering on impossible, 3-D GPR imaging problems.
    No preview · Article · Jun 2015 · IEEE Transactions on Geoscience and Remote Sensing
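    A minimal sketch of how translational invariance removes the need to store shifted dictionary columns, assuming a single target template and ignoring the other model parameters (orientation, type) discussed in the abstract; correlating the data with one template via the FFT applies every spatially shifted copy of that template at once (Python/NumPy):
      import numpy as np

      def correlate_with_template(data, template):
          # Zero-pad to the full linear-correlation size to avoid circular wrap-around.
          shape = [d + t - 1 for d, t in zip(data.shape, template.shape)]
          D = np.fft.rfftn(data, shape)
          T = np.fft.rfftn(template, shape)
          # Multiplying by conj(T) correlates the data with the template at every shift,
          # i.e., it applies all spatially shifted dictionary columns in one FFT pass.
          return np.fft.irfftn(D * np.conj(T), shape)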
  • Mu-Hsin Wei · Waymond R. Scott · James H. McClellan
    ABSTRACT: Recent developments in the estimation of the discrete spectrum of relaxation frequencies (DSRF) have opened the door to more robust subsurface target discrimination using electromagnetic induction measurements. In particular, a nonnegative least squares DSRF (NNLSQ-DSRF) estimation method has been shown to be robust and free from parameter tuning. In this letter, we propose an adaptive prefiltering process to complement the NNLSQ-DSRF, in which we linearly combine measurements to produce a filtered signal that is very likely to have a nonnegative DSRF as well as an enhanced signal-to-noise ratio. Using synthetic and field data, we demonstrate that the proposed adaptive prefilter can effectively produce signals with nonnegative DSRFs.
    No preview · Article · May 2015 · IEEE Geoscience and Remote Sensing Letters
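    The nonnegative least squares fit at the heart of the NNLSQ-DSRF idea can be sketched as below. The relaxation kernel and variable names here are simplified assumptions, not the authors' exact model; the point is that the DSRF weights are kept nonnegative by the solver itself, with no tuning parameters (Python/SciPy):
      import numpy as np
      from scipy.optimize import nnls

      def dsrf_nnls(freqs, response, relax_grid):
          # Assumed relaxation kernel for each candidate frequency zeta: jw / (jw + zeta).
          K = 1j * freqs[:, None] / (1j * freqs[:, None] + relax_grid[None, :])
          # Stack real and imaginary parts so the problem is real-valued.
          A = np.vstack([K.real, K.imag])
          b = np.concatenate([response.real, response.imag])
          # Nonnegative least squares returns the DSRF weights directly.
          weights, _ = nnls(A, b)
          return weights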
  • Carson A. Wick · Omer T. Inan · James H. McClellan · Srini Tridandapani
    ABSTRACT: Cardiac computed tomography angiography (CTA) is a minimally invasive imaging technology for characterizing coronary arteries. A fundamental limitation of CTA imaging is cardiac movement, which can cause artifacts and reduce the quality of the obtained images. To mitigate this problem, current approaches gate the image acquisition based on the electrocardiogram (ECG) to predict the timing of quiescent periods of the cardiac cycle. This paper focuses on developing a foundation for using a mechanical alternative to the ECG for finding these quiescent periods: the seismocardiogram (SCG). SCG was used to determine beat-by-beat systolic and diastolic quiescent periods of the cardiac cycle for nine healthy subjects and 11 subjects with various cardiovascular diseases. To reduce noise in the SCG and quantify these quiescent periods, a Kalman filter was designed to extract the velocity of chest wall movement from the recorded SCG signals. The average systolic and diastolic quiescent periods were centered at 29% and 76% for the healthy subjects, and 33% and 79% for subjects with cardiovascular disease. Both inter- and intra-subject variability in the quiescent phases was observed compared to ECG-predicted phases, suggesting that the ECG may be a suboptimal modality for predicting quiescence, and that the SCG provides complementary data to the ECG.
    No preview · Article · Mar 2015 · IEEE transactions on bio-medical engineering
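    One plausible reading of the Kalman filtering step, shown only as a hedged sketch: the SCG acceleration drives a displacement/velocity state, and a zero-mean pseudo-measurement of displacement keeps the integration from drifting. The state model, the pseudo-measurement, and the noise levels q and r are assumptions, not the published filter (Python/NumPy):
      import numpy as np

      def scg_velocity(accel, fs, q=1e-4, r=1e-2):
          dt = 1.0 / fs
          F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [displacement, velocity]
          B = np.array([0.5 * dt ** 2, dt])       # measured acceleration drives the state
          H = np.array([[1.0, 0.0]])              # pseudo-measurement: displacement stays near zero
          x, P = np.zeros(2), np.eye(2)
          vel = np.zeros(len(accel))
          for k, a in enumerate(accel):
              # Predict using the SCG acceleration sample as a known input.
              x = F @ x + B * a
              P = F @ P @ F.T + q * np.eye(2)
              # Update against the zero-displacement constraint to suppress integration drift.
              y = 0.0 - (H @ x)[0]
              S = (H @ P @ H.T)[0, 0] + r
              K = (P @ H.T)[:, 0] / S
              x = x + K * y
              P = (np.eye(2) - np.outer(K, H[0])) @ P
              vel[k] = x[1]
          return vel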
  • ABSTRACT: Accurate knowledge of cardiac quiescence is crucial to the performance of many cardiac imaging modalities, including computed tomography coronary angiography (CTCA). To accurately quantify quiescence, a method for detecting the quiescent periods of the heart from retrospective cardiac computed tomography (CT) using a correlation-based, phase-to-phase deviation measure was developed. Retrospective cardiac CT data were obtained from 20 patients (11 male, 9 female, 33-74 yr), and the left main, left anterior descending, left circumflex, right coronary artery (RCA), and interventricular septum (IVS) were segmented for each phase using a semiautomated technique. Cardiac motion of the individual coronary vessels as well as the IVS was calculated using phase-to-phase deviation. As an easily identifiable feature, the IVS was analyzed to assess how well it predicts vessel quiescence. Finally, the diagnostic quality of the reconstructed volumes from the quiescent phases determined using the deviation measure from the vessels in aggregate and the IVS was compared to that from quiescent phases calculated by the CT scanner. Three board-certified radiologists, fellowship-trained in cardiothoracic imaging, graded the diagnostic quality of the reconstructions using a Likert response format: 1 = excellent, 2 = good, 3 = adequate, 4 = nondiagnostic. Systolic and diastolic quiescent periods were identified for each subject from the vessel motion calculated using the phase-to-phase deviation measure. The motion of the IVS was found to be similar to the aggregate vessel (AGG) motion. The diagnostic quality of the coronary vessels for the quiescent phases calculated from the aggregate vessel (PAGG) and IVS (PIVS) deviation signals using the proposed methods was comparable to the quiescent phases calculated by the CT scanner (PCT). The one exception was the RCA, which improved for PAGG for 18 of the 20 subjects when compared to PCT (PCT = 2.48; PAGG = 2.07, p = 0.001). A method for quantifying the motion of specific coronary vessels using a correlation-based, phase-to-phase deviation measure was developed and tested on 20 patients receiving cardiac CT exams. The IVS was found to be a suitable predictor of vessel quiescence. The diagnostic quality of the quiescent phases detected by the proposed methods was comparable to those calculated by the CT scanner. The ability to quantify coronary vessel quiescence from the motion of the IVS can be used to develop new CTCA gating techniques and quantify the resulting potential improvement in CTCA image quality.
    No preview · Article · Feb 2015 · Medical Physics
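    A compact sketch of one way to compute a correlation-based, phase-to-phase deviation measure, based on our reading of the abstract; the array layout and the use of normalized cross-correlation are assumptions (Python/NumPy):
      import numpy as np

      def phase_deviation(phases):
          # phases: array of shape (n_phases, H, W) holding the segmented region per cardiac phase.
          dev = []
          for a, b in zip(phases[:-1], phases[1:]):
              a0, b0 = a - a.mean(), b - b.mean()
              ncc = (a0 * b0).sum() / (np.linalg.norm(a0) * np.linalg.norm(b0) + 1e-12)
              dev.append(1.0 - ncc)  # low deviation between consecutive phases marks quiescence
          return np.array(dev)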
  • Kim Arild Steen · James H. McClellan · Ole Green · Henrik Karstoft
    ABSTRACT: Long baseline microphone arrays, in which the distance between individual microphones is large, are a cost-effective way to monitor activity in a large region of interest. The large source-to-sensor distances in a long baseline setup generally lead to low signal-to-noise ratios (SNRs), which degrades localization performance. This paper investigates how energy-based localization (EBL) can be used for localization and tracking of both single and multiple acoustic sources within a long baseline array. Least-squares (LS) optimization has been used for EBL; however, its localization performance is sensitive to low SNR. We propose a tracking scheme based on a cost reference particle filter (CRPF) to increase performance at low SNR. The CRPF is a recent class of particle filter that estimates the system state from the available observations without a priori knowledge of any probability density function. We present a modified cost function for EBL that is incorporated into the CRPF framework to improve tracking performance during simulated wind gusts and background noise. The proposed method outperforms LS-based localization in the case of low SNR.
    No preview · Article · Jan 2015 · Applied Acoustics
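    For reference, a minimal sketch of the least-squares energy-based localization cost that the CRPF approach competes with, assuming the standard 1/d^2 energy-decay model; the unknown source energy is eliminated with a closed-form fit, and a particle filter would score candidate positions with a cost of this kind (Python/NumPy):
      import numpy as np

      def ebl_cost(pos, mic_positions, energies):
          # Squared distances from the candidate source position to each microphone.
          d2 = np.sum((mic_positions - pos) ** 2, axis=1)
          g = 1.0 / d2                       # assumed energy-decay model: E_i ~ s / d_i^2
          s = (g @ energies) / (g @ g)       # closed-form LS estimate of the source energy s
          return np.sum((energies - s * g) ** 2)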
  • Carson A. Wick · James H. McClellan · Omer T. Inan · Srini Tridandapani
    ABSTRACT: As a measure of chest wall acceleration caused by cardiac motion, the seismocardiogram (SCG) has the potential to supplement the electrocardiogram (ECG) to more accurately trigger cardiac computed tomography angiography (CTA) data acquisition during periods of cardiac quiescence. The SCG was used to identify the systolic and diastolic quiescent periods of the cardiac cycle on a beat-by-beat basis and from composite velocity signals for nine healthy subjects. The cardiac velocity transmitted to the chest wall was calculated using a Kalman filter. The average systolic and diastolic quiescent periods were centered at 30% and 76%, respectively. Inter- and intra-subject variability of the quiescent phases with respect to the ECG was observed, suggesting that the ECG may be a suboptimal modality for predicting cardiac quiescence.
    No preview · Article · Aug 2014
  • Kyle R. Krueger · Waymond R. Scott · James H. McClellan
    ABSTRACT: Electromagnetic induction (EMI) sensors are commonly used to detect and locate buried metallic objects such as landmines, but they are capable of extracting much more information about the objects, e.g., location and magnetic polarizability. Recent research has led to an effective inversion method that extracts target orientation by finding the tensor representation of the target [1-3]. These 'tensor amplitude' extraction techniques also have an important capability that has not been fully examined until now: the ease with which they can be used with a variety of EMI sensor geometries. This paper examines how slight alterations to an existing sensor geometry can dramatically increase its effectiveness, while still using 'tensor amplitude' extraction to accurately and efficiently determine the unknown parameters of a metallic object.
    No preview · Conference Paper · Jul 2014
  • Lingchen Zhu · James H. McClellan
    ABSTRACT: A heterogeneous network (HetNet) uses a two-tier architecture in which an unplanned femtocell layer is randomly deployed with no coordination with the coexisting macrocell layer. Both layers share the same spectrum, so intercell interference is unavoidable and must be identified and canceled. To address this interference management problem, this paper proposes an intercell interference channel estimation scheme for HetNets using compressive sensing (CS). Applying CS to analog orthogonal frequency division multiple access (OFDMA) signals not only achieves lower-cost sub-Nyquist sampling but also reduces interference by spreading out the energy of the reference symbols of the interference link that overlap the data symbols of the desired link. Our scheme iteratively refines both the desired data symbols and the interference channel estimate by canceling each from the received signal in turn. Simulation results show that our scheme obtains accurate estimates of the interference channel and is robust to variations in the number of multipath components and the signal-to-interference ratio (SIR).
    No preview · Conference Paper · Jun 2014
  • ABSTRACT: Experimental data measured at a field test site with a broadband electromagnetic induction (EMI) sensor are presented. The system is an improved version of the Georgia Tech EMI system developed over the past several years. It operates over a 300-to-1 bandwidth and is more sensitive and more power efficient than earlier systems. Data measured with the system are presented with an emphasis on features that can be used to separate metallic targets from the soil response and to discriminate between certain classes of metallic targets.
    No preview · Conference Paper · May 2014
  • Lingchen Zhu · Chenchi Luo · James H. McClellan
    ABSTRACT: Cognitive radio (CR) systems offer higher spectrum utilization by opportunistically allocating unused spectrum from primary users to secondary users. For CR it is vital to perform fast and accurate spectrum sensing over a wideband, noisy channel. Cyclic feature detection performs well for signal detection and is highly robust to noise uncertainty; however, it requires a high sampling rate when operating over a wideband channel. Because the cyclic spectrum is sparse, compressive sampling techniques can be used to reconstruct it from sub-Nyquist samples. This paper develops a simpler cyclic spectrum recovery method based on random sampling and demonstrates faster and better performance. Recent research on discrete random sampling provides a new connection between sub-Nyquist sampling and aliasing, viewing aliasing as a noise floor that can be dynamically shaped by different distributions of sampling times. Practical analog-to-digital converters can implement these random sampling schemes. Thus, a reduced-hardware-complexity cyclic feature detector based on the reconstructed cyclic spectrum is proposed to identify spectrum occupancy across the entire wideband channel.
    No preview · Conference Paper · Dec 2013
  • K.R. Krueger · J.H. McClellan · W.R. Scott
    ABSTRACT: Sensor array measurements can be inverted to image a region containing targets. The resulting amplitude image is usually interpreted as target strength versus location, but often the imaged amplitude is a function of more parameters than just location. Sparse target regions can be imaged with dictionary-based modeling, which relies on enumerating each parameter on a dense grid. With many parameters, the dictionary becomes too large, which leads to computational complexity issues. This paper shows how additional parameters, such as target orientation and symmetry, can be represented by a tensor matrix instead of a simple amplitude. Furthermore, the tensor can be treated as a continuous variable just like amplitude, which enables extraction of multiple parameters while reducing the storage requirements of the dictionary and reducing off-grid modeling error.
    No preview · Conference Paper · Oct 2013
  • Chenchi Luo · J.H. McClellan
    ABSTRACT: This paper proposes a new perspective on the relationship between sampling and aliasing. Unlike the uniform sampling case, where the aliases are simply periodic replicas of the original spectrum, random sampling theory shows that randomizing the sampling intervals shapes the aliases into a noise floor in the sampled spectrum. New insights into both the Fourier random sampling problem and compressive sensing theory can be obtained using the theoretical framework of random sampling. This paper extends the theory of continuous-time random sampling to handle random discrete intervals generated from a clock. A key result relates the discrete probability distribution of the sampling intervals to the power spectrum of the aliasing noise. Based on the proposed theory, a generic discrete random sampling hardware architecture is also proposed for sampling and reconstructing a class of spectrally sparse signals at an average rate significantly below the Nyquist rate of the signal.
    No preview · Conference Paper · Oct 2013
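    A small numerical illustration of the premise (ours, not the authors' experiment): sampling a tone below its Nyquist rate with uniform intervals produces a sharp alias, while random intervals spread the aliased energy into a noise-like floor. The rates, tone frequency, and interval distribution are arbitrary choices (Python/NumPy):
      import numpy as np

      rng = np.random.default_rng(0)
      fs_avg, n, f0 = 1000.0, 4096, 1300.0    # 1 kHz average rate is sub-Nyquist for a 1.3 kHz tone
      t_uniform = np.arange(n) / fs_avg
      t_random = np.cumsum(rng.exponential(1.0 / fs_avg, n))   # random sampling intervals

      for name, t in (("uniform", t_uniform), ("random", t_random)):
          x = np.cos(2 * np.pi * f0 * t)
          X = np.abs(np.fft.rfft(x * np.hanning(n)))
          # Uniform sampling folds the tone into one sharp alias line; random intervals
          # spread the same energy into a broadband noise floor.
          print(f"{name:8s} peak-to-median spectral ratio: {X.max() / np.median(X):.1f}")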
  • Chenchi Luo · Lingchen Zhu · J.H. McClellan
    ABSTRACT: This paper presents a novel digital blind calibration method for time-interleaved analog-to-digital converters (TIADCs). A simple cost function based on the cross-correlation of channel statistics is used to derive a steepest descent algorithm for the compensation of timing mismatch errors. Instead of calibrating the timing mismatches independently for each channel, only one adaptation channel needs to be calibrated within a closed loop. The calibration of the remaining channels can be coordinated according to a scaling relationship established during an initialization stage. As a result, both the computational complexity and the convergence speed of the proposed algorithm can be improved significantly with little loss in calibration performance.
    No preview · Conference Paper · Oct 2013
  • Kyle Krueger · Waymond R. Scott · James H. McClellan
    ABSTRACT: Dictionary matching techniques have been an effective way to detect the location and orientation of buried targets using electromagnetic induction (EMI) sensors. Two problems with dictionary-based detection are that it requires a large amount of computer storage to enumerate nine parameter dimensions, and that fine discretization of the parameter space is needed to reduce modeling error. The proposed method shrinks the dictionary size by five orders of magnitude and reduces modeling error by directly solving for the 3×3 tensor model of the target. A robust low-rank matrix approximation algorithm has been implemented that can also account for directional insensitivities in the measurements.
    No preview · Article · Jun 2013 · Proceedings of SPIE - The International Society for Optical Engineering
  • Chenchi Luo · Lingchen Zhu · James H. McClellan
    ABSTRACT: This paper proposes a general structure for FIR filters with adjustable magnitude and phase responses controlled by a few parameters. The Farrow structure, which uses one parameter to control the fractional delay of an FIR filter, can be viewed as a special case. A filter bank consisting of different types of linear-phase differentiators forms the basis of the structure. The filter bank outputs are combined with coefficients derived from a polynomial expansion of the desired frequency response. The magnitude and phase responses are controlled by synthesizing the polynomial coefficients from the small set of control parameters. A new optimal polynomial approximation strategy is also proposed to better approximate the family of target frequency responses.
    No preview · Conference Paper · May 2013
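    The Farrow special case mentioned above can be shown with the simplest (two-branch, linear-interpolation) textbook example; this is standard material rather than code from the paper. Fixed branch filters are combined with powers of a single control parameter d, the fractional delay (Python/SciPy):
      import numpy as np
      from scipy.signal import lfilter

      def farrow_fractional_delay(x, d):
          # Two-branch Farrow form of linear interpolation: y[n] = (1 - d) x[n] + d x[n-1].
          branch0 = np.asarray(x, dtype=float)            # C0(z) = 1
          branch1 = lfilter([-1.0, 1.0], [1.0], branch0)  # C1(z) = -1 + z^{-1}
          return branch0 + d * branch1                    # combine branches with powers of d
    Higher-order designs add more fixed branch filters combined with higher powers of the control parameters, which is the pattern a general adjustable-response structure builds on.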
  • ABSTRACT: Two novel methods for detecting cardiac quiescent phases from B-mode echocardiography using a correlation-based frame-to-frame deviation measure were developed. Accurate knowledge of cardiac quiescence is crucial to the performance of many imaging modalities, including computed tomography coronary angiography (CTCA). Synchronous electrocardiography (ECG) and echocardiography data were obtained from 10 healthy human subjects (four male, six female, 23-45 years), and the interventricular septum (IVS) was observed using the apical four-chamber echocardiographic view. The velocity of the IVS was derived from active contour tracking and verified using tissue Doppler imaging echocardiography methods. In turn, the frame-to-frame deviation methods for identifying quiescence of the IVS were verified using active contour tracking. The timing of the diastolic quiescent phase was found to exhibit both inter- and intra-subject variability, suggesting that the current method of CTCA gating based on the ECG is suboptimal and that gating based on signals derived from cardiac motion is likely more accurate in predicting quiescence for cardiac imaging. Two robust and efficient methods for identifying cardiac quiescent phases from B-mode echocardiographic data were developed and verified. The methods presented in this paper will be used to develop new CTCA gating techniques and quantify the resulting potential improvement in CTCA image quality.
    No preview · Article · Jan 2013 · IEEE Journal of Translational Engineering in Health and Medicine
  • Source
    ABSTRACT: Objective: We present a Matlab-based tool to convert electrocardiography (ECG) information from paper charts into digital ECG signals. The tool can be used for long-term retrospective studies of cardiac patients to study evolving features with prognostic value. Methods and procedures: To perform the conversion, we: 1) detect the graphical grid on ECG charts using grayscale thresholding; 2) digitize the ECG signal based on its contour using a column-wise pixel scan; and 3) use template-based optical character recognition to extract patient demographic information from the paper ECG in order to interface the data with the patients' medical record. To validate the digitization technique: 1) correlation between the digital signals and the signals digitized from paper ECG is computed and 2) clinically significant ECG parameters are measured and compared from both the paper-based ECG signals and the digitized ECG. Results: The validation demonstrates a correlation value of 0.85-0.9 between the digital ECG signal and the signal digitized from the paper ECG. There is a high correlation in the clinical parameters between the ECG information from the paper charts and the digitized signal, with intra-observer and inter-observer correlations of 0.8-0.9 (p < 0.05), and kappa statistics ranging from 0.85 (inter-observer) to 1.00 (intra-observer). Conclusion: The important features of the ECG signal, especially the QRST complex and the associated intervals, are preserved by obtaining the contour from the paper ECG. The differences between the measures of clinically important features extracted from the original signal and the reconstructed signal are insignificant, highlighting the accuracy of this technique. Clinical impact: Using this type of ECG digitization tool to carry out retrospective studies on large databases that rely on paper ECG records, studies of emerging ECG features can be performed. In addition, this tool can be used to integrate digitized ECG information with digital ECG analysis programs and with the patient's electronic medical record.
    Full-text · Article · Jan 2013 · IEEE Journal of Translational Engineering in Health and Medicine
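    A rough sketch of the column-wise pixel scan step (step 2 above). Details such as the threshold value and the assumption that the trace is the darkest content after grid removal are ours, not the tool's (Python/NumPy):
      import numpy as np

      def digitize_trace(gray, trace_threshold=0.3):
          # gray: 2-D image in [0, 1] with a dark ECG trace on a lighter background/grid.
          mask = gray < trace_threshold
          samples = np.full(gray.shape[1], np.nan)
          for col in range(gray.shape[1]):
              rows = np.flatnonzero(mask[:, col])
              if rows.size:
                  samples[col] = rows.mean()      # vertical position of the trace in this column
          # Flip the sign because image row indices increase downward.
          return -(samples - np.nanmean(samples))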
  • Chenchi Luo · Milind A. Borkar · Arthur J. Redfern · James H. McClellan
    ABSTRACT: Capacitive touch screens are ubiquitous in today's electronic devices. Improved touch screen responsiveness and resolution can be achieved at the expense of the touch screen controller's analog hardware complexity and power consumption. This paper proposes an alternative compressive sensing based approach that exploits the sparsity of simultaneous touches with respect to the number of sensor nodes to achieve similar levels of responsiveness. It is possible to reduce the analog data acquisition complexity at the cost of extra digital computations, with less total power consumption. With compressive sensing, the number of measurements required to resolve the positions of the sparse touches is related to the number of touches rather than the number of nodes. Detailed measurement circuits and methodologies are presented along with the corresponding reconstruction algorithm.
    No preview · Article · Sep 2012 · IEEE Journal on Emerging and Selected Topics in Circuits and Systems
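    An illustrative sketch of the sparse-recovery principle behind the approach, not the paper's measurement circuits or reconstruction algorithm: with k simultaneous touches among N nodes, far fewer than N random linear measurements suffice, recovered here with a basic orthogonal matching pursuit and arbitrary problem sizes (Python/NumPy):
      import numpy as np

      def omp(A, y, k):
          # Recover a k-sparse x from y = A @ x by orthogonal matching pursuit.
          residual, support = y.copy(), []
          for _ in range(k):
              support.append(int(np.argmax(np.abs(A.T @ residual))))
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
          x = np.zeros(A.shape[1])
          x[support] = coef
          return x

      rng = np.random.default_rng(0)
      N, m, k = 256, 40, 3                                  # nodes, measurements, touches (assumed)
      x_true = np.zeros(N)
      x_true[rng.choice(N, k, replace=False)] = 1.0         # touched node indicators
      A = rng.standard_normal((m, N)) / np.sqrt(m)          # random measurement matrix
      print(np.flatnonzero(omp(A, A @ x_true, k)))          # recovered touch node indices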

Publication Stats

3k Citations
317.37 Total Impact Points

Institutions

  • 1989-2015
    • Georgia Institute of Technology
      • School of Electrical & Computer Engineering
      • Center for Signal & Image Processing
      Atlanta, Georgia, United States
  • 2000-2004
    • Clark Atlanta University
      • Department of Engineering
      Atlanta, Georgia, United States
  • 1998
    • Massachusetts Institute of Technology
      Cambridge, Massachusetts, United States
  • 1995-1998
    • Rose-Hulman Institute of Technology
      • Department of Electrical and Computer Engineering
      Terre Haute, IN, United States
    • Washington State University
      • School of Electrical Engineering and Computer Science
      Pullman, WA, United States
  • 1996
    • University of Westminster
      London, England, United Kingdom
  • 1994
    • University of California, Berkeley
      • Department of Electrical Engineering and Computer Sciences
      Berkeley, California, United States
  • 1972
    • Rice University
      • Department of Electrical and Computer Engineering
      Houston, Texas, United States