D.L. Neuhoff

University of Michigan, Ann Arbor, Michigan, United States

Publications (166) · 163.84 Total Impact

  • Matthew A. Prelee, David L. Neuhoff
    ABSTRACT: This paper introduces Manhattan sampling in two and higher dimensions, and proves sampling theorems. In two dimensions, Manhattan sampling, which takes samples densely along a Manhattan grid of lines, can be viewed as sampling on the union of two rectangular lattices, one dense horizontally and the other dense vertically, with the coarse spacing of each being a multiple of the fine spacing of the other. The sampling theorem shows that images bandlimited to the union of the Nyquist regions of the two rectangular lattices can be recovered from their Manhattan samples, and an efficient procedure for doing so is given. Such recovery is possible even though there is overlap among the spectral replicas induced by Manhattan sampling. In three and higher dimensions, there are many possible configurations for Manhattan sampling, each consisting of the union of special rectangular lattices called bi-step lattices. This paper identifies them, proves a sampling theorem showing that images bandlimited to the union of the Nyquist regions of the bi-step rectangular lattices are recoverable from Manhattan samples, and presents an efficient onion-peeling procedure for doing so. Furthermore, it develops a special representation for the bi-step lattices and an algebra with nice properties. It is also shown that the set of reconstructible images is maximal in the Landau sense. While most of the paper deals with continuous-space images, Manhattan sampling of discrete-space images is also considered, for images with infinite as well as finite support.
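    As a minimal illustration (not from the paper), a discrete-space Manhattan sampling pattern can be generated as the union of two rectangular lattices of samples; the row_period and col_period parameters below are illustrative, not the paper's notation.

      import numpy as np

      def manhattan_mask(height, width, row_period=8, col_period=8):
          """Boolean mask that is True on a Manhattan grid of lines:
          every row_period-th row and every col_period-th column is
          sampled densely (every pixel on those lines is kept)."""
          mask = np.zeros((height, width), dtype=bool)
          mask[::row_period, :] = True   # dense sampling along selected rows
          mask[:, ::col_period] = True   # dense sampling along selected columns
          return mask

      # Example: fraction of pixels retained by an 8x8-period Manhattan grid
      mask = manhattan_mask(512, 512)
      print(mask.mean())                 # 2/8 - 1/64 = 0.234375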
  •
    ABSTRACT: The development and testing of objective texture similarity metrics that agree with human judgments of texture similarity require, in general, extensive subjective tests. The effectiveness and efficiency of such tests depend on a careful analysis of the abilities of human perception and the application requirements. The focus of this paper is on defining performance requirements and testing procedures for objective texture similarity metrics. We identify three operating domains for evaluating the performance of a similarity metric: the ability to retrieve “identical” textures; the top of the similarity scale, where a monotonic relationship between metric values and subjective scores is desired; and the ability to distinguish between perceptually similar and dissimilar textures. Each domain has different performance goals and requires different testing procedures. For the third domain, we propose ViSiProG, a new Visual Similarity by Progressive Grouping procedure for conducting subjective experiments that organizes a texture database into clusters of visually similar images. The grouping is based on visual blending and greatly simplifies labeling image pairs as similar or dissimilar. ViSiProG collects subjective data in an efficient and effective manner, so that a relatively large database of textures can be accommodated. Experimental results and comparisons with structural texture similarity metrics demonstrate both the effectiveness of the proposed subjective testing procedure and the performance of the metrics.
    Journal of the Optical Society of America A 01/2015; 32(2):329. DOI:10.1364/JOSAA.32.000329 · 1.45 Impact Factor
  •
    ABSTRACT: In order to provide ground truth for subjectively comparing compression methods for scenic bilevel images, as well as for judging objective similarity metrics, this paper describes the subjective similarity rating of a collection of distorted scenic bilevel images. Unlike text, line drawings, and silhouettes, scenic bilevel images contain natural scenes, e.g., landscapes and portraits. Seven scenic images were each distorted in forty-four ways, including random bit flipping, dilation, erosion, and lossy compression. To produce subjective similarity ratings, each distorted image was viewed by 77 subjects. These ratings are then used to compare the performance of four compression algorithms and to assess how well percentage error and SmSIM work as bilevel image similarity metrics. These subjective ratings can also provide ground truth for future tests of objective bilevel image similarity metrics.
    ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 05/2014
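    For reference, the percentage-error baseline mentioned above is simply the fraction of pixels at which two bilevel images differ; a minimal sketch:

      import numpy as np

      def percentage_error(img_a, img_b):
          """Percentage of differing pixels between two equal-size
          bilevel (boolean) images."""
          a = np.asarray(img_a, dtype=bool)
          b = np.asarray(img_b, dtype=bool)
          assert a.shape == b.shape, "images must have the same dimensions"
          return 100.0 * np.mean(a != b)

      # Example: two 4x4 images differing in one pixel -> 6.25% error
      x = np.zeros((4, 4), dtype=bool)
      y = x.copy(); y[0, 0] = True
      print(percentage_error(x, y))      # 6.25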
  • Yuanhao Zhai, David L. Neuhoff
    ABSTRACT: This paper proposes new objective similarity metrics for scenic bilevel images, which are images containing natural scenes such as landscapes and portraits. Though percentage error is the most commonly used similarity metric for bilevel images, it is not always consistent with human perception. Based on hypotheses about human perception of bilevel images, this paper proposes new metrics that outperform percentage error in the sense of attaining significantly higher Pearson and Spearman-rank correlation coefficients with respect to subjective ratings. The new metrics include Adjusted Percentage Error, Bilevel Gradient Histogram and Connected Components Comparison. The subjective ratings come from similarity evaluations described in a companion paper. Combinations of these metrics are also proposed, which exploit their complementarity to attain even better performance.
    ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 05/2014
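    A sketch (using SciPy, not the authors' code) of scoring a metric's agreement with subjective ratings via the Pearson and Spearman rank correlation coefficients; the arrays below are made-up placeholder values.

      import numpy as np
      from scipy.stats import pearsonr, spearmanr

      # One metric value and one mean subjective rating per distorted image
      # (placeholder numbers, for illustration only).
      metric_scores      = np.array([0.91, 0.72, 0.55, 0.40, 0.22])
      subjective_ratings = np.array([4.6, 3.9, 3.1, 2.4, 1.5])

      pearson_r, _  = pearsonr(metric_scores, subjective_ratings)
      spearman_r, _ = spearmanr(metric_scores, subjective_ratings)
      print(f"Pearson: {pearson_r:.3f}, Spearman rank: {spearman_r:.3f}")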
  • Guoxin Jin, Thrasyvoulos N. Pappas, David L. Neuhoff
    ABSTRACT: Matched-Texture Coding is a novel image coder that utilizes the self-similarity of natural images that include textures, in order to achieve structurally lossless compression. The key to a high compression ratio is replacing large image blocks with previously encoded blocks with similar structure. Adjusting the lighting of the replaced block is critical for eliminating illumination artifacts and increasing the number of matches. We propose a new adaptive lighting correction method that is based on the Poisson equation with incomplete boundary conditions. In order to fully exploit the benefits of the adaptive Poisson lighting correction, we also propose modifications of the side-matching (SM) algorithm and structural texture similarity metric. We show that the resulting matched-texture algorithm achieves better coding performance.
    ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 05/2014
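    The paper's Poisson formulation with incomplete boundary conditions is specific to that work; purely as a rough sketch of the general idea, the code below computes a smooth (harmonic) additive correction whose boundary values equal the mismatch between a candidate block and the pixels surrounding the target location, assuming a complete boundary is available.

      import numpy as np

      def lighting_correction(candidate, target, iters=500):
          """Illustrative sketch: solve Laplace's equation by Jacobi iteration
          for a smooth correction field whose border equals target - candidate
          on the block boundary, then add it to the candidate block."""
          corr = np.zeros_like(candidate, dtype=float)
          diff = target.astype(float) - candidate          # only border values used
          corr[0, :], corr[-1, :] = diff[0, :], diff[-1, :]
          corr[:, 0], corr[:, -1] = diff[:, 0], diff[:, -1]
          for _ in range(iters):                           # Jacobi relaxation
              corr[1:-1, 1:-1] = 0.25 * (corr[:-2, 1:-1] + corr[2:, 1:-1] +
                                         corr[1:-1, :-2] + corr[1:-1, 2:])
          return candidate + corr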
  • Matthew A. Prelee, David L. Neuhoff
    ABSTRACT: This work explores performance vs. communication energy tradeoffs in wireless sensor networks that use the recently proposed cutset deployment strategy in which sensors are placed densely along a grid of intersecting lines. For a given number of sensors, intersensor spacing is less for a cutset network than for a conventional lattice deployment, so that cutset networks require less communication energy, albeit with some potential loss in network performance. Previous work analyzed the energy-performance tradeoffs for square-grid cutset networks in the context of specific decentralized algorithms for source localization based on received signal strength (RSS). The current work also considers the RSS based source localization problem. However, it takes a more fundamental approach to analyzing the tradeoff by considering a centralized task, minimum energy communication paths, Maximum Likelihood estimation algorithms and Cramér-Rao bounds. Moreover, it analyzes triangular and honeycomb cutset deployments, in addition to square-grid ones. The results indicate that cutset networks offer sizable decreases in energy with only modest losses of performance.
    ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 05/2014
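    A toy sketch of RSS-based maximum-likelihood source localization under a standard log-distance path-loss model with i.i.d. Gaussian noise, estimated by grid search; the sensor layout and all parameters below are illustrative stand-ins, not the paper's configurations.

      import numpy as np

      rng = np.random.default_rng(0)

      # Sensors on a small square grid (a stand-in for a cutset deployment).
      sx, sy = np.meshgrid(np.arange(0, 101, 25), np.arange(0, 101, 25))
      sensors = np.column_stack([sx.ravel(), sy.ravel()]).astype(float)

      # Log-distance path-loss model: rss = p0 - 10*n*log10(distance) + noise
      p0, path_exp, sigma = 30.0, 2.0, 1.0
      source = np.array([37.0, 62.0])
      dist = np.linalg.norm(sensors - source, axis=1)
      rss = p0 - 10 * path_exp * np.log10(dist) + rng.normal(0, sigma, len(dist))

      # Under i.i.d. Gaussian noise, the ML estimate minimizes squared error.
      candidates = np.linspace(0, 100, 201)
      best, best_err = None, np.inf
      for x in candidates:
          for y in candidates:
              d = np.maximum(np.linalg.norm(sensors - np.array([x, y]), axis=1), 1e-3)
              err = np.sum((rss - (p0 - 10 * path_exp * np.log10(d))) ** 2)
              if err < best_err:
                  best, best_err = (x, y), err
      print("true:", source, "estimate:", best)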
  • M.G. Reyes, D.L. Neuhoff, T.N. Pappas
    ABSTRACT: An effective, low complexity method for lossy compression of scenic bilevel images, called lossy cutset coding, is proposed based on a Markov random field model. It operates by losslessly encoding pixels in a square grid of lines, which is a cutset with respect to a Markov random field model, and preserves key structural information, such as borders between black and white regions. Relying on the Markov random field model, the decoder takes a MAP approach to reconstructing the interior of each grid block from the pixels on its boundary, thereby creating a piecewise smooth image that is consistent with the encoded grid pixels. The MAP rule, which reduces to finding the block interiors with fewest black-white transitions, is directly implementable for the most commonly occurring block boundaries, thereby avoiding the need for brute force or iterative solutions. Experimental results demonstrate that the new method is computationally simple, outperforms the current lossy compression technique most suited to scenic bilevel images, and provides substantially lower rates than lossless techniques, e.g., JBIG, with little loss in perceived image quality.
    IEEE Transactions on Image Processing 04/2014; 23(4):1652-1665. DOI:10.1109/TIP.2014.2302678 · 3.11 Impact Factor
  • D.L. Neuhoff, S. Sandeep Pradhan
    ABSTRACT: This paper establishes rates attainable by several lossy schemes for coding a continuous parameter source to a specified mean-squared-error distortion based on sampling at asymptotically large rates. First, a densely sampled, spatiotemporal, stationary Gaussian source is distributively encoded. The Berger-Tung bound to the distributed rate-distortion function and three convergence theorems are used to obtain an upper bound, expressed in terms of the source spectral density, to the smallest attainable rate at asymptotically large sampling rates. The bound is tighter than that recently obtained by Kashyap. Both indicate that with ideal distributed lossy coding, dense sensor networks can efficiently sense and convey a field, in contrast to the negative result obtained by Marco for encoders based on scalar quantization and Slepian-Wolf distributed lossless coding. The second scheme is transform coding with scalar coefficient quantization. A new generalized transform coding analysis, as well as the aforementioned convergence theorems, is used to find the smallest attainable rate at asymptotically large sampling rates in terms of the source spectral density and the operational rate-distortion function of the family of quantizers, which in contrast to previous analyses need not be convex. The result shows that when a transform is used, scalar quantization need not cause the poor performance found by Marco. As a corollary, the final result pursues an approach, originally proposed by Berger, to show that the inverse water-pouring formula for the rate-distortion function can be attained at high sampling rates by transform coding with ideal vector quantization to encode the coefficients. Also established in the paper are relations between operational rate-distortion and distortion-rate functions for a continuous parameter source and those for the discrete parameter source that results from sampling.
    IEEE Transactions on Information Theory 09/2013; 59(9):5641-5664. DOI:10.1109/TIT.2013.2266653 · 2.65 Impact Factor
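    For context, the inverse water-pouring (reverse water-filling) characterization referred to above, for a stationary Gaussian source with power spectral density S(\Omega) under mean-squared error, is the standard parametric form (rate in nats per unit time):

      D(\theta) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \min\{\theta,\, S(\Omega)\}\, d\Omega,
      \qquad
      R(\theta) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \max\!\left\{0,\ \tfrac{1}{2}\ln\frac{S(\Omega)}{\theta}\right\} d\Omega,
      \qquad \theta > 0.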
  •
    ABSTRACT: Texture is an important visual attribute both for human perception and image analysis systems. We review recently proposed texture similarity metrics and applications that critically depend on such metrics, with emphasis on image and video compression and content-based retrieval. Our focus is on natural textures and structural texture similarity metrics (STSIMs). We examine the relation of STSIMs to existing models of texture perception, texture analysis/synthesis, and texture segmentation. We emphasize the importance of signal characteristics and models of human perception, both for algorithm development and testing/validation.
    Proceedings of the IEEE 09/2013; 101(9):2044-2057. DOI:10.1109/JPROC.2013.2262912 · 5.47 Impact Factor
  • J. Ehmann, Thrasyvoulos N. Pappas, David L. Neuhoff
    ABSTRACT: We develop new metrics for texture similarity that account for human visual perception and the stochastic nature of textures. The metrics rely entirely on local image statistics and allow substantial point-by-point deviations between textures that according to human judgment are essentially identical. The proposed metrics extend the ideas of structural similarity (SSIM) and are guided by research in texture analysis-synthesis. They are implemented using a steerable filter decomposition and incorporate a concise set of subband statistics, computed globally or in sliding windows. We conduct systematic tests to investigate metric performance in the context of known-item search, the retrieval of textures that are identical to the query texture. This eliminates the need for cumbersome subjective tests, thus enabling comparisons with human performance on a large database. Our experimental results indicate that the proposed metrics outperform PSNR, SSIM and its variations, as well as state-of-the-art texture classification metrics, using standard statistical measures.
    IEEE Transactions on Image Processing 03/2013; 22(7). DOI:10.1109/TIP.2013.2251645 · 3.11 Impact Factor
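    The metrics above use a steerable filter decomposition; purely as a crude stand-in that illustrates the idea of comparing concise subband statistics, the sketch below builds difference-of-Gaussians subbands and compares their means and variances.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def subband_stats(img, sigmas=(1, 2, 4, 8)):
          """Crude multiscale decomposition (difference-of-Gaussians bands,
          not the steerable pyramid used by the actual metrics) followed by
          per-band mean and variance statistics."""
          img = np.asarray(img, dtype=float)
          stats, prev = [], img
          for s in sigmas:
              low = gaussian_filter(img, s)
              band = prev - low                    # band-pass residual
              stats.extend([band.mean(), band.var()])
              prev = low
          stats.extend([prev.mean(), prev.var()])  # final low-pass band
          return np.array(stats)

      def stat_distance(img_a, img_b):
          """Simple L2 distance between subband-statistic vectors."""
          return np.linalg.norm(subband_stats(img_a) - subband_stats(img_b))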
  • M.A. Prelee, D.L. Neuhoff
    ABSTRACT: For wireless sensor networks, many decentralized algorithms have been developed to address the problem of locating a source that emits acoustic or electromagnetic waves based on received signal strength. Among the motivations for decentralized algorithms is that they reduce the number of transmissions between sensors, thereby increasing sensor battery life. Whereas most such algorithms are designed for arbitrary sensor placements, such as random placements, this paper focuses on applications that permit a choice of sensor placement. In particular, to make communications costs small, it is proposed to place sensors uniformly along evenly spaced rows and columns, i.e., a Manhattan grid. For such a placement, the Midpoint Algorithm is proposed, which is a simple noniterative decentralized algorithm. The results of this paper show that Manhattan grid networks offer improved accuracy vs. energy tradeoff over randomly distributed networks. Results also show the proposed Midpoint Algorithm offers further energy savings over the recent POCS algorithm.
    Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on; 01/2013
  • Yuanhao Zhai, D.L. Neuhoff, T.N. Pappas
    ABSTRACT: We develop a new type of statistical texture image feature, called a Local Radius Index (LRI), which can be used to quantify texture similarity based on human perception. Image similarity metrics based on LRI can be applied to image compression, identical texture retrieval, and other related applications. LRI extracts texture features by using simple pixel value comparisons in the spatial domain. Better performance can be achieved when LRI is combined with complementary texture features, e.g., Local Binary Patterns (LBP) and the proposed Subband Contrast Distribution. Compared with Structural Texture Similarity Metrics (STSIM), the LRI-based metrics achieve better retrieval performance with much less computation. Applied to the recently developed structurally lossless image coder, Matched Texture Coding, LRI enables similar performance while significantly accelerating the encoding.
    Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on; 01/2013
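    The definition of LRI is given in the paper; as an example of the kind of simple pixel-comparison feature it is combined with, here is a basic 8-neighbor Local Binary Pattern (LBP) histogram.

      import numpy as np

      def lbp_histogram(img):
          """Basic (non-rotation-invariant) 8-neighbor LBP: each interior
          pixel gets an 8-bit code from comparisons with its neighbors;
          the normalized 256-bin histogram serves as a texture feature."""
          img = np.asarray(img, dtype=float)
          center = img[1:-1, 1:-1]
          shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                    (1, 1), (1, 0), (1, -1), (0, -1)]
          code = np.zeros(center.shape, dtype=int)
          for bit, (dy, dx) in enumerate(shifts):
              neighbor = img[1 + dy:img.shape[0] - 1 + dy,
                             1 + dx:img.shape[1] - 1 + dx]
              code += (neighbor >= center).astype(int) << bit
          hist = np.bincount(code.ravel(), minlength=256).astype(float)
          return hist / hist.sum()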
  • Sangsin Na, D.L. Neuhoff
    ABSTRACT: Asymptotic formulas are derived for the mean-squared error (MSE) distortion of N-level uniform scalar quantizers designed to be MSE optimal for one density function, but applied to another, as N → ∞. These formulas, which are based on the Euler-Maclaurin formula, are then applied with generalized gamma, Bucklew-Gallagher, and Hui-Neuhoff density functions as the designed-for and applied-to densities. It is found that the mismatch between the designed-for and applied-to densities can disturb the delicate balance between granular and overload distortions in optimal quantization, with the result that, generally speaking, the granular or overload distortion dominates, respectively, depending on whether the applied-to density function has a lighter or heavier tail than the designed-for density. Specifically, in the case of generalized gamma densities, a variance mismatch makes overload distortion dominate for an applied-to source with a slightly larger variance, whereas a shape mismatch can tolerate a wider variance difference while retaining the dominance of the granular distortion. In addition, for the studied density functions, the Euler-Maclaurin approach is used to derive asymptotic formulas for the optimal quantizer step size in a simpler, more direct, way than previous approaches.
    IEEE Transactions on Information Theory 05/2012; 58(5):3169-3181. DOI:10.1109/TIT.2011.2179843 · 2.65 Impact Factor
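    A Monte Carlo sketch (not the paper's analytical Euler-Maclaurin approach) of designed-for vs. applied-to mismatch: numerically pick the MSE-optimal step size of an N-level uniform quantizer for one density, then evaluate it on a source with a different variance.

      import numpy as np

      rng = np.random.default_rng(1)
      N = 64                                   # number of quantizer levels

      def uniform_quantize(x, step):
          """Midrise uniform quantizer with N levels centered at the origin."""
          idx = np.clip(np.floor(x / step) + N // 2, 0, N - 1)
          return (idx - N // 2 + 0.5) * step

      def mse(x, step):
          return np.mean((x - uniform_quantize(x, step)) ** 2)

      # Design: step size that is MSE-optimal for a unit-variance Gaussian
      # source, found by a simple 1-D search over candidate step sizes.
      design = rng.standard_normal(200_000)
      steps = np.linspace(0.01, 0.5, 200)
      best_step = steps[np.argmin([mse(design, s) for s in steps])]

      # Apply: the same quantizer on a source with slightly larger variance;
      # inputs beyond the outermost cells incur overload distortion.
      applied = 1.2 * rng.standard_normal(200_000)
      overload = np.abs(applied) > (N / 2) * best_step
      print(f"step {best_step:.4f}, applied-to MSE {mse(applied, best_step):.2e}, "
            f"overload fraction {overload.mean():.4f}")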
  •
    ABSTRACT: We propose a new texture-based compression approach that relies on new texture similarity metrics and is able to exploit texture redundancies for significant compression gains without loss of visual quality, even though there may be visible differences with the original image (structurally lossless). Existing techniques rely on point-by-point metrics that cannot account for the stochastic and repetitive nature of textures. The main idea is to encode selected blocks of textures - as well as smooth blocks and blocks containing boundaries between smooth and/or textured regions - by pointing to previously occurring (already encoded) blocks of similar textures; blocks that are not encoded in this way are encoded by a baseline method, such as JPEG. Experimental results with natural images demonstrate the advantages of the proposed approach.
    Image Processing (ICIP), 2012 19th IEEE International Conference on; 01/2012
  • Shengxin Zha, T.N. Pappas, D.L. Neuhoff
    ABSTRACT: We propose a hierarchical lossy bilevel image compression method that relies on adaptive cutset sampling (along lines of a rectangular grid with variable block size) and Markov Random Field based reconstruction. It is an efficient encoding scheme that preserves image structure by using a coarser grid in smooth areas of the image and a finer grid in areas with more detail. Experimental results demonstrate that the proposed method performs as well as or better than the fixed-grid approach, and outperforms other lossy bilevel compression methods in its rate-distortion performance.
    Image Processing (ICIP), 2012 19th IEEE International Conference on; 01/2012
  • D.L. Neuhoff, S.S. Pradhan
    ABSTRACT: It is well known that for discrete-time, stationary sources, most lossy source coding techniques have operational rate-distortion functions that approach the Shannon rate-distortion function with respect to squared error to within an additive constant as distortion approaches zero. With the goal of investigating similar phenomena for continuous-time sources, this paper investigates the low-distortion performance of distributed coding of continuous-time, stationary, Gaussian sources based on high-rate sampling. It is found that for bandlimited sources and nonbandlimited sources whose spectra have sufficiently light, e.g., exponentially decreasing, tails, distributed source coding is asymptotically as good as centralized coding in the small distortion regime. On the other hand, for spectra with tails that decay as a power (greater than one) of frequency, it is found that for small distortions the distributed rate-distortion function is a constant times larger than the Shannon rate-distortion function, where the constant decreases as the power increases. For example, it is approximately 1.2 when the power is 2. The conclusion is that for a stationary Gaussian source and asymptotically small distortion, the ratio of the distributed to centralized rate-distortion function is a function of the weight of the tail of the source spectrum. In the process of finding the ratio, the low-distortion form of the centralized rate-distortion function is found for sources whose spectra have exponential and power law tails.
    Information Theory Proceedings (ISIT), 2012 IEEE International Symposium on; 01/2012
  • M.A. Prelee, D.L. Neuhoff
    ABSTRACT: This paper presents a sampling theorem for Manhattan-grid sampling, which is a sampling scheme in which data is taken along evenly spaced rows and columns. Given the spacing between the rows, the columns, and the samples along the rows and columns, the theorem shows that an image can be perfectly reconstructed from Manhattan-grid sampling if its spectrum is bandlimited to a cross-shaped region whose arm lengths and widths are determined by the aforementioned sample spacings. The nature of such cross-bandlimited images is demonstrated by filtering an image with a cross-bandlimiting filter for several choices of sampling parameters.
    Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on; 01/2012
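    A sketch of the cross-bandlimiting idea on a discrete image: build a cross-shaped pass region in the 2-D DFT domain (the arm half-widths below are illustrative parameters, not the paper's exact Nyquist regions) and zero out all other frequencies.

      import numpy as np

      def cross_bandlimit(img, wide=0.5, narrow=0.1):
          """Keep only frequencies inside a cross-shaped region: the union of
          a horizontally wide / vertically narrow rectangle and its transpose.
          wide and narrow are half-widths as fractions of the Nyquist rate."""
          fy = np.fft.fftfreq(img.shape[0])[:, None] / 0.5   # normalized to [-1, 1)
          fx = np.fft.fftfreq(img.shape[1])[None, :] / 0.5
          horiz_arm = (np.abs(fx) <= wide) & (np.abs(fy) <= narrow)
          vert_arm = (np.abs(fx) <= narrow) & (np.abs(fy) <= wide)
          spectrum = np.fft.fft2(img) * (horiz_arm | vert_arm)
          return np.real(np.fft.ifft2(spectrum))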
  • M.A. Prelee, D.L. Neuhoff, T.N. Pappas
    ABSTRACT: This paper builds upon previous work for image reconstruction problems in which samples are taken on evenly spaced rows and columns, i.e., a Manhattan grid. A new reconstruction method is proposed that uses three steps to interpolate the interior of each block under the model that an image can be decomposed into piecewise planar regions plus noise. First, the K-planes algorithm is developed in order to fit several planes to the observed pixel values on the border. Second, one of the K planes is assigned to each pixel of the block interior, by a process of partitioning the block with polygons, thereby creating a piecewise planar approximation. Third, the interior pixels are interpolated by modeling them as a Gauss-Markov random field whose mean is the piecewise planar approximation just obtained. The new method is shown to improve significantly upon previous methods, especially in the preservation of “soft” image edges.
    Image Processing (ICIP), 2012 19th IEEE International Conference on; 01/2012
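    A sketch of the basic building block behind such piecewise-planar reconstructions: a least-squares fit of a single plane value = a*row + b*col + c to observed boundary pixels (the K-planes step generalizes this to several planes; the code below is a single-plane illustration).

      import numpy as np

      def fit_plane(rows, cols, values):
          """Least-squares fit of value = a*row + b*col + c to boundary samples;
          returns the coefficients (a, b, c)."""
          A = np.column_stack([rows, cols, np.ones(len(rows))])
          coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
          return coeffs

      def evaluate_plane(coeffs, shape):
          """Evaluate the fitted plane over a block of the given shape."""
          r, c = np.mgrid[0:shape[0], 0:shape[1]]
          return coeffs[0] * r + coeffs[1] * c + coeffs[2]

      # Example: noiseless planar boundary samples are recovered exactly.
      rows = np.array([0, 0, 7, 7, 3]); cols = np.array([0, 7, 0, 7, 3])
      vals = 2.0 * rows - 1.0 * cols + 5.0
      print(fit_plane(rows, cols, vals))       # approximately [ 2. -1.  5.]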
  •
    ABSTRACT: We focus on the evaluation of texture similarity metrics for structurally lossless or nearly structurally lossless image compression. By structurally lossless we mean that the original and compressed images, while they may have visible differences in a side-by-side comparison, have similar quality, so that one cannot tell which is the original. This is particularly important for textured regions, which can have significant point-by-point differences, even though to the human eye they appear to be the same. As in traditional metrics, texture similarity metrics are expected to provide a monotonic relationship between measured and perceived distortion. To evaluate metric performance according to this criterion, we introduce a systematic approach for generating synthetic texture distortions that model variations that occur in natural textures. Based on such distortions, we conducted subjective experiments with a variety of original texture images and different types and degrees of distortions. Our results indicate that recently proposed structural texture similarity metrics provide the best performance.
    Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on; 01/2012
  •
    ABSTRACT: This paper presents a new approach to sampling images in which samples are taken on a cutset with respect to a graphical image model. The cutsets considered are Manhattan grids, for example every Nth row and column of the image. Cutset sampling is motivated mainly by applications with physical constraints, e.g. a ship taking water samples along its path, but also by the fact that dense sampling along lines might permit better reconstruction of edges than conventional sampling at the same density. The main challenge in cutset sampling lies in the reconstruction of the unsampled blocks. As a first investigation, this paper uses segmentation followed by linear estimation. First, the ACA method [1] is modified to segment the cutset, followed by a binary Markov random field (MRF) inspired segmentation of the unsampled blocks. Finally, block interiors are estimated from the pixels on their boundaries, as well as their segmentation, with methods that include a generalization of bilinear interpolation and linear MMSE methods based on Gaussian MRF models or separable autocorrelation models. The resulting reconstructions are comparable to those obtained with conventional sampling at higher sampling densities, but not generally as good as conventional sampling at lower rates.
    18th IEEE International Conference on Image Processing, ICIP 2011, Brussels, Belgium, September 11-14, 2011; 01/2011
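    A sketch of one simple way to fill a block interior from its boundary pixels, in the spirit of the generalization of bilinear interpolation mentioned above (an illustrative variant, not the paper's exact estimator): each interior pixel linearly interpolates the boundary pixels directly above/below and left/right of it, and the two directional estimates are averaged.

      import numpy as np

      def interpolate_block(block):
          """Fill the interior of a block whose border pixels are known,
          by averaging column-wise and row-wise linear interpolation
          between the facing border pixels (illustrative scheme only)."""
          blk = np.asarray(block, dtype=float)
          h, w = blk.shape
          out = blk.copy()
          for i in range(1, h - 1):
              for j in range(1, w - 1):
                  top, bottom = blk[0, j], blk[h - 1, j]
                  left, right = blk[i, 0], blk[i, w - 1]
                  col_val = top + (bottom - top) * i / (h - 1)
                  row_val = left + (right - left) * j / (w - 1)
                  out[i, j] = 0.5 * (col_val + row_val)
          return out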

Publication Stats

2k Citations
163.84 Total Impact Points

Institutions

  • 1996–2014
    • University of Michigan
      • Division of Computer Science and Engineering
      • Department of Electrical Engineering and Computer Science (EECS)
      Ann Arbor, Michigan, United States
  • 2005–2010
    • California Institute of Technology
      • Department of Electrical Engineering
      Pasadena, CA, United States
  • 2008–2009
    • Northwestern University
      • Department of Electrical Engineering and Computer Science
      Evanston, Illinois, United States
    • Varian Medical Systems
      Palo Alto, California, United States
  • 1991–2007
    • Concordia University–Ann Arbor
      Ann Arbor, Michigan, United States
  • 2006
    • Queen's University
      • Department of Mathematics & Statistics
      Kingston, Ontario, Canada
  • 2003
    • Massachusetts Institute of Technology
      Cambridge, Massachusetts, United States
  • 1995–2001
    • Ajou University
      • Department of Electrical and Computer Engineering
      Seoul, South Korea
  • 1998
    • Stanford University
      • Department of Electrical Engineering
      Stanford, CA, United States
  • 1997
    • Northern Illinois University
      • Department of Computer Science
      DeKalb, IL, United States
  • 1991–1995
    • AT&T Labs
      Austin, Texas, United States
  • 1988–1994
    • Rutgers, The State University of New Jersey
      • Department of Electrical and Computer Engineering
      New Brunswick, NJ, United States
  • 1993
    • Rochester Institute of Technology
      Rochester, New York, United States
    • MITRE
      McLean, Virginia, United States
    • Georgia Institute of Technology
      Atlanta, Georgia, United States