D.L. Neuhoff

University of Michigan, Ann Arbor, Michigan, United States

Publications (153) · 160.61 Total Impact Points

  • M.G. Reyes, D.L. Neuhoff, T.N. Pappas
    ABSTRACT: An effective, low complexity method for lossy compression of scenic bilevel images, called lossy cutset coding, is proposed based on a Markov random field model. It operates by losslessly encoding pixels in a square grid of lines, which is a cutset with respect to a Markov random field model, and preserves key structural information, such as borders between black and white regions. Relying on the Markov random field model, the decoder takes a MAP approach to reconstructing the interior of each grid block from the pixels on its boundary, thereby creating a piecewise smooth image that is consistent with the encoded grid pixels. The MAP rule, which reduces to finding the block interiors with fewest black-white transitions, is directly implementable for the most commonly occurring block boundaries, thereby avoiding the need for brute force or iterative solutions. Experimental results demonstrate that the new method is computationally simple, outperforms the current lossy compression technique most suited to scenic bilevel images, and provides substantially lower rates than lossless techniques, e.g., JBIG, with little loss in perceived image quality.
    IEEE Transactions on Image Processing 01/2014; 23(4):1652-1665. · 3.20 Impact Factor
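A minimal sketch of the encoder's first step as described in the entry above: extracting the pixels that lie on a square grid of lines, which would then be losslessly coded. The grid spacing B and the toy image are illustrative assumptions, and the MAP reconstruction of block interiors is not shown.

```python
# Illustrative sketch of the cutset extraction step; the grid spacing B
# and the random bilevel image are placeholders, not the paper's coder.
import numpy as np

def extract_cutset(img, B):
    """Return a boolean mask marking pixels on every B-th row/column."""
    mask = np.zeros(img.shape, dtype=bool)
    mask[::B, :] = True   # horizontal grid lines
    mask[:, ::B] = True   # vertical grid lines
    return mask

# Toy usage: a random bilevel image with 16-pixel grid blocks.
img = (np.random.rand(64, 64) > 0.5)
mask = extract_cutset(img, 16)
cutset_pixels = img[mask]          # these would be losslessly encoded
print(f"{cutset_pixels.size} of {img.size} pixels lie on the cutset")
```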
  • J. Ehmann, T. Pappas, D. Neuhoff
    ABSTRACT: We develop new metrics for texture similarity that account for human visual perception and the stochastic nature of textures. The metrics rely entirely on local image statistics and allow substantial point-by-point deviations between textures that according to human judgment are essentially identical. The proposed metrics extend the ideas of structural similarity (SSIM) and are guided by research in texture analysis-synthesis. They are implemented using a steerable filter decomposition and incorporate a concise set of subband statistics, computed globally or in sliding windows. We conduct systematic tests to investigate metric performance in the context of known-item search, the retrieval of textures that are identical to the query texture. This eliminates the need for cumbersome subjective tests, thus enabling comparisons with human performance on a large database. Our experimental results indicate that the proposed metrics outperform PSNR, SSIM and its variations, as well as state-of-the-art texture classification metrics, using standard statistical measures.
    IEEE Transactions on Image Processing 03/2013; · 3.20 Impact Factor
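To make the statistics-based idea in the entry above concrete, here is a minimal sketch in the same spirit. The two gradient "subbands" stand in for a real steerable filter decomposition, and the feature set and distance are simplified assumptions, not the proposed metric.

```python
# Hedged sketch of a statistics-based texture comparison: textures with
# large pointwise differences can still have near-identical statistics.
import numpy as np

def subband_stats(img):
    h = np.diff(img, axis=1)   # crude horizontal "subband"
    v = np.diff(img, axis=0)   # crude vertical "subband"
    feats = []
    for band in (img, h, v):
        feats += [band.mean(), band.var()]
    return np.array(feats)

def texture_distance(a, b, eps=1e-8):
    fa, fb = subband_stats(a), subband_stats(b)
    return np.abs(fa - fb) / (np.abs(fa) + np.abs(fb) + eps)

x = np.random.rand(64, 64)
y = np.roll(x, 5, axis=1)             # shifted copy: big pointwise error,
print(texture_distance(x, y).sum())   # but near-zero statistical distance
```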
  • ABSTRACT: Texture is an important visual attribute both for human perception and image analysis systems. We review recently proposed texture similarity metrics and applications that critically depend on such metrics, with emphasis on image and video compression and content-based retrieval. Our focus is on natural textures and structural texture similarity metrics (STSIMs). We examine the relation of STSIMs to existing models of texture perception, texture analysis/synthesis, and texture segmentation. We emphasize the importance of signal characteristics and models of human perception, both for algorithm development and testing/validation.
    Proceedings of the IEEE 01/2013; 101(9):2044-2057. · 6.91 Impact Factor
  • Yuanhao Zhai, D.L. Neuhoff, T.N. Pappas
    ABSTRACT: We develop a new type of statistical texture image feature, called a Local Radius Index (LRI), which can be used to quantify texture similarity based on human perception. Image similarity metrics based on LRI can be applied to image compression, identical texture retrieval, and other related applications. LRI extracts texture features using simple pixel value comparisons in the spatial domain. Better performance can be achieved when LRI is combined with complementary texture features, e.g., Local Binary Patterns (LBP) and the proposed Subband Contrast Distribution. Compared with Structural Texture Similarity Metrics (STSIM), the LRI-based metrics achieve better retrieval performance with much less computation. Applied to the recently developed structurally lossless image coder, Matched Texture Coding, LRI enables similar performance while significantly accelerating the encoding.
    Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on; 01/2013
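One plausible reading of a local-radius-style feature, sketched below: for each pixel, the run length along a direction until the intensity changes by more than a threshold T. This is an illustrative interpretation, not the paper's exact definition of LRI; T and rmax are assumed values.

```python
# Hypothetical "local radius" feature: directional run length per pixel.
import numpy as np

def radius_along_rows(img, T=0.1, rmax=8):
    H, W = img.shape
    rad = np.zeros((H, W), dtype=np.int32)
    for r in range(1, rmax + 1):
        shifted = np.roll(img, -r, axis=1)
        same = np.abs(shifted - img) <= T
        same[:, W - r:] = False          # no wraparound at the right edge
        rad[same & (rad == r - 1)] = r   # extend only unbroken runs
    return rad

img = np.random.rand(32, 32)
hist = np.bincount(radius_along_rows(img).ravel(), minlength=9)
print(hist / hist.sum())   # normalized radius histogram as a feature
```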
  • M.A. Prelee, D.L. Neuhoff
    ABSTRACT: For wireless sensor networks, many decentralized algorithms have been developed to address the problem of locating a source that emits acoustic or electromagnetic waves based on received signal strength. Among the motivations for decentralized algorithms is that they reduce the number of transmissions between sensors, thereby increasing sensor battery life. Whereas most such algorithms are designed for arbitrary sensor placements, such as random placements, this paper focuses on applications that permit a choice of sensor placement. In particular, to make communication costs small, it is proposed to place sensors uniformly along evenly spaced rows and columns, i.e., a Manhattan grid. For such a placement, the Midpoint Algorithm is proposed, which is a simple noniterative decentralized algorithm. The results of this paper show that Manhattan grid networks offer an improved accuracy vs. energy tradeoff over randomly distributed networks. Results also show that the proposed Midpoint Algorithm offers further energy savings over the recent POCS algorithm.
    Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on; 01/2013
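A loudly hypothetical sketch of a midpoint-style estimate on one row of a Manhattan grid: the source's x-coordinate is taken as the midpoint between the two strongest readings along the row. The path-loss model, noise level, and sensor spacing are all invented for illustration; the actual Midpoint Algorithm in the paper may differ in detail.

```python
# Toy midpoint estimate along a single sensor row (assumed model).
import numpy as np

rng = np.random.default_rng(0)
src = np.array([37.2, 0.0])               # true source on the row's line
xs = np.arange(0, 101, 10, dtype=float)   # sensors every 10 m along a row
d = np.abs(xs - src[0])
rss = 1.0 / (d**2 + 1.0) + 0.001 * rng.standard_normal(xs.size)  # noisy RSS

i, j = np.argsort(rss)[-2:]               # two strongest sensors
x_hat = 0.5 * (xs[i] + xs[j])             # midpoint estimate
print(f"true x = {src[0]:.1f}, estimate = {x_hat:.1f}")
```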
  • D.L. Neuhoff, S.S. Pradhan
    ABSTRACT: This paper establishes rates attainable by several lossy schemes for coding a continuous parameter source to a specified mean-squared-error distortion based on sampling at asymptotically large rates. First, a densely sampled, spatiotemporal, stationary Gaussian source is distributively encoded. The Berger-Tung bound to the distributed rate-distortion function and three convergence theorems are used to obtain an upper bound, expressed in terms of the source spectral density, to the smallest attainable rate at asymptotically large sampling rates. The bound is tighter than that recently obtained by Kashyap et al. Both indicate that with ideal distributed lossy coding, dense sensor networks can efficiently sense and convey a field, in contrast to the negative result obtained by Marco et al. for encoders based on scalar quantization and Slepian-Wolf distributed lossless coding. The second scheme is transform coding with scalar coefficient quantization. A new generalized transform coding analysis, as well as the aforementioned convergence theorems, is used to find the smallest attainable rate at asymptotically large sampling rates in terms of the source spectral density and the operational rate-distortion function of the family of quantizers, which in contrast to previous analyses need not be convex. The result shows that when a transform is used, scalar quantization need not cause the poor performance found by Marco et al. Finally, the third result pursues an approach, originally proposed by Berger, to show that the inverse water-pouring formula for the rate-distortion function can be attained at high sampling rates by transform coding with ideal vector quantization to encode the coefficients. Also established in the paper are relations between operational rate-distortion and distortion-rate functions for a continuous parameter source and those for the discrete parameter source that results from sampling.
    IEEE Transactions on Information Theory 01/2013; 59(9):5641-5664. · 2.62 Impact Factor
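For reference, the inverse water-pouring (reverse water-filling) formula mentioned at the end of the abstract has the following standard form for a stationary Gaussian source with power spectral density S(ω), with θ the water level:

```latex
% Classical reverse water-filling characterization of R(D) for a
% stationary Gaussian source with power spectral density S(\omega).
D_\theta = \frac{1}{2\pi} \int_{-\infty}^{\infty} \min\{\theta,\, S(\omega)\}\, d\omega,
\qquad
R(D_\theta) = \frac{1}{4\pi} \int_{-\infty}^{\infty} \max\Bigl\{0,\, \log \frac{S(\omega)}{\theta}\Bigr\}\, d\omega .
```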
  • Shengxin Zha, T.N. Pappas, D.L. Neuhoff
    ABSTRACT: We propose a hierarchical lossy bilevel image compression method that relies on adaptive cutset sampling (along lines of a rectangular grid with variable block size) and Markov Random Field based reconstruction. It is an efficient encoding scheme that preserves image structure by using a coarser grid in smooth areas of the image and a finer grid in areas with more detail. Experimental results demonstrate that the proposed method performs as well as or better than the fixed-grid approach, and outperforms other lossy bilevel compression methods in its rate-distortion performance.
    Image Processing (ICIP), 2012 19th IEEE International Conference on; 01/2012
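A minimal sketch of the adaptive-grid idea in the entry above: refine a grid block only when it contains detail. The split test (block contains both black and white pixels) and the recursion limits are illustrative choices, not the paper's rate-distortion-driven rule.

```python
# Quadtree-style adaptive blocks: coarse where smooth, fine where detailed.
import numpy as np

def adaptive_blocks(img, x, y, size, min_size, out):
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.min() == block.max():
        out.append((x, y, size))          # smooth: keep coarse block
    else:
        h = size // 2                     # detailed: refine into 4 blocks
        for dx in (0, h):
            for dy in (0, h):
                adaptive_blocks(img, x + dx, y + dy, h, min_size, out)

img = np.zeros((64, 64), dtype=np.uint8)
img[20:40, 20:40] = 1                     # one white square
blocks = []
adaptive_blocks(img, 0, 0, 64, 8, blocks)
print(len(blocks), "blocks; sizes:", sorted({s for _, _, s in blocks}))
```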
  • ABSTRACT: We focus on the evaluation of texture similarity metrics for structurally lossless or nearly structurally lossless image compression. By structurally lossless we mean that, while the original and compressed images may have visible differences in a side-by-side comparison, they are of similar quality, so that one cannot tell which is the original. This is particularly important for textured regions, which can have significant point-by-point differences even though to the human eye they appear to be the same. As in traditional metrics, texture similarity metrics are expected to provide a monotonic relationship between measured and perceived distortion. To evaluate metric performance according to this criterion, we introduce a systematic approach for generating synthetic texture distortions that model variations that occur in natural textures. Based on such distortions, we conducted subjective experiments with a variety of original texture images and different types and degrees of distortion. Our results indicate that recently proposed structural texture similarity metrics provide the best performance.
    Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on; 01/2012
  • M.A. Prelee, D.L. Neuhoff
    ABSTRACT: This paper presents a sampling theorem for Manhattan-grid sampling, which is a sampling scheme in which data is taken along evenly spaced rows and columns. Given the spacing between the rows, the columns, and the samples along the rows and columns, the theorem shows that an image can be perfectly reconstructed from Manhattan-grid sampling if its spectrum is bandlimited to a cross-shaped region whose arm lengths and widths are determined by the aforementioned sample spacings. The nature of such cross-bandlimited images is demonstrated by filtering an image with a cross-bandlimiting filter for several choices of sampling parameters.
    Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on; 01/2012
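A runnable sketch of the cross-bandlimiting filter demonstrated in the entry above: the pass region is a cross-shaped union of a horizontal and a vertical band in the 2-D DFT plane. The arm widths wx and wy are illustrative stand-ins for the values the theorem derives from the sample spacings.

```python
# Cross-bandlimit an image in the frequency domain (assumed mask shape).
import numpy as np

def cross_bandlimit(img, wx, wy):
    F = np.fft.fftshift(np.fft.fft2(img))
    H, W = img.shape
    fy = np.abs(np.arange(H) - H // 2)[:, None]
    fx = np.abs(np.arange(W) - W // 2)[None, :]
    mask = (fx < wx) | (fy < wy)          # cross-shaped pass region
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

img = np.random.rand(128, 128)
out = cross_bandlimit(img, wx=10, wy=10)
print(out.shape, out.dtype)
```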
  • D.L. Neuhoff, S.S. Pradhan
    ABSTRACT: It is well known that for discrete-time, stationary sources, most lossy source coding techniques have operational rate-distortion functions that approach the Shannon rate-distortion function with respect to squared error to within an additive constant as distortion approaches zero. With the goal of investigating similar phenomena for continuous-time sources, this paper investigates the low-distortion performance of distributed coding of continuous-time, stationary, Gaussian sources based on high-rate sampling. It is found that for bandlimited sources, and for nonbandlimited sources whose spectra have sufficiently light, e.g., exponentially decreasing, tails, distributed source coding is asymptotically as good as centralized coding in the small distortion regime. On the other hand, for spectra with tails that decay as a power (greater than one) of frequency, it is found that for small distortions the distributed rate-distortion function is a constant times larger than the Shannon rate-distortion function, where the constant decreases as the power increases. For example, it is approximately 1.2 when the power is 2. The conclusion is that for a stationary Gaussian source and asymptotically small distortion, the ratio of the distributed to centralized rate-distortion functions is a function of the weight of the tail of the source spectrum. In the process of finding the ratio, the low-distortion form of the centralized rate-distortion function is found for sources whose spectra have exponential and power-law tails.
    Information Theory Proceedings (ISIT), 2012 IEEE International Symposium on; 01/2012
  • M.A. Prelee, D.L. Neuhoff, T.N. Pappas
    ABSTRACT: This paper builds upon previous work for image reconstruction problems in which samples are taken on evenly spaced rows and columns, i.e., a Manhattan grid. A new reconstruction method is proposed that uses three steps to interpolate the interior of each block, under the model that an image can be decomposed into piecewise planar regions plus noise. First, the K-planes algorithm is developed in order to fit several planes to the observed pixel values on the border. Second, one of the K planes is assigned to each pixel of the block interior by a process of partitioning the block with polygons, thereby creating a piecewise planar approximation. Third, the interior pixels are interpolated by modeling them as a Gauss-Markov random field whose mean is the piecewise planar approximation just obtained. The new method is shown to improve significantly upon previous methods, especially in the preservation of "soft" image edges.
    Image Processing (ICIP), 2012 19th IEEE International Conference on; 01/2012
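A sketch of a K-planes fit in the spirit of the first step above: given sample points (x, y, value), alternately assign each point to the plane that predicts it best and refit each plane by least squares, in K-means fashion. K, the iteration count, and the toy data are illustrative assumptions.

```python
# Alternating assign/refit plane fitting (K-means-style sketch).
import numpy as np

def k_planes(pts, vals, K=2, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    A = np.column_stack([pts, np.ones(len(pts))])   # rows [x, y, 1]
    planes = rng.standard_normal((K, 3))            # z = a*x + b*y + c
    for _ in range(iters):
        err = (A @ planes.T - vals[:, None]) ** 2   # residual per plane
        labels = err.argmin(axis=1)
        for k in range(K):
            sel = labels == k
            if sel.sum() >= 3:                      # need 3 points per plane
                planes[k], *_ = np.linalg.lstsq(A[sel], vals[sel], rcond=None)
    return planes, labels

pts = np.random.rand(200, 2)
vals = np.where(pts[:, 0] < 0.5, 2 * pts[:, 0], 1.0 + pts[:, 1])
planes, labels = k_planes(pts, vals)
print(planes.round(2))
```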
  • S. Na, D.L. Neuhoff
    ABSTRACT: Asymptotic formulas are derived for the mean-squared error (MSE) distortion of N-level uniform scalar quantizers designed to be MSE optimal for one density function, but applied to another, as N → ∞. These formulas, which are based on the Euler-Maclaurin formula, are then applied with generalized gamma, Bucklew-Gallagher, and Hui-Neuhoff density functions as the designed-for and applied-to densities. It is found that the mismatch between the designed-for and applied-to densities can disturb the delicate balance between granular and overload distortions in optimal quantization, with the result that, generally speaking, the granular or overload distortion dominates, respectively, depending on whether the applied-to density function has a lighter or heavier tail than the designed-for density. Specifically, in the case of generalized gamma densities, a variance mismatch makes overload distortion dominate for an applied-to source with a slightly larger variance, whereas a shape mismatch can tolerate a wider variance difference while retaining the dominance of the granular distortion. In addition, for the studied density functions, the Euler-Maclaurin approach is used to derive asymptotic formulas for the optimal quantizer step size in a simpler, more direct, way than previous approaches.
    IEEE Transactions on Information Theory 01/2012; 58(5):3169-3181. · 2.62 Impact Factor
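A numerical illustration of the mismatch phenomenon described above: an N-level uniform quantizer whose step size is chosen for a unit-variance Gaussian is applied to Gaussians of other variances, and overload distortion grows quickly once the applied-to variance exceeds the designed-for one. The step-size rule below is a rough heuristic for illustration, not the paper's Euler-Maclaurin asymptotic formula.

```python
# Monte Carlo MSE of a midrise uniform quantizer under density mismatch.
import numpy as np

def uniform_quantize(x, N, step):
    i = np.clip(np.floor(x / step), -N // 2, N // 2 - 1)
    return (i + 0.5) * step

rng = np.random.default_rng(1)
N = 32
step = 2 * 4.0 / N            # support roughly [-4, 4] for unit variance
for sigma in (0.8, 1.0, 1.5, 2.0):
    x = sigma * rng.standard_normal(200_000)
    mse = np.mean((x - uniform_quantize(x, N, step)) ** 2)
    print(f"sigma={sigma}: MSE={mse:.4f}")
```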
  • ABSTRACT: We propose a new texture-based compression approach that relies on new texture similarity metrics and is able to exploit texture redundancies for significant compression gains without loss of visual quality, even though there may be visible differences from the original image (structurally lossless). Existing techniques rely on point-by-point metrics that cannot account for the stochastic and repetitive nature of textures. The main idea is to encode selected blocks of texture - as well as smooth blocks and blocks containing boundaries between smooth and/or textured regions - by pointing to previously occurring (already encoded) blocks of similar texture; blocks that are not encoded in this way are encoded by a baseline method, such as JPEG. Experimental results with natural images demonstrate the advantages of the proposed approach.
    Image Processing (ICIP), 2012 19th IEEE International Conference on; 01/2012
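A sketch of the encode-by-pointing idea above: each block is compared against already-encoded blocks and, if a close enough match exists, is coded as a pointer; otherwise it falls back to baseline coding. The mean/variance comparison here is a crude stand-in for a real texture similarity metric such as STSIM, and the block size and tolerance are assumed values.

```python
# Pointer-vs-baseline block coding sketch with a toy similarity test.
import numpy as np

def encode(img, B=16, tol=0.01):
    H, W = img.shape
    seen, out = [], []
    for y in range(0, H, B):
        for x in range(0, W, B):
            blk = img[y:y + B, x:x + B]
            f = np.array([blk.mean(), blk.var()])
            match = next((i for i, g in enumerate(seen)
                          if np.abs(f - g).sum() < tol), None)
            if match is not None:
                out.append(("ptr", match))       # point to earlier block
            else:
                seen.append(f)
                out.append(("baseline", blk))    # e.g., JPEG in the paper
    return out

img = np.tile(np.random.rand(16, 16), (4, 4))    # highly repetitive image
ops = encode(img)
print(sum(op == "ptr" for op, _ in ops), "of", len(ops), "blocks pointed")
```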
  • ABSTRACT: In order to facilitate the development of objective texture similarity metrics and to evaluate their performance, one needs a large texture database accurately labeled with perceived similarities between images. We propose ViSiProG, a new Visual Similarity by Progressive Grouping procedure for conducting subjective experiments that organizes a texture database into clusters of visually similar images. The grouping is based on visual blending and greatly simplifies pairwise labeling. ViSiProG collects subjective data in an efficient and effective manner, so that a relatively large database of textures can be accommodated. Experimental results and comparisons with structural texture similarity metrics demonstrate both the effectiveness of the proposed subjective testing procedure and the performance of the metrics. In compression applications, it is important to provide a monotonic relationship between measured and perceived distortion, while in image retrieval applications it may be sufficient to distinguish between similar and dissimilar images, and the ordering within each group may not be important. The application also determines whether an absolute or relative similarity scale is needed. The focus of this paper is on content-based image retrieval (CBIR), but the proposed techniques will also have a significant impact on other applications, such as image compression.
    01/2011;
  • ABSTRACT: This paper presents a new approach to sampling images in which samples are taken on a cutset with respect to a graphical image model. The cutsets considered are Manhattan grids, for example every Nth row and column of the image. Cutset sampling is motivated mainly by applications with physical constraints, e.g., a ship taking water samples along its path, but also by the fact that dense sampling along lines might permit better reconstruction of edges than conventional sampling at the same density. The main challenge in cutset sampling lies in the reconstruction of the unsampled blocks. As a first investigation, this paper uses segmentation followed by linear estimation. First, the ACA method [1] is modified to segment the cutset, followed by a binary Markov random field (MRF) inspired segmentation of the unsampled blocks. Finally, block interiors are estimated from the pixels on their boundaries, as well as their segmentation, with methods that include a generalization of bilinear interpolation and linear MMSE methods based on Gaussian MRF models or separable autocorrelation models. The resulting reconstructions are comparable to those obtained with conventional sampling at higher sampling densities, but not generally as good as those of conventional sampling at lower rates.
    18th IEEE International Conference on Image Processing, ICIP 2011, Brussels, Belgium, September 11-14, 2011; 01/2011
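One plausible form of the "generalization of bilinear interpolation" mentioned above is a transfinite (Coons-patch) blend that fills a block interior from its four boundary edges, sketched below. The Coons construction is an assumption on our part, offered only as an illustration of boundary-to-interior interpolation.

```python
# Fill a square block from its boundary rows/columns via a Coons patch.
import numpy as np

def coons_fill(top, bottom, left, right):
    n = top.size
    u = np.linspace(0, 1, n)
    U, V = np.meshgrid(u, u)              # U: column coord, V: row coord
    lr = (1 - U) * left[:, None] + U * right[:, None]
    tb = (1 - V) * top[None, :] + V * bottom[None, :]
    corners = ((1 - U) * (1 - V) * left[0] + U * (1 - V) * right[0]
               + (1 - U) * V * left[-1] + U * V * right[-1])
    return lr + tb - corners              # standard Coons blend

u = np.linspace(0, 1, 17)
top, bottom, left, right = u, u + 1, u, u + 1   # boundary of plane x + y
block = coons_fill(top, bottom, left, right)
print(block.shape, block[0, 0], block[-1, -1])  # 0.0 and 2.0: plane x + y
```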
  • D.L. Neuhoff, S.S. Pradhan
    ABSTRACT: With mean-squared error D as a goal, it is well known that one may approach the rate-distortion function R(D) of a nonbandlimited, continuous-time Gaussian source by sampling at a sufficiently high rate, applying the Karhunen-Loeve transform to sufficiently long blocks, and then independently coding the transform coefficients of each type. Motivated by the question of the efficiency of dense sensor networks for sampling, encoding and reconstructing spatial random fields, this paper studies the following three cases. In the first, we consider a centralized encoding setup with a sample-transform-quantize scheme where the quantization is assumed to be optimal. In the second, we consider a distributed setup, where a spatio-temporal source is sampled and distributively encoded to be reconstructed at a receiver. We show that with ideal distributed lossy coding, dense sensor networks can efficiently sense and convey a field, in contrast to the negative result obtained by Marco et al. for encoders based on time- and space-invariant scalar quantization and ideal Slepian-Wolf distributed lossless coding. In the third, we consider a centralized setup, with a sample-and-transform coding scheme in which ideal coding of coefficients is replaced by coding with some specified family of quantizers. It is shown that when the sampling rate is large, the operational rate-distortion function of such a scheme comes within a finite constant of that of the first case.
    Information Theory Proceedings (ISIT), 2011 IEEE International Symposium on; 01/2011
  • D. Marco, D.L. Neuhoff
    ABSTRACT: This paper considers the entropy of highly correlated quantized samples. Two results are shown. The first concerns sampling and identically scalar quantizing a stationary continuous-time random process over a finite interval. It is shown that if the process crosses a quantization threshold with positive probability, then the joint entropy of the quantized samples tends to infinity as the sampling rate goes to infinity. The second result provides an upper bound to the rate at which the joint entropy tends to infinity, in the case of an infinite-level uniform threshold scalar quantizer and a stationary Gaussian random process. Specifically, an asymptotic formula for the conditional entropy of one quantized sample conditioned on the previous quantized sample is derived. At high sampling rates, these results indicate a sharp contrast between the large encoding rate (in bits/sec) required by a lossy source code consisting of a fixed scalar quantizer and an ideal, sampling-rate-adapted lossless code, and the bounded encoding rate required by an ideal lossy source code operating at the same distortion.
    IEEE Transactions on Information Theory 06/2010; · 2.62 Impact Factor
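A Monte Carlo sketch of the quantity studied above: the conditional entropy of one uniformly quantized Gaussian sample given the previous one, as the correlation rho approaches 1 (i.e., as the sampling rate grows). The step size, sample count, and correlation values are illustrative assumptions.

```python
# Estimate H(Q2 | Q1) = H(Q1, Q2) - H(Q1) for quantized correlated Gaussians.
import numpy as np

def cond_entropy_bits(rho, step=0.5, n=500_000, seed=0):
    rng = np.random.default_rng(seed)
    x1 = rng.standard_normal(n)
    x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    q1, q2 = np.floor(x1 / step), np.floor(x2 / step)
    _, joint = np.unique(np.column_stack([q1, q2]), axis=0, return_counts=True)
    _, marg = np.unique(q1, return_counts=True)
    pj, pm = joint / n, marg / n
    return -(pj * np.log2(pj)).sum() + (pm * np.log2(pm)).sum()

for rho in (0.9, 0.99, 0.999):
    print(rho, round(cond_entropy_bits(rho), 3))
```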
  • ABSTRACT: We argue that a key to further advances in the fields of image analysis and compression is a better understanding of texture. We review a number of applications that critically depend on texture analysis, including image and video compression, content-based retrieval, visual to tactile image conversion, and multimodal interfaces. We introduce the idea of "structurally lossless" compression of visual data that allows significant differences between the original and decoded images, which may be perceptible when they are viewed side-by-side, but do not affect the overall quality of the image. We then discuss the development of objective texture similarity metrics, which allow substantial point-by-point deviations between textures that according to human judgment are essentially identical.
    Proc SPIE 01/2010;
  • M.G. Reyes, D.L. Neuhoff
    ABSTRACT: This paper presents Reduced Cutset Coding, a new Arithmetic Coding (AC) based approach to lossless compression of Markov random fields (MRFs). In recent work, the authors presented an efficient AC based approach to encoding acyclic MRFs and described a Local Conditioning (LC) based approach to encoding cyclic MRFs. In the present work, we introduce an algorithm for AC encoding of a cyclic MRF for which the complexity of the LC method, or of the acyclic MRF algorithm combined with the Junction Tree (JT) algorithm, is too large. For encoding an MRF based on a cyclic graph G = (V, E), a cutset U ⊂ V is selected such that the subgraph G_U induced by U, and each of the components of G_{V\U}, are tractable to either LC or JT. Then, the cutset variables X_U are AC encoded with coding distributions based on a reduced MRF defined on G_U, and the remaining components X_{V\U} of X_V are optimally AC encoded conditioned on X_U. The increase in rate over optimal encoding of X_V is the normalized divergence between the marginal distribution of X_U and the reduced MRF on G_U used for the AC encoding. We show this follows a Pythagorean decomposition and, additionally, that the optimal exponential parameter for the reduced MRF on G_U is the one that preserves the moments of the marginal distribution. We also show that the rate of encoding X_U with this moment-matching exponential parameter is equal to the entropy of the reduced MRF with this moment-matching parameter. We illustrate the concepts of our approach by encoding a typical image from an Ising model with a cutset consisting of evenly spaced rows. The performance on this image is similar to that of JBIG.
    Data Compression Conference (DCC), 2010; 01/2010
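A sketch of the coding-distribution idea above: the ideal arithmetic-coding cost of a binary row is its negative log-probability under the coding model. The first-order Markov chain with stay-probability p_stay below is an illustrative stand-in for a reduced Ising-model distribution, not the paper's construction.

```python
# Ideal AC cost (-log2 p) of a binary row under a toy Markov chain model.
import numpy as np

def ideal_code_length_bits(row, p_stay=0.9):
    bits = 1.0                                   # first symbol: uniform
    for a, b in zip(row[:-1], row[1:]):
        p = p_stay if a == b else 1.0 - p_stay
        bits += -np.log2(p)
    return bits

row = np.array([0]*20 + [1]*12 + [0]*32)         # piecewise-constant row
print(f"{ideal_code_length_bits(row):.1f} bits for {row.size} symbols")
```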
  • J. Zujovic, T.N. Pappas, D.L. Neuhoff
    ABSTRACT: The development of objective texture similarity metrics for image analysis applications differs from that of traditional image quality metrics because substantial point-by-point deviations are possible for textures that according to human judgment are essentially identical. Thus, structural similarity metrics (SSIM) attempt to incorporate "structural" information in image comparisons. The recently proposed structural texture similarity metric (STSIM) relies entirely on local image statistics. We extend this idea further by including a broader set of local image statistics, basing the selection on metric performance as compared to subjective evaluations. We utilize both intra- and inter-subband correlations, and also incorporate information about the color composition of the textures into the similarity metrics. The performance of the proposed metrics is compared to PSNR, SSIM, and STSIM on the basis of subjective evaluations using a carefully selected set of 50 texture pairs.
    Image Processing (ICIP), 2009 16th IEEE International Conference on; 12/2009

Publication Stats

1k Citations
160.61 Total Impact Points

Institutions

  • 1990–2013
    • University of Michigan
      • Department of Electrical Engineering and Computer Science (EECS)
      Ann Arbor, Michigan, United States
  • 2008–2010
    • Northwestern University
      • Department of Electrical Engineering and Computer Science
      Evanston, Illinois, United States
    • Varian Medical Systems
      Palo Alto, California, United States
  • 2005–2010
    • California Institute of Technology
      • Department of Electrical Engineering
Pasadena, California, United States
  • 2006–2009
    • Queen's University
      • Department of Mathematics & Statistics
      Kingston, Ontario, Canada
  • 1991–2007
    • Concordia University–Ann Arbor
      Ann Arbor, Michigan, United States
  • 2003
    • Massachusetts Institute of Technology
      Cambridge, Massachusetts, United States
  • 1995–2001
    • Ajou University
      • Department of Electrical and Computer Engineering
Seoul, South Korea
  • 1998
    • Stanford University
      • Department of Electrical Engineering
Stanford, California, United States
  • 1997
    • Northern Illinois University
      • Department of Computer Science
DeKalb, Illinois, United States
  • 1991–1995
    • AT&T Labs
      Austin, Texas, United States
  • 1988–1994
    • Rutgers, The State University of New Jersey
      • Department of Electrical and Computer Engineering
New Brunswick, New Jersey, United States
  • 1993
    • Rochester Institute of Technology
      Rochester, New York, United States
    • MITRE
      McLean, Virginia, United States
    • Georgia Institute of Technology
      Atlanta, Georgia, United States