Article · PDF available

Abstract

Image zooming is a technique for producing a high-resolution image from its low-resolution counterpart; it is also called image interpolation because it is usually implemented by interpolation. Keys' cubic convolution (CC) interpolation has become a standard in the image interpolation field, but CC interpolates the missing pixels indiscriminately along the horizontal or vertical direction and typically incurs blurring, blocking, ringing or other artefacts. In this study, the authors propose a novel edge-directed CC interpolation scheme which can adapt to the varying edge structures of images. The authors also give a method for estimating the strong edge at a missing-pixel location, which guides the interpolation for that pixel. The authors' method can preserve the sharp edges and details of images while notably suppressing the artefacts that usually occur with CC interpolation. The experimental results demonstrate that the authors' method significantly outperforms CC interpolation in terms of both subjective and objective measures.
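For reference, the Keys kernel underlying CC interpolation can be sketched in a few lines. This is a generic 1-D illustration of standard cubic convolution (with the usual parameter a = -0.5), not the authors' edge-directed scheme; the function names are ours.

```python
def keys_kernel(s, a=-0.5):
    """Keys' cubic convolution kernel; a = -0.5 is the standard choice."""
    s = abs(s)
    if s < 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a
    return 0.0

def cc_interp(samples, x):
    """1-D cubic convolution at fractional position x, clamping at the borders.
    2-D CC applies this separately along rows and columns, which is exactly
    the indiscriminate horizontal/vertical processing the abstract refers to."""
    n = len(samples)
    i = int(x)
    out = 0.0
    for k in range(i - 1, i + 3):
        kk = min(max(k, 0), n - 1)  # clamp border indices
        out += samples[kk] * keys_kernel(x - k)
    return out
```

For example, `cc_interp([0, 1, 2, 3], 1.5)` returns 1.5, since cubic convolution reproduces linear ramps exactly.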
... Therefore, the corresponding processing effect on edges and textures may not be very satisfactory in some cases. Later, edge-oriented interpolation methods were proposed to compensate for the above deficiencies [6,7]; these can maintain the edge structure of the image but usually produce noise or distortions around the stripes and textures of the image [8,9]. The interpolation method based on iterative curvature [10] can preserve the relevant image features and natural texture of the artifact-free enlarged image, but it may be time-consuming. ...
Article
Full-text available
Image super-resolution (SR) is a classic problem in image processing. This paper proposes a self-learning interpolation method (SLIM) based on a single image, combining grid feature mapping with a binary decision tree; it is not only as transparent as interpolation-based methods but also achieves performance comparable to learning-based methods. First, it downsamples the given image I^LR to obtain its low-low-resolution image I^LLR, which is used to obtain sample data for the self-learning interpolation algorithm that enlarges I^LLR to recover I^LR.
Second, it provides a structural feature classification method that divides all of the samples into several groups, such that each class of I^LLR is mapped to a matrix of coefficients for calculating the pixel values of I^LR. The image I^LR is approximated by executing the decision tree to refine the corresponding mapping matrix. Finally, the resulting high-resolution image I^HR is obtained from the given image I^LR by using the mapping matrices.
Experimental results show that SLIM achieves smoother edges and better detail in subjective visual quality than prevailing SR methods; it is transparent yet achieves PSNR and SSIM comparable to the learning-based methods, while outperforming the interpolation-based methods. In other words, SLIM is both transparent and efficient, with much better subjective visual quality than other SR methods.
... Deception can be identified using behavioural data, physiological assessments, and vehicle-based data. Eye, face, and head movements recorded by a camera are considered behavioural data [2]. Vehicle-based data is provided through steering wheel movement, vehicle speed, braking behaviour, and lane position deviation. ...
... We first oriented the pre-LAA along the L-axis in the LPS coordinate system and then defined the LA and LAA transition region (Fig. 1d). In the surface of the transition region, we obtained a set of curves defined by densely resampled points (Fig. 1d) [18,22]. To estimate the LAA ostium, we used dynamic programming to find a loop that maximizes the sum of curvature of points on the loop by solving Eq. (1) [18]. ...
Article
Full-text available
Purpose: To elucidate the role of atrial anatomical remodeling in atrial fibrillation (AF), we proposed an automatic method to extract and analyze morphological characteristics of the left atrium (LA), left atrial appendage (LAA) and pulmonary veins (PVs), and constructed classifiers to evaluate the importance of the identified features. Methods: The LA, LAA and PVs were segmented from contrast computed tomography images using either commercial software or a self-adaptive algorithm proposed by us. From these segments, geometric and fractal features were calculated automatically. To reduce model complexity, a feature selection procedure was adopted, with the important features identified via univariable analysis and ensemble feature selection. The effectiveness of this approach is well illustrated by the high accuracy of our models. Results: Morphological features, such as LAA ostium dimensions and LA volume and surface area, statistically distinguished (p < 0.01) AF patients, and AF patients with LAA filling defects (AF(def+)), from the overall population. On the test set, the best model to predict AF among all patients had an area under the receiver operating characteristic curve (AUC) of 0.91 (95% CI, 0.8–1), and the best model to predict AF(def+) among all patients had an AUC of 0.92 (95% CI, 0.81–1). Conclusion: This study automatically extracted and analyzed atrial morphology in AF and identified atrial anatomical remodeling that statistically distinguished AF or AF(def+). The importance of the identified atrial morphological features in characterizing AF or AF(def+) was validated by the corresponding classifiers. This work provides a good foundation for a complete computer-assisted diagnostic workflow for predicting the occurrence of AF or AF(def+).
... A number of edge-directed interpolation techniques have been proposed [5,10,12,16,31,36,39] in order to suppress the artifacts arising near sharp edges. While effective to varying degrees, these techniques are more complex and computationally expensive than linear interpolation, as they include special steps to take edge directions into account. ...
Article
Full-text available
We present a number of new piecewise-polynomial kernels for image interpolation. The kernels are constructed by optimizing a measure of interpolation quality based on the magnitude of anisotropic artifacts. The kernel design process is performed symbolically using the Mathematica computer algebra system. An experimental evaluation involving 14 image quality assessment methods demonstrates that our results compare favorably with the existing linear interpolators.
Article
Aiming at the problem of blurred edge and detail information in image zooming, this paper proposes a new image-zooming method based on the wavelet packet transform, combined with the characteristics of anisotropic diffusion. First, an initial zoomed image with higher resolution is obtained by the wavelet transform, and wavelet packet decomposition is performed to obtain more high-frequency wavelet packet coefficients reflecting image details. Second, because of the presence of noise, the relationship between the wavelet packet transform and anisotropic diffusion is derived by studying the process of wavelet packet threshold denoising, and an expression for a coupling threshold based on the diffusion function is given and applied to the high-frequency wavelet packet coefficients. Finally, the original image, after soft thresholding, is used as the low-frequency part and reconstructed with the denoised high-frequency part to obtain the final zoomed image. Traditional zooming algorithms as well as learning-based zooming algorithms are selected for comparison. The results show that the proposed algorithm effectively avoids blurring of edges and details while ensuring similarity between the zoomed image and the original, allowing the zoomed image to retain more high-frequency information and achieving the goals of removing noise and enhancing image detail. The effectiveness of the algorithm in edge protection can also be seen from the comparison with deep-learning-based zooming algorithms.
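The soft-thresholding step mentioned in the abstract is standard wavelet shrinkage; a minimal generic sketch (our naming, not this paper's coupled diffusion threshold):

```python
def soft_threshold(w, t):
    """Shrink a wavelet(-packet) coefficient w toward zero by threshold t >= 0.
    Coefficients with |w| <= t are treated as noise and set to zero."""
    if w > t:
        return w - t
    if w < -t:
        return w + t
    return 0.0
```

The paper's contribution is choosing the threshold t from an anisotropic-diffusion function rather than using a fixed value; that coupling is not reproduced here.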
Article
In real-time image interpolation, polynomial-based approaches are used because they are simple and practical. The unknown pixels in the image grid are obtained in polynomial-based upscaling by taking the weighted average of adjacent pixels, which degrades the high-frequency (HF) parts of the image through blurring artifacts. Numerous edge-directed and learning-based techniques are discussed in the literature, which also introduce blurring in the image's high-variance regions. To address this issue, a pre-processing method based on the concept of unsharp masking is proposed in order to generate sharpened high-resolution (HR) images from low-resolution (LR) images. The LR image is blurred using a weighted average filter, following the concept of unsharp masking. The difference between the LR image and the blurred LR image is then extracted to represent the lost edge portion of the image. This error, or degraded HF image, is sharpened iteratively using a cuckoo-optimized sharpening filter known as the Iteratively Optimized Sharpening (IOS) filter. The subtle detail is then collected by iteratively applying the IOS filter and combining it with the LR image using one optimized gain factor, resulting in a sharpened LR image. This pre-processing is done prior to interpolation to compensate for the loss of HF details. To up-sample the sharpened LR image, the MAKIMA scheme is used, which preserves the image's edges and boundaries. As a result, an HR image with sharpened edges and texture details is obtained. In comparison with existing techniques, the proposed algorithm yields better results in terms of both visual quality and objective measures. Various image databases are used to assess the efficiency of the proposed method.
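The unsharp-masking idea at the heart of that pre-processing step can be sketched in 1-D. This is a minimal, single-pass illustration with a plain box blur and a fixed gain; the paper's cuckoo-optimized iterative IOS filter and optimized gain factor are not reproduced, and all names are ours.

```python
def box_blur(row, radius=1):
    """Simple weighted-average blur on a 1-D row (borders clamped)."""
    n = len(row)
    out = []
    for i in range(n):
        window = [row[min(max(i + d, 0), n - 1)] for d in range(-radius, radius + 1)]
        out.append(sum(window) / len(window))
    return out

def unsharp_sharpen(row, gain=1.0):
    """Unsharp masking: extract the high-frequency residual (row - blurred row)
    and add it back scaled by a gain factor."""
    blurred = box_blur(row)
    return [p + gain * (p - b) for p, b in zip(row, blurred)]
```

Flat regions pass through unchanged, while intensity steps are overshot on both sides; this is how the lost high-frequency detail is emphasized before interpolation.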
Chapter
A tag antenna operating in the European and North American regions, covering the major UHF RFID bands resonating at 866 MHz and 915 MHz respectively, is designed in this paper. A tag antenna operating in a single UHF RFID region is converted to a dual-region UHF RFID tag antenna by modifying its geometry and optimizing the final geometry to obtain resonance at the required frequencies. The proposed tag antenna comprises a meandered-line element with an extended lower stub to obtain an additional band in the European region. The designed tag employs the Alien Higgs-4 RFID chip, which has capacitive reactance, and uses an inductive spiral loop to obtain a conjugate impedance match to the capacitive RFID IC. Further, the modified tag antenna is simulated and its performance is analyzed in terms of parameters such as resistance, reactance, radiation efficiency and realized gain. The designed dual-band antenna shows bidirectional and omnidirectional radiation patterns in the E-plane and H-plane, respectively.
Conference Paper
Full-text available
We propose a new wavelet domain image interpolation scheme based on statistical signal estimation. A linear composite MMSE estimator is constructed to synthesize the detailed wavelet coefficients as well as to minimize the mean squared error for high-resolution signal recovery. Based on a discrete-time edge model, we use low-resolution information to characterize local intensity changes and perform resolution enhancement accordingly. A linear MMSE estimator follows to minimize the estimation error. Local image statistics are involved in determining the spatially adaptive optimal estimator. With knowledge of edge behavior and local signal statistics, the composite estimation is able to enhance important edges and to maintain the intensity consistency along edges. Strong improvement in both the visual quality and the PSNRs of the interpolated images has been achieved by the proposed estimation scheme.
Article
Full-text available
Efficient algorithms for the continuous representation of a discrete signal in terms of B-splines (direct B-spline transform) and for interpolative signal reconstruction (indirect B-spline transform) with an expansion factor m are described. Expressions for the z-transforms of the sampled B-spline functions are determined and a convolution property of these kernels is established. It is shown that both the direct and indirect spline transforms involve linear operators that are space invariant and are implemented efficiently by linear filtering. Fast computational algorithms based on the recursive implementations of these filters are proposed. A B-spline interpolator can also be characterized in terms of its transfer function and its global impulse response (cardinal spline of order n). The case of the cubic spline is treated in greater detail. The present approach is compared with previous methods, which are reexamined from a critical point of view. It is concluded that B-spline interpolation, correctly applied, does not result in a loss of image resolution and that this type of interpolation can be performed in a very efficient manner.
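The recursive-filter idea can be sketched for the cubic case. This is a hedged reimplementation of the standard cubic B-spline prefilter (single pole z = sqrt(3) - 2, overall gain 6), with the mirror-boundary initialization approximated by a truncated geometric sum; the function name is ours.

```python
import math

def cubic_bspline_prefilter(x):
    """Direct cubic B-spline transform: compute coefficients c so that
    sum_k c[k] * beta3(t - k) interpolates the samples x at the integers."""
    z = math.sqrt(3.0) - 2.0          # the single pole of the cubic case
    n = len(x)
    c = [6.0 * v for v in x]          # overall gain for the cubic spline
    # causal pass (mirror boundary, initialization by truncated sum)
    c[0] = sum(c[k] * z**k for k in range(n))
    for i in range(1, n):
        c[i] += z * c[i - 1]
    # anticausal pass
    c[n - 1] = (z / (z * z - 1.0)) * (c[n - 1] + z * c[n - 2])
    for i in range(n - 2, -1, -1):
        c[i] = z * (c[i + 1] - c[i])
    return c
```

At an interior integer sample i the reconstruction reduces to (c[i-1] + 4 c[i] + c[i+1]) / 6, which recovers x[i]: the interpolation property is obtained at the cost of two cheap recursive filtering passes, illustrating why B-spline interpolation need not lose resolution.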
Article
Full-text available
We describe a spatially adaptive algorithm for image interpolation. The algorithm uses a wavelet transform to extract information about sharp variations in the low-resolution image and then implicitly applies interpolation which adapts to the image local smoothness/singularity characteristics. The proposed algorithm yields images that are sharper compared to several other methods that we have considered in this paper. Better performance comes at the expense of higher complexity.
Article
This chapter presents a survey of interpolation and resampling techniques in the context of exact, separable interpolation of regularly sampled data. In this context, the traditional view of interpolation is to represent an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions—in other words, a mixed convolution equation. An important issue is the choice of adequate synthesis functions that satisfy interpolation properties. Examples of finite-support ones are the square pulse (nearest-neighbor interpolation), the hat function (linear interpolation), the cubic Keys' function, and various truncated or windowed versions of the sinc function. On the other hand, splines provide examples of infinite-support interpolation functions that can be realized exactly at a finite, surprisingly small computational cost. We discuss implementation issues and illustrate the performance of each synthesis function. We also highlight several artifacts that may arise when performing interpolation, such as ringing, aliasing, blocking and blurring. We explain why the approximation order inherent in the synthesis function is important to limit these interpolation artifacts, which motivates the use of splines as a tunable way to keep them in check without any significant cost penalty.
Article
We present a simple, original method to improve piecewise-linear interpolation with uniform knots: we shift the sampling knots by a fixed amount, while enforcing the interpolation property. We determine the theoretical optimal shift that maximizes the quality of our shifted linear interpolation. Surprisingly enough, this optimal value is nonzero and close to 1/5. We confirm our theoretical findings by performing several experiments: a cumulative rotation experiment and a zoom experiment. Both show a significant increase of the quality of the shifted method with respect to the standard one. We also observe that, in these results, we get a quality that is similar to that of the computationally more costly "high-quality" cubic convolution.
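The shifted-linear scheme can be sketched in 1-D, under our own naming: the knots are shifted by tau = 1/2 - sqrt(3)/6 ≈ 0.2113 (the optimal value, close to the 1/5 quoted above), and a simple causal prefilter restores the interpolation property. The left-boundary initialization here is a plain constant-extension assumption, not the paper's exact boundary handling.

```python
import math

TAU = 0.5 - math.sqrt(3.0) / 6.0   # optimal shift, about 0.2113

def shifted_linear_coeffs(x, tau=TAU):
    """Prefilter so that sum_k c[k] * hat(t - k - tau) interpolates x at integers."""
    c = [x[0]]                       # simple constant-extension initialization
    for v in x[1:]:
        c.append((v - tau * c[-1]) / (1.0 - tau))
    return c

def shifted_linear_eval(c, t, tau=TAU):
    """Evaluate the shifted piecewise-linear interpolant at position t."""
    out = 0.0
    for k, ck in enumerate(c):
        out += ck * max(0.0, 1.0 - abs(t - k - tau))
    return out
```

Interpolation at the sample points is exact away from the left boundary, since (1 - tau) c[n] + tau c[n-1] = x[n] by construction of the recursion.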
Article
Preserving edge structures is a challenge for image interpolation algorithms that reconstruct a high-resolution image from a low-resolution counterpart. We propose a new edge-guided nonlinear interpolation technique based on directional filtering and data fusion. For a pixel to be interpolated, two observation sets are defined in two orthogonal directions, and each set produces an estimate of the pixel value. These directional estimates, modeled as different noisy measurements of the missing pixel, are fused by linear minimum mean-square-error estimation (LMMSE) into a more robust estimate, using the statistics of the two observation sets. We also present a simplified version of the LMMSE-based interpolation algorithm that reduces computational cost without sacrificing much interpolation performance. Experiments show that the new interpolation techniques can preserve edge sharpness and reduce ringing artifacts.
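The fusion step admits a compact scalar illustration. The snippet below shows generic inverse-variance fusion of two directional estimates, which is the scalar form of LMMSE fusion; it omits the covariance modeling of the actual algorithm, and the names are ours.

```python
def fuse_directional_estimates(est_a, var_a, est_b, var_b):
    """Fuse two noisy directional estimates of a missing pixel by
    inverse-variance weighting: the lower-variance (more reliable)
    direction gets the larger weight."""
    if var_a == 0.0 and var_b == 0.0:
        return 0.5 * (est_a + est_b)   # equally reliable: plain average
    w_a = var_b / (var_a + var_b)
    return w_a * est_a + (1.0 - w_a) * est_b
```

For example, `fuse_directional_estimates(10.0, 3.0, 20.0, 1.0)` weights the second, lower-variance estimate three times as heavily and returns 17.5; along a strong edge, the estimate taken along the edge direction has a much smaller variance and therefore dominates the fused value.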
Article
Assumptions about image continuity lead to oversmoothed edges in common image interpolation algorithms. A wavelet-based interpolation method that imposes no continuity constraints is introduced. The algorithm estimates the regularity of edges by measuring the decay of wavelet transform coefficients across scales and preserves the underlying regularity by extrapolating a new subband to be used in image resynthesis. The algorithm produces visibly sharper edges than traditional techniques and exhibits an average peak signal-to-noise ratio (PSNR) improvement of 2.5 dB over bilinear and bicubic techniques.
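The decay measurement at the core of this approach can be illustrated on a single pair of wavelet moduli. This generic sketch (our names, not the paper's code) estimates the Lipschitz exponent alpha from one dyadic scale step and extrapolates a coefficient one scale finer with the same decay, which is the spirit of the subband-extrapolation step.

```python
import math

def lipschitz_exponent(coeff_fine, coeff_coarse):
    """Estimate local regularity alpha from the dyadic decay
    |Wf(2s)| ~ 2**alpha * |Wf(s)| of wavelet modulus maxima across scales."""
    return math.log2(abs(coeff_coarse) / abs(coeff_fine))

def extrapolate_finer(coeff_fine, alpha):
    """Predict the coefficient one scale finer, preserving the measured decay."""
    return coeff_fine * 2.0 ** (-alpha)
```

A smooth (alpha = 1) edge roughly doubles its modulus per coarser scale, while an ideal step edge (alpha = 0) keeps it constant; extrapolating with the measured alpha preserves that behavior instead of oversmoothing.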
Article
In this paper, we present a nonlinear interpolation scheme for still image resolution enhancement. The algorithm is based on a source model emphasizing the visual integrity of detected edges and incorporates a novel edge fitting operator that has been developed for this application. A small neighborhood about each pixel in the low-resolution image is first mapped to a best-fit continuous space step edge. The bilevel approximation serves as a local template on which the higher resolution sampling grid can then be superimposed (where disputed values in regions of local window overlap are averaged to smooth errors). The result is an image of increased resolution with noticeably sharper edges and, in all tried cases, lower mean-squared reconstruction error than that produced by linear techniques.
Article
This paper presents an edge-directed image interpolation algorithm in which the edge directions are implicitly estimated with a statistical approach. In contrast to explicit edge directions, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through a Markov random field (MRF) model. Furthermore, under the maximum a posteriori MRF framework, the desired interpolated image corresponds to the minimal-energy state of a 2-D random field given the low-resolution image. Simulated annealing is used to search the state space for this minimal-energy state. To lower the computational complexity of the MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF-model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared with traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.
Conference Paper
In the problem of image interpolation, most of the difficulties arise in areas around edges and sharp changes. Around edges, many interpolation methods tend to smooth and blur image detail. Fortunately, most of the signal information is often carried around edges and areas of sharp changes and can be used to predict these missing details from a sampled image. A method for adding image detail based on the cone of influence, the evolution of the wavelet coefficients across scales, is presented.