Signal Image and Video Processing

Publisher: Springer Verlag

Journal description

Current impact factor: 1.02

Impact Factor Rankings

2015 Impact Factor Available summer 2015
2013 / 2014 Impact Factor 1.019
2012 Impact Factor 0.409
2011 Impact Factor 0.56
2010 Impact Factor 0.617

Impact factor over time: chart of impact factor by year (values as listed above)

Additional details

5-year impact 0.00
Cited half-life 3.40
Immediacy index 0.12
Eigenfactor 0.00
Article influence 0.00
Other titles Signal, image and video processing (Online), SIViP
ISSN 1863-1703
OCLC 130401260
Material type Document, Periodical, Internet resource
Document type Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

Springer Verlag

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Author's pre-print on pre-print servers such as arXiv.org
    • Author's post-print on author's personal website immediately
    • Author's post-print on any open access repository 12 months after publication
    • Publisher's version/PDF cannot be used
    • Published source must be acknowledged
    • Must link to publisher version
    • Set phrase to accompany link to published version (see policy)
    • Articles in some journals can be made Open Access on payment of additional charge
  • Classification
    • green

Publications in this journal

  • ABSTRACT: The multi-view video plus depth (MVD) format is considered the next-generation standard for advanced 3D video systems. MVD consists of multiple color videos with a depth value associated with each texture pixel. Relying on this representation and using depth-image-based rendering techniques, new viewpoints for multi-view video applications can be generated. However, since MVD is captured from different viewing angles with different cameras, significant illumination and color differences can be observed between views. These color mismatches degrade the performance of view rendering algorithms by introducing visible artifacts, reducing view synthesis quality. To cope with this issue, we propose an effective method for correcting color inconsistencies in MVD. First, to avoid occlusion problems and perform the correction as accurately as possible, we consider only the overlapping region when calculating the color mapping function; these common regions are determined using a reliable feature matching technique. To maintain temporal coherence, the correction is applied over a temporal sliding window. Experimental results show that the proposed method reduces the color difference between views and improves the view rendering process, providing high-quality results.
    Signal Image and Video Processing 12/2015; DOI:10.1007/s11760-015-0761-9
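    Illustrative sketch (not the authors' implementation): assuming pixel correspondences between the overlapping regions of a source view and a reference view have already been obtained by feature matching, a simple per-channel polynomial fit can stand in for the color mapping function described in the abstract above.

      import numpy as np

      def fit_color_mapping(src_samples, ref_samples, degree=2):
          # src_samples, ref_samples: N x 3 arrays of corresponding RGB values
          # taken from the overlapping region of the two views
          return [np.polyfit(src_samples[:, c].astype(float),
                             ref_samples[:, c].astype(float), degree)
                  for c in range(src_samples.shape[1])]

      def apply_color_mapping(image, coeffs):
          # apply the fitted per-channel mapping to the view to be corrected
          corrected = np.empty_like(image, dtype=np.float64)
          for c, p in enumerate(coeffs):
              corrected[..., c] = np.polyval(p, image[..., c].astype(np.float64))
          return np.clip(corrected, 0, 255).astype(np.uint8)

      # usage: coeffs = fit_color_mapping(src_px, ref_px)
      #        corrected_view = apply_color_mapping(side_view, coeffs)
      # (the paper's temporal sliding window would refit coeffs over a window
      #  of frames rather than a single frame)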
  • Signal Image and Video Processing 08/2015; DOI:10.1007/s11760-015-0802-4
  • ABSTRACT: A simple first-order delay differential equation proposed by Uçar exhibits rich dynamical properties; chaotic attractors are observed in this system for some values of the delay. In this paper, we derive stability results for this delayed system for arbitrary parameter values using the method of critical curves. We discuss the effect of each parameter on stability and hence on the chaotic behavior. Our results are confirmed by numerical observations available in the literature.
    Signal Image and Video Processing 08/2015; DOI:10.1007/s11760-015-0811-3
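    Illustrative sketch (not from the paper): a generic fixed-step Euler integrator for a scalar delay differential equation x'(t) = f(x(t), x(t - tau)), the kind of system analysed above; the right-hand side shown in the usage comment is a hypothetical cubic placeholder, not necessarily the exact Uçar equation.

      import numpy as np

      def integrate_dde(f, tau, x0, t_end, dt=1e-3):
          # Euler scheme for x'(t) = f(x(t), x(t - tau)),
          # assuming the constant history x(t) = x0 on [-tau, 0]
          n_delay = int(round(tau / dt))
          n_steps = int(round(t_end / dt))
          x = np.empty(n_steps + 1)
          x[0] = x0
          for k in range(n_steps):
              x_delayed = x[k - n_delay] if k >= n_delay else x0
              x[k + 1] = x[k] + dt * f(x[k], x_delayed)
          return x

      # hypothetical example only (check the paper for the actual form):
      # f = lambda x, x_tau: 1.5 * x_tau - x_tau ** 3
      # trajectory = integrate_dde(f, tau=2.0, x0=0.5, t_end=200.0)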
  • Signal Image and Video Processing 07/2015; DOI:10.1007/s11760-015-0801-5
  • ABSTRACT: This paper describes a new characteristic vector model for fingerprint representation that uses a planar graph and triangulation algorithms. This vector model performed better in a fingerprint identification system than other vector models already proposed in the literature. Minutiae extraction is an essential step in fingerprint recognition, and the paper also presents a new extraction method that exploits the ridge ending/ridge bifurcation duality that appears when the skeleton image of a fingerprint is inverted. This new extraction method reduces the computational complexity of a fingerprint identification system.
    Signal Image and Video Processing 07/2015; 9(5):1121-1135. DOI:10.1007/s11760-013-0548-9
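    Illustrative sketch (not the paper's algorithm): the standard crossing-number test locates ridge endings and bifurcations on a binary fingerprint skeleton and also illustrates the ending/bifurcation duality mentioned above, since inverting the skeleton swaps the two minutia types.

      import numpy as np

      def crossing_number_minutiae(skeleton):
          # skeleton: 2D array with ridge pixels non-zero; returns (endings, bifurcations)
          sk = (skeleton > 0).astype(np.uint8)
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                     (1, 1), (1, 0), (1, -1), (0, -1)]   # 8-neighbours in circular order
          endings, bifurcations = [], []
          rows, cols = sk.shape
          for r in range(1, rows - 1):
              for c in range(1, cols - 1):
                  if not sk[r, c]:
                      continue
                  ring = [int(sk[r + dr, c + dc]) for dr, dc in offsets]
                  cn = sum(abs(ring[i] - ring[(i + 1) % 8]) for i in range(8)) // 2
                  if cn == 1:
                      endings.append((r, c))
                  elif cn == 3:
                      bifurcations.append((r, c))
          return endings, bifurcations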
  • ABSTRACT: After developing the next-generation video coding standard, referred to as High Efficiency Video Coding (HEVC), the joint collaborative team of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group has now also standardized a lossless extension of the standard. HEVC was originally designed for lossy video compression and is thus not ideal for lossless compression. In this paper, we propose an efficient residual data coding method for HEVC lossless video compression. Based on the fact that residual data have different statistics under lossy and lossless coding, we improve HEVC lossless coding using sample-based angular prediction (SAP), modified level binarization, and binarization table selection based on a weighted sum of previously encoded level values. Experimental results show that the proposed method provides compression ratios of up to 11.32 and reduces decoding complexity.
    Signal Image and Video Processing 07/2015; 9(5). DOI:10.1007/s11760-013-0545-z
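    Illustrative sketch (a toy, not the standardized SAP): in lossless coding the reconstructed neighbours equal the original samples, so sample-based prediction reduces to a DPCM-style residual; the sketch covers the horizontal direction only, whereas the paper's SAP follows all HEVC angular modes.

      import numpy as np

      def horizontal_sap_residual(block):
          # each sample is predicted from its immediate left neighbour
          block = block.astype(np.int32)
          residual = block.copy()
          residual[:, 1:] -= block[:, :-1]   # x[i, j] - x[i, j - 1]
          return residual                    # first column stays unpredicted here

      def reconstruct(residual):
          # decoder side: cumulative sum along each row restores the samples exactly
          return np.cumsum(residual, axis=1)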
  • Signal Image and Video Processing 06/2015; DOI:10.1007/s11760-015-0788-y
  • ABSTRACT: Human detection is a complex problem owing to the variable poses humans can adopt. Here, we address this problem in a sparse representation framework with an overcomplete scale-embedded dictionary. Histogram of oriented gradients features extracted from candidate image patches are sparsely represented by a dictionary that contains positive bases along with negative and trivial bases. The object is detected using the proposed likelihood measure obtained from the distribution of these sparse coefficients; the likelihood is the ratio of the contribution of positive bases to that of negative and trivial bases. The positive bases of the dictionary represent the object (human) at various scales, which enables detection at any scale in one shot and avoids multiple scans at different scales, significantly reducing the computational complexity of the detection task. In addition to detecting humans, the method also finds the scale at which the human is detected, owing to the scale-embedded structure of the dictionary.
    Signal Image and Video Processing 05/2015; DOI:10.1007/s11760-015-0781-5
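    Illustrative sketch (not the authors' code): the proposed likelihood is described as the ratio of the contribution of positive bases to that of negative and trivial bases; below, scikit-learn's orthogonal matching pursuit stands in for whichever sparse coder the paper actually uses, and the index sets are assumed to be known from the dictionary construction.

      import numpy as np
      from sklearn.linear_model import orthogonal_mp

      def detection_likelihood(y, D, pos_idx, neg_idx, triv_idx, n_nonzero=20):
          # y: HOG feature vector of a candidate patch; D: columns are dictionary bases
          coefs = orthogonal_mp(D, y, n_nonzero_coefs=n_nonzero)
          pos = np.abs(coefs[pos_idx]).sum()
          rest = np.abs(coefs[neg_idx]).sum() + np.abs(coefs[triv_idx]).sum()
          return pos / (rest + 1e-12)

      # a patch would be declared "human" when the likelihood exceeds a threshold,
      # with the scale read off from which scale-embedded positive bases were selected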
  • ABSTRACT: No-reference image quality assessment is of great importance to numerous image processing applications, and various methods have been widely studied with promising results. These methods exploit handcrafted features, in the transform or spatial domain, that are discriminative for image degradations; however, abundant a priori knowledge is required to design such features. The convolutional neural network (CNN) has recently been introduced into no-reference image quality assessment, integrating feature learning and regression into one optimization process, so the network structure yields an effective model for estimating image quality. However, the image quality score obtained by the CNN is the mean of all the image patch scores, without considering properties of the human visual system such as sensitivity to edges and contours. In this paper, we combine the CNN with the Prewitt magnitude of segmented images and obtain the image quality score as the mean of the products of the image patch scores and weights derived from the segmentation result. Experimental results on various image distortion types demonstrate that the proposed algorithm achieves good performance.
    Signal Image and Video Processing 05/2015; DOI:10.1007/s11760-015-0784-2
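    Illustrative sketch (aggregation step only, not the full method): assuming per-patch quality scores have already been produced by the CNN, the plain mean is replaced by a weighted mean whose weights come from the Prewitt gradient magnitude, loosely mirroring the edge/contour weighting described above.

      import numpy as np
      from scipy.ndimage import prewitt

      def weighted_quality_score(image, patch_scores, patch_size=32):
          # patch_scores: 2D grid of CNN scores for non-overlapping patches
          img = image.astype(np.float64)
          magnitude = np.hypot(prewitt(img, axis=1), prewitt(img, axis=0))
          weights = np.zeros_like(patch_scores, dtype=np.float64)
          for i in range(patch_scores.shape[0]):
              for j in range(patch_scores.shape[1]):
                  cell = magnitude[i * patch_size:(i + 1) * patch_size,
                                   j * patch_size:(j + 1) * patch_size]
                  weights[i, j] = cell.mean()
          weights /= weights.sum() + 1e-12
          return float((weights * patch_scores).sum())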
  • ABSTRACT: Existing impulse noise reduction techniques perform well at low noise densities; however, their performance drops sharply at higher noise densities. In this paper, we propose a two-stage scheme to surmount this problem: the first stage is an impulse detection unit, followed by a filtering operation in the second stage. A genetic expression programming-based classifier is employed to detect impulse noise-corrupted pixels. To reduce the blurring caused by filtering noise-free pixels, only the detected noisy pixels are filtered, using a modified median filter. Better peak signal-to-noise ratio, structural similarity index, and visual output demonstrate the efficacy of the proposed scheme for noise reduction at higher noise densities.
    Signal Image and Video Processing 05/2015; DOI:10.1007/s11760-015-0780-6
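    Illustrative sketch (second stage only, not the paper's code): the detection stage is assumed to have already produced a boolean mask of impulse-corrupted pixels; each flagged pixel is then replaced by the median of its noise-free neighbours, so clean pixels are never touched.

      import numpy as np

      def filter_detected_pixels(image, noise_mask, window=3):
          out = image.copy()
          half = window // 2
          rows, cols = image.shape
          for r, c in zip(*np.nonzero(noise_mask)):
              r0, r1 = max(r - half, 0), min(r + half + 1, rows)
              c0, c1 = max(c - half, 0), min(c + half + 1, cols)
              patch = image[r0:r1, c0:c1]
              clean = patch[~noise_mask[r0:r1, c0:c1]]
              # fall back to the full patch if every neighbour is also noisy
              out[r, c] = np.median(clean) if clean.size else np.median(patch)
          return out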
  • ABSTRACT: Modern video codecs tile the frame area with square blocks to analyse inter-frame movement. This paper proposes a motion compensation algorithm based on hexagonal blocks for video compression. The set partitioning in hierarchical trees algorithm, based on the discrete wavelet transform, is used to compress the residual image obtained after motion compensation. Our experiments showed that using hexagonal blocks for motion compensation yields an average improvement of about 0.2 dB in peak signal-to-noise ratio of the processed video sequence, with the overall performance of the proposed algorithm being even higher compared with the conventional square blocks.
    Signal Image and Video Processing 05/2015; DOI:10.1007/s11760-015-0778-0
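    Illustrative sketch (not the authors' codec): only the non-standard ingredient is shown, a hexagonal block support and a SAD cost evaluated over it; the motion search itself and the DWT/SPIHT residual coding are left out.

      import numpy as np

      def hexagon_mask(radius):
          # boolean mask of a (discretised) regular hexagon with circumradius `radius`
          ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
          return ((np.abs(ys) <= np.sqrt(3) / 2 * radius) &
                  (np.sqrt(3) * np.abs(xs) + np.abs(ys) <= np.sqrt(3) * radius))

      def masked_sad(cur, ref, tl_cur, tl_ref, mask):
          # SAD between hexagonal blocks anchored at top-left corners tl_cur / tl_ref
          h, w = mask.shape
          a = cur[tl_cur[0]:tl_cur[0] + h, tl_cur[1]:tl_cur[1] + w].astype(np.int32)
          b = ref[tl_ref[0]:tl_ref[0] + h, tl_ref[1]:tl_ref[1] + w].astype(np.int32)
          return int(np.abs(a - b)[mask].sum())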
  • ABSTRACT: This paper presents a theoretical analysis of the spectrum utilization levels in a cognitive radio system. We assume that the traffic of the primary network is bursty and asynchronous with the secondary network, which performs imperfect spectrum sensing. Collisions of the primary and the secondary packets are assumed to result in increased packet error probabilities. We present primary and secondary utilization levels under optimized secondary transmission periods for varying primary traffic characteristics and secondary sensing performance levels. The results are also validated by extensive Monte Carlo simulations. We find that an asynchronous cognitive radio network with imperfect spectrum sensing is feasible when optimized transmission periods are used. The effects of primary traffic’s burst pattern and secondary sensing performance are discussed.
    Signal Image and Video Processing 05/2015; DOI:10.1007/s11760-015-0782-4
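    Illustrative sketch (a heavily simplified model, not the paper's analysis): a slot-level Monte Carlo with geometrically distributed primary on/off bursts, imperfect sensing at the start of each secondary transmission period, and a collision counted whenever both networks transmit in the same slot; the parameter names below are assumptions for illustration only.

      import numpy as np

      def simulate(n_slots=200000, p_on=0.02, p_off=0.05, tx_period=10,
                   p_detect=0.9, p_false_alarm=0.1, seed=0):
          rng = np.random.default_rng(seed)
          primary_on, sec_left = False, 0
          primary_busy = secondary_tx = collisions = 0
          for _ in range(n_slots):
              # bursty primary traffic: geometric on/off durations
              primary_on = (rng.random() >= p_off) if primary_on else (rng.random() < p_on)
              primary_busy += primary_on
              if sec_left == 0:
                  # imperfect sensing before each secondary transmission period
                  sensed_busy = ((rng.random() < p_detect) if primary_on
                                 else (rng.random() < p_false_alarm))
                  if not sensed_busy:
                      sec_left = tx_period
              if sec_left > 0:
                  secondary_tx += 1
                  collisions += primary_on
                  sec_left -= 1
          return {"primary_utilization": primary_busy / n_slots,
                  "secondary_utilization": secondary_tx / n_slots,
                  "collision_fraction": collisions / max(secondary_tx, 1)}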
  • ABSTRACT: The basic idea behind energy transfer features is that the appearance of objects can be described using a function of energy distribution in images. Energy sources are placed inside the image, and energy is transferred from these sources over a chosen time. The values of the energy distribution function then have to be reduced to a reasonable number of values, which can be done simply by sampling: the input image is divided into regular cells, the mean value is calculated inside each cell, and the sample values form a vector that is used as input to an SVM classifier. We propose an improvement to this process. Discrete cosine transform coefficients are calculated inside the cells (instead of the mean values) to construct the feature vector for face and pedestrian detectors. To reduce the number of coefficients, we use patterns in which the coefficients are grouped into regions. In the face detector, principal component analysis is also used to create a feature vector of relatively small dimension. The results show that, using this approach, objects can be efficiently encoded with a relatively short vector, with results that outperform state-of-the-art detectors.
    Signal Image and Video Processing 05/2015; DOI:10.1007/s11760-015-0777-1
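    Illustrative sketch (feature construction step only, not the authors' pipeline): the image of energy-distribution values (or any intensity image) is divided into regular cells and a small block of low-frequency DCT coefficients is kept from each cell; the resulting vector would then go through PCA and an SVM as described above.

      import numpy as np
      from scipy.fft import dctn

      def dct_cell_features(image, cell=8, n_coeffs=3):
          # keep the top-left n_coeffs x n_coeffs (low-frequency) DCT coefficients
          # of every cell x cell region and concatenate them into one vector
          h, w = image.shape
          feats = []
          for r in range(0, h - cell + 1, cell):
              for c in range(0, w - cell + 1, cell):
                  block = image[r:r + cell, c:c + cell].astype(np.float64)
                  coeffs = dctn(block, norm="ortho")
                  feats.append(coeffs[:n_coeffs, :n_coeffs].ravel())
          return np.concatenate(feats)

      # the vectors can then be reduced with sklearn.decomposition.PCA and fed to
      # an SVM (e.g. sklearn.svm.LinearSVC), mirroring the classifier mentioned above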
  • ABSTRACT: In this paper, a new measure of image focus based on the statistical properties of polynomial coefficients and the spectral radius is proposed. The spectral radius captures the dominant features and represents the important dynamics of an image. It is shown that the proposed focus measure is monotonic and unimodal with respect to the degree of defocus, noise and blurring. Moreover, it is sufficiently invariant to contrast changes that occur due to variations in illumination intensity. Noise studies show that the proposed focus measure is robust under different noise and blurring conditions. Its performance is gauged by comparison with existing image focus measures. Experimental results using synthetic as well as real-time images with known and unknown distortion conditions show the wider working range and higher prediction consistency of the proposed focus measure. Moreover, the proposed approach is validated on five popular image quality databases: TID2008, LIVE, CSIQ, IVC and Cornell-A57. Experiments on these databases show that the proposed metric correlates more strongly with the ideal mean observer score than existing metrics.
    Signal Image and Video Processing 05/2015; DOI:10.1007/s11760-015-0775-3
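    Illustrative sketch (one ingredient only, not the proposed measure): the spectral radius referred to above is simply the largest-magnitude eigenvalue of a square grayscale block; the paper combines it with statistics of polynomial coefficients to form the actual focus measure.

      import numpy as np

      def spectral_radius(gray):
          # largest absolute eigenvalue of a square crop of the grayscale image
          n = min(gray.shape)
          block = gray[:n, :n].astype(np.float64)   # eigenvalues need a square matrix
          return float(np.max(np.abs(np.linalg.eigvals(block))))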
  • ABSTRACT: Optical flow approaches for motion estimation calculate vector fields that describe the apparent velocities of objects in time-varying image sequences. Image motion estimation is a fundamental problem in low-level vision and is used in many image sequence processing applications, such as robot navigation, object tracking, image coding and structure reconstruction. The accuracy of optical flow estimation algorithms has been improving steadily, as evidenced by results on the Middlebury optical flow benchmark. Several methods exist to estimate optical flow, but a good compromise between computational cost and accuracy is hard to achieve. This work presents a combined local–global total variation approach with structure–texture image decomposition. The combination is used to control propagation phenomena and to gain robustness against illumination changes, noise and outliers. The resulting method can compute large displacements in reasonable time.
    Signal Image and Video Processing 05/2015; DOI:10.1007/s11760-015-0772-6
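    Illustrative sketch (preprocessing only, not the paper's combined local–global TV solver): a TV (ROF-type) denoiser provides the structure part, the texture part is what remains, and OpenCV's Farneback estimator stands in for the actual flow method; the weight and blending parameters below are illustrative choices.

      import cv2
      import numpy as np
      from skimage.restoration import denoise_tv_chambolle

      def texture_part(gray, weight=0.1, alpha=0.95):
          # structure-texture decomposition: structure = TV-denoised image,
          # texture = original minus (a fraction of) the structure
          g = gray.astype(np.float64) / 255.0
          structure = denoise_tv_chambolle(g, weight=weight)
          texture = g - alpha * structure
          return cv2.normalize(texture, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

      def flow_on_texture(prev_gray, next_gray):
          # estimating flow on the texture parts damps illumination changes
          return cv2.calcOpticalFlowFarneback(texture_part(prev_gray),
                                              texture_part(next_gray), None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)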
  • ABSTRACT: This paper presents a spatiotemporal super-resolution method that enhances both the spatial resolution and the frame rate in a hybrid stereo video system. In this system, a scene is captured by two cameras to form two videos: a low-spatial-resolution, high-frame-rate video and a high-spatial-resolution, low-frame-rate video. For the low-spatial-resolution video, the low-resolution frames are spatially super-resolved using the high-resolution video via stereo matching, bilateral overlapped block motion estimation, and adaptive overlapped block motion compensation; for the low-frame-rate video, the missing frames are interpolated from the high-resolution frames by fusing disparity compensation and motion-compensated frame rate up-conversion. Experimental results demonstrate that the proposed mixed spatiotemporal super-resolution method improves both subjective and objective quality more than pure spatial super-resolution or frame rate up-conversion alone.
    Signal Image and Video Processing 05/2015; DOI:10.1007/s11760-015-0774-4
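    Illustrative sketch (frame-rate up-conversion idea only, not the authors' pipeline): a missing intermediate frame is synthesised by warping the two neighbouring high-resolution frames halfway along a given motion field and blending them; the stereo matching, disparity compensation and overlapped-block stages of the paper are omitted, and the flow is assumed to be defined on the grid of the intermediate frame.

      import numpy as np

      def interpolate_midframe(prev_frame, next_frame, flow):
          # flow[..., 0] / flow[..., 1]: per-pixel (row, col) motion from prev to next
          h, w = prev_frame.shape[:2]
          ys, xs = np.mgrid[0:h, 0:w]

          def sample(frame, dy, dx):
              r = np.clip(np.rint(ys + dy).astype(int), 0, h - 1)
              c = np.clip(np.rint(xs + dx).astype(int), 0, w - 1)
              return frame[r, c].astype(np.float64)

          mid = (0.5 * sample(prev_frame, -0.5 * flow[..., 0], -0.5 * flow[..., 1])
                 + 0.5 * sample(next_frame, 0.5 * flow[..., 0], 0.5 * flow[..., 1]))
          return np.clip(mid, 0, 255).astype(prev_frame.dtype)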