Journal of Real-Time Image Processing

Publisher: Springer Verlag

Journal description

Current impact factor: 1.11

Impact Factor Rankings

2015 Impact Factor Available summer 2015
2013 / 2014 Impact Factor 1.111
2012 Impact Factor 1.156
2011 Impact Factor 1.02
2010 Impact Factor 0.962

Impact factor over time

(Chart omitted: impact factor plotted by year; the values are listed above.)

Additional details

5-year impact 1.06
Cited half-life 3.70
Immediacy index 0.10
Eigenfactor 0.00
Article influence 0.43
Other titles Real-time image processing
ISSN 1861-8200
OCLC 73532162
Material type Document, Periodical, Internet resource
Document type Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

Springer Verlag

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Author's pre-print on pre-print servers such as arXiv.org
    • Author's post-print on author's personal website immediately
    • Author's post-print on any open access repository after 12 months after publication
    • Publisher's version/PDF cannot be used
    • Published source must be acknowledged
    • Must link to publisher version
    • Set phrase to accompany link to published version (see policy)
    • Articles in some journals can be made Open Access on payment of additional charge
  • Classification
    • green

Publications in this journal

  • ABSTRACT: Recent computer systems and handheld devices are equipped with high computing capability, such as general-purpose GPUs (GPGPUs) and multi-core CPUs. Utilizing such resources for computation has become a general trend, and exploiting them effectively is an important issue for real-time applications. Discrete cosine transform (DCT) and quantization are two major operations in image compression standards that require complex computations. In this paper, we develop an efficient parallel implementation of the forward DCT and quantization algorithms for JPEG image compression using the Open Computing Language (OpenCL). This OpenCL-based parallel implementation utilizes a multi-core CPU and a GPGPU to perform DCT and quantization computations. We demonstrate the capability of this design via two proposed working scenarios. The proposed approach also applies certain optimization techniques to improve kernel execution time and data movement. We developed an optimal OpenCL kernel for a particular device using device-based optimization factors, such as thread granularity, work-item mapping, workload allocation, and vector-based memory access. We evaluated the performance in a heterogeneous environment, finding that the proposed parallel implementation was able to speed up DCT and quantization by factors of 7.97 and 8.65, respectively, for 1024 × 1024 and 2048 × 2048 images in 4:4:4 format.
    Journal of Real-Time Image Processing 05/2015; DOI:10.1007/s11554-015-0507-5
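The core arithmetic being parallelized above can be sketched serially. The following NumPy fragment (an illustrative sketch, not the paper's OpenCL code) applies an orthonormal 8×8 DCT-II followed by uniform quantization with the standard JPEG luminance table:

```python
import numpy as np

# Standard JPEG luminance quantization table (Annex K of the JPEG spec).
Q_LUMA = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix C, so that Y = C @ X @ C.T."""
    C = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            alpha = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
            C[k, i] = alpha * np.cos((2 * i + 1) * k * np.pi / (2 * n))
    return C

def forward_dct_quantize(block, qtable=Q_LUMA):
    """Level-shift, 2-D DCT, then uniform quantization of one 8x8 block."""
    C = dct_matrix(8)
    coeffs = C @ (block.astype(np.float64) - 128.0) @ C.T
    return np.round(coeffs / qtable).astype(np.int32)
```

A full image is processed by applying this to every 8×8 block independently, which is exactly the data parallelism an OpenCL kernel exploits across work-items.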
  • ABSTRACT: It is widely accepted that the growth of the Internet and the improvement of its network conditions have helped real-time applications to flourish. The demand for ultra-high-definition video is constantly increasing. Apart from video and sound, a new kind of real-time data is making its appearance: haptic data. The efficient synchronization of video, audio, and haptic data is a rather challenging effort. The new High-Efficiency Video Coding (HEVC) standard is quite promising for transferring real-time ultra-high-definition video over the Internet. This paper presents related work on HEVC and points out the challenges and the synchronization techniques that have been proposed for synchronizing video and haptic data. Comparative tests between H.264 and HEVC are undertaken, and measurements of the network conditions of the Internet are carried out. The equations for the transfer delay of all the inter-prediction configurations of HEVC are defined. Finally, the paper proposes a new efficient algorithm for transferring a real-time HEVC stream with haptic data over the Internet.
    Journal of Real-Time Image Processing 05/2015; DOI:10.1007/s11554-015-0505-7
  • ABSTRACT: MPEG has recently developed a new standard, MPEG Media Transport (MMT), for next-generation hybrid media delivery services over IP networks, considering the emerging convergence of digital broadcast and broadband services. On account of the heterogeneous characteristics of broadcast and broadband networks, MMT provides an efficient delivery timing model to enable inter-network synchronization, measure the various transmission delays and the jitter they cause, and re-adjust the timing relationship between MMT packets to ensure synchronized playback. By exploiting the delivery timing model, it is possible to accurately estimate the round-trip time (RTT) experienced during MMT packet transmission. Based on the measured RTT, we propose an efficient delay-constrained automatic repeat request (ARQ) scheme, which is applicable to MMT packet-based real-time video streaming services over IP networks. In the proposed ARQ scheme, the receiver buffer fullness at the time of packet-loss detection is used to compute the arrival deadline, which is the maximum time allowed for requesting and retransmitting the lost MMT packet. Simulation results demonstrate that the proposed delay-constrained ARQ scheme not only provides reliable error recovery, but also achieves significant bandwidth savings by reducing the number of wastefully retransmitted packets that arrive at the receiver after the allowed arrival deadline.
    Journal of Real-Time Image Processing 05/2015; DOI:10.1007/s11554-015-0503-9
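The arrival-deadline test at the heart of such a delay-constrained ARQ scheme can be sketched as follows; the function name, the safety margin, and the use of buffer fullness in milliseconds are illustrative assumptions, not the paper's exact formulation:

```python
def should_request_retransmission(buffer_ms, rtt_ms, safety_margin_ms=10.0):
    """Delay-constrained ARQ decision (illustrative sketch).

    buffer_ms : playable media currently buffered at the receiver; the lost
                packet is needed roughly when this buffer drains, so it also
                serves as the arrival deadline for the retransmission.
    rtt_ms    : round-trip time estimated from the delivery timing model.

    Request the lost packet only if the retransmission can arrive before
    the deadline; otherwise the retransmission would only waste bandwidth.
    """
    arrival_deadline_ms = buffer_ms - safety_margin_ms
    return rtt_ms <= arrival_deadline_ms
```

The bandwidth saving in the abstract comes precisely from the `False` branch: lost packets that cannot make their deadline are never requested.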
  • ABSTRACT: Transmission of high-efficiency video coding (HEVC) video streams over error-prone enterprise wireless local area network (WLAN) architectures is challenging because of the difficulty of buffer overflow management in the switches within these architectures. This paper therefore proposes a new quality-aware video transmission method for company-wide enterprise WLAN architectures, combining video transmission technologies with a distributed stochastic buffering model that jointly controls power consumption and queue stabilization. Extensive simulations with HEVC test sequences show significant video quality improvements, with an average Y-PSNR gain of 3.33 dB.
    Journal of Real-Time Image Processing 04/2015; DOI:10.1007/s11554-015-0501-y
  • ABSTRACT: This paper proposes a fast algorithm for integrating connected-component labeling and Euler number computation. Based on graph theory, the Euler number of a binary image in the proposed algorithm is calculated by counting the occurrences of four patterns of the mask for processing foreground pixels in the first scan of a connected-component labeling process, where these four patterns can be found directly without any additional calculation; thus, connected-component labeling and Euler number computation can be integrated more efficiently. Moreover, when computing the Euler number, unlike other conventional algorithms, the proposed algorithm does not need to process background pixels. Experimental results demonstrate that the proposed algorithm is much more efficient than conventional algorithms either for calculating the Euler number alone or simultaneously calculating the Euler number and labeling connected components.
    Journal of Real-Time Image Processing 04/2015; DOI:10.1007/s11554-015-0499-1
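For reference, the classical bit-quad formulation of the Euler number that such pattern-counting algorithms build on can be written directly (this is Gray's textbook method, not the paper's integrated labeling scan):

```python
def euler_number(img, connectivity=8):
    """Euler number (components minus holes) of a binary image via 2x2
    bit-quad counting: E = (Q1 - Q3 +/- 2*QD) / 4, where Q1/Q3 count
    quads with exactly one/three foreground pixels and QD counts
    diagonal pairs. The sign of the QD term depends on connectivity."""
    h, w = len(img), len(img[0])
    # Zero-pad so border pixels get full 2x2 coverage.
    padded = [[0] * (w + 2)] + [[0] + list(row) + [0] for row in img] + [[0] * (w + 2)]
    q1 = q3 = qd = 0
    for r in range(h + 1):
        for c in range(w + 1):
            a, b = padded[r][c], padded[r][c + 1]
            d, e = padded[r + 1][c], padded[r + 1][c + 1]
            s = a + b + d + e
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif s == 2 and a == e:  # diagonal pair (a,e) or (b,d) set
                qd += 1
    if connectivity == 4:
        return (q1 - q3 + 2 * qd) // 4
    return (q1 - q3 - 2 * qd) // 4
```

A solid square gives 1 (one component, no holes), a ring gives 0 (one component, one hole), and two diagonally touching pixels give 1 or 2 depending on whether diagonal adjacency counts as connected.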
  • ABSTRACT: Wide-angle images have gained huge popularity in recent years due to the development of computational photography and advances in imaging technology. They present the information of a scene in a way that is more natural for the human eye but, on the other hand, they introduce artifacts such as bent lines. These artifacts become more and more unnatural as the field of view increases. In this work, we present a technique aimed at improving the perceptual quality of panorama visualization. The main ingredients of our approach are, on the one hand, treating the viewing sphere as a Riemann sphere, which makes the application of Möbius (complex) transformations to the input image natural, and, on the other hand, a projection scheme that changes as a function of the field of view used. We also introduce an implementation of our method, compare it against images produced with other methods, and show that the transformations can be done in real time, which makes our technique very appealing for new settings as well as for existing interactive panorama applications.
    Journal of Real-Time Image Processing 04/2015; DOI:10.1007/s11554-015-0502-x
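The Riemann-sphere viewpoint can be illustrated in a few lines: project a viewing direction stereographically to the complex plane, apply a Möbius map w → (aw + b)/(cw + d), and project back. This is only a sketch of the underlying geometry, not the paper's rendering pipeline, and the projection pole (0, 0, 1) must be handled separately:

```python
def sphere_to_plane(x, y, z):
    """Stereographic projection from the north pole (0, 0, 1) of the unit
    viewing sphere onto the complex plane (undefined at the pole itself)."""
    return complex(x, y) / (1.0 - z)

def plane_to_sphere(w):
    """Inverse stereographic projection back onto the unit sphere."""
    d = 1.0 + abs(w) ** 2
    return (2 * w.real / d, 2 * w.imag / d, (abs(w) ** 2 - 1) / d)

def mobius_on_sphere(p, a, b, c, d):
    """Apply the Mobius map w -> (a*w + b)/(c*w + d) to a point p = (x, y, z)
    on the viewing sphere, via the Riemann-sphere identification."""
    w = sphere_to_plane(*p)
    return plane_to_sphere((a * w + b) / (c * w + d))
```

In a real-time implementation this per-pixel mapping is a natural fit for a GPU fragment shader, since every output pixel is transformed independently.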
  • ABSTRACT: In this work, a clustering approach for bandwidth reduction in distributed smart camera networks is presented. Properties of the environment, such as camera positions and environment pathways, as well as the dynamics and features of targets, are used to limit the flood of messages in the network. To better understand the correlation between camera positioning and pathways in the scene on the one hand and the temporal and spatial properties of targets on the other, and to devise a sound messaging infrastructure, a unifying probabilistic model for object association across multiple cameras with disjoint views is used. Communication is handled efficiently using task-oriented node clustering that partitions the network into groups according to the pathways among cameras and the appearance and temporal behavior of targets. We propose a novel asynchronous event-exchange strategy to handle sporadic messages generated by non-frequent tasks in a distributed tracking application. Using a Xilinx FPGA with an embedded MicroBlaze processor, we show that, despite limited resources and speed, the embedded processor was able to sustain a high communication load while performing complex image-processing computations.
    Journal of Real-Time Image Processing 04/2015; DOI:10.1007/s11554-015-0498-2
  • ABSTRACT: The latest video coding standard, High Efficiency Video Coding (HEVC), was developed to achieve more efficient coding performance than the previous standard, H.264/AVC. To achieve this, elaborate coding tools were implemented in HEVC. Although those tools deliver higher coding performance than H.264/AVC, the encoding complexity is heavily increased. In particular, motion estimation (ME) accounts for most of the computational complexity because it is always performed for three inter-prediction modes: uni-directional prediction in List 0 (Uni-L0), uni-directional prediction in List 1 (Uni-L1), and bi-prediction (Bi). In this paper, we propose a priority-based inter-prediction mode decision method to reduce the complexity of ME caused by inter-prediction. The proposed method computes the priorities of all inter-prediction modes and decides whether ME is performed or not. Experimental results show that the proposed method reduces the computational complexity of ME by up to 55.51% while maintaining coding performance similar to HEVC test model (HM) version 10.1.
    Journal of Real-Time Image Processing 03/2015; DOI:10.1007/s11554-015-0493-7
  • ABSTRACT: The Journal of Real-Time Image Processing is entering its 10th year of publication with a steady increase in its stature in the area of image processing. The number of downloads grew to 37K+ in 2014. There are currently six special issue calls for papers, which can be viewed in the back matter of this issue. In the past few years, changes have been made to the Editorial Board to incorporate the technological trends and challenges in the field of real-time image processing. In addition, to improve the geographical representation of manuscripts being submitted, changes are being made to the Advisory Board. As a first step, we are delighted to inform our readers that Professor Touradj Ebrahimi, who is a well-known researcher and technology leader with extensive experience in image processing, has kindly agreed to join the Advisory Board. It is a great pleasure to have him on our Advisory Board. The editorial board meeting of JRTIP (see Fig. 1) was recently held in San Francisco in Februar ...
    Journal of Real-Time Image Processing 03/2015; 10(1). DOI:10.1007/s11554-015-0491-9
  • ABSTRACT: A challenging problem in spectral unmixing is how to determine the number of endmembers in a given scene. One of the most popular ways to determine the number of endmembers is by estimating the virtual dimensionality (VD) of the hyperspectral image using the well-known Harsanyi–Farrand–Chang (HFC) method. Due to the complexity and high dimensionality of hyperspectral scenes, this task is computationally expensive. Reconfigurable field-programmable gate arrays (FPGAs) are promising platforms that allow hardware/software codesign and the potential to provide powerful onboard computing capabilities and flexibility at the same time. In this paper, we present the first FPGA design for the HFC-VD algorithm. The proposed method has been implemented on a Virtex-7 XC7VX690T FPGA and tested using real hyperspectral data collected by NASA’s Airborne Visible Infra-Red Imaging Spectrometer over the Cuprite mining district in Nevada and the World Trade Center in New York. Experimental results demonstrate that our hardware version of the HFC-VD algorithm can significantly outperform an equivalent software version, which makes our reconfigurable system appealing for onboard hyperspectral data processing. Most important, our implementation exhibits real-time performance with regard to the time that the hyperspectral instrument takes to collect the image data.
    Journal of Real-Time Image Processing 01/2015; DOI:10.1007/s11554-014-0482-2
  • ABSTRACT: Super-resolution (SR) covers a set of techniques whose objective is to improve the spatial resolution of a video sequence or a single frame. In this scope, fusion SR techniques obtain high-resolution (HR) frames taking as a reference several low-resolution (LR) frames contained in a video sequence. This paper is based on a selective filter to decide the best LR frames to be used in the super-resolution process. Additionally, each frame division into macro-blocks (MBs) is analyzed both in a fixed block size approach, which decides which MBs should be used in the process, and in a variable block size approach with an adaptive MB size, which has been developed to set an appropriate frame division into MBs with variable size. These contributions not only improve the quality of video sequences, but also reduce the computational cost of a baseline SR algorithm, avoiding the incorporation of non-correlated data. Furthermore, this paper explains the way in which the enhanced algorithm proposed in it outperforms the quality of typical SR applications, such as underwater imagery, surveillance video, or remote sensing. The results are provided in a test environment to objectively compare the image quality enhancement obtained by bilinear interpolation, by the baseline SR algorithm, and by the proposed methods, thus presenting a quantitative comparison based on peak signal-to-noise ratio (PSNR) and Structural SIMilarity (SSIM) index parameters. The comparison has also been extended to other relevant proposals of the state of the art. The proposed algorithm significantly speeds up the previous ones, allowing real-time execution under certain conditions.
    Journal of Real-Time Image Processing 01/2015; DOI:10.1007/s11554-015-0489-3
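The PSNR figure of merit used in such comparisons is simple to state in code (the standard definition, included here for reference; SSIM is considerably more involved):

```python
import numpy as np

def psnr(reference, test, max_value=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).
    Higher is better; identical images give infinite PSNR."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)
```

In SR evaluation, `reference` is the ground-truth HR frame and `test` is the reconstructed one, so the metric directly quantifies the gain over bilinear interpolation.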
  • ABSTRACT: A visual cryptographic scheme (VCS) has the attractive property that decoding can be performed directly by the human visual system without any computation. This novel stacking-to-see property makes VCS a secure and cheap means of visual authentication for accessing a database. In this paper, we design a new participant-specific VCS (PSVCS) to extend the authentication scenario from single-client authentication in VCS-based authentication schemes to a group of clients with a specific threshold property. The proposed (t, k, n)-PSVCS, where t ≤ k ≤ n, needs t specific participants and (k − t) other participants to decode the secret image. Compared with previous VCS-based authentication schemes, our PSVCS-based authentication can provide different privileges to clients and is more suitable for real environments.
    Journal of Real-Time Image Processing 01/2015; DOI:10.1007/s11554-015-0511-9
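The stacking-to-see idea underlying all threshold VCS constructions is easiest to see in the classical Naor–Shamir (2, 2) scheme, sketched below (the base scheme only, not the paper's participant-specific construction):

```python
import random

# Each secret pixel expands to two subpixels. For a white secret pixel the
# two shares carry the SAME subpixel pattern; for a black pixel they carry
# COMPLEMENTARY patterns, so stacking (bitwise OR) yields all-black.
PATTERNS = [(0, 1), (1, 0)]  # 1 = opaque (black) subpixel

def make_shares(secret, rng=random):
    """Classical Naor-Shamir (2, 2) visual cryptography: split a binary
    secret image (1 = black) into two random-looking shares."""
    share1, share2 = [], []
    for row in secret:
        r1, r2 = [], []
        for pixel in row:
            p = rng.choice(PATTERNS)
            r1.extend(p)
            r2.extend(p if pixel == 0 else (1 - p[0], 1 - p[1]))
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(share1, share2):
    """Physically stacking transparencies is a bitwise OR of opacities."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(share1, share2)]
```

Either share alone contains exactly one black subpixel per secret pixel regardless of the secret, so it reveals nothing; only stacking distinguishes black (two black subpixels) from white (one).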
  • ABSTRACT: With the widespread use of the discrete wavelet transform in image processing, its efficient implementation is becoming increasingly important. This work presents several novel SIMD-vectorized algorithms for the 2-D discrete wavelet transform, using a lifting scheme. First, a stand-alone core of an already known single-loop approach is extracted. This core is further simplified by an appropriate reorganization of operations. Furthermore, the influence of the CPU cache on the 2-D processing order is examined. Finally, SIMD vectorizations and parallelizations of the proposed approaches are evaluated. The best of the proposed algorithms scale almost linearly with the number of threads. On all of the platforms used in the tests, these algorithms are significantly faster than other known methods, as shown in the experimental sections of the paper.
    Journal of Real-Time Image Processing 01/2015; DOI:10.1007/s11554-015-0486-6
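A lifting scheme of the kind vectorized in this work can be illustrated with the integer CDF 5/3 transform, the reversible filter used in JPEG 2000 (a minimal 1-D sketch with simple symmetric extension; the paper's single-loop, vectorized 2-D organization differs):

```python
def lifting_53_forward(x):
    """One level of the integer CDF 5/3 wavelet via lifting.
    Returns (approximation, detail) for an even-length integer signal."""
    n = len(x)
    assert n % 2 == 0 and n >= 2
    # Predict step: detail = odd sample minus average of even neighbours.
    d = []
    for i in range(n // 2):
        left = x[2 * i]
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]  # symmetric extension
        d.append(x[2 * i + 1] - (left + right) // 2)
    # Update step: approximation = even sample plus smoothed details.
    s = []
    for i in range(n // 2):
        dl = d[i - 1] if i > 0 else d[0]  # symmetric extension
        s.append(x[2 * i] + (dl + d[i] + 2) // 4)
    return s, d

def lifting_53_inverse(s, d):
    """Undo the lifting steps in reverse order; exact integer inverse."""
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):
        dl = d[i - 1] if i > 0 else d[0]
        x[2 * i] = s[i] - (dl + d[i] + 2) // 4
    for i in range(len(d)):
        left = x[2 * i]
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]
        x[2 * i + 1] = d[i] + (left + right) // 2
    return x
```

Because each lifting step is individually invertible, reconstruction is bit-exact; a 2-D transform applies the same 1-D steps along rows and columns, and it is these steps that map naturally onto SIMD lanes.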