Journal of Real-Time Image Processing

Current impact factor: 1.11

Impact Factor Rankings

2015 Impact Factor: available summer 2015
2013/2014 Impact Factor: 1.111
2012 Impact Factor: 1.156
2011 Impact Factor: 1.02
2010 Impact Factor: 0.962

Additional details

5-year impact: 1.06
Cited half-life: 3.70
Immediacy index: 0.10
Eigenfactor: 0.00
Article influence: 0.43
Other titles: Real-time image processing
ISSN: 1861-8200
OCLC: 73532162
Material type: Document, Periodical, Internet resource
Document type: Internet Resource, Computer File, Journal / Magazine / Newspaper

Publications in this journal

  • ABSTRACT: A challenging problem in spectral unmixing is determining the number of endmembers in a given scene. One of the most popular approaches is to estimate the virtual dimensionality (VD) of the hyperspectral image using the well-known Harsanyi–Farrand–Chang (HFC) method. Due to the complexity and high dimensionality of hyperspectral scenes, this task is computationally expensive. Reconfigurable field-programmable gate arrays (FPGAs) are promising platforms that allow hardware/software codesign and can provide powerful onboard computing capabilities together with flexibility. In this paper, we present the first FPGA design for the HFC-VD algorithm. The proposed method has been implemented on a Virtex-7 XC7VX690T FPGA and tested using real hyperspectral data collected by NASA’s Airborne Visible Infra-Red Imaging Spectrometer over the Cuprite mining district in Nevada and the World Trade Center in New York. Experimental results demonstrate that our hardware version of the HFC-VD algorithm significantly outperforms an equivalent software version, which makes our reconfigurable system appealing for onboard hyperspectral data processing. Most importantly, our implementation exhibits real-time performance with regard to the time the hyperspectral instrument takes to collect the image data.
    Journal of Real-Time Image Processing 01/2015; DOI:10.1007/s11554-014-0482-2
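The HFC test underlying this design compares the eigenvalues of the sample correlation and covariance matrices band by band; a band whose correlation eigenvalue significantly exceeds its covariance eigenvalue indicates a signal source. A minimal NumPy sketch of that test follows — a simplified software reference, not the paper's FPGA design, and the threshold constant is illustrative:

```python
import numpy as np

def hfc_vd(pixels, z=3.09):
    """Estimate the number of endmembers (virtual dimensionality) with a
    simplified HFC detector. pixels: (N, L) array of N pixels, L bands.
    z: Neyman-Pearson threshold multiplier (3.09 ~ false-alarm rate 1e-3).
    """
    n = pixels.shape[0]
    corr = pixels.T @ pixels / n                    # sample correlation matrix
    cov = np.cov(pixels, rowvar=False, bias=True)   # sample covariance matrix
    lam_r = np.sort(np.linalg.eigvalsh(corr))[::-1]
    lam_k = np.sort(np.linalg.eigvalsh(cov))[::-1]
    # Asymptotic std. deviation of each eigenvalue difference under the null
    sigma = np.sqrt(2.0 * (lam_r ** 2 + lam_k ** 2) / n)
    return int(np.sum(lam_r - lam_k > z * sigma))
```

For zero-mean noise the two eigenvalue spectra coincide and the estimate is near zero; any nonzero-mean signal component pushes a correlation eigenvalue above its covariance counterpart.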
  • ABSTRACT: Super-resolution (SR) covers a set of techniques whose objective is to improve the spatial resolution of a video sequence or a single frame. In this scope, fusion SR techniques obtain high-resolution (HR) frames by taking as references several low-resolution (LR) frames contained in a video sequence. This paper proposes a selective filter that decides which LR frames are best used in the super-resolution process. Additionally, the division of each frame into macro-blocks (MBs) is analyzed with both a fixed-block-size approach, which decides which MBs should be used in the process, and a variable-block-size approach, which adaptively sets an appropriate division of the frame into MBs of varying size. These contributions not only improve the quality of video sequences, but also reduce the computational cost of a baseline SR algorithm by avoiding the incorporation of non-correlated data. Furthermore, this paper shows how the proposed enhanced algorithm improves quality in typical SR applications, such as underwater imagery, surveillance video, or remote sensing. The results are provided in a test environment to objectively compare the image quality enhancement obtained by bilinear interpolation, by the baseline SR algorithm, and by the proposed methods, presenting a quantitative comparison based on peak signal-to-noise ratio (PSNR) and Structural SIMilarity (SSIM) index parameters. The comparison has also been extended to other relevant proposals from the state of the art. The proposed algorithm significantly speeds up the previous ones, allowing real-time execution under certain conditions.
    Journal of Real-Time Image Processing 01/2015; DOI:10.1007/s11554-015-0489-3
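The selective-filter idea — keep only the LR frames that are sufficiently similar to the reference frame before fusing them — can be sketched as below. The PSNR-based criterion and the 28 dB threshold are illustrative assumptions, not the paper's actual filter:

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two same-sized frames, in dB."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def select_frames(reference, candidates, threshold_db=28.0):
    """Keep only the LR frames similar enough to the reference to aid fusion;
    dissimilar (non-correlated) frames would degrade the fused HR frame."""
    return [f for f in candidates if psnr(reference, f) >= threshold_db]
```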
  • ABSTRACT: Split-and-merge segmentation is a popular region-based segmentation scheme owing to its robustness and computational efficiency, but it is hard to realize in real time for larger images or video frames because of its iterative, sequential data-flow pattern. A quad-tree data structure is popular for software implementations of the algorithm, but local parallelism is difficult to establish there due to the inherent data dependency between processes. In this paper, we propose a parallel splitting-and-merging algorithm that depends only on local operations. The algorithm is mapped onto a hierarchical cell network, a parallel version of the Locally Excitatory Globally Inhibitory Oscillator Network (LEGION). Simulation results show that the proposed design is faster than standard split-and-merge implementations without compromising segmentation quality. The timing improvement is demonstrated in a finite-state-machine-based VLSI implementation on Virtex-series FPGA platforms. We also show that, although the split-and-merge algorithm trails state-of-the-art algorithms slightly in segmentation quality, it outperforms those sophisticated and complex algorithms in computational speed. Good segmentation performance at minimal computational cost enables the proposed design to tackle real-time segmentation of live video streams. We demonstrate live PAL video segmentation using a Virtex-5 FPGA. Moreover, we have extended our design to HD resolution, where processing takes less than 5 ms per frame, yielding a throughput of 200 frames per second.
    Journal of Real-Time Image Processing 01/2015; DOI:10.1007/s11554-015-0488-4
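The split phase that the paper parallelizes is, in its classical sequential form, a recursive quad-tree subdivision driven by a homogeneity predicate. A minimal sequential sketch — using intensity range as the predicate, which is one common choice, not necessarily the paper's:

```python
import numpy as np

def split(img, x, y, size, thresh, min_size=2):
    """Recursively split a square region until each leaf is homogeneous.

    Homogeneity predicate: intensity range within the region <= thresh.
    Returns a list of (x, y, size) leaf regions.
    """
    region = img[y:y + size, x:x + size]
    if size <= min_size or region.max() - region.min() <= thresh:
        return [(x, y, size)]
    h = size // 2
    leaves = []
    for dy in (0, h):                 # recurse into the four quadrants
        for dx in (0, h):
            leaves += split(img, x + dx, y + dy, h, thresh, min_size)
    return leaves
```

It is exactly this recursion's data dependency (children cannot be processed before their parent's predicate is evaluated) that the paper's locally connected cell network removes.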
  • Journal of Real-Time Image Processing 01/2015; DOI:10.1007/s11554-014-0484-0
  • ABSTRACT: With the widespread use of the discrete wavelet transform in image processing, efficient implementations are becoming increasingly important. This work presents several novel SIMD-vectorized algorithms for the 2-D discrete wavelet transform using a lifting scheme. First, a stand-alone core of an already known single-loop approach is extracted. This core is then simplified by an appropriate reorganization of operations. Furthermore, the influence of the CPU cache on the 2-D processing order is examined. Finally, SIMD vectorizations and parallelizations of the proposed approaches are evaluated. The best of the proposed algorithms scale almost linearly with the number of threads. For all of the platforms used in the tests, these algorithms are significantly faster than other known methods, as shown in the experimental sections of the paper.
    Journal of Real-Time Image Processing 01/2015; DOI:10.1007/s11554-015-0486-6
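For reference, one level of the lifting scheme on which such implementations build can be written in a few lines. This sketch uses the CDF 5/3 (LeGall) wavelet with periodic boundary handling for brevity (codecs typically use symmetric extension), and is scalar code, not the paper's SIMD version:

```python
import numpy as np

def cdf53_forward(x):
    """One level of the 1-D CDF 5/3 (LeGall) lifting transform.

    x: 1-D signal of even length. Returns (approx, detail) coefficients.
    """
    x = np.asarray(x, dtype=np.float64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd sample minus average of neighbouring evens
    odd -= 0.5 * (even + np.roll(even, -1))
    # Update step: approx = even sample plus a quarter of neighbouring details
    even += 0.25 * (odd + np.roll(odd, 1))
    return even, odd
```

A 2-D transform applies the same pair of lifting steps along rows and then columns; the single-loop approaches in the paper fuse these passes to stay cache-friendly.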
  • ABSTRACT: In recent years, image processing techniques have attracted much attention as powerful tools in the assessment of skin lesions from multispectral images. The Kubelka-Munk Genetic Algorithm (KMGA) is a novel method developed for this diagnostic purpose. It combines the Kubelka-Munk light-tissue interaction model with genetic-algorithm optimization and allows quantitative measurement of cutaneous tissue by computing skin parameter maps such as melanin concentration, volume blood fraction, oxygen saturation, and epidermis/dermis thickness. However, its efficiency is severely reduced by the massive number of floating-point operations required for each pixel of the multispectral image, which prevents the algorithm from reaching industrial standards of cost, power, and speed for clinical applications. In this paper, our work focuses on improving this theoretical achievement. We propose a new C-based Parallel and Optimized KMGA (PO-KMGA) technique, designed and optimized in multiple ways: optimized rewriting of the KM model, massive parallelization of operations using POSIX threads, optimized memory use, routine pipelining with the Intel C++ Compiler, and more. Intensive experiments demonstrate that the PO-KMGA framework finishes in less than 10 min a job that the conventional KMGA takes around two days to complete in the same hardware environment, with similar algorithmic performance.
    Journal of Real-Time Image Processing 01/2015; DOI:10.1007/s11554-015-0494-6
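The structure of the parallelization — one independent genetic-algorithm fit per pixel, distributed across a worker pool — can be sketched as below. This is a toy one-parameter GA using Python threads, not the paper's C/POSIX-threads Kubelka-Munk inversion; all names and parameters are illustrative:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def fit_pixel(measured, model, n_gen=40, pop=20, seed=0):
    """Toy GA: find the scalar parameter p in [0, 1] minimising
    |model(p) - measured| (stand-in for a per-pixel Kubelka-Munk fit)."""
    rng = random.Random(seed)
    err = lambda p: abs(model(p) - measured)
    population = [rng.uniform(0.0, 1.0) for _ in range(pop)]
    for _ in range(n_gen):
        population.sort(key=err)
        parents = population[:pop // 2]            # selection: keep best half
        children = [min(1.0, max(0.0, rng.choice(parents) + rng.gauss(0, 0.05)))
                    for _ in range(pop - len(parents))]   # mutation
        population = parents + children
    return min(population, key=err)

def fit_image(pixels, model, workers=4):
    """Run each pixel's independent GA across a worker pool. The per-pixel
    fits share nothing, so the problem is embarrassingly parallel."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(lambda m: fit_pixel(m, model), pixels))
```

Note that CPython threads do not speed up pure-Python arithmetic; the paper's gains come from native POSIX threads over compiled C code, which this sketch only mirrors structurally.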
  • ABSTRACT: In this paper, a fast inter-coding algorithm is proposed to reduce the computational load of HEVC encoders. The HEVC reference model (HM) employs a recursive depth-first search (DFS) of the quad-tree, using rate-distortion (RD) optimization to select the best CU, PU, and TU partitions and their associated coding modes. The proposed algorithm evaluates the RD costs of the current CU only for its square-type PUs in the top-down pass of the DFS. When, in the bottom-up pass, the CU partition with the square-type PU has a lower RD cost than its sub-level CU partitions, the square type of the current CU partition, along with its coding mode, is selected as the best partition. Otherwise, non-square-type PUs for the current CU level are evaluated. If the sub-partition is better than the CU with the non-square PUs, the sub-partition is finally selected as the optimal PU. Otherwise, the best non-square PU is selected as the best PU for the current level. Experimental results demonstrate that the proposed square-type-first inter-PU search reduces average encoding time by 66.7 % with a 1-2 % BD loss relative to the HM reference software. In addition, the proposed algorithm yields an additional average time saving of 26.8-35.5 % against the three fast encoding algorithms adopted in HM.
    Journal of Real-Time Image Processing 01/2015; DOI:10.1007/s11554-015-0487-5
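The square-type-first decision logic described above can be sketched as a recursion over a cost-annotated quad-tree. The RD costs are supplied here as plain numbers, standing in for HM's mode-decision routines; counting the evaluations shows where the non-square search is skipped:

```python
def best_partition(node):
    """Square-type-first PU decision for one CU (simplified sketch).

    node: dict with keys 'square' and 'nonsquare' (RD costs of the two PU
    families at this level) and 'children' (four sub-CUs, or [] at the
    smallest CU size). Returns (best RD cost, number of cost evaluations).
    """
    evals = 1                                   # square-type PUs always tried
    if not node["children"]:
        return node["square"], evals
    sub_cost = 0
    for child in node["children"]:              # bottom-up costs of sub-CUs
        c, e = best_partition(child)
        sub_cost += c
        evals += e
    if node["square"] <= sub_cost:              # early out: skip non-square PUs
        return node["square"], evals
    evals += 1                                  # only now try non-square PUs
    return min(sub_cost, node["nonsquare"]), evals
```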
  • ABSTRACT: Stereo-to-multi-view conversion plays an important role in the development and promotion of three-dimensional television, as it can provide an adequate supply of high-quality 3D content for autostereoscopic displays. This paper focuses on a real-time implementation of a stereo-to-multi-view conversion system, the major parts of which are adaptive meshing, sparse stereo correspondence, energy-equation construction, and virtual-view rendering. To achieve real-time performance, we make three main contributions. First, we introduce adaptive meshing to reduce the computational complexity at the expense of a slight decrease in quality. Second, we use a simple and effective method based on block matching to generate the sparse disparity map. Third, for the block-saliency calculation, sparse stereo correspondence, and view-synthesis modules, novel parallelization strategies and fine-grained optimization techniques based on graphics processing units are used to accelerate execution. Experimental results show that the system achieves real-time and semi-real-time performance when rendering 8 views at resolutions of 1280 × 720 and 1920 × 1080, respectively, on a Tesla K20. The resulting images and videos are both visually realistic and comfortable to view.
    Journal of Real-Time Image Processing 01/2015; DOI:10.1007/s11554-015-0490-x
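The sparse-correspondence step rests on classical SAD block matching over a rectified stereo pair; a minimal sketch for one block (block size and search range are illustrative):

```python
import numpy as np

def block_disparity(left, right, x, y, block=8, max_disp=16):
    """SAD block matching: disparity of the block at (x, y) in the left view.

    For a rectified pair, a left-image point at column x appears at column
    x - d in the right image, so we search leftward shifts only.
    """
    patch = left[y:y + block, x:x + block].astype(np.int32)
    best, best_d = None, 0
    for d in range(0, min(max_disp, x) + 1):
        cand = right[y:y + block, x - d:x - d + block].astype(np.int32)
        sad = np.abs(patch - cand).sum()        # sum of absolute differences
        if best is None or sad < best:
            best, best_d = sad, d
    return best_d
```

Evaluating this only at a sparse set of salient blocks, rather than densely, is what keeps the correspondence stage cheap enough for real time.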
  • ABSTRACT: Hyperspectral unmixing is essential for efficient hyperspectral image processing. Nonnegative matrix factorization with a minimum volume constraint (MVC-NMF) is one of the most widely used methods for unsupervised unmixing of hyperspectral images without the pure-pixel assumption. However, the MVC-NMF model is unstable, and the traditional solution based on the projected-gradient algorithm (PG-MVC-NMF) converges slowly and with low accuracy. In this paper, a novel parallel method is proposed for minimum-volume-constrained hyperspectral image unmixing on a CPU–GPU heterogeneous platform. First, an optimized unmixing model with minimum logarithmic-volume-regularized NMF is introduced and solved using a second-order function approximation and the alternating direction method of multipliers (SO-MVC-NMF). Then, a parallel algorithm for the optimized MVC-NMF (PO-MVC-NMF) is proposed for the CPU–GPU heterogeneous platform, taking advantage of the parallel processing capabilities of GPUs and the logic-control abilities of CPUs. Experimental results on both simulated and real hyperspectral images indicate that the proposed algorithm is more accurate and robust than the traditional PG-MVC-NMF, and the total speedup of PO-MVC-NMF over PG-MVC-NMF exceeds 50 times.
    Journal of Real-Time Image Processing 12/2014; DOI:10.1007/s11554-014-0479-x
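As background, the NMF core that MVC-NMF extends can be sketched with the classical Lee-Seung multiplicative updates. This baseline omits the minimum-volume regularizer and the paper's second-order/ADMM solver entirely; it only shows the factorization X ≈ WH that underlies the unmixing model (W: endmember signatures, H: abundances):

```python
import numpy as np

def nmf(X, k, iters=200, seed=0):
    """Plain NMF by Lee-Seung multiplicative updates, minimising ||X - WH||_F.

    X: nonnegative (n, m) data matrix; k: number of components.
    Baseline only -- no minimum-volume term on W as in MVC-NMF.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)   # update abundances
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)   # update endmembers
    return W, H
```

The updates preserve nonnegativity by construction, which is why they are a common starting point before adding unmixing-specific constraints.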
  • ABSTRACT: Today, we are surrounded by a collection of heterogeneous computing devices such as desktop computers, laptops, smartphones, smart televisions, and tablet PCs. Each device runs its own host operating system, has its own internal architecture, and performs its independent tasks with its own applications and data. A property common to these devices, namely internet connectivity, allows them to form a personal virtual cloud system by interconnecting through an intermediate switching device. The personal cloud service should provide a seamlessly unified computing environment across multiple devices with synchronized data and application programs, allowing users to freely switch their workspace from one device to another while continuing to interact with the available applications. To support video applications, the cloud system should provide seamless video synchronization among the devices. However, current cloud services do not provide efficient data flow among devices. In this paper, we propose and develop a new reliable transport protocol to support video synchronization in the personal virtual cloud system. To cope with the battery limitations of the many mobile devices in a personal virtual cloud system, the proposed protocol is designed to provide energy-efficient video communication. Our simulation results show that the proposed protocol can reduce end users' power consumption by up to 25 % compared to legacy TCP for given packet-loss probabilities and average error-burst lengths.
    Journal of Real-Time Image Processing 12/2014; DOI:10.1007/s11554-014-0475-1
  • ABSTRACT: High Efficiency Video Coding (HEVC) was developed by the Joint Collaborative Team on Video Coding to replace the current H.264/AVC standard, which has been widely adopted over the last few years. Consequently, there is a lot of legacy content encoded with H.264/AVC, and an efficient conversion to HEVC is needed. This paper presents a hybrid transcoding algorithm that makes use of soft computing techniques as well as parallel processing. On the one hand, a fast quadtree-level decision algorithm exploits the information gathered at the H.264/AVC decoder to make faster decisions on coding-unit splitting in HEVC, using a Naïve Bayes probabilistic classifier determined by a supervised data mining process. On the other hand, a parallel HEVC-encoding algorithm makes use of a heterogeneous platform composed of a multi-core central processing unit plus a graphics processing unit (GPU). In this way, from a coarse point of view, groups of frames or rows of a frame (both options are possible) are divided into threads to be executed on each core (each of which executes one of the aforementioned classifiers) and, from a finer point of view, all these threads work collaboratively on a single GPU to perform the motion estimation process on the co-processor. Experimental results show that the proposed transcoder achieves a good tradeoff between coding efficiency and complexity compared with the anchor transcoder.
    Journal of Real-Time Image Processing 12/2014; DOI:10.1007/s11554-014-0477-z
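The quadtree-level decision rests on a Gaussian Naïve Bayes classifier trained on features from the H.264/AVC decoder. A minimal self-contained version is below; the two-feature split/no-split setup is hypothetical, standing in for whatever features the paper's data mining process selects:

```python
import math

class GaussianNB:
    """Tiny Gaussian Naive Bayes classifier (sketch of the transcoder's
    split/no-split decision; features here are hypothetical)."""

    def fit(self, X, y):
        self.stats = {}
        for cls in set(y):
            rows = [x for x, label in zip(X, y) if label == cls]
            n = len(rows)
            means = [sum(col) / n for col in zip(*rows)]
            var = [sum((v - m) ** 2 for v in col) / n + 1e-9   # avoid zero var
                   for col, m in zip(zip(*rows), means)]
            self.stats[cls] = (n / len(y), means, var)         # prior + params
        return self

    def predict(self, x):
        def log_post(prior, means, var):
            lp = math.log(prior)
            for v, m, s2 in zip(x, means, var):
                lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
            return lp
        return max(self.stats, key=lambda c: log_post(*self.stats[c]))
```

The appeal in a transcoder is that a prediction costs a handful of multiplications, versus a full RD evaluation of both split options.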
  • ABSTRACT: This contribution surveys the topics covered by the special issue titled “Real-Time Motion Estimation for image and video processing applications”, which incorporate GPUs, FPGAs, VLSI systems, DSPs, and multicores, among other platforms. The guest editors solicited original contributions addressing a wide range of theoretical and practical issues related to high-performance motion estimation, including, but not limited to: real-time matching-based motion estimation systems, real-time energy-based motion estimation systems, gradient-based motion estimation systems, optical-flow estimation systems, color motion estimation systems, multi-scale motion estimation systems, analysis or comparison of specialized architectures for motion estimation systems, and real-world applications.
    Journal of Real-Time Image Processing 12/2014; DOI:10.1007/s11554-014-0478-y
  • ABSTRACT: Recently, High Efficiency Video Coding (HEVC) has been developed as a new video coding standard focused on the coding of ultrahigh-definition video, as high-resolution, high-quality videos become more popular. However, one of the most important challenges of the new standard is its encoding time complexity, which makes it quite difficult to implement the HEVC encoder as a real-time system. In this paper, we address this problem in a new way. In a natural video sequence, a large proportion of coding blocks are generally “skip” blocks, which need not be transmitted and can be generated at the decoder side using the reference pictures. We propose an early skip-detection technique for HEVC, based on identifying the motionless and homogeneous regions in a video sequence. Moreover, a novel entropy-difference calculation is proposed that predicts skip coding blocks more accurately in a natural video sequence. Experimental results show that the proposed technique achieves more than a 30 % reduction in encoding time compared with the conventional HEVC encoder, with negligible degradation in video quality.
    Journal of Real-Time Image Processing 12/2014; DOI:10.1007/s11554-014-0476-0
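The entropy-difference idea can be sketched as follows: compare a block's histogram entropy and pixel content against the co-located block in the reference picture, and flag near-identical blocks as skip candidates so the encoder can bypass their full mode search. The thresholds here are illustrative, not the paper's tuned values:

```python
import numpy as np

def block_entropy(block, bins=16):
    """Shannon entropy of a block's intensity histogram, in bits."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def is_skip_candidate(cur, ref, ent_thresh=0.05, sad_thresh=2.0):
    """Flag a block as a likely skip block: nearly identical entropy and
    content versus the co-located reference block (motionless, homogeneous
    change). Thresholds are illustrative."""
    d_ent = abs(block_entropy(cur) - block_entropy(ref))
    mean_ad = np.abs(cur.astype(np.int16) - ref.astype(np.int16)).mean()
    return bool(d_ent <= ent_thresh and mean_ad <= sad_thresh)
```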
  • ABSTRACT: This paper presents a fast, low-complexity architecture for computing 16 × 16 luminance and 8 × 8 chrominance blocks in H.264 intra prediction. To conserve arithmetic operators and increase processing speed, we propose an architecture based on recycled computation that requires no multiplier. Center-based computation generates the pixels in plane mode using two parallel cores, and the architecture is implemented with four parallel inputs and two parallel outputs to achieve high-speed processing. The configurable structure computes the parameters and generates the predicted pixels in DC and plane modes within only 239 cycles when processing YUV blocks. This high-speed prediction for H.264 encoding meets the requirements of real-time HDTV.
    Journal of Real-Time Image Processing 12/2014; DOI:10.1007/s11554-012-0245-x
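For context, the plane-mode computation that the architecture accelerates is defined by the H.264 standard (Intra_16x16 plane prediction): a least-squares-like plane fitted from the reconstructed top and left neighbours via the H, V gradient sums. A direct scalar transcription of that definition (a software reference, not the paper's multiplier-free hardware):

```python
import numpy as np

def intra16x16_plane(top, left):
    """H.264 Intra_16x16 plane-mode luma prediction.

    top, left: length-17 neighbour arrays; index 0 holds the corner sample
    p[-1,-1], indices 1..16 run along the top row / left column.
    Returns the 16x16 predicted block as uint8.
    """
    # Horizontal and vertical gradient sums over the border samples
    h = sum(i * (int(top[8 + i]) - int(top[8 - i])) for i in range(1, 9))
    v = sum(i * (int(left[8 + i]) - int(left[8 - i])) for i in range(1, 9))
    a = 16 * (int(top[16]) + int(left[16]))
    b = (5 * h + 32) >> 6
    c = (5 * v + 32) >> 6
    y, x = np.mgrid[0:16, 0:16]
    pred = (a + b * (x - 7) + c * (y - 7) + 16) >> 5
    return np.clip(pred, 0, 255).astype(np.uint8)
```

The multiplications by b and c along x and y are exactly what the paper's center-based, recycled-computation datapath replaces with incremental additions.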
  • Journal of Real-Time Image Processing 09/2014; 9(3):579-585. DOI:10.1007/s11554-014-0398-x