Journal of Real-Time Image Processing

Publisher: Springer-Verlag

Description

  • Impact factor
    1.16
  • 5-year impact
    1.06
  • Cited half-life
    3.70
  • Immediacy index
    0.10
  • Eigenfactor
    0.00
  • Article influence
    0.43
  • Other titles
    Real-time image processing
  • ISSN
    1861-8200
  • OCLC
    73532162
  • Material type
    Document, Periodical, Internet resource
  • Document type
    Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

Springer-Verlag

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Author's pre-print on pre-print servers such as arXiv.org
    • Author's post-print on author's personal website immediately
    • Author's post-print on any open access repository 12 months after publication
    • Publisher's version/PDF cannot be used
    • Published source must be acknowledged
    • Must link to publisher version
    • Set phrase to accompany link to published version (see policy)
    • Articles in some journals can be made Open Access on payment of additional charge
  • Classification
    green

Publications in this journal

  • ABSTRACT: Hyperspectral unmixing is essential for efficient hyperspectral image processing. Nonnegative matrix factorization based on a minimum volume constraint (MVC-NMF) is one of the most widely used methods for unsupervised unmixing of hyperspectral images without the pure-pixel assumption. However, the MVC-NMF model is unstable, and the traditional solution based on the projected gradient algorithm (PG-MVC-NMF) converges slowly and with low accuracy. In this paper, a novel parallel method is proposed for minimum volume constrained hyperspectral image unmixing on a CPU–GPU heterogeneous platform. First, an optimized unmixing model based on minimum logarithmic-volume regularized NMF is introduced and solved using a second-order approximation of the objective function and the alternating direction method of multipliers (SO-MVC-NMF). Then, a parallel algorithm for the optimized MVC-NMF (PO-MVC-NMF) is proposed for the CPU–GPU heterogeneous platform, taking advantage of the parallel processing capabilities of GPUs and the logic control abilities of CPUs. Experimental results on both simulated and real hyperspectral images indicate that the proposed algorithm is more accurate and robust than the traditional PG-MVC-NMF, and that the total speedup of PO-MVC-NMF over PG-MVC-NMF exceeds 50 times. (See the corresponding sketch after this list.)
    Journal of Real-Time Image Processing 12/2014;
  • ABSTRACT: Today we are surrounded by a collection of heterogeneous computing devices such as desktop computers, laptops, smartphones, smart televisions, and tablet PCs. Each device runs its own host operating system, has its own internal architecture, and performs its independent tasks with its own applications and data. A common property among these devices, namely internet connectivity, allows them to form a personal virtual cloud system by interconnecting with each other through an intermediate switching device. The personal cloud service should provide a seamlessly unified computing environment across multiple devices with synchronized data and application programs. As such, it allows users to freely switch their workspace from one device to another while continuing to interact with the available applications. In order to support video applications, the cloud system should provide seamless video synchronization among the multiple devices. However, current cloud services do not provide efficient data flow among devices. In this paper, we propose and develop a new reliable transport protocol to support video synchronization in the personal virtual cloud system. To cope with the battery limitations of the many mobile devices in a personal virtual cloud system, the proposed protocol is designed to provide energy-efficient video communication. Our simulation results show that the proposed protocol can reduce end users' power consumption by up to 25 % compared to legacy TCP for given packet loss probabilities and average error-burst lengths.
    Journal of Real-Time Image Processing 12/2014;
  • ABSTRACT: This contribution surveys the topics covered by the special issue titled “Real-Time Motion Estimation for image and video processing applications”, which span GPUs, FPGAs, VLSI systems, DSPs, and multicores, among other platforms. The guest editors solicited original contributions addressing a wide range of theoretical and practical issues related to high-performance motion-estimation image processing including, but not limited to: real-time matching motion estimation systems, real-time energy-based motion estimation systems, gradient-based motion estimation systems, optical flow estimation systems, color motion estimation systems, multi-scale motion estimation systems, optical flow and motion estimation systems, analysis or comparison of specialized architectures for motion estimation systems, and real-world applications.
    Journal of Real-Time Image Processing 12/2014;
  • ABSTRACT: Recently, high-efficiency video coding (HEVC) has been developed as a new video coding standard focused on the coding of ultra-high-definition videos, as high-resolution and high-quality videos become more popular. However, one of the most important challenges in this new standard is its encoding time complexity, which makes it quite difficult to implement the HEVC encoder as a real-time system. In this paper, we address this problem in a new way. In a natural video sequence, a good number of coding blocks are “skip” in nature: they need not be transmitted and can be generated at the decoder side using the reference pictures. We therefore propose an early skip-detection technique for HEVC. Our method is based on identifying the motionless and homogeneous regions in a video sequence. Moreover, a novel entropy-difference calculation is proposed that can predict the skip coding blocks more accurately in a natural video sequence. Experimental results show that the proposed technique achieves more than 30 % encoding time reduction compared to the conventional HEVC encoder, with negligible degradation in video quality. (See the corresponding sketch after this list.)
    Journal of Real-Time Image Processing 12/2014;
  • ABSTRACT: This paper presents a novel fall-detection system based on the Kinect sensor. The system runs in real time and detects walking falls accurately and robustly without being misled by activities that could cause false positives (e.g. lying down on the floor). Velocity and inactivity calculations are performed to decide whether a fall has occurred. The key novelty of our approach is measuring the velocity from the contraction or expansion of the width, height and depth of the 3D bounding box. By explicitly using the 3D bounding box, our algorithm requires no prior knowledge of the scene (e.g. the floor), as the set of detected actions is adequate to complete the fall-detection process. (See the corresponding sketch after this list.)
    Journal of Real-Time Image Processing 12/2014;
  • ABSTRACT: This paper presents a fast, low-complexity architecture for computing the 16 × 16 luminance and 8 × 8 chrominance blocks of H.264 intra prediction. To save arithmetic operators and increase processing speed, we propose an architecture based on recycled computation that requires no multiplier. Center-based computation is used to generate the pixels in plane mode using two parallel cores, and the architecture is implemented with four parallel inputs and two parallel outputs to achieve high-speed processing. The architecture is a configurable structure capable of computing the parameters and generating the predicted pixels in DC and plane modes within only 239 cycles when processing YUV blocks. The high-speed prediction meets the requirements of real-time HDTV H.264 encoding. (See the corresponding sketch after this list.)
    Journal of Real-Time Image Processing 12/2014;
  • ABSTRACT: Smart cameras, i.e. cameras able to acquire and process images in real time, are a typical example of the new embedded computer vision systems. A key application example is automatic fall detection, which can help elderly people in daily life. In this paper, we propose a methodology for the development and fast prototyping of a fall-detection system based on such a smart camera, which reduces development time compared to standard approaches. Based on a supervised classification approach, we propose a HW/SW implementation to detect falls in a home environment using a single camera and an optimized descriptor adapted to real-time tasks. This heterogeneous implementation uses Xilinx's Zynq system-on-chip. The main contributions of this work are (i) a co-design methodology that allows the HW/SW partitioning to be delayed by using a high-level algorithmic description and high-level synthesis tools, enabling fast prototyping and thus fast architecture exploration and optimisation; (ii) the design of a hardware accelerator dedicated to boosting-based classification, a very popular and efficient algorithm in image analysis; and (iii) a fall-detection function embedded in a smart camera, enabling integration into the environment of elderly people. The performance of our system is finally compared to the state of the art.
    Journal of Real-Time Image Processing 10/2014;
  • ABSTRACT: New applications in which smart devices interact with other computing devices have recently provided interesting and feasible solutions in ubiquitous computing environments. In this study, we propose an interactive virtual aquarium system that uses a smart device as its user interface. We developed a virtual aquarium graphics system and a remote-interaction application for a smart device to build the interactive virtual aquarium system. We performed an experiment that demonstrates the feasibility and effectiveness of the proposed system as an example of a new type of interactive application for a smart display, in which a smart device serves as a remote user interface.
    Journal of Real-Time Image Processing 09/2014;
  • ABSTRACT: We propose a real-time approach to automatically generate photomosaic videos from a set of optimized images by taking advantage of CUDA GPU acceleration. Our approach divides an input image into smaller cells (usually rectangular) and replaces each cell with a small image of an appropriate color pattern. Photomosaics require a large set of tile images with a variety of patterns to create high-quality digital mosaic images. Because a large image database requires longer processing time and larger storage space for pattern searching, this requirement causes problems when developing real-time systems or mobile applications with limited resources. This paper presents real-time video photomosaics that use a genetic feature-selection method to build an optimized image set and exploit CUDA to accelerate pattern searching, minimizing the computation cost. (See the corresponding sketch after this list.)
    Journal of Real-Time Image Processing 09/2014; 9(3).
  • ABSTRACT: Video coding systems require a large external memory bandwidth to encode a single video frame. Many modules of current video encoders must access the external memory to read and write data, resulting in large power consumption, since memory-related power is dominant in current digital systems. Moreover, external memory access represents an important performance bottleneck in current multimedia systems. In this sense, this article presents the Reference Frame Context-Adaptive Variable-Length Coder (RFCAVLC), a low-complexity lossless solution for compressing reference data before storing them in the external memory. The proposed approach is based on Huffman codes and employs eight static code tables to avoid the cost of on-the-fly statistical analysis. The best table for encoding each block is selected at run time using a context evaluation, resulting in a context-adaptive configuration. The proposed RFCAVLC reaches an average compression ratio above 32 % for the evaluated video sequences. The RFCAVLC encoder and decoder architectures were designed and synthesized targeting an FPGA and a 65 nm TSMC standard-cell library. The RFCAVLC design reaches real-time encoding for WQSXGA (3,200 × 2,048 pixels) at 33 fps and achieves power savings in external memory communication exceeding 30 % when processing HD 1080p videos at 30 fps. (See the corresponding sketch after this list.)
    Journal of Real-Time Image Processing 08/2014;
  • ABSTRACT: High-performance video applications with real-time requirements play an important role in diverse application fields and are often executed by advanced parallel processors or GPUs. For embedded scenarios with strict energy constraints, such as automotive image processing, FPGAs represent a feasible power-efficient computing platform. Unfortunately, their hardware-driven design concept results in long development cycles and impedes their acceptance in industrial practice. Additionally, verification of the FPGA design's correctness and its performance figures are unavailable until a very late development stage, which is critical during design space exploration and integration into complex embedded systems. Weakly-programmable architectures, supporting design- and run-time reuse via flexible hardware components, represent a promising and efficient FPGA development approach. However, they currently lack suitable design and verification methodologies for real-time scenarios. Therefore, this paper proposes a system-level FPGA development concept for video applications with weakly-programmable hardware components. It combines rapid software prototyping with component-based FPGA design and advanced formal real-time analysis and code generation techniques. The presented approach enables an early verification of the application's correctness, including exact performance figures. It provides software-level verification of weakly-programmable hardware components and an automated assembly of the final hardware design. The developed tools and their usability are demonstrated with a binarization and a dense block matching application, which represents a basic preprocessing step in automotive image processing for driver assistance systems. Compared to a hand-optimized variant, the generated hardware design achieves comparable performance and chip-area figures without requiring significant hardware integration effort. (See the corresponding sketch after this list.)
    Journal of Real-Time Image Processing 07/2014;
  • ABSTRACT: Connected-component labeling and Euler number computation are two essential processing tasks for extracting object features from a binary image in pattern recognition, image analysis, and computer (robot) vision. In general, the two tasks are executed independently by different algorithms in separate scans. This paper proposes a combined algorithm that labels the connected components of a binary image and computes the Euler number of the image simultaneously. In our algorithm, for the current pixel, the two processing tasks use the same information obtained from its neighbor pixels in the same scan. Moreover, the information obtained while processing the current pixel is reused for processing the next pixel. Our method is simple in principle and powerful in practice. Experimental results demonstrate that our method is much more efficient than conventional methods on various kinds of images, both when the Euler number is computed alone and when connected-component labeling and Euler number computation are both required. (See the corresponding sketch after this list.)
    Journal of Real-Time Image Processing 06/2014;
  • ABSTRACT: Three-dimensional range data provide useful information for various computer vision and computer graphics applications, so extracting range data reliably is of utmost importance. Various range scanners based on different working principles have therefore been proposed in the literature. Among these, coded structured-light range scanners are popular and used in most industrial applications. Unfortunately, these range scanners cannot scan shiny objects reliably: either highlights on the shiny object surface or ambient light in the environment disturbs the code word, and as the code is changed, the range data extracted from it are also disturbed. In this study, we focus on developing a system that can scan shiny and matte objects under ambient light. We therefore propose color-invariant-based binary, ternary, and quaternary coded structured-light range scanners. We hypothesize that, by using color invariants, we can eliminate the effect of highlights and ambient light in the scanning process and thus extract the range data of shiny and matte objects in a robust manner. We implemented these scanners on a TI DM6437 EVM board with a flexible system setup that lets the user select the scanning type. Furthermore, we implemented a TI MSP430 microcontroller-based rotating table system that accompanies our scanner; with its help, we can obtain the range data of the target object from different viewpoints. We also implemented a range-image registration method to obtain the complete object model from the extracted range data. We tested our scanner system on various objects and provide their range and model data.
    Journal of Real-Time Image Processing 06/2014;
  • ABSTRACT: Effective compound image compression algorithms require compound images to first be segmented into regions such as text, pictures, and background in order to minimize the loss of visual quality of text during compression. In this paper, a new compound image segmentation algorithm based on the multilayer Mixed Raster Content (MRC) model (foreground/mask/background) is proposed. This algorithm first segments a compound image into different classes. Each class is then transformed into the three-layer MRC model according to the properties of that class. Finally, the foreground and background layers are compressed using JPEG 2000, and the mask layer is compressed using JBIG2. The proposed morphology-based segmentation algorithm designs a binary segmentation mask that accurately partitions a compound image into different layers, such as the background layer and the foreground layer. Experimental results show that it is more robust with respect to the font size, style, colour, orientation, and alignment of text on an uneven background. At similar bit rates, our MRC compression with the morphology-based segmentation achieves a much higher subjective quality and coding efficiency than state-of-the-art compression algorithms such as JPEG, JPEG 2000 and H.264/AVC-I.
    Journal of Real-Time Image Processing 06/2014;
  • ABSTRACT: This paper proposes a real-time pipeline for estimating camera orientation from vanishing points for indoor navigation assistance on a smartphone. The orientation of the embedded camera relies on the ability to find a reliable triplet of orthogonal vanishing points. The proposed pipeline introduces a novel sampling strategy over finite and infinite vanishing points, with random-sample-consensus-based line clustering and tracking along the video sequence, to enforce accuracy and robustness by extracting the three most pertinent orthogonal directions while preserving a short processing time for real-time applications. Experiments on real images and video sequences acquired with a smartphone show that the proposed strategy for selecting orthogonal vanishing points is pertinent, as our algorithm gives better results than the recently published RNS optimal method, in particular for the yaw angle, which is essential for the navigation task. (See the corresponding sketch after this list.)
    Journal of Real-Time Image Processing 04/2014;
  • ABSTRACT: The potential computational power of today's multicore processors has drastically improved compared to single-processor architectures. Since the trend of increasing the processor frequency is almost over, the competition for increased performance has moved to the number of cores. Consequently, the fundamental features of system designs and their associated design flows and tools need to change in order to support scalable parallelism and design portability. The same feature can be exploited to design reconfigurable hardware, such as FPGAs, which leads to rethinking the mapping of sequential algorithms to HDL. The sequential programming paradigm, widely used for programming single-processor systems, does not naturally provide explicit or implicit forms of scalable parallelism. Conversely, dataflow programming is an approach that naturally exposes parallelism and has the potential to unify SW and HDL designs on heterogeneous platforms. This study describes a dataflow-based design methodology aiming at a unified co-design and co-synthesis of heterogeneous systems. Experimental results on the implementation of a JPEG codec and an MPEG-4 SP decoder on heterogeneous platforms demonstrate the flexibility and capabilities of this design approach.
    Journal of Real-Time Image Processing 03/2014;
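
Illustrative code sketches

For the hyperspectral unmixing entry above (SO-MVC-NMF / PO-MVC-NMF), the following is a minimal NumPy sketch of projected-gradient NMF with a logarithmic-volume regularizer on the endmember matrix. The function name, step size, and regularization weight are illustrative assumptions; the paper's second-order approximation, ADMM solver, abundance sum-to-one constraint, and CPU–GPU parallelization are not reproduced here.

    import numpy as np

    def unmix_pg(Y, p, lam=0.1, eta=1e-4, delta=1e-6, iters=500, seed=0):
        """Projected-gradient NMF with a log-volume regularizer (illustrative only).

        Y is a bands x pixels matrix factored as Y ~ A @ S, with A (bands x p
        endmembers) and S (p x pixels abundances) kept nonnegative.
        """
        rng = np.random.default_rng(seed)
        bands, pixels = Y.shape
        A = rng.random((bands, p))
        S = rng.random((p, pixels))
        for _ in range(iters):
            R = A @ S - Y                                   # reconstruction residual
            grad_S = A.T @ R
            # Gradient of (lam/2) * log det(A^T A + delta*I) with respect to A.
            vol_grad = lam * A @ np.linalg.inv(A.T @ A + delta * np.eye(p))
            grad_A = R @ S.T + vol_grad
            A = np.maximum(A - eta * grad_A, 0.0)           # project onto A >= 0
            S = np.maximum(S - eta * grad_S, 0.0)           # project onto S >= 0
        return A, S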
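For the HEVC early skip-detection entry, the sketch below illustrates the general idea of flagging motionless, homogeneous coding blocks by combining a mean absolute difference with an entropy difference between co-located blocks. The function names, block size, and thresholds are assumptions made for illustration and do not reproduce the paper's actual decision rule.

    import numpy as np

    def block_entropy(block, bins=32):
        """Shannon entropy of a block's intensity histogram (hypothetical helper)."""
        hist, _ = np.histogram(block, bins=bins, range=(0, 256))
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def early_skip_hint(cur, ref, block=64, ent_thr=0.05, sad_thr=2.0):
        """Return a boolean map per coding block; True means 'likely skip' (toy thresholds)."""
        h, w = cur.shape
        hint = np.zeros((h // block, w // block), dtype=bool)
        for by in range(h // block):
            for bx in range(w // block):
                ys, xs = by * block, bx * block
                c = cur[ys:ys + block, xs:xs + block].astype(np.float32)
                r = ref[ys:ys + block, xs:xs + block].astype(np.float32)
                sad = np.abs(c - r).mean()                        # motionless test
                d_ent = abs(block_entropy(c) - block_entropy(r))  # texture-change test
                hint[by, bx] = sad < sad_thr and d_ent < ent_thr
        return hint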
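For the Kinect fall-detection entry, this toy sketch shows how a fall can be flagged from the frame-to-frame contraction and expansion of a 3D bounding box followed by a period of inactivity. The thresholds and the (width, height, depth) input format are hypothetical; the abstract does not specify the exact velocity and inactivity computations.

    import numpy as np

    def detect_fall(bboxes, fps=30, vel_thr=1.2, inactive_s=2.0):
        """Toy fall detector over a sequence of 3D bounding boxes.

        bboxes: iterable of (width, height, depth) in metres, one per frame.
        A fall is flagged when the box height collapses faster than vel_thr m/s
        while width or depth expands, and the box then stays almost unchanged
        for inactive_s seconds. Returns the frame index of the fall, or None.
        """
        boxes = np.asarray(list(bboxes), dtype=float)
        dt = 1.0 / fps
        vel = np.diff(boxes, axis=0) / dt          # per-axis expansion/contraction speed
        inactive_n = int(inactive_s * fps)
        for t in range(len(vel)):
            dw, dh, dd = vel[t]
            if dh < -vel_thr and (dw > 0 or dd > 0):      # fast vertical collapse
                after = boxes[t + 1:t + 1 + inactive_n]
                if len(after) == inactive_n and np.abs(np.diff(after, axis=0)).max() < 0.02:
                    return t
        return None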
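For the H.264 intra-prediction entry, the following sketch evaluates the Intra_16x16 plane-mode prediction equations from the H.264 standard in plain Python. It is a software reference of the mode being accelerated, not the paper's multiplier-free, center-based hardware architecture; the helper name and argument layout are assumptions.

    import numpy as np

    def plane_predict_16x16(top, left):
        """H.264 Intra_16x16 plane-mode prediction (software reference).

        top  : samples p[x, -1] for x = -1..15 (17 values; index 0 is the corner)
        left : samples p[-1, y] for y = -1..15 (17 values; index 0 is the same corner)
        Returns the 16x16 predicted luma block.
        """
        # top[i] corresponds to p[i-1, -1]; left[j] corresponds to p[-1, j-1].
        H = sum((x + 1) * (int(top[9 + x]) - int(top[7 - x])) for x in range(8))
        V = sum((y + 1) * (int(left[9 + y]) - int(left[7 - y])) for y in range(8))
        a = 16 * (int(left[16]) + int(top[16]))
        b = (5 * H + 32) >> 6
        c = (5 * V + 32) >> 6
        pred = np.empty((16, 16), dtype=np.uint8)
        for y in range(16):
            for x in range(16):
                pred[y, x] = np.clip((a + b * (x - 7) + c * (y - 7) + 16) >> 5, 0, 255)
        return pred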
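For the photomosaic entry, this naive CPU sketch shows the basic cell-to-tile matching step: each cell is replaced by the tile whose mean colour is closest. The genetic tile-set selection and the CUDA-accelerated pattern search described in the paper are omitted, and the function name is an assumption.

    import numpy as np

    def build_photomosaic(image, tiles, cell=16):
        """Replace each cell of an H x W x 3 uint8 image with the best-matching tile.

        tiles: list of cell x cell x 3 uint8 tile images (the "optimized" set).
        Image height and width are assumed to be multiples of cell for simplicity.
        """
        tile_means = np.array([t.reshape(-1, 3).mean(axis=0) for t in tiles])
        out = np.empty_like(image)
        for y in range(0, image.shape[0], cell):
            for x in range(0, image.shape[1], cell):
                patch = image[y:y + cell, x:x + cell]
                mean = patch.reshape(-1, 3).mean(axis=0)
                best = np.argmin(((tile_means - mean) ** 2).sum(axis=1))  # nearest colour pattern
                out[y:y + cell, x:x + cell] = tiles[best]
        return out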
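For the RFCAVLC entry, the sketch below illustrates the idea of choosing, per block, the cheapest of several static prefix-code tables instead of building a Huffman code on the fly. The three toy tables and the escape cost are invented for illustration; the paper uses eight tables selected through a context evaluation.

    from collections import Counter

    # Static prefix-code tables mapping symbol -> code length in bits.
    TABLES = [
        {0: 1, 1: 2, 2: 3, 3: 4},        # biased toward small residuals
        {0: 2, 1: 2, 2: 2, 3: 2},        # flat
        {0: 4, 1: 3, 2: 2, 3: 1},        # biased toward large residuals
    ]

    def choose_table(block_symbols, escape_bits=8):
        """Return (table index, cost in bits) of the cheapest static table for a block."""
        counts = Counter(block_symbols)
        best_idx, best_cost = None, float("inf")
        for idx, table in enumerate(TABLES):
            cost = sum(n * table.get(sym, escape_bits) for sym, n in counts.items())
            if cost < best_cost:
                best_idx, best_cost = idx, cost
        return best_idx, best_cost

    # A block dominated by zeros picks the first, zero-biased table.
    print(choose_table([0, 0, 0, 1, 0, 2, 0, 0]))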
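For the FPGA development-methodology entry, this is a small software reference of the dense block matching (SAD) application mentioned there, of the kind that could serve as a golden model when validating a hardware design. The block size and search radius are arbitrary assumptions.

    import numpy as np

    def block_match(prev, cur, block=8, radius=4):
        """Exhaustive SAD block matching (software reference, not the FPGA design).

        prev, cur: greyscale frames as 2-D uint8 arrays.
        Returns integer motion vectors (dy, dx) per block of the current frame.
        """
        h, w = cur.shape
        mv = np.zeros((h // block, w // block, 2), dtype=int)
        for by in range(h // block):
            for bx in range(w // block):
                y0, x0 = by * block, bx * block
                target = cur[y0:y0 + block, x0:x0 + block].astype(int)
                best = (0, 0, np.inf)
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        y, x = y0 + dy, x0 + dx
                        if y < 0 or x < 0 or y + block > h or x + block > w:
                            continue
                        cand = prev[y:y + block, x:x + block].astype(int)
                        sad = np.abs(target - cand).sum()   # sum of absolute differences
                        if sad < best[2]:
                            best = (dy, dx, sad)
                mv[by, bx] = best[:2]
        return mv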
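For the connected-component labeling and Euler number entry, the sketch below computes labels and the Euler number from the same raster scan, using union-find for the labels and bit-quad counting for the Euler number. It is a straightforward two-scan reference implementation under these assumptions, not the authors' optimized combined algorithm.

    import numpy as np

    def label_and_euler(img, connectivity=8):
        """Return (label image, component count, Euler number) of a 0/1 binary image."""
        img = np.asarray(img, dtype=np.uint8)
        pad = np.pad(img, 1)                      # zero border simplifies neighbours/quads
        h, w = pad.shape
        labels = np.zeros((h, w), dtype=int)
        parent = [0]                              # union-find forest; index 0 = background

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        def union(a, b):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[max(ra, rb)] = min(ra, rb)

        q1 = q3 = qd = 0
        for y in range(1, h):
            for x in range(1, w):
                # Euler bit-quad counting on the 2x2 window ending at (y, x).
                quad = pad[y - 1:y + 1, x - 1:x + 1]
                s = int(quad.sum())
                if s == 1:
                    q1 += 1
                elif s == 3:
                    q3 += 1
                elif s == 2 and quad[0, 0] == quad[1, 1]:
                    qd += 1                        # diagonal pattern
                if pad[y, x] == 0:
                    continue
                # First labeling pass: inherit labels from already-visited neighbours.
                nbrs = [labels[y, x - 1], labels[y - 1, x]]
                if connectivity == 8:
                    nbrs += [labels[y - 1, x - 1], labels[y - 1, x + 1]]
                nbrs = [n for n in nbrs if n > 0]
                if not nbrs:
                    parent.append(len(parent))
                    labels[y, x] = len(parent) - 1
                else:
                    labels[y, x] = min(nbrs)
                    for n in nbrs:
                        union(labels[y, x], n)
        # Second pass: resolve label equivalences to consecutive component ids.
        roots = {r: i for i, r in enumerate(sorted({find(p) for p in range(1, len(parent))}), 1)}
        out = np.vectorize(lambda v: roots[find(v)] if v else 0)(labels)[1:-1, 1:-1]
        if connectivity == 8:
            euler = (q1 - q3 - 2 * qd) // 4
        else:
            euler = (q1 - q3 + 2 * qd) // 4
        return out, len(roots), euler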
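For the vanishing-point entry, this sketch estimates a single vanishing point from a cluster of line segments as the least-squares intersection of their homogeneous line equations. The RANSAC-based clustering, the orthogonal-triplet selection, and the tracking described in the paper are not reproduced; the function name and input layout are assumptions.

    import numpy as np

    def vanishing_point(segments):
        """Least-squares vanishing point of a cluster of line segments.

        segments: array-like of shape (N, 4) with rows (x1, y1, x2, y2), assumed to
        already belong to one direction cluster. Returns the vanishing point in
        homogeneous coordinates (x, y, w); w near 0 means a point at infinity.
        """
        segs = np.asarray(segments, dtype=float)
        p1 = np.column_stack([segs[:, 0], segs[:, 1], np.ones(len(segs))])
        p2 = np.column_stack([segs[:, 2], segs[:, 3], np.ones(len(segs))])
        lines = np.cross(p1, p2)                      # homogeneous line through each segment
        lines /= np.linalg.norm(lines, axis=1, keepdims=True)
        # The vanishing point v minimizes sum_i (l_i . v)^2: smallest right singular vector.
        _, _, vt = np.linalg.svd(lines)
        return vt[-1]

    # Two nearly parallel, nearly horizontal segments meet far to the right.
    print(vanishing_point([(0, 0, 10, 0.1), (0, 5, 10, 5.05)]))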