Cedric Nishan Canagarajah

University of Bristol, Bristol, England, United Kingdom


Publications (50) · 7.5 Total impact

  • Artur Loza, Lyudmila Mihaylova, David R. Bull, Cedric Nishan Canagarajah
  • Nawat Kamnoonwatana, Dimitris Agrafiotis, Cedric Nishan Canagarajah
    ABSTRACT: A channel-adaptive multiple description video codec is presented with flexible redundancy allocation based on modeling and minimization of the end-to-end distortion. We employ a three-loop multiple description coding scheme for which we develop models that estimate the rate-distortion performance of the side encoders as well as the overall end-to-end distortion given channel statistics. A simple yet effective algorithm is formulated for determining appropriate levels of redundancy given a total bit rate and channel estimates in the form of packet error rates. The experimental results presented validate the proposed models over various channel conditions. The performance and adaptivity of the codec are evaluated through extensive simulations with a 2 × 2 wireless multiple-input multiple-output system. A gain of more than 10 dB can be achieved compared to a non-adaptive system, and even larger gains can be obtained relative to typical single description transmissions.
    IEEE Transactions on Circuits and Systems for Video Technology 01/2012; 22:1-11. · 1.82 Impact Factor
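    As an illustration of the redundancy allocation idea described in the abstract above, the sketch below grid-searches for the redundancy fraction that minimises a modelled end-to-end distortion given a packet error rate. The exponential rate-distortion forms, the assumption of two independently lost descriptions, and the function and parameter names (expected_distortion, best_redundancy, rho, sigma2, d_loss) are illustrative stand-ins, not the models developed in the paper.

```python
# Illustrative sketch only: choose the redundancy level of a two-description
# codec by minimising a modelled end-to-end distortion for a given packet
# error rate.  The rate-distortion forms are generic textbook stand-ins.
import numpy as np

def expected_distortion(rho, rate, p, sigma2=1.0, d_loss=1.0):
    """rho: fraction of the bit budget spent on redundancy (0..1);
    rate: total bit rate; p: packet error rate per description (independent)."""
    d_central = sigma2 * 2.0 ** (-2.0 * (1.0 - rho) * rate)  # both descriptions arrive
    d_side = sigma2 * 2.0 ** (-2.0 * rho * rate / 2.0)       # only one description arrives
    return (1 - p) ** 2 * d_central + 2 * p * (1 - p) * d_side + p ** 2 * d_loss

def best_redundancy(rate, p, grid=np.linspace(0.01, 0.99, 99)):
    """Grid search for the redundancy fraction minimising the modelled distortion."""
    costs = [expected_distortion(rho, rate, p) for rho in grid]
    return float(grid[int(np.argmin(costs))])

if __name__ == "__main__":
    for p in (0.01, 0.05, 0.20):
        print(f"packet error rate {p:.2f} -> redundancy {best_redundancy(4.0, p):.2f}")
```

    Under this toy model the search allocates more redundancy as the packet error rate grows, which is the adaptive behaviour the codec is designed to exploit.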
  • ABSTRACT: The performance of an OFDM system can be improved by performing per-subcarrier antenna selection, which facilitates the exploitation of frequency and spatial diversity in the wireless channel. In this paper, we extend the concept of per-subcarrier antenna selection to a multiuser cognitive radio environment and present a practical subcarrier and antenna selection algorithm that exploits the multiuser, frequency and spatial diversities inherent in such systems while requiring only limited channel knowledge. The problem is formulated as an integer programming (IP) problem. We demonstrate that a linear relaxation of the problem still leads to an optimal solution, thus reducing the computational complexity relative to other approaches found in the literature. Simulation results demonstrate that the proposed resource allocation scheme leads to an improvement in secondary users' link qualities, compared to a single-input single-output system, and, at the same time, limits the interference to the primary user.
    Proceedings of IEEE International Conference on Communications, ICC 2011, Kyoto, Japan, 5-9 June, 2011; 01/2011
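    To make the linear-relaxation point above concrete, the toy sketch below assigns each subcarrier to exactly one (user, antenna) pair by solving the relaxed LP with scipy.optimize.linprog; for this simplified constraint structure the relaxed solution comes out integral. The random Rayleigh utilities, the problem dimensions and the omission of the primary-user interference constraints are assumptions made purely for illustration, not the paper's formulation.

```python
# Toy LP relaxation of a per-subcarrier (user, antenna) assignment problem.
# Maximise total utility subject to one (user, antenna) pair per subcarrier;
# the relaxed solution is integral for this constraint structure.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_sub, n_pairs = 8, 6                              # subcarriers x (user, antenna) pairs
utility = rng.rayleigh(size=(n_sub, n_pairs))      # e.g. per-subcarrier channel gains

c = -utility.ravel()                               # linprog minimises, so negate
A_eq = np.zeros((n_sub, n_sub * n_pairs))
for k in range(n_sub):                             # exactly one pair per subcarrier
    A_eq[k, k * n_pairs:(k + 1) * n_pairs] = 1.0
b_eq = np.ones(n_sub)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
x = res.x.reshape(n_sub, n_pairs)
print("relaxed solution integral:", bool(np.allclose(x, np.round(x))))
print("chosen (user, antenna) index per subcarrier:", x.argmax(axis=1))
```

    The paper's formulation is richer (it also limits interference towards the primary user), but the same relaxation argument is what keeps the complexity low.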
  • EURASIP Journal on Wireless Communications and Networking 01/2011; 2011.
  • IEEE Transactions on Circuits and Systems for Video Technology 01/2010; 20:473-484.
  • Artur Loza, Lyudmila Mihaylova, David R. Bull, Cedric Nishan Canagarajah
    ABSTRACT: This paper addresses the problem of object tracking in video sequences for surveillance applications by using a recently proposed structural similarity-based image distance measure. Multimodality surveillance videos pose specific challenges to tracking algorithms, due to, for example, low or variable light conditions and the presence of spurious or camouflaged objects. These factors often cause undesired luminance and contrast variations in videos produced by infrared sensors (due to varying thermal conditions) and visible sensors (e.g., the object entering shadowy areas). Commonly used colour and edge histogram-based trackers often fail in such conditions. In contrast, the structural similarity measure reflects the distance between two video frames by jointly comparing their luminance, contrast and spatial characteristics and is sensitive to relative rather than absolute changes in the video frame. In this work, we show that the performance of a particle filter tracker is improved significantly when the structural similarity-based distance is applied instead of the conventional Bhattacharyya histogram-based distance. Extensive evaluation of the proposed algorithm is presented together with comparisons with colour, edge and mean-shift trackers using real-world surveillance video sequences from multimodal (infrared and visible) cameras.
    Machine Vision and Applications 01/2009; 20:71-83. · 1.10 Impact Factor
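    A minimal sketch of the measurement step described above, assuming grey-scale frames scaled to [0, 1] and using scikit-image's structural_similarity as a stand-in for the paper's structural similarity based distance: candidate patches proposed by the particles are scored against a reference template and the scores are mapped to particle weights, replacing the Bhattacharyya histogram distance. The helper names (extract_patch, ssim_weights) and the exponential likelihood mapping with gain beta are hypothetical.

```python
# Sketch of SSIM-based particle weighting for a bootstrap particle filter.
import numpy as np
from skimage.metrics import structural_similarity

def extract_patch(frame, cx, cy, h, w):
    """Crop an h x w patch centred on (cx, cy), clipped to the frame borders."""
    y0 = int(np.clip(cy - h // 2, 0, frame.shape[0] - h))
    x0 = int(np.clip(cx - w // 2, 0, frame.shape[1] - w))
    return frame[y0:y0 + h, x0:x0 + w]

def ssim_weights(frame, particles, template, beta=10.0):
    """particles: (N, 2) array of candidate centres (cx, cy); template: reference patch."""
    h, w = template.shape
    scores = np.array([
        structural_similarity(extract_patch(frame, cx, cy, h, w), template,
                              data_range=1.0)
        for cx, cy in particles
    ])
    weights = np.exp(beta * scores)          # turn similarity scores into likelihoods
    return weights / weights.sum()

# Usage idea: propagate the particles with a motion model each frame, call
# ssim_weights(), resample, and take the weighted mean as the object state.
```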
  • Arasanathan Anjulan, Cedric Nishan Canagarajah
    ABSTRACT: This paper describes a unified framework for an object mining system for videos, which combines shot segmentation, clustering, retrieval and object mining using a single set of detected local invariant regions. The local invariant regions are tracked throughout a shot and stable tracks are extracted. The conventional key frame method is replaced with these stable tracks of local regions to characterize different shots. A grouping technique is introduced to combine the stable tracks into meaningful object clusters. These clusters are used to mine similar objects. Compared to other object mining systems, our approach mines more instances of similar objects in different shots. The proposed framework is applied to full-length feature films and the results are compared with state-of-the-art methods.
    IEEE Transactions on Circuits and Systems for Video Technology 01/2009; 19:63-76. · 1.82 Impact Factor
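    The grouping step could look roughly like the sketch below, which summarises each stable track by the mean of its local region descriptors and greedily merges tracks whose summaries are close in cosine distance to a cluster's running mean. The paper's grouping criterion is more elaborate, so treat the threshold, function name and data layout here as assumptions.

```python
# Simplified sketch: greedily group stable tracks into object clusters by
# comparing mean descriptors with cosine distance.
import numpy as np

def group_tracks(track_descriptors, max_cosine_dist=0.2):
    """track_descriptors: list of (n_i, d) arrays, one per stable track."""
    summaries = [np.mean(t, axis=0) for t in track_descriptors]
    summaries = [s / (np.linalg.norm(s) + 1e-12) for s in summaries]
    clusters, means = [], []                        # member indices and unit means
    for i, s in enumerate(summaries):
        dists = [1.0 - float(s @ m) for m in means]
        if dists and min(dists) < max_cosine_dist:
            j = int(np.argmin(dists))
            clusters[j].append(i)
            m = np.mean([summaries[k] for k in clusters[j]], axis=0)
            means[j] = m / (np.linalg.norm(m) + 1e-12)
        else:
            clusters.append([i])                    # start a new object cluster
            means.append(s)
    return clusters

# Mining a query object then amounts to matching its descriptors against the
# cluster means and returning the shots that contributed tracks to the cluster.
```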
  • ABSTRACT: This paper presents a novel dynamically reconfigurable hardware architecture for lossless compression and its optimization for space imagery. The proposed system makes use of reconfiguration to support optimal modeling strategies adaptively for data with different dimensions. The advantage of the proposed system is the efficient combination of different compression functions. For image data, we propose a new multi-mode image model which can detect the local features of the image and use different modes to encode regions with different features. Experimental results show that our system improves the compression ratio of space images while maintaining low complexity and high throughput.
    Reconfigurable Computing: Architectures, Tools and Applications, 4th International Workshop, ARC 2008, London, UK, March 26-28, 2008. Proceedings; 01/2008
  • Izhar Zaidi, Atukem Nabina, Cedric Nishan Canagarajah, José L. Núñez-Yáñez
    ABSTRACT: This paper explores the utilization of run-time Partial Dynamic Reconfiguration in the LEON3 open-source soft-core processor, which is a highly configurable SPARC (Scalable Processor ARChitecture) V8 instruction set processor. The work explores the possibilities of sharing different arithmetic functions tightly coupled to the integer pipeline and mapped to the same silicon area, saving power consumption and area utilisation. The same strategy can be used to extend the instruction set architecture of the processor with new instructions that are optimized for DSP applications. The logic necessary to support these instructions could then be swapped in and out as demanded by the application.
    11th Euromicro Conference on Digital System Design: Architectures, Methods and Tools, DSD 2008, Parma, Italy, September 3-5, 2008; 01/2008
  • Nawat Kamnoonwatana, Dimitris Agrafiotis, Cedric Nishan Canagarajah
    ABSTRACT: A novel technique for exploring the use of indexing metadata to improve coding efficiency is proposed in this paper. The technique uses an MPEG-7 descriptor as the basis for a fast mode decision algorithm for H.264/AVC encoders. The descriptor is used to form homogeneous clusters for each frame, within which a limited set of available coding modes is defined for each macroblock. The coding mode of an already coded macroblock that belongs to the same cluster in the same frame, as well as the statistics of the coding modes of similar clusters in previous frames, are used to limit the range of available coding modes within each cluster. The results show that the proposed algorithm achieves an average time saving of 47% compared to the full search method and 21% compared to the fast mode decision algorithm employed in the JM12.2 reference H.264 software encoder. In both cases, there is only a small degradation in rate-distortion performance and a negligible loss in subjective quality.
    Proceedings of the International Conference on Image Processing, ICIP 2008, October 12-15, 2008, San Diego, California, USA; 01/2008
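    The sketch below illustrates only the general mechanism of cluster-conditioned mode restriction, not the paper's MPEG-7-descriptor algorithm or its H.264/AVC integration: each cluster accumulates statistics of previously chosen macroblock modes and, once enough history exists, later macroblocks in the cluster search only the most frequent modes. The mode names, thresholds and class name are illustrative assumptions.

```python
# Illustrative sketch of cluster-conditioned mode restriction only.
from collections import defaultdict

ALL_MODES = ["SKIP", "P16x16", "P16x8", "P8x16", "P8x8", "I4x4", "I16x16"]

class ClusterModeHistory:
    """Tracks which coding modes have been chosen within each cluster."""

    def __init__(self, min_hits=4, keep=3):
        self.counts = defaultdict(lambda: defaultdict(int))  # cluster -> mode -> hits
        self.min_hits = min_hits
        self.keep = keep

    def candidate_modes(self, cluster_id):
        """Modes worth evaluating for a macroblock belonging to cluster_id."""
        seen = self.counts[cluster_id]
        if sum(seen.values()) < self.min_hits:
            return list(ALL_MODES)                  # not enough statistics yet
        ranked = sorted(seen, key=seen.get, reverse=True)
        return ranked[:self.keep]                   # most frequent past decisions

    def record(self, cluster_id, chosen_mode):
        """Update the statistics once rate-distortion optimisation picks a mode."""
        self.counts[cluster_id][chosen_mode] += 1

# Usage idea: look up each macroblock's cluster id from the descriptor, run the
# mode search over candidate_modes() only, then record() the winning mode.
```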
  • IEEE Transactions on Wireless Communications. 01/2007; 6:3589-3599.
  • Kart Lim, Cedric Nishan Canagarajah, Alin Achim
    ABSTRACT: We propose a novel algorithm for the de-speckling of SAR images which exploits a priori statistical information from both the spatial and wavelet domains. In the spatial domain, we apply the Method-of-Log-Cumulants (MoLC), which is based on the Mellin transform, in order to locally estimate parameters corresponding to an assumed Generalized Gaussian Rayleigh (GGR) model for the image. We then compute classical cumulants for the image and speckle models and relate them to their wavelet-domain counterparts. Using wavelet cumulants, we separately derive parameters corresponding to an assumed generalized Gaussian (GG) model for the image and noise wavelet coefficients. Finally, we feed the resulting parameters into a Bayesian maximum a posteriori (MAP) estimator, which is applied to the wavelet coefficients of the log-transformed SAR image. Our proposed method outperforms several recently proposed de-speckling techniques both visually and in terms of different objective measures.
    Advances in Multimedia Information Processing - PCM 2007, 8th Pacific Rim Conference on Multimedia, Hong Kong, China, December 11-14, 2007, Proceedings; 01/2007
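    A rough sketch of the processing chain only, with the paper's locally estimated generalized Gaussian MAP estimator replaced by its Laplacian-prior special case, for which the MAP estimate reduces to soft thresholding: log-transform to make speckle approximately additive, wavelet decomposition with PyWavelets, shrinkage of the detail bands, reconstruction and exponentiation. The fixed noise_sigma and prior_scale values stand in for the locally estimated parameters and are assumptions.

```python
# Sketch of log-transform + wavelet-domain MAP-style shrinkage for despeckling.
import numpy as np
import pywt

def despeckle_log_wavelet(sar_image, wavelet="db4", level=3,
                          noise_sigma=0.3, prior_scale=0.5):
    log_img = np.log(sar_image + 1e-6)                  # multiplicative -> additive noise
    coeffs = pywt.wavedec2(log_img, wavelet, level=level)
    threshold = noise_sigma ** 2 / prior_scale           # Laplacian-prior MAP rule
    shrunk = [coeffs[0]]                                  # keep the approximation band
    for detail_bands in coeffs[1:]:
        shrunk.append(tuple(pywt.threshold(band, threshold, mode="soft")
                            for band in detail_bands))
    return np.exp(pywt.waverec2(shrunk, wavelet))
```

    In the paper these parameters are not constants: they are estimated locally (MoLC in the spatial domain, cumulant relations in the wavelet domain) before the MAP shrinkage is applied.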
  • Arasanathan Anjulan, Cedric Nishan Canagarajah
    ABSTRACT: This paper describes a novel object mining system for videos. An algorithm published in a previous paper by the authors is used to segment the video into shots and to extract stable tracks from them. A grouping technique is introduced to combine these stable tracks into meaningful object clusters. These clusters are used to mine similar objects. Compared to other object mining systems, our approach mines more instances of similar objects in different shots. The proposed framework is applied to a full-length feature film and improved results are shown.
    Multimedia Content Analysis and Mining, International Workshop, MCAM 2007, Weihai, China, June 30 - July 1, 2007, Proceedings; 01/2007
  • Timothy M. A. Smith, David W. Redmill, Cedric Nishan Canagarajah, David R. Bull
    ABSTRACT: Traditional volumetric scene reconstruction algorithms involve the evaluation of many millions of voxels, which is highly time consuming. This paper presents an efficient algorithm based on future frame prediction that can dramatically reduce the number of voxels to be evaluated in time-varying scenes. The new prediction method, combining scene flow and morphological dilations, is evaluated against a simple model dilation method. Results show the proposed method outperforms the simple dilation method and has the potential to improve the efficiency of volumetric scene reconstruction algorithms while retaining quality, given accurate optical flows.
    Proceedings of the British Machine Vision Conference 2007, University of Warwick, UK, September 10-13, 2007; 01/2007
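    A hedged sketch of the prediction step, under assumed array interfaces (not the paper's code): the current frame's occupied voxels are shifted along a per-voxel scene-flow estimate and the result is morphologically dilated, yielding the reduced set of voxels to re-evaluate in the next frame. The function and argument names are hypothetical.

```python
# Sketch: predict which voxels need re-evaluation by shifting the current
# occupancy along the scene flow and dilating the result.
import numpy as np
from scipy.ndimage import binary_dilation

def predict_active_voxels(occupancy, scene_flow, dilation_iters=2):
    """occupancy: boolean (X, Y, Z) grid for the current frame.
    scene_flow: float (X, Y, Z, 3) per-voxel displacement in voxel units."""
    predicted = np.zeros_like(occupancy)
    idx = np.argwhere(occupancy)                        # occupied voxel coordinates
    moved = np.rint(idx + scene_flow[tuple(idx.T)]).astype(int)
    moved = np.clip(moved, 0, np.array(occupancy.shape) - 1)
    predicted[tuple(moved.T)] = True
    # dilation absorbs flow errors and newly appearing surface voxels
    return binary_dilation(predicted, iterations=dilation_iters)

# Only the voxels flagged here would be re-evaluated by the reconstruction test
# in the next frame; the remaining voxels keep their previous labels.
```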
  • ABSTRACT: The purpose of the Applied Multi-dimensional Fusion Project is to investigate the benefits that data fusion and related techniques may bring to future military Intelligence, Surveillance, Target Acquisition and Reconnaissance systems. In the course of this work, it is intended to show the practical application of some of the best multi-dimensional fusion research in the UK. This paper highlights the work done in the areas of multi-spectral synthetic data generation, super-resolution, joint fusion and blind image restoration, multi-resolution target detection and identification, and assessment measures for fusion. The paper also outlines the future aspirations of the work, which include further investigation of hyper-spectral data and hyper-spectral fusion. The paper presents a wide work base in multi-dimensional fusion that is brought together through the use of common synthetic data, posing real-life problems faced in the theatre of war. Work done to date has produced practical, pertinent research products with direct applicability to the problems posed.
    The Computer Journal 01/2007; 50:646-659.
  • ABSTRACT: This paper investigates the impact of pixel-level fusion of videos from visible (VIZ) and infrared (IR) surveillance cameras on object tracking performance, as compared to tracking in single-modality videos. Tracking has been accomplished by means of a particle filter which fuses a colour cue and the structural similarity measure (SSIM). The highest tracking accuracy has been obtained in IR sequences, whereas the VIZ video showed the worst tracking performance due to higher levels of clutter. However, metrics for fusion assessment clearly point towards the supremacy of the multiresolution methods, especially the Dual-Tree Complex Wavelet Transform method. Thus, a new, tracking-oriented metric is needed that is able to accurately assess how fusion affects the performance of the tracker. This work has been funded by the UK Data and Information Fusion Defence Technology Centre (DIF DTC) AMDF and Tracking Cluster projects. We would like to thank the Eden Project for allowing us to record the Eden Project Multi-Sensor Data Set (of which Eden 2.1 and 4.1 are part) and QinetiQ, UK, for providing the QQ data set.
    2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2007), 18-23 June 2007, Minneapolis, Minnesota, USA; 01/2007
  • ABSTRACT: The problem of assessing the quality of fused images (composites created from inputs of differing modalities, such as infrared and visible light radiation) is an important and growing area of research. Recent work has shown that the process of assessing fused images should not rely entirely on subjective quality methods, with objective tasks and computational metrics having important contributions to the assessment procedure. The current paper extends previous findings, applying a psychophysical selection task, metric evaluation, and subjective quality judgments to a range of fused surveillance images. Fusion schemes included the contrast pyramid and shift-invariant discrete wavelet transform (Experiment 1), the complex wavelet transform (Experiments 1 and 2), and two false-coloring methods (Experiment 2). In addition, JPEG2000 compression was applied at two levels, as well as an uncompressed control. Reaction time results showed the contrast pyramid to lead to the slowest performance in the objective task, whilst the presence of color greatly reduced reaction times. These results differed from both the subjective and metric results. The findings support the view that subjective quality ratings should be used with caution, especially if not accompanied by some form of objective task.
    Journal of the Society for Information Display. 10/2006; 14(10).
  • ACM Transactions on Applied Perception 01/2006; 3:309-332.
  • Nedeljko Cvejic, John Lewis, David R. Bull, Cedric Nishan Canagarajah
    ABSTRACT: In this paper, we present a novel multimodal image fusion algorithm in the independent component analysis (ICA) domain. Region-based fusion of ICA coefficients is implemented, where segmentation is performed in the spatial domain and ICA coefficients from separate regions are fused separately. The ICA coefficients from the given regions are subsequently weighted using the Piella fusion metric in order to maximize the quality of the fused image. The proposed method exhibits significantly higher performance than the basic ICA algorithm and also shows improvement over other state-of-the-art algorithms.
    Proceedings of the International Conference on Image Processing, ICIP 2006, October 8-11, Atlanta, Georgia, USA; 01/2006 · 1.48 Impact Factor
  • Arasanathan Anjulan, Cedric Nishan Canagarajah
    ABSTRACT: This paper describes a method for automatic video annotation and scene retrieval based on local region descriptors. A novel framework is proposed for combined video segmentation, content extraction and retrieval. A similarity measure, previously proposed by the authors based on local region features, is used for video segmentation. The local regions are tracked throughout a shot and stable features are extracted. The conventional key frame method is replaced with these stable local features to characterise different shots. Compared to previous video annotation approaches, the proposed method is highly robust to camera and object motions and can withstand severe illumination changes and spatial editing. We apply the proposed framework to shot cut detection and scene retrieval applications and demonstrate superior performance compared to existing methods. Furthermore, as segmentation and content extraction are performed within the same step, the overall computational complexity of the system is considerably reduced.
    Image and Video Retrieval, 5th International Conference, CIVR 2006, Tempe, AZ, USA, July 13-15, 2006, Proceedings; 01/2006