Ravi K. Sharma’s research while affiliated with Oregon Institute of Technology and other places


Publications (12)


Multisensor Image Registration
  • Article

April 2003 · 32 Reads · 26 Citations

Ravi K. Sharma

We describe an approach for registration of images from different sensors that may generate local polarity reversals and inconsistent features. The developed technique is based on modifying the objective function by transforming the images into a representation that is less dependent on the local image intensity, and using an iterative process that reduces the effect of inconsistent features.
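
A minimal sketch of the two ideas in the abstract, assuming a gradient-magnitude representation and a residual-based reweighting rule; the paper's exact representation and objective are not specified here, so these are illustrative stand-ins:

```python
import numpy as np

def intensity_invariant(img, eps=1e-3):
    """Gradient-magnitude representation: insensitive to local polarity
    reversals and less dependent on absolute intensity (an illustrative
    stand-in for the paper's representation)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.sqrt(gx**2 + gy**2)
    return mag / (mag.mean() + eps)

def residual_weights(ref_rep, tgt_rep, scale=1.0):
    """Iterative step: downweight pixels whose residual is large, so
    inconsistent (sensor-specific) features stop driving the fit."""
    residual = (ref_rep - tgt_rep) ** 2
    return 1.0 / (1.0 + residual / scale)
```

In use, one would alternate between estimating the geometric transform under the current weights and recomputing the weights from the residuals.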


Bayesian Sensor Image Fusion using Local Linear Generative Models

March 2002 · 43 Reads · 38 Citations

Optical Engineering

We present a probabilistic method for fusion of images produced by multiple sensors. The approach is based on an image formation model in which the sensor images are noisy, locally linear functions of an underlying true scene (latent variable). A Bayesian framework then provides for maximum likelihood or maximum a posteriori estimates of the true scene from the sensor images. Least squares estimates of the parameters of the image formation model involve (local) second order image statistics, and are related to local principal component analysis. We demonstrate the efficacy of the method on images from visible-band and infrared sensors.
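
To make the estimates concrete, here is one hedged reading of the model in assumed notation (the gains a_i, offsets b_i, noise variances sigma_i^2, and latent scene z are names chosen here, not taken from the paper):

```latex
% Assumed local linear image formation model, sensor i at pixel x:
%   s_i(x) = a_i(x) z(x) + b_i(x) + n_i(x),   n_i ~ N(0, sigma_i^2)
% Maximum likelihood estimate of the latent scene (weighted least squares):
\hat{z}(x) = \frac{\sum_i a_i \, (s_i(x) - b_i) / \sigma_i^2}
                  {\sum_i a_i^2 / \sigma_i^2}
% With a Gaussian prior z ~ N(mu_z, sigma_z^2), the MAP estimate becomes:
\hat{z}_{\mathrm{MAP}}(x) =
    \frac{\mu_z/\sigma_z^2 + \sum_i a_i \, (s_i(x) - b_i)/\sigma_i^2}
         {1/\sigma_z^2 + \sum_i a_i^2/\sigma_i^2}
```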


Figure 3: Registration using affine model (Data from AMPS).
Figure 4: Registration of FLIR and visual images using projective model (Data from SVTD [5]).
Figure 5: Registration of FLIR and radar images (Data from SVTD [5]).

Ravi K. Sharma and Misha Pavel, "Registration of Video Sequences from Multiple Sensors"
  • Article
  • Full-text available

April 2000 · 72 Reads · 11 Citations

In this paper, we describe an approach for registration of video sequences from a suite of multiple sensors including television, infrared and radar. Video sequences generated by these sensors may contain abrupt changes in local contrast and inconsistent image features, which pose additional difficulties for registration. Our approach addresses the difficulties caused by using multiple sensors. We use a representation for registration that is invariant to local contrast changes, followed by smoothing of the resulting error measure, for robust estimation of registration parameters. We use an iterative procedure to reduce the effect of inconsistent features. Finally, we describe a method that uses same-sensor registration to aid in registering sequences of video frames across multiple sensors. 1 Introduction Image registration is a critical component of multi-sensor systems that typically use different types of sensors. An example of...
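
One way to read the final step (same-sensor registration aiding cross-sensor registration) is transform composition: same-sensor motion between consecutive frames is easier to estimate, so a cross-sensor alignment found at frame t can be propagated to frame t+1 and used to seed the harder cross-sensor fit. A sketch, assuming 3x3 homogeneous transform matrices and hypothetical names:

```python
import numpy as np

def propagate_cross_sensor(T_cross_t, T_a, T_b):
    """Propagate a cross-sensor alignment to the next frame pair.

    T_cross_t : maps sensor-A frame t -> sensor-B frame t
    T_a       : same-sensor motion, A frame t -> A frame t+1
    T_b       : same-sensor motion, B frame t -> B frame t+1

    Returns an initial estimate of the alignment A frame t+1 -> B frame t+1.
    """
    return T_b @ T_cross_t @ np.linalg.inv(T_a)
```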


Probabilistic Image Sensor Fusion

April 2000 · 40 Reads · 40 Citations

We present a probabilistic method for fusion of images produced by multiple sensors. The approach is based on an image formation model in which the sensor images are noisy, locally linear functions of an underlying true scene. A Bayesian framework then provides for maximum likelihood or maximum a posteriori estimates of the true scene from the sensor images. Maximum likelihood estimates of the parameters of the image formation model involve (local) second order image statistics, and thus are related to local principal component analysis. We demonstrate the efficacy of the method on images from visible-band and infrared sensors. 1 Introduction Advances in sensing devices have fueled the deployment of multiple sensors in several computational vision systems [1, for example]. Using multiple sensors can increase reliability with respect to single sensor systems. This work was motivated by a need for an aircraft autonomous landing guidance (ALG) system [2, 3] that uses visible-band...
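
A minimal sketch of the local-PCA parameter estimation the abstract alludes to, assuming two registered sensor images and square analysis windows (the window size and eigenvector sign convention are illustrative choices, not the paper's):

```python
import numpy as np

def local_pca_gains(s1, s2, win=8):
    """Estimate local linear-model gains (a1, a2) per window as the
    leading eigenvector of the 2x2 local covariance of the two sensors."""
    H, W = s1.shape
    a = np.zeros((H // win, W // win, 2))
    for i in range(0, H - win + 1, win):
        for j in range(0, W - win + 1, win):
            p1 = s1[i:i+win, j:j+win].ravel()
            p2 = s2[i:i+win, j:j+win].ravel()
            cov = np.cov(np.stack([p1, p2]))
            v = np.linalg.eigh(cov)[1][:, -1]  # leading eigenvector
            # eigenvectors have a sign ambiguity; pin the first component
            a[i // win, j // win] = v if v[0] >= 0 else -v
    return a
```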


Figure 1: An illustration of the tradeoff between the elevation Y and the distance Z for a fixed range R.
Figure 3: C-scope representation. Following the extraction of the 3D geometric information, all sensor data, including the MMW radar, must be transformed into a domain in which fusion can be performed. In the subsequent discussion we consider several types of conformal representations and their appropriateness for fusion of radar and FLIR images. The described techniques will be applicable to combining sources such as LLTV, and possibly a terrain database.
Figure 5: PPI representation of simulated radar and FLIR images.
Figure 6: M-scope representation.
Figure 7: Fusion of Radar and FLIR.

Misha Pavel and Ravi K. Sharma, "Fusion of Radar Images: Rectification Without the Flat Earth Assumption"

April 2000 · 180 Reads · 1 Citation

Proceedings of SPIE - The International Society for Optical Engineering

A multisensor suite that consists of a forward looking infrared camera, a radar, and a low light television camera is likely to be a useful component of enhanced and synthetic vision systems. We examine several aspects of signal processing needed to combine effectively the individual sensor outputs. First we discuss transformations of individual sensor image data to a common representation. Our focus is on rectification of radar data without relying on a flat-earth assumption. We then describe a novel approach to image representation that minimizes loss of information in the transformation process. Finally, we discuss an optimal algorithm for fusion of radar and infrared images. Keywords: sensor fusion, multiresolution representation, conformal representation, imaging radar rectification. 1 INTRODUCTION Future enhanced and synthetic vision systems will most likely rely on multiple sources of information that will include millimeter wave radar (MMWR), forward looking infrared (FLIR),...
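
A sketch of the geometric constraint behind dropping the flat-earth assumption, following the variables in Figure 1: for slant range R, sensor altitude H, and a terrain elevation estimate h(Z), solve R^2 = Z^2 + (H - h(Z))^2 for the ground distance Z instead of fixing h = 0. The helper below is hypothetical and assumes the bracket [z_lo, z_hi] contains the solution:

```python
def ground_distance(R, H, terrain_h, z_lo=1.0, z_hi=None, iters=60):
    """Solve R^2 = Z^2 + (H - h(Z))^2 for ground distance Z by bisection.

    R         : slant range measured by the radar
    H         : sensor altitude
    terrain_h : callable giving terrain elevation at ground distance Z
    With terrain_h = lambda z: 0.0 this reduces to the flat-earth case.
    """
    if z_hi is None:
        z_hi = R  # ground distance can never exceed the slant range
    f = lambda z: z**2 + (H - terrain_h(z))**2 - R**2
    lo, hi = z_lo, z_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid  # root lies in the lower half
        else:
            lo = mid  # root lies in the upper half
    return 0.5 * (lo + hi)
```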


Misha Pavel and Ravi K. Sharma, "Model-Based Sensor Fusion for Aviation"

April 2000 · 78 Reads

We describe a sensor fusion algorithm based on a set of simple assumptions about the relationship among the sensors. Under these assumptions we estimate the common signal in each sensor, and the optimal fusion is then approximated by a weighted sum of the common component in each sensor output at each pixel. We then examine a variety of techniques to map the sensor signals onto perceptual dimensions (e.g., color), such that the human operator can benefit from the enhanced fused image, and simultaneously, be able to identify the source of the information. We examine several color mapping schemes. Keywords: sensor fusion, image fusion, color fusion, color vision, color mapping, multisensor fusion, enhanced vision 1. INTRODUCTION The efficiency, robustness, and safety of many visually guided systems, such as flight control, can be improved by providing the operator with the necessary visual information at all times, even in low visibility conditions. One way to achieve this objective, i...
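
A minimal sketch of the fusion rule the abstract outlines (a per-pixel weighted sum of the common component in each sensor); the local-variance weighting below is an illustrative stand-in for the paper's estimates of the common signal:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_weighted(sensors, win=9, eps=1e-6):
    """Fuse registered sensor images by a per-pixel weighted sum,
    weighting each sensor by its local signal variance."""
    weights = []
    for s in sensors:
        s = s.astype(float)
        mu = uniform_filter(s, win)
        var = uniform_filter(s**2, win) - mu**2  # local variance
        weights.append(var)
    wsum = np.sum(weights, axis=0) + eps
    return np.sum([w * s for w, s in zip(weights, sensors)], axis=0) / wsum
```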


Probabilistic Model-based Multisensor Image Fusion

April 2000 · 234 Reads · 46 Citations

Table of contents (excerpt):
1 Introduction
1.1 Single Sensor Computational Vision System
1.2 Multisensor Image Fusion System
1.2.1 Application of fusion for navigation guidance in aviation
1.3 Issues in Fusing Imagery from Multiple Sensors
1.3.1 Mismatch in features
1.3.2 Combination of multisensor images
1.3.3 Geometric representations of imagery from different sensors
1.3.4 Registration of misaligned image features
1.3.5 Spatial resolution of different sensors
1.3.6 Differences in frame rate
...


Multistream video fusion using local principal components analysis

October 1998 · 10 Reads · 11 Citations

Proceedings of SPIE - The International Society for Optical Engineering

We present an approach for fusion of video streams produced by multiple imaging sensors such as visible-band and infrared sensors. Our approach is based on a model in which the sensor images are noisy, locally affine functions of the true scene. This model explicitly incorporates reversals in local contrast, sensor-specific features and noise in the sensing process. Given the parameters of the local affine transformations and the sensor images, a Bayesian framework provides a maximum a posteriori estimate of the true scene. This estimate constitutes the rule for fusing the sensor images. We also give a maximum likelihood estimate for the parameters of the local affine transformations. Under Gaussian assumptions on the underlying distributions, estimation of the affine parameters is achieved by local principal component analysis. The sensor noise is estimated by analyzing the sequence of images in each video stream. The analysis of the video streams and the synthesis of the fused stream are performed in a multiresolution pyramid domain.
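
The analysis and synthesis happen in a pyramid domain; a minimal Laplacian-pyramid sketch using OpenCV's pyrDown/pyrUp (which the paper does not prescribe), with the fusion rule applied per level:

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Decompose an image into band-pass levels plus a low-pass residual."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)  # band-pass detail at this scale
        cur = down
    pyr.append(cur)           # low-pass residual
    return pyr

def reconstruct(pyr):
    """Invert the decomposition: upsample and add the details back in."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = cv2.pyrUp(cur, dstsize=(detail.shape[1], detail.shape[0])) + detail
    return cur
```

In this framing, each sensor stream is decomposed, the local-affine/PCA fusion rule is applied level by level, and reconstruct() synthesizes the fused stream.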



AeroSense '97

June 1997 · 12 Reads · 4 Citations

Proceedings of SPIE - The International Society for Optical Engineering

We describe a sensor fusion algorithm based on a set of simple assumptions about the relationship among the sensors. Under these assumptions we estimate the common signal in each sensor, and the optimal fusion is then approximated by a weighted sum of the common component in each sensor output at each pixel. We then examine a variety of techniques to map the sensor signals onto perceptual dimensions, such that the human operator can benefit from the enhanced fused image and, simultaneously, be able to identify the source of the information. We examine several color mapping schemes.
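
One simple color-mapping scheme of the kind examined: put the fused common component in luminance and the signed sensor difference in warm/cool channels, so the operator sees the enhanced image while still being able to tell which sensor contributed. The channel assignments below are illustrative, not the paper's:

```python
import numpy as np

def color_map(fused, s_ir, s_tv):
    """Map fused luminance and sensor differences to RGB.
    fused, s_ir, s_tv: registered float images in [0, 1]."""
    diff = s_ir - s_tv                      # signed sensor-specific signal
    r = np.clip(fused + 0.5 * diff, 0, 1)   # IR-dominant regions appear warm
    g = np.clip(fused, 0, 1)
    b = np.clip(fused - 0.5 * diff, 0, 1)   # TV-dominant regions appear cool
    return np.dstack([r, g, b])
```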


Citations (9)


... The data have been taken from the following reference (http://vision.arc.nasa.gov/personnel/al/hsr/fusion/97.html) [10,11]. ...

Reference:

A Bayesian Approach for Data and Image Fusion
Model-Based Sensor Fusion for Aviation
  • Citing Article
  • June 1997

Proceedings of SPIE - The International Society for Optical Engineering

... Principal component analysis signifies the importance of the major contributor to the total variation along each axis of distinction (Sharma et al., 1998). It evaluates the significance and contribution of each factor to the total variance, whereas each coefficient of an eigenvector shows the degree to which each original variable contributes to the principal component it is associated with. ...

Multistream video fusion using local principal components analysis
  • Citing Article
  • October 1998

Proceedings of SPIE - The International Society for Optical Engineering

... where x indicates the spatial coordinates of a pixel, s(x) is the intensity of the source image at x, z(x) is the intensity of the true scene at x to be estimated, n(x) is the noise, and β is the sensor selectivity coefficient, taking values in [0, 1] that represent the percentage of the true scene contributing to the source image [6]. In our work, we use β ∈ {0, 1}, which determines whether the true scene contributes to the source image or not [1]. ...

Probabilistic Image Sensor Fusion.
  • Citing Conference Paper
  • January 1998

... In digital image processing, images can be transformed by various transformations, such as correlation, affine, and perspective transforms. These image transformations have been studied in many works [8][9][10]. When two images from two different image sensors of the electro-optical system of an anti-aircraft artillery system's central command post, observing the same object with the same field of view and resolution, are brought into correspondence, this transformation process is called image registration. ...

Multisensor Image Registration
  • Citing Article
  • April 2003

... Multimodal data have increasingly become more accessible with the proposed intelligent transportation systems to gather more comprehensive and complementary information, which, in turn, require optimized exploitation strategies taking advantage of system redundancy. While data fusion is not new, it has recently garnered a significant increase in research interest, driven by the need for further foundational principles and by varied applications including image fusion [11] [12], target recognition [13], speaker recognition [14] and handwriting analysis [15]. Recent developments in sensor technology have provided more potential in developing real-time transportation systems technologies. ...

Probabilistic Model-based Multisensor Image Fusion

... Hence, multisensor image alignment is typically based on deriving canonical image representations that are invariant to the dissimilarities between the sensors, yet capture relevant image structure. The underlying structures are geometric primitives, such as feature points, contours and corners [8], [9], [10]. In our technique, of course, we do not have to define these primitives a priori but they should emerge implicitly from the embedding. ...

Ravi K. Sharma and Misha Pavel, "Registration of Video Sequences from Multiple Sensors"

... Pyramid algorithms utilise decomposition techniques to fuse images into coarse resolution and then extract data from recomposed images from the coarse resolution sensors. Pyramid algorithms can be divided into two distinct groups: linear decomposition (Aiazzi et al., 1999) and non-linear decomposition (Pavel and Sharma, 1996). Morphological pyramid is one of the non-linear decomposition techniques (Laporterie and Flouzat, 2003). ...

Misha Pavel and Ravi K. Sharma, "Fusion of Radar Images: Rectification Without the Flat Earth Assumption"

Proceedings of SPIE - The International Society for Optical Engineering

... MODIS, on the other hand, image super-resolution to obtain a single high-spatial-resolution image by combining a set of low-spatial-resolution images of the same scene [31][32][33][34]. They have also been employed in multispectral or hyperspectral image fusion to optimally weigh the spatial and spectral information [35][36][37][38]. However, there are only a few studies on the use of the Bayesian theory for the spatio-temporal fusion of satellite data [39][40][41]. ...

Bayesian Sensor Image Fusion using Local Linear Generative Models
  • Citing Article
  • March 2002

Optical Engineering