M. Aggarwal

University of Illinois, Urbana-Champaign, Urbana, IL, United States

Publications (8) · 4.63 Total Impact

  • M. Aggarwal, N. Ahuja
    ABSTRACT: Most imaging sensors have limited dynamic range and hence are sensitive to only a part of the illumination range present in a natural scene. The dynamic range can be improved by acquiring multiple images of the same scene under different exposure settings and then combining them. In this paper, we describe a camera design for simultaneously acquiring multiple images of the same scene under different exposure settings. The cross-section of the incoming beam from a scene point is partitioned into as many parts as the desired degree of split. This is done by splitting the aperture into multiple parts and directing the light exiting from each in a different direction using an assembly of mirrors. A sensor is placed in the path of each beam, and the exposure of each sensor is controlled either by appropriately setting its exposure parameters or by splitting the incoming beam unevenly. The resulting multiple-exposure images are used to construct a high dynamic range image. We have implemented a video-rate camera based on this design, and the results obtained are presented.
    Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001); 02/2001
  • M. Aggarwal, N. Ahuja
    ABSTRACT: Most imaging sensors have a limited dynamic range and hence can satisfactorily respond to only a part of the illumination levels present in a scene. This is particularly disadvantageous for omnidirectional and panoramic cameras, since larger fields of view span larger brightness ranges. We propose a simple modification to existing high resolution omnidirectional/panoramic cameras in which the process of increasing the dynamic range is coupled with the process of increasing the field of view. This is achieved by placing a graded transparency (mask) in front of the sensor, which allows every scene point to be imaged under multiple exposure settings as the camera pans, a process required anyway to capture large fields of view at high resolution. The sequence of images is then mosaiced to construct a high resolution, high dynamic range panoramic/omnidirectional image. Our method is robust to alignment errors between the mask and the sensor grid and does not require the mask to be placed on the sensing surface. We have designed a panoramic camera with the proposed modifications and discuss various theoretical and practical issues encountered in obtaining a robust design. We show an example of a high resolution, high dynamic range panoramic image obtained from the camera we designed.
    Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001); 02/2001
  • Conference Paper: A new imaging model
    M. Aggarwal, N. Ahuja
    ABSTRACT: This paper has been prompted by observations of some anomalies in the performance of the standard imaging models (pin-hole, thin-lens and Gaussian thick-lens) in the context of composing omnifocus images and estimating depth maps from a sequence of images. A closer examination of the models revealed that they assume a position of the aperture that conflicts with the designs of many available lenses. We have shown in this paper that the imaging geometry and photometric properties of an image are significantly influenced by the position of the aperture. This is confirmed by the discrepancies between observed mappings and those predicted by the models. We have therefore concluded that the current imaging models do not adequately represent practical imaging systems. We have proposed a new imaging model which overcomes these deficiencies and have given the associated mappings. The impact of this model on some common imaging scenarios is described, along with experimental verification of the better performance of the model on three real lenses.
    Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001); 02/2001
  • M. Aggarwal, H. Hua, N. Ahuja
    ABSTRACT: This paper has been prompted by observations of disparities between the observed fall-off in irradiance for off-axis points and that accounted for by the cosine-fourth and vignetting effects. A closer examination of the image formation process for real lenses revealed that even in the absence of vignetting, a point light source does not uniformly illuminate the aperture, an effect known as pupil aberration. For example, we found the variation for a 16 mm lens to be as large as 31% for a field angle of 10°. In this paper, we critically evaluate the roles of the cosine-fourth and vignetting effects and demonstrate the significance of pupil aberration in the fall-off in irradiance away from the image center. The pupil aberration effect strongly depends on the aperture size and shape, and this dependence has been demonstrated through two sets of experiments with three real lenses. Pupil aberration is thus a third important cause of the fall-off in irradiance away from the image center, in addition to the familiar cosine-fourth and vignetting effects, and must be taken into account in applications that rely heavily on photometric variation, such as shape from shading and mosaicing.
    Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001); 02/2001
  • M. Aggarwal, N. Ahuja
    ABSTRACT: Imaging cameras have only a finite depth of field, and only those objects within that depth range are simultaneously in focus. The depth of field of a camera can be improved by mosaicing a sequence of images taken under different focal settings. In conventional mosaicing schemes, a focus measure is computed for every scene point across the image sequence and the point is selected from the image where the focus measure is highest. We have, however, proved in this paper that the focus measure is not highest in the best-focussed frame for a certain class of scene points. The incorrect selection of image frames for these points causes visual artifacts to appear in the resulting mosaic. We have also proposed a method to isolate such scene points, and an algorithm to compose large depth of field mosaics without the undesirable artifacts.
    Proceedings of the 15th International Conference on Pattern Recognition; 02/2000
  • M. Aggarwal, N. Ahuja
    ABSTRACT: For most imaging cameras, it is desirable that the sensor plane be perpendicular to the optical axis. Such an orientation ensures that the imaging configuration is perspective and that planar scene objects perpendicular to the optical axis can be focussed in their entirety. In this paper, we present an image processing method to estimate and subsequently correct the sensor tilt with precision. We propose to measure the tilt by measuring the variation of defocus in an image of a planar calibration chart placed perpendicular to the optical axis. We show that the proposed defocus-based method is inherently more accurate than geometry-based techniques, which estimate tilt by measuring the deviation in the geometry of a scene pattern. We analyze the sensitivity of the tilt estimates to errors in the experimental setup and show that the proposed technique is quite robust to errors even as large as 1 degree in the orientation of the calibration chart.
    Proceedings of the 15th International Conference on Pattern Recognition; 02/2000
  • Conference Paper: Camera center estimation
    M. Aggarwal, N. Ahuja
    ABSTRACT: A fast camera calibration technique to estimate the center of perspective projection of an imaging camera is described. The proposed technique requires a single image of two planar calibration charts arranged in a special manner. The special arrangement of the calibration charts simplifies the projection equations relating the 3-D scene coordinates to the 2-D image coordinates, which can then be suitably combined to eliminate the unknown intrinsic parameters other than the desired center. We have analyzed the error in the center estimate due to various alignment errors in the experimental setup, and shown that the scheme is quite robust.
    Proceedings of the 15th International Conference on Pattern Recognition; 02/2000
  • M. Aggarwal, N. Shanbhag, N. Ahuja
    ABSTRACT: In this paper, we have presented a systematic technique to improve the throughput of signal/image processing algorithms when implemented on flexible-precision hardware. Many image/signal processing algorithms need 8–16 bit precision, while the available DSPs have much higher precision (32 bit). A significant performance gain can be obtained if multiple low-precision computations can be performed in one cycle of a high-precision DSP. We have proposed a framework based on the algorithm transformation techniques of unfolding and retiming to systematically map low-precision algorithms onto high-precision DSPs. The improvement in throughput obtained by this framework is linearly related to the ratio of the precision offered by the processor to that required by the algorithm. The efficacy of this technique has been demonstrated on an IIR filter. We have also established some theoretical bounds on the maximum throughput that can be achieved using the proposed methodology.
    Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing; 06/1998
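The multiple-exposure fusion step described in the first ICCV 2001 abstract (combining differently exposed frames of the same scene into one high dynamic range image) can be sketched in a few lines. This is a generic weighted-average scheme, not the paper's actual fusion rule; the hat-shaped weight and the function name are illustrative assumptions.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Combine registered exposures of a static scene into a radiance map.

    images: list of equally shaped float arrays with pixel values in [0, 1].
    exposure_times: relative exposure time used for each image.
    Each frame is divided by its exposure time to estimate scene radiance,
    then averaged with a triangle ("hat") weight that trusts mid-range
    pixels and discounts saturated or underexposed ones.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight: 1 at 0.5, 0 at 0 and 1
        num += w * (img / t)               # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-12)    # guard against all-zero weights
```

With two simulated exposures of the same scene (relative times 1.0 and 0.5), the merged map recovers the underlying radiance wherever at least one frame is well exposed.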
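The pupil-aberration abstract weighs measured irradiance fall-off against the classical cosine-fourth prediction. For reference, the law alone gives only about a 6% drop at a 10° field angle, far less than the 31% variation reported there. A minimal sketch of that baseline prediction (the function name is my own):

```python
import math

def cos4_falloff(field_angle_deg):
    """Relative image irradiance predicted by the cosine-fourth law alone,
    normalized to 1.0 on the optical axis (no vignetting, no pupil aberration)."""
    return math.cos(math.radians(field_angle_deg)) ** 4
```

For example, `cos4_falloff(10.0)` evaluates to roughly 0.94, i.e. only about a 6% fall-off at a 10° field angle.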
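The depth-of-field mosaicing abstract starts from the conventional selection rule: compute a focus measure for each scene point across the focal stack and keep the pixel from the frame where the measure peaks. A naive version of that baseline (the paper's contribution is precisely to identify and fix the scene points where this rule picks the wrong frame) might look like this, with a Laplacian-magnitude focus measure as an illustrative choice:

```python
import numpy as np

def laplacian_focus(img):
    """Absolute discrete Laplacian as a simple per-pixel focus measure."""
    lap = (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0) +
           np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1) - 4.0 * img)
    return np.abs(lap)

def omnifocus_mosaic(frames):
    """Per pixel, copy the value from the frame with the highest focus measure."""
    stack = np.asarray(frames, dtype=np.float64)
    measures = np.stack([laplacian_focus(f) for f in stack])
    best = np.argmax(measures, axis=0)   # index of the sharpest frame per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```

Given a frame containing a sharp edge and a defocused (here, constant) frame, the rule correctly favors the sharp frame; the failure cases the paper analyzes arise for subtler scene structure than this toy example.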
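The flexible-precision DSP abstract rests on subword parallelism: several narrow operations ride on one wide datapath operation. A toy illustration in Python (the 15-bit fields with a guard bit, and the names, are my assumptions, not the paper's unfolding/retiming framework):

```python
def pack2(a, b):
    """Pack two non-negative 15-bit values into one 32-bit word.

    Bit 15 is left clear as a guard bit, so a carry out of the low
    field cannot spill into the high field during addition.
    """
    assert 0 <= a < 1 << 15 and 0 <= b < 1 << 15
    return a | (b << 16)

def unpack2(w):
    """Recover the two 15-bit fields, masking off any guard-bit carries."""
    return w & 0x7FFF, (w >> 16) & 0x7FFF
```

A single wide addition, `pack2(100, 200) + pack2(5, 7)`, then yields both narrow sums, `(105, 207)`, on unpacking, provided each sum stays below 2**15. This is the source of the linear throughput gain (processor precision divided by algorithm precision) that the abstract describes.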

Publication Stats

92 Citations
4.63 Total Impact Points

Institutions

  • 1998–2001
    • University of Illinois, Urbana-Champaign
      • Beckman Institute for Advanced Science and Technology
      • Department of Electrical and Computer Engineering
      Urbana, IL, United States