Sébastien Roy's research while affiliated with Université de Montréal and other places

Publications (33)

Article
Anomaly detection is a key functionality in various vision systems, such as surveillance and security. In this work, we present a convolutional neural network (CNN) that supports frame-level detection of anomalies that were not defined when the model was built. Our CNN, named SmithNet, is structured to simultaneously learn commonly occur...
Article
Full-text available
This paper addresses the stereo camera synchronization problem for dynamic scenes by proposing a new triangulation method which is invariant to the temporal offset of the cameras. Contrary to spatio-temporal alignment approaches, our method estimates the correct positions of the tracked points without explicitly estimating the temporal offset of th...
Article
Reconstruction from structured light can be greatly affected by indirect illumination such as interreflections between surfaces in the scene and sub-surface scattering. This paper introduces band-pass white noise patterns designed specifically to reduce the effects of indirect illumination, and still be robust to standard challenges in scanning sys...
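A minimal sketch of how a gray-level band-pass white noise pattern of this kind can be generated, by filtering white noise in the Fourier domain. The function name, pattern size, and band limits are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def bandpass_noise_pattern(height=768, width=1024, low=0.02, high=0.10, seed=0):
    """Generate a gray-level band-pass white noise pattern (illustrative sketch).

    White noise is filtered so that only spatial frequencies between `low`
    and `high` (in cycles per pixel, assumed values) are kept, then rescaled
    to 8-bit gray levels for projection.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((height, width))

    # Radial spatial frequency of every FFT coefficient (cycles per pixel).
    fy = np.fft.fftfreq(height)[:, None]
    fx = np.fft.fftfreq(width)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)

    # Keep only the chosen frequency band.
    band = (radius >= low) & (radius <= high)
    filtered = np.fft.ifft2(np.fft.fft2(noise) * band).real

    # Normalize to [0, 255] gray levels.
    filtered -= filtered.min()
    filtered *= 255.0 / filtered.max()
    return filtered.astype(np.uint8)
```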
Conference Paper
We present a scanning method that recovers dense subpixel camera-projector correspondence without requiring any photometric calibration or prior knowledge of their relative geometry. Subpixel accuracy is achieved by considering several zero-crossings defined by the difference between pairs of unstructured patterns. We use gray-level band-p...
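The core idea of locating zero-crossings of a difference of patterns with subpixel precision can be illustrated with a short sketch. This is a generic illustration of zero-crossing interpolation on two sampled 1-D signals, not the paper's full matching pipeline; the function name and inputs are assumptions.

```python
import numpy as np

def subpixel_zero_crossings(signal_a, signal_b):
    """Locate subpixel zero-crossings of the difference of two sampled signals.

    `signal_a` and `signal_b` are 1-D arrays sampled at integer positions.
    Wherever their difference changes sign between consecutive samples, the
    crossing position is refined by linear interpolation.
    """
    d = np.asarray(signal_a, dtype=float) - np.asarray(signal_b, dtype=float)
    crossings = []
    for i in range(len(d) - 1):
        if d[i] == 0.0:
            crossings.append(float(i))
        elif d[i] * d[i + 1] < 0.0:
            # Linear interpolation between samples i and i+1.
            t = d[i] / (d[i] - d[i + 1])
            crossings.append(i + t)
    return np.array(crossings)
```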
Conference Paper
Full-text available
An omnistereo pair of images provides depth information from stereo up to 360 degrees around a central observer. A method for synthesizing omnistereo video textures was recently introduced, based on blending overlapping stereo videos that were filmed several seconds apart. While it produced loopable omnistereo videos that can be dis...
Conference Paper
Full-text available
We introduce in this paper a camera setup for immersive stereo (omnistereo) capture. An omnistereo pair of images gives stereo information up to 360 degrees around a central observer. Previous methods to produce omnistereo images assume a static scene in order to stitch together multiple images captured by a stereo camera rotating on a fixed tripod...
Article
PURPOSE: To assess the accuracy of scanning electron microscopy (SEM) and present alternative approaches to quantifying surface roughness based on numerical analysis. SETTING: Department of Ophthalmology, Maisonneuve-Rosemont Hospital, University of Montreal, Montreal, Quebec, Canada. DESIGN: Experimental study. METHODS: Lamellar stromal cuts were...
Article
Full-text available
Most methods for synthesizing panoramas assume that the scene is static. A few methods have been proposed for synthesizing stereo or motion panoramas, but there has been little attempt to synthesize panoramas that have both stereo and motion. One faces several challenges in synthesizing stereo motion panoramas, for example, to ensure temporal synch...
Article
In this paper we present two methods to geometrically calibrate a video projector using a markerless planar surface. The first method assumes partial knowledge of the camera parameters, whereas the second is an auto-calibration method that makes no assumptions about the camera parameters. Instead, the auto-calibration is performed by...
Conference Paper
Full-text available
A panoramic stereo (or omnistereo) pair of images provides depth information from stereo up to 360 degrees around a central observer. Because omnistereo lenses or mirrors do not yet exist, synthesizing omnistereo images requires multiple stereo camera positions and baseline orientations. Recent omnistereo methods stitch together many small field of...
Conference Paper
Full-text available
Reconstruction from structured light can be greatly affected by interreflections between surfaces in the scene. This paper introduces band-pass white noise patterns designed specifically to reduce interreflections, and still be robust to standard challenges in scanning systems such as scene depth discontinuities, defocus and low camera-projector pi...
Article
Linear or 1D cameras are used in several areas such as industrial inspection and satellite imagery. Since 1D cameras consist of a linear sensor, a motion (usually perpendicular to the sensor orientation) is performed in order to acquire a full image. In this paper, we present a novel linear method to estimate the intrinsic and extrinsic parameters...
Conference Paper
This paper proposes a real-time probabilistic solution to the problem of camera motion estimation in a video sequence. Instead of using explicit tracking of features, it only uses instantaneous image intensity variations without prior estimation of optical flow. We represent the camera motion as a probability density which is constructed from the i...
Article
Full-text available
An omnistereoscopic image is a pair of panoramic images that enables stereoscopic depth perception all around an observer. An omnistereo projection on a cylindrical display does not require tracking of the observer's viewing direction. However, such a display introduces stereo distortions. In this article, we investigate two projection models for r...
Article
We present algorithms for plane-based calibration of general radially distorted cameras. By this, we understand cameras that have a distortion center and an optical axis such that the projection rays of pixels lying on a circle centered on the distortion center form a right viewing cone centered on the optical axis. The camera is said to have a sin...
Conference Paper
In this paper we address the problem of geometric calibration of video projectors. As in most previous methods, we use a camera that observes the projection on a planar surface. Contrary to those previous methods, we require neither a calibrated camera nor a calibration grid or other metric information about the scene....
Article
In this article, we consider the problem of geometric calibration of a video projector using a markerless plane (wall) and a partially calibrated camera. Instead of using control points to estimate the wall-camera orientation, we recover this relation by sampling the hemisphere of possible orientations. This...
Article
In this paper we address the problem of geometric video projector calibration using a markerless planar surface (wall) and a partially calibrated camera. Instead of using control points to infer the camera-wall orientation, we find such relation by efficiently sampling the hemisphere of possible orientations. This process is so fast that even the f...
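As a rough illustration of sampling the hemisphere of possible orientations, a Fibonacci spiral gives approximately uniform unit normals on the upper hemisphere. The sampling scheme, sample count, and function name are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def sample_hemisphere(n=1000):
    """Approximately uniform unit normals on the upper hemisphere (z >= 0).

    Uses a Fibonacci spiral; the number of samples and the sampling scheme
    are illustrative, not the ones used in the paper.
    """
    i = np.arange(n)
    golden = (1 + 5 ** 0.5) / 2
    z = (i + 0.5) / n                      # heights uniform in (0, 1)
    r = np.sqrt(1 - z ** 2)                # radius of the circle at height z
    phi = 2 * np.pi * i / golden           # golden-angle spacing in azimuth
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
```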
Article
Full-text available
This paper presents a novel algorithm that improves the localization of disparity discontinuities of disparity maps obtained by multi-baseline stereo. Rather than associating a disparity label to every pixel of a disparity map, it associates a position to every disparity discontinuity. This formulation allows us to find an approximate solution to a...
Article
We present an algorithm for plane-based self-calibration of cameras with radially symmetric distortions given a set of sparse feature matches in at least two views. The projection function of such cameras can be seen as a projection with a pinhole camera, followed by a non-parametric displacement of the image points in the direction of the distorti...
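The camera model described above can be sketched as a pinhole projection followed by a displacement of image points in the radial direction about the distortion center. The sketch below assumes a generic callable `radial_map` for the radius mapping (the paper treats this mapping non-parametrically); all names and shapes are illustrative.

```python
import numpy as np

def project_radially_distorted(points_3d, K, R, t, center, radial_map):
    """Project Nx3 points with a pinhole model, then displace each image
    point radially about the distortion center (illustrative model sketch).

    `radial_map` maps an undistorted radius r to a distorted radius r'; any
    callable (e.g. a polynomial) can be plugged in for illustration.
    """
    X = R @ points_3d.T + t.reshape(3, 1)            # camera coordinates (3xN)
    x = K @ X                                         # pinhole projection
    x = (x[:2] / x[2]).T                              # Nx2 pixel coordinates

    v = x - center                                    # vectors from distortion center
    r = np.linalg.norm(v, axis=1, keepdims=True)
    r_safe = np.maximum(r, 1e-12)                     # avoid division by zero
    scale = radial_map(r) / r_safe                    # move points along the radius
    return center + v * scale
```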
Conference Paper
Mean-shift tracking has gained a lot of popularity in the computer vision community. This is due to its simplicity and robustness. However, the original formulation does not estimate the orientation of the tracked object. In this paper, we extend the original mean-shift tracker for orientation estimation. We use the gradient field as an orientation signa...
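One simple way to extract an orientation estimate from the gradient field of a tracked window is a structure-tensor estimate, sketched below. This is a generic illustration only; the paper builds its orientation signature differently, and the function name is an assumption.

```python
import numpy as np

def dominant_orientation(patch):
    """Estimate the dominant orientation of an image patch from its gradients.

    Uses a structure-tensor estimate for illustration; returns the angle of
    the dominant gradient direction in radians.
    """
    gy, gx = np.gradient(patch.astype(float))
    # Structure tensor entries summed over the patch.
    jxx, jyy, jxy = np.sum(gx * gx), np.sum(gy * gy), np.sum(gx * gy)
    # Orientation of the dominant eigenvector.
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
```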
Conference Paper
Matrix factorization is a key component for solving several computer vision problems. It is particularly challenging in the presence of missing or erroneous data, which often arise in structure-from-motion. We propose batch algorithms for matrix factorization. They are based on closure and basis constraints that are used either on the cameras or t...
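To make the missing-data difficulty concrete, the sketch below shows a generic alternating least squares baseline for low-rank factorization with a mask of observed entries. It is explicitly not the paper's method (which proposes batch algorithms based on closure and basis constraints); names, rank, and iteration count are assumptions.

```python
import numpy as np

def als_factorize(M, mask, rank=4, iters=100, seed=0):
    """Low-rank factorization M ~ A @ B with missing entries (generic baseline).

    `mask` is nonzero where an entry of M is observed. Each row of A and each
    column of B is solved in turn from its observed entries.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    A = rng.standard_normal((m, rank))
    B = rng.standard_normal((rank, n))
    for _ in range(iters):
        # Solve each row of A from the observed entries of that row.
        for i in range(m):
            obs = mask[i] > 0
            if obs.any():
                A[i], *_ = np.linalg.lstsq(B[:, obs].T, M[i, obs], rcond=None)
        # Solve each column of B from the observed entries of that column.
        for j in range(n):
            obs = mask[:, j] > 0
            if obs.any():
                B[:, j], *_ = np.linalg.lstsq(A[obs], M[obs, j], rcond=None)
    return A, B
```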
Conference Paper
We propose a new and flexible hierarchical multibaseline stereo algorithm that features a non-uniform spatial decomposition of the disparity map. The visibility computation and refinement of the disparity map are integrated into a single iterative framework that does not add extra constraints to the cost function. This makes it possible to use a st...
Conference Paper
We present a new approach for self-calibrating the distortion function and the distortion center of cameras with general radially symmetric distortion. In contrast to most current models, we propose a model encompassing fisheye lenses as well as catadioptric cameras with a view angle larger than 180°. Rather than representing distortion as an image...
Conference Paper
Full-text available
This paper presents a new model to overcome the occlusion problems arising in wide-baseline multiple-camera stereo. Rather than explicitly modeling occlusions in the matching cost function, it detects occlusions in the depth map obtained from regular efficient stereo matching algorithms. Occlusions are detected as inconsistencies of the depth map...
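A common way to illustrate detecting occlusions as depth-map inconsistencies is a left-right consistency check between two disparity maps, sketched below. The paper's multi-camera inconsistency test differs in detail; the function name and threshold are assumptions.

```python
import numpy as np

def occlusion_mask(disp_left, disp_right, threshold=1.0):
    """Flag pixels of a left disparity map whose match is inconsistent.

    A pixel (x, y) with disparity d should map to (x - d, y) in the right
    image with a similar disparity; a large disagreement (or falling outside
    the image) marks the pixel as occluded.
    """
    h, w = disp_left.shape
    occluded = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = int(round(x - disp_left[y, x]))
            if 0 <= xr < w:
                occluded[y, x] = abs(disp_left[y, x] - disp_right[y, xr]) > threshold
            else:
                occluded[y, x] = True
    return occluded
```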
Conference Paper
We present a new method to find motion planes in energy-based and spatio-temporal derivative optical flow. Because our method makes few assumptions about the motion model and the number of motions present in the sampling window, we are able to recover a simple single motion as well as complex distributions involving transparency and occlusions. We...
Conference Paper
We present a 3D reconstruction technique based on the maximum-flow formulation. Starting with a set of calibrated images, we globally search for the most probable 3D model given the photoconsistency and spatial continuity constraints. This search is done radially from the center of the reconstruction volume, therefore imposing a radial topo...
Conference Paper
Full-text available
This paper addresses the stereo correspondence problem where the images are large enough to make stereo matching difficult. In order to reduce the problem size, we propose a new non-uniform hierarchical scheme with the ability to handle different coarseness levels simultaneously. Our framework, based on a maximum flow formulation, allows a much b...
Article
Full-text available
An omnistereo pair of images enables depth perception all around the observer. Because omnistereo lenses or mirrors do not yet exist, capturing an omnistereo video would require using several stereo cameras at different baseline orientations. This paper presents a multi-take capture method for creating high-resolution omnistereo videos at an affor...

Citations

... [14] shows how to get accurate dense matches using only the reconstructed phase. [3] introduces a sub-pixel matching for unsynchronized structured light, while for each match an energy is minimized by gradient descent. Matching based on peak calculation [15] and [1] also achieves sub-pixel accuracy but requires higher computational effort than the method presented. ...
... Other methods have performed unsynchronized coded light scans (Sagawa et al., 2014; Moreno et al., 2015; El Asmi and Roy, 2018). The difficulties of the unsynchronized capture reside in finding the first image in the captured sequence and in finding the mixture between two consecutive patterns partially seen by the camera as a single image. ...
... 360° view synthesis creates new panoramic viewpoints from different input [43]. For example, ODS video can be created from three fisheye cameras [8], two 360° cameras mounted side by side [32], or two rotating line cameras [23]. However, ODS provides only binocular disparity and no motion parallax. ...
... These methods will be detailed in the next section. The second category, unlike the previous one, consists in encoding the position of the projector and the camera in a LookUp-Table (LUT) (Kushnir and Kiryati, 2007; Wexler et al., 2003; Couture et al., 2014). The unstructured light method provides bidirectional matching (from camera to projector and from projector to camera). ...
... In (Martin et al., 2013), they use the unstructured light method to achieve subpixel accuracy. This method is very robust to indirect illumination and scene discontinuities through their gray-level band-pass white noise patterns. ...
... It is hard for catadioptric systems with curved mirrors to capture high-resolution stereo panoramas due to mirror curvature that generates blur [24], and moving camera systems are unable to capture dynamic scenes. Multi-camera setups [2], [4], [5], [16] are capable of capturing highresolution dynamic scenes, but they have issues with size, expense, and camera self-occlusion (resulting in wasted pixels). Furthermore, they require careful post-processing to avoid visible seams. ...
... The main limitation of that method is that it used simple blending to handle overlaps between frames. It was shown [7] that this blending method works fine for certain types of motions such as water waves, but that it produces noticeable ghosting [11] for motions of well-defined visual features that can be tracked over time. ...
... 13 Surface roughness, defined as the standard deviation of surface elevation, has emerged as an important parameter in evaluating stromal bed quality for endothelial keratoplasty. [14][15][16][17][18][19][20] However, standards for measuring stromal bed roughness have not yet been defined, precluding accurate comparison of dissection techniques and determination of functional correlations. Choosing the most appropriate technique depends on multiple factors, including the optical quality of the surface, the scale of desired measurements, the need for sample preparation and acceptable measurement accuracy. ...
... The advantage of using many small slits is that the slit-to-slit camera translation is small and so is the parallax. An alternative method for reducing parallax in stereo panoramas is to use large stereo frames, which are chosen such that the left edge of one frame and the right edge of the next frame lie on the line through the two camera positions [6,8]. See Figure 1. ...
... More generally, a linear relative motion with constant velocity between the camera and the calibration target is typically considered. Draréni et al. [23] utilized a controllable linear stage to translate the line-scan camera along the Yc axis, when the camera is watching a planar checkerboard pattern; hence a 2D scan image is captured. If the planar pattern is almost parallel to the image plane, however, this method cannot work since it entails dividing by elements of rotation matrices. ...