Article

Panoramic view construction

Authors: M. Traka, G. Tziritas

Abstract

In this paper, the problem of constructing the whole view of a scene background from an image sequence is considered. First, point or block correspondence between each pair of successive frames is determined. Three parametric motion models are used: 2-D translation with scale change, affine, and projective. Motion parameters are estimated using either robust criteria and the Levenberg–Marquardt algorithm, or affine moment invariants. Then the parametric models are composed and all the frames are aligned, yielding a whole view of the scene background. A new technique is introduced for the correction of accumulated frame alignment errors.
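
The alignment step described above composes the estimated pairwise motions into a single transform from each frame to a common reference frame before warping. Below is a minimal sketch of that accumulation and the final warping, assuming the pairwise motions are already available as 3x3 homography matrices; the robust estimation of those motions and the paper's correction of accumulated alignment errors are not reproduced here.

```python
# Sketch of frame alignment by composing pairwise parametric motions.
# Convention assumed here: pairwise_H[i] maps coordinates of frame i into
# frame i+1. Robust estimation and drift correction are not shown.
import numpy as np
import cv2

def accumulate_to_reference(pairwise_H, ref_idx=0):
    """Compose pairwise homographies into frame -> reference transforms."""
    n = len(pairwise_H) + 1                   # number of frames
    to_ref = [np.eye(3) for _ in range(n)]
    for i in range(ref_idx + 1, n):           # frames after the reference
        to_ref[i] = to_ref[i - 1] @ np.linalg.inv(pairwise_H[i - 1])
    for i in range(ref_idx - 1, -1, -1):      # frames before the reference
        to_ref[i] = to_ref[i + 1] @ pairwise_H[i]
    return to_ref

def build_mosaic(frames, to_ref, canvas_size):
    """Warp every frame into the reference plane and paste (last frame wins).
    NOTE: a translation is usually folded into to_ref so all warped frames
    land inside the canvas; that bookkeeping is omitted here."""
    mosaic = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)
    for frame, H in zip(frames, to_ref):
        warped = cv2.warpPerspective(frame, H, canvas_size)
        mask = warped.sum(axis=2) > 0
        mosaic[mask] = warped[mask]
    return mosaic
```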

... Rendering with no geometry concerns the filling in of a mosaic image from a set of local views (Chen, 1995); (Szeliski and Shum, 1997); (Bhosle et al., 2002); (Traka and Tziritas, 2003). ...
... In addition to multimedia, their usual domain of application (Chen and Williams, 1993); (Pollard et al., 2000); (Fehn et al., 2001); (Saito et al., 2003); (Traka and Tziritas, 2003), virtual views generated by NVS may be of great interest in robotic micromanipulation, which deals with the handling of objects in the range from 1 µm to 1 mm. ...
Article
Full-text available
The paper deals with the problem of imaging at the microscale. A novel view synthesis approach based on trifocal transfer is developed and applied to images from two photon microscopes mounted in a stereoscopic configuration and observing the work scene vertically. The final result is a lateral virtual microscope working at up to 6 frames per second with a resolution up to that of the real microscopes. Visual feedback, accurate measurements and control have been performed with it, showing its ability to be used for robotic manipulation of MEMS parts. Keywords: novel view synthesis, trifocal tensor, photon microscope, microassembly, micromanipulation, MEMS.
... Under this framework, various methods to register the overlap area of two adjacent images have been proposed based on an 8-parameter perspective transformation [40], a polynomial transformation with more freedom [41], and other geometric corrections [42]. A comparison of some commonly-used parametric models for camera motion recovery can be found in [78]. ...
... According to the definition by Robert Barker, the builder of the first panoramic theater in London (1787), a panorama is a type of mural painting built in a circular space around a central platform, where spectators can look around in all directions and see a scene as if they were in the middle of it [1]. The term panoramic image is closely related to the projection of planar images onto the panorama surface [2]–[8]. Panoramic images may also be obtained from the panorama surface by the inverse projection, i.e., by mapping some area of the panorama onto the image plane. ...
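
The projection and inverse projection mentioned in this excerpt can be illustrated for a cylindrical panorama surface. The following is a minimal sketch under the assumption of a pinhole camera with focal length f in pixels and principal point (cx, cy); it illustrates the general idea and is not code from the cited works.

```python
# Sketch: forward mapping of planar image pixels onto a cylindrical panorama
# surface, and the inverse mapping back to the image plane. Assumes a pinhole
# camera with focal length f (pixels) and principal point (cx, cy).
import numpy as np

def plane_to_cylinder(x, y, f, cx, cy):
    """Planar pixel (x, y) -> cylindrical coordinates (theta, h)."""
    theta = np.arctan2(x - cx, f)           # angle around the cylinder axis
    h = (y - cy) / np.hypot(x - cx, f)      # normalized height on the cylinder
    return theta, h

def cylinder_to_plane(theta, h, f, cx, cy):
    """Inverse projection: (theta, h) on the cylinder -> planar pixel (x, y)."""
    x = cx + f * np.tan(theta)
    y = cy + h * f / np.cos(theta)
    return x, y
```
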
Article
Full-text available
An approach to compression of panoramic images is presented. Individual panoramic images can be efficiently compressed by applying decorrelating transforms, such as a block-based DCT or a wavelet transform, followed by coefficient quantization and variable-length coding, as in the JPEG 2000 standard. Further compression of the sequence of images is possible if there is significant correlation between images due to overlapping areas. A mosaic-based panoramic image sequence codec is proposed for archiving or transmission purposes.
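
As an illustration of the decorrelating-transform stage mentioned in the abstract, the sketch below applies an 8x8 block DCT followed by uniform quantization and reconstruction. The entropy (variable-length) coding stage and the mosaic-based inter-image part of the codec are omitted, and the step size q is an illustrative choice rather than a value from the paper.

```python
# Sketch of the decorrelating-transform stage: 8x8 block DCT, uniform
# quantization, and dequantization. Entropy coding is not shown.
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, q=16):
    """Transform an 8x8 block, quantize the coefficients, and reconstruct."""
    coeffs = dctn(block.astype(float), norm="ortho")
    quantized = np.round(coeffs / q)              # what an entropy coder would receive
    reconstructed = idctn(quantized * q, norm="ortho")
    return quantized, reconstructed
```

In a full codec the quantized coefficients would then be entropy coded; that step is omitted here.
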
... A good survey of image registration methods can be found in [1], where a detailed classification and examples are given. Recently, regarding the panorama creation problem, methods have been proposed that combine images into a mosaic using bundle adjustment [2], [3], block matching and reconstruction [4], [5], [6], edge methods [7], and point methods [8]. In [9] a very regular camera cycle is considered to analytically compute image concatenation equations. ...
Article
Full-text available
This paper presents a new approach for detection of overlapping regions in panorama images. We take advantage of MPEG-7 standard image descriptors to detect similar parts in images. Our experiments show that, based mainly on color descriptors, we are able to detect image similarities, estimate the transition vector, and reconstruct the whole panorama mosaic.
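
A minimal sketch of the overlap-detection idea, using coarse per-column RGB histograms as a stand-in for the MPEG-7 color descriptors used in the paper; the function names and the histogram layout are illustrative assumptions, not the authors' implementation.

```python
# Sketch: estimate the horizontal overlap between two images by comparing
# coarse per-column color histograms (a stand-in for MPEG-7 color descriptors).
import numpy as np

def column_histograms(img, bins=8):
    """One coarse RGB histogram per image column, L1-normalized."""
    h, w, _ = img.shape
    hists = np.empty((w, bins * 3))
    for c in range(w):
        col = img[:, c, :].reshape(-1, 3)
        hists[c] = np.concatenate(
            [np.histogram(col[:, ch], bins=bins, range=(0, 256))[0] for ch in range(3)]
        )
    return hists / (hists.sum(axis=1, keepdims=True) + 1e-9)

def estimate_overlap(left, right, min_overlap=8):
    """Return the overlap width minimizing histogram distance between the
    right edge of `left` and the left edge of `right`."""
    hl, hr = column_histograms(left), column_histograms(right)
    best, best_cost = min_overlap, np.inf
    for o in range(min_overlap, min(len(hl), len(hr))):
        cost = np.abs(hl[-o:] - hr[:o]).mean()
        if cost < best_cost:
            best, best_cost = o, cost
    return best
```
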
... criterion 1). Traka and Tziritas [9] give an algorithm for constructing a non-convex hull R(S) by starting with the convex hull of S and successively adding extra points from S to the perimeter; but for their purposes it is required that all points of S lie on the perimeter of R(S) (which is therefore a Hamiltonian circuit of the complete graph on those points), whereas for our more general purposes this condition is unnecessarily stringent. In the context of GIS, a method based on Voronoi diagrams has been suggested by [6]. ...
Conference Paper
Full-text available
There are many situations in GIScience where it would be useful to be able to assign a region to characterize the space occupied by a set of points. Such a region should represent the location or configuration of the points as an aggregate, abstracting away from the individual points themselves. In this paper, we call such a region a ‘footprint’ for the points. We investigate and compare a number of methods for producing such footprints, with respect to nine general criteria. The discussion identifies a number of potential choices and avenues for further research. Finally, we contrast the related research already conducted in this area, highlighting differences between these existing constructs and our ‘footprints’.
... [3], block matching and reconstruction [4], [5], [6], edge methods [7] and point methods [8]. In [9] a very regular camera cycle is considered to analytically compute image concatenation equations. ...
Conference Paper
Full-text available
This paper describes an innovative compression method for panoramic images based on MPEG-7 descriptors. The proposed solution detects overlaps between individual video frames in order to produce concatenated panoramic images. The method is easy to implement even in simple devices, such as low-power chipsets installed in remote cameras with limited power supplies. Subjective tests have shown that the concatenation method achieves lower transmission rates while sustaining picture quality.
Article
Image mosaicing is an effective means of constructing a single seamless image by aligning multiple partially overlapped images. Over the years, research attention on mosaicing has increased considerably due to its growing applications, and many algorithms for mosaicing and its contributing steps have come into existence. Though the varied approaches to mosaic generation are effective, several difficulties arise in each step of mosaicing that need to be addressed specifically. The aim of this review is to provide insight into existing mosaicing algorithms, along with their merits and shortcomings. Additionally, the manuscript provides a classification of these algorithms based on their domain of processing, application, image type, and visual attributes. Furthermore, a comparison among various mosaicing methods is presented to find out which algorithm works best for a particular application and image type. Finally, the paper concludes with a highlight of future research directions.
Article
Full-text available
Many spatial analyses involve constructing possibly non-convex polygons, also called “footprints,” that characterize the shape of a set of points in the plane. In cases where the point set contains pronounced clusters and outliers, footprints consisting of disconnected shapes and excluding outliers are desirable. This paper develops and tests a new algorithm for generating such possibly disconnected shapes from clustered points with outliers. The algorithm is called χ-outline, and is based on an extension of the established χ-shape algorithm. The χ-outline algorithm is simple, flexible, and as efficient as the most widely used alternatives, with O(n log n) time complexity. Compared with other footprint algorithms, the χ-outline algorithm requires fewer parameters than two-step clustering-footprint generation and is not limited to simple connected polygons, a limitation of χ-shapes. Further, experimental comparison with leading alternatives demonstrates that χ-outlines match or exceed the accuracy of α-shapes or two-step clustering-footprint generation, and are more robust to some forms of non-uniform point densities. The effectiveness of the algorithm is demonstrated through the case study of recovering the complex and disconnected boundary of a wildfire from crowdsourced wildfire reports.
Conference Paper
Full-text available
Panoramic ultrasound is an imaging process that produces a panoramic image providing both qualitative and quantitative information. Panoramic imaging broadens the scope of spatial relationships by sequentially aligning individual images in their anatomical context. It has the ability to display an entire abnormality and show its relationship to adjacent structures on a single static image; in essence, a series of images is stitched into one long image. Panoramic imaging is one of the key technologies used to widen the field of view of medical ultrasound images for clinical diagnosis and measurement. Two-dimensional (2-D) ultrasound panoramas have become prevalent in routine clinical practice, including precise measurement and tracing of extended and tubular structures in the abdomen, distance measurements in phantom studies, detection of muscle tears, and visualization of the musculoligamentous structures of the neck and the anatomical structures in obstetrics. Over the past few years, 3-D ultrasound has also become popular.
Conference Paper
Image measurement based on machine vision is a promising method for the precise measurement of workpieces. For a large workpiece that exceeds the FOV (field of view) of the lens, a series of images has to be taken and image mosaicing must be performed to carry out the measurement. When the large workpiece to be measured lacks distinctive features, one has to rely on a precise motion control system to position the camera. However, positioning error is introduced into the system because inaccuracy inevitably exists in the mechanical design and installation, and this leads to inaccuracy of the panorama after image mosaicing. The camera positioning error is self-calibrated through an image mosaicing operation on a large workpiece with abundant features. Because image measuring instruments usually use a closed-loop control system with high repetitive positioning precision, the system can compensate for the positioning error using the self-calibrated data. Consequently, image mosaicing for a large workpiece with few features can achieve high accuracy using the corrected motion control system, and experiments confirmed this conclusion.
Article
Video mosaicing constructs a wide or panoramic image from a video sequence by assembling the individual overlapping video frames. In this paper a technique for video mosaicing is presented that employs motion estimation and the robust random sample consensus (RANSAC) method. Initial point correspondences between each pair of consecutive frames are computed by a block matching method based on phase correlation; the RANSAC algorithm is then used to identify and reject outliers, and the alignment parameters between the consecutive frames are calculated to align the video images. Several video frames are mosaiced to construct a panoramic image, demonstrating the effectiveness of this approach.
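
A minimal sketch of the pipeline described in this abstract, using OpenCV's phase correlation for the per-block correspondences and RANSAC-based homography fitting for outlier rejection; the block size and reprojection threshold are illustrative choices, not values reported by the authors.

```python
# Sketch: per-block phase correlation gives initial correspondences; RANSAC
# then rejects outliers while fitting the inter-frame alignment (a homography).
import numpy as np
import cv2

def block_correspondences(prev, curr, block=64):
    """Shift of each block estimated by phase correlation.
    The sign convention of cv2.phaseCorrelate is assumed here and should be
    verified on real data (flip dx, dy if the mosaic drifts the wrong way)."""
    src, dst = [], []
    h, w = prev.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            p = np.float32(prev[y:y + block, x:x + block])
            c = np.float32(curr[y:y + block, x:x + block])
            (dx, dy), _ = cv2.phaseCorrelate(p, c)
            cx, cy = x + block / 2, y + block / 2
            src.append((cx, cy))
            dst.append((cx + dx, cy + dy))
    return np.float32(src), np.float32(dst)

def align_frames(prev_gray, curr_gray):
    src, dst = block_correspondences(prev_gray, curr_gray)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inliers
```
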
Article
To obtain a scene image with a wide field of view, block matching based on phase correlation is first applied to estimate the motion vector field; outlier motion vectors caused by image noise or the shadows of moving objects are then rejected using the motion vector distribution; finally, the parameters of the transformation model are calculated from the remaining dominant motion vectors to stitch the image sequence. Experimental results on a real image sequence demonstrate the effectiveness of this method.
Conference Paper
This paper proposes a novel method to create wide-angle, high-resolution videos from video sequences by assembling the individual overlapping video frames. Conventionally, the frame sequences of the different videos are registered on a common plane using a perspective matrix and the overlapped region is synthesized with a nonlinear blending method, which is time consuming; moreover, the scene depth varies frequently for dynamic video content, so the projection transform should be updated in real time as the content changes. To tackle these problems, we propose a robust algorithm that estimates the projection transform quickly and blends in real time. Our algorithm includes two stages: in the first, registration, stage, an accurate projection transform is obtained from the first frames of all videos; in the second stage, an optimized method updates the projection transform in real time as the scene depth varies, and a less expensive blending method is then used to blend the images and obtain high-quality video. Real video stitching experiments demonstrate that our algorithm speeds up video stitching and clearly improves visual quality.
Conference Paper
Video mosaicing constructs a wide or panoramic image from a video sequence by assembling the individual overlapping video frames. In this paper a technique for video mosaicing is presented that employs block matching based on phase correlation and a robust M-estimation method. Initial point correspondences between each pair of consecutive frames are computed by a block matching method based on phase correlation; M-estimation is then used to identify and reject outliers, and the alignment parameters between the consecutive frames are subsequently estimated by the Levenberg-Marquardt iterative nonlinear minimization algorithm. Several mosaic examples are constructed to demonstrate the effectiveness of this approach.
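
A sketch of the robust refinement stage in spirit: since SciPy's Levenberg-Marquardt mode does not accept robust losses, a trust-region solver with a Huber loss stands in for the M-estimation plus Levenberg-Marquardt combination described in the abstract. The parameterization (a homography with its last entry fixed to 1) and the initial estimate H0 (for example from block-matching correspondences) are assumptions of this sketch.

```python
# Sketch: robustly refine the eight homography parameters from noisy point
# correspondences. A Huber loss with a trust-region solver stands in for the
# M-estimation + Levenberg-Marquardt refinement described in the abstract.
import numpy as np
from scipy.optimize import least_squares

def reproject(params, src):
    """Apply the 8-parameter homography (h33 fixed to 1) to Nx2 points."""
    H = np.append(params, 1.0).reshape(3, 3)
    pts = np.hstack([src, np.ones((len(src), 1))]) @ H.T
    return pts[:, :2] / pts[:, 2:3]

def refine_homography(H0, src, dst):
    """Minimize a robust reprojection error starting from an initial H0."""
    def residuals(params):
        return (reproject(params, src) - dst).ravel()
    res = least_squares(residuals, (H0 / H0[2, 2]).ravel()[:8],
                        loss="huber", f_scale=2.0, method="trf")
    return np.append(res.x, 1.0).reshape(3, 3)
```
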
Conference Paper
Full-text available
We propose a method for estimating the motion parameters for a rotating camera with a possibly changing focal length. This method is suitable for analysing sport event videos. The estimation process consists of three stages: (1) robust estimation of the 2D translational components; (2) 2D block-based motion estimation; (3) robust estimation of a parametric motion model. For more reliability, confidence measures for the 2D motion vectors are introduced. The reliability of the method is illustrated on some difficult real image sequences.
Article
Full-text available
Most approaches for estimating optical flow assume that, within a finite image region, only a single motion is present. This single motion assumption is violated in common situations involving transparency, depth discontinuities, independently moving objects, shadows, and specular reflections. To robustly estimate optical flow, the single motion assumption must be relaxed. This paper presents a framework based on robust estimation that addresses violations of the brightness constancy and spatial smoothness assumptions caused by multiple motions. We show how the robust estimation framework can be applied to standard formulations of the optical flow problem, thus reducing their sensitivity to violations of their underlying assumptions. The approach has been applied to three standard techniques for recovering optical flow: area-based regression, correlation, and regularization with motion discontinuities. This paper focuses on the recovery of multiple parametric motion models within a region, as well as the recovery of piecewise-smooth flow fields, and provides examples with natural and synthetic image sequences.
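
A minimal sketch of the robust-estimation idea for a single parametric (affine) motion: the linearized brightness-constancy constraint is solved by iteratively reweighted least squares with a Lorentzian-style weight that down-weights pixels violating the assumptions. This illustrates the general framework only; it is not the formulation or implementation of the paper.

```python
# Sketch: robust affine motion from the linearized brightness-constancy
# constraint Ix*u + Iy*v + It = 0, solved by iteratively reweighted least
# squares with a Lorentzian-style weight.
import numpy as np

def robust_affine_flow(Ix, Iy, It, x, y, sigma=1.0, iters=10):
    """Ix, Iy, It, x, y are flattened arrays over the region; returns the six
    affine parameters (u = a0 + a1*x + a2*y, v = a3 + a4*x + a5*y)."""
    A = np.column_stack([Ix, Ix * x, Ix * y, Iy, Iy * x, Iy * y])
    b = -It
    a = np.zeros(6)
    for _ in range(iters):
        r = A @ a - b
        w = 1.0 / (1.0 + (r / sigma) ** 2)     # Lorentzian-style influence
        Aw = A * w[:, None]
        a = np.linalg.lstsq(Aw.T @ A, Aw.T @ b, rcond=None)[0]
    return a
```
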
Article
Full-text available
We present a new technique for long-term global motion estimation of image objects. The estimated motion parameters describe the continuous and time-consistent motion over the whole sequence relative to a fixed reference coordinate system. The proposed method is suitable for the estimation of affine motion parameters as well as for higher order motion models such as the parabolic model, combining the advantages of feature matching and optical flow techniques. A hierarchical strategy is applied for the estimation: first translation, then affine motion, and finally higher order motion parameters; it is robust and computationally efficient. A closed-loop prediction scheme is applied to avoid the problem of error accumulation in long-term motion estimation. The presented results indicate that the proposed technique is a very accurate and robust approach for long-term global motion estimation, which can be used for applications such as MPEG-4 sprite coding or MPEG-7 motion description. We also show that the efficiency of global motion estimation can be significantly increased if a higher order motion model is applied, and we present a new sprite coding scheme for on-line applications. We further demonstrate that the proposed estimator serves as a powerful tool for segmentation of video sequences.
Article
Full-text available
Research on ways to extend and improve query methods for image databases is widespread. We have developed the QBIC (Query by Image Content) system to explore content-based retrieval methods. QBIC allows queries on large image and video databases based on example images, user-constructed sketches and drawings, selected color and texture patterns, camera and object motion, and other graphical information. Two key properties of QBIC are (1) its use of image and video content (computable properties of the color, texture, shape, and motion of images, videos, and their objects) in the queries, and (2) its graphical query language, in which queries are posed by drawing, selecting, and other graphical means. This article describes the QBIC system and demonstrates its query capabilities. QBIC technology is part of several IBM products.
Article
Full-text available
A method is proposed for parametric motion estimation of an image region. It is assumed that the region considered undergoes an affine transformation, which means that the motion is composed of a translation and a pure affine function of pixel coordinates. The solution of the object correspondence problem is assumed to be known. The estimation of the six motion parameters is based on the moments of the corresponding image regions. Moments up to order three are needed. For the motion computation, each region is transformed to a standard position which is defined using affine invariants. The result of motion estimation is checked in the construction of a mosaic image.
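
A sketch of the moment-based idea under simplifying assumptions: the translation follows from the region centroids, and the linear part can be recovered from the second-order central moments up to a residual rotation or reflection. The paper uses moments up to order three, via affine-invariant standard positions, to resolve that remaining ambiguity; that step is omitted here.

```python
# Sketch: translation from centroids, linear part from second-order central
# moments (determined only up to a residual rotation/reflection).
import numpy as np
from scipy.linalg import sqrtm

def region_moments(mask):
    """Centroid and 2x2 second-order central moment matrix of a binary region."""
    ys, xs = np.nonzero(mask)
    c = np.array([xs.mean(), ys.mean()])
    d = np.column_stack([xs - c[0], ys - c[1]])
    return c, (d.T @ d) / len(xs)

def affine_from_moments(mask1, mask2):
    """Affine A, t with x2 ~ A @ x1 + t, up to a residual rotation."""
    c1, M1 = region_moments(mask1)
    c2, M2 = region_moments(mask2)
    A = np.real(sqrtm(M2)) @ np.linalg.inv(np.real(sqrtm(M1)))
    t = c2 - A @ c1
    return A, t
```
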
Article
Full-text available
The problem of piecing together individual frames in a video sequence to create seamless panoramas (video mosaics) has attracted increasing attention in recent times. One challenge in this domain has been to rapidly and automatically create high quality seamless mosaics using inexpensive cameras and relatively free hand motions. In order to capture a wide angle scene using a video sequence of relatively narrow angle views, the scene needs to be scanned in a 2D pattern. This is like painting a canvas on a 2D manifold with the video frames using multiple connected 1D brush strokes. An important issue that needs to be addressed in this context is that of aligning frames that have been captured using a 2D scanning of the scene rather than a 1D scan as is commonly done in many existing mosaicing systems. In this paper we present an end-to-end solution to the problem of video mosaicing when the transformations between frames may be modeled as parametric. We provide solutions ...
Article
This paper proposes a robust approach to image matching by exploiting the only available geometric constraint, namely, the epipolar constraint. The images are uncalibrated, namely the motion between them and the camera parameters are not known. Thus, the images can be taken by different cameras or a single camera at different time instants. If we make an exhaustive search for the epipolar geometry, the complexity is prohibitively high. The idea underlying our approach is to use classical techniques (correlation and relaxation methods in our particular implementation) to find an initial set of matches, and then use a robust technique, the Least Median of Squares (LMedS), to discard false matches in this set. The epipolar geometry can then be accurately estimated using a meaningful image criterion. More matches are eventually found, as in stereo matching, by using the recovered epipolar geometry. A large number of experiments have been carried out, and very good results have been obtained. Regarding the relaxation technique, we define a new measure of matching support, which allows a higher tolerance to deformation with respect to rigid transformations in the image plane and a smaller contribution for distant matches than for nearby ones. A new strategy for updating matches is developed, which only selects those matches having both high matching support and low matching ambiguity. The update strategy is different from the classical “winner-take-all”, which is easily stuck at a local minimum, and also from “loser-take-nothing”, which is usually very slow. The proposed algorithm has been widely tested and works remarkably well in a scene with many repetitive patterns.
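
A minimal sketch of the outlier-rejection step using OpenCV's least-median-of-squares estimator for the fundamental matrix; the correlation and relaxation stages that produce the candidate matches are assumed to have run already, and this stands in for, rather than reproduces, the authors' implementation.

```python
# Sketch: reject false matches by robustly fitting the epipolar geometry
# with a least-median-of-squares estimator of the fundamental matrix.
import numpy as np
import cv2

def filter_matches_lmeds(pts1, pts2):
    """pts1, pts2: Nx2 float32 candidate correspondences; returns F and inliers."""
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_LMEDS)
    keep = inlier_mask.ravel().astype(bool)
    return F, pts1[keep], pts2[keep]
```
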
Article
Recently, there has been a growing interest in the use of mosaic images to represent the information contained in video sequences. This paper systematically investigates how to go beyond thinking of the mosaic simply as a visualization device and instead use it as the basis for an efficient and complete representation of video sequences. We describe two different types of mosaics called the static and the dynamic mosaics that are suitable for different needs and scenarios. These two types of mosaics are unified and generalized in a mosaic representation called the temporal pyramid. To handle sequences containing large variations in image resolution, we develop a multiresolution mosaic. We discuss a series of increasingly complex alignment transformations (ranging from 2D to 3D and layers) for making the mosaics. We describe techniques for the basic elements of the mosaic construction process, namely sequence alignment, sequence integration into a mosaic image, and residual analysis to represent information not captured by the mosaic image. We describe several powerful video applications of mosaic representations including video compression, video enhancement, enhanced visualization, and other applications in video indexing, search, and manipulation.
Conference Paper
The problem of piecing together individual frames in a video sequence to create seamless panoramas (video mosaics) has attracted increasing attention in recent times. One challenge in this domain has been to rapidly and automatically create high quality seamless mosaics using inexpensive cameras and relatively free hand motions. In order to capture a wide angle scene using a video sequence of relatively narrow angle views, the scene needs to be scanned in a 2D pattern. This is like painting a canvas on a 2D manifold with the video frames using multiple connected 1D brush strokes. An important issue that needs to be addressed in this context is that of aligning frames that have been captured using a 2D scanning of the scene rather than a 1D scan as is commonly done in many existing mosaicing systems. In this paper we present an end-to-end solution to the problem of video mosaicing when the transformations between frames may be modeled as parametric. We provide solutions to two key problems: (i) automatic inference of topology of the video frames on a 2D manifold, and (ii) globally consistent estimation of alignment parameters that map each frame to a consistent mosaic coordinate system. Our method iterates among automatic topology determination, local alignment, and globally consistent parameter estimation to produce a coherent mosaic from a video sequence, regardless of the camera's scan path over the scene. While this framework is developed independent of the specific alignment model, we illustrate the approach by constructing planar and spherical mosaics from real videos.
Article
This paper presents a new technique for the creation of a sequence of mosaic images from an original video shot. A mosaic image represents, on a single image, the scene background seen over the whole sequence, and its creation requires the estimation of the warping parameters and the use of a blending technique. The warping parameters permit one to represent each original image in the mosaic reference frame. An estimation method, based on a direct comparison between the current original image and the previously calculated mosaic, is proposed. A new analytic minimization criterion is also designed to optimize the determination of the blending coefficient used for the update of the mosaic image with a new original image. This criterion is based on constraints related to the temporal variations of the background, the temporal delay and the resolution of the created mosaic images, while its minimization can be analytically performed. Finally, the proposed method is applied to the creation of new video sequences in which the camera point of view, the camera focal length, or the image size are modified. This approach has been tested and validated on real video sequences with large camera motion.
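
A minimal sketch of the mosaic update step: the warped current image is blended into the existing mosaic with a single blending coefficient alpha over the overlap region. The analytic criterion the paper derives for choosing alpha is not reproduced; alpha is left as a plain parameter.

```python
# Sketch: blend a newly warped frame into the mosaic with coefficient alpha.
import numpy as np

def update_mosaic(mosaic, mosaic_mask, warped, warped_mask, alpha=0.5):
    overlap = mosaic_mask & warped_mask          # pixels already in the mosaic
    new_only = warped_mask & ~mosaic_mask        # pixels seen for the first time
    out = mosaic.astype(float)
    out[overlap] = alpha * mosaic[overlap] + (1.0 - alpha) * warped[overlap]
    out[new_only] = warped[new_only]
    return out.astype(mosaic.dtype), mosaic_mask | warped_mask
```
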
Article
We present direct featureless methods for estimating the eight parameters of an “exact” projective (homographic) coordinate transformation to register pairs of images, together with the application of seamlessly combining a plurality of images of the same scene, resulting in a single image (or new image sequence) of greater resolution or spatial extent. The approach is “exact” for two cases of static scenes: (1) images taken from the same location of an arbitrary three-dimensional (3-D) scene, with a camera that is free to pan, tilt, rotate about its optical axis, and zoom, or (2) images of a flat scene taken from arbitrary locations. The featureless projective approach generalizes interframe camera motion estimation methods that have previously used a camera model (which lacks the degrees of freedom to “exactly” characterize such phenomena as camera pan and tilt) and/or which have relied upon finding points of correspondence between the image frames. The featureless projective approach, which operates directly on the image pixels, is shown to be superior in accuracy and the ability to enhance the resolution. The proposed methods work well on image data collected from both good-quality and poor-quality video under a wide variety of conditions (sunny, cloudy, day, night). These new fully automatic methods are also shown to be robust to deviations from the assumptions of static scene and no parallax.
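
As a hedged illustration of direct (featureless) projective registration, the sketch below uses OpenCV's ECC maximization with a homography motion model, which likewise operates directly on pixel intensities; it is a stand-in for, not an implementation of, the estimator proposed in this paper.

```python
# Sketch: direct, featureless projective registration via ECC maximization
# over a homography model (operates on pixel intensities, no feature points).
import numpy as np
import cv2

def direct_homography(ref_gray, mov_gray, iterations=200, eps=1e-6):
    """ref_gray, mov_gray: single-channel images of equal size."""
    warp = np.eye(3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, iterations, eps)
    _, warp = cv2.findTransformECC(ref_gray, mov_gray, warp,
                                   cv2.MOTION_HOMOGRAPHY, criteria)
    return warp
```

In typical ECC usage the recovered warp is then applied with cv2.warpPerspective and the cv2.WARP_INVERSE_MAP flag to bring the second image into the first image's coordinate frame.
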
Article
As computer-based video becomes ubiquitous with the expansion of transmission, storage, and manipulation capabilities, it will offer a rich source of imagery for computer graphics applications. This article looks at one way to use video as a new source of high-resolution, photorealistic imagery for these applications. If you walked through an environment, such as a building interior, and filmed a video sequence of what you saw, you could subsequently register and composite the video images together into large mosaics of the scene. In this way, you can achieve an essentially unlimited resolution. Furthermore, since you can acquire the images using any optical technology, you can reconstruct any scene regardless of its range or scale. Video mosaics can be used in many different applications, including the creation of virtual reality environments, computer-game settings, and movie special effects. I present algorithms that align images and composite scenes of increasing complexity, beginning with simple planar scenes and progressing to panoramic scenes and, finally, to scenes with depth variation. I begin with a review of basic imaging equations and conclude with some novel applications of the virtual environments created using the algorithms presented.
Article
An explosion of on-line image and video data in digital form is already well underway. With the exponential rise in interactive information exploration and dissemination through the World-Wide Web (WWW), the major inhibitors of rapid access to on-line video data are costs and management of capture and storage, lack of real-time delivery, and nonavailability of content-based intelligent search and indexing techniques. The solutions for capture, storage, and delivery may be on the horizon or a little beyond. However, even with rapid delivery, the lack of efficient authoring and querying tools for visual content-based indexing may still inhibit as widespread a use of video information as that of text and traditional tabular data is currently. In order to be able to nonlinearly browse and index into videos through visual content, it is necessary to develop authoring tools that can automatically separate moving objects and significant components of the scene, and represent these in a compact form. Given that video data comes in torrents (almost a megabyte every 30th of a second), it will be highly inefficient to search for objects and scenes in every frame of a video. In this paper, we present techniques to automatically derive compact representations of scenes and objects from the motion information. Image motion is a significant cue in videos for the separation of scenes into their significant components and for the separation of moving objects. Motion analysis is useful in capturing the visual content of videos for indexing and browsing in two different ways. First, separation of the static scene from moving objects can be accomplished by employing dominant 2D/3D motion estimation methods. Alternatively, if the goal is to be able to represent the fixed scene too as a composition of significant structures and objects, then simultaneous multiple motion methods might be more appropriate. In either case, view-based summarized representations of the scene can be created by video compositing/mosaicing based on the estimated motions. We present robust algorithms for both kinds of representations: 1) dominant motion estimation based techniques, which exploit a fairly common occurrence in videos that a mostly fixed background (scene) is imaged with or without independently moving objects, and 2) simultaneous multiple motion estimation and representation of motion video using layered representations. Ample examples of the representations achieved by each method are included in the paper.
Article
The determination of invariant characteristics is an important problem in pattern recognition. Many invariants are known, which have been obtained either by normalization or by other methods. This paper shows that the method of normalization is much more general and allows one to derive many of the latter sets of invariants as well. To this end, the normalization method is generalized and presented in such a way that it is easy to apply, thus unifying and simplifying the determination of invariants. Furthermore, this paper discusses the advantages and disadvantages of the invariants obtained by normalization. Their main advantage is that the normalization process provides us with a standard position of the object. Because of the generality of the method, new invariants are also obtained, such as normalized moments that are more stable than known ones, Legendre and Zernike descriptors for affine transformations, two-dimensional Fourier descriptors, and affine moment invariants obtained by combining Hu's moment invariants and normalized moments.
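
As an illustration of the two routes discussed above, the sketch below computes Hu's moment invariants directly (via OpenCV) and, separately, places a point set in a standard position by whitening its second-order moments; both are simplified illustrations rather than the constructions derived in the paper.

```python
# Sketch: invariants computed directly (Hu's moment invariants) versus
# normalization to a standard position by whitening the second-order moments.
import numpy as np
import cv2

def hu_invariants(gray):
    """Hu's seven moment invariants of a single-channel image or binary mask."""
    return cv2.HuMoments(cv2.moments(gray)).ravel()

def standard_position(points):
    """Translate to the centroid and whiten the second moments, so that any
    two affinely related point sets agree up to a residual rotation."""
    c = points.mean(axis=0)
    d = points - c
    cov = (d.T @ d) / len(points)
    evals, evecs = np.linalg.eigh(cov)
    whiten = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    return d @ whiten.T
```
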
Article
Video is a rich source of information. It provides visual information about scenes. This information is implicitly buried inside the raw video data, however, and is provided at the cost of very high temporal redundancy. While the standard sequential form of video storage is adequate for viewing in a movie mode, it fails to support rapid access to information of interest that is required in many of the emerging applications of video. This paper presents an approach for efficient access, use and manipulation of video data. The video data are first transformed from their sequential and redundant frame-based representation, in which the information about the scene is distributed over many frames, to an explicit and compact scene-based representation, to which each frame can be directly related. This compact reorganization of the video data supports nonlinear browsing and efficient indexing to provide rapid access directly to information of interest. This paper describes a new set of methods for indexing into the video sequence based on the scene-based representation. These indexing methods are based on geometric and dynamic information contained in the video. These methods complement the more traditional content-based indexing methods, which utilize image appearance information (namely, color and texture properties), but are considerably simpler to achieve and are highly computationally efficient.