Conference Paper

Reconstruction of a road by local image matches and global 3D optimization

Computer Vision Laboratory, University of Maryland, College Park, MD
DOI: 10.1109/ROBOT.1990.126186 · Conference: 1990 IEEE International Conference on Robotics and Automation
Source: IEEE Xplore

ABSTRACT A method is presented for reconstructing a 3-D road from a single image. It finds the images of opposite points of the road. Opposite points are points that face each other on opposite sides of the road; the images of these points are called matching points. For points chosen from one side of the road image, the algorithm finds all the matching-point candidates on the other side, based on local properties of a road. However, these solutions do not necessarily satisfy the global properties of a typical road. A dynamic programming algorithm is applied to reject the candidates that do not fit the global road. A benchmark using synthetic roads is described. It shows that the roads reconstructed by the proposed method match the actual roads better than those reconstructed by two other road reconstruction algorithms. Experiments with 50 road images taken by the autonomous land vehicle (ALV) showed that the method is robust with real-world data and that the reconstructions are fairly consistent with road profiles obtained by fusing range images and video images.
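The global optimization step described above can be sketched as a standard dynamic program: each point on one road side has several matching-point candidates, and one candidate per point is selected so that the total of the local matching costs plus a pairwise consistency cost between consecutive selections is minimal. This is a minimal illustrative sketch, not the authors' actual formulation; the cost functions `local_cost` and `smooth_cost` are assumptions standing in for the paper's local and global road properties.

```python
# Hedged sketch of the dynamic-programming candidate selection.
# candidates[i] is a list of matching-point candidates for point i;
# local_cost and smooth_cost are illustrative stand-ins for the
# paper's local and global road constraints.
def select_matches(candidates, local_cost, smooth_cost):
    n = len(candidates)
    # dp[i][j]: minimal total cost of a selection ending with
    # candidate j at point i; back[i][j]: predecessor index.
    dp = [[local_cost(0, c) for c in candidates[0]]]
    back = [[None] * len(candidates[0])]
    for i in range(1, n):
        row, brow = [], []
        for c in candidates[i]:
            best, arg = min(
                (dp[i - 1][k] + smooth_cost(p, c), k)
                for k, p in enumerate(candidates[i - 1])
            )
            row.append(best + local_cost(i, c))
            brow.append(arg)
        dp.append(row)
        back.append(brow)
    # Backtrack from the cheapest final candidate.
    j = min(range(len(dp[-1])), key=lambda k: dp[-1][k])
    path = [j]
    for i in range(n - 1, 0, -1):
        j = back[i][j]
        path.append(j)
    path.reverse()
    return [candidates[i][path[i]] for i in range(n)]
```

With this structure, candidates that are locally plausible but globally inconsistent (e.g., implying an abruptly changing road width) are never selected, because any path through them accumulates a large smoothness cost.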

  • ABSTRACT: 4D/RCS is a hierarchical architecture designed for the control of intelligent systems. One of the main areas to which 4D/RCS has recently been applied is the control of autonomous vehicles. To accomplish this, a hierarchical decomposition of on-road driving activities was performed, resulting in an implementation of 4D/RCS tailored to this application. This implementation has seven layers, ranging from a journey manager, which determines the order of the destinations to be visited; through a destination manager, which provides turn-by-turn directions to a destination; through route segment, drive behavior, elemental maneuver, and goal path trajectory layers; and finally to servo controllers. In this paper, we show, within the 4D/RCS architecture, how knowledge-driven top-down symbolic representations combined with low-level bottom-up tasks can synergistically provide more valuable information for on-road driving than either approach alone. We demonstrate these ideas using field data obtained from an Unmanned Ground Vehicle (UGV) traversing urban on-road environments.
    Proceedings of SPIE - The International Society for Optical Engineering, 05/2007.
  • ABSTRACT: The development of Advanced Driver Assistance Systems (ADAS) requires the ability to analyze the road scene just as a human does. Road scene analysis is an essential, complex, and challenging task, and it consists of: road detection (which includes the localization of the road, the determination of the relative position between vehicle and road, and the analysis of the vehicle's heading direction) and obstacle detection (which is mainly based on localizing possible obstacles on the vehicle's path). The detection of the road borders, the estimation of the road geometry, and the localization of the vehicle are essential tasks in this context, since they are required for the lateral and longitudinal control of the vehicle. Within this field, on-board vision has been widely used since it has many advantages (higher resolution, low power consumption, low cost, easy aesthetic integration, and nonintrusive nature) over active sensors such as RADAR or LIDAR. At first glance the problem of detecting the road geometry from visual information seems simple, and early works in this field were quickly rewarded with promising results. However, the large variety of scenarios and the high rates of success demanded by industry have kept lane detection research alive. In this article a comprehensive review of vision-based road detection systems is presented.
    ACM Computing Surveys (CSUR). 10/2013; 46(1).
  • ABSTRACT: This paper presents the visual servoing of a six degrees of freedom (6-DOF) manipulator for unknown three-dimensional profile following. The profile has an unknown curvature, but its cross section is known. The visual servoing keeps the transformation between a cross section of the profile and the camera constant with respect to 6 DOF. The position of the profile with respect to only five degrees of freedom can be measured with the camera, since the image does not provide position information along the profile. The kinematic model of the robot is used to reconstruct the displacement along the profile, i.e., the sixth degree of freedom, and makes it possible to control the profile-following velocity. Experiments show good accuracy for positioning at a sampling rate of 50 Hz. Two control strategies are tested: proportional-integral control and generalized predictive control (GPC). The visual servoing exhibits better accuracy with the GPC in simulations and in real experiments on a 6-DOF manipulator, due to the predictive property of the algorithm.
    IEEE Transactions on Robotics and Automation 08/2002; 18(4):511-520.
