Igor Bogoslavskyi
University of Bonn · Institute of Geodesy and Geoinformation (IGG)

MSc

About

11 Publications
7,883 Reads
619 Citations

Publications (11)
Conference Paper
Full-text available
The ability to build maps is a key functionality for the majority of mobile robots. A central ingredient to most mapping systems is the registration or alignment of the recorded sensor data. In this paper, we present a general methodology for photometric registration that can deal with multiple different cues. We provide examples for registering RG...
Article
Full-text available
The ability to build maps is a key functionality for the majority of mobile robots. A central ingredient to most mapping systems is the registration or alignment of the recorded sensor data. In this paper, we present a general methodology for photometric registration that can deal with multiple different cues. We provide examples for registering RG...
Conference Paper
Full-text available
3D laser scanners are frequently used sensors for mobile robots or autonomous cars and they are often used to perceive the static as well as dynamic aspects in the scene. In this context, matching 3D point clouds of objects is a crucial capability. Most matching methods such as numerous flavors of ICP provide little information about the quality of...
Article
Full-text available
The ability to extract individual objects in the scene is key for a large number of autonomous navigation systems such as mobile robots or autonomous cars. Such systems navigating in dynamic environments need to be aware of objects that may change or move. In most perception cues, a pre-segmentation of the current image or laser scan into individua...
Conference Paper
Full-text available
Object segmentation from 3D range data is an important topic in mobile robotics. A robot navigating in a dynamic environment needs to be aware of objects that might change or move. A segmentation of the laser scans into individual objects is typically the first processing step before a further analysis is performed. In this paper, we present a fast...
Article
Full-text available
The ability to explore an unknown environment is an important prerequisite for building truly autonomous robots. Two central capabilities for autonomous exploration are the selection of the next view point(s) for gathering new observations and robust navigation. In this paper, we propose a novel exploration strategy that exploits background knowled...
Conference Paper
Full-text available
In autonomous exploration tasks, robots usually rely on a SLAM system to build a map of the environment online and then use it for navigation purposes. Although there has been substantial progress in robustly building accurate maps, these systems cannot guarantee the consistency of the resulting environment model. In this paper, we address the prob...
Conference Paper
Full-text available
In this document, we summarize our preliminary analysis of the ROVINA project. Our inspection shows that the environment is challenging and comparable to a rescue scenario, albeit with more demanding sensing requirements due to the nature of the addressed task.
Conference Paper
Full-text available
For autonomous robots, the ability to classify their local surroundings into traversable and non-traversable areas is crucial for navigation. In this paper, we address the problem of online traversability analysis for robots that are only equipped with a Kinect-style sensor. Our approach processes the depth data at 10-25 fps on a standard noteb...
