Abm Tariqul Islam

University of Rostock · Institute of Computer Science

PhD in Computer Science

About

Publications: 17
Reads: 3,921
Citations: 49
Citations since 2016: 42 (from 6 research items)
Introduction
I currently work at the Institute of Computer Science, University of Rostock. My research covers the development and implementation of solutions to problems in computer vision, 3D imaging, RGB-D image enhancement, depth cameras, and OptiTrack systems.
Additional affiliations
August 2019 - present
University of Rostock
Position
  • Senior Researcher
Description
  • Problem worked on: 3D image/video enhancement. Approach: filtering-based real-time spatial and temporal enhancement of static and dynamic scenes obtained from 3D cameras (e.g., Microsoft Kinect, Xtion Pro). In addition to exploiting existing image filters, I introduced a novel 1D least median of squares (LMedS) filter to enhance the depth frames. Related mathematical domains: regression, linear algebra, coordinate geometry. Computer vision tools: image filters, convolution, morphological processing.
December 2012 - July 2019
University of Rostock
Position
  • Researcher
Description
  • Same research focus as the Senior Researcher position above: filtering-based real-time spatial and temporal enhancement of depth images obtained from 3D cameras (e.g., Microsoft Kinect, Xtion Pro), including the 1D least median of squares filter for depth frame enhancement.
May 2010 - September 2012
Drylab
Position
  • Programmer
Description
  • Job description: software design and development for processing, transcoding, and color-correcting video files from RED cameras. Additional duties: collaborating with programmers and clients, software testing. Development tools and languages: C++ and Objective-C. Achievements: 1. Developed software products to enhance raw footage taken from the RED movie camera. 2. Learned the details of programming in the Mac OS X and iOS environments. 3. Learned how to work with a group of programmers.
Education
January 2013 - June 2019
University of Rostock
Field of study
  • Computer Vision, 3D Image Processing, 3D Depth Image Enhancement
August 2008 - June 2009
University of Granada
Field of study
  • Computer Vision, Image Processing, Color Image Processing
August 2008 - July 2010
Norwegian University of Science and Technology
Field of study
  • Computer Vision, Image Processing, Color Image Processing

Publications

Publications (17)
Article
Foveated rendering adapts the image synthesis process to the user’s gaze. By exploiting the human visual system’s limitations, in particular in terms of reduced acuity in peripheral vision, it strives to deliver high-quality visual experiences at very reduced computational, storage, and transmission costs. Despite the very substantial progress made...
Conference Paper
Considering human capability for spatial orientation and navigation, the visualization used to support the localization of off-screen targets inevitably influences the visual-spatial processing that relies on two frameworks. So far it is not proven which frame of reference, egocentric or exocentric, contributes most to efficient viewpoint guidance...
Conference Paper
The depth images from RGB-D cameras contain a substantial amount of artifacts such as holes and flickering. Moreover, for fast moving objects in successive frames, we perceive ghosting artifacts on the depth images. Hence, the poor quality of the depth images limits them to be used in various applications. Here, we propose a gradient based spatial...
Article
Full-text available
Abstract The popularity of online collaboration on lecture content has been growing steadily over the last few decades because of its potential to enhance the overall learning experience. We propose a didactical approach of online collaboration where the students and the teachers can collaborate seamlessly on the lecture contents. The approach, whi...
Article
Full-text available
In recent years, depth cameras (such as Microsoft Kinect and ToF cameras) have gained much popularity in computer graphics, visual computing and virtual reality communities due to their low price and easy availability. While depth cameras (e.g. Microsoft Kinect) provide RGB images along with real-time depth information at high frame rate, the depth...
Conference Paper
We present a first attempt to use interpolation based approach to combine a mobile eye tracker with an external tracking system to obtain a 3D gaze vector for a freely moving user. Our method captures calibration points of varying distances, pupil positions and head positions/orientations while the user can move freely within the range of the exter...
Conference Paper
Full-text available
We propose a new method to enhance the depth images from RGB-D sensors, such as Kinects, by filling the missing/invalid values which are reported by those sensors at certain pixels. We introduce a robust 1D least median of squares (1D LMedS) approach to accurately estimate the depth values of those invalid pixels. We use this approach for efficient...
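The 1D least median of squares idea above can be sketched as follows. This is a minimal illustration under my own assumptions (random pair sampling to search for the line with the smallest median squared residual, and a simple per-row hole-filling loop), not the authors' actual implementation:

```python
import random

def lmeds_line_fit(xs, ys, n_trials=200, seed=0):
    """Fit y = a*x + b by least median of squares: sample point
    pairs, keep the line whose median squared residual over all
    points is smallest."""
    rng = random.Random(seed)
    pts = list(zip(xs, ys))
    best, best_med = None, float("inf")
    for _ in range(n_trials):
        (x1, y1), (x2, y2) = rng.sample(pts, 2)
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = sorted((y - (a * x + b)) ** 2 for x, y in pts)
        med = residuals[len(residuals) // 2]
        if med < best_med:
            best_med, best = med, (a, b)
    return best

def fill_holes_1d(depth_row, invalid=0, window=4):
    """Replace invalid depth samples with values predicted by an
    LMedS line fitted to valid neighbours within +/- window pixels."""
    out = list(depth_row)
    for i, d in enumerate(depth_row):
        if d != invalid:
            continue
        xs = [j for j in range(max(0, i - window),
                               min(len(depth_row), i + window + 1))
              if depth_row[j] != invalid]
        ys = [depth_row[j] for j in xs]
        if len(xs) >= 2:
            fit = lmeds_line_fit(xs, ys)
            if fit:
                a, b = fit
                out[i] = a * i + b
    return out
```

Because the median of squared residuals ignores up to half of the samples, an outlying depth value near a hole does not corrupt the estimate, which is the robustness property LMedS is chosen for.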
Conference Paper
Full-text available
We propose a new method to fill missing or invalid values in depth images generated from the Kinect depth sensor. To fill the missing depth values, we use a robust least median of squares (LMedS) approach. We apply our method for telepresence environments, where Kinects are used very often for reconstructing the captured scene in 3D. We introduce a...
Conference Paper
We propose a new method to fill missing or invalid values in depth images generated from the Kinect depth sensor. To fill the missing depth values, we use a robust least median of squares (LMedS) approach. We apply our method for telepresence environments, where Kinects are used very often for reconstructing the captured scene in 3D. We introduce a...
Conference Paper
In this paper, we present a novel idea of converting pedagogical text documents to visual learning objects by automatically extracting nouns and semantic keywords from the text documents, and representing these keywords as a word cloud. A word cloud contains words that are weighted based on frequency, time, appearance, etc., depending on the concep...
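The frequency-based weighting mentioned in the abstract can be illustrated with a toy sketch. The paper's system extracts nouns and semantic keywords; this assumed version only counts word frequencies after stopword removal:

```python
import re
from collections import Counter

# A tiny stopword list for illustration only.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in",
             "is", "for", "on", "with", "that", "as", "by"}

def keyword_weights(text, top_n=10):
    """Tokenize, drop stopwords and very short tokens, and weight
    the remaining keywords by relative frequency, as a word-cloud
    renderer might use for font sizes."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words
                     if w not in STOPWORDS and len(w) > 2)
    total = sum(counts.values()) or 1
    return [(w, c / total) for w, c in counts.most_common(top_n)]
```

The paper also mentions weighting by time and appearance; those factors would replace or scale the raw counts in `counts` before normalization.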
Conference Paper
Full-text available
We present an approach for high quality rendering of the 3D representation of a remote collaboration scene, along with real-time rendering speed, by expanding the unstructured lumigraph rendering (ULR) method. ULR uses a 3D proxy which is in the simplest case a 2D plane. We develop dynamic proxy for ULR, to get a better and more detailed 3D proxy i...
Conference Paper
Multi-camera telepresence systems, used for remote collaboration, face the problem of transmitting large amount of dynamic data generated from multiple viewpoints. This problem becomes even more critical when the practical issue of bandwidth limitation comes into play. Even if the data stream can be reduced by different strategies such as dynamic c...
Conference Paper
Full-text available
Cutting-edge telepresence systems equipped with multiple cameras for capturing the whole scene of a collaboration space, face the challenge of transmitting huge amount of dynamic data from multiple viewpoints. With the introduction of Light Field Displays (LFDs) in to the remote collaboration space, it became possible to produce an impression of 3D...
Conference Paper
Life-size high-resolution telepresence systems, used for remote collaboration, face the problem of transmitting huge data from multiple viewpoints. We present different strategies focusing on efficient camera selection and acquisition method to discard part of image data for transmission as a preprocess to classical video compression schemes. At th...
Conference Paper
The archives of motion pictures represent an important part of precious cultural heritage. Unfortunately, these cinematography collections are vulnerable to different distortions such as colour fading which is beyond the capability of photochemical restoration process. Spatial colour algorithms-Retinex and ACE provide helpful tool in restoring stro...
Conference Paper
Full-text available
Development and implementation of spatial color algorithms has been an active field of research in image processing for the last few decades. A number of investigations have been carried out so far in mimicking the properties of the human visual system (HVS). Various algorithms and models have been developed, but they produce more or less neutral o...

Projects

Projects (2)
Project
I want to understand how I can improve classroom-based teaching using ubiquitous devices such as mobile phones, smartwatches, or tablets. My main interest is in low-level use, where technology supports the teaching process without forcing a focus on the technology, the device user interface, or similar distractions.