Fig. 3. Scanning anti-aliased image line
Source publication
Article
Full-text available
V. Vysniauskas. Anti-aliased Pixel and Intensity Slope Detector // Electronics and Electrical Engineering. - Kaunas: Technologija, 2009. - No. 7(95). - P. 107-110. Each image dot can be associated with some image part, for example a line, an area where intensity changes, or an area with constant intensity. Image processing uses these image parts. Image transf...

Context in source publication

Context 1
... number of tests it was found that anti-aliased and slope pixels have some positive and some negative coefficients and none with zero value. Zero values mean that the pixel is at the top or the bottom of an intensity hop, or on a straight line. A very simple situation (Fig. 3) shows how the slope and anti-aliased pixel detector works. When a pixel lies on a slope or is anti-aliased, the surrounding coefficients have positive and negative values, and when the line intensity is unvarying there are no more than two zero values whose direction is the same as ...
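The quoted passage can be read as a simple sign test on the neighbourhood coefficients. The Python sketch below is only an illustration of that reading; it assumes (the paper's full definition is not shown in this excerpt) that the coefficients are the differences between each neighbour and the centre pixel along the scanned line:

```python
import numpy as np

def classify_pixels(line):
    """1-D sketch of the slope / anti-aliased pixel test described above.
    Assumption: the 'surrounding coefficients' are the differences
    neighbour - centre along the scan line."""
    line = np.asarray(line, dtype=float)
    labels = []
    for i in range(1, len(line) - 1):
        left = line[i - 1] - line[i]   # coefficient towards the left neighbour
        right = line[i + 1] - line[i]  # coefficient towards the right neighbour
        if left == 0 and right == 0:
            labels.append("flat")                # constant-intensity run
        elif left == 0 or right == 0:
            labels.append("hop top/bottom")      # a zero coefficient: top or bottom of an intensity hop
        elif left * right < 0:
            labels.append("slope/anti-aliased")  # one positive, one negative, no zeros
        else:
            labels.append("extremum")            # local peak or valley
    return labels

# A ramp between two plateaus, as in an anti-aliased image line (cf. Fig. 3)
print(classify_pixels([10, 10, 10, 40, 70, 100, 100, 100]))
# -> ['flat', 'hop top/bottom', 'slope/anti-aliased', 'slope/anti-aliased',
#     'hop top/bottom', 'flat']
```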

Similar publications

Article
Full-text available
K. Krumins, V. Petersons, V. Plocins. Features of Implementation of the Modified "up-and-down" Method // Electronics and Electrical Engineering. - Kaunas: Technologija, 2009. - No. 5(93). - P. 51-54. The modified "up-and-down" method is described and the advantages of the said method are illustrated in comparison with the standard "up-and-down" m...

Citations

... Assuming manually edited data as ground truth, it is possible to measure the total pixel match-mismatch ratio. When compared with pixel matching software that implements the pixel and intensity slope detector (Vysniauskas, 2009), 706 of 1367 predicted images exactly match the ground truth. 535 of the remaining 661 images have small (<2%) differences from the ground truth data. ...
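A minimal sketch of the evaluation described in this excerpt, assuming the mismatch ratio is simply the fraction of differing pixels per image (the function name and the exact difference measure are assumptions, not taken from the cited paper):

```python
import numpy as np

def match_statistics(predicted_images, ground_truth_images, tolerance=0.02):
    """Count images that match the manually edited ground truth exactly,
    and images whose fraction of mismatching pixels is below the tolerance."""
    exact, near = 0, 0
    for pred, gt in zip(predicted_images, ground_truth_images):
        mismatch_ratio = np.mean(np.asarray(pred) != np.asarray(gt))
        if mismatch_ratio == 0:
            exact += 1
        elif mismatch_ratio < tolerance:
            near += 1
    return exact, near

# Toy usage: two 2x2 masks, one exact match and one with a 25% pixel mismatch
gt = [np.array([[0, 1], [1, 0]]), np.array([[1, 1], [0, 0]])]
pred = [np.array([[0, 1], [1, 0]]), np.array([[1, 1], [0, 1]])]
print(match_statistics(pred, gt))   # -> (1, 0); the 25% mismatch exceeds 2%
```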
Conference Paper
Full-text available
A Digital Terrain Model (DTM) is a representation of the bare earth with elevations at regularly spaced intervals. This data is captured via aerial imagery or airborne laser scanning. Prior to use, all the above-ground natural (trees, bushes, etc.) and man-made (houses, cars, etc.) structures need to be identified and removed so that the surface of the earth can be interpolated from the remaining points. Elevation data that includes above-ground objects is called a Digital Surface Model (DSM). A DTM is mostly generated by cleaning the objects from the DSM with the help of a human operator. Automating this workflow is an opportunity to reduce manual work, and this study aims to solve the problem using conditional adversarial networks. In theory, having enough raw and cleaned (DSM & DTM) data pairs is a good input for a machine learning system that translates the raw (DSM) data into the cleaned one (DTM). Recent progress in topics like 'Image-to-Image Translation with Conditional Adversarial Networks' makes a solution to this problem possible. In this study, a specific conditional adversarial network implementation, "pix2pix", is adapted to this domain. Data for "elevations at regularly spaced intervals" is similar to image data: both can be represented as two-dimensional arrays (in other words, matrices). Every elevation point maps to exactly one image pixel, and even with 1-millimeter precision in the z-axis, any real-world elevation value can be safely stored in a data cell that holds 24-bit RGB pixel data. This makes the total pixel count of the image equal to the total count of elevation points in the elevation data. Thus, elevation data for large areas results in sub-optimal input for "pix2pix" and requires tiling. Consequently, the challenge becomes "finding the most appropriate image representation of elevation data to feed into the pix2pix training cycle". This involves iterating over "elevation-to-pixel-value-mapping functions" and dividing the elevation data into sub-regions that produce better performing images in pix2pix.
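As an illustration of the "elevation-to-pixel-value-mapping functions" mentioned in this abstract, the sketch below quantises elevations to 1-millimeter steps and packs them into 24-bit RGB pixels and back; the vertical offset and the channel packing order are assumptions, not taken from the paper:

```python
import numpy as np

def elevation_to_rgb(elevation_m, offset_m=1000.0):
    """One possible elevation-to-pixel mapping: shift by an assumed offset,
    quantise to 1 mm, and pack the 24-bit integer into R, G and B channels."""
    mm = np.round((np.asarray(elevation_m, dtype=np.float64) + offset_m) * 1000.0)
    mm = mm.astype(np.uint32)            # 24 bits cover roughly 16 777 m of range
    r = (mm >> 16) & 0xFF
    g = (mm >> 8) & 0xFF
    b = mm & 0xFF
    return np.stack([r, g, b], axis=-1).astype(np.uint8)

def rgb_to_elevation(rgb, offset_m=1000.0):
    """Inverse mapping: recover elevations in metres from the packed pixels."""
    rgb = rgb.astype(np.uint32)
    mm = (rgb[..., 0] << 16) | (rgb[..., 1] << 8) | rgb[..., 2]
    return mm / 1000.0 - offset_m

# Toy DSM tile in metres; the round trip is exact to 1 mm
dsm = np.array([[12.345, 12.346], [80.000, -5.250]])
assert np.allclose(rgb_to_elevation(elevation_to_rgb(dsm)), dsm, atol=1e-3)
```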