PySpot: A Python Based Framework for the Assessment of
Laser-Modified 3D Microstructures for Windows and Raspbian
Hannah Janout1 a, Bianca Buchegger2,3 b, Andreas Haghofer1 c , Dominic Hoeglinger2, Jaroslaw
Jacak2 d, Stephan Winkler1 e, Armin Hochreiner2 f
1University of Applied Sciences, Upper Austria School of Informatics, Communications and Media, Softwarepark 11, 4232
Hagenberg, Austria
2University of Applied Sciences, Upper Austria School of Medical Engineering and Applied Social Sciences,
Garnisionstraße 21, 4240 Linz, Austria
3Institute of Applied Physics, Johannes Kepler University Linz, Altenberger Straße 69, 4040 Linz, Austria
Hannah.Janout@fh-hagenberg.at
Keywords: Image Processing Methods, Fluorescence Microscopy, 3D Microstructures, Image Analysis, Software
Development, Bioinformatics
Abstract: Biocompatible 3D microstructures created with laser lithography and modified for optimal cell growth with laser grafting can imitate the 3D structure cells naturally grow in. Evaluating the quality and success of those 3D microstructures requires specialized software. Our software PySpot can load 2D and 3D images and analyze them through image segmentation, edge detection, and surface plots. Additionally, the creation and modification of regions of interest (ROIs) allow for the quality evaluation of specific areas in an image by intensity analysis. 3D rendering allows for identifying complex geometrical properties. Furthermore, PySpot runs on Windows as well as on Raspbian, making it flexible to use.
1 INTRODUCTION
1.1 Cell Research using Laser
Generated and Manipulated
Structures
2D and 3D writing of biocompatible microstructures,
as well as surface modification of such structures,
have gained importance in the fields of cell research,
material science, and biophysics. One way to create
such a microstructure is the technique of laser lithog-
raphy, where micro- and nanometer-sized structures
can be fabricated (Maruo S., 1997) (Kawata S.,
2001). Functionalized structures, e.g., improving
the imitation of natural cell growth, can be achieved
either by using different materials (Wollhofen R.,
2017) (Buchegger B., 2019) or by surface modifi-
cation of the written structures, called laser grafting
(Ovsianikov A., 2012). Laser grafting provides
a selective and precise way to apply coating and
functionalization even to the tiniest forms of objects.
Often, the functionality and success of these microstructures can be analyzed using a fluorescence microscope. Images obtained with fluorescence microscopes have to be interpreted with appropriate software. In this work, we introduce PySpot, an easy-to-use image analysis tool for use on Windows as well as Raspbian.
1.2 Problem Definition and Goals
Images taken with a fluorescence microscope can come in the form of SPE files, a 2D image file format produced by Princeton Instruments CCD cameras. Additionally, since this technique is used for 3D areas, some images come in the form of a Python source file. Those images were created by a C++ program implemented by the researchers at the University of Applied Sciences in Linz. They contain a 3D list of values, which is further processed as a 3D Numpy array.
Figure 1: Different images resulting from the workflow described in the section above. (a) and (b) show the result of 2D data, while (c) and (d) show different layers of a 3D image.
Figure 1 (a) and (b) show examples of 2D data received through SPE files. Those images contain a grid drawn through the application of laser lithography and laser grafting. The images (c) and (d) show a 3D microstructure: (c) shows the first layer of a structure, and (d) shows the tenth layer. In both of those images, the intensities are rather low, making it difficult to analyze the structure quality without proper software. Thus, we require image analysis software to assess the reliability and quality of the newly created technique.
One of the best-known and most widely used image analysis tools is the open-source processing software ImageJ. Opening and modifying an image in the SPE format is not possible in the base version of ImageJ and requires the installation of a plugin. The 3D Numpy arrays cannot be opened or modified with the base version at all but would require the implementation of a custom plugin. Moreover, ImageJ includes a vast number of functionalities, making it sometimes too complicated and unsuitable for unfamiliar users.
Another frequently used tool for image analysis is provided by the computing environment and language MATLAB and its image processing toolbox. While MATLAB and its toolbox offer most of the functionalities needed for this evaluation, they require the implementation of a GUI from scratch and the integration of the specific functions.
The variety of image formats and the lack of intuitive software for image analysis raised the need for specialized software. Our software enables loading images in the form of SPE files as well as Numpy arrays. Detailed analysis requires image segmentation, edge detection, and the creation and manipulation of ROIs. To evaluate the quality of the created image, the information contained inside an ROI's borders needs specific statistical evaluation by intensity analysis. PySpot is a user-friendly graphical interface implemented in Python for the analysis and evaluation of such 2D and 3D images. PySpot's functionality, as well as its methods, are described in detail in this paper.
2 IMPLEMENTATION
PySpot is an image analysis tool implemented in Python and was created to easily analyze and evaluate images of 2D and 3D laser-created microstructures, see Figure 2. PySpot accepts the SPE format created by the Princeton Instruments CCD cameras, as well as 3D Numpy arrays, and all available functionalities work for both formats.
Figure 2: Flowchart visualising the workflow needed to cre-
ate the research data.
PySpot provides features for the creation of regions
of interest (ROI) and their analysis through statisti-
cal evaluation. Furthermore, it allows for the evalu-
ation of the image through thresholding, edge detec-
tion, and surface plots.
2.1 Cross-Sections
A cross-section is defined as "something that has been cut in half so that you can see the inside, or a model or picture of this" (Rationality, 2019). In PySpot, a line represents a cross-section. It provides information regarding the intensities of the pixels located on that line, as well as the line's coordinates and length. Additionally, a histogram allows for a simple visualization of the intensity differences of the respective pixels.
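As an illustration of the underlying idea, a minimal NumPy sketch of such a line sampling could look as follows (function and variable names are ours, not PySpot internals):

import numpy as np

def cross_section(image, p0, p1, num=None):
    # Sample pixel intensities along the line from p0 = (x0, y0) to p1 = (x1, y1).
    (x0, y0), (x1, y1) = p0, p1
    length = np.hypot(x1 - x0, y1 - y0)
    if num is None:
        num = int(np.ceil(length)) + 1        # roughly one sample per pixel
    xs = np.linspace(x0, x1, num)
    ys = np.linspace(y0, y1, num)
    # nearest-neighbour lookup; the image is indexed as [row, column] = [y, x]
    values = image[ys.round().astype(int), xs.round().astype(int)]
    return values, length

img = np.random.randint(0, 4096, (64, 64))    # stand-in for an SPE image
profile, line_length = cross_section(img, (5, 5), (50, 40))

The returned profile can then be fed directly into a histogram for the visualization described above.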
2.2 Region of Interest
A region of interest (ROI) is a subset of an image that has been identified for a particular reason. In connection with image analysis, an ROI is a specific section of an image that marks the region to be analyzed by the program. In PySpot, an ROI can be shaped either as a rectangle or as a polygon and gives information on the ROI's area, dimensions, coordinates, and the statistical evaluation of its contained pixels.
Since there was no pre-existing class for the creation of an ROI and the saving of its values, an entirely new class structure was implemented. At the top of this structure stands an abstract class named "ROI", which declares all variables and methods needed for the representation of both forms of ROI. This includes variables for the ROI's coordinates and a list for its pixels, as well as several setters, getters, and methods to modify an ROI's pixel values and their positions. The classes written for the rectangular and polygonal ROI representation inherit from the "ROI" class and extend it with variables and functions needed for their specific cases. These include a variable for the visual representation of the ROI and functions for the calculation of its original coordinates, area, and dimensions.
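A condensed sketch of how such a hierarchy might look, with illustrative class and method names (the actual PySpot classes are more extensive):

from abc import ABC, abstractmethod
import numpy as np

class ROI(ABC):
    # Abstract base class: state and behaviour shared by all ROI shapes.
    def __init__(self, coordinates):
        self.coordinates = list(coordinates)   # vertices in image space
        self.pixels = []                       # (x, y, intensity) entries

    def move(self, dx, dy):
        # Shift the ROI; the stored vertex positions are updated accordingly.
        self.coordinates = [(x + dx, y + dy) for x, y in self.coordinates]

    @abstractmethod
    def area(self):
        ...

class RectangleROI(ROI):
    def area(self):
        (x0, y0), (x1, y1) = self.coordinates  # two opposite corners
        return abs(x1 - x0) * abs(y1 - y0)

class PolygonROI(ROI):
    def area(self):
        # shoelace formula over the polygon vertices
        xs, ys = np.array(self.coordinates, dtype=float).T
        return 0.5 * abs(np.dot(xs, np.roll(ys, 1)) - np.dot(ys, np.roll(xs, 1)))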
2.3 Automatic ROI Detection
PySpot includes a feature for the automatic detection of ROIs on the currently displayed image. This feature uses the principles of thresholding and contour finding. Since both algorithms work with grayscale images, the first step consists of converting the displayed image from RGBA to grayscale. The resulting image is given to a thresholding function to create a binary image with the foreground in white and the background in black, which is then used to single out the contours inside the image. The received contours vary greatly in size and can be as small as a single pixel. Therefore, each contour below a certain height and width is discarded; the user chooses those limits. If a contour meets the requirements, its points are saved into an array, added as a new ROI, and automatically drawn onto the image, as sketched below.
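With OpenCV, which offers exactly this kind of pipeline, the detection step could be sketched as follows (the function name and parameters are our own; PySpot's implementation may differ in detail):

import cv2
import numpy as np

def detect_rois(rgba, limit, min_w, min_h):
    gray = cv2.cvtColor(rgba, cv2.COLOR_RGBA2GRAY)
    # binary image: foreground white, background black
    _, binary = cv2.threshold(gray, limit, 255, cv2.THRESH_BINARY)
    # OpenCV >= 4 returns (contours, hierarchy)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rois = []
    for contour in contours:
        _, _, w, h = cv2.boundingRect(contour)
        if w >= min_w and h >= min_h:          # user-chosen size limits
            rois.append(contour.reshape(-1, 2))  # (N, 2) array of contour points
    return rois

rgba = np.zeros((64, 64, 4), dtype=np.uint8)   # stand-in RGBA image
rgba[20:40, 10:30] = 200                       # one bright rectangle
rois = detect_rois(rgba, limit=100, min_w=5, min_h=5)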
2.3.1 Thresholding
Thresholding is an image segmentation technique that isolates certain values in an image by converting it to a binary image. The image given to a thresholding function needs to be in a grayscale format, and its components are then partitioned into foreground and background based on their intensity values. Every pixel with a value above a specific limit is assigned one value; pixels with a value equal to or below the limit are assigned a different value (Sahoo P., 1988).
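Written out for a grayscale image f, a user-chosen limit T, and two output values v_1 and v_0 (in PySpot's binary case, white and black), this standard rule reads:

g(x, y) = \begin{cases} v_1 & \text{if } f(x, y) > T \\ v_0 & \text{otherwise} \end{cases}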
2.3.2 Contour Finding
A contour is a curved line that connects all consecutive points with the same value along a boundary. In image analysis, contours provide a proper way to obtain important information regarding object representation and image recognition (Satoshi Suzuki, 1985) (Seo J., 2016). Contour finding algorithms use binary images with white as the color for the foreground and black for the background. Due to this requirement, it is best to convert the desired image to grayscale and then apply thresholding. Contour finding algorithms find a start point near a white-black border and from there on track objects alongside the boundary between the white foreground and the black background. The contours' coordinates are saved in memory in the respective tracing order (Satoshi Suzuki, 1985) (Seo J., 2016).
In the case of a 3D image, ROIs are searched for in the currently displayed layer only, not in every individual one. The resulting ROIs created by this step are then inserted into every layer of the 3D image, providing a way to supervise the change in intensity throughout the layers, as sketched below.
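A minimal NumPy sketch of this propagation, assuming the array is laid out as layers × rows × columns (names are ours):

import numpy as np

def roi_mean_per_layer(volume, mask):
    # volume: (layers, height, width); mask: boolean ROI mask of one layer.
    # The same 2D mask is applied to every layer of the 3D image.
    return np.array([layer[mask].mean() for layer in volume])

vol = np.random.rand(10, 64, 64)            # stand-in for a 3D Numpy image
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True                   # ROI found on the displayed layer
intensity_trace = roi_mean_per_layer(vol, mask)   # intensity change across layers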
2.4 Searching for an ROI's Boundary
Finding the right borders for an ROI can be quite difficult in certain images. Thus, PySpot includes an edge detection feature, which highlights the edges of the image's objects. Edge detection allows for an easy way to segment an image and visually extract data from it. This can help in the image analysis and localization of objects and, therefore, of the boundaries of a potential ROI.
The edge detection feature uses well-known algorithms, namely Gaussian-blur and Canny Edge Detection. Both of these algorithms require grayscale images. Hence, the first step consists of the image's conversion from RGBA to grayscale. Once it has been converted, it is given to the Canny Edge Detection algorithm, which uses discontinuities in brightness to detect the edges. Due to the great variety in the images that have to be analyzed, PySpot automatically displays a second edge detection image, which was smoothed by an additional Gaussian-blur beforehand. This results in a smoother version of the image and makes small, insignificant edges and unwanted noise disappear from the end result, as sketched below.
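A possible OpenCV sketch of the two result images (the parameter values shown are placeholders; in PySpot they are user-defined):

import cv2
import numpy as np

rgba = np.random.randint(0, 255, (64, 64, 4), dtype=np.uint8)  # stand-in image
gray = cv2.cvtColor(rgba, cv2.COLOR_RGBA2GRAY)

# First result window: Canny only (it applies its own internal smoothing).
edges = cv2.Canny(gray, threshold1=100, threshold2=200)

# Second result window: an extra Gaussian-blur first, which suppresses
# small, insignificant edges and noise in the final edge image.
blurred = cv2.GaussianBlur(gray, ksize=(5, 5), sigmaX=3)
edges_smoothed = cv2.Canny(blurred, threshold1=100, threshold2=200)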
2.4.1 Gaussian-blur
The Gaussian-blur, also called Gaussian-smoothing,
is an image processing technique used to reduce noise
and the amount of detail in an image by smoothing
the intensity differences with the Gaussian function
(Gedraite E., 2011) (Deng G., 1993).
g(x, y) = \frac{1}{2\pi\sigma^2} \, e^{-\frac{x^2 + y^2}{2\sigma^2}} (1)
Since an image stores each pixel as a single discrete value, the function shown in Equation 1 has to be approximated. The Gaussian-blur's kernel determines the radius around a pixel in which this approximation is realized (Gedraite E., 2011) (Deng G., 1993).
2.4.2 Canny Edge Detection
Canny Edge Detection was developed by John F. Canny in 1986 and is a widely used edge detection operation, which works by detecting an image's discontinuities in brightness. Edge detection algorithms are used for segmentation and data extraction by tracing the boundaries of objects inside an image (J., 1986).
This algorithm is composed of five different steps and can only be applied to grayscale images. In the first step, the Canny Edge Detection algorithm smooths the given image with the Gaussian-blur function. This results in a smoother image with less noise that could otherwise be misinterpreted as edges. Afterward, in the second step, four different filters are used to allow the detection of horizontal, vertical, and diagonal edges. This is done by applying an edge detection operator on the image, calculating the first derivative in the horizontal or vertical direction. In the next step, the edges of the image are thinned out so that smaller, insignificant edges disappear. This is done by comparing the intensity of each pixel with the pixels in its positive and negative gradient direction. If the current pixel's intensity is higher than that of its neighbors, it is kept as an edge; otherwise, it is suppressed. The edges remaining after this step are categorized as either "strong" or "weak" edges based on their derivative's value. To separate the remaining noise from the actual edges, in the last step of the algorithm, pixels marked as weak are transformed into strong ones, as long as they neighbor at least one other strong pixel in the image (J., 1986).
2.5 Using Thresholding for Improved
ROI Detection
Even with the automatic ROI detection feature or the visual boundaries achieved through edge detection, finding the perfect form and size for an ROI can be quite difficult. The automatic detection might not find the exact ROIs desired by the user due to a wrong set of parameters or too much noise inside the image. The edge detection functionality cannot always provide a full outline of the objects because of intensity fluctuations alongside the border.
One way to improve the detection step and narrow down the number of potential ROIs is thresholding. The thresholding functionality sets all pixels of an image to either fore- or background based on a limit given by the user. This creates a sharp edge between the contained elements and their background, facilitating the process of finding an ROI's boundary when the result is used as the base image for automatic detection.
2.6 Image Rescaling
Images acquired by the workflow can vary in size, depending on the camera settings used and the data to be analyzed. Thus, using a fixed dimension for the image displayed by PySpot would lead to a loss in quality. Therefore, images in PySpot are displayed in their original size.
While this works perfectly for most images, some images' original size is too small to see the elements clearly or to select an ROI with the utmost precision. Therefore, PySpot includes a feature to change the size of the images by a specific factor between one and ten. A user can freely change the images' scaling and zoom in and out as they wish.
Resizing an image can be done at any point during the analysis process. Hence, all contained elements have to be adapted to the new size of the image as well. The graphical elements used to represent an ROI or cross-section cannot be resized directly. Therefore, each graphical element is redrawn during the resizing step and positioned at its respective coordinates in the newly sized image. These coordinates are calculated based on the position of the elements in the originally sized image, preventing wrong positioning, as sketched below.
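A minimal sketch of that coordinate mapping (names are ours):

def scale_points(points, factor):
    # Map element coordinates from the original image to the resized image.
    return [(x * factor, y * factor) for x, y in points]

roi_outline = [(10, 12), (30, 12), (30, 25), (10, 25)]   # original coordinates
redrawn = scale_points(roi_outline, factor=5)            # positions at 5x zoom

Because the original coordinates are always kept as the reference, repeated zooming in and out cannot accumulate rounding errors.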
2.7 Axes Visualization
A 3D image in PySpot is displayed layer by layer, using a scrollbar to scroll up and down through the image's individual sections along one axis. This allows for an easy way to modify and analyze the currently displayed axis. As a side effect, it is not possible to look at the axes besides the one currently displayed, requiring a particular feature to view all axes simultaneously.
Figure 3: Axes visualization of 3D data. The individual
axes (XY, XZ, YZ) are displayed next to each other, giving
a great overview.
The axes visualized are the XY-axis (which most analysis processes use), the XZ-axis, and the YZ-axis. A new window opens upon starting the feature to prevent an overfilling of the main window. This requires the implementation of an entirely new application. This new application contains three individual graphical views, one for each axis to be displayed, scrollbars to allow scrolling, and a checkbox to allow switching between a colored and a grayscaled image. Upon opening the application, for each image alongside an axis, its pixel values are extracted from the Numpy array. The extracted values are converted into an image and appended to an array of images containing all images of an axis. This structure allows for an easy way to scroll through the layers of an axis. Once every image has been converted and saved into its respective array, the application opens, visualizing the axes in a single window.
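With the image held as a 3D NumPy array, the three views reduce to simple slicing, assuming an axis order of Z × Y × X:

import numpy as np

vol = np.random.rand(10, 64, 64)    # stand-in 3D image, axis order (Z, Y, X)

xy_view = vol[4, :, :]     # XY plane: one layer along Z (the default display)
xz_view = vol[:, 32, :]    # XZ plane: fixed Y row
yz_view = vol[:, 32, :]  if False else vol[:, :, 16]   # YZ plane: fixed X column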
2.8 3D Rendering
Often, it is important to get a 3D understanding of the structure in a 3D image. This cannot be visualized simply by looking at the individual layers of the axes. 3D rendering provides an easy way to get a better 3D understanding of the image's objects.
The 3D rendering feature in PySpot is based on PyQtGraph, which relies on PyOpenGL for the 2D and 3D visualization of objects. For this feature, the image initially loaded into PySpot is given to another script as a parameter. This script creates a new window, which contains the 3D render. Information about the image's individual pixels is taken and modified: negative values are removed, and the overall values are smoothed. This array of modified pixels is added to a GLViewWidget as an item and shown as a 3D render, as sketched below.
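A minimal PyQtGraph sketch of this idea, assuming PyQt5 and a grayscale volume (PySpot's actual script differs in detail):

import numpy as np
import pyqtgraph as pg
import pyqtgraph.opengl as gl

vol = np.random.rand(32, 32, 32) - 0.1      # stand-in for the loaded 3D image
vol = np.clip(vol, 0, None)                 # remove negative values

app = pg.mkQApp()
view = gl.GLViewWidget()

# GLVolumeItem expects an (X, Y, Z, 4) RGBA array of unsigned bytes.
levels = (255 * vol / vol.max()).astype(np.ubyte)
rgba = np.stack([levels, levels, levels, levels // 2], axis=-1)
view.addItem(gl.GLVolumeItem(rgba))

view.show()
app.exec_()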
2.9 Grayscaling for Improved Intensity
Differentiation
Each color has a different influence on the way a person perceives an image and the objects it contains. Therefore, PySpot offers functionality to switch between an image's colored and grayscaled version. Colored images provide a better way to visualize the degree of intensity of the objects in an image, while grayscaled images offer a better way to analyze and locate the transitions between intensities.
To provide this feature, the loading step of an image creates an array for each color scheme. During the next step, each layer of the loaded image is converted into a colored RGBA and a grayscaled image and saved into the respective array. Through a checkbox in the GUI, a user can switch between those two color schemes, adapting the image's presentation to their current need, as sketched below.
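A small sketch of this loading step, assuming OpenCV for the conversion (PySpot's internals may differ):

import cv2
import numpy as np

layers = [np.random.randint(0, 255, (64, 64, 4), dtype=np.uint8) for _ in range(3)]

colored, grayscaled = [], []
for layer in layers:                        # one conversion per layer at load time
    colored.append(layer)
    grayscaled.append(cv2.cvtColor(layer, cv2.COLOR_RGBA2GRAY))
# The GUI checkbox then only switches which of the two arrays is displayed.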
3 RESULTS
We evaluated the accuracy of the laser manipulation
methods used to create 3D microstructures by utiliz-
ing PySpot.
In the first step, an image is loaded into the program
and displayed as an RGBA image. Example images
are shown in Figure 1 (a) and (b) and came from an
SPE file. Image (a) contains a grid as a carrier struc-
ture where a small part was modified by utilizing laser
manipulation to add a thin layer that enables the bind-
ing of fluorescence-labeled proteins.
The more particles are assembled at a position, the higher the intensity and, therefore, the brighter the color used for visualization. While an image with RGBA colors perfectly visualizes the intensity hotspots and the intensity distribution of an image, the great difference in color can make it quite difficult to identify the intensity gradient between areas and elements. Additionally, smaller, perhaps less significant areas vary only slightly from the darker background, making it easy to overlook them with this color scheme. Thus, an image can be displayed in grayscale by checking a checkbox labeled "GrayScale" to the right of the image. A grayscale image is not as well suited for identifying intensity hotspots as the colored version but provides a better way to identify and locate the transition between foreground and background, making the determination of an area's border and shape simpler, which can affect the way a potential ROI is drawn.
No matter the coloration of an image, both versions can be transformed into a binary image through the thresholding functionality from Section 2.3.1. Applying a threshold to the image turns all pixels above a specified limit white and all below it black. This way, the slow transitions between an image's objects and its background disappear and are replaced by a sharp change in intensity. This sharp change forms a border between an object's fore- and background, which can vary greatly depending on the limit chosen, as shown in Figure 4.
Figure 4: Thresholded versions of the original images from Figure 1 (a) and (b). (a) and (c) show the application of a threshold with the limit set to 100, (b) and (d) with 150.
(a) shows a threshold example of Figure 1 (a) with a limit of 100 and (b) with 150. (c) and (d) show the results of the same thresholding limits applied to Figure 1 (b). It is visible that the regions highlighted as foreground in images (a) and (c) cover a much greater area than those in (b) and (d) due to the broader range of accepted pixels. Analyzing an image with a thresholding filter gives a better overview and understanding of the individual pixels' values, and the borders created can function as a guide for a future ROI outline, be it by hand or through the automatic ROI detection feature. This detection feature is based on the principle of thresholding and contour finding. Therefore, applying a threshold beforehand provides an easy way to selectively segment a region of an image. Also, narrowing down the area significant for ROI detection saves computing time and avoids unnecessary calculations.
In Figure 5, the images from Figure 4 were used as the base for the application of automatic ROI detection. The outlines of the ROIs detected this way are almost identical to the threshold outlines created beforehand, showing the effectiveness of this feature.
Figure 5: Automatically detected ROIs on the corresponding threshold images from Figure 4. As seen in the previous figure, the ROIs in (a) are rather small and outline the brighter structure almost perfectly. In contrast, the ROIs in (b) cover a much greater area due to the brighter background.
As an alternative to the thresholding feature, another method for getting a better understanding of an area's outline and the border of potential ROIs is the edge detection feature. This feature uses a Gaussian-blur and Canny Edge Detection to filter out the outlines of an image's elements and displays them in two different result windows. The first of those windows contains an edge detection performed solely with the Gaussian filter already implemented in the Canny algorithm, and therefore also contains the edges of smaller, potentially insignificant borders. The second window displays an image to which an additional Gaussian-blur was applied before detecting the edges. This results in a smoother input image and fewer detected borders. The parameters used for the Gaussian-blur and the limits required for the Canny Edge Detection algorithm can be determined by the user.
As seen in Figure 6, a detection without the additional Gaussian-blur results in more detected edges. Since most ROIs only need the main outline of an element and not the small edges in between, this image often contains too much information.
Figure 6: Image (a) shows the original image in grayscale and the result of the Canny Edge Detection without an extra blur. In image (b), a Gaussian-blur was applied before the edge detection.
As an alternative, the
image from Figure 6 (b) had a Gaussian-blur with a kernel dimension of 5 and a sigma of 3 applied to it, significantly reducing the number of detected edges. This reduces the information contained in the image and, in some cases, displays an element's outlines more clearly than the version with all edges. Since this feature serves solely to give a better understanding of the elements' edges, its result is not taken into account during an automatic ROI detection.
In some cases, the element important for the image evaluation is too small to distinguish its contour correctly from the background. For these cases, an image can be enlarged by a factor between 1 and 10, making it easier to evaluate smaller elements and accurately draw an ROI around them, no matter the original size of the image.
Figure 7: Visualization of the zoom function. (a) and (b)
show close-ups of the structures visible in the images of
Figure 1 (a) and (b).
Figure 7 shows an application example of this feature: the images from above enlarged by a factor of 5. This enlargement allows for a better visualization of small details within an image, which would otherwise be overlooked. Additionally, the transitions between fore- and background become more visible.
Regardless of the way an ROI is drawn, be it automatically or manually, the evaluation of its pixels is always the same. First of all, every ROI displays its dimensions and area, as well as its coordinates, in a table as basic information. A second table displays information about the intensity of all pixels inside an ROI. This includes the minimum, average, and maximum intensity of all contained pixels, giving a general view of the brightness of the marked region. The brighter an ROI's intensity, the more particles have accumulated inside its borders. Naturally, the overall pixel intensity is not enough to evaluate the success of the cultivation. Therefore, a third table contains information about each pixel inside the currently selected ROIs. Each pixel's intensity is compared to its background, creating a direct relationship between its quality and intensity. The first comparison is made on a percentage basis, calculated by dividing the background's intensity by the pixel's, and saved as the contrast of the pixel to its background. The second comparison is named Delta and gives the absolute difference between pixel intensity and background, as sketched below.
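Following the description above, the per-pixel comparison could be computed as in this sketch (variable names are ours):

import numpy as np

roi_pixels = np.array([180.0, 210.0, 95.0])   # intensities inside the ROI
background = 60.0                             # background intensity estimate

contrast = background / roi_pixels * 100      # percentage-based comparison
delta = np.abs(roi_pixels - background)       # absolute difference ("Delta")
basic_stats = roi_pixels.min(), roi_pixels.mean(), roi_pixels.max()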
Histograms visualize the individual intensities and their frequency in an ROI, offering a valuable addition to the statistical evaluation due to the quality information connected to the pixels' intensities, as seen in Figure 8.
Figure 8: Histograms of the biggest ROIs in Figure 5. (a) and (b) show the histograms for the ROIs outlined in Figure 5 (a), (c) and (d) those outlined in Figure 5 (b). (a) and (c) contain the intensities of the big ROI created with a threshold of 100, while (b) and (d) cover the values of the smaller ROI.
The visualization with a histogram helps in the detection of outliers, which could have a significant influence on the quality and its evaluation. With knowledge of the intensity distribution, a threshold's limit and, ultimately, the outlines of an ROI can be set with more precision.
As seen in Figures 8 and 9, the results can vary greatly from one ROI to another.
Figure 9: Tables (a) and (b) correspond to the bigger ROI drawn in Figure 5 (a), while (c) and (d) cover those of the bigger ROI in Figure 5 (b).
While the ROI in (a) mostly
has intensity values around 4,000 counts per pixel, the ROI in (b) shows much higher overall intensity values, with the majority ranging between 5,000 and 6,000 counts per pixel. This shows that segmenting the image with different thresholds has a great impact on the selected regions' overall quality. Since the first ROI spans a greater area than the second one, its intensities are far more diverse than those concentrated in (b), taking thinly cultivated regions into account as well.
Through a comparison of (a) and (b) with (c) and (d), the difference between the two analyzed images becomes prominent. When judging the images with the naked eye, the difference between them does not seem too severe. Through comparison of the images with the program, the major difference becomes clear: the intensities of the ROIs in Figure 5 (a) are around ten times higher than those in (b).
All those steps can be taken with 3D images as well, visualizing the difference between the individual layers of a 3D area created through laser modification. The first step consists of a grayscale conversion to get a better understanding of the individual transitions between fore- and background, as shown in Figure 10.
Figure 10: (a) shows the grayscaled version of the first layer
of the 3D image, which contains no fluorescent particles and
is therefore black. (b) shows the third layer of the 3D image
and the drawn structure.
As seen in the figure, the first image has not been modified by the laser and, therefore, shows no signs of any fluorescent particles accumulated on it. In contrast, the third layer shown in (b) contains a structure drawn by the laser and its accumulated particles. Just as with 2D images, this transition can be made more prominent by applying a thresholding function, allowing for a better visualization and restriction of the areas important for the quality evaluation.
Figure 11: (a) and (b) visualize the effect of a threshold on the third layer, while (c) and (d) show the results on layer 4. The images on the left-hand side were created with a threshold limit of 100, the ones on the right-hand side with 150.
As seen in Figure 11, the result can vary greatly among layers. This variation can be due to the quality of the modification on the different layers, causing particles to accumulate more prominently on one layer than on another. While this might be the case for several layers in some images, others contain only one layer with such a great difference and show certain outliers.
This makes the selection of a matching ROI quite dif-
ficult since there is no complete consistency in the
present data. Due to this inconsistency among some
layers, the layer chosen for an automatic ROI detec-
tion has a great influence on the resulting quality eval-
uation of the image.
Figure 12: Automatically detected ROI with threshold limit
150 on layer 3 in (a) and layer 4 in (b).
Figure 12 shows the biggest ROI resulting from an automatic detection on layer three with a threshold limit of 150. As seen in the figure, the shown ROI includes many pixels with an intensity above 150 on layer three, while it contains almost no pixels above this value on layer four, visualizing the great difference in intensity and density of fluorescent particles on those two layers. Had another layer been chosen as the base for the detection, the resulting ROI would have marked a different area. Therefore, the layer chosen for the detection largely determines the quality evaluation of the whole 3D image.
The great difference is even more evident in the statistical evaluation of the ROI's pixel values, as seen in the histograms of Figure 13.
Figure 13: Histograms of the ROI shown in Figure 12. (a)
displays the values for layer three, while layer four is repre-
sented in (b).
As seen in the histograms, the intensity of the contained pixels varies greatly between the layers. On layer three, most of the contained pixels show an intensity value of 180 counts per pixel, with the majority collecting around 200 and the highest going up to just above 250 counts per pixel. Contrary to this, the pixels on layer four show a much lower intensity, with values ranging between 5 and 160. The largest share shows an intensity of around 40 counts per pixel, with an average of 58 counts per pixel due to the high number of pixels with a value of 60 counts.
While the histograms show the distribution of counts per pixel, the statistical evaluation is provided by different tables, as seen in Figure 14.
These tables show the ROI's dimensions, area, and information about all contained pixels. As seen in this data, on the third layer the average intensity comes to 187 counts per pixel, with a minimum of 66 and a maximum of 264. Contrary to this, the same ROI on layer four only shows an average of 58, a minimum of 4, and a maximum of 156 counts per pixel. Therefore, the average intensity on layer three is more than three times that of layer four.
Figure 14: Statistical evaluation of the ROI on the different layers. (a) and (b) visualize the results of layer three, while (c) and (d) show the results of layer four.
4 CONCLUSION
In this paper, we presented easy-to-use software for analyzing and evaluating SPE and Numpy array data. The features of the software were illustrated using 2D and 3D imaged microstructures. PySpot provides standard functionalities, e.g., histograms and thresholding, as well as more specialized features like the automated detection of ROIs and the visualization of three orthogonal planes (XY, XZ, and YZ) in one figure. As PySpot is Python-based software, it can be used on Windows and on Raspbian Jessie and therefore also on a Raspberry Pi (a cheap single-board computer). PySpot can be downloaded free of charge from the homepage of the Bioinformatics Research Group Hagenberg (http://bioinformatics.fh-hagenberg.at). Due to its simplicity, PySpot is also suited for inexperienced users.
ACKNOWLEDGEMENTS
This work was funded by the TIMED funding of the Upper Austrian University of Applied Sciences (TCLOEM) and by the Base Funding of the Upper Austrian University of Applied Sciences (project ABSAOS).
REFERENCES
Buchegger B., Vidal C., N. J. B. B. K. A. H. A. K. T. A. J. J. (2019). Gold nanoislands grown on multiphoton polymerized structures as substrate for enzymatic reactions. ACS Materials Letters, pages 399–403.
Deng G., C. L. (1993). An adaptive Gaussian filter for noise reduction and edge detection. Volume 3, pages 1615–1619.
Gedraite E., H. M. (2011). Investigation on the effect of a Gaussian blur in image filtering and segmentation. Pages 393–396.
J., C. F. (1986). A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8:679–698.
Kawata S., Sun H.-B., T. T. T. K. (2001). Finer features for functional microdevices. Nature, 412(6848):697–698.
Maruo S., Nakamura O., K. S. (1997). Three-dimensional microfabrication with two-photon-absorbed photopolymerization. Opt. Lett., 22(2):132–134.
Ovsianikov A., Li Z., T. J. S. J. L. R. (2012). Selective functionalization of 3D matrices via multiphoton grafting and subsequent click chemistry. Advanced Functional Materials, 22(16):3429–3433.
Rationality (2019). The Cambridge Dictionary.
Sahoo P., Soltani S., W. A. (1988). A survey of thresholding techniques. Computer Vision, Graphics, and Image Processing, 41:233–260.
Satoshi Suzuki, K. A. (1985). Topological structural analysis of digitized binary images by border following. Computer Vision, Graphics, and Image Processing, 30(1):32–46.
Seo J., Chae S., S. J. K. D. C. C. H. T. (2016). Fast contour-tracing algorithm based on a pixel-following method for image sensors.
Wollhofen R., Buchegger B., E. C. J. J. K. J. K. T. A. (2017). Functional photoresists for sub-diffraction stimulated emission depletion lithography. Opt. Mater. Express, 7(7):2538–2559.