Fast and robust detection of solar modules in
electroluminescence images
Mathis Hoffmann1,2, Bernd Doll2,3,4, Florian Talkenberg5, Christoph J.
Brabec2,3, Andreas K. Maier1, and Vincent Christlein1
1 Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
2 Institute Materials for Electronics and Energy Technology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
3 Helmholtz-Institut Erlangen-Nürnberg, Germany
4 Graduate School in Advanced Optical Technologies, Erlangen, Germany
5 greateyes GmbH, Berlin, Germany
Abstract. Fast, non-destructive and on-site quality control tools, mainly highly sensitive imaging techniques, are important to assess the reliability of photovoltaic plants. To minimize the risk of further damage and electrical yield losses, electroluminescence (EL) imaging is used to detect local defects at an early stage, which might otherwise cause future electrical losses. Automated defect recognition on EL measurements requires a robust detection and rectification of modules, as well as an optional segmentation into cells. This paper introduces a method to detect solar modules and the crossing points between solar cells in EL images. We only require 1-D image statistics for the detection, resulting in a computationally efficient approach. In addition, the method is able to detect modules under perspective distortion and in scenarios where multiple modules are visible in the image. We compare our method to the state of the art and show that it is superior in the presence of perspective distortion, while the performance on images where the module is roughly coplanar to the detector is similar to that of the reference method. Finally, we show that our method greatly improves on the reference method in terms of computation time.
Keywords: Object detection · Automated inspection · EL imaging
1 Introduction
Over the last decade, photovoltaic (PV) energy has become an important factor in emission-free energy production. In 2016, for example, about 40 GW of PV capacity was installed in Germany, which amounts to nearly one fifth of the total installed electric capacity [3]. Not only in Germany has renewable electricity production been transformed into a considerable business. It is expected that by 2023 about one third of worldwide electricity will come from renewable sources [14]. To ensure high performance of the installed modules, regular inspection by imaging and non-imaging methods is required. For on-site inspection, imaging methods
Fig. 1: Example EL image.
are very useful to identify defective modules after signs of decreasing electricity generation have been detected. Typically, on-site inspection of solar modules is performed by infrared (IR) or electroluminescence (EL) imaging. This work focuses on EL imaging; however, it could be adapted to other modalities as well.
A solar module (see Fig. 1) consists of a varying number of solar cells placed on a regular grid. Since cells on a module share a similar structure and cracks usually spread only within a single cell, it is a natural algorithmic choice to perform detailed inspection on a per-cell basis. To this end, an automatic detection of the module and of the crossing points between cells is required.
Our main contributions are as follows: We propose a method for the detection of solar modules and of the crossing points between solar cells in the image. It works irrespective of the module's pose and position. Our method is based on 1-D image statistics, leading to a very fast approach. In addition, we show how it can be extended to situations where multiple modules are visible in the image. Finally, we compare our method to the state of the art and show that the detection performance is comparable, while the computation time is lowered by a factor of 40.
The remainder of this work is organized as follows: In Sec. 2, we summarize the state of the art in object detection and specifically in the detection of solar modules. In Sections 3 and 4, we introduce our method, which is then compared against the state of the art in Sec. 5.
2 Related work
The detection of solar modules in an EL image is an object detection task.
Traditionally, feature-based methods have been applied to solve the task of object detection. In particular, Haar wavelets have proven successful [10]. For their efficient computation, Viola and Jones [13] made use of integral images, previously known as summed-area tables [1]. Integral images are also an essential part of our method.
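For illustration, a summed-area table and the resulting constant-time box sum can be sketched in a few lines of NumPy (the helper names are ours, not part of the method):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended:
    ii[r, c] = sum of img[:r, :c]."""
    return np.pad(img.astype(float), ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] recovered with four lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

Once the table is built, the sum over any axis-aligned rectangle costs four lookups, independent of its size.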
The detection of solar modules is related to the detection of checkerboard calibration patterns, since both are planar objects with a regular structure. Recently, integral images have been used with a model-driven approach to robustly and accurately detect checkerboard calibration patterns in the presence of blur and noise [8]. We employ a similar model-driven approach that exploits the regular structure of the cells, but only uses 1-D image statistics. Similar techniques are applied by the document analysis community to detect text lines [9].
In recent years, convolutional neural networks (CNNs) have achieved superior performance in many computer vision tasks. For example, single-stage detectors like YOLO [11] yield good detection performance at a tolerable computational cost. Multi-stage object detectors, such as R-CNN [6], achieve even better results but come with an increased computational cost. In contrast to CNN-based approaches, the proposed method does not require any training data and is computationally very efficient.
There is little prior work on the automated detection of solar modules. Vetter et al. [12] proposed an object detection pipeline that consists of several stacked filters followed by a Hough transform to detect solar modules in noisy infrared thermography measurements. Recently, Deitsch et al. [2] proposed a processing pipeline for solar modules that jointly detects the modules in an EL image, estimates the configuration (i.e., the number of rows and columns of cells), estimates the lens distortion and performs a segmentation into rectified cell images. Their approach consists of a preprocessing step, where a multi-scale vesselness filter [5] is used to extract ridges (the separating lines between cells) and busbars. Then, parabolic curves are fitted to the result to obtain a parametric model of the module. Finally, the distortion is estimated and the module corners are extracted. Since this is, to the best of our knowledge, the only method that automatically detects solar modules and cell crossing points in EL images, we use it as the reference method to assess the performance of our approach.
3 Detection of the module
Our method is intended for EL images of solar modules in different constellations. As shown in Fig. 6, modules might be imaged from different viewpoints. In addition, more than one module might be visible in the image. In this work, we focus on cases where one module is fully visible and others might be partially visible, since this commonly happens when EL images of modules mounted next to each other are captured in the field. However, the method can easily be adapted to robustly handle different situations. The only assumption we make is that the number of cells per row and per column is known.
The detection of the module in the image and the localization of the crossing points between solar cells is performed in two steps. First, the module is roughly located to obtain an initial guess of a rigid transformation between model and image coordinates. We describe this procedure in Sec. 3.1 and Sec. 3.2. Then, the resulting transform is used to predict coarse locations of the crossing points. These locations are then refined as described in Sec. 4.
Fig. 2: Modules are located by integrating the image in x and y direction (blue lines). From the first derivative of these integrals (orange lines), an inner and an outer bounding box can be estimated (red boxes). The module corners can be found by considering the pixel sums within the marked subregions.
3.1 Detection of a single module
We locate the module by considering 1-D image statistics obtained by summing the image in x and y direction. This is related, but not identical, to the concept known as integral images [1,13]. Let I denote an EL image of a solar module. Throughout this work, we assume that images are column major, i.e., I[x, y], where x ∈ [1, w] and y ∈ [1, h], denotes a single pixel in column x and row y. Then, the integration over rows is given by

I_{Σx}[y] = Σ_{x=1}^{w} I[x, y] .    (1)
The sum over columns, I_{Σy}, is defined analogously. Fig. 2 visualizes the statistics obtained by this summation (blue lines). Since the module is clearly separated from the background by its mean intensity, the location of the module in the image can easily be obtained from I_{Σx} and I_{Σy}. However, we are less interested in the absolute values of the mean intensities than in their changes. Therefore, we consider the gradients ∇_σ I_{Σx} and ∇_σ I_{Σy}, where σ denotes a Gaussian smoothing to suppress high frequencies. Since we are only interested in low-frequency changes, we heuristically set σ = 0.01 · max(w, h).
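The two smoothed gradient profiles can be computed in a few lines; the sketch below uses NumPy and SciPy, and the function name and return convention are our own illustration, not the paper's implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def profile_gradients(img, frac=0.01):
    """Sum the image along each axis and take the gradient of the
    Gaussian-smoothed 1-D profiles, with sigma = frac * max(w, h)."""
    h, w = img.shape
    row_profile = img.sum(axis=1).astype(float)  # one value per row
    col_profile = img.sum(axis=0).astype(float)  # one value per column
    sigma = frac * max(w, h)
    g_row = np.gradient(gaussian_filter1d(row_profile, sigma))
    g_col = np.gradient(gaussian_filter1d(col_profile, sigma))
    return g_row, g_col
```

For a bright module on a dark background, the maximum of each gradient profile marks the leading module edge and the minimum marks the trailing edge along that axis.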
As shown in Fig. 2, a left edge of a module is characterized by a maximum in ∇_σ I_{Σx} or ∇_σ I_{Σy}. Similarly, a right edge corresponds to a minimum. In addition, the skewness of the module with respect to the image's y axis corresponds to the widths of the minimum and maximum peaks in ∇_σ I_{Σx}, whereas the skewness with respect to the x axis corresponds to the peak widths in ∇_σ I_{Σy}.
Fig. 3: Module detection with multiple modules.

Formally, let x_1 and x_2 denote the locations of the maximum and minimum of ∇_σ I_{Σx}, and y_1 and y_2 the locations of the maximum and minimum of ∇_σ I_{Σy}, respectively. Further, let x_{1−} and x_{1+} denote the pair of points where the peak corresponding to x_1 vanishes. We define two bounding boxes for the module (see Fig. 2) as follows: The outer bounding box is given by

B_1 = [b_{1,1}, b_{1,2}, b_{1,3}, b_{1,4}] = [(x_{1−}, y_{2+}), (x_{2+}, y_{2+}), (x_{2+}, y_{1−}), (x_{1−}, y_{1−})] ,    (2)

while the inner bounding box is given by

B_2 = [b_{2,1}, b_{2,2}, b_{2,3}, b_{2,4}] = [(x_{1+}, y_{2−}), (x_{2−}, y_{2−}), (x_{2−}, y_{1+}), (x_{1+}, y_{1+})] .    (3)
With these bounding boxes, we obtain a first estimate of the module position. However, it is unclear whether b_{1,1} or b_{2,1} corresponds to the upper left corner of the module; the same holds for b_{1,2} versus b_{2,2}, and so on. This information is lost by the summation over the image. However, we can easily determine the exact pose of the module. To this end, we consider the sums over the subregions between the bounding boxes, cf. Fig. 2. This way, we identify the four corners {b_1, . . . , b_4} of the module and obtain a rough estimate of the module position and pose. To simplify the detection of crossing points, we assume that the longer side of a non-square module always corresponds to the edges (b_1, b_2) and (b_3, b_4).
3.2 Detection of multiple modules
In many on-site applications, multiple modules will be visible in an EL image (see Fig. 3). In these cases, the detection of a single maximum and minimum along each axis will not suffice. To account for this, we need to define when a point in ∇_σ I_{Σk}, k ∈ {x, y}, is considered a maximum or minimum. We compute the standard deviation σ_k of ∇_σ I_{Σk} and consider every point with 2σ_k < ∇_σ I_{Σk} a maximum and every point with −2σ_k > ∇_σ I_{Σk} a minimum. Then, we apply non-maximum/non-minimum suppression to obtain a single detection per extremum. As a result, we obtain a sequence of extrema per axis.
Ideally, every minimum is directly followed by a maximum. However, due to false positives this is not always the case.
In this work, we focus on the case where only one module is fully visible, while the others are partially occluded. Since we know that a module in the image corresponds to a maximum followed by a minimum, we can easily identify false positives: we group all maxima and minima that occur sequentially and keep only the one corresponding to the largest or smallest value in ∇_σ I_{Σk}. Still, we might be left with multiple pairs of a maximum followed by a minimum. We choose the one where the distance between maximum and minimum is maximal.
This simple strategy does not allow the detection of more than one module; however, an extension to multiple modules is straightforward.
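The thresholding, suppression and pair selection described above can be sketched as follows (an illustrative implementation; the function name, input format and bookkeeping are our choices):

```python
import numpy as np

def find_module_interval(grad, k=2.0):
    """Threshold the smoothed 1-D gradient at +/- k*std, reduce each run
    above/below the threshold to a single extremum, merge consecutive
    extrema of the same type (false positives), and return the widest
    max -> min pair, i.e. the fully visible module along this axis."""
    t = k * grad.std()
    events, i, n = [], 0, len(grad)
    while i < n:
        if grad[i] > t:                      # run above threshold: one maximum
            j = i
            while j < n and grad[j] > t:
                j += 1
            events.append(('max', i + int(np.argmax(grad[i:j]))))
            i = j
        elif grad[i] < -t:                   # run below threshold: one minimum
            j = i
            while j < n and grad[j] < -t:
                j += 1
            events.append(('min', i + int(np.argmin(grad[i:j]))))
            i = j
        else:
            i += 1
    # keep the strongest of consecutive same-type events
    kept = []
    for kind, pos in events:
        if kept and kept[-1][0] == kind:
            prev = kept[-1][1]
            kept[-1] = (kind, pos if abs(grad[pos]) > abs(grad[prev]) else prev)
        else:
            kept.append((kind, pos))
    pairs = [(a[1], b[1]) for a, b in zip(kept, kept[1:])
             if a[0] == 'max' and b[0] == 'min']
    return max(pairs, key=lambda p: p[1] - p[0]) if pairs else None
```

The same routine is applied independently to both axes, yielding the extent of the fully visible module in x and y.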
4 Detection of cell crossing points
For the detection of cell crossing points, we assume that the module consists of N columns of cells and M rows, where a typical module configuration is N = 10 and M = 6. However, our approach is not limited to this configuration. Without loss of generality, we assume that N ≥ M. With this information, we can define a simple model of the module. It consists of the corners and cell crossings on a regular grid with a cell size of 1. By definition, the origin of the model coordinate system resides in the upper left corner, with the y axis pointing downwards. Hence, every point in the model is given by

m_{i,j} = (i − 1, j − 1) ,    i ≤ N, j ≤ M .    (4)
From the module detection step, we roughly know the four corners {b_1, . . . , b_4} of the module that correspond to the model points {m_{1,1}, m_{N,1}, m_{N,M}, m_{1,M}}. Here, we assume that the longer side of a non-square module always corresponds to the edges (b_1, b_2) and (b_3, b_4), and that N ≥ M. Note that this does not limit the approach regarding the orientation of the module since, for example, (b_1, b_2) can define a horizontal or vertical line in the image.
We aim to estimate a transform that converts model coordinates m_{i,j} into image coordinates x_{i,j}, which is done using a homography matrix H_0 that encodes the relation between the model and image planes. With the four correspondences between the module corners in the model and image planes, we estimate H_0 using the direct linear transform (DLT) [7]. Using H_0, we obtain an initial guess for the position of each crossing point by

x̃_{i,j} ≈ H_0 m̃_{i,j} ,    (5)

where a model point m = (x, y) in Cartesian coordinates is converted to its homogeneous representation by m̃ = (x, y, 1).
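The DLT estimate of H_0 from point correspondences can be sketched as follows; this is a minimal NumPy illustration (the function names are ours), solving for the null space of the standard DLT system via SVD:

```python
import numpy as np

def dlt_homography(model_pts, img_pts):
    """Estimate H with img ~ H * model (homogeneous coordinates) from
    four or more correspondences via the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(model_pts, img_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)   # null-space vector, reshaped to H

def project(H, pts):
    """Apply H to Cartesian points and de-homogenize the result."""
    p = H @ np.column_stack([pts, np.ones(len(pts))]).T
    return (p[:2] / p[2]).T
```

With exactly the four corner correspondences, the homography is uniquely determined (up to scale), so projecting any model point through the estimate reproduces the true image location.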
Now, we aim to refine this initial guess by a local search. To this end, we extract a rectified image patch of the local neighborhood around each initial guess (Sec. 4.1). Using the resulting image patches, we detect the cell crossing points (Sec. 4.2). Finally, we detect outliers and re-estimate H_0 to minimize the reprojection error between the detected cell crossing points and the corresponding model points (Sec. 4.3).
(a) Module corner. (b) Crossing of two cells on an edge of the module. (c) Crossing of four cells.
Fig. 4: Different types of crossings between cells (cf. Fig. 4b and 4c) as well as the corners of a module (cf. Fig. 4a) lead to different responses in the 1-D statistics. We show the accumulated intensities in blue and their gradients in orange.
4.1 Extraction of rectiﬁed image patches
For the local search, we consider only a small region around the initial guess. By means of the homography H_0, we have some prior knowledge about the position and pose of the module in the image. We take this into account by warping a region that corresponds to the size of approximately one cell. To this end, we create a regular grid of pixel coordinates. The size of the grid depends on the approximate size of a cell in the image, which is obtained by

r̂_{i,j} = ‖x̂_{i,j} − x̂_{i+1,j+1}‖_2 ,    (6)

where the approximation x̂ is given by Eq. (5) and the conversion from homogeneous coordinates x̃ = (x_1, x_2, x_3)^T to inhomogeneous coordinates is x̂ = (x_1/x_3, x_2/x_3)^T. Note that the approximation r̂_{i,j} is only valid in the vicinity of x̂_{i,j}. The warping is then performed by mapping model coordinates into image coordinates using H_0, followed by sampling the image using bilinear interpolation. As a result, a rectified patch image I_{i,j} is obtained that is coarsely centered at the true cell crossing point, see Fig. 4.
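The warp-and-sample step can be sketched as follows; the function name, the patch resolution and the use of SciPy's `map_coordinates` for bilinear sampling are our illustrative choices:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_patch(img, H, i, j, n=64):
    """Warp a one-cell-sized neighbourhood around model point
    m_{i,j} = (i-1, j-1) into an n x n rectified patch."""
    u = np.linspace(-0.5, 0.5, n)                 # half a cell to each side
    mx, my = np.meshgrid((i - 1) + u, (j - 1) + u)
    q = H @ np.stack([mx.ravel(), my.ravel(), np.ones(mx.size)])
    px, py = q[0] / q[2], q[1] / q[2]             # de-homogenized image coords
    # map_coordinates indexes as (row, col) = (y, x); order=1 is bilinear
    return map_coordinates(img, [py, px], order=1, mode='nearest').reshape(n, n)
```

The grid is laid out in model coordinates, so the resulting patch is rectified regardless of the module's pose in the image.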
4.2 Cell crossing points detection
The detection step for cell crossing points is very similar to the module detection step, but operates on local image patches. It is carried out for every model point m_{i,j} and image patch I_{i,j} to find an estimate x_{i,j} of the (unknown) true image location of m_{i,j}. To simplify notation, we drop the index throughout this section. We compute 1-D image statistics from I to obtain I_{Σx} and I_{Σy}, as well as ∇_σ I_{Σx} and ∇_σ I_{Σy}, as described in Sec. 3.1. The smoothing factor σ is set relative to the image size in the same way as for the module detection.
We find that different types of cell crossings have differing intensity profiles, see Fig. 4. Another challenge is that busbars are hard to distinguish from the ridges (the separating regions between cells), see for example Fig. 4c. Therefore, we cannot consider a single minimum/maximum. We proceed similarly to the detection of multiple modules (cf. Sec. 3.2): we apply thresholding and non-maximum/non-minimum suppression on ∇_σ I_{Σx} and ∇_σ I_{Σy} to obtain a sequence of maxima and minima along each axis. The threshold is set to 1.5 · σ_k, where σ_k is the standard deviation of ∇_σ I_{Σk}.
From the location of m in the model grid, we know the type of the target cell crossing. We distinguish between ridges and edges of the module. A cell crossing might consist of both: for example, a crossing between two cells on the left border of the module (see Fig. 4b) consists of an edge on the x axis and a ridge on the y axis.
Detection of ridges. A ridge is characterized by a minimum in ∇_σ I_{Σk} followed by a maximum. As noted earlier, ridges are hard to distinguish from busbars. Fortunately, solar cells are usually built symmetrically. Hence, given that the image patches are roughly rectified and that the initial guess of the crossing point is not close to the border of the image patch, it is likely that we observe an even number of busbars. As a consequence, we simply take all minima that are directly followed by a maximum, order them by position, and pick the middle one. We expect an odd number of such sequences (an even number of busbars plus the ridge we are interested in). In case this heuristic is not applicable, because we found an even number of such sequences, we simply drop this point. The correct position on the respective axis corresponds to the turning point of ∇_σ I_{Σk}.
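This selection heuristic can be sketched as follows; the function name and input format are assumptions of ours, and the turning point between a minimum and the following maximum is approximated here by their midpoint:

```python
def locate_ridge(extrema):
    """`extrema` is the ordered list of ('min'|'max', position) tuples along
    one axis. Every minimum directly followed by a maximum is a ridge or
    busbar candidate; with an even number of busbars the ridge is the middle
    candidate. If the count is even, the heuristic does not apply (None)."""
    cands = [(a[1] + b[1]) // 2                      # midpoint approximates
             for a, b in zip(extrema, extrema[1:])   # the turning point
             if a[0] == 'min' and b[0] == 'max']
    if len(cands) % 2 == 0:                          # heuristic not applicable
        return None
    return cands[len(cands) // 2]
```

With two busbars and one ridge, three candidates are found and the middle one is returned; a point with an even candidate count is dropped, as described above.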
Detection of edges. For edges, we distinguish between left/top and bottom/right edges of the module. Left/top edges are characterized by a maximum, whereas bottom/right edges correspond to a minimum in ∇_σ I_{Σk}. In case of multiple extrema, we use a heuristic to choose the correct one: assuming that our initial guess is not far off, we choose the maximum or minimum that is closest to the center of the patch.
4.3 Outlier detection
We chose to apply a fast method that detects the crossing points by considering 1-D image statistics only. As a result, the detected crossing points contain a significant number of outliers. In addition, every detected crossing point exhibits some measurement error. Therefore, we need to identify the outliers and find a robust estimate of H that minimizes the overall error. Since H has 8 degrees of freedom, only four point correspondences (m_{i,j}, x̂_{i,j}) are required to obtain a unique solution. On the other hand, a typical module with 10 columns and 6 rows of cells has 11 · 7 = 77 crossing points. Hence, even if the detection of crossing points fails in a significant number of cases, the number of point correspondences is typically much larger than 4. Therefore, this problem is well suited to be solved by Random Sample Consensus (RANSAC) [4]. We apply RANSAC to find those point correspondences that give the most consistent model. At every iteration t, we randomly sample four point correspondences and estimate H_t using the DLT. For the determination of the consensus set, we treat a point as an outlier if the detected point x_{i,j} and the estimated point H_t m̃_{i,j} differ by more than 5 % of the cell size.
The error of the model H_t is given by the least-squares formulation

e_t = (1 / (N M)) Σ_{i,j} ‖x̂_{i,j} − x_{i,j}‖²_2 ,    (7)

where x̂ is the current estimate by the model H_t in Cartesian coordinates. Finally, we estimate H using all point correspondences from the consensus set to minimize e_t.
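The RANSAC loop can be sketched as a self-contained illustration; the helper names (`dlt`, `proj`), the iteration count, the seed, and expressing the 5 % threshold directly in pixels are our choices, not prescribed by the method:

```python
import numpy as np

def dlt(model_pts, img_pts):
    """Minimal DLT for a homography (img ~ H * model)."""
    A = []
    for (x, y), (u, v) in zip(model_pts, img_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    return np.linalg.svd(np.asarray(A, float))[2][-1].reshape(3, 3)

def proj(H, pts):
    p = H @ np.column_stack([pts, np.ones(len(pts))]).T
    with np.errstate(divide='ignore', invalid='ignore'):
        return (p[:2] / p[2]).T

def ransac_homography(model_pts, img_pts, thresh_px, iters=500, seed=0):
    """Repeatedly sample 4 correspondences, fit H by DLT, count the points
    whose reprojection error stays below thresh_px, then refit H on the
    best consensus set."""
    rng = np.random.default_rng(seed)
    model_pts, img_pts = np.asarray(model_pts, float), np.asarray(img_pts, float)
    best = None
    for _ in range(iters):
        idx = rng.choice(len(model_pts), 4, replace=False)
        H = dlt(model_pts[idx], img_pts[idx])
        err = np.linalg.norm(proj(H, model_pts) - img_pts, axis=1)
        inliers = err < thresh_px
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return dlt(model_pts[best], img_pts[best]), best
```

Because the consensus set is typically far larger than four, the final refit averages out the per-point measurement error of the detected crossings.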
5 Experimental results
We conduct a series of experiments to show that our approach is robust with respect to the position and pose of the module in the image as well as to various degrees of distortion. In Sec. 5.1, we introduce the datasets used throughout our experiments. In Sec. 5.2, we quantitatively compare the results of our approach with the reference method [2]. In addition, we show that our method robustly handles cases where multiple modules are visible in the image or the module is perspectively distorted. Finally, in Sec. 5.3, we compare the computation time of our approach to the state of the art.
5.1 Dataset
Deitsch et al. [2] propose a joint detection and segmentation approach for solar modules. In their evaluation, they use two datasets. They report their computational performance on a dataset consisting of 44 modules. We refer to this dataset as DataA and use it only for the performance evaluation, to obtain results that are easy to compare. In addition, they use a dataset of 8 modules to evaluate their segmentation. We refer to this data as DataB, see Fig. 6b. The data is publicly available, which allows for a direct comparison of the two methods. However, since we do not apply a pixel-wise segmentation, we could not use the segmentation masks they provide. Instead, we manually added polygonal annotations, where each corner of the polygon corresponds to one of the corners of the module.
To assess the performance in different settings, we add two additional datasets. One of them consists of 10 images with multiple modules visible. We deem this setting important, since in on-site applications it is difficult to measure only a single module. We refer to this as DataC; an example is shown in Fig. 6c. The other consists of 9 images, where the module has been gradually rotated around the y axis with a step size of 10°, starting at 0°. We refer to this as DataD, see Fig. 6a. We manually added polygonal annotations to DataC and DataD, too.
Fig. 5: Detection results on the different datasets, plotted as recall over the IoU threshold (0.9 to 1.0). The AUC is reported in brackets: DataB ours (8.25), DataB [2] (7.5), DataC ours (9.8), DataC [2] (0.0), DataD ours (7.7), DataD [2] (2.9).
Fig. 6: Estimated model coordinates on different modules (panels a to c).
For the EL imaging of DataC and DataD, two different silicon-detector CCD cameras with an optical long-pass filter were used. For the different PV module tilting angles (DataD), a Sensovation "coolSamba HR320" was used, while for the outdoor PV string measurements (DataC) a Greateyes "GE BI 2048 2048" was employed.
5.2 Detection results
We are interested in the number of modules that are detected correctly and in the accuracy of the detection. To assess the detection accuracy, we calculate the intersection over union (IoU) between the ground truth polygon and the detection. Additionally, we report the recall at different IoU thresholds.
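The IoU between two (convex) annotation quadrilaterals can be computed without any imaging library; the sketch below is our illustration, clipping one polygon against the other with the Sutherland-Hodgman algorithm and measuring areas with the shoelace formula (both polygons are assumed counter-clockwise):

```python
import numpy as np

def poly_area(pts):
    """Shoelace area of a polygon given as a sequence of (x, y) points."""
    x, y = np.asarray(pts, float).T
    return 0.5 * abs(x @ np.roll(y, -1) - y @ np.roll(x, -1))

def clip(subject, clipper):
    """Sutherland-Hodgman: intersect `subject` with the convex,
    counter-clockwise polygon `clipper`."""
    def cross(o, a, b):   # > 0 if b lies left of the directed line o -> a
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    out = [tuple(p) for p in subject]
    m = len(clipper)
    for i in range(m):
        a, b = clipper[i], clipper[(i + 1) % m]
        inp, out = out, []
        for j in range(len(inp)):
            p, q = inp[j], inp[(j + 1) % len(inp)]
            p_in, q_in = cross(a, b, p) >= 0, cross(a, b, q) >= 0
            if p_in != q_in:  # edge crosses the clip line: add intersection
                t = cross(a, b, p) / (cross(a, b, p) - cross(a, b, q))
                out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
            if q_in:
                out.append(q)
        if not out:
            break
    return out

def polygon_iou(p1, p2):
    inter_poly = clip(p1, p2)
    inter = poly_area(inter_poly) if len(inter_poly) >= 3 else 0.0
    return inter / (poly_area(p1) + poly_area(p2) - inter)
```

For two unit squares offset by half a cell in each direction, the intersection area is 0.25 and the union 1.75, giving an IoU of 1/7.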
Fig. 5 summarizes the detection results. Our method outperforms the reference method on the test dataset provided by Deitsch et al. [2] (DataB) by a small margin. However, the results of the reference method are slightly more accurate. This can be explained by the fact that they consider lens distortion, while our method only estimates a projective transformation between model and image coordinates. The experiments on DataD assess the robustness of both methods with respect to rotations of the module. We clearly see that our method is considerably robust against rotations, while the reference method requires that the modules are roughly rectified. Finally, we determine the performance of our method when multiple modules are visible in the image (DataC). The reference method does not support this scenario. It turns out that our method gives very good results when an image shows multiple modules.
In Fig. 6, we show the module crossing points estimated using our method. For the rotated modules (DataD), the detection fails at 70° and 80° rotation. However, for 60° and less, we consistently achieve good results (see Fig. 6b). Finally, Fig. 6c reveals that the method also works on varying types of modules and in the presence of severe degradation.
5.3 Computation time
We determine the computational performance of our method on a workstation equipped with an Intel Xeon E5-1630 CPU running at 3.7 GHz. The method is implemented in Python 3 using NumPy and uses only a single thread. We use the same 44 module images that Deitsch et al. [2] used for their performance evaluation, to obtain results that can be compared easily. On average, the 44 images are processed in 15 s, resulting in approximately 340 ms per module. This includes the initialization time of the interpreter and the time for loading the images. The average raw processing time of a single image is about 190 ms.
Deitsch et al. [2] report an overall processing time of 6 min for the 44 images using a multi-threaded implementation; hence, a single image amounts to 13.5 s on average. Our method is therefore about 40 times faster than the reference method. On the other hand, the reference method does not only detect the cell crossing points but also performs a segmentation of the active cell area and accounts for lens distortion, which partially justifies the performance difference.
6 Conclusion
In this work, we have presented a new approach to detect solar modules in EL images. It is based on 1-D image statistics and relates to object detection methods based on integral images. Hence, it can be implemented efficiently, and we are confident that real-time processing of images is feasible. The experiments show that our method is superior in the presence of perspective distortion, while performing similarly well as the state of the art on non-distorted EL images. Additionally, we have shown that it is able to deal with scenarios where multiple modules are present in the image.
In the future, the method could be extended to account for complex scenarios where perspective distortion is strong. In these situations, the stability could be improved by a prior rectification of the module, e.g., using the Hough transform to detect the orientation of the module. Since point correspondences between the module and a virtual model of the latter are established, the proposed method could also be extended to calibrate the parameters of a camera model. This would allow lens distortion to be taken into account and undistorted cell images to be extracted.
Acknowledgements
We gratefully acknowledge funding by the Federal Ministry for Economic Affairs and Energy (BMWi: Grant No. 0324286, iPV4.0) and by the Erlangen Graduate School in Advanced Optical Technologies (SAOT), funded by the German Research Foundation (DFG) in the framework of the German excellence initiative.
References
1. Crow, F.C.: Summed-area tables for texture mapping. In: ACM SIGGRAPH Computer Graphics, pp. 207–212 (1984)
2. Deitsch, S., Buerhop-Lutz, C., Maier, A., Gallwitz, F., Riess, C.: Segmentation of photovoltaic module cells in electroluminescence images. arXiv preprint arXiv:1806.06530 [v2] (2018)
3. EU energy in figures: statistical pocketbook. European Commission (2018)
4. Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24(6), 381–395 (1981)
5. Frangi, A.F., Niessen, W.J., Vincken, K.L., Viergever, M.A.: Multiscale vessel enhancement filtering. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 130–137 (1998)
6. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587 (2014)
7. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press (2003)
8. Hoffmann, M., Ernst, A., Bergen, T., Hettenkofer, S., Garbas, J.U.: A robust chessboard detector for geometric camera calibration. In: International Conference on Computer Vision Theory and Applications, pp. 34–43 (2017)
9. Likforman-Sulem, L., Zahour, A., Taconet, B.: Text line segmentation of historical documents: a survey. International Journal of Document Analysis and Recognition (IJDAR) 9(2-4), 123–138 (2007)
10. Papageorgiou, C.P., Oren, M., Poggio, T.: A general framework for object detection. In: International Conference on Computer Vision, vol. 6, pp. 555–562 (1998)
11. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: Unified, real-time object detection. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788 (2016)
12. Vetter, A., Hepp, J., Brabec, C.J.: Automatized segmentation of photovoltaic modules in IR images with extreme noise. Infrared Physics & Technology 76, 439–443 (2016)
13. Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 511–518 (2001)
14. Zervos, A. (ed.): Renewables 2018. International Energy Agency (2018)