Table Detection using Deep Learning
Azka Gilani, Shah Rukh Qasim, Imran Malikand Faisal Shafait
National University of Sciences and Technology (NUST), Islamabad, Pakistan
Email: {agilani.mscs15seecs, 14beesqasim, malik.imran, faisal.shafait}@seecs.edu.pk
Abstract— Table detection is a crucial step in many document
analysis applications as tables are used for presenting essential
information to the reader in a structured manner. It is a
hard problem due to varying layouts and encodings of the
tables. Researchers have proposed numerous techniques for table
detection based on layout analysis of documents. Most of these
techniques fail to generalize because they rely on hand engineered
features which are not robust to layout variations. In this paper,
we have presented a deep learning based method for table
detection. In the proposed method, document images are first
pre-processed. These images are then fed to a Region Proposal
Network followed by a fully connected neural network for table
detection. The proposed method works with high precision on
document images with varying layouts that include documents,
research papers, and magazines. We have evaluated it on the
publicly available UNLV dataset, where it beats Tesseract's
state-of-the-art table detection system by a significant margin.
I. INTRODUCTION
Tables are widely used for presenting structural and func-
tional information. They are present in diverse classes of doc-
uments including newspapers, research articles and scientific
documents, etc. Tables enable readers to rapidly compare,
analyse and understand facts present in documents. Table
detection in documents is significant in the field of document
analysis and recognition; hence, it has attracted a number of
researchers to make their contributions in this domain.
Table detection is carried out by layout and content analysis
of documents. Tables have varying layouts and a variety of
encodings. For this reason, writing a general algorithm
for table detection is very hard. Hence, table detection is
considered a hard problem in the scientific community. A large
body of research has been carried out in this field, but most
approaches have their limitations. Existing commercial and open
source techniques for document analysis including Tesseract
lack the capability to completely detect table regions from
document images [1].
In recent years, deep learning techniques have greatly
improved the results on various computer vision problems.
Recently, Hao et al. [2] presented an approach for table
detection in documents using deep learning. Their proposed
method employs a combination of custom algorithms and ma-
chine learning in order to generate region proposals and to
detect whether a table exists in the proposed region or not.
The major limitation of this method is that it is limited to
PDF (Portable Document Format) documents only, which are
non-raster. Another limitation is that it works well on tables
that have ruling lines but fails to detect those without ruling
lines and those spanned across multiple columns.
Hence, in order to improve the performance of table detection
and to make up for the limitations of prior techniques, this
paper proposes a methodology for table detection based purely
on deep learning, without extensive pre- or post-processing.
To explain further, the document image is first transformed
into a new image. This paper then uses the Faster Region-based
Convolutional Neural Network (Faster R-CNN) as the deep
learning module. In contrast to the Hao et
al. [2] technique, Faster R-CNN computes region proposals
itself and then helps determine whether the selected area
is a table or not. A major advantage of our approach is that it
is invariant to changes in table structure and layout, as it
can be fine-tuned to work on any dataset very easily. This
capability is not present in any of the existing approaches.
Hence, we make a significant contribution to the table detection
problem by making it data-driven. Additionally, we have
used the publicly available UNLV dataset [3] for the evaluation
of our proposed methodology, where it gives better results than
Tesseract’s table detection system. We have also compared
our results with the commercial market leading OCR Engine,
Abbyy Cloud OCR SDK [4].
The rest of the paper is organized as follows: Section
II describes research related to table detection. Section III
describes our proposed methodology, which consists of pre-
processing and detection modules. Section IV describes the
performance measures that have been used to evaluate our
system and explains the experimental results. Section V
concludes the paper and provides some directions for future
research.
II. LITERATURE REVIEW
Several researchers have reported their work regarding table
detection in document images. Kieninger et al. [5]–[7] pro-
posed an algorithm for table spotting and structure extraction
from documents called T-Recs. This system takes word bound-
ing boxes as input and clusters them into a segmentation
graph using a bottom-up approach. The key problem with this
technique is that it depends entirely on word bounding boxes
and is unable to perform well in presence of multi-column
layouts.
Another approach was proposed by Wang et al. [8]. It
detects table lines depending on distance between consecutive
words. After that, horizontal consecutive words are grouped
together with vertical adjacent lines in order to propose
table entity candidates. This statistical approach assumes that
the maximum number of columns in a document is two and
designs the algorithm according to three layout templates
(single column, double column, mixed column). A column
classification algorithm is then applied to find the column layout
of the page, and this information is used as prior knowledge
for table spotting. The major limitation of this technique is that it can
only work on those templates for which it has been designed.
Hu et al. [9] presented an approach for table detection while
assuming that input images are single-column. Like previous
methods, this technique cannot be applied to multi-column
layouts. Shafait et al. [10] presented another approach for
table detection in heterogeneous documents. This system is
integrated into the open source Tesseract OCR engine. It works
well on a large variety of documents, but its major limitation is
that it is a traditional technique and not data-driven.
Tupaj et al. [11] proposed an OCR based table detection
technique. The system searches for sequences of table-like
lines based on the keywords that might be present in the
table headers. A line that contains a keyword is regarded as
the starting line, while subsequent lines are analyzed to
match a predefined set of tokens, which are then categorized
as table structure. The limitation of this technique is that it
depends highly on the keywords that might appear in table
headers.
Harit et al. [12] proposed a technique for table detection
based on the identification of unique table start and trailer
patterns. The major limitation of this method is that it will not
work properly whenever the table start patterns are not unique
in document images.
Gatos et al. [13] proposed an approach for table detection by
finding the area of intersection between horizontal and vertical
lines. Tables are then reconstructed by drawing corresponding
horizontal and vertical lines that are connected to intersection
pairs. The limitation of this system is that it works only for the
documents in which the table rows and columns are separated
by ruling lines. Costa e Silva [14] presented a technique
for table detection using Hidden Markov Models (HMMs).
The system extracts text from PDF files using pdftotext Linux
utility. Feature vectors are then computed on the basis of spaces
present between the text. The major limitation of this technique
is that it works only on non-raster PDF files that do not have
any noise.
Kasar et al. [15] presented a method to locate tables by identify-
ing column and row line separators. This system employs a
run-length approach to detect horizontal and vertical
lines in the input image. From each group of horizontal and
vertical lines, a set of 26 low-level features is extracted and
passed to a Support Vector Machine (SVM), which then detects
the table. The major limitation of this approach is that it will
fail on tables without ruling lines.
Jahan et al. [16] presented a method that uses local thresh-
olds for word spacing and line height for localization and
extraction of table regions from document images. The major
limitation of this method is that it detects table regions along
with surrounding text regions. Hence it cannot be used for
localization of table regions only.
Anh et al. [17] presented a hybrid approach for table
detection in document images. This system first classifies the
document into text and non-text regions. On the basis of that,
it uses a hybrid method to find candidate table regions. These
regions are then examined to obtain table regions. This approach
will fail if a table spans multiple columns in the
document. Moreover, it will not work on scanned images, as
it does not use any heuristic filter to cater for noisy images.
Hao et al. [2] presented a deep learning based approach for
table detection. This system computes region proposals from
document images through a predefined set of rules. These
region proposals are then passed to a CNN that detects
whether a certain region proposal belongs to a table region or
not. The major limitation is that it works well for tables with
ruling lines but fails to localize table regions if a table
spans multiple columns. Another limitation is that it
works only on non-raster PDF documents.
In order to make up for the limitations of prior method-
ologies, this paper attempts to adapt Faster R-CNN, a deep
learning technique used for object detection in natural images,
to solve the table detection problem.
III. PROPOSED METHODOLOGY
The proposed method consists of two major modules:
image transformation and table detection. Documents consist
of content regions and blank spaces. Image transformation is
applied in order to separate these regions, while the table
detection module uses Faster R-CNN as the basic element of the
deep network. Faster R-CNN relies on a combined
network composed of a Region Proposal Network (RPN)
and Fast R-CNN. In this section, we describe each module
in detail.
A. Image Transformation
Image Transformation is the initial step of our proposed
methodology. Faster R-CNN [18] was initially proposed for
natural images. Hence, image transformation plays a pre-
liminary role: it converts document images into images as
close to natural images as possible, so that existing Faster
R-CNN models can easily be fine-tuned. The distance transform [19]–
[21] is a derived representation of a digital image. It calculates
the precise distance between text regions and the white spaces
present in the document image, which can give a good estimate
of the presence of a table region. In our proposed methodology,
we have used different types of distance transforms so that
different features can be stored in all three channels. Image
transformation is done using the following procedure:
procedure IMAGETRANSFORMATION(I)
    b ← EuclideanDistanceTransform(I)
    g ← LinearDistanceTransform(I)
    r ← MaxDistanceTransform(I)
    P ← ChannelMerge(b, g, r)
    return P
The transformation algorithm takes a binary image as
input. It then computes the Euclidean distance transform, linear
distance transform and max distance transform [19]–[21] on the
blue, green and red channels of the image, respectively. The result
of the image transformation algorithm on document images is
shown in Figure 1.

Fig. 1: Transformed Images
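As a concrete illustration, the following is a minimal sketch of this transformation, assuming OpenCV, whose distance-transform variants roughly correspond to the three transforms above: DIST_L2 (Euclidean), DIST_L1 (a linear, city-block metric) and DIST_C (a max, chessboard metric). The channel assignment follows the procedure above; normalizing each channel to the 0-255 range is our assumption, added so the merged result resembles a natural image.

```python
import cv2
import numpy as np

def transform_image(binary_img):
    """binary_img: 8-bit image with text pixels 0 and background 255."""
    def dist(metric):
        # Distance of each background pixel to the nearest text pixel.
        d = cv2.distanceTransform(binary_img, metric, 3)
        # Scale to 0-255 so each channel looks like a natural-image channel.
        return cv2.normalize(d, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    b = dist(cv2.DIST_L2)  # Euclidean distance transform -> blue channel
    g = dist(cv2.DIST_L1)  # linear (city-block) distance -> green channel
    r = dist(cv2.DIST_C)   # max (chessboard) distance    -> red channel
    return cv2.merge([b, g, r])  # OpenCV stores channels in BGR order
```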
B. Table detection
For detection, our approach employed Faster R-CNN [18].
Faster R-CNN was originally proposed for object detection
and classification in natural images. It is composed of two
modules. The first module is an RPN that proposes regions.
The region proposals are fed to the second module, the
detector module originally proposed in Fast R-
CNN [22]. The entire system is a unified network for object
detection. Figure 2 shows the architectural diagram for our
system.
1) Region Proposal Network: As described in [18], the Re-
gion Proposal Network (RPN) takes the transformed image
as input and outputs a set of rectangular
object proposals, each with an objectness score. The RPN shares a
common set of convolutional layers with the detector module of
Faster R-CNN. Ren et al. used the Zeiler and Fergus model
(ZF) [23] and the Simonyan and Zisserman model (VGG-16) in
their experiments.
In order to generate region proposals, a small network is
slid over the convolutional feature map output by the last
shared convolutional layer. The RPN takes an n × n spatial window of
the input convolutional feature map as input and maps each
sliding window to a lower-dimensional (256-d for the ZF model)
feature. The complete architecture of the network is shown in
Figure 2. The feature is then passed to two fully con-
nected layers: a regression layer and a classification
layer. For this paper, we have used the default implementation
of Faster R-CNN, which takes n = 3. The fully connected layers
of the network are shared across all spatial locations. This
architecture [18] is naturally implemented as an n × n con-
volutional layer followed by two 1 × 1 convolutional layers for
regression and classification. At each sliding window, Faster
R-CNN simultaneously predicts multiple region proposals for
each location; their number is denoted by k. The classification layer
therefore has 2k output scores, while the regression layer has 4k
outputs that encode the coordinates of the k boxes. The k region
proposals are parametrized relative to k reference boxes,
known as anchors. Faster R-CNN uses k = 9 anchors at
each sliding position.

Fig. 2: Our approach: The document image is first transformed
and then fed into a fine-tuned CNN model. It outputs a feature
map which is fed into the region proposal network for proposing
candidate table regions. These regions are finally given as
input to a fully connected detection network, along with the
convolutional feature map, to classify them into tables or non-
tables.
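To make the 2k/4k output structure concrete, here is a minimal shape sketch of such an RPN head, written in PyTorch style purely as an illustration (the paper's actual implementation is Caffe-based):

```python
import torch
import torch.nn as nn

k = 9            # anchors per sliding position (Faster R-CNN default)
n = 3            # sliding-window size used by the default implementation
channels = 256   # intermediate feature dimension for the ZF model

class RPNHead(nn.Module):
    def __init__(self):
        super().__init__()
        # The n x n conv realizes the sliding window over the shared feature map.
        self.conv = nn.Conv2d(channels, channels, kernel_size=n, padding=n // 2)
        self.cls = nn.Conv2d(channels, 2 * k, kernel_size=1)  # 2k objectness scores
        self.reg = nn.Conv2d(channels, 4 * k, kernel_size=1)  # 4k box coordinates

    def forward(self, feature_map):
        h = torch.relu(self.conv(feature_map))
        return self.cls(h), self.reg(h)

scores, coords = RPNHead()(torch.zeros(1, channels, 40, 30))
print(scores.shape, coords.shape)  # (1, 18, 40, 30) and (1, 36, 40, 30)
```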
An important property is that Faster R-CNN generates region
proposals that are scale and translation invariant. The RPN is
trained end-to-end by Stochastic Gradient Descent (SGD)
and backpropagation. In this paper, all layers are fine-tuned
from the ZF network.
2) Detection network: After the network has been trained for
region proposal generation, the proposals are passed
to the region-based object detection module, which
utilizes them. The detection module is based on
the unified network composed of the RPN and Fast R-CNN
with shared convolutional layers. As a result, it detects tables
in the test set and returns the coordinates of the bounding boxes of
predicted tables.
C. Training
We have used a Caffe-based implementation of Faster R-
CNN [18] and fine-tuned it on our images. A momentum optimizer
with a learning rate of 0.001 and a momentum of 0.9 was
used. The number of training iterations was 10,000. We trained
our system on two classes, i.e., background and table.
The background class has been used as the negative example
(table region missing) while the table class has been used as
the positive example (containing a table region). Due to this,
our proposed system does not search aggressively for
table regions in negative samples.
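As an illustration of the reported optimizer settings, the following is a minimal PyTorch-style sketch; the paper's implementation is Caffe-based, and the model, data and loss here are placeholders, not the actual detection network:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 2, kernel_size=1)  # placeholder for the detection network
optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.001,      # learning rate reported in the paper
                            momentum=0.9)  # momentum reported in the paper

for step in range(10_000):  # 10,000 training iterations as reported
    images = torch.zeros(1, 3, 224, 224)   # placeholder transformed-image batch
    targets = torch.zeros(1, 2, 224, 224)  # placeholder table/background targets
    loss = nn.functional.mse_loss(model(images), targets)  # stand-in loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```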
Fig. 3: Results showing: (a) Partial detection, (b) Missed, (c) Over-Segmented, and (d) False Positives
Fig. 4: Some sample images from the UNLV dataset showing detection results of proposed Table Detection approach. Ground
truth is blue while the detected regions are red.
IV. PERFORMANCE MEASURES
Different performance measures have been mentioned in the
literature for evaluation of table detection algorithms. These
measures include precision and recall [9], [24] that have been
used for evaluating various table detection algorithms [1], [8],
[24]–[26]. We have compared our proposed methodology with
Shafait et al. [10] and a commercial engine, Abbyy Cloud
OCR SDK [4]. The evaluation measures described in [10] have
been employed.
We have used the open source UNLV dataset [3], as used
in [10], to make a fair comparison of both methodologies.
Because of this choice of dataset, we did not compare our
methodology with the systems proposed in the ICDAR 2013
Table Detection competition, as such a comparison would not
be fair. Most of the techniques proposed
in ICDAR 2013 [15], [27] are not data driven and are highly
dependent on table layout and the extraction of custom features
from the images. This makes those techniques not robust to
varying layouts.
Let the ground truth bounding box be represented by G_i
and the bounding box detected by our system by D_j. The
formula for the overlapped region between two bounding
boxes, given by [10], is

$$A(G_i, D_j) = \frac{2 \times |G_i \cap D_j|}{|G_i| + |D_j|}, \qquad A \in [0, 1] \quad (1)$$
A(G_i, D_j) represents the overlap between the ground
truth and detected bounding boxes. Depending on the area of the
intersecting region, its value lies between zero and one.
Note that we are using the same threshold values as described
by Shafait et al. [10] to make a fair comparison with their
technique.
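A minimal sketch of this measure and of the thresholds used by the categories below, assuming axis-aligned boxes in (x1, y1, x2, y2) format (an assumption; any representation of the region areas would do):

```python
def overlap(g, d):
    """A(G_i, D_j) from Eq. (1) for two axis-aligned boxes."""
    ix = max(0.0, min(g[2], d[2]) - max(g[0], d[0]))  # intersection width
    iy = max(0.0, min(g[3], d[3]) - max(g[1], d[1]))  # intersection height
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return 2.0 * ix * iy / (area(g) + area(d))        # value in [0, 1]

def categorize(a):
    """Thresholds from Shafait et al. [10], as used in the categories below."""
    if a >= 0.9:
        return "correct"
    if a > 0.1:
        return "partial"   # possibly over- or under-segmented
    return "no match"      # missed table or false positive
```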
Figure 3 shows some of the errors (partial detection, over
segmentation, and false positive detection) that occurred dur-
ing table detection. Here the blue region represents the ground
truth bounding boxes while red region represents bounding
boxes of detected regions.
A. Correct Detections
These are the number of ground truth tables that have a
major overlap (A ≥ 0.9) with one of the detected tables. The
overlap has been calculated using Eq. (1).
B. Partial Detections
These are the number of ground truth tables that have a
partial overlap (0.1 < A < 0.9) with one of the detected tables.
C. Over-Segmented Tables
These are the number of ground truth tables that have an
overlap (0.1 < A < 0.9) with more than one detected table. It
means that different parts of the ground truth table have been
detected as separate tables.
D. Under-Segmented Tables
These are the number of ground truth tables that have a
partial overlap (0.1 < A < 0.9) with a detected table where that
detected table also overlaps with several other ground truth tables. It
means that more than one table was merged during detection
and reported as a single table.
E. False Positive Tables
This indicates the number of detected tables that do not have
a significant overlap (A ≤ 0.1) with any of the ground truth tables.
Such detections do not correspond to any actual table region.
F. Missed Tables
This indicates the number of ground truth tables that do not
have a significant overlap (A ≤ 0.1) with any of the detected tables. It
means that these tables were missed by the detection algorithm.
G. Precision
The precision measure has been used for evaluating the overall
performance of the table detection method. It finds the percentage
of the detected table area that actually belongs to table regions of the
ground truth document image. The formula for calculating
precision is as follows:

$$\text{Precision} = \frac{\text{Area of ground truth regions in detected regions}}{\text{Area of all detected table regions}} \quad (2)$$
H. Recall
Recall is evaluated by finding the percentage of ground truth
table area that is covered by detected table regions.
The formula for calculating recall is as follows:

$$\text{Recall} = \frac{\text{Area of ground truth regions in detected regions}}{\text{Area of all ground truth table regions}} \quad (3)$$
I. F1 Score
The F1 score considers both precision and recall to compute the
accuracy of the methodology:

$$F_1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \quad (4)$$
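As an illustration, a sketch of Eqs. (2)-(4) computed from binary pixel masks; representing the regions as numpy boolean masks is our assumption:

```python
import numpy as np

def area_scores(gt_mask, det_mask):
    """Area precision, recall and F1 from boolean table masks (Eqs. 2-4)."""
    inter = np.logical_and(gt_mask, det_mask).sum()  # ground truth area inside detections
    precision = inter / det_mask.sum()               # Eq. (2)
    recall = inter / gt_mask.sum()                   # Eq. (3)
    f1 = 2 * precision * recall / (precision + recall)  # Eq. (4)
    return precision, recall, f1
```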
Fig. 5: Visualization of various engines. Ground truth bound-
ing box is represented by blue color while the detected
bounding box by our method is represented by green color.
Magenta color represents bounding box of Abbyy Cloud OCR
SDK while maroon color shows the result of Tesseract.
V. EXPERIMENTS AND RESULTS
In order to evaluate the performance of the proposed
methodology, we chose the publicly available UNLV dataset [3].
This dataset consists of a wide variety of document images,
ranging from business reports to research papers and mag-
azines, with varying and very complex table layouts.
This dataset contains approximately 10,000 images at different
resolutions. For each scanned image, manually keyed ground
truth text is provided, along with manually determined zone
information. Each zone is further categorized depending on
the contents (text, half-tone, table, etc.) of that zone. Amongst
10,000 document images, only 427 contain table regions. We
have used all of these 427 images from UNLV dataset for
evaluating our proposed technique. As the dataset is small, we
used a transfer learning approach [28]. We have also used
data augmentation approaches, including rotation, scaling and
flipping, to overcome over-fitting, as sketched below.
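A minimal sketch of such augmentations, assuming OpenCV; the rotation angle and scale factor here are illustrative placeholders, not values reported in the paper:

```python
import cv2
import numpy as np

def augment(img, angle=5.0, scale=1.1):
    """Return rotated/scaled and horizontally flipped variants of an image."""
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)  # rotation + scaling
    rotated = cv2.warpAffine(img, M, (w, h),
                             borderValue=(255, 255, 255))      # pad with white
    flipped = cv2.flip(img, 1)                                 # horizontal flip
    return [rotated, flipped]
```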
A performance comparison between the open source Shafait et
al. [10] technique (Tesseract), a commercial engine (Abbyy
Cloud OCR SDK) and our method is shown in Table I. While
parsing a table, row and column headers are often used as keys,
so if they are missed, it is impossible to extract any
information; hence, the whole detected table becomes useless.
Thus, the number of correct detections is the most expressive
performance measure. Tesseract and Abbyy fail to detect
tables in the presence of complex layouts that contain wide
white spaces. The results show that our approach performs
better, as correct detections improve significantly from
44.9% to 60.5%.
Figure 5 visualizes the results of all three engines with respect
to the ground truth. Overall results of our proposed methodology
are shown in Figure 4.
VI. CONCLUSION
This paper presented an approach for table detection based
on deep learning. The proposed system uses image transfor-
mation for separating text regions from non-text regions. It
then uses an RPN followed by a fully connected neural network
for the detection of table regions in document images.

TABLE I: Performance comparison of different engines

Performance Measure (Accuracy %)   Tesseract   Abbyy   Without Distance Transform   Our Approach
Correct Detections                 44.9        41.28   51.37                        60.5
Partial Detections                 28.4        32.1    42.2                         30.2
Missed Tables                      25.68       25.68   6.42                         9.17
Over-Segmented Tables              3.66        7.33    29.35                        24.7
Under-Segmented Tables             3.66        7.33    42.20                        30.27
False Positive Detections          22.72       7.21    5                            10.17
Area Precision                     93.2        95.0    84.5                         82.3
Area Recall                        64.29       64.3    89.17                        90.67
F1 Score                           76.09       76.69   86.77                        86.29

Experimental
results show that a deep learning based system is robust to layout
variations in table detection, as it is not dependent on hand
engineered features. The proposed system has been evaluated
on the publicly available UNLV dataset, where it gives better
results than Tesseract's state-of-the-art table detection
system. We plan to extend this work in the direction of table
structure and content extraction in the future.
REFERENCES
[1] J. Hu, R. S. Kashi, D. Lopresti, and G. T. Wilfong, “Evaluating the
performance of table processing algorithms,” International Journal on
Document Analysis and Recognition, vol. 4, no. 3, pp. 140–153, 2002.
[2] L. Hao, L. Gao, X. Yi, and Z. Tang, “A table detection method for
pdf documents based on convolutional neural networks,” in Document
Analysis Systems (DAS), 2016 12th IAPR Workshop on. IEEE, 2016,
pp. 287–292.
[3] A. Shahab, “Table ground truth for the UW3 and UNLV datasets,”
[Online; accessed 7-April-2017]. Available: http://www.iapr-tc11.org/mediawiki/index.php?title=Table_Ground_Truth_for_the_UW3_and_UNLV_datasets
[4] Abbyy. (2017) OCR SDK engine. [Online]. Available: https://www.
abbyy.com/en-eu/ocr-sdk/
[5] T. Kieninger and A. Dengel, “A paper-to-html table converting system,”
in Proceedings of document analysis systems (DAS), vol. 98, 1998.
[6] ——, “Table recognition and labeling using intrinsic layout features,” in
International Conference on Advances in Pattern Recognition. Springer,
1999, pp. 307–316.
[7] ——, “Applying the t-recs table recognition system to the business letter
domain,” in Document Analysis and Recognition, 2001. Proceedings.
Sixth International Conference on. IEEE, 2001, pp. 518–522.
[8] Y. Wang, I. Phillip, and R. Haralick, “Automatic table ground truth
generation and a background-analysis-based table structure extraction
method,” in Document Analysis and Recognition, 2001. Proceedings.
Sixth International Conference on. IEEE, 2001, pp. 528–532.
[9] J. Hu, R. S. Kashi, D. P. Lopresti, and G. Wilfong, “Medium-
independent table detection,” in Electronic Imaging. International
Society for Optics and Photonics, 1999, pp. 291–302.
[10] F. Shafait and R. Smith, “Table detection in heterogeneous documents,”
in Proceedings of the 9th IAPR International Workshop on Document
Analysis Systems. ACM, 2010, pp. 65–72.
[11] S. Tupaj, Z. Shi, C. H. Chang, and H. Alam, “Extracting tabular infor-
mation from text files,” EECS Department, Tufts University, Medford,
USA, 1996.
[12] G. Harit and A. Bansal, “Table detection in document images using
header and trailer patterns,” in Proceedings of the Eighth Indian Con-
ference on Computer Vision, Graphics and Image Processing. ACM,
2012, p. 62.
[13] B. Gatos, D. Danatsas, I. Pratikakis, and S. Perantonis, “Automatic table
detection in document images,” Pattern recognition and data mining, pp.
609–618, 2005.
[14] A. C. e Silva, “Learning rich Hidden Markov Models in document
analysis: Table location,” in Document Analysis and Recognition, 2009.
ICDAR’09. 10th International Conference on. IEEE, 2009, pp. 843–
847.
[15] T. Kasar, P. Barlas, S. Adam, C. Chatelain, and T. Paquet, “Learning
to detect tables in scanned document images using line information,” in
Document Analysis and Recognition (ICDAR), 2013 12th International
Conference on. IEEE, 2013, pp. 1185–1189.
[16] M. A. Jahan and R. G. Ragel, “Locating tables in scanned documents
for reconstructing and republishing,” in Information and Automation for
Sustainability (ICIAfS), 2014 7th International Conference on. IEEE,
2014, pp. 1–6.
[17] T. T. Anh, N. In-Seop, and K. Soo-Hyung, “A hybrid method for table
detection from document image,” in Pattern Recognition (ACPR), 2015
3rd IAPR Asian Conference on. IEEE, 2015, pp. 131–135.
[18] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-
time object detection with region proposal networks,” in Advances in
neural information processing systems, 2015, pp. 91–99.
[19] H. Breu, J. Gil, D. Kirkpatrick, and M. Werman, “Linear time Euclidean
distance transform algorithms,” IEEE Transactions on Pattern Analysis
and Machine Intelligence, vol. 17, no. 5, pp. 529–533, 1995.
[20] R. Fabbri, L. D. F. Costa, J. C. Torelli, and O. M. Bruno, “2D Euclidean
distance transform algorithms: A comparative survey,” ACM Computing
Surveys (CSUR), vol. 40, no. 1, p. 2, 2008.
[21] I. Ragnemalm, “The Euclidean distance transform in arbitrary dimen-
sions,” Pattern Recognition Letters, vol. 14, no. 11, pp. 883–888, 1993.
[22] R. Girshick, “Fast R-CNN,” in Proceedings of the IEEE International
Conference on Computer Vision, 2015, pp. 1440–1448.
[23] M. D. Zeiler and R. Fergus, “Visualizing and understanding convolu-
tional networks,” in European conference on computer vision. Springer,
2014, pp. 818–833.
[24] T. Kieninger and A. Dengel, “An approach towards benchmarking of
table structure recognition results,” in Document Analysis and Recog-
nition, 2005. Proceedings. Eighth International Conference on. IEEE,
2005, pp. 1232–1236.
[25] S. Mandal, S. Chowdhury, A. K. Das, and B. Chanda, “A simple and
effective table detection system from document images,” International
Journal of Document Analysis and Recognition (IJDAR), vol. 8, no. 2-3,
pp. 172–182, 2006.
[26] F. Shafait, D. Keysers, and T. Breuel, “Performance evaluation and
benchmarking of six-page segmentation algorithms,” IEEE Transactions
on Pattern Analysis and Machine Intelligence, vol. 30, no. 6, pp. 941–
954, 2008.
[27] J. Fang, L. Gao, K. Bai, R. Qiu, X. Tao, and Z. Tang, “A table
detection method for multipage pdf documents via visual seperators
and tabular structures,” in Document Analysis and Recognition (ICDAR),
2011 International Conference on. IEEE, 2011, pp. 779–783.
[28] I. H. Witten, E. Frank, M. A. Hall, and C. J. Pal, Data Mining: Practical
machine learning tools and techniques. Morgan Kaufmann, 2016.