Web Data Extraction, Applications and Techniques: A Survey

Knowledge-Based Systems, vol. 70, 2014. DOI: 10.1016/j.knosys.2014.07.007
Source: arXiv

ABSTRACT Web Data Extraction is an important problem that has been studied by means of
different scientific tools and in a broad range of application domains. Many
approaches to extracting data from the Web have been designed to solve specific
problems and operate in ad-hoc application domains. Other approaches, instead,
heavily reuse techniques and algorithms developed in the field of Information
Extraction.
This survey aims at providing a structured and comprehensive overview of the
research efforts made in the field of Web Data Extraction. The common thread of
our work is a classification of existing approaches in terms of the
applications for which they have been employed. This differentiates our work
from other surveys, which classify existing approaches on the basis of the
algorithms, techniques, and tools they use.
We classified Web Data Extraction approaches into categories and, for each
category, we illustrated the basic techniques along with their main variants.
We grouped existing applications into two main areas: applications at the
Enterprise level and at the Social Web level. This classification rests on two
observations: on the one hand, Web Data Extraction techniques emerged as a key
tool for performing data analysis in Business and Competitive Intelligence
systems, as well as for business process re-engineering. On the other hand,
Web Data Extraction techniques allow for gathering the large amounts of
structured data continuously generated and disseminated by Web 2.0, Social
Media, and Online Social Network users, which offers unprecedented
opportunities for analyzing human behavior on a large scale.
We also discussed the potential for cross-fertilization, i.e., the possibility
of re-using Web Data Extraction techniques originally designed to work in a
given domain in other domains.

    • "Academic researches on deep web have been expanded for last decade after the term " Deep Web " introduced at 2000 [11]. The number of web databases reached by agents is approximately 25 million pages at 2007 [5], [6]. "
    ABSTRACT: In this paper, a new method is proposed for finding and extracting search result records (SRRs). The method first detects content-dense nodes in the HTML DOM and then extracts SRRs by suggesting a list of candidate DOM nodes for a given search result Web page instance. Afterwards, an evaluation algorithm is applied to the candidate list to find the best solution without any human interaction or manual processing. Experimental results show that the proposed methods are successful at finding and extracting the SRRs.
    The International Conference on Data Mining, Internet Computing, and Big Data (BigData2014), Asia Pacific University of Technology & Innovation (APU), Kuala Lumpur, Malaysia; 11/2014
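
The SRR-extraction method in the abstract above is described only at a high level. The Python sketch below illustrates one plausible reading of the idea: score DOM nodes by how much text they hold and how repetitive their children are, and treat the children of the densest node as candidate SRRs. The scoring formula, the repetition heuristic, and all names are our own assumptions, not the paper's algorithm; only the standard library is used.

```python
# Illustrative sketch (not the cited paper's algorithm): find a "content
# dense" container node in an HTML DOM and propose its children as SRRs.
from html.parser import HTMLParser

class Node:
    def __init__(self, tag, parent=None):
        self.tag, self.parent, self.children, self.text_len = tag, parent, [], 0

class DomBuilder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.root = Node("root")
        self.cur = self.root

    def handle_starttag(self, tag, attrs):
        node = Node(tag, self.cur)
        self.cur.children.append(node)
        if tag not in ("br", "img", "hr", "input", "meta", "link"):
            self.cur = node  # descend into non-void elements

    def handle_endtag(self, tag):
        # Walk up to the nearest matching open tag (tolerates sloppy HTML).
        n = self.cur
        while n is not self.root and n.tag != tag:
            n = n.parent
        if n is not self.root:
            self.cur = n.parent

    def handle_data(self, data):
        self.cur.text_len += len(data.strip())

def total_text(node):
    return node.text_len + sum(total_text(c) for c in node.children)

def density(node):
    # Heuristic (an assumption): lots of text spread over several children
    # with repeated tags suggests a list of search result records.
    if len(node.children) < 3:
        return 0.0
    tags = [c.tag for c in node.children]
    repetition = max(tags.count(t) for t in set(tags)) / len(tags)
    return total_text(node) * repetition

def candidate_srrs(html):
    builder = DomBuilder()
    builder.feed(html)
    best, stack = None, [builder.root]
    while stack:
        n = stack.pop()
        if best is None or density(n) > density(best):
            best = n
        stack.extend(n.children)
    return best.children  # candidates, to be ranked by an evaluation step
```

The final evaluation step the abstract mentions would then rank these candidates automatically; here it is left as a stub, since the paper does not spell out its criteria.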
    • "Thus, our tool becomes more effective to extract the desired information on users of the social network from the public pages available on the web, instead of consulting the social network itself. There exist several methods used to extract information from pages of social networks [18]. However, in order to retrieve them, one can use the API of a given available search machine. "
    ABSTRACT: An undergraduate program must prepare its students for the major needs of the labor market. One of the main ways to identify which demands must be met is to manage information about the program's alumni: gathering data from alumni and finding out their main areas of employment in the labor market or their main fields of research in academia. Usually, this data is obtained through forms available on the Web or sent by mail or email; however, these methods, in addition to being laborious, yield poor response rates from alumni. Thus, this work proposes a novel method to help the teaching staff of undergraduate programs gather information on a desired population of alumni, semi-automatically, on the Web. Overall, by using a few alumni pages as an initial set of sample pages, the proposed method was able to gather information on roughly twice as many alumni as conventional methods.
    Latin American Web Congress (LaWeb), Ouro Preto, Brazil; 10/2014
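
The two excerpts above describe gathering alumni pages from the public Web through a search engine's API, seeded with a few sample pages. The sketch below is a minimal, hypothetical rendering of that workflow: the endpoint, its parameters, and the response format are placeholders we invented, since the paper does not name a specific search API.

```python
# Hypothetical sketch only: gathering candidate alumni pages through a
# generic search-engine API, in the spirit of the method described above.
import json
import urllib.parse
import urllib.request

SEARCH_ENDPOINT = "https://search.example.com/api"  # placeholder, not a real API

def search(query, limit=10):
    """Return result URLs for a query from the (placeholder) search API."""
    url = SEARCH_ENDPOINT + "?" + urllib.parse.urlencode(
        {"q": query, "limit": limit})
    with urllib.request.urlopen(url) as resp:
        # Assumed response shape: {"results": [{"url": ...}, ...]}
        return [hit["url"] for hit in json.load(resp)["results"]]

def gather_alumni_pages(alumni_names, seed_terms):
    """Build queries from alumni names plus terms mined from the sample
    pages (e.g., the program or university name) and collect candidates."""
    pages = {}
    for name in alumni_names:
        query = f'"{name}" ' + " ".join(seed_terms)
        pages[name] = search(query)
    return pages

# Example: seed terms would be extracted from the few sample alumni pages.
# gather_alumni_pages(["Jane Doe"], ["Computer Science", "alumni"])
```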
    • "For a wider and deeper survey of web information extraction systems please see [6] and [7] "
    ABSTRACT: Information extraction from printed documents is still a crucial problem in many interorganizational workflows. Solutions for other application domains, for example, the web, do not fit this peculiar scenario well, as printed documents do not carry any explicit structural or syntactical description. Moreover, printed documents usually lack any explicit indication about their source. We present a system, which we call PATO, for extracting predefined items from printed documents in a dynamic multisource scenario. PATO selects the source-specific wrapper required by each document, determines whether no suitable wrapper exists, and generates one when necessary. PATO assumes that the need for new source-specific wrappers is a part of normal system operation: new wrappers are generated online based on a few point-and-click operations performed by a human operator on a GUI. The role of operators is an integral part of the design and PATO may be configured to accommodate a broad range of automation levels. We show that PATO exhibits very good performance on a challenging data set composed of more than 600 printed documents drawn from three different application domains: invoices, datasheets of electronic components, and patents. We also perform an extensive analysis of the crucial tradeoff between accuracy and automation level.
    IEEE Transactions on Knowledge and Data Engineering 01/2014; 26(1):208-220. DOI: 10.1109/TKDE.2012.254
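
PATO's select-or-generate workflow, as the abstract describes it, can be pictured as a small registry that picks the source-specific wrapper for a document, detects when none fits, and falls back to operator-assisted wrapper creation. The sketch below is our own assumption about such a structure, not PATO's implementation; all names are invented.

```python
# Minimal sketch (an assumption, not PATO's code) of a select-or-generate
# wrapper registry for a dynamic multisource extraction scenario.
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

Extractor = Callable[[str], Dict[str, str]]  # document text -> field values

@dataclass
class Wrapper:
    source_id: str
    matches: Callable[[str], bool]  # does this wrapper fit the document?
    extract: Extractor

@dataclass
class WrapperRegistry:
    wrappers: Dict[str, Wrapper] = field(default_factory=dict)

    def select(self, document: str) -> Optional[Wrapper]:
        for w in self.wrappers.values():
            if w.matches(document):
                return w
        return None  # no suitable wrapper: trigger online generation

    def process(self, document: str,
                ask_operator: Callable[[str], Wrapper]) -> Dict[str, str]:
        wrapper = self.select(document)
        if wrapper is None:
            # In PATO this step corresponds to a few point-and-click
            # operations by a human on a GUI; here it is just a callback.
            wrapper = ask_operator(document)
            self.wrappers[wrapper.source_id] = wrapper
        return wrapper.extract(document)
```

Treating operator intervention as an ordinary callback mirrors the abstract's point that new wrappers are "a part of normal system operation" rather than an error path, and lets the automation level be tuned by how often the callback is invoked.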