Table 4 - uploaded by Mrugendrasinh L. Rahevar

Source publication
Summarizing data yields "information", but extracting that information from raw data is difficult. Even humans struggle to identify the correct summary, so in the era of computers we need a machine that can easily perform this task for us. Here, summarization of data not only includes copying salient and important aspect...


... In another research work, the authors of [9] considered additional features of the content, such as bold, italic, underlined, and quoted text, as important properties for determining the significance of a sentence. The authors of [10] considered the words that appear in the title when extracting the summary, since title words are among the most relevant to the summary. ...
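The title-word feature attributed to [10] can be sketched as a simple overlap score; the function name and the normalization (fraction of title words covered) are illustrative assumptions, not the cited authors' exact formulation:

```python
def title_score(sentence: str, title: str) -> float:
    """Score a sentence by the fraction of title words it contains.

    A hypothetical sketch: sentences sharing more words with the
    document title are treated as more summary-worthy.
    """
    title_words = set(title.lower().split())
    sentence_words = set(sentence.lower().split())
    if not title_words:
        return 0.0
    return len(title_words & sentence_words) / len(title_words)
```

A sentence covering every title word would score 1.0, while one sharing no title words scores 0.0; real systems typically also remove stop words before computing the overlap.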
The advancement of technology produces vast amounts of data over the internet. With the massive amount of information flooding webpages, it has become more difficult to extract meaningful insights. Social media websites play a major role in publishing news events on similar topics with different contents. Extracting the hidden information from multiple webpages is a tedious job for researchers and industrialists. This paper focuses on gathering information from multiple webpages and producing a summary of their contents under a similar topic. Multi-document extractive summarization is performed using a graph-based text summarization method. The proposed method builds a graph over the multiple documents and ranks nodes by their Katz centrality. The performance of the proposed GeSUM (Graph-based Extractive Summarization) method is evaluated with the ROUGE metrics.
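The pipeline described in the abstract can be sketched as follows. This is a minimal, self-contained illustration under stated assumptions: sentences are nodes, edges connect sentences whose Jaccard word overlap exceeds a threshold, and Katz centrality is computed by fixed-point iteration. The similarity measure, threshold, and parameter values (`alpha`, `beta`) are assumptions for illustration, not the paper's exact GeSUM configuration:

```python
def katz_centrality(adj, alpha=0.1, beta=1.0, iters=100, tol=1e-6):
    """Katz centrality via iteration x <- beta + alpha * A^T x."""
    n = len(adj)
    x = [0.0] * n
    for _ in range(iters):
        x_new = [beta + alpha * sum(adj[j][i] * x[j] for j in range(n))
                 for i in range(n)]
        converged = max(abs(a - b) for a, b in zip(x, x_new)) < tol
        x = x_new
        if converged:
            break
    return x

def summarize(sentences, k=2, threshold=0.1):
    """Pick the k most central sentences from a sentence-similarity graph."""
    word_sets = [set(s.lower().split()) for s in sentences]
    n = len(sentences)
    # Build an adjacency matrix from Jaccard word overlap between sentences.
    adj = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                jac = len(word_sets[i] & word_sets[j]) / len(word_sets[i] | word_sets[j])
                if jac > threshold:
                    adj[i][j] = 1.0
    scores = katz_centrality(adj)
    top = sorted(range(n), key=lambda i: scores[i], reverse=True)[:k]
    # Return the selected sentences in their original document order.
    return [sentences[i] for i in sorted(top)]
```

In a multi-document setting, the sentence pool would be drawn from several webpages on the same topic before graph construction; the chosen sentences form the extractive summary, which would then be scored against reference summaries with ROUGE.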