Web Content Extraction

  • Supratip Ghose added an answer:
    Is there any open-source Moodle dataset that can be used for research purposes?

    I want to get a Moodle learning dataset in CSV format. Is there any open-source Moodle dataset for research purposes, and can anyone suggest tools to extract Moodle web data in CSV format?

    Thanks for your answer. Actually, I am talking about integrating data mining tools into course management systems. My student followed the paper "Data mining in course management systems: Moodle case study and tutorial" by Cristóbal Romero, Sebastián Ventura, and Enrique García, published by Elsevier, to learn the techniques for preprocessing the data and using the dataset for knowledge discovery. But they failed to extract the data with the tools described in the paper.

  • Ainhoa Serna added an answer:
    What is your favorite search engine to find ontologies?

    I'm interested in finding ontologies in the domain of sustainable territories

    Ainhoa Serna · Mondragon Unibertsitatea

    Thank you all! 


  • Anoop V S added an answer:
    Are there any algorithms for extracting aspects from text data?

    I am currently working on topic-modelling-based, aspect-specific sentiment analysis of product reviews. The topics returned by topic modelling tools need not be aspects. So how can I find the aspects from this information? Do I need to find aspects manually, or are there tools or algorithms available?

    Anoop V S · Indian Institute of Information Technology and Management - Kerala

    Thank you all for your valuable suggestions. 

  • Viktor Dmitriyev added an answer:
    I want to download tweets for my research. Can you please help me?

    I require a large dataset of tweets for Big Data analysis. Please guide me on how I can get those tweets.

    Viktor Dmitriyev · Carl von Ossietzky Universität Oldenburg

    Follow the link to find a description of Twitter timeline extraction, with a Python script that extracts the tweets: https://github.com/rasbt/datacollect/tree/master/twitter_timeline .

    In addition, you will need to install the following packages: twitter, pandas, pyprind. This is described in the README file there.

    In case you are using Windows, it is worth checking out the conda package manager (http://conda.pydata.org/docs/), which ships with Anaconda (https://store.continuum.io/cshop/anaconda/). Installing it will greatly simplify installing custom Python packages that otherwise require a lot of workarounds with compilers.
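    For intuition, the paging logic behind such timeline scripts can be sketched as follows. Here `fetch_page` is a hypothetical stand-in for the real API call in the `twitter` package, so treat this as an assumption-laden sketch rather than the linked script itself:

```python
# Sketch of Twitter's max_id-based timeline paging. `fetch_page` stands
# in for an actual API call and must return a list of dicts with an
# "id" key; it is an assumed interface, not the real twitter-package API.
def collect_timeline(fetch_page, pages=5):
    """Collect tweets page by page, walking backwards through history."""
    tweets, max_id = [], None
    for _ in range(pages):
        page = fetch_page(max_id=max_id)
        if not page:
            break  # no older tweets left
        tweets.extend(page)
        # the next request asks only for tweets strictly older than any seen
        max_id = min(t["id"] for t in page) - 1
    return tweets
```

    The collected list of records can then be loaded into a pandas DataFrame for analysis.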

  • Ahmad T Siddiqui added an answer:
    Looking for an old paper on a circuit-board information retrieval system implementation?

    Many years ago I read a paper on a hardware implementation of an information retrieval system. It was implemented as a circuit board, where the query would be set by putting jumpers on one side of the board and the result would be indicated by LEDs or the equivalent on another side of the board. The math behind it was very insightful, and I'd love to find it again, but I've been unable to. The paper was written (probably well) before 1975, perhaps even in the 1950's. I vaguely remember that the primary author's name began with an S but that's as far as I've gotten. (I'm not thinking of Vannevar Bush's Memex.)

    Can anyone help?

    Ahmad T Siddiqui · Taif University

    Dear Sir,

    Check the attached file. Maybe it can help you.


  • Madhan Kumar Srinivasan added an answer:
    How do I get the DBLP and SIGMOD query set?

    Hello Everyone,

    I want to know how to get the DBLP and SIGMOD query sets. If you know the links, could you please share them? And if the query sets cannot be obtained from links, did you create the test queries yourself when processing the queries? Please share. Thank you all.

    Madhan Kumar Srinivasan · Global Science and Technology Forum, Singapore

    I am not sure about the DBLP dataset. But, if you can explore it, the following link is useful for getting good datasets for typical analytical problems. I hope this may be of use.


  • Hassan Saif added an answer:
    How can I recognize whether a document is positive or negative based on polarity words?

    I have:

    - Polarity words, e.g.:
      - Good: Pol 5.
      - Bad: Pol -5.

    My assignment:

    Determine whether a document is negative or positive. How should I do that? Please tell me; I'm a newbie in NLP (sentiment analysis).
    I want to use polarity to do that, not Naive Bayes. So can anyone tell me about an algorithm based on polarity words?

    Thanks for your time.

    Hassan Saif · The Open University (UK)

    Since you have the prior sentiment orientation of words (e.g., good: +5; bad: -5), you can simply follow this assumption:

    The sentiment orientation of a document/sentence is an average sum of the sentiment orientations of its words and phrases [1]

    As such: 

    Sentiment(Document) = Average(sentiment scores of the document's words)

    The above solution is rather simple and naive since it does not consider the context of words in the document. However, you can start experimenting with this solution since you are still new to this field. 
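    A minimal sketch of that averaging idea in Python, with an assumed toy lexicon (the word scores below are illustrative, not a real resource):

```python
# Lexicon-based scoring: a document's sentiment is the average polarity
# of its words that appear in the lexicon. The lexicon here is a toy
# example with assumed scores.
LEXICON = {"good": 5, "great": 4, "bad": -5, "awful": -4}

def document_sentiment(text):
    scores = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    if not scores:
        return 0.0  # no polarity words found: treat as neutral
    return sum(scores) / len(scores)
```

    A positive result suggests a positive document, a negative result a negative one.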

    Also, note that the above solution follows the lexicon-based approach to sentiment analysis. A recent and important work in this vein is by Thelwall et al. [2]

    Hope this helps ;)

    [1] Bo Pang and Lillian Lee. Opinion mining and sentiment analysis. Foundations and trends in information retrieval, 2(1-2):1–135, 2008.

    [2] Mike Thelwall, Kevan Buckley, and Georgios Paltoglou. Sentiment strength detection for the social web. Journal of the American Society for Information Science and Technology, 63(1):163–173, 2012.

  • Arockiya Selvi added an answer:
    What is the difference between stopwords density and token count?

    Hello, could you please share information about how to count the stopwords and tokens in a text? I would like clarification with examples. Thanks.

    Arockiya Selvi · Arul Anandar College

    Stopword density measures the proportion of stopwords (very common words that are repeated many times in a text) among all its words. A high stopword density can make it harder for search engines to determine which keywords the text is actually about.

    Token count simply returns the number of tokens (the smallest units of the text) it contains.
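    The two measures can be illustrated with a short sketch (the stopword list below is an assumed tiny subset, not a standard one):

```python
STOPWORDS = {"the", "is", "a", "of", "and"}  # illustrative subset only

def token_count(text):
    # tokens here are simply whitespace-separated words
    return len(text.split())

def stopword_density(text):
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    # fraction of tokens that are stopwords
    return sum(t in STOPWORDS for t in tokens) / len(tokens)
```

    For "the cat is on a mat", the token count is 6 and the stopword density is 3/6 = 0.5.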

  • Michal Meina added an answer:
    Where can I get a stopwords list for webpage categorizations?

    Dear all, I want to get some stopwords for web page classification, to use when training learning classifiers. If you know where to get such stopwords, could you please share the links? Thanks all.

    Michal Meina · University of Warsaw

    I would strongly recommend using the stopword corpus from NLTK [ http://www.nltk.org/book/ch02.html ].

    It has 2,400 stopwords for 11 languages
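    Stopword removal before training then looks roughly like this; the inline list is a tiny stand-in, and the commented NLTK calls show where the full corpus would come from:

```python
# Replace EN_STOPWORDS with NLTK's full list in practice:
#   import nltk; nltk.download('stopwords')
#   from nltk.corpus import stopwords
#   EN_STOPWORDS = set(stopwords.words('english'))
EN_STOPWORDS = {"the", "a", "an", "is", "of", "to", "and", "in"}  # stand-in

def remove_stopwords(tokens):
    # drop stopwords before feature extraction for the classifier
    return [t for t in tokens if t.lower() not in EN_STOPWORDS]
```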

  • Efstratios Kontopoulos added an answer:
    Can anyone suggest what can be done in the area of semantic web scraping?

    I am interested in doing some work in the area of semantic web crawling/scraping and using that semantic data for discovery.

  • Panei San added an answer:
    What kind of Java is appropriate for information extraction and web page classification?

    Hello everyone!

    Can you advise me which Java technologies are most worth learning for this, in your opinion?

    Panei San · University of Computer Studies, Yangon


  • Panei San added an answer:
    What tags are more suitable for main content extraction from HTML webpages?

    Hello, everyone

    I am interested in content extraction from HTML web pages. Currently I use HTML tags to divide a web page into blocks, and use the tag-to-text ratio, anchor-text-to-text ratio, and title density to extract the main content. But not all HTML tags are appropriate for content extraction. So I want to know which tags are more accurate and more suitable for web page cleaning. Thank you all...

    Panei San · University of Computer Studies, Yangon

    Thank you all!

    To explain further: I approach content extraction with a line-block concept, which means taking everything from a start tag to its matching end tag, for example <div>...</div>, <p>...</p>, and so on. I am testing and writing code for this, and it works correctly for well-formed HTML files. But if the HTML is malformed, for example a start tag with no matching end tag, the code gives wrong answers and errors. So how can I handle such cases in code, and which tags should be used for content extraction?
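    One way to tolerate malformed HTML (missing end tags) is to let a lenient parser handle tag pairing and just track nesting depth yourself. This sketch uses Python's standard-library html.parser and is an illustration of the idea, not the asker's code:

```python
from html.parser import HTMLParser

class BlockTextExtractor(HTMLParser):
    """Collect text inside block tags, tolerating unclosed/stray tags."""
    BLOCK_TAGS = {"div", "p"}  # tags treated as line-blocks

    def __init__(self):
        super().__init__()
        self.depth = 0      # number of currently open block tags
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.BLOCK_TAGS:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.BLOCK_TAGS and self.depth > 0:
            self.depth -= 1  # a stray end tag cannot drive depth negative

    def handle_data(self, data):
        if self.depth > 0 and data.strip():
            self.chunks.append(data.strip())

def extract_blocks(html):
    parser = BlockTextExtractor()
    parser.feed(html)
    return parser.chunks
```

    Because the parser never raises on an unmatched tag, a missing end tag degrades the output gracefully instead of crashing the extractor.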

  • Mustapha Bouakkaz added an answer:
    What is the difference between Hadoop and Data Warehouse?
    I have read a couple of articles trying to sell the idea that an organization should choose between implementing Hadoop (a powerful tool for unstructured and complex datasets) and implementing a Data Warehouse (a powerful tool for structured datasets). But my question is: can't they actually go together, since Big Data is about both structured and unstructured data?
    Mustapha Bouakkaz · Université Amar Telidji Laghouat

    The second one includes the first one.

  • Stephane RIBOT added an answer:
    What's the easiest way to collect data from Twitter and Facebook?
    I'm developing a strategy as an MSc project. I will be monitoring, collecting, and analyzing the data of a Facebook page (posts, comments, likes, shares) and a Twitter profile (tweets, retweets, mentions, and public tweets containing only one or two keywords). Any suggestions would be great. Also, what mining techniques do you recommend? I'm thinking of sentiment analysis and would like to use one or two more techniques. What do you recommend?

    Stephane RIBOT · Université Jean Moulin Lyon 3
    The easiest is GNIP (http://gnip.com/).
    Note that you may have to pay, but it may be worth it due to its simplicity (if you don't program, that is!).
  • Ian Kennedy added an answer:
    Can a Probabilistic timed automaton be used to model the underlying network in query routing?
    The network for routing the query is based on a Markov process. If we want to model the time taken to answer a query, is a probabilistic timed automaton a better model?
    Ian Kennedy · Independent Researcher
    A stochastic queue would be indicated. Look up queueing theory; start here: http://en.wikipedia.org/wiki/Queueing_theory. If you wanted to use a number of probabilistic timed automata, you would then have the complexity of having to build in the appropriate statistical properties.
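    As a concrete illustration of the queueing-theory route (a standard textbook result, not specific to the question's network): for an M/M/1 queue with arrival rate lam and service rate mu, the mean time to answer a query has a closed form:

```python
# Closed-form results for the M/M/1 queue (Poisson arrivals, one
# exponential server). Valid only while lam < mu.
def mm1_mean_response_time(lam, mu):
    if lam >= mu:
        raise ValueError("queue is unstable unless lam < mu")
    return 1.0 / (mu - lam)  # W = 1 / (mu - lam)

def mm1_mean_number_in_system(lam, mu):
    rho = lam / mu               # server utilisation
    return rho / (1.0 - rho)     # L = rho / (1 - rho)
```

    Little's law (L = lam * W) ties the two quantities together, which gives a quick sanity check on any such model.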
  • Akila Gopu asked a question:
    What is the best model to capture Query passing/tossing in CQA?
    CQA is Community Query Answering, example stackoverflow.com or Yahoo answers. The current model that is used to capture this query passing is the Markov model. Is there any other alternative to the Markov model?
  • Andrew Meyerhoff added an answer:
    Is it possible to extract the code for a specific segment of a webpage?
    Suppose I need to extract the code for the voting portion of a webpage alone. Is there any tool for doing this?
    Andrew Meyerhoff · Indiana University-Purdue University Indianapolis
    Use Python with BeautifulSoup: just make a URL request with Python and parse the response with BeautifulSoup for the needed element. It makes life much easier than you would believe.
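    A hedged sketch of that approach: the HTML snippet and the "voting" class name are made up for illustration, and in a real run you would first fetch the page (e.g. with urllib.request) instead of using an inline string:

```python
from bs4 import BeautifulSoup

# Hypothetical page fragment; the class names are assumptions.
HTML = """
<html><body>
  <div class="header">site banner</div>
  <div class="voting"><span>42 votes</span></div>
</body></html>
"""

def extract_segment(html, css_class):
    # return the raw markup of the first matching <div>, or None
    soup = BeautifulSoup(html, "html.parser")
    element = soup.find("div", class_=css_class)
    return str(element) if element else None
```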
  • Alexandre Beauvois added an answer:
    Looking for an efficient algorithm available for web crawling
    I need to extract specific data from related websites. For example, I need to extract data from a specific website providing positive feedback about a type of vehicle. Kindly suggest some good code or an algorithm for this.
    Alexandre Beauvois · Entropic Synergies
    Cheerio for the JavaScript (Node.js) programming language: see http://vimeo.com/31950192
  • Fotis Kokkoras added an answer:
    Data mining and web content mining
    How can data mining algorithms be implemented for web content mining?
    Fotis Kokkoras · Technological Educational Institute of Thessaly
    Web content extraction is a task applied to web pages, not to databases. You scrape unstructured data from the web, put it into structured storage (databases), and then apply data mining algorithms to it. That's the order.
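    That order can be sketched end to end; the scraped records below are invented placeholders, since the scraping step itself depends on the target site:

```python
import sqlite3

EXAMPLE_RECORDS = [  # stand-in for data scraped from web pages
    ("page1", "vehicle X", "positive"),
    ("page2", "vehicle X", "negative"),
    ("page3", "vehicle Y", "positive"),
]

def store_and_mine(records):
    # step 2: put the scraped findings into structured storage
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE reviews (page TEXT, item TEXT, label TEXT)")
    con.executemany("INSERT INTO reviews VALUES (?, ?, ?)", records)
    # step 3: a trivial "mining" query once the data is structured
    rows = con.execute(
        "SELECT item, COUNT(*) FROM reviews "
        "WHERE label = 'positive' GROUP BY item"
    ).fetchall()
    con.close()
    return dict(rows)
```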
  • Debajyoti Mukhopadhyay asked a question:
    Semantic search engine with user-friendly output and ranking needed.
    We have witnessed the power of a regular search engine like Google. There is a semantic search engine like Swoogle as well. However, we are trying to build a semantic search engine with more user friendly display capability and relevant ranking algorithm. Can anybody suggest ideas?
  • Andras Kornai added an answer:
    Chinese textmining software
    Does anybody know of any usable text-mining software that does topic modeling and also covers Chinese as a language? This seems harder to find than I had thought. I found things like FudanNLP (http://code.google.com/p/fudannlp/) and Ictclas (http://www.ictclas.org/ictclas_download.aspx), neither of which I have been able to make work so far. Pingar (http://apidemo.pingar.com/AnalyzeDocument.aspx) doesn't seem to have topic extraction. Mallet does seem to have a Chinese module and does have topic modeling, but I have yet to figure that one out too. Does anybody have any other suggestions?
    Andras Kornai · Hungarian Academy of Sciences
    Have you considered commercial software vendors like Basis Tech?
  • Massimo Ruffolo added an answer:
    Making effective Web Content Extraction technologies
    One of my principal research and development interests is Web Content Extraction. I founded a start-up in this field: www.altiliagroup.com. If anyone is interested in collaborating with us on this topic or in working as principal software architect for Altilia, please let me know.
    Massimo Ruffolo · National Research Council
    Hi all,

    thanks for your interest in my post.

    We are searching for companies interested in becoming resellers of our content extraction and management technologies, and for technical people with deep expertise in web content extraction technologies who are interested in working with us as software architects.

About Web Content Extraction

Content extraction is the process of identifying the Main Content and/or removing additional items such as advertisements, navigation bars, design elements, or legal disclaimers. The rapid growth of text-based information on the Web, and the various applications making use of this data, motivate the need for efficient and effective methods to identify and separate the Main Content (MC) from the additional content items.
