- Bash Badawi added an answer: What is the difference between Hadoop and a Data Warehouse?
I have read a couple of articles that try to sell the idea that an organization should choose between implementing Hadoop (a powerful tool for unstructured and complex datasets) and implementing a Data Warehouse (a powerful tool for structured datasets). But my question is: can't they actually coexist, since Big Data is about both structured and unstructured data?
Financially speaking, Hadoop can be a very inexpensive alternative to a data warehouse. You can store your structured data across cheap computing/storage nodes instead of adding large servers, disaster recovery, fail-over, etc.
In Hadoop, you can break up the data and let HDFS, the Hadoop Distributed File System, keep three copies of each chunk of data. You can use Pig, Hive, Ambari, or Flume to run queries just as if you were using a DW. I implemented this at a client site where we were migrating a COBOL/DB2 system and decided to go with Hadoop just to save money. It is also a great transition point until you know where the data should be housed, and the 3x replication and the ease of scaling out versus scaling up are nice features.
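The 3x copies mentioned above come from HDFS's block replication factor, which defaults to 3 and is set cluster-wide in the standard hdfs-site.xml configuration file:

```xml
<!-- hdfs-site.xml: number of copies HDFS keeps of every data block -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```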
- Panei San added an answer: Where can I get web page datasets, such as BBC or NY Times datasets, for web page classification?
I am implementing web page classification. At the moment I am testing on a small dataset of about 50 web pages (sport, business, etc.) downloaded from the BBC website, but I need more web pages to extend the implementation and measure classification accuracy. If you know of any web page datasets, could you please share them or give links?
Thank you, Sir, for your valuable advice.
- Leslie Sikos added an answer: What is your favorite search engine for finding ontologies?
I'm interested in finding ontologies in the domain of sustainable territories.
- Saurabh Gayali added an answer: Which tags are most suitable for extracting the main content from HTML web pages?
I am interested in content extraction from HTML web pages. Currently I use HTML tags to divide a web page into blocks, and I use the tag-to-text ratio, the anchor-text-to-text ratio, and title density to extract the main content. But not all HTML tags are appropriate for content extraction, so I would like to know which tags are most accurate and suitable for web page cleaning. Thank you all.
Try visual ping; you can visually select what you want to extract.
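A minimal sketch of the tag-to-text ratio heuristic mentioned in the question (the sample HTML fragments are made up for illustration; a real pipeline would work on parsed DOM blocks rather than raw strings):

```python
import re

def text_to_tag_ratio(html_block: str) -> float:
    """Ratio of visible text length to tag count in an HTML fragment.
    Higher values suggest main content; lower ones suggest navigation
    or other boilerplate."""
    tags = re.findall(r"<[^>]+>", html_block)
    text = re.sub(r"<[^>]+>", "", html_block)
    text = re.sub(r"\s+", " ", text).strip()
    return len(text) / (len(tags) + 1)

# A link-heavy navigation list versus a text-heavy article block:
nav = '<ul><li><a href="/">Home</a></li><li><a href="/x">News</a></li></ul>'
article = ("<div><p>Researchers compared several heuristics for separating "
           "main content from boilerplate on news pages.</p></div>")

# The article block scores far higher than the navigation list.
print(text_to_tag_ratio(nav) < text_to_tag_ratio(article))
```

Blocks whose ratio falls below a tuned threshold are discarded as boilerplate; combining this with the anchor-text-to-text ratio filters link lists even more reliably.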
- Kaushalya Madhawa added an answer: How can I get historical Twitter data?
I am doing research on Twitter sentiment analysis for financial predictions, and I need a historical dataset from Twitter going back three years. Last year Twitter announced that they would release historical data for scientific purposes.
Does anybody have an idea of how to get this data?
The only way to get access to historical data is to go through Gnip. As Lidia mentioned, check the cost for what you need.
I had a similar problem and, because of the cost, had to rely on current tweets instead of historical ones and download them manually.
- Alireza Mojtahedi added an answer: What are the best ways to content-analyze social media streams?
I'm looking for recent developments in automated analysis of Twitter, Facebook, or any other text-based social media streams. What are researchers able to extract? How are facts gathered, summarized, visualized?
If you can point me to recent research, technologies, and specifically conferences dealing with automation of social media content, I'd much appreciate it. VR
We spent about two years building an intelligent platform for analyzing social media and the rest of the web to detect anomalies and trends in any given region; we call it a "bird's eye view" model. The platform was released to the public a month ago. You can learn more about it at www.tilofy.com, and I would be more than happy to give you free access to play with it; I would love to hear your feedback.
Basically, our algorithms use machine learning and NLP to extract context from the offline world, such as a group of conversations from Twitter or Instagram, and analyze what is happening in a given region in real time. We have historical data for almost 1.5 years plus a real-time feed, and our last phase, a predictive model, is in progress. For example, if there is a marathon in LA today, Tilofy draws a heat map for you in real time so you can see how the event evolves and expands around the location. You can monitor the marathon from start to end in real time and have access to all conversations, age, gender, and race breakdowns, sentiment, top influencers, pictures, videos, text, event start, peak time, and end, plus a tag cloud and taxonomy that capture other people's interests in the same subject. All this information is generated by the algorithms and is fully automated. I think it gives you a very good idea of how big data can shine using Elasticsearch, Ruby, and Java!
- Mustapha Bouakkaz added an answer: Can anyone suggest some concept extraction tools?
Friends, I want to extract concepts from a large collection of text. Are there any tools available for this? As far as I know, topics and concepts are different, so I can't use topic modeling tools to extract concepts. Please help me by suggesting some tools for concept extraction.
TAG: Textual Aggregation by Graph
- Dr. Vaishali S. Parsania added an answer: Is there any online data source for downloading datasets on which various data mining algorithms can be run?
A data source that is free of cost and permits downloads...
- Supratip Ghose added an answer: Is there any open-source Moodle dataset that can be used for research purposes?
I want to get a Moodle learning dataset in CSV format. Is there any open-source Moodle dataset for research purposes, and can anyone suggest tools to extract Moodle web data in CSV format?
Thanks for your answer. Actually, I am talking about integrating data mining tools into course management systems. My student followed the paper "Data mining in course management systems: Moodle case study and tutorial" by Cristóbal Romero, Sebastián Ventura, and Enrique García, published by Elsevier, to learn how to preprocess the data and use the dataset for knowledge discovery. But they failed to extract the data with the tools described in the paper.
- Sandeep R Sirsat added an answer: Are there any algorithms for extracting aspects from text data?
I am currently working on topic-modeling-based, aspect-specific sentiment analysis of product reviews. The topics returned by topic modeling tools need not be aspects, so how can I find the aspects from this information? Do I need to identify aspects manually, or are there tools or algorithms available?
Yes, there are many approaches for extracting aspects from textual data. You can use semantics-based sentiment analysis to identify and extract aspects from textual corpora.
- Viktor Dmitriyev added an answer: I want to download tweets for my research. Can you please help me do this?
I require a large dataset of tweets for Big Data analysis. Please guide me on how I can get those tweets.
Follow this link for a description of Twitter timeline extraction, with a Python script that downloads the tweets: https://github.com/rasbt/datacollect/tree/master/twitter_timeline
In addition, you will need to install the packages "twitter, pandas, pyprind"; this is described in the README file.
If you are using Windows, it is worth checking out the conda package manager (http://conda.pydata.org/docs/) shipped with Anaconda (https://store.continuum.io/cshop/anaconda/). Installing it will really simplify your life when installing Python packages that otherwise require a lot of workarounds with compilers, etc.
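The linked script pages backwards through a timeline using Twitter's max_id cursoring. The pattern can be sketched independently of any client library; `fetch_page` below is a hypothetical stand-in for the real API call, serving ten fake tweets three at a time:

```python
def fetch_page(max_id=None):
    # Hypothetical stand-in for a Twitter API call: returns tweets with
    # ids strictly below max_id, newest first, at most 3 per page.
    all_tweets = [{"id": i, "text": f"tweet {i}"} for i in range(10, 0, -1)]
    if max_id is not None:
        all_tweets = [t for t in all_tweets if t["id"] < max_id]
    return all_tweets[:3]

def collect_timeline():
    """Standard max_id cursoring: request pages of ever-older tweets
    until the API returns an empty page."""
    tweets, max_id = [], None
    while True:
        page = fetch_page(max_id)
        if not page:
            break
        tweets.extend(page)
        max_id = page[-1]["id"]  # next page: tweets strictly older than this
    return tweets

print(len(collect_timeline()))  # all 10 fake tweets, collected over 4 pages
```

With a real client you would replace `fetch_page` with the library's timeline call and pass the max_id parameter through; the loop structure stays the same.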
- Ahmad T Siddiqui added an answer: Looking for an old paper on a circuit-board information retrieval system implementation?
Many years ago I read a paper on a hardware implementation of an information retrieval system. It was implemented as a circuit board, where the query would be set by putting jumpers on one side of the board and the result would be indicated by LEDs or the equivalent on another side of the board. The math behind it was very insightful, and I'd love to find it again, but I've been unable to. The paper was written (probably well) before 1975, perhaps even in the 1950's. I vaguely remember that the primary author's name began with an S but that's as far as I've gotten. (I'm not thinking of Vannevar Bush's Memex.)
Can anyone help?
Check the attached file. Maybe it can help you.
- Madhan Kumar Srinivasan added an answer: How do I get the DBLP and SIGMOD query sets?
I would like to know how to get the DBLP and SIGMOD query sets. If you know the links, could you please share them? And if the query sets cannot be obtained from public links, were the test queries created by the authors themselves when the queries were processed? Please share. Thank you all.
I am not sure about the DBLP dataset, but if you explore it, the following link is useful for getting good datasets for typical analytical problems. I hope this is of use.
- Hassan Saif added an answer: How can I recognize whether a document is positive or negative based on polarity words?
- Polarity words.
- Good: Pol 5.
- Bad: Pol -5.
Determine whether a document is negative or positive. How should I do this? Please tell me; I'm a newbie in NLP (sentiment analysis).
I want to use polarity to do this, not Naive Bayes. Can anyone tell me about an algorithm based on polarity words?
Thanks for your time.
Since you have the prior sentiment orientation of words (e.g., good: +5; bad: -5), you can simply follow this assumption:
The sentiment orientation of a document/sentence is the average of the sentiment orientations of its words and phrases:
Sentiment(Document) = Sum(Sentiment Scores of words) / Number of scored words
The above solution is rather simple and naive, since it does not consider the context of words in the document. However, you can start experimenting with it since you are still new to this field.
Also, note that the above solution follows the lexicon-based approach to sentiment analysis. A recent and important work in this vein is by Thelwall et al.
Hope this helps ;)
Bo Pang and Lillian Lee. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2):1-135, 2008.
Mike Thelwall, Kevan Buckley, and Georgios Paltoglou. Sentiment strength detection for the social web. Journal of the American Society for Information Science and Technology, 63(1):163-173, 2012.
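The averaging rule above fits in a few lines. The toy lexicon below just extends the good/bad example from the question; a real system would plug in a full resource such as SentiWordNet or SentiStrength:

```python
# Toy polarity lexicon extending the question's example (good: +5, bad: -5).
LEXICON = {"good": 5, "bad": -5, "great": 4, "awful": -4}

def document_polarity(text: str) -> float:
    """Average polarity of the lexicon words found in the text.
    A positive result suggests a positive document, a negative one
    a negative document; 0.0 means neutral or no scored words."""
    scores = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(document_polarity("the plot was good but the acting was awful"))  # (5 - 4) / 2 = 0.5
```

As the answer notes, this ignores context entirely: negation ("not good") and intensifiers ("very bad") need extra handling on top of the plain average.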
- Arockiya Selvi added an answer: What is the difference between stopword density and token count?
Hello, could you please share with me how to count the stopwords and tokens in a text? I would like clarification with examples. Thanks.
Stopword density is the proportion of the tokens in a text that are stopwords (very common function words such as "the", "is", "of"). A high density of repeated stopwords can confuse search engines about which keyword a page is really about.
Token count returns the number of tokens (the smallest units of a text, usually words) in your text.
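Both quantities are easy to compute. A minimal sketch (the tiny inline stopword list is illustrative; in practice you would use a full list such as NLTK's):

```python
# Illustrative stopword list; use a full resource like NLTK's in practice.
SMALL_STOPWORD_LIST = {"the", "is", "of", "a", "and", "to", "in"}

def token_count(text: str) -> int:
    """Number of whitespace-delimited tokens in the text."""
    return len(text.split())

def stopword_density(text: str) -> float:
    """Fraction of tokens that are stopwords."""
    tokens = text.lower().split()
    stops = sum(1 for t in tokens if t in SMALL_STOPWORD_LIST)
    return stops / len(tokens) if tokens else 0.0

sentence = "the cat is in the garden"
print(token_count(sentence))       # 6 tokens
print(stopword_density(sentence))  # 4 stopwords out of 6 tokens, about 0.67
```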
- Michal Meina added an answer: Where can I get a stopword list for web page categorization?
Dear all, I want a stopword list for web page classification, for use when training the classifiers. If you know some links and how to get these stopwords, could you please share them? Thanks all.
I would strongly recommend using the stopword corpus from NLTK [ http://www.nltk.org/book/ch02.html ].
It has 2,400 stopwords for 11 languages.
- Efstratios Kontopoulos added an answer: Can anyone suggest what can be done in the area of semantic web scraping?
I am interested in doing some work in the area of semantic web crawling/scraping and using the resulting semantic data for discovery.
- Panei San added an answer: What kind of Java tools are appropriate for information extraction and web page classification?
Can you advise me which Java libraries are best to learn for this purpose?
- Stephane RIBOT added an answer: What's the easiest way to collect data from Twitter and Facebook?
I'm developing a strategy as an MSc project. I will be monitoring, collecting, and analyzing the data of a Facebook page (posts, comments, likes, shares) and a Twitter profile (tweets, retweets, mentions, and public tweets containing only one or two keywords). Any suggestions would be great. Also, what mining techniques do you recommend? I'm thinking sentiment analysis and would like to use one or two more techniques. Thanks.
The easiest is GNIP, http://gnip.com/ . Note that you may have to pay, but it may be worth it for its simplicity (if you don't program, that is!).
- Ian Kennedy added an answer: Can a probabilistic timed automaton be used to model the underlying network in query routing?
The network for routing the query is based on a Markov process. If we want to model the time taken to answer a query, is a probabilistic timed automaton a better model?
A stochastic queue would be indicated. Look up queueing theory; start here: http://en.wikipedia.org/wiki/Queueing_theory . If you wanted to use a number of probabilistic timed automata, you would then have the complexity of having to build in the appropriate statistical properties.
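As a concrete starting point for the queueing-theory suggestion, the textbook M/M/1 model gives the mean response time of a single node as W = 1/(mu - lambda); the query arrival and service rates below are made-up numbers for illustration:

```python
def mm1_mean_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean time a query spends in an M/M/1 queue (waiting + service),
    W = 1 / (service_rate - arrival_rate). Requires a stable queue,
    i.e. arrival_rate < service_rate."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# e.g. 8 queries/s arriving at a node that can serve 10 queries/s:
print(mm1_mean_response_time(8.0, 10.0))  # 0.5 seconds per query on average
```

A network of routing nodes would be modeled as a queueing network rather than a single queue, but this single-node formula already captures how response time blows up as utilization approaches 1.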
- Akila Gopu asked a question: What is the best model to capture query passing/tossing in CQA?
CQA is Community Question Answering, e.g. stackoverflow.com or Yahoo! Answers. The current model used to capture this query passing is the Markov model. Is there any alternative to the Markov model?
- Andrew Meyerhoff added an answer: Is it possible to extract the code for a specific segment of a webpage?
Suppose I need to extract the code for the voting portion of a webpage alone. Is there any tool for doing this?
Use Python with BeautifulSoup: just make a URL request with Python and parse the response with BeautifulSoup for the needed element. It makes life much easier than you'd believe.
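Along the lines of the BeautifulSoup suggestion, here is a dependency-free sketch using only Python's standard-library html.parser (BeautifulSoup's `find(id=...)` does the same job in one line); the element id "voting" and the sample HTML are made up for illustration:

```python
from html.parser import HTMLParser

class SegmentExtractor(HTMLParser):
    """Collect the text inside the element whose id matches target_id."""
    def __init__(self, target_id):
        super().__init__()
        self.target_id = target_id
        self.depth = 0          # >0 while we are inside the target element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1     # nested element inside the target
        elif dict(attrs).get("id") == self.target_id:
            self.depth = 1      # entered the target element

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.chunks.append(data.strip())

html = '<div id="header">Site</div><div id="voting"><span>Votes: 42</span></div>'
parser = SegmentExtractor("voting")
parser.feed(html)
print(" ".join(c for c in parser.chunks if c))  # Votes: 42
```

For real pages with malformed markup, BeautifulSoup (or lxml) is more forgiving than the strict standard-library parser, which is why the answer recommends it.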
- Alexandre Beauvois added an answer: Looking for an efficient algorithm for web crawling.
I need to extract specific data from related websites. For example, I need to extract data from a specific website providing positive feedback about a type of vehicle. Kindly suggest some good code or an algorithm for this.
Cheerio for the JavaScript (Node.js) programming language: see http://vimeo.com/31950192
- Fotis Kokkoras added an answer: Data mining and web content mining.
How can data mining algorithms be implemented for web content mining?
Web content extraction is a task applied to web pages, not to databases. You scrape unstructured data from the web, put it into structured storage (databases), and then apply data mining algorithms to it. That's the order.
- Debajyoti Mukhopadhyay asked a question: Semantic search engine with user-friendly output and ranking needed.
We have witnessed the power of a regular search engine like Google, and there are semantic search engines like Swoogle as well. However, we are trying to build a semantic search engine with a more user-friendly display and a relevant ranking algorithm. Can anybody suggest ideas?
- Andras Kornai added an answer: Chinese text mining software.
Does anybody know of any usable text mining software that does topic modeling and also covers Chinese? This seems harder to find than I had thought. I found FudanNLP (http://code.google.com/p/fudannlp/) and ICTCLAS (http://www.ictclas.org/ictclas_download.aspx), neither of which I have been able to make work so far. Pingar (http://apidemo.pingar.com/AnalyzeDocument.aspx) doesn't seem to have topic extraction. MALLET does seem to have a Chinese module and does have topic modeling, but I have yet to figure that one out too. Does anybody have any other suggestions?
Have you considered commercial software vendors like Basis Tech?
- Massimo Ruffolo added an answer: Making effective web content extraction technologies.
One of my principal research and development interests is web content extraction, and I founded a start-up in this field: www.altiliagroup.com. If anyone is interested in collaborating with us on this topic, or in working as principal software architect for Altilia, please let me know.
Hi all, thanks for your interest in my post.
We are searching for companies interested in becoming resellers of our content extraction and management technologies, and for technical people with deep expertise in web content extraction technologies who are interested in working with us as software architects.
About Web Content Extraction
Content extraction is the process of identifying the Main Content and/or removing the additional items, such as advertisements, navigation bars, design elements or legal disclaimers. The rapid growth of text based information on the Web and various applications making use of this data motivates the need for efficient and effective methods to identify and separate the Main Content (MC) from the additional content items.