- Ian Kennedy added an answer: Looking for an old paper on a circuit-board information retrieval system implementation?
Many years ago I read a paper on a hardware implementation of an information retrieval system. It was implemented as a circuit board, where the query would be set by putting jumpers on one side of the board and the result would be indicated by LEDs or the equivalent on another side of the board. The math behind it was very insightful, and I'd love to find it again, but I've been unable to. The paper was written (probably well) before 1975, perhaps even in the 1950's. I vaguely remember that the primary author's name began with an S but that's as far as I've gotten. (I'm not thinking of Vannevar Bush's Memex.)
Can anyone help?
In 1969 we used large library cards, with the bibliographic information and abstract on one side and our annotations on the other. We cut open the prepunched holes around the edge for each of our keywords. If I wanted something about Education and Architecture, I would put a knitting needle through the Education hole (1) and keep the cards that dropped onto the table. I would then use only these cards and put a knitting needle through the Architecture hole (2), using the dropping cards as my reading list.
A circuit board to do the above would have to have the means to accept a document number (e.g. 344) and to accept and store its keywords (numbers 1 and 2). To retrieve, it would have to have the means to accept a few keyword numbers and display all document numbers (e.g. 20, 344 and 401) that matched the Boolean keyword criteria, e.g. Education (1) AND Architecture (2). More complex queries containing OR or NOT functions could be implemented too. Diodes or resistor-transistor logic could have been used to implement the Boolean logic. The only 'maths' would have been Boolean algebra.
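The jumper-and-LED scheme described above can be mimicked in software with bit vectors: one bit per document number in each keyword's mask, and Boolean AND/OR/NOT on the masks. A minimal sketch (the document numbers 20, 344 and 401 are the examples from the text; the extra documents 97 and 55 are invented to show filtering):

```python
# One bit per document in each keyword's mask; bit i set means
# document number i carries that keyword.
def make_mask(doc_ids):
    mask = 0
    for d in doc_ids:
        mask |= 1 << d
    return mask

education = make_mask([20, 344, 401, 97])     # keyword 1
architecture = make_mask([20, 344, 401, 55])  # keyword 2

# Education(1) AND Architecture(2) -- the knitting-needle query
hits = education & architecture
matching_docs = [d for d in range(hits.bit_length()) if hits >> d & 1]
print(matching_docs)  # -> [20, 344, 401]
```

OR and NOT queries are just `|` and `&~` on the same masks, which is exactly what the diode or RTL logic on the board would have computed.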
- Madhan Kumar Srinivasan added an answer: How do I get the DBLP and SIGMOD query set?
I want to know how to get the DBLP and SIGMOD query sets. If you know the links, could you please share them? And if the query sets cannot be obtained from links, did you create the test queries yourself during query processing? Please share. Thank you all.
I am not sure about the DBLP data set. But if you can explore it, the following link is useful for getting good data sets for typical analytical problems. I hope this may be of use.
- Hassan Saif added an answer: How can I recognize whether a document is positive or negative based on polarity words?
- Polarity words.
- Good: Pol 5.
- Bad: Pol -5.
How do I determine whether a document is negative or positive? Please tell me how to do this; I'm a newbie in NLP (sentiment analysis).
I want to use polarity words to do this, not Naive Bayes. Can anyone tell me about an algorithm based on polarity words?
Thanks for your time.
Since you have the prior sentiment orientation of words (e.g., good, +5; bad, -5), you can simply follow this assumption:
The sentiment orientation of a document/sentence is the average of the sentiment orientations of its words and phrases:
Sentiment(Document) = Average(sentiment scores of its words)
A positive result indicates a positive document; a negative result, a negative one.
The above solution is rather simple and naive since it does not consider the context of words in the document. However, you can start experimenting with this solution since you are still new to this field.
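A minimal sketch of that averaging approach (the two-word lexicon is just the example scores from the question; a real polarity lexicon would be much larger):

```python
# Tiny illustrative lexicon -- in practice load a full polarity lexicon.
LEXICON = {"good": 5, "bad": -5}

def document_sentiment(text):
    """Average the polarity scores of the words found in the lexicon.
    Positive result -> positive document; negative -> negative."""
    scores = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    if not scores:
        return 0.0  # no opinion words found: treat as neutral
    return sum(scores) / len(scores)

print(document_sentiment("the movie was good good but the ending was bad"))
```

As noted above, this ignores context (negation, intensifiers), so treat it only as a starting point.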
Also, note that the above solution follows the lexicon-based approach to sentiment analysis. A recent and important work in this vein is by Thelwall et al. 
Hope this helps ;)
Bo Pang and Lillian Lee. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2):1–135, 2008.
Mike Thelwall, Kevan Buckley, and Georgios Paltoglou. Sentiment strength detection for the social web. Journal of the American Society for Information Science and Technology, 63(1):163–173, 2012.
- Arockiya Selvi added an answer: What is the difference between stopwords density and token count?
Hello, could you please share information about how to count the stopwords and tokens in a text? I would like clarification with examples. Thanks.
Stopword density measures how often stopwords (very common, frequently repeated words) occur in a text relative to its total length. A high stopword density can confuse search engines about which keyword the page should be displayed for.
Token count returns the number of tokens (the smallest units of a text) in your text.
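The two counts can be illustrated with a short sketch (the stopword list below is a small illustrative subset; real lists such as NLTK's are much larger, and the whitespace tokenizer is deliberately naive):

```python
# Small illustrative stopword list -- real lists (e.g. NLTK's) are larger.
STOPWORDS = {"the", "is", "a", "an", "of", "and", "to", "in"}

def stopword_stats(text):
    tokens = text.lower().split()  # naive whitespace tokenization
    n_stop = sum(1 for t in tokens if t in STOPWORDS)
    density = n_stop / len(tokens) if tokens else 0.0
    return len(tokens), n_stop, density

n_tokens, n_stop, density = stopword_stats("the cat sat on a mat in the garden")
print(n_tokens, n_stop, density)  # 9 tokens, 4 of them stopwords
```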
- Michal Meina added an answer: Where can I get a stopwords list for webpage categorization?
Dear all, I want a stopword list for web page classification, for use when training classifiers. If you know a link for obtaining such stopwords, could you please share it? Thanks, all.
I would strongly recommend using the stopword corpus from NLTK [ http://www.nltk.org/book/ch02.html ].
It has 2,400 stopwords for 11 languages.
- Efstratios Kontopoulos added an answer: Can anyone suggest to me what can be done in the area of semantic web scraping?
I am interested in doing some work in the area of semantic web crawling/scraping and using that semantic data for discovery.
- Panei San added an answer: What kind of Java tooling is appropriate for information extraction and web page classification?
Can you advise me which Java tools are most worth learning, in your opinion?
- Panei San added an answer: What tags are more suitable for main content extraction from HTML webpages?
I am interested in content extraction from HTML web pages. Currently I use HTML tags to divide the page into blocks, and I use the tag-to-text ratio, anchor-text-to-text ratio, and title density to extract the main content. But not all HTML tags are appropriate for content extraction. So I want to know which tags are more accurate and more suitable for cleaning web pages. Thank you all...
Thank you all!
I should explain that my content extraction is based on a line-block concept: a block runs from a start tag to its matching end tag, for example <div>...</div>, <p>...</p>, and so on. I am testing and writing the code for it, and it works correctly for well-formed HTML files. But if the HTML is malformed, for example a start tag with no matching end tag, the code gives wrong answers and errors. So how do I handle such cases in the code, and which tags should be used for content extraction?
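One common way to handle malformed HTML is to use an error-tolerant parser rather than matching tags yourself. As an illustrative sketch (not the asker's code), Python's standard-library `html.parser` delivers start tags, end tags and text as events and simply skips a missing end tag, so unclosed blocks can be flushed at end of input:

```python
from html.parser import HTMLParser

class BlockTextExtractor(HTMLParser):
    """Collect the text of each block-level element. html.parser tolerates
    unclosed tags, which addresses the malformed-HTML problem above."""
    BLOCK_TAGS = {"div", "p", "td", "li", "article", "section"}

    def __init__(self):
        super().__init__()
        self.blocks = []   # finished (tag, text) pairs
        self._stack = []   # open block tags with accumulated text

    def handle_starttag(self, tag, attrs):
        if tag in self.BLOCK_TAGS:
            self._stack.append([tag, []])

    def handle_endtag(self, tag):
        # Close the nearest matching open block; ignore stray end tags.
        for i in range(len(self._stack) - 1, -1, -1):
            if self._stack[i][0] == tag:
                t, parts = self._stack.pop(i)
                self.blocks.append((t, " ".join(parts).strip()))
                break

    def handle_data(self, data):
        if self._stack and data.strip():
            self._stack[-1][1].append(data.strip())

    def close(self):
        super().close()
        # Flush blocks whose end tag never arrived (malformed HTML).
        while self._stack:
            t, parts = self._stack.pop()
            self.blocks.append((t, " ".join(parts).strip()))

# Note the missing </p> after "Main <p>" and the unclosed final <p>.
parser = BlockTextExtractor()
parser.feed("<div>Main <p>article text</div><p>unclosed paragraph")
parser.close()
print(parser.blocks)
```

The same idea is available with stricter cleanup in lenient parsers such as BeautifulSoup or jsoup, which repair the tag tree before you compute tag-to-text ratios.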
- Mustapha Bouakkaz added an answer: What is the difference between Hadoop and a Data Warehouse?
I have read a couple of articles trying to sell the idea that an organization should basically choose between implementing Hadoop (a powerful tool when it comes to unstructured and complex datasets) and implementing a Data Warehouse (a powerful tool when it comes to structured datasets). But my question is: can't they actually go together, since Big Data is about both structured and unstructured data?
The second one includes the first one.
- Stephane RIBOT added an answer: What's the easiest way to collect data from Twitter and Facebook?
I'm developing a strategy as an MSc project. I will be monitoring, collecting, and analyzing the data of a Facebook page (posts, comments, likes, shares) and a Twitter profile (tweets, retweets, mentions, and public tweets containing only one or two keywords). Any suggestions would be great. Also, what mining techniques do you recommend? I'm thinking of sentiment analysis and would like to use one or two more techniques. What techniques do you recommend?
Thanks.
The easiest is GNIP: http://gnip.com/
Note that you may have to pay, but it may be worth it for its simplicity (if you don't program, that is!).
- Ian Kennedy added an answer: Can a probabilistic timed automaton be used to model the underlying network in query routing?
The network for routing the query is based on a Markov process. If we want to model the time taken to answer a query, is a probabilistic timed automaton a better model?
A stochastic queue would be indicated. Look up queueing theory. Start here: http://en.wikipedia.org/wiki/Queueing_theory. If you wanted to use a number of probabilistic timed automata, you would then have the complexity of having to build in the appropriate statistical properties.
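To make the queueing-theory suggestion concrete (a sketch, with invented rates): if query arrivals are Poisson with rate λ and service times exponential with rate μ, the routing node is an M/M/1 queue, and the mean time to answer a query is W = 1/(μ − λ):

```python
def mm1_metrics(arrival_rate: float, service_rate: float):
    """Standard M/M/1 steady-state formulas.
    Requires arrival_rate < service_rate for a stable queue."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate / service_rate                    # utilization
    mean_response = 1.0 / (service_rate - arrival_rate)  # W: time in system
    mean_queue_len = rho * rho / (1.0 - rho)             # Lq: jobs waiting
    return rho, mean_response, mean_queue_len

# Example: 8 queries/s arriving at a node that serves 10 queries/s.
rho, W, Lq = mm1_metrics(8.0, 10.0)
print(rho, W, Lq)  # 0.8 utilization, 0.5 s mean response, ~3.2 waiting
```

This gives the time-to-answer distribution's mean directly, without building the statistical machinery into a timed-automaton model.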
- Akila Gopu asked a question: What is the best model to capture query passing/tossing in CQA?
CQA is Community Question Answering, e.g. stackoverflow.com or Yahoo! Answers. The current model used to capture this query passing is the Markov model. Is there any alternative to the Markov model?
- Andrew Meyerhoff added an answer: Is it possible to extract the code for a specific segment of a webpage?
Suppose I need to extract the code for just the voting portion of a webpage. Is there any tool for doing this?
Use Python with BeautifulSoup: just make a URL request with Python and parse the response with BeautifulSoup for the needed element. It makes life much easier than you'd believe.
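A sketch of that BeautifulSoup approach (the class names and markup below are hypothetical; inspect the real page to find the right selector, and fetch the live page with urllib or requests instead of the inline HTML used here to keep the sketch self-contained):

```python
# Requires beautifulsoup4 (pip install beautifulsoup4).
from bs4 import BeautifulSoup

# Stand-in for a fetched page; the "voting" class is an assumed name.
html = """
<html><body>
  <div id="header">Site header</div>
  <div class="voting"><span class="score">42</span><button>Upvote</button></div>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
voting = soup.find("div", class_="voting")  # just the voting segment
print(voting)                               # the raw markup of that segment
print(voting.find("span", class_="score").get_text())
```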
- Alexandre Beauvois added an answer: Looking for an efficient algorithm available for web crawling
I need to extract specific data from related websites. For example, I need to extract data from a specific website providing positive feedback about a type of vehicle. Kindly suggest some good code or an algorithm for this.
Cheerio for the JavaScript (Node.js) programming language: see http://vimeo.com/31950192
- Fotis Kokkoras added an answer: Data mining and web content mining
How can data mining algorithms be implemented for web content mining?
Web content extraction is a task applied to web pages, not to databases. You scrape unstructured data from the web, put it in structured storage (databases), and then apply data mining algorithms to it. That's the order.
- Debajyoti Mukhopadhyay asked a question: Semantic search engine with user-friendly output and ranking needed.
We have witnessed the power of a regular search engine like Google. There is a semantic search engine, Swoogle, as well. However, we are trying to build a semantic search engine with a more user-friendly display capability and a relevant ranking algorithm. Can anybody suggest ideas?
- Andras Kornai added an answer: Chinese text-mining software
Does anybody know of any usable text-mining software programs that do topic modeling and also cover Chinese? This seems harder to find than I had thought. I found things like FudanNLP (http://code.google.com/p/fudannlp/) and ICTCLAS (http://www.ictclas.org/ictclas_download.aspx), neither of which I have been able to make work so far. Pingar (http://apidemo.pingar.com/AnalyzeDocument.aspx) doesn't seem to have topic extraction. Mallet does seem to have a Chinese module and does have topic modeling, but I have yet to figure that one out too. Does anybody have any other suggestions?
Have you considered commercial software vendors like Basis Tech?
- Massimo Ruffolo added an answer: Making effective web content extraction technologies
One of my principal research and development interests is web content extraction. I founded a start-up in this field, www.altiliagroup.com. If someone is interested in collaborating with us on this topic, or in working as principal software architect for Altilia, please let me know.
Hi all,
thanks for your interest in my post.
We are searching for companies interested in becoming resellers of our content extraction and management technologies, and for technical people with deep expertise in web content extraction technologies who are interested in working with us as software architects.
About Web Content Extraction
Content extraction is the process of identifying the Main Content and/or removing additional items such as advertisements, navigation bars, design elements, and legal disclaimers. The rapid growth of text-based information on the Web, and the various applications making use of this data, motivate the need for efficient and effective methods to identify and separate the Main Content (MC) from the additional content items.