Science topic

Computational Intelligence - Science topic

Computational methodologies inspired by naturally occurring phenomena.
Questions related to Computational Intelligence
Question
QSARpro webinar covers:
1) Understanding of the innovative Group QSAR (GQSAR) method and its application
2) How to unleash the strength of your molecule dataset
3) How to customise and build smart QSAR models
4) How to enhance the scope of your predicted results by addressing Inverse QSAR problem
The webinar lasts for about 30 minutes followed by an interactive Q & A session.
Come prepared with your questions: you can put them to the well-known QSAR expert Dr. Subhash Ajmani, Scientist & Senior Management staff at NovaLead Pharma.
Question
Has anybody used RevoDeployR? I would like to make web applications that run my R code. I came across RevoDeployR but I am not sure if it is a good tool.
Question
19 answers
Are artificial immune system (AIS) algorithms considered as population-based? How about the newly developed chemical reaction optimization (CRO) algorithm? What are the similarities and differences between them? How do they differ from the robust and well-known Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Simulated Annealing (SA), Tabu Search (TS) and many more in terms of their behavior and operator characteristics?
Relevant answer
Answer
I am unsure whether AIS falls into the "co-evolutionary class" of optimization problems, but the following paper indicates that free lunches do indeed exist in that class:
Wolpert, D.H., and Macready, W.G. (2005) "Coevolutionary free lunches," IEEE Transactions on Evolutionary Computation, 9(6): 721–735 (available through CiteSeer).
Question
Need to use as a training dataset.
Question
10 answers
By categorization I mean assigning text to a particular class.
Relevant answer
Answer
Boosting of decision stumps: open-source, free, fast and hard to beat.
Question
1 answer
Where 'R^-1' is the inverse of the autocorrelation matrix, 'K' is a very large number and 'I' is the identity matrix. Why does 'K' have to be very large?
Relevant answer
Answer
Well, sometimes K is set large. But the recursion for R^-1 is obtained from the algebraic Riccati equation, so K must be close to the real value in order to reduce errors caused by transients. The best way is to collect some data, investigate R, and estimate K before running the RLS algorithm.
Do not forget that RLS is applicable to stationary and quasi-stationary processes. In the case of strongly pronounced dynamics, use the Kalman algorithm.
If you do not know the noise statistics and initial error statistics, use the unbiased FIR algorithm. This algorithm also has an RLS or Kalman form, but is more robust.
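To make the role of K concrete, here is a minimal numpy sketch of the standard RLS recursion (my illustration, not from the answer above); the forgetting factor and default K are arbitrary placeholders. P(0) = K*I stands in for R^-1 before any data has been seen, and a large K expresses low confidence in the zero initial weights, so the transient dies out quickly:

import numpy as np

def rls(X, d, lam=0.99, K=1e4):
    # X: (n_samples, n_taps) regressors; d: desired signal
    n_taps = X.shape[1]
    w = np.zeros(n_taps)                     # initial weights
    P = K * np.eye(n_taps)                   # P(0) = K*I plays the role of R^-1 with no data yet;
                                             # large K = low confidence in the zero initial weights
    for x, dk in zip(X, d):
        g = P @ x / (lam + x @ P @ x)        # gain vector
        e = dk - w @ x                       # a priori error
        w = w + g * e                        # weight update
        P = (P - np.outer(g, x @ P)) / lam   # update of the R^-1 estimate
    return w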
Pca
Question
Hi, any ideas about the implementation of a Business Continuity Plan ("Plan de Continuité d'Activité", PCA, in French)? Thanks.
Question
3 answers
Emotional influence on the integration of sensory modalities in cognitive architectures.
Relevant answer
Answer
This question is in the context of integrating visual and auditory information under the influence of emotions in cognitive architectures. Thanks anyway.
Why is it that a neural network can't be overtrained?
Question
4 answers
I heard there is something wrong with overtraining!
Relevant answer
Answer
I didn't get your question completely, but I think you are talking about overfitting. Look that up and you may get your answer.
Question
1 answer
AIS algorithms are a subset of swarm intelligence, also known as "collective" intelligence, which is focused on decentralization, self-organization, and learning ability. In contrast, SA is an algorithm with a more focused search, since single-point iterations are adopted. However, I can't see any way to hybridize these two algorithms, given the differences in their natures. Any suggestions?
Relevant answer
Answer
You can still hybridize.
In AIS the immune-system building step is similar to an evolutionary algorithm, where you can apply the strategy of SA, so that SA runs in parallel as one subunit of the AIS.
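A rough sketch of that idea, assuming a simple real-valued minimization problem (all names and parameters here are my own illustration, not an established AIS variant): a short SA run plays the role of the hypermutation/refinement step applied to each clone.

import math, random

def sa_refine(f, x, steps=50, t0=1.0, sigma=0.1):
    # short simulated-annealing run used as a local refinement operator
    cur, best = x, x
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9            # linear cooling
        cand = cur + random.gauss(0.0, sigma)
        if f(cand) < f(cur) or random.random() < math.exp((f(cur) - f(cand)) / t):
            cur = cand
        if f(cur) < f(best):
            best = cur
    return best

def ais_with_sa(f, pop_size=20, n_clones=5, generations=100):
    pop = [random.uniform(-5.0, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=f)[:pop_size // 2]   # select the best antibodies
        clones = [sa_refine(f, x) for x in elite for _ in range(n_clones)]
        pop = sorted(elite + clones, key=f)[:pop_size]
    return pop[0]

# e.g. ais_with_sa(lambda x: (x - 2.0) ** 2) converges near 2.0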
Question
1 answer
Regarding my Neural Network training.
Relevant answer
Answer
It depends on the improvement in the performance output obtained by slightly varying the parameters of the BPN network, not the algorithm itself.
Question
1 answer
I want to know an effective way to analyse population-based algorithms such as GA and PSO for time and space complexity.
Relevant answer
Answer
You can apply the traditional way of finding the complexity of a program: analyse the code after you have developed it for the problem.
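One pragmatic complement to pencil-and-paper analysis (my suggestion, not the answerer's): wrap the fitness function in a counter, since for most population-based metaheuristics the dominant cost term is the number of fitness evaluations. For a simple GA one expects roughly O(G * N * (c_f + L)) time for G generations, population size N, fitness cost c_f and chromosome length L; counting calls lets you check the G * N factor empirically.

class CountedFitness:
    """Wraps a fitness function and counts how often it is evaluated."""
    def __init__(self, f):
        self.f, self.calls = f, 0
    def __call__(self, x):
        self.calls += 1
        return self.f(x)

# usage (run_ga is a placeholder for your own GA driver):
# fitness = CountedFitness(my_objective)
# run_ga(fitness, pop_size=50, generations=200)
# print(fitness.calls)   # should be close to pop_size * generations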
Are you aware of this nice conference track: Knowledge Discovery and Business Intelligence (KDBI - EPIA2013) http://lnkd.in/HHeYgc ?
Question
1 answer
Papers in Springer LNAI (ISI indexed). Best papers in Journal Expert Systems (ISI JCR). Deadline: 15/3/2013
Relevant answer
Answer
CFP: Knowledge Discovery and Business Intelligence (KDBI - EPIA2013)

Dear Colleague,

We would be very grateful if you could disseminate this call among your research peers and colleagues. Please excuse us if you receive this email more than once.

Knowledge Discovery and Business Intelligence (KDBI - EPIA2013), thematic track of the 16th Portuguese Conference on Artificial Intelligence (EPIA 2013), Angra do Heroísmo, Açores, Portugal, 2013. Web-page: http://www.epia2013.uac.pt/
>> Deadline for paper submission: 15th March 2013 <<
>> Best papers in Special Issue of the journal Expert Systems (indexed at ISI) <<

Nowadays, business organizations are increasingly moving towards decision-making processes that are based on information. In parallel, the amount of data representing the activities of organizations that is stored in databases is also growing exponentially. Thus, the pressure to extract as much useful information as possible from these data is very strong. Knowledge Discovery (KD) is a branch of the Artificial Intelligence (AI) field that aims to extract useful and understandable high-level knowledge from complex and/or large volumes of data. On the other hand, Business Intelligence (BI) is an umbrella term that represents computer architectures, tools, technologies and methods to enhance managerial decision making in public and corporate enterprises, from the operational to the strategic level.

KD and BI are faced with new challenges. For example, due to the expansion of the Internet, huge amounts of data are available through the Web. Moreover, objects of analysis exist in time and space, often under dynamic and unstable environments, evolving incrementally over time. Another KD challenge is the integration of background knowledge (e.g. cognitive models or inductive logic) into the learning process. In addition, AI plays a crucial role in BI, providing methodologies to deal with prediction, optimization and adaptability to dynamic environments, in an attempt to offer support for better (more informed) decisions. In effect, several AI techniques can be used to address these problems, namely KD/Data Mining, Evolutionary Computation and Modern Optimization, Forecasting, Neural Computing and Intelligent Agents. Hence, the aim of this track is to gather the latest research in KD and BI. In particular, papers that describe experience and lessons learned from KD/BI projects and/or present business and organizational impacts using AI technologies are welcome. Finally, we encourage papers that deal with the interaction with end users, taking into account its impact on real organizations.

Topics of interest (a non-exhaustive list):

Knowledge Discovery (KD):
- Data Pre-Processing
- Intelligent Data Analysis
- Temporal and Spatial KD
- Data and Knowledge Visualization
- Machine Learning (e.g. Decision Trees, Neural Networks, Bayesian Learning, Inductive and Fuzzy Logic) and Statistical Methods
- Hybrid Learning Models and Methods: Using KD methods and Cognitive Models, Neuro-Symbolic Systems, etc.
- Domain KD: Learning from Heterogeneous, Unstructured (e.g. text) and Multimedia data, Networks, Graphs and Link Analysis
- Data Mining: Classification, Regression, Clustering and Association Rules
- Ubiquitous Data Mining: Distributed Data Mining, Incremental Learning, Change Detection, Learning from Ubiquitous Data Streams

Business Intelligence (BI) / Data Science:
- Methodologies, Architectures or Computational Tools
- Artificial Intelligence (e.g. KD, Evolutionary Computation, Intelligent Agents, Logic) applied to BI: Data Warehouse, OLAP, Data Mining, Decision Support Systems, Adaptive BI, Web Intelligence and Competitive Intelligence

Real-world Applications:
- Prediction/Optimization in Finance, Marketing, Medicine, Sales, Production
- Mining Big Data and Cloud Computing

Paper submission:
All papers should be submitted in PDF format through the EPIA 2013 submission website (select the "Knowledge Discovery and Business Intelligence" track): https://www.easychair.org/conferences/?conf=epia2013
Submissions must be original and can be of two types: regular (full-length) papers should not exceed twelve (12) pages in length, whereas short papers should not exceed six (6) pages. Each submission will be peer reviewed by at least three members of the Programme Committee. The reviewing process is double blind, so authors should remove names and affiliations from the submitted papers and must take reasonable care to assure anonymity during the review process. References to own work may be included in the paper, as long as they are referred to in the third person. Papers should strictly adhere to the formatting instructions of the conference: http://www.epia2013.uac.pt/?page_id=564
The best accepted papers will appear in the proceedings published by Springer in the LNAI series (the EPIA 2011 proceedings were indexed by the Thomson ISI Web of Knowledge, Scopus, DBLP and the ACM digital library). The remaining accepted papers will be published in the local proceedings with an ISBN.

Special Issue in the journal Expert Systems:
Authors of the best papers presented at the KDBI 2013 track of EPIA will be invited to submit extended versions of their manuscripts for a KDBI special issue of 'The Wiley-Blackwell Journal Expert Systems: The Journal of Knowledge Engineering', indexed at ISI Web of Knowledge (ISI impact factor JCR2011 of 0.684, JCR2010 of 1.231): http://www3.interscience.wiley.com/journal/117963144/home

Deadlines:
- Paper submission: March 15, 2013
- Acceptance notification: April 30, 2013
- Camera-ready papers: May 31, 2013
- Venue: September 9 to 13, 2013

Organizing Committee:
* Paulo Cortez, University of Minho, Portugal (contact person, http://www3.dsi.uminho.pt/pcortez)
* Luís Cavique, Universidade Aberta, Portugal
* João Gama, University of Porto, Portugal
* Nuno Marques, New University of Lisbon, Portugal
* Manuel Filipe Santos, University of Minho, Portugal

Program Committee (provisional list):
Agnes Braud (Univ. Robert Schuman, France), Albert Bifet (University of Waikato, NZ), Aline Villavicencio (UFRGS, Brazil), Alípio Jorge (Univ. Porto, Portugal), André Carvalho (Univ. São Paulo, Brazil), Armando Mendes (Univ. Açores, Portugal), Bernardete Ribeiro (Univ. Coimbra, Portugal), Carlos Ferreira (Institute of Eng. of Porto, Portugal), Fatima Rodrigues (Institute of Eng. of Porto, Portugal), Fernando Bação (Univ. Nova Lisboa, Portugal), Filipe Pinto (Polytechnical Inst. Leiria, Portugal), Gladys Castillo (Univ. Aveiro, Portugal), Joaquim Silva (Univ. Nova Lisboa, Portugal), José Costa (UFRN, Brazil), Karin Becker (UFRGS, Brazil), Leandro Krug Wives (UFRGS, Brazil), Luis Lamb (UFRGS, Brazil), Marcos Domingues (Univ. São Paulo, Brazil), Margarida Cardoso (ISCTE-IUL, Portugal), Mark Embrechts (Rensselaer Polytechnic Institute, USA), Mohamed Gaber (Univ. of Portsmouth, UK), Murate Testik (Hacettepe University, Turkey), Ning Chen (Institute of Eng. of Porto, Portugal), Orlando Belo (Univ. Minho, Portugal), Paulo Gomes (Univ. Coimbra, Portugal), Pedro Castillo (Univ. de Granada, Spain), Peter Geczy (AIST, Japan), Phillipe Lenca (Telecom Bretagne, France), Rui Camacho (Univ. Porto, Portugal), Stefan Lessmann (Univ. of Hamburg, Germany), Stéphane Lallich (Univ. Lyon 2, France), Susana Nascimento (Univ. Nova Lisboa, Portugal), Yanchang Zhao (Australia Government), Ying Tan (Peking University, China)
Question
3 answers
Related to properties of data sets
Relevant answer
Answer
"uncertainty" has plural meanings, depending on context and discipline. It is really an umbrella term that covers several semi-related concepts associated with "lack of certainty" in reasoning. Varieties of uncertainty include vagueness, variability, imprecision, indeterminacy, undefined, unknowable, absence (i.e. missing information), error, and so on.
"Imprecise" has a more specific meaning than "uncertain". In numerical contexts, it basically means fewer significant digits. If the true value is "3.14159...." and the estimates are "3" vs. "3.14", then the first is imprecise compared to the second. Similar distinctions can be made in qualitative/structural/linguistic domains.
"Vague" combines imprecision with generality and corresponds to a range or region of values, and roughly corresponds to "fuzzy specification" or "fuzzy estimation". The adverb "about" is often used in vague expressions, i.e. "He is about 6 feet tall". (Notice we never hear "He is about 6.231193 feet tall".)
For more, see these books by Michael Smithson:
Ignorance and Uncertainty - http://www.amazon.com/dp/0387969454/
Question
26 answers
I am working on classifying mammogram images using computational intelligence. Is there a database with images that can be opened in Windows 7? There are a few, but they require a Unix environment. If anyone has experience working in the field, please do share.
Relevant answer
Answer
Thank you Mr. Eltoukhy,
Sir,
I saw your papers; you are making an excellent effort. As far as I know from the literature, MIAS has fewer cases than the DDSM database. I succeeded in downloading a few images from the DDSM database using the Cygwin software, which emulates a Unix environment in Windows. Now I am a bit stuck on opening the ground-truth files in Matlab. There is Matlab code named get_ddsm_groundtruth.m with which one can download the DDSM radiologist annotations and metadata. However, it is giving errors. If anyone has worked on that, please do share.
Regards.
Arbab Masood Ahmad
Question
7 answers
To be a programmer who can come up with novel solutions means a certain amount of creativity is required, but how do we enable students to develop it?
Relevant answer
Answer
As a student, I was lucky enough to study under Professor Syed Hammad Ahmed at FAST-National University of Computers and Emerging Sciences. Of the many courses he taught us, one was Algorithms. I know that traditionally, teachers simply show their students different famous algorithms and calculate their efficiency and discuss the pros and cons of each one. The teacher is the presenter and explains each algorithm to the students. The teacher occasionally asks students for their ideas and gives feedback.
What our teacher did differently was that he simply presented the PROBLEM to us, and would stand aside and let the students come up with solutions. Then different students would present their algorithms, and the others would be allowed to debate their pros and cons. So it wasn't a teacher-student thing any more. It was like a p2p network, the students teaching each other. He'd occasionally step in to guide the class in a certain direction, or give us a "building-block" algorithm and ask us to create all further algorithms from that building block. As a result, we, the students, derived all the famous algorithms on our own. He'd reveal the names of the algorithms after we had created them ourselves =) In consequence, programming creativity generally increased across the whole class.
Question
4 answers
If you read an article, journal paper or book, it refers to information from original sources: the references point to the source, origin or initiation of the research work. If a common structure for references existed, the literature-collection time (the search time) for research would be reduced. Otherwise the literature search itself turns into a research problem, because of the sheer amount of data.
Relevant answer
Answer
I guess the idea of the digital object identifier (DOI) is a step in this direction, to uniquely identify digital publications (http://www.doi.org).
On the other hand, I have never had issues finding a paper with Google or in the digital libraries of the ACM or IEEE. Reference style was much more important when the method of searching was library index cards...
Question
3 answers
With reference to the perception-action cycle.
Relevant answer
Answer
Emotions can basically be understood as an inner context that gets associated with the sensorimotor processes of an agent.
You can also view emotions as tightly linked with a system of homeostatic drives and allostatic control (see attached paper).
To me the question that remains really unclear is which sensory-emotive associations are innate (e.g. fear of spiders), and which ones are acquired through experience (e.g. joy at seeing a snowflake).
What are the other uses of a mathematical model for acute pain?
Question
Could a model for Acute pain have uses outside of medicine or biology? Could it have applications in robotics?
Question
26 answers
I have a text classification task. I used a sliding-window method to populate my data set. The problem is that the data set is huge and its data points are very similar. I would like to reduce the data set without losing informative data points. I am aware of variable selection techniques such as "kruskal.test", "limma", "rfe", "rf", "lasso", .... But how can I choose a suitable method for my problem without doing computationally intensive operations?
Relevant answer
Answer
In Bioinformatics, Position Specific Scoring Matrix (PSSM) is used to encode the protein sequence. Since the sequence length greatly varies, moving window approach is suitable. After you have numerically encoded data, you can use any standard data reduction techniques in machine learning as mentioned in previous comments.
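As a concrete and cheap filter-style illustration of that last step (my own sketch, not the answerer's): score each feature with the Kruskal-Wallis test the poster already mentioned, keep the top k, then drop near-duplicate rows. The values of k and the rounding precision are placeholders to tune, and k must not exceed the number of features.

import numpy as np
from scipy.stats import kruskal

def kruskal_scores(X, y):
    # one Kruskal-Wallis statistic per feature: cheap, model-free relevance score
    classes = np.unique(y)
    return np.array([kruskal(*(X[y == c, j] for c in classes)).statistic
                     for j in range(X.shape[1])])

def reduce_dataset(X, y, k=100, decimals=2):
    top = np.argsort(kruskal_scores(X, y))[-k:]                  # keep the k best features
    Xr = X[:, top]
    _, keep = np.unique(np.round(Xr, decimals), axis=0, return_index=True)
    keep = np.sort(keep)                                          # drop near-duplicate rows
    return Xr[keep], y[keep]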
Question
What are appropriate computational intelligence models to improve user satisfaction with m-government services?
Are you aware of this nice workshop (2nd CFP): Knowledge Discovery and Business Intelligence (KDBI - EPIA2013) ? :)
Question
Papers in Springer LNAI (ISI indexed). Best papers in Journal Expert Systems (ISI JCR). Deadline: 15/3/2013, see more at: http://lnkd.in/HHeYgc
Question
2 answers
Robots, due to cost, are a limited teaching resource, but a useful one. They engage students and make principles concrete, but it is not possible to have one robot per student, for both cost and space reasons. How can I get the same benefits as robots for teaching AI through other methods?
Article Neural nets
Relevant answer
Answer
Thank you, this is interesting.
Question
63 answers
Problem solving is central to computing and engineering education, but it is not clear what the best way is to engage students in developing problem-solving skills. Are there better ways in other disciplines?
Relevant answer
Answer
Hello all,
I think you have put your finger on something fundamental in problem/project-based learning (PBL). As an art and English teacher at an engineering university in France, I see that the fear of being wrong, and therefore of being judged by one's peers, inhibits creativity. The teacher's role in setting a comfortable zone for errors is vital, for all disciplines. This phenomenon is also true within a tight academic hierarchy, when in a meeting people are scared of participating for fear of verbal bullying.
Within the classroom, the teacher/student dynamic is different and, in my opinion, more conducive to developing self-confidence and innovation.
Best.
Catherine
Do you need an exciting congress to publish your recent work on Computational Intelligence?
Question
Next 8-11 September, at the stunning beach of 'Porto de Galinhas', Brazil, researchers from several branches of Computational Intelligence will attend the 2013 BRICS-CCI and 11th CBIC. Website => http://brics-cci.org The *NEW* submission deadline is 20th May. Send us your latest work!
Question
5 answers
Collecting data samples from a power plant requires time and funds. In order to make good use of the limited time spent on the plant capturing data, a specific data selection and preparation scheme would be very helpful as a guideline. Can anyone suggest papers that could serve as a reference for a data preparation method?
Relevant answer
Answer
Dear Nistah,
I think you have already answered your question: as you said, you need to identify the parameters that contribute to the trip occurrences. The first thing you need to do is to identify the processes (which I hope you already have, because you have the simulation model). The second thing is to identify all the activities involved within each process. I assume here a trip occurrence can be seen as one of the activities within the process. Now, here you need to be careful: a trip occurrence activity can be caused either by the process itself or by a succeeding/preceding activity (or process). This is the point (once you have identified the activities/processes contributing to the trip occurrence event) where you can define the parameters for the trip occurrence data collection.
Question
4 answers
...
Relevant answer
Answer
Thank you so much for your suggestion.
Question
8 answers
I am a bit confused by "number of clusters" and "number of seeds" in the k-means clustering algorithm. Kindly provide an example to clarify. What is the effect if we change either?
Relevant answer
Answer
Deciding the best number of clusters is a different problem from deciding how to set the values of the seeds.
The first problem is how to decide the value of k in k-means (k = number of clusters): any additional cluster improves the quality of the clustering, but at a decreasing rate, and having too many clusters may be useless for decision makers, data comprehension, data explanation, etc.
The number of initial seeds (initial cluster centers) is the same as the number of clusters (at least in the original k-means). The problem of the VALUES of the seeds is different from the problem of the number of clusters: normally you would use random cluster centers, but some research points to better ways to choose them. With better seeds, k-means converges faster and the quality of the clusters is good.
I remember that there are variations of k-means mixed with hierarchical methods; in those, you use more than k seeds, and later you must collapse (unify) some clusters, as in hierarchical clustering, until the number of clusters is reduced to k. In that method, the final number of clusters is not equal to the initial number of seeds.
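A small scikit-learn sketch (my illustration) separating the two knobs: n_clusters is k, the number of clusters, while init controls the seed values; same k, different seeding strategy.

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(200, 2)

# same number of clusters (k = 3), two different seeding strategies
km_random = KMeans(n_clusters=3, init="random", n_init=10).fit(X)
km_plus   = KMeans(n_clusters=3, init="k-means++", n_init=10).fit(X)

# k-means++ seeds usually converge faster and to equal or lower inertia
print(km_random.inertia_, km_plus.inertia_)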
What is the aim of the AI community?
Question
6 answers
What does the AI community want to achieve in terms of development?
Relevant answer
Answer
There are many researchers with very different goals working in the field of AI today. Some target systems that solve isolated problems which have traditionally required humans; this is generally called Narrow AI. Deep Blue, Watson and Siri are examples of this kind of AI. There is also a community within AI working on the explicit goal of human-level intelligence (and ultimately beyond). This community is called Artificial General Intelligence (AGI). And there are also researchers who attempt to use AI to validate psychological and neurological models of the human mind. So in short, there is really no clear, common goal amongst all AI researchers, and in fact not even a widely accepted definition of intelligence.
Question
4 answers
Which machine learning algorithm is able to gauge a student's domain level (Beginner, Intermediate and Advanced) in an online quiz? Once this has been established, is there a metric/algorithm that would be able to ascertain a smooth transition from each domain level, both with supervised and unsupervised learning? Which one is more suitable? What are some features to use to determine each domain level?
Relevant answer
Answer
If I understand correctly, you want to categorize answers to an (online) quiz into three domain levels using Machine Learning techniques? Unless you already have some metric in place, a supervised method will require example results with known domain levels (so you can compare the answers from known beginners to known intermediates, etc).
Once you have that kind of data, the problem reduces to a simple classification problem, whose solution depends somewhat on the type of questions and answers. I would guess a high-dimensionality SVM or an SLP/MLP would be quite sufficient; in fact, a simple Naive Bayes classifier would probably do just fine already.
If you want to do unsupervised learning, the problem becomes somewhat trickier. You can use a K-means or SOM algorithm to find three groups of answers, but those do not necessarily correspond to domain level knowledge groups. I'm afraid I don't know how that could best be solved; interested to hear what you come up with!
Computing with discrete compositional data structures in analogue computers
Question
Slides & video of my talk "Vector Symbolic Architectures: Computing with discrete compositional data structures in analogue computers" are available from the Talks section of my home page: http://bit.ly/RossGayler This is relevant to cognitive scientists interested in how neural networks might handle complex data structures (as may occur in linguistic processing). It should also be relevant to computer scientists interested in unconventional computation methods.
Question
1 answer
In the English language there are a lot of tools to represent the knowledge from a document as a conceptual graph, like CharGer, etc.; see http://conceptualgraphs.org/.
Relevant answer
Answer
To my knowledge, there is no open-source tool which does this level of processing, but you can look at some simpler tools like Arabic WordNet: http://sourceforge.net/projects/awnbrowser
Question
1 answer
Conceptual graphs are used for knowledge representation in a semantic notation. In the English language there are a lot of tools to represent the knowledge from a document as a conceptual graph, like CharGer and Ameen; see http://conceptualgraphs.org/.
Relevant answer
Answer
"Atlas.ti" is good and usefull in all lenguajes but I would recommend the old "pajek".
Question
7 answers
Can we make theory classes as interactive as practical or lab sessions by using the following techniques:
1. Teach using a simple equivalent model.
2. Overlapping/sandwich way of learning: teach, then practice, then teach again, using a simulation tool such as MATLAB, PSIM, etc.
3. Instead of a calculator, MATLAB can be used as a tool to solve problems; it saves paper as well.
Relevant answer
Answer
In my point of view theory classes are important as well: students can learn theory first and then implement it properly in lab classes. However, in lab sessions students are relaxed, not bound to eye contact with the teacher or the whiteboard; this is another reason they enjoy lab classes more.
CFP NICSO 2013 September 2nd - 4th, 2013 Canterbury, UK (http://www.nicso2013.org)
Question
The VI International Workshop on Nature Inspired Cooperative Strategies for Optimization
http://www.nicso2013.org
Full paper submission (extended): May 5, 2013
Acceptance notification: May 25, 2013
Final camera ready: June 5, 2013
NICSO: September 2-4, 2013
Are you aware of this nice R tool package: New rminer v1.3 (R package that eases data mining classification and regression) ?
Question
1 answer
Package available at CRAN: http://cran.r-project.org/web/packages/rminer/ Feedback about the package is welcome.
Relevant answer
Answer
Today I have launched rminer version 1.4 at CRAN:  http://cran.r-project.org/web/packages/rminer/
Question
11 answers
It would be interesting to know what the most promising prospect is in the attempt to develop AI.
Relevant answer
Answer
Hi Heman,
I understand your point of view.
I thought your question was too broad.
By reformulating it, I could answer. I try again.
We know that biological neural networks are well suited for Natural Intelligence. We can expect that a good simulation of biological neural networks should be well suited for Artificial Intelligence.
However, we observe that if we would like to solve a practical problem, neural networks are just one of the learning machines we can use. Moreover, the most widely used neural network, the MLP, is not plausible from a biological point of view.
My point of view is that there are two possible ways:
- find the best simulation of biological neural networks to achieve Artificial Intelligence (the Grail),
- solve some tasks of Artificial Intelligence, using learning machines.
Question
2 answers
Details of the research work will follow if there are capable contributors.
Relevant answer
Answer
Hi Carlos, please can you send me a link to the already-hybridized ACO+QL? Thanks for your contribution.
Question
1 answer
Experimental work is identified as one of the best practices for students to gain knowledge and to develop skills with interest. It is observed that attendance is also better for labs than for theory classes.
The lab helps students test and implement concepts, principles and new ideas, and to some extent it can act like an incubation centre for solving real-world problems.
Relevant answer
Answer
What are the possibilities for conducting remote experiments? Is it possible to design and implement them alongside traditional ones? The Internet and 3G/4G technology provide new opportunities for remote experimentation.
In future there may be two types of experiments: remote and non-remote.
This kind of distance-learning lab can be applied to the following labs:
- Simulation-based experiments
- Low-voltage electronic circuit simulation, e.g. with PSIM or MATLAB
- Some fluid mechanics lab experiments
- Computer programming
Note: the lab must be certified by the HSE department.
Both the student side and the college side can be equipped at low cost with low-risk experimental setups; for an electronic circuit: a breadboard, some desktop instruments and a power supply. Experiments on electrical circuits have been conducted over mobile communication/the Internet using a remotely located experimental facility. A paper discusses some of the implementation issues; the address of the laboratory home page is http://www.its.bth.se/distancelab/english/
What is the difference between Knowledgeable and Knowledge as a term?
Question
3 answers
What is the difference between Knowledgeable and Knowledge as a term?
Relevant answer
Answer
Thanks, Alan, for your cooperation. I want the difference between the two words as terms, because I want to find a proper definition for each term in order to translate it into another language.
Question
5 answers
If the algorithm is very greedy (e.g. local search), then restarts of that algorithm might resemble a memetic algorithm. Let's consider a memetic algorithm with a very expensive local optimization technique: we would like to run the local optimizer less often, so we might want to filter the starting points. This can be difficult, since a start point with a good fitness evaluation might not have that much more room for improvement. So what we need are ways to estimate the room for improvement for a given (e.g. completely random or somewhat random) solution.
I'm primarily interested in continuous search spaces. Given two solutions x1 and x2, is there a way to find out/estimate the probability that the (local) optimum near x1 is better than the local optimum near x2 when f(x1) is worse than f(x2)?
Relevant answer
Answer
1) In reference to the restarts, I understand you are referring to multi-start methods. In this case, I can tell you that my previous experience says that multi-start strategies are particularly interesting when applying metaheuristics with a single solution, while this strategy is not so good when applying population-based approaches. The reason is that if, for example, you apply Simulated Annealing, the search process can be performed for a given number of iterations, after which you can store the best solution found and execute Simulated Annealing again with a different initial solution. Another option would be to re-start the parameters of Simulated Annealing and continue the search process from the point at which the previous execution finished. On the contrary, if you apply a population-based metaheuristic (e.g. an evolutionary algorithm, memetic algorithm, etc.), the use of different initial solutions in each individual is almost equivalent to applying a multi-start / re-start strategy.
2) In reference to the last question, and supposing that the problem to be solved is NP-hard, I would answer no.
If you have two solutions (x1 and x2) of an optimization problem, and their fitness values are f(x1) and f(x2), I consider that it is not possible to determine whether or not a local optimum close to x1 is better than the one close to x2 if the only information you have is the two fitness values. In fact, small changes in the solutions (e.g. the application of the mutation operator) can cause important changes in the fitness value of a solution; i.e., in hard problems where the neighbourhood structure is not known in detail, it is not possible to determine the best solution areas. In other words, in NP-hard problems it is not trivial to determine the effect in the objective space of a change in the search (or solution) space.
Finally, as you are interested in this question, I suggest reading the papers of Alberto Moraglio, who has investigated in detail the concept of "distance" over the solution space.
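Since fitness alone cannot rank basins, one cheap heuristic that is sometimes used (this sketch is my own, not from the answer above) is to run a few hill-climbing probe steps from each candidate and compare the improvement achieved, before paying for a full local optimization:

import numpy as np

def probe(f, x, steps=10, sigma=0.1, seed=0):
    # short hill climb: a crude estimate of how much improvement a basin offers
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    fx = f(x)
    for _ in range(steps):
        cand = x + rng.normal(0.0, sigma, size=x.shape)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return fx

def more_promising(f, x1, x2):
    # pick the start point whose neighbourhood responded better to the probe
    return x1 if probe(f, x1) < probe(f, x2) else x2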
Can the human brain understand itself fully? Just as in mathematics a function cannot be used to define itself.
Question
2 answers
Just as in mathematics a function or theoretical concept/term cannot be used to define itself (we have to use another understanding, apart from the function/concept in question, to make sense of it), so, I suppose, should be the case with the brain. But the brain is also the highest and only tool for understanding. What else can be used to understand it? Or will it be a case of different components of the brain understanding each other? But then, can there ever be a full understanding of the brain?
Relevant answer
Answer
This is an interesting question, and I think philosophers (which I am not) would have better tools and methodology to answer it correctly. In my view the analogy with mathematics, as you presented it, is not valid. In mathematics, to define some concepts within a certain theory we may introduce a different-level (meta) theory which may define, explain, or prove concepts in the earlier theory. The question about "full understanding" of anything (not only the brain) is a different type of question. I think the "full" will never be possible. It is like saying we will have "full understanding" of the universe, or life, or any other interesting thing in nature. I hope there will always be something more to understand. Back to your analogy: it is not that the brain "understands" the brain in the same way a specific mathematical framework describes itself. Brains of multiple people over many generations contribute to creating proper frameworks to describe the brain and its function. Mathematical theories and models, computational models, behavioral sciences, even theology are all different tools for understanding. These tools were created or discovered (as you may know, many mathematicians and philosophers hold the view that all mathematics is given in nature and mathematicians just discover it using their brains!). We will always have "understanding" within a certain framework, and these frameworks may overlap, evolve into new theories, or limit themselves to certain properties of the brain. I am not looking to achieve "full understanding" of anything; it would be very boring if after some "final" discovery there were nothing left to discover (-:
Bayesian Regulation in MATLAB
Question
I've got an assignment to create a plot of eigen-energy on the x-axis against interval energy on the y-axis using trainbr in MATLAB. In my case, the matrices N and E are equivalent, but when I try to create the plot of E against N using trainbr I get stuck. Can anyone please explain to me how to use trainbr in MATLAB? Your answer will be much appreciated, thanks.
Question
3 answers
When trying to gauge similarity?
Relevant answer
Answer
The parameters for LSI are the dimensions selected when applying SVD. Also, weighting schemes (local and global), document-length normalisation (if working with documents), pre-processing and stemming can be considered parameters which impact the performance of LSA.
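As a concrete illustration (mine; the answer names no library), here is how those knobs appear with scikit-learn: n_components in TruncatedSVD is the LSI dimensionality, and the TfidfVectorizer settings are the local/global weighting choices.

from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the cat sat on the mat", "dogs and cats", "stock market prices fell"]

tfidf = TfidfVectorizer(sublinear_tf=True)         # local/global weighting choice
X = tfidf.fit_transform(docs)
Z = TruncatedSVD(n_components=2).fit_transform(X)  # the LSI dimensionality
print(cosine_similarity(Z))                        # similarity in the latent space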
Question
7 answers
I'm currently considering two optimization algorithms (artificial immune systems & simulated annealing), mainly for solving scheduling problems in manufacturing. However, I would like some opinions on whether to consider hybrids of two or more algorithms or a non-hybridized algorithm. Basically, what degree of "hybridization" between two or more algorithms can be considered a hybrid algorithm? How is hybridization actually conducted, and what approaches are mostly adopted out there for two or more algorithms to be successfully hybridized? In what respects is hybridization actually needed?
Relevant answer
Answer
Hi,
In order to perform hybridization, you need to identify the strengths and weaknesses of both methods. Basically, you can use one after the other in sequence, starting the second from the best solution found by the first, or run them in parallel and exchange their solutions from time to time. But this may not be the best idea.
If one approach (A) is better at performing a global search and the other one (B) is better in the local exploration, then you can embed B within A. For instance Memetic Algorithms are Genetic Algorithms using local search or limited metaheuristic as special mutation operators. GRASP is also a good candidate for A and you can use any other metaheuristic as the local search: GRASP / Tabu Search, GRASP / Simulated Annealing, GRASP / VND, and so on.
As Tino mentioned, you can also think about a matheuristic, that is a combination of heuristic / metaheuristic and mathematical programming (exact) method.
Question
18 answers
I want to know the algorithms behind random number generators.
Relevant answer
Answer
Hi Ali,
As Sasikumar said, computers generate only pseudo-random numbers. But in order to increase the entropy of your computer-generated numbers, you can observe some "unpredictable" phenomena such as mouse movement, keyboard typing, network interface activity or (the best one to my knowledge) the gyroscope movement of your device (think of a mobile phone moving in the hand of a baby..).
You can watch this video (http://www.youtube.com/watch?v=itaMNuWLzJo) which explains how pseudo-randomness works.
For algorithms, my favourite one is the Mersenne Twister (very fast and easy to implement); see here: http://en.wikipedia.org/wiki/Mersenne_twister
A paper discussing how the Linux random number generator works: http://eprint.iacr.org/2006/086.pdf
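A quick illustration (my addition): Python's random module already implements the Mersenne Twister, while secrets draws on the OS entropy pool fed by exactly the kind of unpredictable events mentioned above.

import random, secrets

random.seed(42)                     # MT19937: reproducible pseudo-random stream
print([random.random() for _ in range(3)])

print(secrets.token_hex(16))        # OS entropy: unpredictable, not reproducible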
Hope this helps.
Good luck.
iliasse.
Question
How would I apply user behaviour in library e-services with computational intelligence?
Question
4 answers
For example, assume we have a frame representation of a castle; how can we tell the computer to draw the castle from that knowledge?
Relevant answer
Answer
There are several ways. First, you could program the actions yourself to construct the object you want, for example in Java; on the other hand, you could use Maya or AutoCAD to accomplish your project. It must be noted that you need to think about rendering, which is not necessarily included in all 3D-design software. I think it would also be good to review Scratch from MIT to strengthen or develop your skills in programming and computing.
Greetings.
Rafael Resendiz Ramirez
What do you think about publication?
Question
Prof. Dr. David Parnas (a pioneer in Software Engineering) has joined the group of scientists who openly criticize the number-of-publications-based approach to ranking academic production. In his November 2007 paper "Stop the Numbers Game", he elaborates several reasons why the current number-based academic evaluation system used in many fields by universities all over the world (be it oriented to the number of publications or the number of citations each of those gets) is flawed and, instead of generating more advancement of the sciences, leads to knowledge stagnation.
Question
5 answers
How can one participate in the workshop on AI in Bangalore?
Relevant answer
Answer
Can anyone give us the link?
Question
Genetic Lifeform and Disk Operating System 2 is Aperture Science's second mind.
Question
6 answers
I am searching for MATLAB code implementing rough sets.
Relevant answer
Answer
Hi, I am working on rough set methodology. I am stuck at a point in applying RST to my problem; can anybody help me find a solution? I have multiple decision attributes with multiple conditional attributes, and I have not come across any method which accommodates multiple decision attributes. Can you help me tackle this problem? Suggested literature would also be welcome. Thanks in advance.
Question
6 answers
Hi.
I am working on a specific evolutionary algorithm improvement.
Can anybody suggest a paper or book about rules for comparing evolutionary algorithms?
I saw some approaches used in other papers for comparison, but I'd like to have a comprehensive reference about these rules (if any written rules exist).
Thanks
Relevant answer
Answer
The rules are simple:
1.) Test on multiple problem instances, or you will overfit on the one you use.
2.) Do parameter studies (grid search), or you won't know whether you just used the wrong parameters on the algorithms you compare.
3.) Do multiruns, or you risk getting better/worse results only by chance.
4.) Do not hide runtime behind generations/number of evaluations. If your mutation operator takes several minutes, it is unfair to plot fitness against the number of function evaluations.
5.) Compare your method to the state of the art. If you don't you are improving nothing.
Even if many papers published in this area are not that thorough, this should be the minimum standard. If you make it through all these steps (which is NOT easy), you can try to publish it.
Some additional advice: don't get me wrong, but significant improvements on common strategies in EA are very rare. This can be easily explained: in many cases the "dullest" algorithms yield very good results. I call it "the curse of multistart hillclimbing". An increase in complexity seldom has a significant effect on EA efficiency when applied to a broader range of problems. There is almost no free lunch, right?
There is one exception, which is including problem awareness in the algorithm. The constraint-preserving mutation operators for the TSP in order-based genetic algorithms would be a good example of that.
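A bare-bones harness applying rules 1-4 might look like the following sketch (mine, not the answerer's); algorithms and instances are placeholders for your own callables and benchmark problems.

import statistics, time

def benchmark(algorithms, instances, runs=30):
    # algorithms: {name: callable(instance, seed) -> best fitness found}
    results = {}
    for name, algo in algorithms.items():
        for i, inst in enumerate(instances):            # rule 1: several instances
            fits, times = [], []
            for seed in range(runs):                    # rule 3: independent multiruns
                t0 = time.perf_counter()
                fits.append(algo(inst, seed=seed))
                times.append(time.perf_counter() - t0)  # rule 4: wall-clock, not generations
            results[(name, i)] = (statistics.mean(fits), statistics.mean(times))
    return results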
Question
1 answer
Hi everybody.. can anyone help me understand how to develop a small intelligent system using any programming language, maybe Java? The basic problem is: how can we add intelligence into software?
Relevant answer
Answer
First you need to know what form of "Intelligence" you want to add.
I found a book, which might be quite long in the tooth, that claims to teach early A.I. techniques in Java:
"Hands-On AI with Java: Smart Gaming, Robotics, and More" by Edwin Wise, TAB Electronics Series by McGraw Hill, ISBN 0-07-142496-2 (2004). I don't know if it is still in print, though.
Question
3 answers
I am looking for modern platforms that support automated defeasible reasoning and decision support for a medical application. My graduate work (back in the mid-90s) helped build an agent-based defeasible reasoner and implemented a primitive "Medical Diagnostic Advisor". I'm now working with a company that needs a *defeasible* rule-based engine to support decision-making in real time based on input from a monitoring system. I was hoping someone could point me in the right direction for the latest developments in defeasible reasoners, preferably LISP-based, although C/C++/Objective-C is fine too. Thanks!
Relevant answer
Answer
Hmmm....
I haven't heard that term in a while; perhaps defeasible reasoning has been subsumed under some other theoretical umbrella, but I can't say for sure.
Question
Integrated statistical techniques that can curb credit card and debit card fraud are in the process of being launched in India. The techniques are a combination of Baum-Welch and multivariate Gaussian distributions, besides many other integrated technologies. We are looking at international tie-ups with agencies that are already working in this sphere. You may write in to arijayc@gmail.com
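The post names no specific implementation, but the multivariate-Gaussian half of such a pipeline can be sketched generically with numpy/scipy (my illustration): fit a mean and covariance on legitimate transactions, then flag low-density points as suspect.

import numpy as np
from scipy.stats import multivariate_normal

def fit_gaussian(X_normal):
    # fit on legitimate transactions only
    return X_normal.mean(axis=0), np.cov(X_normal, rowvar=False)

def anomaly_scores(X, mu, cov):
    # negative log-density: higher score = more suspicious
    return -multivariate_normal(mean=mu, cov=cov).logpdf(X)

# flagged = anomaly_scores(X_new, *fit_gaussian(X_train)) > threshold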
Question
2 answers
Hi all,
could anybody recommend a book on how to select a good fitness (cost) function for classification purposes?
thanks
Relevant answer
Answer
Thanks, Philip.
Question
Use of Computational Intelligence / Artificial Intelligence / Artificial Life to solve the Millennium Prize mathematical problems.
If Computational Intelligence / Artificial Intelligence / Artificial Life could be used to solve the Millennium Prize mathematical problems, please send me feedback at ian.ajzenszmidt@alumni.unimelb.edu.au. Success in this endeavor would be a great public relations and prestige coup.
http://www.claymath.org/millennium/ is the source of the following:
In order to celebrate mathematics in the new millennium, The Clay Mathematics Institute of Cambridge, Massachusetts (CMI) has named seven Prize Problems. The Scientific Advisory Board of CMI selected these problems, focusing on important classic questions that have resisted solution over the years. The Board of Directors of CMI designated a $7 million prize fund for the solution to these problems, with $1 million allocated to each. During the Millennium Meeting held on May 24, 2000 at the Collège de France, Timothy Gowers presented a lecture entitled The Importance of Mathematics, aimed for the general public, while John Tate and Michael Atiyah spoke on the problems. The CMI invited specialists to formulate each problem.
One hundred years earlier, on August 8, 1900, David Hilbert delivered his famous lecture about open mathematical problems at the second International Congress of Mathematicians in Paris. This influenced our decision to announce the millennium problems as the central theme of a Paris meeting.
The rules for the award of the prize have the endorsement of the CMI Scientific Advisory Board and the approval of the Directors. The members of these boards have the responsibility to preserve the nature, the integrity, and the spirit of this prize.
Question
4 answers
Hi all, can anyone give a suggestion on finding the role of metacognition, and the appropriate soft computing principles?
Relevant answer
Answer
I think you are asking the wrong question.
It's not what CAN be implemented, it's How do you implement it?
Let's take a look at the Agency thing.
Limbic Parameterization is really easy to implement. Decide what is important (the brain has had a long history of genetic development to get this right) and generate a "Soft" signal that varies according to how well the important elements are being serviced by the brain.
For instance, Hunger is simply the brain determining that there might be food available, and that it might need to eat some.
For this to act as a libido, or drive, to feed, the animal must get "hungrier" the longer it doesn't feed, and have its behavior biased to minimize the "hunger" signal.
OK, so let's take a look at a robot: given an ADC it is possible to read the voltage of the battery pack, and from this derive a soft signal that determines at which voltages the signal will have what strength. As the battery voltage drops, the metacognitive feeling should get stronger, and when the voltage increases, it should get weaker. If you have a threshold under which the signal does not reach the processor, and the priority of feeding increases with the strength of the signal, then sooner or later the robot will determine that it needs to recharge.
The comfort zone, or threshold setting for hunger, can be a soft value. In humans we think there are two levels to comfort calculation: the set point, which is set in childhood, and the adjustment point, which is set by experience. The model therefore uses these two numbers in a relationship where the adjustment point is the primary setting, but in extremis (when adrenal function is invoked) the set point is used instead.
OK, so when an action is being pre-analysed, the modeling mechanism looks at the adjustment/set points and determines whether the action will cause the metacognitive signal to increase or decrease; it then evaluates whether a strong signal is positive or negative. The modeling mechanism must decide whether the projected change in the metacognitive signal should be allowed, and thus whether the action should be allowed. In this sense the metacognitive signal provides the parameters for the decision of whether to perform the action or not.
When the log is compared to the model, the previous and current states of the parameters indicate whether the predicted parameters moved in the direction of comfort or otherwise. The metacognitive "self" signal is probably a fitness measure for matching the "agency" requirements with the model's predictions.
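A toy rendering of the battery-voltage "hunger" drive described above (my own sketch; all numbers are made up and the read_adc helper is hypothetical):

def hunger_signal(voltage, set_point=12.0, adjust_point=11.5,
                  threshold=0.1, in_extremis=False):
    # in extremis the childhood set point overrides the learned adjustment point
    comfort = set_point if in_extremis else adjust_point
    drive = max(0.0, (comfort - voltage) / comfort)   # grows as the battery drains
    return drive if drive >= threshold else 0.0       # sub-threshold: invisible to the planner

# the planner then biases action selection toward recharging, e.g.
# priority_of_recharge = hunger_signal(read_adc())    # read_adc() is hypothetical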
Question
2 answers
Support Vector Machines can play an important role in protein fold recognition. HMM and ANN also play their role in it.
Relevant answer
Answer
Yes, SVM is a good choice, especially for classification of non-linear objects, and with the aid of fuzzy logic we can improve its efficiency.
Question
4 answers
Here we are faced with global economic challenges. Businesses are indeed struggling to cope with different aspects, including unreliable labor.
Relevant answer
Answer
The driving force for literally anything in the business world is profit.
Anything could be automated, but only those which are profitable are automated.
However, I do agree with your point that labor is becoming increasingly unreliable. One contributing attribute is the increasing communist attitude; a communist's attitude is always a pathway to the grave.
Question
2 answers
Support Vector Machines can play an important role in protein fold recognition. HMM and ANN also play their role in it.
Relevant answer
Answer
No. Actually, I am trying to apply SVM to fold recognition...
Question
Hey, all wonderful people here..
I just need your views on the best artificial neural network approach when using a neural network for strategic decision making in our application..
thanks in advance,
warm regards
Chetan
Question
1 answer
Hi everyone,
I'm working on a software sensor (biomass). Due to the complexity of the bioprocess, I'm using a black box (neural networks) to predict the biomass concentration during the culture. For that I'm using MATLAB to implement my RBF-NN (Radial Basis Function NN) and PCA for pre-processing the data. But here is my problem: I can't find out how to do this, because I'm new to MATLAB and I don't want to use the MATLAB NN module. Is there a better NN or pre-processing method for modeling a bioprocess? Please, I need some help.
Best regards.
Relevant answer
Answer
I think MATLAB is a good option for this... You can also go for NeuroSolutions...
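Not MATLAB, but here is a compact Python sketch (my own) of the PCA -> RBF-NN pipeline the question describes, to show the moving parts: centers from k-means, a shared width from the center spacing, and linear output weights by least squares. All sizes are placeholders.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def fit_rbf(X, y, n_centers=10, n_components=3):
    pca = PCA(n_components=n_components).fit(X)            # pre-processing step
    Z = pca.transform(X)
    centers = KMeans(n_clusters=n_centers, n_init=10).fit(Z).cluster_centers_
    sigma = np.mean(np.linalg.norm(centers[:, None] - centers[None, :], axis=-1))
    Phi = np.exp(-np.linalg.norm(Z[:, None] - centers[None, :], axis=-1) ** 2
                 / (2.0 * sigma ** 2))                     # hidden-layer activations
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)            # linear output weights
    return pca, centers, sigma, w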
Question
1 answer
Hey all, I am working on building a firewall for prevention of SQL injection in websites. I am planning to do this with an artificial neural network, and I need your comments and suggestions.
If anybody is working on something similar, guidance is needed..
thanks in advance
Chetan
Relevant answer
Answer
For building a firewall for detection of SQL injection, your model based on a neural network (NN) must go through two phases. The first is to train the NN with a dataset that contains SQL injection attacks, so the NN will build a model or profile of those attacks; it is then able to learn and recognize novel attacks, as it has a high capability of generalization. The second phase is the evaluation of your firewall: after the NN learns, your NN-based firewall should be able to distinguish SQL injection attacks from normal SQL queries.
So, try to find a dataset that contains both SQL injection attacks and normal SQL queries, then divide this dataset into two parts: 70% of the dataset will serve as the training set and 30% as the testing set.
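A minimal sketch of that recipe (mine; the dataset loader is hypothetical and the feature/classifier choices are just one reasonable option): character n-grams tolerate the odd tokens typical of injection strings, and train_test_split gives the 70/30 division.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

queries, labels = load_sql_dataset()       # hypothetical loader: strings and 0/1 labels

X_train, X_test, y_train, y_test = train_test_split(
    queries, labels, test_size=0.30, stratify=labels)      # the 70/30 split above

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # char n-grams capture odd tokens
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=300))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))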
good luck.
Question
1 answer
Dear group members
What are the main advantages of LOLIMOT (the LOcally LInear MOdel Tree algorithm) in comparison to MLP neural networks?
Relevant answer
Answer
Within a LOLIMOT training procedure, only linear optimization techniques (which are fast and deterministic) are necessary. In contrast, an MLP is mostly trained by a nonlinear LM algorithm, which may lead to a local rather than a global optimum. Moreover, the linear extrapolation behavior of a LOLIMOT-trained model (vs. the MLP's constant extrapolation) and the locality property can be advantageous for many applications. Another feature is that the user can influence the splitting procedure by incorporating prior knowledge: if the user expects nonlinear behavior only in one special input dimension, the other dimensions can be neglected within the partitioning process. There may be further advantages depending on the specific application.
Question
VLifeSCOPE (SCOPE) is a Structure-based Compound Optimization, Prioritization & Evolution computational method. SCOPE brings together two powerful approaches: one, a comparative binding energy analysis based method for lead optimization, and two, a score-based approach for activity prediction.
Comparative binding energy analysis is a receptor-dependent analogue method that enables a better understanding of ligand-receptor interactions. For each of the ligands under consideration, intermolecular and intramolecular energies are calculated for the ligand-receptor complexes, the unbound ligands and the receptor.
Advantages of VLifeSCOPE:
1. Identification of residues that are the key to modulating the ligand activity in a target
2. Predicting activity of newly designed compounds docked into a target
3. Prioritization of docked compounds based on their predicted activity
VLifeSCOPE is now available with VLifeMDS 3.5 as an advanced module.
Access the VLifeSCOPE webinar archives here: http://www.vlifesciences.com/webinar/Webinar.php
Question
3 answers
Nowadays we have hex-core microprocessors on the market, but even after so much improvement in microprocessors we don't have any remarkable change in overall performance (speed should be six times that of a Pentium).
Don't you think we should think about hardware improvements, or about how we can interface the hard drive to the microprocessor in such a way that we minimize misses? The miss rate is currently very high (as hard disks are comparatively quite slow).
All suggestions and comments are most welcome, as I want to work on this and am looking for new ideas.
Partners in this project are also welcome.
Thanks in advance..
Warm Regards,
Chetan
Relevant answer
Answer
Actually, I think it is time to quit waiting for hard drives and relegate them to archival storage.
Consider your lowly flash key, with more memory than a hard drive had 20 years ago. How fast do you think you can clock flash memory? Wouldn't it make sense to use flash as a buffer between your hard drive/CD-ROM/DVD/Blu-ray archive media and the processor?
Question
5 answers
Hi all, I am working on fuzzy logic implementation in PL/SQL or SQL queries. I need help from someone who is working on the same or has worked on it.
Thanks a lot
Chetan
Relevant answer
Answer
Thanks a lot for the response....
I am just trying to devise an application by which a user can choose among banks, finance companies and insurance companies, and select a plan or account.
The user can type any query, like a Google search query, and the result should be optimized by using fuzzy logic on the database...
I need a basic architecture for this idea, so that I can start working on specifics...
Is this enough description, or should I be more specific...?
UniCSE CFP
Question
2 answers
Relevant answer
Answer
Thanks a lot for the help. I will contact you very soon.
What do you think about publication?
Question
Prof. Dr. David Parnas (a pioneer in software engineering) has joined the group of scientists who openly criticize the publication-count approach to ranking academic production. In his November 2007 paper "Stop the Numbers Game", he elaborates on several reasons why the current number-based academic evaluation system used in many fields by universities all over the world (whether oriented to the number of publications or the number of citations each of them receives) is flawed and, instead of advancing the sciences, leads to knowledge stagnation.
Question
1 answer
In a parsing mechanism, what does this mean?
Relevant answer
Answer
Using dynamic programming to reduce the order of the parsing algorithm by keeping a record of previous successful partial parses.
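A minimal sketch of that idea (with a toy grammar, not taken from the answer): memoizing (rule, position) pairs means each partial parse is computed only once, which can turn exponential backtracking into polynomial time.

```python
from functools import lru_cache

TOKENS = list("1+1+1")

@lru_cache(maxsize=None)
def parse_expr(pos):
    """expr := term '+' expr | term ; returns end position or None."""
    end = parse_term(pos)
    if end is None:
        return None
    if end < len(TOKENS) and TOKENS[end] == "+":
        rest = parse_expr(end + 1)
        if rest is not None:
            return rest
    return end

@lru_cache(maxsize=None)
def parse_term(pos):
    """term := digit ; returns end position or None."""
    if pos < len(TOKENS) and TOKENS[pos].isdigit():
        return pos + 1
    return None

print(parse_expr(0) == len(TOKENS))   # True: the whole input parses
```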
Question
The basic structuring methods presented are the array, the record, the set, and the sequence. More complicated structures are not usually defined as static types, but are instead dynamically generated during the execution of the program, when they may vary in size and shape. Such structures include lists, rings, trees, and general finite graphs. Variables and data types are introduced in a program in order to be used for computation. To this end, a set of operators must be available. For each standard data type a programming language offers a certain set of primitive, standard operators, and likewise with each structuring method a distinct operation and notation for selecting a component. The task of composing operations is often considered the heart of the art of programming. However, it will become evident that the appropriate composition of data is equally fundamental and essential. The most important basic operators are comparison and assignment, i.e., the test for equality (and for order in the case of ordered types), and the command to enforce equality.
The fundamental difference between these two operations is emphasized by the clear distinction in their denotation throughout this text.
Test for equality : x = y (an expression with value TRUE or FALSE)
Assignment to x : x: = y (a statement making x equal to y)
These fundamental operators are defined for most data types, but it should be noted that their execution may involve a substantial amount of computational effort if the data are large and highly structured. For the standard primitive data types, we postulate not only the availability of assignment and comparison, but also a set of operators to compute new values. Thus we introduce the standard operations of arithmetic for numeric types and the elementary operators of propositional logic for logical values.
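To illustrate the cost remark (a sketch in Python syntax rather than the Pascal-style notation above): for large structured data, both the test for equality and an assignment that truly copies the value take effort proportional to the size of the structure.

```python
big_a = list(range(1_000_000))
big_b = list(range(1_000_000))

equal = (big_a == big_b)      # test for equality: walks both structures, O(n)
big_c = list(big_a)           # assignment that copies the value: also O(n)
print(equal, big_c == big_a)  # True True
```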
Question
157 answers
Mind modeling, relevant knowledge base, knowledge representation, cognition, computation
Relevant answer
Answer
What does it mean to understand mind?
The best scientific understanding possible, the foundation of the scientific method, is a model making experimentally verifiable predictions, and then experimental verification of these predictions.
The model could be classical or quantum, exact or approximate - these are secondary to the very question of understanding. It is good if the model can specify the accuracy of its predictions. But even qualitative predictions are a strong indication of understanding. When there are no predictions at all, this indicates that there is no understanding.
Does it make sense?
Question
11 answers
For example, the input values can take only a finite set of states, say 10 symbols {A, B, C, D, E, F, G, H, I, J}.
Relevant answer
Answer
Boolean algebra!
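To make the "Boolean algebra" hint concrete (a sketch of two common encodings, not part of the answer above): ten symbols can be represented either as 4-bit binary codes or as 10-dimensional one-hot vectors before being fed to a model.

```python
symbols = list("ABCDEFGHIJ")

# 4-bit binary code per symbol (10 symbols fit in 4 Boolean variables)
binary = {s: format(i, "04b") for i, s in enumerate(symbols)}
# One Boolean input per symbol (one-hot encoding)
one_hot = {s: [int(i == j) for j in range(len(symbols))]
           for i, s in enumerate(symbols)}

print(binary["C"])    # '0010'
print(one_hot["C"])   # [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
```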
Question
12 answers
If I have 1000 instances of the false class and 50 instances of the true class, which practice gives a better result: selecting false and true instances in their original ratio, or in equal numbers?
Relevant answer
Answer
I guess it depends on which classifier and training algorithm you use. Since your question belongs to the ANN topic, I assume you want to use a neural network classifier (but the answer is valid for many other types of classifiers). If you are using anything similar to an MLP, I would go for an equal number of instances; otherwise the training algorithm will be dominated by the classification errors on the most numerous class.

That is, if you use the standard BP algorithm, the ANN weights are modified at each presentation of a training example. Since one class has 20 times more examples than the other, most of the training procedure will be spent on minimising the classification error on the most numerous class. In the extreme case, the training algorithm could just learn to classify all examples as 'false' and it would still reach 95% accuracy. If the data set is noisy, trying to learn the exact classification boundaries may give an accuracy lower than 95%.

If your training set is balanced (i.e. all classes are equally represented), you force the ANN to learn to distinguish the two classes (e.g. if all examples are classified as 'false', the classification error is 50%). However, if some errors are more dangerous than others (e.g. a false negative in a mammography), you may want to work with unbalanced sets in order to bias the learning procedure towards the most important classes.
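A minimal sketch of the "equal number" option (array names and sizes are illustrative): undersample the majority class so both classes are equally represented before training.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1050, 5))
y = np.array([0] * 1000 + [1] * 50)          # 1000 false, 50 true

# Keep all minority examples; sample an equal number of majority examples
idx_minority = np.where(y == 1)[0]
idx_majority = rng.choice(np.where(y == 0)[0],
                          size=len(idx_minority), replace=False)
idx = np.concatenate([idx_minority, idx_majority])
rng.shuffle(idx)

X_bal, y_bal = X[idx], y[idx]
print(np.bincount(y_bal))   # [50 50]
```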
Question
I've read a couple of papers from the WWW conference on how to predict click-through rates of text ads. Does anyone know any good research on image ads?
Question
20 answers
Something that covers PSO, DE, ES, EDA, etc.
Relevant answer
Answer
I can also recommend Sean Luke's free book on metaheuristics at:
(Full disclosure: Sean and his wife are on my PhD committee.)
Call for Voting for Top 10 Questions in Intelligent Informatics/Computing (Top10Qi)
Question
Sixty years ago, Alan M. Turing raised the essential question "Can machines think?" in his article "Computing Machinery and Intelligence", which can be regarded as one of the seeds of AI and other intelligent computing. The Top10Qi open forum tries to offer a common platform for all of us to work together on the basic questions and to figure out the top 10 questions in Intelligent Informatics/Computing (Top10Qi).

Thanks to the great support from many people, we have received 128 questions from all over the world, which can be viewed at the Top10Qi web site: http://wi-consortium.org/blog/top10qi/index.html

We are now in the 2nd stage, i.e., voting for the top 10 questions, which are to be discussed in the panel session at WIC 2012. You are cordially invited to vote for up to 10 questions from the question list in the Top10Qi voting system at http://top10qi.org/

After clicking the <Non-TC Member> button, you will enter a sign-in page in which you should provide the following information:
Email Address: (to send an acknowledgement message including your voted questions)
Name: Your Name (optional but preferred)

The voting deadline is November 24, 2012.

We are looking forward to your strong support and contribution! If you have any questions, please send an email to <Top10Qi@gmail.com>.

Best Regards,
Top10Qi Organizing Committee
Question
7 answers
Relevant answer
Answer
It ought to depend on how you use the statistical models. If you use them to analyze/generalize a whole bunch of inputs, yes, that is flawed, as there are human behaviours that do not depend on a large set of inputs, but something more like an internal model. This is basically Chomsky's point, and he's right to the extent that learning approaches which depend on aggregation of data like this are not a good clue to human behaviour. However, that is not the only way you can use statistics. If you look at Anderson's ACT-R model, for example, it is firmly rooted in statistics but at a memory chunk level not at a stimulus level. This is both neurally relatively plausible and capable of modelling complex human behaviours, such as problem solving.
I'm not sure Norvig's points are helpful, either. ACT-R is sort of algorithmic modelling, but it really does make claims of correspondence to the processes in human psychology. That's why it's really cognitive modelling. For some time, artificial intelligence and cognitive science have gone their separate ways. Chomsky's critique of artificial intelligence as not saying much about human language is reasonable in this respect, but that doesn't undermine the use of statistical modelling in disciplines which do care about understanding the processes in human psychology.
Why are computer scientists and AI researchers attracted by the idea of an oracle?
Question
'Oracle' is a misleading word. In ancient Greece, oracles were people who knew the meanings of events because of their connections to the gods. In today's computer science, 'oracle' is a fancy word for a random number generator. But random numbers do not create meanings. Maybe mathematicians should learn psychology and try to develop mathematical models of meaning and creativity.
Question
31 answers
What are your ideas about the Danger Theory algorithm? There are several algorithms to model and implement Artificial Immune Systems, such as Negative Selection, Clonal Selection, and more recently Danger Theory. I want to work on the Dendritic Cell Algorithm (DCA), but it is a little ambiguous. Any recommendations?
Relevant answer
Answer
Since the advent of the Internet, as more computers join broadband networks and ubiquitous computing becomes more common, the operational and data security of computer systems can be compromised much more rapidly, resulting in significant revenue loss and a strategic setback for an enterprise. The aim of this research project is to develop a general-purpose, open-source Artificial Immune System (AIS) based Intrusion Detection System (IDS) that can recognize previously unknown malware of all types, including but not limited to file infectors, boot-sector infectors, macro viruses, trojans and other malware, and that can detect and stop/filter traffic floods launched by other compromised hosts in the network.
This Microsoft Windows based software solution will act as a first line of defence against common intrusion attacks, and ultimately will become an integral part of professional security systems. The choice of the Microsoft Windows operating system is due to the fact that most security threats are aimed at Windows because of its large market share and its closed-source nature; moreover, many potential security vulnerabilities and bugs have escaped the design team's attention. The key benefits of this project are:
This software will provide reliable and scalable detection of all abnormal TCP-SYN, UDP and Ping flood activities based on the normal-self concept of the AIS.
This product will guarantee prevention of any malware infiltration through implementation of port security.
It will also detect new viruses on-demand and on-access without the need for updates.
This general purpose intrusion detection system will be the first of its kind in the open-source community; hence it will set the trend for further initiatives in the field of computer security.
The resultant software of this project will help to increase the confidence of the national researchers working in the area of computer security, and will help them to get into an otherwise very closed and exclusive community of computer security experts.
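As a toy illustration of the "normal-self" concept mentioned above (not code from the project; the features, sizes and matching radius are invented), a negative-selection scheme discards candidate detectors that match normal traffic and uses the survivors to flag anomalies:

```python
import numpy as np

rng = np.random.default_rng(1)
RADIUS = 0.15   # invented matching threshold

# "Self" set: feature vectors of normal traffic (synthetic here)
self_set = rng.uniform(0.3, 0.5, size=(200, 2))

# Negative selection: keep only detectors far from every self sample
detectors = []
while len(detectors) < 50:
    d = rng.uniform(0, 1, size=2)
    if np.min(np.linalg.norm(self_set - d, axis=1)) > RADIUS:
        detectors.append(d)
detectors = np.array(detectors)

def is_anomalous(x):
    """Flag a sample if any surviving detector matches it."""
    return bool(np.any(np.linalg.norm(detectors - x, axis=1) <= RADIUS))

print(is_anomalous(np.array([0.4, 0.4])))   # normal-like -> likely False
print(is_anomalous(np.array([0.9, 0.9])))   # far from self -> likely True
```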
Question
11 answers
Algorithms to deal with unbalanced clusters for classification?
Relevant answer
Answer
The following link has a brief overview of different techniques for dealing with unbalanced training data:
Question
8 answers
Data mining of big data using tools like SVMs, clustering, trees, MCMC, and NNs has emerged in place of conventional statistics in order to handle large size and complexity. These are well-suited to *spot patterns in large, static data sets*.
But the inevitable demand for *rapid analysis of streaming data* reveals the limitations of data mining methods, especially where data streams are unstable, chaotic, non-stationary or have concept drift. And that covers many important areas (!), like human behavior (economic, social, commercial, etc.). My focus is mostly computational finance.
Data Mining methods lag in adjusting to changes in the observed behavior. A key problem is the uncertainty whether estimates calculated from past data still are good enough to apply in the often changing future. How frequently and when does the model for prediction or classification need to be updated? How responsive to incoming data should the estimating procedure be to achieve the needed reliability, without getting whipsawed or lagging amidst shifts?
Having a machine learning tool that self-corrects to minimize prediction and classification errors is the challenge. A forgetting factor, as in the dynaTree R package, could be effective if it adjusted automatically. A gain factor, as in Kalman Filtering, can be set pretty well for steady systems (physics), but is sluggish in chaotic settings. GARCH and its relatives provide particularly clumsy structures. Many other approaches exist like Dynamic Model Averaging, adaptive ensembles. Some models must work well, like real-time demand estimators within Google’s Borg, which load-balances its servers.
Have you had success in this area? Can you cite methods, sources, examples or software? I would be glad to discuss this more if you have interest. Thanks.
Tom
Relevant answer
Answer
A good real-time classification method is presented in this thesis:
"Trend or No Trend: A Novel Nonparametric Method for Classifying Time Series".
Question
Here is a link http://www.eann.org.uk/eann2012 for the Engineering Applications of Neural Networks Conference that will take place in London, between 20-23 September 2012 (EANN 2012).
Question
4 answers
Considering the embodiment process of an organism, in which autopoiesis plays its role across all the body's cells, for Varela and Maturana "a cognitive system is a system whose organization defines a domain of interactions in which it can act with relevance to the maintenance of itself." (This domain of interactions seems to be a sufficient condition for a system to be considered a cognitive system, so "neurality" seems not to be necessary...)
Relevant answer
Answer
Luca,
Thank you for clarification. This is of course a very different topic.
Consciousness seems specific to restricted cells and networks. Some people believe in panpsychism (everything is conscious), but there are no scientific reasons for this. Scientifically, we know a lot about consciousness and cognition. My intuition is moved by what actually happens: there should be something observed or experienced that we would like to understand. Maybe if you ask your question differently I could engage.
Question
2 answers
I'm looking at graphical models for solving some 2D vision problems (specifically, energy-minimization models such as Boltzmann machines) and I've been thinking about the problem of translation invariance. Obviously, there are many machine vision algorithms that have translation invariance, and there are many graphical models for vision, but I have not yet seen any purely graphical models that are also translation invariant (without clever 'tricks' such as translating the image over and over and applying the whole network again each time). Any pointers or links would be appreciated, as I really don't know where to look at this point. By 'graphical models' I preferably mean neural networks or Bayesian networks, though other types of networks would also be interesting to study.
Relevant answer
Answer
Typically invariance is a quality of the feature space rather than the model itself. For example, if you stored the contour of an image with respect to the top left corner of an image, it would not be translation invariant. However, if you found the center of mass of the contour and stored the values relative to that, it would be translation invariant. If you calculated the moments of the contour and stored the contour values at angles relative to some central moment, then you could make it rotation invariant. And if you scaled your contour to some fixed size, you could make it scale invariant.
So if you're looking to solve a particular vision problem, look for invariance in the features (or in the case of shape recognition, the moments).
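A minimal sketch of the answer's suggestion (the contour here is synthetic): subtracting the centroid makes a contour representation translation invariant, and dividing by its scale makes it scale invariant.

```python
import numpy as np

contour = np.array([[10, 10], [14, 10], [14, 13], [10, 13]], dtype=float)

centroid = contour.mean(axis=0)
centered = contour - centroid                 # translation invariant
scale = np.sqrt((centered ** 2).sum(axis=1)).mean()
normalized = centered / scale                 # also scale invariant

# Translating the shape leaves the centered representation unchanged
shifted = contour + np.array([100, -42])
centered2 = shifted - shifted.mean(axis=0)
print(np.allclose(centered, centered2))       # True
```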
Question
5 answers
Even assuming the strong relationship between form and context, would it be possible for a computer system to take into account that context is subjective?
Relevant answer
Answer
It may reproduce aesthetic conventions or recognise them, as long as they have become cultural conventions. But then, how about the aesthetics of contemporary art, which aim to challenge all pre-determined conventions?
Question
16 answers
What is the latest technology in advanced machine learning (AML), besides Natural Language Processing and Neural Networks?
Relevant answer
Answer
Subscribe to the Journal of Machine Learning Research (jmlr.csail.mit.edu); they send you an e-copy of the quarterly publication, and you will get a good idea of current research by reading the abstracts. Some of the broad topics are integrated machine learning and machine learning in affective modeling, among others.
Question
Can anyone suggest some basic papers where I can find neural networks and rough set theory combined?
Question
5 answers
Can I get more details on Hidden Markov Models and their equations for recognizing images?
Relevant answer
Answer
We have a free, open-source program for inverse Markov modeling; it uses ion-channel terminology, but the variable names can be changed. The math is the same.
The program has many tools, has a graphical interface, and runs on Windows, Mac and Linux. Go to the website and take a look: www.qub.buffalo.edu
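On the equations side of the question, here is a minimal sketch of the HMM forward algorithm, which computes the likelihood P(observations | model) that HMM-based recognizers rank classes by; the matrices are toy values, not tuned for images.

```python
import numpy as np

A = np.array([[0.7, 0.3],      # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],      # emission probabilities: P(obs | state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])      # initial state distribution
obs = [0, 1, 1]                # observation sequence (symbol indices)

# Forward recursion: alpha_t(j) = sum_i alpha_{t-1}(i) A[i,j] * B[j, obs_t]
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]
print(alpha.sum())             # P(obs | model)
```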
Question
17 answers
I need free texture analysis software or MATLAB code. Can you recommend one?
Relevant answer
Answer
Why not try DREAM.3D?
I guess it also works for 2D EBSD if you assume a thickness of 1 pixel.
We also develop some freeware 3D EBSD software, but it is not quite there yet.
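On the free-software side, a possible alternative to MATLAB is scikit-image's grey-level co-occurrence matrix (GLCM) features; this sketch assumes the scikit-image package is installed (recent versions spell the functions graycomatrix/graycoprops, older ones greycomatrix/greycoprops).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
image = rng.integers(0, 8, size=(64, 64)).astype(np.uint8)   # toy 8-level image

# Co-occurrence matrices at distance 1, horizontal and vertical
glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                    levels=8, symmetric=True, normed=True)
print(graycoprops(glcm, "contrast"))
print(graycoprops(glcm, "homogeneity"))
```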
Question
8 answers
I am working on an application in which I am using similar patterns of vectors.
Can anybody tell me how to use SOM-plotted data in MATLAB to find the similarity? It only shows the relative input vectors in every neuron.
Relevant answer
Answer
After timing with tic and toc, you can use the mean and standard deviation: run the system several times and compute the mean and standard deviation of the results. Another way is Student's t-test, and a newer option also exists: the ROC curve.
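The question is about MATLAB, but as an illustration of one way to quantify similarity from a trained SOM (a sketch using the third-party minisom package as a stand-in for MATLAB's SOM tools): samples whose best-matching units are close on the map grid can be treated as similar.

```python
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 4))

som = MiniSom(6, 6, input_len=4, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(data, 500)

def grid_distance(a, b):
    """Distance between two samples' best-matching units on the SOM grid."""
    wa, wb = som.winner(a), som.winner(b)
    return np.hypot(wa[0] - wb[0], wa[1] - wb[1])

print(grid_distance(data[0], data[1]))   # small value -> similar patterns
```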
Question
5 answers
Please suggest, with an example, a data mining technique. How can I identify attacks in a wireless sensor network from a dataset?
Relevant answer
Answer
Hello,
I'm working on these attacks right now, and have been for the last few weeks.
I've seen a few articles with classifications.
Check this one
If it's not what you're looking for let me know. I can check further.
Bye
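As a minimal sketch of the data-mining route the question asks about (the feature columns and labels below are synthetic placeholders; real work would use a labelled WSN/IDS dataset): train a classifier on labelled traffic records and flag attacks on held-out data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                 # e.g. packet rate, RSSI, hop count...
y = (X[:, 0] + X[:, 1] > 1.5).astype(int)     # synthetic attack label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```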
Question
16 answers
I need to classify raw data and hope to use the particle swarm optimization (PSO) method to classify part of the data.
I then hope to predict the remaining data.
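As a toy illustration of the idea (synthetic data and an invented fitness function, not a full pipeline): PSO particles can search for a linear decision boundary that minimises the training error, and the best boundary found is then used to predict the remaining data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)          # synthetic labels

def error(w):
    """Fitness: misclassification rate of the linear boundary w."""
    pred = (X @ w[:2] + w[2] > 0).astype(int)
    return np.mean(pred != y)

n, dim = 20, 3
pos = rng.normal(size=(n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_err = np.array([error(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Standard PSO velocity update: inertia + cognitive + social terms
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    errs = np.array([error(p) for p in pos])
    improved = errs < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], errs[improved]
    gbest = pbest[pbest_err.argmin()].copy()

print("training error:", error(gbest))
```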
Question
8 answers
When one is handling a sequence of equations, what about one's semantic perspective? How is one really "aware" of those symbols (and so on)?
Relevant answer
Answer
In my opinion, the boundary between "mere symbol shuffling" and "actually understanding" is not as crisp as some people (e.g. Searle) seem to think. It is a matter of degree, it is a spectrum. Like when a computer program is proving a theorem: I would argue that the program as a whole does to some extent "understand" what it is doing; it is doing reasoning at a semantic level (to some extent). But of course this program is executed on hardware that is only capable of mere symbol shuffling. The same is in my opinion probably true for humans (e.g. human mathematicians): a human brain is probably at the lowest level only doing something like "mere symbol shuffling" (the atoms and molecules in your brain do nothing more than obey the laws of physics), but still, as a whole, actual intelligence and understanding is emerging from these low-level processes.