Science topic

Cognitive Science and Artificial Thinking - Science topic

Explore the latest questions and answers in Cognitive Science and Artificial Thinking, and find Cognitive Science and Artificial Thinking experts.
Questions related to Cognitive Science and Artificial Thinking
  • asked a question related to Cognitive Science and Artificial Thinking
Question
288 answers
"From science to law, from medicine to military questions, artificial intelligence is shaking up all our fields of expertise. All?? No?! In philosophy, AI is useless." The Artificial Mind, by Raphaël Enthoven, Humensis, 2024.
Relevant answer
Answer
Arturo Geigel "I am one open to this dialogue because I recognize the need for philosophical contributions". Thank you for the momentum you bring to this Thread. There indeed is a need for Philosophy as the means humans have to understand fundamental truths about themselves, the world in which they live, and their relationships to the world and each other. In the world of today, AI appears as a powerful transformation in how things and ideas are designed and implemented in all areas of knowledge, technology, and way of life and thinking. In this regard, many questions should be asked: What role should Philosophy play in accompanying the predictable and almost inevitable advances and thrusts of AI? Can AI be involved in philosophical thinking? is AI capable of Philosophying? And in any case, should we preserve philosophical thought and place it, like a safeguard, above technical advances?
  • asked a question related to Cognitive Science and Artificial Thinking
Question
3057 answers
WHAT IS THE MYSTERIOUS STUFF OF INFORMATION?
Raphael Neelamkavil, Ph.D., Dr. phil.
Here I give a short description of a forthcoming book, titled: Cosmic Causality Code and Artificial Intelligence: Analytic Philosophy of Physics, Mind, and Virtual Worlds.
§1. Our Search: What Is the Mysterious Stuff of Information?: The most direct interpretations of the concept of information in both informatics and in the philosophy of informatics are, generally, either (1) that “information is nothing more than matter and energy themselves”, or (2) that “information is something mysterious, undefinable, and unidentifiable, but surprisingly it is different from matter and energy themselves”.
But if information is rightly not matter and energy, and if it is not anything mysteriously vacuous (and hence not existent like matter-energy, or pure matter, or pure energy), then how are we to explain ‘information’ in an all-inclusive and satisfying manner? Including only humanly attained information does not suffice for this purpose; nor can we limit ourselves to information outside of our brain-and-language context. Both types must necessarily be included in the definition and explanation.
§2. Our Search: What, in Fact, Can Exist?: First of all, what exist physically are matter and energy (I mean carrier wavicles of energy) themselves. In that case, information is not observable or quasi-observable like the things we see, or like some of the “unobservables” which later get proved to be quasi-observable. This is clearly because there are no separate energy wavicles that may be termed information particles / wavicles, say, “informatons”. I am subjectively sure that the time is not distant when a new mystery-mongering theory of informatons will appear.
§3. Our Search: A Tentative General Definition: Secondly, since the above is the case with humanity at various apparently mysterious theoretical occasions, it is important to de-mystify information and find out what information is. ‘Information’ is a term that represents a causal group-effect of some matter-energy conglomerations or pure energy conglomerations, all of which (for each unit of information, or for the units of information in each case) are in some way in relatively closely conglomerated motion, and together work toward a causal effect or effects on other matter-energy conglomerations or energy conglomerations.
§4. Our Search: In What Sense is Information Causal?: Thirdly, the causal effect being transferred is what we name a unit or units of information. Hence, in this roundabout sense, information too is causal. There may have been, and may yet appear, many who claim that information is something mysteriously different from matter-energy. Some of them intend to mystify consciousness in terms of information, or to create a sort of soul out of immaterial and mysterious information conglomerations, and then to create an information-soul-ology as well. I believe that they will eventually fail.
§5. Our Search: Examples for Mystification: According to some theologians (whose names I avoid mentioning in order to spare them embarrassment) and New Age informaticians, God is the almighty totality of information, and human, animal, and vegetative souls are finite totalities of the same. Information, for them, is able to transmit itself without the medium of existent matter, energy, or matter-energy. Thus, their purpose would be served well! But such theories seem to have disappeared after the retirement of some of these theologians, because there are not many takers for their theological stance. Had they not theologized on it, some in the scientific community would have lapped up such theories.
Hence, be sure that new, more sophisticated, and more radical ones will appear, because there will be more and more others who do not want to directly put forth a theological agenda and who, instead, would want to use the “mystery”-aspect of information as an instrument to create a cosmology or quantum cosmology in which the primary stuff of the cosmos is information and all matter and energy are just its expressions. Some concrete examples are the theories that (1) gravitation is not any effect carried by some wavicles (call them gravitons), but instead just a “vacuum effect”, and (2) gravitation is another effect of electromagnetism, different from its normal effects, etc.
§6. Why Such a Trend?: In my opinion, one reason for this trend is the false interpretation of causality by quantum physics and its manner of mystifying non-causality and statistical causality through the spatialization and reification of mathematical concepts and effects as physical, without any attempt at delimitation. There can be other reasons too.
§7. Our Attempt: All-Inclusive Definition of Information: Finally, my attempt above has been to take up a more general meaning of the notion ‘information’. For example, many speak of “units of information in informatics”, “information of types like in AI, the internet, etc., stored on the internet in various repositories like the Cloud”, “information as the background ether of the universe (strangely and miraculously!)”, “loss of all information in the black hole”, “the quantum-cosmological re-cycling of information in the many worlds that get created (like mushrooms!) without any cause and without any matter-energy supply from anywhere, but merely by a miraculous quantum-cosmological vacuum effect (!?)”, etc. We have been able to delve beyond the merely apparent in these notions.
Add to this list now also the humanly bound meanings of the notion of ‘information’ that we always know of. The human aspect of it is the conglomeration of various sorts of brain-level and language-level concatenations of universal notions (in the form of notions in the brain and nouns, verbs, etc. in language) with various other language-level and brain-level aspects which too have their origin in the brain.
In other words, these concatenations are the brain-level and language-level concatenative reflections of conglomerations of universals (which I call “ways of being of processes”) of existent physical processes (outside of us and inside us), which have their mental reflections as conceptual concatenations in brains and conceptual concatenations in language (which is always symbolic). Thus, by including this human brain-level and language-level aspect, we have a more general spectrum of the concept of information.
In view of this general sense of the term ‘information’, we need to broaden the definition of the source/s of information as something beyond the human use of the term that qualifies it as a symbolic instrument in language, and extend its source/s always to some causal conglomeration-effect that is already being carried out out there in the physical world, in a manner that is not a mere construct of human minds without any correspondence with the reality outside - here, considering also the stuff of consciousness as something physically existent. That is, the causal source-aspect of anything happening as mental constructs (CUs and DUs; see §9 below) is a matter to be considered always as real beyond the CUs, DUs, and their concatenations. This out-there aspect consists of the Extension-Change-wise effects in existent physical processes, involving always and in each case OUs and their conglomerations.
§8. (1) Final Definitions: ‘Information’ in artificial intelligence is the “denotative” (see “denotative universals” below) name for any causally conglomerative effect in machine-coded matter-energy as the transfer agent of the said effects, and such effect is transferred in the manner of Extension-Change-wise (see below: always in finitely extended existence, always every part of the existent causing finite impacts inwards and outwards) existence and process by energy wavicles and/or matter-energy via machine-coded energy paths. The denotative name is formulated by means of connotation and denotation by minds and by machines together.
Information in biological minds is the denotative name for any causally conglomerative effect in brain-type matter-energy and is transferred in the Extension-Change manner by brain-type matter-energy and/or energy wavicles. The denotative name here is formulated by means of connotation and denotation (see below) by minds and by symbolic-linguistic activities together.
Mind, in biologically coded information-based processes, is not the biological information alone or separately, but it is the very process in the brain and in the related body parts.
§9. (2) Summary: I summarize the present work now, beginning with a two-part thesis statement:
(a) Universal Causality is the relation within every physically existent process and every part of it, by reason of which each of them has an Existence in which every non-vacuously extended (in Extension) part of each of them exerts a finite impact (in Change) on a finite number of other existents that are external and/or internal to the exerting part. (b) Machine coding and biological consciousness are non-interconvertible, because the space-time virtual information in both is non-interconvertible due to the non-interconvertibility of their information supports / carriers that are Categorially in Extension-Change-wise existence, i.e., in Universal Causality.
Do artificial and biological intelligences (AI, BI) converge and attain the same nature? Roger Penrose held so initially; Ray Kurzweil criticized it. Aeons of biological causation are not codified or codifiable by computer. Nor are virtual quantum worlds and modal worlds without physical properties to be taken as existent out there. According to the demands of existence, existents must be Extended and in Change. Hence, I develop a causal metaphysics, grounding AI and BI: Extension-Change-wise active-stable existence, equivalent to Universal Causality (Parts 2, 3).
Mathematical objects (numbers, points, … structures), other pure and natural characteristics, etc. yielding natural-coding information are ontological universals (OU) (generalities of natural kinds: qualities may be used as quantities) pertaining to processes. They do not exist like physical things. Connotative universals (CU) are vague conceptual reflections of OU, and exist as forms in minds. Words and terms are their formulations in discourse / language – called denotative universals (DU), based on CU and OU.
The mathematical objects of informatic coding (binaries, ternaries) are “as-if existent” OUs in symbolic CU and DU representation. Information-carriers exist, are non-vacuous, are extended, have parts, and are in the Category of Extension. Parts of existents move, make impact on others, and are in the Category of Change. Extension-Change-wise existence is Universal Causality, and is measured in CU-DU as space-time. Other qualities of existents are derivatives, pertain to existent processes, and hence, are real, not existents.
Properties are conglomerations of OUs. For example, glass has malleability, which is a property. Properties, as far as they are in consciousness, are as CUs’ concatenations, and in language they are as DUs’ concatenations. AI’s property-attributions are information, which in themselves are virtual constructs. The existent carriers of information are left aside in their concept. Scientists and philosophers misconceive them. AI and BI information networks are virtual, do not exist outside the conglomerations of their carriers, i.e., energy wavicles that exist in connection with matter, with which they are interconvertible.
Matter-energy evolutions in AI and BI are of different classes. AI and BI are not in space-time, but in Extension-Change-level energy wavicles in physical and biological processes. Space-time does not exist; it is an absolute virtual, an epistemic and cognitive projection. Physical and biological causations are in Extension-Change, hence not interconvertible.
From the viewpoint of the purpose of creating an adequate theory of experience and information, for me the present work is a starting point to Universal-Causally investigate the primacy of mental and brain acts different from but foundational to thoughts and reasoning.
§10. (3) The Context of the Present Work: The reason why I wrote this little book deserves mention. Decades ago, Norbert Wiener said (see Chapter 1, Part 1) that information is neither matter nor energy but something else. What would have been his motive in positing information as such a mysterious mode of existence? I was surprised at this claim, because it would give rise to all kinds of sciences and philosophies of non-existent virtual stuff considered to arise from existent stuff or from nowhere!
In fact, such are what we experience in the various theories of quantum, quantum-cosmological, counterfactually possible, informatic, and other sorts of multiverses other than the probably existent multiverse that the infinite-content cosmos could be.
I searched for books and articles that deal with the stuff of information. I found hundreds of books and thousands of articles on the philosophical, ethical, manipulation-oriented informatic, mathematical, and other aspects of the problem, but none on the fundamental question of information itself - whether information exists, etc. This surprised me further, and it seemed to be a sign of scientocracy and technocracy.
I wanted to write a book that is a bit ferocious about the lack of works on the problem, given the fact that informatics is today much more wanted by all than physics, mathematics, biology, philosophy, etc., and of course the social sciences and human sciences.
For example, take the series to which the first two of the following three books belong: (1) Harry Halpin and Alexandre Monnin, eds. [2014], Philosophical Engineering: Towards a Philosophy of the Web; (2) Patrick Allo, ed., Putting Information First: Luciano Floridi and the Philosophy of Information - both from Chichester: Wiley Blackwell; and (3) John von Neumann [1966], Theory of Self-Reproducing Automata, Urbana: University of Illinois Press.
These works do not treat the fundamental question we have dealt with, and none of the other works that I have examined deals with it fundamentally - not even the works by the best of informatics philosophers like Luciano Floridi. My intention in this work has not been to make a good summary of the best works in the field and submit some new connections or improvements, but rather to offer something new.
Hence, I decided to develop a metaphysics of information and virtual worlds, which would be a fitting reply to Norbert Wiener, Saul Kripke, David Lewis, Jaakko Hintikka, and a few hundred other famous philosophers (let alone specialists in informatics, physics, cosmology, etc.), without turning the book into a thick volume full of quotes and evaluations related to the many authors on the topic.
Moreover, I have had experience of teaching and research in the philosophy of physics, analytic philosophy, phenomenology, process metaphysics, and in attempts to solve philosophical problems related to unobservables, possible worlds, multiverse, and cosmic vacuum energy that allegedly adds up to zero value and is still capable of creating an infinite number of worlds. Hence, I extended the metaphysics behind these realities that I have constructed (a new metaphysics) and developed it into the question of physically artificial and biological information, intelligence, etc.
The present work is a short metaphysical theory inherent in existents and non-existents, which will be useful not only for experts, but also for students, and well-educated and interested laypersons. What I have created in the present work is a new metaphysics of existent and non-existent objects.
Relevant answer
Answer
Malleability of the information itself
  • asked a question related to Cognitive Science and Artificial Thinking
Question
4 answers
For several years, scientists have been perfecting the technology of artificial intelligence to think like a human thinks. Is it possible?
Will it be possible to teach artificial intelligence to think and generate out-of-the-box, innovative solutions to specific problems and human-mandated tasks?
Is it possible to enrich highly advanced artificial intelligence technology into what will be called a digitised, abstract critical thinking process and into something that can be called artificial consciousness?
For several years, scientists have been refining artificial intelligence technology to think the way humans think. The coronavirus pandemic accelerated the digitalisation of remote, online communication, economic and other processes. During this period, online technology companies accelerated their growth and many companies in the sectors of manufacturing, commerce, services, tourism, catering, culture, etc. significantly increased the processes of internetisation of their business, communication with customers, supply logistics and procurement processes. During this period, many companies and enterprises have increased their investments in new ICT technologies and Industry 4.0 in order to streamline their business processes, improve business management processes, etc. The improvement of artificial intelligence technologies also accelerated during this period, including, for example, the development of ChatGPT technology. New applications of machine learning, deep learning and artificial intelligence technologies in various industries and sectors are developing rapidly. For several years, research and development work on improving artificial intelligence technology has entered a new phase involving, among other things, attempts to teach artificial intelligence to think in a model like that of the human brain. According to this plan, artificial intelligence is supposed to be able to imagine things that it has not previously known or seen, etc.
In the context of this kind of research and development work, it is fundamental to fully understand the processes that take place in the human brain within what we call thinking. A particular characteristic of human thinking processes is the ability to separate conscious thinking, awareness of one's own existence, abstract thinking, and the formulation of questions within the framework of critical thinking from the selective, multi-criteria processing of knowledge and information. In addition, research is underway to create autonomous human-like robots: androids equipped not only with artificial intelligence, but also with what can be called artificial consciousness, i.e. a digitally created human-like consciousness. Still not fully resolved is the question of whether a digitally constructed artificial consciousness, a kind of supplement to a high generation of artificial intelligence, would really mean that a humanoid cyborg, an android built to resemble a human, is aware of its own existence, or whether it merely behaves as if it were thinking, as if it were equipped with its own consciousness. Highly humanoid, autonomous androids are already being built that have 'human faces' equipped with the ability to express 'emotions' through a set of actuators installed in the robot's 'face', imitating human facial expressions and grimaces that represent various emotional states. Androids equipped with such humanoid facial expressions, combined with the ability to participate in discussions on various current issues and problems, could be perceived by the humans discussing with them as not only highly intelligent but also as aware of what they are saying, and perhaps aware of their own existence. Yet we still would not know whether, even in such a situation, this is 'just' a simulation of human emotions, human consciousness, human thinking, etc. by a machine equipped with highly advanced artificial intelligence. And when, in addition, an autonomous android equipped with an advanced generation of artificial intelligence is connected, through Internet of Things technology and cloud computing, to knowledge resources available on the Internet in real time, and is equipped with the ability to process large sets of current information in a multi-criteria, multi-faceted way on Big Data Analytics platforms, then almost limitless application possibilities open up for such highly intelligent robots.
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
Relevant answer
Answer
The more important question: Do you really want it?
  • asked a question related to Cognitive Science and Artificial Thinking
Question
8 answers
I want to extract dissimilarity information about the layouts of two images: checking text, button, and text box alignment, text overlapping, and other layout differences. How can we do that? Is there any tool, or any image processing method, that extracts such detailed information?
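One possible starting point (a minimal sketch, not a complete layout analyzer): compare two same-sized UI screenshots with the structural similarity index from scikit-image and report the regions that differ. The file names are placeholders.

```python
# Minimal sketch: locate layout differences between two same-sized screenshots.
import cv2
import numpy as np
from skimage.metrics import structural_similarity

img_a = cv2.imread("layout_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("layout_b.png", cv2.IMREAD_GRAYSCALE)

# SSIM with full=True also returns a per-pixel similarity map;
# low values mark regions where the layouts disagree.
score, diff = structural_similarity(img_a, img_b, full=True)
print(f"Global layout similarity: {score:.3f}")

# Turn the dissimilar regions into bounding boxes.
diff = (255 * (1 - np.clip(diff, 0, 1))).astype("uint8")
_, mask = cv2.threshold(diff, 64, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    print(f"Layout mismatch at x={x}, y={y}, w={w}, h={h}")
```

For element-level checks (alignment, overlap), first detecting the text and buttons (e.g. with OCR or an object detector) and then comparing their bounding boxes directly is usually more robust than pixel comparison.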
  • asked a question related to Cognitive Science and Artificial Thinking
Question
35 answers
There are two theories that are quite similar in nature, but different in substance: the theory of mind and the theory of mentaliz(s)ation - sorry, I'm allergic to American spelling... please don't kill me now :-) My understanding of them is this: both of these concepts, mentalization and the theory of mind, describe processes that are metacognitive in nature. Mentalization mainly concerns the reflection of affective or emotional mental states. In contrast, however, the theory of mind focuses on things epistemic in nature, such as beliefs, intentions and persuasions. My idea is that these two theories by themselves are incomplete, but combining elements of both gives us a clearer understanding. Cognition and affect can't, in my view, be separated; they are both part of us as human beings and also a part of other animals. What are your thoughts? Am I wrong or right? I can stand criticism, so bring it on...
Relevant answer
Answer
Dear Henrik G.S. Arvidsson I don't agree with you that mentalization and the theory of mind are incomplete; I would rather say they are vague and beat about the bush. However, I agree with you that both facets, physical and non-physical, go side by side like the two banks of a river: two different identities, yet essential and part of one and the same entity, the river. I think that to understand the mind we have to answer the following questions:
1. What are the numerous non-physical entities, dimensions, constructs and elements?
2. What is the hierarchical / interrelationship model?
3. How do these non-physical entities function individually?
4. How do multiple simultaneous occurrences and their effects arise?
5. What is the transformative phenomenon of mind, and how does it occur?
6. What could a perceived Mind Model be?
  • asked a question related to Cognitive Science and Artificial Thinking
Question
23 answers
Biomechanics faces grand challenges due to the intricacy of living things. We need a multidisciplinary approach (mechanical, chemical, electrical, and thermal) to unravel these intricacies. We need to integrate observations from multiple length scales - from the organ level to the tissue level, cell level, molecular level, atomic level, and then to the energy level. On top of these intricacies, their dynamism and the complexity of their responses make it very difficult to correlate empirical data with theoretical models. Among these challenges, which is the most important? If we solve the most important challenge, we could solve most of the other challenges easily.
Relevant answer
Answer
Biomechanics is both Art & Science because it does not follow Newton's Three Primary Laws predictably. I can stop rolling down a hill biomechanically at will. A bird can fly away when dropped from a tree as opposed to an apple or a piece of gold of the same mass.
The main problem that I have encountered in researching and practicing biomechanics clinically is that its researchers and clinicians are deterministically trying to study it quantitatively (Science) when n = 1 and there are too many variables to do so. The best that can currently be done is to study it stochastically (Art) or through some hybrid of both (Art & Science).
When studying mankind biomechanically we need to seek disruptive biomechanical theories with new terminology and methods of research, diagnosis and treatment. Ones that consider the myofascial organ, the endocannabinoid system and the true actions and purpose of the CNS and neural strategy.
We need to abandon subtalar joint neutral, pronation and "normal" for words that lead us to a better understanding and control of human stance and movement efficiently and without injury or degeneration.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
5 answers
There have been many emotion space models, be it the circumplex model or the PANA model or the Plutchik's wheel. But all of them are used to represent human emotions in an emotion space. The definitions for arousal and valence are easy to interpret as human beings as we have some understanding of pleasant/ unpleasant or intense/non intense stimuli. However, how can we define the same for a robot? What stimuli should be considered as Arousing and what should be Pleasant for a robot? I am interested in reading the responses from researchers in the field and having a discussion in this area. Any reference to relevant literature would also be highly appreciated.
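One way to make the question concrete (a purely illustrative sketch, not an established model): ground valence in signals that track the robot's own goals and homeostasis, and arousal in the intensity of incoming stimulation. Every mapping below is an assumption.

```python
# Illustrative sketch only: one hypothetical way to ground valence/arousal
# in a robot's own signals.
from dataclasses import dataclass

@dataclass
class RobotAffect:
    valence: float  # -1 (unpleasant) .. +1 (pleasant)
    arousal: float  #  0 (calm)       ..  1 (intense)

def appraise(battery: float, task_progress: float, stimulus_rate: float) -> RobotAffect:
    # Valence: states that further the robot's goals/homeostasis count as pleasant.
    valence = (2 * battery - 1 + 2 * task_progress - 1) / 2
    # Arousal: intensity of incoming stimulation (events per second, capped at 10).
    arousal = min(stimulus_rate / 10.0, 1.0)
    return RobotAffect(max(-1.0, min(1.0, valence)), arousal)

print(appraise(battery=0.9, task_progress=0.3, stimulus_rate=4.0))
```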
  • asked a question related to Cognitive Science and Artificial Thinking
Question
8 answers
Dear fellows, 
I am looking for some real-world examples where a cognitive assistant system is used. The system should rely on a user model that follows theoretical assumptions from either Psychology or Cognitive Science, ideally backed by some cognitive architecture.
I have done some literature search but did not come up with actual real-world systems. 
It would be great if someone could help. 
best regards, 
Patrick
Relevant answer
Answer
Dear Patrick,
We have developed Cognitive Human-Machine Systems (CHMS) for advanced aerospace and defence applications, including Avionics/Mission Systems, Air Traffic Management and One-to-Many UAS operations.
Our systems implement adaptive Human-Machine Interfaces and Interactions (HMI2) relying on real-time measurement of neuro-physiological parameters (EEG/fNIR, eye tracking, heart rate, respiration, perspiration, voice patterns and facial expression), processed by a neuro-fuzzy inference engine. This approach drives adaptation in CHMS both in terms of HMI2 and automation levels, creating a pathway to trusted autonomous operations.
Various research projects have been undertaken by my research group (over the past 5 years) in collaboration with Thales Australia, Northrop Grumman US and the Australian DoD (Defence Science and Technology Group). You may wish to check my RG repository to download our publications.
Please let me know if you require any additional information.
Kind regards,
Rob
  • asked a question related to Cognitive Science and Artificial Thinking
Question
68 answers
Mathematics is fundamental in all sciences, and mainly in Physics, to which it has made many contributions. It might seem that the capacity to be applied is the motor of its creation. But this is not what good mathematicians such as Henri Poincaré or Hardy have said. What is beauty in mathematics, in theoretical physics, or in other related subjects?
For me there are very beautiful mathematical results which sound difficult to apply, or even contrary to our reality, yet which are full of "beauty" or at least "surprise".
1. The sum of the natural numbers equals a negative number, -1/12 (see the note below).
2. Polynomials of degree five or higher have no general analytical expression for their roots.
3. The Banach-Tarski theorem.
4. There cannot exist more than five regular polyhedra in three dimensions.
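A note on item 1, since the claim as usually stated is misleading: the value -1/12 attaches to the divergent series only through the analytic continuation of the Riemann zeta function, not as a convergent sum:

$$\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^{s}}\quad(\operatorname{Re} s>1),\qquad \zeta(-1)=-\frac{1}{12}\ \text{(by analytic continuation)}.$$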
Relevant answer
Answer
The beauty of theoretical physics is that maths is its language. The beauty of mathematics is in its remarkable success at describing the natural world.
It is therefore not surprising that most research mathematicians and theoretical physicists pepper their descriptions of important research work with terms like “unexpected,” “elegance,” “simplicity” and “beauty.”
Let me make it easy for you:
Can you imagine a bride without a wedding dress?
  • asked a question related to Cognitive Science and Artificial Thinking
Question
38 answers
"The AI Takeover Is Coming" this is what is the news these days. Is it really a trend setter for future years.
What is the impact over manual work due to this? just needed the audience thoughts over this hence started a conversation.
Your thoughts and expertise are welcome!
Thanks in advance 
Relevant answer
Answer
The answer I would give is yes, AI will be adopted in the future. It's an easy answer, because AI means different things to different people.
Maybe most people can agree that AI has a self-learning component. This aspect is necessary for any computer program to be able to accomplish tasks which have not explicitly been predicted, and appropriate algorithms developed ahead of time, by a programmer. If nothing else, one can imagine a control system that tests operational modes to determine safe operating limits. Such as, allow fuel flow to increase until temperature is no longer controllable, then set the limit below that point. Autonomous driving can certainly benefit from such learning, so the vehicle becomes safer with experience. Just like human drivers do, only better, because such algorithms wouldn't be encumbered with emotions, anxieties, distractions, fatigue, panic, and so on.
We already have systems available to the public, that take on some of these characteristics. For instance, in cars, modern engine controls and stability controls. These systems are always testing the limits, always learning, and reacting to conditions right now multiple times faster than humans can. Perhaps the familiarity we have with some of these modern controls makes us dismiss them. But hey. Imagine what someone would have thought just 50 years ago, about cars that can save themselves from skidding out of control, or can stop faster than that panicked human standing on the brakes, or can parallel park all  by themselves, or can constantly be tweaking the spark advance, to keep the engine always on the verge of pinging? All of these tasks accomplished not in some totally pre-programmed way, but by taking existing conditions into account, in real time.
Although some of what passes for AI is not much more than rule-based programming: big, nested, logical if-statements that a user would think behave like AI. Then again, isn't that a lot of what human intelligence is? We build a database of effects and their causes, and we act accordingly.
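The fuel-flow example above can be made concrete. Here is a toy sketch of that limit-learning loop; the plant model and all numbers are invented for illustration.

```python
# Toy sketch of the limit-learning idea: probe upward while the next step
# still looks safe, then set the operating limit below the last safe point.
def simulated_temperature(fuel_flow: float) -> float:
    # Hypothetical plant: temperature climbs sharply past fuel_flow ~ 7.0.
    return 300 + 20 * fuel_flow + (0.0 if fuel_flow < 7.0 else 80 * (fuel_flow - 7.0))

def learn_fuel_limit(t_max: float = 480.0, step: float = 0.5, margin: float = 1.0) -> float:
    flow = 1.0
    while simulated_temperature(flow + step) < t_max:
        flow += step          # keep probing while the next step stays safe
    return flow - margin      # back off below the last safe point

print(f"Learned fuel-flow limit: {learn_fuel_limit():.1f}")
```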
  • asked a question related to Cognitive Science and Artificial Thinking
Question
9 answers
If I had to identify the most generic possible steps of text analytics, what would be the most commonly used steps for any text analysis model?
Any help and your expert guidance/ suggestions are welcome.
Thanks in advance
Relevant answer
There is no strict "rule", but I can provide a simple example of a framework, considering the text classification task:
STEP 1-Pre-Processing:
Activities that might be performed in this step:
(i) performing a preliminary descriptive statistics study of your collection of documents (e.g.: determine the frequency of each word in the collection, determine the strongest correlations among words, etc.).  
(ii) According to the results of your study you may apply a set of techniques to reduce the problem dimensionality  (stop words, stemming, feature selection, etc.).
STEP 2- Dataset Modeling
- Deciding how to construct your training dataset, i.e., how to transform the collection of documents into a dataset (example: bag of words, n-grams, etc.);
STEP 3- Analysis
- Construction of your text analysis model using one or more algorithms. In the case of classification you have a number of options: k-NN, SVM, Naive Bayes, Neural Networks, etc. 
As I said before, this is just a simplified example.
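To make the sketch concrete, the three steps above fit in a few lines of scikit-learn; the corpus and labels below are placeholders.

```python
# Minimal sketch of the three steps above with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["cheap meds online", "meeting agenda attached", "win money now", "project status report"]
labels = ["spam", "ham", "spam", "ham"]

pipeline = make_pipeline(
    # Steps 1-2: stop-word removal and bag-of-words/n-gram dataset modeling.
    TfidfVectorizer(stop_words="english", ngram_range=(1, 2)),
    # Step 3: the analysis model itself (Naive Bayes, one of the options named above).
    MultinomialNB(),
)
pipeline.fit(docs, labels)
print(pipeline.predict(["free money meds"]))  # expected: ['spam']
```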
  • asked a question related to Cognitive Science and Artificial Thinking
Question
14 answers
We can imagine devices as agents; maybe then it is better to coin the question that way.
Relevant answer
Answer
I am aware of several lines of research which could be interesting for you.
1. Dr. Dörner's PSI theory - a digitally embodied system which is meant to explain various cognitive phenomena. From my perspective it is a great model for understanding how cognition relates to self-regulation and how chunking can be used to generate ontologies from episodic memory. It is a fantastic model with interesting answers to practical questions. However, the system failed, for psychologists, to accurately replicate human performance, but that should not be a problem for engineers, who are dealing with different sorts of challenges. The model has two further problems: it is mainly described in German (greatly limiting its popularity, but you could look up literature under the term micro-PSI), and it may be overkill for technical applications where complete agent autonomy is not the goal.
2. You could look into Luc Steels' work on embodied agents. He used real robots which could learn about their environment and which could establish something like a language with humans and robots in order to coordinate. I am unsure whether he kept up his work and what its status is today. Many researchers try architectures based on neuro-plasticity and resonance, as this seems to promise solving "cognition" and "action" at once, but neural networks have difficulty detaching for simulation. This is often fixed with additional "deliberate" layers used for simulation, and simulation is necessary for making decisions in complex situations with a dominating number of bad options. I have also seen research with dedicated environment simulators and "dreaming" robots (which learn to solve problems while they are offline). The most difficult problems for robots today are about interfacing with simple objects such as coffee machines, etc., because these occur in a great variety of looks and mechanics.
3. You could be interested in Numenta's HTMs. This is not a robot brain architecture but more like the essence of it which fits well in a machine learning framework. It is extremely well suited for detecting spatio-temporal patterns - in Numenta's speak a new object would be a new spatio-temporal hyperplane. Theory and a software framework are available from the homepage of Numenta. However, if you want to also design interactions and optimization of interactions then HTM will not be fully sufficient to get it done.
4. Then there is a whole collection of what I would call "classic" architectures like SOAR, CLARION, OpenCog, etc. which are basically blackboard-architectures describing how to organize blocks of function in order to make systems cognitive. It will still depend on your talent to write those blocks and your talent to define an intermediate form of "language" for those blocks to interact in order to arrive at a useful solution. I recently found literature with a reduced set of requirements that should be easily found if you google for "cognitive radio". Cognitive radio focuses more on basic communications and picks out only certain features from classic architectures.
I myself try to follow a holistic approach like in PSI theory but to arrive at something fundamentally more suitable to engineering. The idea that I am following is based on the idea that it is close to impossible to define "standard language" in classic cognitive architectures and that maybe this is not necessary to do. Instead of following the classic architecture's "data bus" approach I deliberately develop a system which is capable to build adapters to functional building blocks acquired from sensori-motor percepts which are chunked into ontologies like PSI did. This demands from the system that it is paradigm-free (not algebraic, not probabilistic, not logical, not geometric, etc.). The system is founded on very fundamental operator-space estimation. Developed operator-space-families will (hopefully) act like more sophisticated theoretical paradigms but they remain fundamentally "interoperable" between them - a major feature of humans that I have not seen replicated in machines.
Summa summarum, I am not aware of any work that is able to completely enable unconstrained learning of object interaction.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
2 answers
To
The scientists,
How can one get rid of global tracking, i.e. the continuous signal processing to the brain, photo display systems, and the negative use of neural networking to gain money by unfair means?
Kindly write a few lines if you have experience in this field.
Relevant answer
Answer
Dear Anjana,
if this is about your individual needs: you already started to get rid of these influences.
You cannot stop them other than by cutting the power lines (which will leave us in the dark). Two tricks work:
1. ignoring (seems difficult, but you get used to it)
2. developing a negative attitude towards attempts to influence you helps a lot. This may not work against subliminal messages, but all other attempts are well covered. And if you are tempted to follow such an attempt anyway: take your computer, research a bit, and decide then (maybe to detest the product/service/whatever is advertised).
  • asked a question related to Cognitive Science and Artificial Thinking
Question
10 answers
I am aware that you can use CBT to actively change your thoughts about memories.  However, I am interested in whether there are therapies to actively get rid of bad memories.  For example, when a person becomes stressed about a current activity, they may dream about previous bad experiences.  Is there a way to actively get rid of these memories so they no longer affect the person?
  • asked a question related to Cognitive Science and Artificial Thinking
Question
6 answers
Well, I am making an expert system using hologram technology. In it there is a virtual image of a person that gives a recorded lecture, and behind it there is a database holding a large number of answers to questions. When someone asks a question, the system picks the best solution, so that it readily gives the best, optimized answer to the question asked. And if a question is asked that is not in the database, it is directed toward Google through the Internet to search for the best answers.
Relevant answer
Answer
I think you will find some answers to your question here. It is about the possibility of machines to learn from reading.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
41 answers
How could we describe the act of "thinking" with mathematical tools? Which paradigm is best suited for? What does "thought" mathematically mean? Is there any alternative to the procedural (linear) conception of neural calculus? 
Relevant answer
Answer
It is very unlikely that thinking can be completely represented mathematically, for otherwise computers would replace humans at full capacity. Human thinking is not completely mathematical; imagination is one of our distinctive cognitive faculties, and it can by no means be described mathematically. That is why computers, all of whose information is coded mathematically, will never replace humans, however knowledgeable and logically well equipped they are.
However, mathematics enables us to augment our thinking capacity, just as technology augments how we live and do things efficiently.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
12 answers
The background subtraction is achieved by a running average method which continuously updates the background model. Hence, if the hand is still for long enough, it is considered part of the background and the gesture is not detected.
Relevant answer
Answer
Hello,
It would be good if you could extract motion features to represent hand gestures. Motion features can be used to represent dynamic gestures.
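For reference, the running-average scheme from the question looks roughly like this OpenCV sketch (a webcam is assumed). A smaller update rate only slows, but does not prevent, a still hand being absorbed into the background, which is why motion features help.

```python
# Sketch of the running-average background model discussed in the question.
import cv2

cap = cv2.VideoCapture(0)
background = None
ALPHA = 0.01   # lower = slower absorption of still objects into the background

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if background is None:
        background = gray.astype("float")
    cv2.accumulateWeighted(gray, background, ALPHA)   # running-average update
    delta = cv2.absdiff(gray, cv2.convertScaleAbs(background))
    mask = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```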
  • asked a question related to Cognitive Science and Artificial Thinking
Question
5 answers
I've gone through some scenarios, like hospital data process mining and restaurant process mining, but want to find a scenario that is not only new but whose log data is also accessible.
Relevant answer
Answer
Hi Ayesha, I think you should first decide which kind of industry you are interested in. One of the main challenges in process mining is getting access to the event log data. You can use publicly available logs as suggested by Marta. But it depends on what you want to do with that data and what kind of research question you intend to answer. Getting access to real event log data usually requires the cooperation of a partner from industry, and setting up this cooperation is usually hard work. I guess that you will probably not be able to get easy access to such data without a partner from practice. From my perspective, searching for publicly available event log data is not a good approach to identifying a new application scenario for process mining. I would start with the identification of a research problem, maybe in a specific industry, and then look for a partner from this industry that is willing to provide the necessary data.
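If you do start from a publicly available log (e.g. one of the BPI Challenge collections), a first exploratory pass could look like this sketch with the pm4py library; the file name is a placeholder for a downloaded log.

```python
# Sketch of a first pass over a public event log with pm4py.
import pm4py

log = pm4py.read_xes("BPI_Challenge.xes")
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)
pm4py.view_petri_net(net, initial_marking, final_marking)
```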
  • asked a question related to Cognitive Science and Artificial Thinking
Question
47 answers
Hello everyone, I would like to assess the concept of "free will" for humans by experiments similar to "ants in the box" for insects, observing their behaviour in artificially created situations. I am already working on assigning this concept to electronic circuitry via two modes: 1. requirement mode and 2. free-will mode. Can anybody suggest simple experiments to be carried out by humans?
Relevant answer
Answer
One other option is that free will is not a testable phenomenon in itself but instead a premise in some sort of "intentional stance" that scientists assume by preference so they need not invoke everything since the Big Bang in deterministic accounts.
These empirical results may not so much shake free will strongly as point to the relatively narrow preconceptions we might have concerning the temporal and spatial scales of a cognitive event such as a "will" to do X or Y.
Illusions are another preconceptual holdover of a classical view of reality as a singular, available, but distant standard towards which we poor fools are always struggling.
I think what needs to be shaken strongly is our perspective on how to approach what have been loose intuitions deeply entrenched by history.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
9 answers
I have the intuition that these two types of methodologies are related but I could not find any references nor any clear explanation of this relationship besides the fact that they are 2 types of modern, novel and evolved artificial neural networks.
Relevant answer
Reservoir computing generally refers to some kinds of recurrent neural networks where only the parameters of the final, non-recurrent output layer (known as the readout layer) are trained, while all the other parameters are randomly initialized subject to some condition that essentially prevents chaotic behavior and then they are left untrained.
There is also a non-recurrent analogue of reservoir computing, which goes under various names including "extreme learning machines", and consists of plain feed-forward neural networks where only the readout layer is trained. All these methods can be considered to belong to the larger class of "random projection" techniques.
You can "unfold" a recurrent neural network into a feed-forward, generally deep, neural network where the internal layers are time-shifted replicas of each others. This is the intuition behind the backpropagation-through-time training algorithm.
In fact, if you train a deep neural network with vanilla backpropagation, or a recurrent neural network with vanilla backpropagation-through-time, you often observe that the parameters in the hidden/recurrent layers don't change much from the random values they got at initialization, due to an issue known as the "vanishing gradient problem" (there is also an "exploding gradient problem" that can cause chaotic behavior and numerical instability in some cases).
This is where reservoir computing and deep learning part ways:
"Extreme"/Reservoir computing argues that since backpropagation/backpropagation-through-time is computationally very expensive but typically doesn't affect much the internal layers and it can run into chaotic behavior and numerical instability, we can often avoid it altogether and only train the readout layer for a small fraction of the computational cost (since it is a generalized linear classification/regression problem), while avoiding any instability by enforcing a simple constraint on the random parameters of the internal layers. This works very well for some problems.
Deep learning, on the other hand, argues that there are very hard problems that  really do benefit from the training of the internal layers, and develops training algorithms, such as staged autoencoder pre-training, designed to overcome the limitations of vanilla backpropagation.
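To make the recurrent case concrete, here is a minimal echo state network sketch in plain numpy, assuming a toy next-value prediction task: the reservoir is random and fixed, rescaled to a spectral radius below 1 (the stability condition mentioned above), and only the linear readout is trained.

```python
# Minimal echo state network sketch: fixed random reservoir,
# spectral radius < 1, ridge-trained linear readout.
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 500                                # reservoir size, sequence length
u = np.sin(np.linspace(0, 20 * np.pi, T + 1))  # toy input signal
target = u[1:]                                 # task: predict the next value

W_in = rng.uniform(-0.5, 0.5, (N, 1))          # input weights, never trained
W = rng.uniform(-0.5, 0.5, (N, N))             # recurrent weights, never trained
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale to spectral radius 0.9

x = np.zeros(N)                                # run the reservoir, collect states
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W_in[:, 0] * u[t] + W @ x)
    states[t] = x

ridge = 1e-6                                   # train only the readout (closed form)
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ target)
print("train MSE:", np.mean((states @ W_out - target) ** 2))
```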
  • asked a question related to Cognitive Science and Artificial Thinking
Question
25 answers
I propose the following:
How is information encoded within the mind (in the brain)?
What are the principles that determine its organization?
What are the emergent properties?
Are the conceptual and methodological tools that are currently available adequate in addressing the problems of cognition?
This list is certainly incomplete. Do you have any suggestion?
Relevant answer
Answer
"How is information encoded within the mind (in the brain)?"
It is not "encoded", i.e, there is no code, no symbol standing for the things around us. So I would say the main challenge in cognition is explaining it with arepresentational models, instead of proposing fancy chit-chat descriptive models of this and that (which in  fact explain nothing).
I might be wrong but this is how I see it.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
3 answers
With reference to the perception-action cycle.
Relevant answer
Answer
Emotions could basically be understood as an inner context that gets associated with the sensorimotor processes of an agent.
You can also view emotions as tightly linked with a system of homeostatic drives and allostatic control (see attached paper).
To me, the question that remains really unclear is which sensory-emotive associations are innate (e.g. fear of spiders), and which ones are acquired through experience (e.g. joy at seeing a snowflake).
  • asked a question related to Cognitive Science and Artificial Thinking
Question
22 answers
We are developing Attention-Aware Systems, which includes the Sensing (Estimation), Modeling and Management of user attention.
In my research activities I was not able to find a generally valid metric, categorization or quantification of attention.
There are different approaches: in the cognitive sciences, attention is usually analyzed as performance in the fulfillment of given tasks => a percentage scale of an average performance. In HCI publications, researchers often use their own categories or scales, chosen arbitrarily.
In my work I was using scales, as well as attention types as categories...
So my question is whether you know some way of parametrization for human attention, or have any creative approach to follow?
Thanks.
Relevant answer
Answer
Another suggestion, although you may have considered it already, is the measurement of executive functions (EFs). EFs are thought to be top-down processes responsible for the focusing and maintenance of attention. The three core EFs are working memory, inhibitory control and task switching/cognitive flexibility. A suite of reliable and representative performance-based tasks (not self-report) capturing all three can be as short as 5 minutes. If you're unfamiliar with these and interested, I'd strongly recommend "Executive Functions", by Adele Diamond (Annual Review of Psychology, 2013).
  • asked a question related to Cognitive Science and Artificial Thinking
Question
3 answers
Text summarization approaches can be broadly classified into two categories: extractive and abstractive. Extractive approaches aim to select the most important pieces of information from an original document, without adding any external material to the generated summary or having any deep understanding of the language. Abstractive approaches require a deep understanding of the language, and we find just a few works in this direction, since the aim is to create a shorter version of the original document that is not restricted to the material present in it. Most of the approaches that have followed an abstractive paradigm rely on predefined templates and cannot be transferred to the open domain. So, my question is: do you think that it is possible to propose, in the near future, approaches that could deal with abstractive text summarization in the open domain? Or maybe using templates is the best choice?
Relevant answer
Answer
Abstractive approaches require deep knowledge of the target domain and complex substitution rules that require intensive groundwork and corpus specific tagging. You would need extensive semantic and pragmatic information to carry out this type of processing. Due to the domain knowledge requirements I would propose first a template approach from which to study if generalities in procedures can be obtained and then take it from there to see if open domain alternatives are possible.
Just a thought
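For contrast with the template-based abstractive route, the extractive baseline mentioned in the question is simple enough to sketch in a few lines (a naive frequency-scoring example, not a state-of-the-art method):

```python
# Naive extractive summarizer: score sentences by average word frequency,
# keep the top-k in original order. Purely a baseline illustration.
import re
from collections import Counter

def extractive_summary(text: str, k: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(s: str) -> float:
        tokens = re.findall(r"[a-z']+", s.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    top = set(sorted(sentences, key=score, reverse=True)[:k])
    return " ".join(s for s in sentences if s in top)

doc = ("Text summarization shortens a document. Extractive methods select "
       "important sentences from the document. Abstractive methods rewrite "
       "content and need deep language understanding. Templates constrain "
       "abstractive methods to narrow domains.")
print(extractive_summary(doc))
```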
  • asked a question related to Cognitive Science and Artificial Thinking
Question
15 answers
I work in industry and I completed undergrad almost 9 years back. However, I have some ideas and I want to publish or even collaborate if possible. What is the best place for such people? I do not have any professors or academic reviewers.
My interests are primarily in AI, logic and knowledge representation.
Relevant answer
Answer
In addition to journals, conferences, and patent submissions, I would suggest being active as an open source code developer and industry developer and author - for example, IBM has a nice program to publish and recognize the works of applied researchers and developers and has interest in machine learning, video analytics, Linux applications and systems development, and just general work in these areas - http://www.ibm.com/developerworks/aboutdw/dwa/about.html
I have written quite a few developer articles myself - http://www.cse.uaa.alaska.edu/~ssiewert/Sam-Siewert-Publications.pdf
Intel, NVIDIA, and many other computer engineering firms have similar developer web pages. For something more formal, but applied, you might submit to the Intel or IBM Research Journals.
The nice thing about Developer articles is that they are interested in early stage prototype, proof-of-concept and idea stage work as long as you are ok sharing with other developers - and if you want collaboration, I suspect you are.
The web-based publishing is fast and invites feedback and collaboration.
Otherwise, I think if it's collaboration you seek, conference papers are best, because you'll meet like-minded researchers and developers, much more so than you would by publishing in journals and filing patents (in my opinion and based on my experience).
Either way, the more you write, the better - good luck!
  • asked a question related to Cognitive Science and Artificial Thinking
Question
7 answers
There are some key principles of gestalt systems like emergence, reification, multistability and invariance. Do any neuronal models exist to explain these properties?
Relevant answer
Answer
Pieter R. Roelfsema addresses many of these issues in "Cortical Algorithms for Perceptual Grouping", Annual Review of Neuroscience, 2006, 29:203-227.
Also: Micah M. Murray and Christoph S. Herrmann 2013. Illusory contours: a window onto the neurophysiology of constructing perception. Trends in Cognitive Sciences 1-11. http://dx.doi.org/10.1016/j.tics.2013.07.004
Also: Charles D. Gilbert and Wu Li 2012. Adult Visual Cortical Plasticity. Neuron 75, 250-264.
This paper might also be useful: M. Saifullah, C. Balkenius & A. Jonsson 2014. A biologically based model for recognition of 2-D occluded patterns. Cognitive Processing 15:13-28.
Some of this research deals with proximity, similarity, closure, etc., in terms of neural networks in early visual cortex, especially V1 and V2. Hebb's earlier finding that neurons that fire together wire together has been implicated in Gestalt grouping, as have the long-range and short-range neural interconnections in early visual cortex.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
2 answers
Heuristic approaches often come with cyclic graphs. But to map heuristic approaches onto a Bayesian belief network, the graph would have to be directed and acyclic in nature. How can we do that?
Relevant answer
Answer
Sounds like you're searching for spanning trees.
Also, have a look at Kruskal's algorithm.
Regards,
Joachim
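A minimal sketch of Kruskal's algorithm with union-find: it extracts a spanning tree, i.e. an acyclic subgraph of the cyclic graph, which you could then orient to obtain a BBN skeleton. The edge weights are placeholders (e.g. inverse strength of association).

```python
# Kruskal's algorithm: greedily add cheapest edges that create no cycle.
def kruskal(n_nodes, weighted_edges):
    parent = list(range(n_nodes))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a
    tree = []
    for w, a, b in sorted(weighted_edges):
        ra, rb = find(a), find(b)
        if ra != rb:                        # adding this edge creates no cycle
            parent[ra] = rb
            tree.append((a, b, w))
    return tree

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (3, 1, 3), (5, 2, 3)]  # (weight, u, v)
print(kruskal(4, edges))   # 3 edges, no cycles
```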
  • asked a question related to Cognitive Science and Artificial Thinking
Question
7 answers
I proposed to use a neural network to generalize input-target data from a series of numerical simulations. I intend to use a feed-forward back-propagation neural network, and I just need to understand how best to configure the number of hidden layers and the number of neurons in each. I have 6 input values and 6 discrete corresponding target values for each input-target set.
Relevant answer
Answer
Hi Alan,
As the above responses suggest, yours is a more complex question than it may seem. It remains unanswerable when framed that way! Neural network architecture is about trade-offs - one of the most well-considered is that between bias and variance. As NNs are data-driven, optimality must be achieved in relationship with the data itself. I have a quote here, "the optimal architecture is a network that is large enough to learn the underlying function and is small enough to generalize well" (Aran, Yildiz & Alpaydin 2009, p. 160). A dataset may reasonably suit more than one structure.
The size of your input and output layers are determined by the data itself, with the number of neurons in the input layer equal to the number of attributes in your dataset and the output layer equal to number of target values (ie 6 and 6).
Increasing the number of hidden neurons (whether they are arranged in a single layer or in multiple layers) increases the complexity of your model. Early research indicates that increasing the number of hidden neurons improves accuracy in output (e.g., Gorman & Sejnowski 1988) and reduces training time (Denker et al 1987). It has also been demonstrated that this improvement has limits, and efficiency is reduced with an increasing number of neurons in the hidden layer (Zeng & Yeung 2006).
As your network can not be independent of the data that it is created for, also consider the number of instances used for training as a significant factor in your choice of network architecture. Note here that the sufficiently large training set for a nonparametric classification scheme such as neural networks exists only in theory "since the training data will never "cover" the space of all possible inputs" (Geman, Bienenstock & Doursat 1992, p. 44).
Michal Hradis suggested an exhaustive search with an upper and lower bound on the number of neurons in each of two hidden layers. There has been much consideration of predetermining a network structure. In work on neural networks with a single hidden layer, Barron (1994) and Baum (1988) suggest, among other calculations,
1. #hidden = #instances in set used to train the network / #input neurons.
2. #hidden = #instances in training set / (#input neurons + #output neurons).
As you can see, such calculations are based on a perceived relationship between the training data and the network structure.
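Purely for illustration (the training-set size of 600 below is hypothetical; the 6 inputs and 6 outputs are from your question), the two rules give:

# Hypothetical numbers: 600 training instances, 6 inputs, 6 outputs.
n_train, n_in, n_out = 600, 6, 6

rule_1 = n_train // n_in             # rule 1 -> 100 hidden neurons
rule_2 = n_train // (n_in + n_out)   # rule 2 -> 50 hidden neurons

print(rule_1, rule_2)  # 100 50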
If you can source some of the articles referenced here, you may get an idea of the nature of the debate surrounding the question you ask. A short answer is there is no agreement on a correct approach. In the literature you will find many references to "rules of thumb" :)
Amanda
REFERENCES
Aran, Yildiz & Alpaydin 2009, 'An incremental framework based on cross-validation for estimating the architecture of a multilayer perceptron', International Journal of Pattern Recognition and Artificial Intelligence, vol. 23, no. 2, pp. 159-190.
Barron, A 1994, 'Approximation and estimation bounds for artificial neural networks', Machine Learning, vol. 14, pp. 115-133.
Baum, EB 1988, 'On the capabilities of multilayer perceptrons', Journal of Complexity, vol. 4, no. 3, pp. 193-215.
Denker, Schwartz, Wittner, Solla, Howard, Jackel & Hopfield 1987, 'Large automatic learning, rule extraction and generalization', Complex Systems, vol. 1, no. 5, pp. 877-922.
Geman, Bienenstock & Doursat 1992, 'Neural networks and the bias/variance dilemma', Neural Computation vol.4 pp. 1-58.
Gorman & Sejnowski 1988, 'Analysis of hidden units in a layered network trained to classify sonar targets', Neural Networks, vol. 1, pp.75-89.
Zeng & Yeung 2006, 'Hidden neuron pruning of multilayer perceptrons using a quantified sensitivity measure', Neurocomputing, vol. 69, no, 7-9, pp. 825-837.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
4 answers
In distributed constraint programming, many researchers have been interested in confidentiality in multi-agent systems. Among the best-known techniques are lying and biphasic communication. In this context you can encounter many ethical problems:
How should an ethical-agent be protected against such reasoning?
How can we save or protect fundamental rights of agents?
Who will be responsible for unexpected consequences of this false information?
How can we deal with non-ethical agents?
Relevant answer
Answer
That's an interesting question, and in line with some recent work I saw at last year's AAMAS. First, I think that mediating or representation agents could be a solution to the confidentiality problem too, as they can easily reflect the opinions of still-anonymous agents. If you introduce lying, you can in parallel introduce a cost functional for the utility, so lying will dissatisfy the agent in the long run. Another interesting aspect is forecasting: if an agent can predict that a certain lie will cost less satisfaction than the resulting outcome might gain, it will follow the lie. Perhaps the method from Tarapore et al. http://www.ifaamas.org/Proceedings/aamas2013/docs/p23.pdf could also help to isolate such lying agents, especially if that behaviour is abnormal relative to the rest of the group.
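A toy formulation of that cost-functional idea (my own sketch, not taken from the cited paper; all names and the penalty value are hypothetical):

def effective_utility(base_utility, lies_told, lie_cost=0.5):
    # Each lie carries a cumulative satisfaction penalty, so deception
    # becomes unattractive in the long run even when it pays off locally.
    return base_utility - lie_cost * lies_told

def should_lie(gain_with_lie, gain_without_lie, lies_told, lie_cost=0.5):
    # Forecasting: lie only if the extra gain outweighs the extra penalty.
    return (effective_utility(gain_with_lie, lies_told + 1, lie_cost)
            > effective_utility(gain_without_lie, lies_told, lie_cost))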
Best regards, yours
Marcus
  • asked a question related to Cognitive Science and Artificial Thinking
Question
16 answers
In my thinking, this is a process of finding the similarity of our new percepts to the patterns representing concepts and ideas learned through life experience. Together they must create a coherent model of our surroundings and reality. Does this understanding fully reflect the meaning of the notion "understanding"?
Relevant answer
Answer
Wieslaw,
Thank you for the answer.
Are you suggesting that the visual systems of newborn babies are blank slates? That visual learning starts from scratch? That babies do not have any a priori visual knowledge? That absolutely everything comes from experience and the basic visual architecture?
Newborns’ Face Recognition: Role of Inner and Outer Facial Features:
''Evidence supporting the claim that not only do newborns differentiate between faces and nonface visual objects but they also process information about individual faces derives mainly from the observation that, within hours from birth, infants show a visual preference for their mother's face over a female unfamiliar face (Bushnell, 2001; Bushnell, Sai, & Mullin, 1989; Field, Cohen, Garcia, & Greenberg, 1984; Pascalis et al., 1995).''
This is just one among thousands of studies showing that we are born with some visual competence, not with a visual blank slate.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
13 answers
I'm interested in the power of re-authoring stories (often personal narratives) to change behavior or influence action.
Relevant answer
Answer
I'm using storytelling to support students' scientific writing — both with regard to process and content — growth and identity formation is behind it though (scientific writing is just an application). I've just submitted a related paper on course design to the 13th IEEE International Conference on Cognitive Informatics & Cognitive Computing (London, Aug 18-20, http://www.ucalgary.ca/icci_cc/iccicc-14/paper-submission). And I did some work on identity and avatars which, like all my work, is strongly informed by my experience as a storyteller and writer: https://en.wikiversity.org/wiki/User:MSB/Stockholm — I need to write this stuff up! — Besides that, like some others here, I use storytelling in class, too. Both in verbal and in visual mode (e.g. as videoprototyping).
  • asked a question related to Cognitive Science and Artificial Thinking
Question
156 answers
For example, two images, one containing a rose and the other a lotus, have less similarity than two images that both contain roses.
Relevant answer
Answer
It depends on what you mean by similarity between the images. According to your question, what you want is, for instance, that if you have two images which both contain the same kind of object, in this case a flower, then the similarity measure should be high, and otherwise it should be low. That is more related to content analysis and classification. In that case it is better to look at feature extraction techniques such as SIFT and at classification techniques. You can also see works like:
L. Wang, Y. Zhang and J. Feng, "On the Euclidean Distance of Images", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1453520&tag=1
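As a minimal illustration of the SIFT route (my own sketch, not from the paper above; the file names are hypothetical and OpenCV >= 4.4 is assumed):

import cv2

def sift_similarity(path_a, path_b):
    # Match SIFT keypoints between two images and score by the fraction
    # of matches that pass Lowe's ratio test.
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [p for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good) / max(1, min(len(kp_a), len(kp_b)))

# Two rose images should score higher than a rose image vs a lotus image:
# print(sift_similarity("rose1.jpg", "rose2.jpg"))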
Hope this will be helpful.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
3 answers
You are cordially invited to propose your work and a special session at the 9th ESAS (IEEE). It would be especially interesting for the participants to see how you're handling semantic intelligence concerning ambience, context, and mission planning.
Relevant answer
Answer
What a fantastic conference! I'll be following it with interest! =]
  • asked a question related to Cognitive Science and Artificial Thinking
Question
33 answers
My problem has 81 input features and 43 targets.
Relevant answer
Answer
The simplest method is probably to apply a k-fold cross validation:
Basically, it works like this:
1) You split your training data into k equal-sized parts (called folds). Typical numbers of k are between 3 and 10.
2) You choose a suitable number of candidate dimensionalities for your hidden layer, e.g. 40 neurons, 50 neurons, etc.
3) For each of these candidate dimensionalities, you train the network k times, using k-1 folds as training data and the k-th one as testing data.
4) You choose the number of neurons whose average testing error over the k trials of point (3) is lowest.
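A scikit-learn sketch of that procedure (assuming scikit-learn is available; the data here is random just to make the example self-contained, using the 81 inputs and 43 targets from the question):

import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 81))   # hypothetical: 500 samples, 81 input features
y = rng.random((500, 43))   # hypothetical: 43 targets per sample

for n_hidden in (40, 50, 60, 80):          # candidate hidden-layer sizes
    model = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=2000)
    scores = cross_val_score(model, X, y,
                             cv=KFold(n_splits=5, shuffle=True,
                                      random_state=0))
    print(n_hidden, scores.mean())          # keep the best-scoring size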
  • asked a question related to Cognitive Science and Artificial Thinking
Question
6 answers
Quadcopters.
Relevant answer
Answer
Hmm, Nitish, this seems highly simplified. Although it is nowhere near my area of expertise, I suggest that the lift a helicopter or a quadcopter gets depends on the size of the rotors and their speed, so the two are very similar.
There doesn't really seem to be any complexity for even rotor numbers (e.g. standard helicopter) but uneven rotor numbers seem to introduce some (solved) complexity.
Gandhimathi, you can buy quadcopters or helicopters (min 2 rotors) at any toy store.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
6 answers
How could we perform our individual intelligence based on collective intelligence? What about measures of individual intelligence or collective intelligence?
Relevant answer
Answer
I suggest you read the book Programming Collective Intelligence. You may click the following link to get some information in detail.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
9 answers
This is for a software concept design project.
Relevant answer
Answer
Thank you, everyone!
  • asked a question related to Cognitive Science and Artificial Thinking
Question
9 answers
I want to run a controlled experiment to test students' understanding and correlate it with other parameters. I will give them a passage to read and then ask them some questions relating to the passage via a multiple-choice questionnaire. I want to know if this method will effectively test their understanding.
Relevant answer
Answer
This controlled experiment is very easy for readers to understand, and it can also show how others adjust to the learning styles involved in this kind of collaborative questioning. By just reading the passage and having multiple-choice answers available, the task is simplified enough to probe students' current knowledge. Do students learn better under pressure? Or do they understand material better when they have the ability to go back through the passage and reread the information? Which is the effective method in this assessment?
  • asked a question related to Cognitive Science and Artificial Thinking
Question
1 answer
In clinical decision making, practice guidelines should be stringent. Overall, practice guidelines are built on the results of meta-analyses and randomized trials, on experts' personal opinions, and on other kinds of data in the literature. Should we consider building new semiquantitative statistical tools for quantifying this "a posteriori" and "a priori" knowledge?
Relevant answer
Answer
Dear Dr. Ugo Indraccolo, There is no need, use SPSS software.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
2 answers
I don't have the book, but the story is reportedly presented in Watzlawick's "How Real is Real". Without the research reference, I'm afraid this excellent story is an urban myth.
Relevant answer
Answer
Thanks for looking, Nuno.
It's a good story anyway.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
3 answers
Emotional influence on the integration of sensory modalities in cognitive architectures.
Relevant answer
Answer
Sorry I could not be of more help, hope someone can answer your question soon.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
3 answers
I read something about this complex test and tried to search for an intelligent system that might pass it. Is there any? If not, is it possible to design one?
Relevant answer
Answer
Online tests are available.
But I think it would be easier to implement a HIT for vision-based algorithms, as they have much better-defined criteria.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
3 answers
Formal methods and unified cognitive modeling.
Relevant answer
Answer
I think, "A practical guide to modelling cognitive processes" can you help. It has been write by Maarten W. van Someren, Yvonne F. Barnard and Jacobijn A.C. Sandberg. I attach link to this book: ftp://akmc.biz/ShareSpace/ResMeth-IS-Spring2012/Zhora_el_Gauche/Reading%20Materials/Someren_et_al-The_Think_Aloud_Method.pdf
  • asked a question related to Cognitive Science and Artificial Thinking
Question
7 answers
I am interested in explicit representations of the self in an agent. Which features and structures may such representations have?
Relevant answer
Answer
Samer, I think the answer depends on the kind of application you have in mind. The architecture may range from a complex model based on the human mind to a more affordable BDI architecture where the "self" is represented by internal knowledge of the agent's own goals and capabilities.
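As a minimal sketch of the BDI end of that spectrum (purely illustrative; the class and method names are my own, not from any particular framework):

from dataclasses import dataclass, field

@dataclass
class SelfModel:
    # The agent's explicit knowledge of its own goals and capabilities.
    goals: list = field(default_factory=list)
    capabilities: set = field(default_factory=set)

    def can_pursue(self, required_capabilities):
        # The agent "knows itself" well enough to decline goals
        # it lacks the capabilities for.
        return set(required_capabilities) <= self.capabilities

agent = SelfModel(goals=["deliver report"], capabilities={"plan", "write"})
print(agent.can_pursue({"plan", "write"}))   # True
print(agent.can_pursue({"plan", "fly"}))     # False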
  • asked a question related to Cognitive Science and Artificial Thinking
Question
7 answers
Take the average person you meet - how do they make decisions? What about evolution - has the scientific method ever had any influence on genetic mutation, or has any other aspect of evolution had any influence on living beings?
Relevant answer
Answer
At its core, the scientific method is asking a question based on some observation you made, positing a possible answer, and then testing out that possible answer (your hypothesis). As practicing scientists we take it to an extreme level, focusing on statistics and peer-review, but at its heart, the scientific method is something that all people and probably most animals use every day, instinctively. The lion cub observes a porcupine, wonders if it's edible, and then tests that hypothesis of edibility. If her experiment falsifies her hypothesis (a mouthful of porcupine quills probably will!) she has her answer, and probably won't need to repeat the experiment. But that's the scientific method in action. I think it is as inherent to living creatures as is the ability to learn: not every species has it, but it's far more common than we might think, from our typical anthropocentric viewpoint. This ability to posit and answer questions would most certainly confer an evolutionary advantage to the actor.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
2 answers
Two things:
1) If we achieve/move at greater than the speed of light.
2) If we succeed in increasing the frequency of the body very high, greater than infrared.
Relevant answer
You have disregarded another possibility: "Everybody being blind."
Perhaps you think that I am joking, but my answer is equivalent to your second alternative. Increasing the frequency is nothing but using a light with respect to which everybody is blind.
Of course, I am joking, and I think that so are you. I encourage contributors to write better jokes. In fact, a sense of humor requires a lot of intelligence.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
13 answers
Dependency parsing technique (or dependency grammar model) is declared as syntactic model for natural language text. The result of dependency parsing is a graph of words and relations (dependencies) between them within a sentence. Examples of such parsers/model are: Link Grammar, Malt Parser, Stanford parser, etc.
There are several models built using result of this syntactic analysis which usually referred to as Shallow Semantic processing: Semantic Roles Labeling, Conceptual Dependencies, First-Order logic, etc. (in terms of D.Jurafsky Speech and Language Processing chapter 17).
As a user of NLP tools I have the option of using either one level of abstraction (syntactic parsing) or another (shallow semantic analysis). Considering that both of them are usually, mathematically speaking, graphs of some kind, I need to know what the benefit of using the more complicated semantic processing might be in my task. Obviously, every additional layer of processing adds more errors, and the use of semantic processing should be justified.
In my research I am trying to measure the benefit of a shallow semantic processing phase applied to the Question Answering task (IR). Therefore I need to define a strict demarcation line between these two layers and place some methods and tools in the pure syntactic analysis layer, and others in the shallow semantic analysis layer.
Is there any agreed definition for such borderline between syntax and semantic?
Relevant answer
Answer
The key differences between syntactic dependency parsing and shallow semantic parsing are: normalization of syntactic variants into similar semantic predicates and handling of multi-word expressions.
Consider the English sentences:
He took a shower
He took the book
The syntactic analyses of these 2 sentences will be very similar.
In contrast, you hope a semantic analyzer will yield very different representations.
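To make the contrast concrete, a minimal sketch with spaCy (assuming the small English model is installed; exact dependency labels may vary slightly between model versions):

import spacy

nlp = spacy.load("en_core_web_sm")

for text in ("He took a shower", "He took the book"):
    doc = nlp(text)
    # Print (token, dependency relation, head) triples for each sentence.
    print([(t.text, t.dep_, t.head.text) for t in doc])

Both parses share essentially the same skeleton (a subject, "took" as the root, and a direct object), even though "took a shower" is a light-verb construction with a very different meaning from literally taking a book.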
Consider simple syntactic variants such as passive/active and dative move: the syntactic dependency trees of the variants will be very different, whereas you will find a semantic role labeller will provide similar outputs for the variants.
For the case of multi-word expressions, we expect a semantic role labeller to identify predicate arguments as complex chunks. This involves using the output of a named entity recognizer (most often running on the output of a syntactic parser but also relying on stochastic word sequence models and annotated entity linking data sets).
If the named entity recognizer is expanded to perform entity linking to a semantic resource such as freebase then effectively the semantic parser normalizes multiple expressions that are all variants of the same meaning so that Mr Obama or "the United States president" are recognized as 2 expressions denoting the same individual.
More advanced semantic parsers should be capable of identifying co-referring and implicit expressions, as in the example:
John asked Mary to answer the proposal.
We can infer that Mary is the Agent in the predicate answer(Mary,proposal)
But in the sentence:
John promised Mary to answer the proposal
We should infer that John is the agent in answer(John,proposal)
For QA purposes, the capability to normalize syntactic variants, link multi-word expressions to ontologies such as freebase and recognize implicit or linked arguments are quite important.
Unfortunately, the state of the art in shallow semantic parsing remains quite low - about 60% accuracy / recall. So that if your application requires high accuracy, you may be better off relying on shallow Stanford-dependencies which reach much higher accuracy when used in-domain.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
5 answers
I need to implement OCR for my project.
Relevant answer
Answer
The tesseract algorithm is available on Google Code, and is one of the best open-source OCR engines out there.
I have attached the link.
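For a quick start, a minimal sketch using the pytesseract wrapper (assuming both the Python package and the Tesseract binary are installed; the file name is hypothetical):

# pip install pytesseract pillow   (plus the Tesseract engine itself)
from PIL import Image
import pytesseract

# Run OCR on an image file and return the recognised text as a string.
text = pytesseract.image_to_string(Image.open("scanned_page.png"))
print(text)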
  • asked a question related to Cognitive Science and Artificial Thinking
Question
5 answers
To enhance the decision support system of any application.
Relevant answer
Answer
An artificial neural agent arises when a neuron or a network is posed as an agent architecture. For example, in "Cooperative Behavior of Artificial Neural Agents Based on Evolutionary Architectures" by Londei, A.; Savastano, P.; Belardinelli, M.O., the objective is to put neurons on a grid; according to the evolution of the system, a neuron can die and feed. This architecture allows the evolution of a neuronal system with rules similar to those of a cellular automaton.
While the example above is one way to implement it, there are other options, such as having a system of many neural networks whose interactions are mediated through some established mechanism so as to form a system. Other related areas are evolutionary neural networks; for example, take a look at the following references:
*Evolving Connectionist Systems by Nikola Kasabov
*Automatic Generation of Neural Network Architecture Using Evolutionary Computation by E. Vonk, L. C. Jain , R. P. Johnson
  • asked a question related to Cognitive Science and Artificial Thinking
Question
14 answers
And what do you think the next big thing is in Machine Learning/AI/NLP?
Relevant answer
Answer
I do not think there is one answer. Cool by the technology and its novelty? Cool by the problem it wants to solve - the market? Cool by what investors think is cool? Here is one that has received the vote of investors - they vote with their money (the company's ability to demonstrate the type of problem it seeks to solve - discovering new drugs and exploring new energy sources - perhaps adds to the cool factor!): http://techcrunch.com/2013/07/16/ayasdi-lands-30-6m-to-help-g-e-citi-the-u-s-government-and-more-find-needles-in-big-datas-haystack/
PS: I am skeptical of their approach/promise of automation "without requiring users to ask questions", as I am a strong believer in involving humans in the loop (if interested, you can find arguments and examples here: http://www.slideshare.net/apsheth/big-data-to-smart-data-keynote )
  • asked a question related to Cognitive Science and Artificial Thinking
Question
69 answers
In order to start a discussion, I would like to ask you all what your criteria for thinking would be. I mean, is it just giving an output when given a certain input? Is it "learning" as neural networks do? Is it the production of an algorithm? What do you think?
Relevant answer
Answer
Warren,
" What about "emotions" -- they play a role in thinking. So does "willingness" e.g. discouragement can inhibit a process, well-meaning but possibly inappropriate advice can push the process in a particular direction."
Aspects of consciousness related to perception have been studied extensively, but the emotional side of consciousness, the demand of the inside, has not been studied as much. Our inner self speaks to us through our emotions, and what we perceive gets meaning through our emotions. The intellectual process in my case is totally guided emotionally. I know emotionally, instantly, whether something is interesting before I have any conscious understanding. I read a few words in the intro and in the conclusion, and I know emotionally whether something interesting is said in the book.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
28 answers
“AI researchers have focused (…) on the production of AI systems displaying intelligence regarding specific, highly constrained tasks. Increasingly, there is a call for a transition back to confronting the more difficult issues of “human-level intelligence” and more broadly artificial general intelligence,” according to the AGI 13 conference to be held in Beijing, July 31 – August 3, 2013.
Do you share the same call for transition?
Relevant answer
As far as I know, all definitions of Artificial Intelligence have logical problems. They pretend to use a generic–specification approach to definition, but in fact they are circular definitions. The problem is that there is no rigorous definition of Intelligence.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
3 answers
Can we create an artificial system based on ANN and GA to solve complex cryptography?
Relevant answer
Answer
The initial work was done in:
D. Pointcheval. Les Réseaux de Neurones et leurs Applications Cryptographiques. Technical Report, Laboratoire d'Informatique de l'École Normale Supérieure, 1995. This scheme was subsequently shown by Klimov et al. to be particularly vulnerable to genetic algorithms, geometric considerations, and probabilistic analysis (Alexander Klimov, Anton Mityagin, and Adi Shamir. Analysis of Neural Cryptography. In ASIACRYPT, pages 288–298, 2002).
Other references in this area are:
Kinzel, Wolfgang, and Ido Kanter. "Neural cryptography." Neural Information Processing, 2002. ICONIP'02. Proceedings of the 9th International Conference on. Vol. 3. IEEE, 2002.
Kanter, Ido, and Wolfgang Kinzel. "The theory of neural networks and cryptography." Proceedings of the XXII Solvay Conference on physics on the physics of communication. 2002.
Related to this topic is the area of information hiding in neural networks. This concept has been explored in:
Kaili Zhou, Taifan Quan, and Yaohong Kang. Study on Information Hiding Algorithm Based on RBF and LSB. International Conference on Natural Computation, 5:612–614, 2008.
I also extended these concepts to create a neural network Trojan which hides the payload and is activated by specific inputs. By providing different stopping criteria, the weights change, and this makes the attack highly polymorphic.
Hope this helps
  • asked a question related to Cognitive Science and Artificial Thinking
Question
11 answers
It would be interesting to know what the most promising prospect is in the attempt to develop AI.
Relevant answer
Answer
Hi Heman,
I understand your point of view.
I thought your question was too broad.
By reformulating it, I could answer. I try again.
We know that biological neural networks are well suited for Natural Intelligence. We can expect that a good simulation of biological neural networks should be well suited for Artificial Intelligence.
However, we observe that if we would like to solve a practical problem, neural networks are just one of the learning machines which we can use. Moreover, the most widely used neural network, the MLP, is not plausible from a biological point of view.
My point of view is that there are two possible ways:
- find the best simulation of biological neural networks to achieve Artificial Intelligence (the Grail),
- solve some tasks of Artificial Intelligence, using learning machines.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
30 answers
What will tomorrow's AI be like? From what Gödel proved, it seems that the original objectives of AI are a mission impossible, but nowadays we have seen a great many byproducts of AI research as it was originally proposed by Turing and propelled by early pioneers around World War II: speech recognition, machine learning, expert systems, and so on and so forth.
Back to the original primary targets of AI research: what about its future, given that we did not see much progress in the past decades? Is it possible to bring into being the dream of an intelligent machine, as closely intelligent as humans? Where would it go in general?
Relevant answer
Answer
Haipeng,
Your questions beg meditation on the reasons for having an AI field different from what was conceptualized. I think it may be due to several factors:
Problems with the AI field
a) When the AI field was first conceived it was an idea and not an establishment. The ideas were not measured in papers and their incremental contributions based on reductionism. If a paper today were published on a complex system that produces something close to human behaviour and intelligence, the first criticism from the AI community would be how to benchmark it so that it can be quantified. Most likely it would not pass the first round of reviewers who pose this criticism. The question is: does there exist, right now, a dataset on which to benchmark such a system?
b) The notion of AI has evolved since its conception to encompass a different meaning of function optimization (notions such as “optimal learning” are just function optimization) as opposed to more general questions
Culture
a) When Devol and Engelberger got their first patent in 1961, it heralded a new generation in automation with the first robotic arm. The impediment from then on has been a cultural one in several countries where the notion of job displacement is a major concern. This has been carried over to AI as well.
b) Fiction writing, which started with Karel Čapek's R.U.R. in the 1920s, features intelligent machines trying to destroy humans. This genre has had much success in portraying research in AI as an endeavour that will most likely end in destruction.
Funding
While it may be a worthwhile effort there are no funding sources that I know of that currently have this topic in their list of grants.
Ethics
While there may be funding for AI, the majority of funding is for military applications, bringing the cultural fears closer to reality. The real goal should be to provide funding for other truly worthwhile endeavours on more pressing issues for humans than intelligent targeting and destruction systems.
To bring back the original conceptualization we need to deal with these issues so that we can progress towards our field's original goal.
Note: This is just my biased opinion
  • asked a question related to Cognitive Science and Artificial Thinking
Question
157 answers
Mind modeling, relevant knowledge base, knowledge representation, cognition, computation
Relevant answer
Answer
What does it mean to understand mind?
The best scientific understanding possible - the foundation of the scientific method - is a model making experimentally verifiable predictions, and then experimental verification of these predictions.
The model could be classical or quantum, exact or approximate - these are secondary to the very question of understanding. It is good if the model can specify the accuracy of its predictions, but even qualitative predictions are a strong indication of understanding. When there are no predictions at all, this indicates that there is no understanding.
Does it make sense?
  • asked a question related to Cognitive Science and Artificial Thinking
Question
3 answers
Both are applied in the context of artificial neural networks.
Relevant answer
Answer
Generating fractals
Four common techniques for generating fractals are:
• Escape-time fractals (also known as "orbit" fractals) — defined by a formula or recurrence relation at each point in a space (such as the complex plane). Examples of this type are the Mandelbrot set, the Julia set, the Burning Ship fractal, the Nova fractal and the Lyapunov fractal. The 2D vector fields that are generated by one or two iterations of escape-time formulae also give rise to a fractal form when points (or pixel data) are passed through the field repeatedly.
• Iterated function systems — these have a fixed geometric replacement rule. The Cantor set, Sierpinski carpet, Sierpinski gasket, Peano curve, Koch snowflake, Harter-Heighway dragon curve, T-square and Menger sponge are examples of such fractals.
• Random fractals — generated by stochastic rather than deterministic processes, for example trajectories of Brownian motion, Lévy flight, fractal landscapes and the Brownian tree. The latter yields so-called mass or dendritic fractals, for example diffusion-limited aggregation or reaction-limited aggregation clusters.
• Strange attractors — generated by iteration of a map or the solution of a system of initial-value differential equations that exhibit chaos.
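As a concrete illustration of the escape-time technique, here is a minimal ASCII Mandelbrot sketch (my own toy example, not from the material above):

def mandelbrot_ascii(width=80, height=32, max_iter=40):
    # Escape-time rule: iterate z -> z*z + c and mark points whose orbit
    # never escapes the radius-2 disk within max_iter steps.
    for row in range(height):
        line = ""
        for col in range(width):
            c = complex(-2.0 + 3.0 * col / width, -1.5 + 3.0 * row / height)
            z, n = 0j, 0
            while abs(z) <= 2 and n < max_iter:
                z = z * z + c
                n += 1
            line += "#" if n == max_iter else " "
        print(line)

mandelbrot_ascii()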
Classification of fractals
Fractals can also be classified according to their self-similarity. There are three types of self-similarity found in fractals:
• Exact self-similarity — the strongest type; the fractal appears identical at different scales. Fractals defined by iterated function systems often display exact self-similarity.
• Quasi-self-similarity — a loose form; the fractal appears approximately (but not exactly) identical at different scales, containing small copies of the entire fractal in distorted and degenerate forms. Fractals defined by recurrence relations are usually quasi-self-similar but not exactly self-similar.
• Statistical self-similarity — the weakest type; the fractal has numerical or statistical measures which are preserved across scales. Most reasonable definitions of "fractal" trivially imply some form of statistical self-similarity (fractal dimension itself is a numerical measure preserved across scales). Random fractals are examples of fractals which are statistically self-similar, but neither exactly nor quasi-self-similar.
I just found these
  • asked a question related to Cognitive Science and Artificial Thinking
Question
2 answers
Schwefel 2_22, Schwefel 1_2, Schwefel 2_21, Penalized_1 or H_COM (Hybrid Composition Function) and its rotated and rotated shifted versions?
Relevant answer
Answer
Hi, if you need the first two function codes, see e.g. SCI2S:
Also, there are C++, Java, and Matlab test-suite codes available from e.g. the Congress on Evolutionary Computation (CEC) competitions on single-objective optimization: 2005, 2006, 2008, 2010, 2011, 2013.
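For reference, the first three functions are short enough to write down directly. A NumPy sketch based on the standard textbook definitions (all three have their global minimum 0 at the origin):

import numpy as np

def schwefel_2_22(x):
    ax = np.abs(x)
    return ax.sum() + ax.prod()        # sum of |x_i| plus product of |x_i|

def schwefel_1_2(x):
    return (np.cumsum(x) ** 2).sum()   # double sum: sum_i (sum_{j<=i} x_j)^2

def schwefel_2_21(x):
    return np.abs(x).max()             # the worst single coordinate

print(schwefel_1_2(np.zeros(30)), schwefel_2_22(np.ones(2)))  # 0.0 3.0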
  • asked a question related to Cognitive Science and Artificial Thinking
Question
7 answers
Due to the lack of possibilities to evaluate and compare cognitive architectures in agents in a formal way, what are the possibilities? Are competitions such as the bot prize appropriate? Or do we have to test them empirically in comparison to humans (i.e. classic psychological experiments)?
Relevant answer
Answer
The agent paradigm is only an idea for a cognitive model and, as stated by Leonid, there is only one valid example - and we don't have a model for that yet.
Imagine that all of your cognitive agents have a shared language (assumption: they all share the same environment). If you can get them to produce plausible discourse about their environment, and it is understood and used by the others, then you are getting somewhere. Note that discourse production is not the same as pre-planned/defined communication.
To solve this problem you will need to make their environment very rich and totally consistent. And then you need an internalised model of that environment within your agent.
Note: this problem has nothing to do with the Turing test - the key is the degree of coupling between the agent and the environment, and its internalisation as a model.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
3 answers
I have three features of lengths 100, 25 and 128 for every subject, and I want to use an SVM. What is the best approach? Of course, scaling/normalization will be done.
Question 1: Should I place these three features in one vector for every subject, or is there another appropriate way to deal with it?
Question 2: Is feature extraction an art, based on gut feeling more than engineering?
Relevant answer
Answer
In my experience, it is better to represent the individual components of your three features separately and concatenate them to construct a single feature vector. Also, if you are planning to use an SVM, you can further extend the vector by using multiple binary features for a single categorical feature (refer: Hsu, C. W., Chang, C. C., & Lin, C. J. (2003). A practical guide to support vector classification, Section 2.1).
Choosing features to represent your data may not be a straightforward task, but once they are chosen you can computationally determine which features are important for your problem. Often cross-validation is performed for feature selection and parameter tuning. There are ready-to-use tools in the case of SVMs (refer to the FAQ section on the libsvm site and look for the feature selection tool).
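A scikit-learn sketch of the concatenate-then-scale approach (random stand-in data with the feature lengths from the question; the subject count and labels are hypothetical):

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
f1 = rng.random((60, 100))          # feature 1: length 100 per subject
f2 = rng.random((60, 25))           # feature 2: length 25
f3 = rng.random((60, 128))          # feature 3: length 128
X = np.hstack([f1, f2, f3])         # one 253-dimensional vector per subject
y = rng.integers(0, 2, 60)          # hypothetical binary labels

# Scaling inside a pipeline keeps train/test normalization consistent.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)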
  • asked a question related to Cognitive Science and Artificial Thinking
Question
6 answers
One of the greater challenges in the U.S. (and increasingly elsewhere) is the growing burden of costs associated with health-damaging but modifiable behaviors. I was recently asked by a program director of a national funding agency to suggest researchers in academia who are working on computational predictive models appropriate to capture health-related behaviors and behavior change, and so I am seeking your help in producing such a list.
It seems to me that the area of behavior change in real-world settings is ripe for predictive models after more than 100 years of behavioral science and clinical studies, plus all the recent progress in cognitive models, neurocomputational models, user models, predictive analytics, machine learning, tutoring systems, smart homes, ubiquitous computing, and more.
Please send your suggestions to pirolli@parc.com
Relevant answer
Answer
These are weak candidates for models that are computational, but they do have grist for such models.
Consolvo, Sunny; McDonald, David W.; Chen, Mike Y.; Froehlich, Jon; Harrison, Beverly; Klasnja, Predrag; LaMarca, Anthony; LeGrand, Louis; Smith, Ian, & Landay, James A. (2008). Activity sensing in the wild: A field trial of UbiFit Garden. CHI 2008.
Marlow, Peter (2005). Literature review of the value of target setting. Technical Report HSL/2005/40, Health and Safety Laboratory, Broad Lane, Sheffield.
Duhigg, Charles (2012). The Power of Habit. New York: Random House.
Heath, Chip & Heath, Dan (2010). Switch: How to change things when change is hard. Crown Business.
--Stu
  • asked a question related to Cognitive Science and Artificial Thinking
Question
17 answers
It would be great if there is any framework for linux environment.
Relevant answer
Answer
Thanks for the citations! I am convinced that the logical consequences of "The Chinese Room" can prove that intentionality and real understanding cannot be created by any AI system. Robots and computers can imitate human-like behavior and "emotion", but these behaviors, however similar to humans', come without any first-person experiences (Gallagher, 2010).
The essence of consciousness is the experience itself, and Suzuki's robots do not need experiences to successfully perform behavioral and emotion-like tests.
For robots it is necessary and sufficient that the Turing test has been passed, but this only meets the demand of the easy problem (Chalmers, 1996).
  • asked a question related to Cognitive Science and Artificial Thinking
Question
10 answers
Say I have programmed something on a computer which acts as creative thinking; it could produce an idea, a plan or some artifact. I want to know how we would evaluate this artifact as being creative. Are there any standards to measure creativity, like the Turing test?
Relevant answer
Answer
In the field of genetic programming there’s interest in ‘human competitiveness’ (http://www.genetic-programming.org/combined.html). There are a number of (fairly) well defined criteria for human competitiveness, including:
(1) "The [computer generated] result is publishable in its own right as a new scientific result — independent of the fact that the result was mechanically created.”
(2) “The [computer generated] result holds its own or wins a regulated competition involving human contestants (in the form of either live human players or human-written computer programs).”
One could argue that if a computer is generating novel, publishable results (which are generally attributed to human ‘creativity’) or beating humans in competitions in which success is assumed to be a product of human `creativity’, then the computer is creative – or at least is simulating creativity (if that’s possible) well enough to compete with human creativity.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
27 answers
Are questions about death, nothingness, world's and life's birth... etc linked to a defined anatomic brain zone (or several)? And in that case, is stimulation of these zones known? Would that contribute to an explain of people's various reactions, when faced with those questions?
  • asked a question related to Cognitive Science and Artificial Thinking
Question
9 answers
Most people I know have an idea of what is known by the general public within their culture, but I've rarely seen someone assert the opposite. Is it possible that people are aware of what everyone knows but are not sure about things which they don't know? Everyone rightly assumes that facts highly specific to one's daily life are not common knowledge, but as one generalizes I've noticed people become less sure about shared knowledge. This becomes clear in the assumptions people make when telling stories.
Has anyone else noticed this phenomenon? Is this something every other person knows and I've just missed? haha
Relevant answer
Answer
John,
It is your creative need speaking. Not everybody has it. The theory of cognitive dissonance, developed since the 1950s, is among the most highly respected in psychology. It has demonstrated that most people, most of the time, actively refuse to accept new knowledge. The reason is that new knowledge contradicts (some of) the old knowledge; this creates stress, and people actively avoid this stress.
Possibly this is why great things known to Aristotle are ignored by the majority of people today.
You can look at an open preprint of our paper:
arXiv 1209.4017, Mozart Effect, Cognitive Dissonance, and the Pleasure of Music, Perlovsky, Cabanac, Bonniot-Cabanac
This paper discusses the role of music in accumulating new knowledge
  • asked a question related to Cognitive Science and Artificial Thinking
Question
4 answers
Considering the embodiment process of an organism, in which autopoiesis plays its role across all the body's cells, for Varela and Maturana "a cognitive system is a system whose organization defines a domain of interactions in which it can act with relevance to the maintenance of itself." (This domain of interactions seems to be the sufficient condition for a system to be considered a cognitive system, so "neurality" seems not to be necessary...)
Relevant answer
Answer
Luca,
Thank you for clarification. This is of course a very different topic.
Consciousness seems specific to restricted cells and networks. Some people believe in panpsychism - that everything is conscious - but there are no scientific reasons for this. Scientifically we know a lot about consciousness and cognition. My intuition is moved by what actually happens - there should be something observed or experienced that we would like to understand. Maybe if you ask your question differently I could engage.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
16 answers
Most of the investigated concepts are assumed consciously available. But what is about all the unconscious processes?
Relevant answer
Answer
An unconscious emotion could be represented by an emotion or a feeling produced in the sleeping phase, during the phase in which we dream. It is possible that we retain no recall of this emotion during our daily life, but this does not mean that we cannot have emotions during sleep. The problem is that in any case, conscious or unconscious, we have no instruments to detect an emotion outside ourselves.
In a book titled "The Mind's I" (Bantam, reissue edition 1985, written in collaboration with Douglas Hofstadter), Daniel Dennett discusses what is demonstrable or not when we talk about the emotions of a person other than ourselves. The book is an investigation into consciousness in search of what distinguishes a conscious action from the same operation computed by automatic processes. The discussion is in part also developed by Jean-Pierre Changeux in "L'homme de vérité" (2004, The Physiology of Truth), speaking about the concept of "qualia" as elements for evaluating, in an objective manner, the experiences lived by the consciousness of others (e.g. red is the color of "passion", and many people agree on this independently of traditional cultures, where traditions represent the automatic processes described by Dennett, etc.).
As well described by Antonio Damasio (who was mentioned in several posts published here), a "feeling" represents a unique experience that is consumed only within our ego, while an "emotion", differently from a feeling, can be represented externally, since it is also expressed by a sequence of metabolic fluxes and muscle movements, generally producing effects that go outside ourselves (emotion, from the Latin ex-movere).
Now, considering what Dennett and Changeux say, the only way to catch an emotion is to evaluate all that "qualia" represent in a specific emotional state. In my opinion, only if we can single out a "quale" identifying a particular emotional state of the unconscious could we indirectly demonstrate the existence of an unconscious emotion, and subsequently we should be able to distinguish this unconscious emotion from the automatic processes induced by our education, traditions, etc.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
21 answers
In context of intelligence, the discussion about mind-brain relation is the fundamental issue in modern cognitive sciences. Thus, the present discussion stresses the relationship between the mind and the brain from a cognitive point of view.
Relevant answer
Answer
To my thinking, "cognition" refers to the processes involved in the operation of a mind. So recognition (re-cognition) for instance is the running again of a process that was done before - as in the processes involved in identifying a familiar face. Also, when someone experiences a sense of deja vu from experiencing a confluence of thoughts that is similar to a previous train of thoughts, this can probably be viewed as a recognition of a repeated thought pattern.
A mind is just too awesome a phenomenon to be explained away by such a thing as a material brain. The notion that cognition must emerge from the operation of some kind of machine called a brain is compelling. However, must this machine be made of just matter? Matter is just one form of energy. Can this machine be a more general system of interacting forms of energy - perhaps including but not restricted to those forms of energy that we know how to detect? Perhaps we find it hard to believe that a mass of gelatinous grey matter can manifest a mind because the brain actually consists of more than just that mass.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
84 answers
A dynamic of consciousness, "discovery" is one of the essential driving forces of living entities. Even basic primate behaviors such as the drives for food, sex, and social interplay can be said to be based in the act of "discovery". So, what is the nature of this drive? Could a machine be instilled with it? Is it a simple matter of novelty, or is it a factor of "learning"? It seems to be a blend of feeling and logic resulting in the development of conceptualization, often leading to further investigation or parsing of root causes (reflection).
Relevant answer
Answer
The problem of self-motivation towards new "discoveries" is ill-defined. It can be formulated in terms of an endless list of multi-disciplinary approaches.
With that being said, taking an information theoretic perspective, I'd say what you are looking for can be defined mathematically as a trade-off between data compressibility (i.e., the agent's ability to consistently "summarize" and "recall" events and patterns) and surprise (i.e., how unlike is the observed event given the agent's current internal model of the world surrounding it).
One cannot be motivated towards "discovering" facts or patterns already known. On the other hand, one must be able to "compress" the new knowledge, otherwise it is likely to be regarded as "too complex" to be worth analysing, given the agent's current learning capabilities.
Therefore, I'd say this problem cannot be put in absolute terms, but only in the context of the learning model an agent has access to.
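A toy illustration of that trade-off (my own sketch, using zlib as a crude stand-in for the agent's internal model; not from the reference below):

import zlib

def codelen(data):
    # Compressed length as a crude proxy for description length.
    return len(zlib.compress(data))

def surprise(history, event):
    # Extra bytes needed to encode the event given the history:
    # ~0 for fully predictable events, ~len(event) for pure noise.
    return codelen(history + event) - codelen(history)

# An agent seeking "discovery" would prefer events whose surprise is
# neither ~0 (already known) nor ~len(event) (incompressible noise).
print(surprise(b"abab" * 100, b"abab"))                 # small: predictable
print(surprise(b"abab" * 100, bytes(range(200)) * 2))   # large: novel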
See: Schmidhuber, J. (2010). 'Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990–2010)'. IEEE Transactions on Autonomous Mental Development, vol. 2, no. 3, pp. 230–247. Dalle Molle Institute for Artificial Intelligence, University of Lugano, Manno, Switzerland.
  • asked a question related to Cognitive Science and Artificial Thinking
Question
21 answers
Can the impact of learning a second or third language on the mindset and cognition of the people be scientifically proven or not?
Relevant answer
Answer
There is very good evidence that – to accommodate 2 languages – the brains of bilinguals differ from those of monolinguals. And while there are many performance differences (both bilingual advantages and disadvantages) between these groups, there is scant evidence that bilinguals show non-linguistic advantages over monolinguals on executive control measures presumed to reflect inhibitory control (conflict resolution). The literature on this topic was thoroughly reviewed by my graduate student, Matt Hilchey
(abstract excerpted below; see also: Psychon Bull Rev (2011) 18:625–658, DOI 10.3758/s13423-011-0116-7).
Even the global advantages we describe and thought were ubiquitous, however, have recently been shown not to be so in studies by Natalie Phillips and Ken Paap.
Are there bilingual advantages on nonlinguistic interference tasks? Implications for the plasticity of executive control processes
Matthew D. Hilchey & Raymond M. Klein
Abstract: It has been proposed that the unique need for early bilinguals to manage multiple languages while their executive control mechanisms are developing might result in long-term cognitive advantages on inhibitory control processes that generalize beyond the language domain. We review the empirical data from the literature on nonlinguistic interference tasks to assess the validity of this proposed bilingual inhibitory control advantage. Our review of these findings reveals that the bilingual advantage on conflict resolution, which by hypothesis is mediated by inhibitory control, is sporadic at best, and in some cases conspicuously absent. A robust finding from this review is that bilinguals typically outperform monolinguals on both compatible and incompatible trials, often by similar magnitudes. Together, these findings suggest that bilinguals do enjoy a more widespread cognitive advantage (a bilingual executive processing advantage) that is likely observable on a variety of cognitive assessment tools but that, somewhat ironically, is most often not apparent on traditional assays of nonlinguistic inhibitory control processes.