Cognitive Science and Artificial Thinking - Science topic
Explore the latest questions and answers in Cognitive Science and Artificial Thinking, and find Cognitive Science and Artificial Thinking experts.
Questions related to Cognitive Science and Artificial Thinking
"From science to law, from medicine to military questions, artificial intelligence is shaking up all our fields of expertise. All? No! In philosophy, AI is useless." The Artificial Mind, by Raphaël Enthoven, Humensis, 2024.
WHAT IS THE MYSTERIOUS STUFF OF INFORMATION?
Raphael Neelamkavil, Ph.D., Dr. phil.
Here I give a short description of a forthcoming book, titled: Cosmic Causality Code and Artificial Intelligence: Analytic Philosophy of Physics, Mind, and Virtual Worlds.
§1. Our Search: What Is the Mysterious Stuff of Information?: The most direct interpretations of the concept of information in both informatics and in the philosophy of informatics are, generally, either (1) that “information is nothing more than matter and energy themselves”, or (2) that “information is something mysterious, undefinable, and unidentifiable, but surprisingly it is different from matter and energy themselves”.
But if information is rightly not matter and energy, and if it is not anything mysteriously vacuous (and hence not existent like matter-energy, or pure matter, or pure energy), then how are we to explain ‘information’ in an all-inclusive and satisfying manner? Including only humanly attained information does not suffice for this purpose. Nor can we limit ourselves to information outside of our brain-and-language context. Both types must necessarily be included in the definition and explanation.
§2. Our Search: What, in Fact, Can Exist?: First of all, what exist physically are matter and energy (I mean carrier wavicles of energy) themselves. In that case, information is not observable or quasi-observable like the things we see, or like some of the “unobservables” which later get proved to be quasi-observable. This is clearly because there are no separate energy wavicles that may be termed information particles / wavicles, say, “informatons”. I am subjectively sure that the time is not distant when a new mystery-mongering theory of informatons will appear.
§3. Our Search: A Tentative General Definition: Secondly, since the above is the case with humanity at various apparently mysterious theoretical occasions, it is important to de-mystify information and find out what information is. ‘Information’ is a term representing a causal group-effect of some matter-energy conglomerations or pure energy conglomerations, all of which (for each unit or units of information in each case) are in some way in relatively closely conglomerated motion, and together work out a causal effect or effects on other matter-energy conglomerations or energy conglomerations.
§4. Our Search: In What Sense Is Information Causal?: Thirdly, the causal effect being transferred is what we name a unit or units of information. Hence, in this roundabout sense, information too is causal. There may have been, and may yet appear, many who claim that information is something mysteriously different from matter-energy. Some of them intend to mystify consciousness in terms of information, or to create a sort of soul out of immaterial and mysterious information conglomerations, and then to create an information-soul-ology as well. I believe that they will eventually fail.
§5. Our Search: Examples of Mystification: According to some theologians (whose names I avoid mentioning in order to spare them embarrassment) and New Age informaticians, God is the almighty totality of information, and human, animal, and vegetative souls are finite totalities of the same. Information, for them, is able to transmit itself without the medium of existent matter, energy, or matter-energy. Thus, their purpose would be served well! But such theories seem to have disappeared after the retirement of some of these theologians, because there are not many takers for their theological stance. If they had not theologized on it, some in the scientific community would have lapped up such theories.
Hence, be sure that new, more sophisticated, and more radical ones will appear, because there will be more and more of others who do not want to directly put forth a theological agenda, and instead, would want to use the “mystery”-aspect of information as an instrument to create a cosmology or quantum cosmology in which the primary stuff of the cosmos is information and all matter and energy are just its expressions. Some concrete examples are the theories that (1) gravitation is not any effect carried by some wavicles (call them gravitons), but instead just a “vacuum effect”, (2) gravitation is another effect of electromagnetism that is different from its normal effects, etc.
§6. Why Such a Trend?: In my opinion, one reason for this trend is the false interpretation of causality by quantum physics and its manner of mystifying non-causality and statistical causality through the spatialization and reification of mathematical concepts and effects as physical, without any attempt at delimitation. There can be other reasons too.
§7. Our Attempt: All-Inclusive Definition of Information: Finally, my attempt above has been to take up a more general meaning of the notion ‘information’. For example, many speak of “units of information in informatics”, “information of types like in AI, internet, etc., that are stored in the internet in various repositories like the Cloud”, “information as the background ether of the universe (strangely and miraculously!)”, “loss of all information in the black hole”, “the quantum-cosmological re-cycling of information in the many worlds that get created (like mushrooms!) without any cause and without any matter-energy supply from anywhere, but merely by a (miraculous!?) quantum-cosmological vacuum effect”, etc. We have been able to delve beyond the merely apparent in these notions.
Add to this list now also the humanly bound meanings of the notion of ‘information’ that we always know of. The human aspect of it is the conglomeration of various sorts of brain-level and language-level concatenations of universal notions (in the form of notions in the brain and nouns, verbs, etc. in language) with various other language-level and brain-level aspects which too have their origin in the brain.
In other words, these concatenations are the brain-level and language-level concatenative reflections of conglomerations of universals (which I call “ways of being of processes”) of existent physical processes (outside of us and inside us), which have their mental reflections as conceptual concatenations in brains and conceptual concatenations in language (which is always symbolic). Thus, by including this human brain-level and language-level aspect, we have a more general spectrum of the concept of information.
In view of this general sense of the term ‘information’, we need to broaden the definition of the source/s of information as something beyond the human use of the term that qualifies it as a symbolic instrument in language, and extend its source/s always to some causal conglomeration-effect that is already being carried out out-there in the physical world, in a manner that is not a mere construct of human minds without any correspondence with the reality outside - here, considering also the stuff of consciousness as something physically existent. That is, the causal source-aspect of anything happening as mental constructs (CUs and DUs) must always be considered real beyond the CUs, DUs, and their concatenations. This out-there aspect consists of the Extension-Change-wise effects in existent physical processes, involving always and in each case OUs and their conglomerations.
§8. (1) Final Definitions: ‘Information’ in artificial intelligence is the “denotative” (see “denotative universals” below) name for any causally conglomerative effect in machine-coded matter-energy as the transfer agent of the said effects, and such effect is transferred in the manner of Extension-Change-wise (see below: always in finitely extended existence, always every part of the existent causing finite impacts inwards and outwards) existence and process by energy wavicles and/or matter-energy via machine-coded energy paths. The denotative name is formulated by means of connotation and denotation by minds and by machines together.
Information in biological minds is the denotative name for any causally conglomerative effect in brain-type matter-energy and is transferred in the Extension-Change manner by brain-type matter-energy and/or energy wavicles. The denotative name here is formulated by means of connotation and denotation (see below) by minds and by symbolic-linguistic activities together.
Mind, in biologically coded information-based processes, is not the biological information alone or separately, but it is the very process in the brain and in the related body parts.
§9. (2) Summary: I summarize the present work now, beginning with a two-part thesis statement:
(a) Universal Causality is the relation within every physically existent process and every part of it, by reason of which each of it has an Existence in which every non-vacuously extended (in Extension) part of each of it exerts a finite impact (in Change) on a finite number of other existents that are external and/or internal to the exerting part. (b) Machine coding and biological consciousness are non-interconvertible, because the space-time virtual information in both is non-interconvertible due to the non-interconvertibility of their information supports / carriers that are Categorially in Extension-Change-wise existence, i.e., in Universal Causality.
Do artificial and biological intelligences (AI, BI) converge and attain the same nature? Roger Penrose held so initially; Ray Kurzweil criticized it. Aeons of biological causation are not codified or codifiable by computer. Nor are virtual quantum worlds and modal worlds without physical properties to be taken as existent out there. According to the demands of existence, existents must be Extended and in Change. Hence, I develop a causal metaphysics, grounding AI and BI: Extension-Change-wise active-stable existence, equivalent to Universal Causality (Parts 2, 3).
Mathematical objects (numbers, points, … structures), other pure and natural characteristics, etc. yielding natural-coding information are ontological universals (OU) (generalities of natural kinds: qualities may be used as quantities) pertaining to processes. They do not exist like physical things. Connotative universals (CU) are vague conceptual reflections of OU, and exist as forms in minds. Words and terms are their formulations in discourse / language – called denotative universals (DU), based on CU and OU.
The mathematical objects of informatic coding (binaries, ternaries) are “as-if existent” OUs in symbolic CU and DU representation. Information-carriers exist, are non-vacuous, are extended, have parts, and are in the Category of Extension. Parts of existents move, make impact on others, and are in the Category of Change. Extension-Change-wise existence is Universal Causality, and is measured in CU-DU as space-time. Other qualities of existents are derivatives, pertain to existent processes, and hence, are real, not existents.
Properties are conglomerations of OUs. For example, glass has malleability, which is a property. Properties, as far as they are in consciousness, are as CUs’ concatenations, and in language they are as DUs’ concatenations. AI’s property-attributions are information, which in themselves are virtual constructs. The existent carriers of information are left aside in their concept. Scientists and philosophers misconceive them. AI and BI information networks are virtual, do not exist outside the conglomerations of their carriers, i.e., energy wavicles that exist in connection with matter, with which they are interconvertible.
Matter-energy evolutions in AI and BI are of different classes. AI and BI are not in space-time, but in Extension-Change-level energy wavicles in physical and biological processes. Space-time does not exist; it is an absolute virtual, an epistemic and cognitive projection. Physical and biological causations are in Extension-Change, and hence not interconvertible.
From the viewpoint of the purpose of creating an adequate theory of experience and information, for me the present work is a starting point to Universal-Causally investigate the primacy of mental and brain acts different from but foundational to thoughts and reasoning.
§10. (3) The Context of the Present Work: The reason why I wrote this little book deserves mention. Decades ago, Norbert Wiener said (see Chapter 1, Part 1) that information is neither matter nor energy but something else. What would have been his motive in positing information as such a mysterious mode of existence? I was surprised at this claim, because it would give rise to all kinds of sciences and philosophies of non-existent virtual stuff considered to arise from existent stuff or from nowhere!
In fact, such are what we experience in the various theories of quantum, quantum-cosmological, counterfactually possible, informatic, and other sorts of multiverses other than the probably existent multiverse that the infinite-content cosmos could be.
I searched for books and articles that deal with the stuff of information. I found hundreds of books and thousands of articles on the philosophical, ethical, manipulation-oriented informatic, mathematical, and other aspects of the problem, but none on the question of whether information exists, what its stuff is, etc. This surprised me further and seemed to be a sign of scientocracy and technocracy.
I wanted to write a book that is a bit ferocious about the lack of works on the problem, given the fact that informatics is today much more wanted by all than physics, mathematics, biology, philosophy, etc., and of course the social sciences and human sciences.
For example, take the series to which the first two of the following three books belong: (1) Harry Halpin and Alexandre Monnin, eds. [2014], Philosophical Engineering: Towards a Philosophy of the Web; (2) Patrick Allo, ed., Putting Information First: Luciano Floridi and the Philosophy of Information – both from Chichester: Wiley Blackwell; and (3) John von Neumann [1966], Theory of Self-Reproducing Automata, Urbana: University of Illinois Press.
These works do not treat the fundamental question we have dealt with, and none of the other works that I have examined deals with it fundamentally - not even the works by the best philosophers of informatics, such as Luciano Floridi. My intention in this work has not been to make a good summary of the best works in the field and submit some new connections or improvements, but to offer something new.
Hence, I decided to develop a metaphysics of information and virtual worlds, which would be a fitting reply to Norbert Wiener, Saul Kripke, David Lewis, Jaakko Hintikka, and a few hundred other famous philosophers (let alone specialists in informatics, physics, cosmology, etc.), without turning the book into a thick volume full of quotes and evaluations related to the many authors on the topic.
Moreover, I have had experience of teaching and research in the philosophy of physics, analytic philosophy, phenomenology, process metaphysics, and in attempts to solve philosophical problems related to unobservables, possible worlds, multiverse, and cosmic vacuum energy that allegedly adds up to zero value and is still capable of creating an infinite number of worlds. Hence, I extended the metaphysics behind these realities that I have constructed (a new metaphysics) and developed it into the question of physically artificial and biological information, intelligence, etc.
The present work is a short metaphysical theory inherent in existents and non-existents, which will be useful not only for experts, but also for students, and well-educated and interested laypersons. What I have created in the present work is a new metaphysics of existent and non-existent objects.
For several years, scientists have been perfecting the technology of artificial intelligence to think like a human thinks. Is it possible?
Will it be possible to teach artificial intelligence to think and generate out-of-the-box, innovative solutions to specific problems and human-mandated tasks?
Is it possible to enrich highly advanced artificial intelligence technology into what will be called a digitised, abstract critical thinking process and into something that can be called artificial consciousness?
For several years, scientists have been refining artificial intelligence technology to think the way humans think. The coronavirus pandemic accelerated the digitalisation of remote, online communication, economic and other processes. During this period, online technology companies accelerated their growth and many companies in the sectors of manufacturing, commerce, services, tourism, catering, culture, etc. significantly increased the processes of internetisation of their business, communication with customers, supply logistics and procurement processes. During this period, many companies and enterprises have increased their investments in new ICT technologies and Industry 4.0 in order to streamline their business processes, improve business management processes, etc. The improvement of artificial intelligence technologies also accelerated during this period, including, for example, the development of ChatGPT technology. New applications of machine learning, deep learning and artificial intelligence technologies in various industries and sectors are developing rapidly. For several years, research and development work on improving artificial intelligence technology has entered a new phase involving, among other things, attempts to teach artificial intelligence to think in a model like that of the human brain. According to this plan, artificial intelligence is supposed to be able to imagine things that it has not previously known or seen, etc.
In the context of this kind of research and development work, it is fundamental to fully understand the processes that take place in the human brain within what we call thinking. A particular characteristic of human thinking processes is the ability to separate conscious thinking, awareness of one's own existence, abstract thinking, and the formulation of questions within the framework of critical thinking from the selective, multi-criteria processing of knowledge and information. In addition, research is underway to create autonomous human-like robots: androids equipped not only with artificial intelligence, but also with what can be called artificial consciousness, i.e. a digitally created human-like consciousness. Still not fully resolved is the question of whether a digitally constructed artificial consciousness, as a kind of supplement to a high generation of artificial intelligence, would really mean that a humanoid cyborg, a human-like android built to resemble a human, is aware of its own existence, or whether it merely behaves as if it were thinking, as if it were equipped with its own consciousness. Highly humanoid, autonomous androids are already being built that have 'human faces' equipped with the ability to express 'emotions' through a set of actuators installed in the robot's 'face', imitating human facial expressions and grimaces that represent various emotional states. Androids equipped with such humanoid facial expressions, combined with the ability to participate in discussions on various current issues and problems, could be perceived by the humans discussing with them as not only highly intelligent but also as aware of what they are saying, and perhaps aware of their existence. But even in such a situation, it could still be 'just' a simulation of human emotions, human consciousness, human thinking, etc. by a machine equipped with highly advanced artificial intelligence.
And when, in addition, an autonomous android equipped with an advanced generation of artificial intelligence is connected through Internet of Things technology and cloud computing to knowledge resources available on the Internet in real time, and is equipped with the ability to perform multi-criteria, multi-faceted processing of large sets of current information on Big Data Analytics platforms, then almost limitless possibilities for applications of such highly intelligent robots open up.
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
I want to extract dissimilarity information from the layouts of two images: checking text, button, and text-box alignment, text overlapping, and other layout differences. How can we do that? Is there any tool that extracts such detailed information, or any image-processing method?
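One way to approach this (a sketch only, assuming the UI elements and their bounding boxes have already been detected, e.g. by OCR or an object detector; all element names and coordinates below are invented for illustration):

```python
# Sketch: compare two UI layouts given as lists of labeled bounding boxes.
# Boxes are (label, x, y, width, height); detecting the elements in the
# first place (OCR, template matching, a UI-element detector, ...) is
# assumed to have happened already.

def boxes_overlap(a, b):
    """True if two (x, y, w, h) boxes intersect, e.g. overlapping text."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def alignment_diffs(layout_a, layout_b, tol=5):
    """Per-label (dx, dy) shifts larger than tol pixels."""
    a = {label: box for label, *box in layout_a}
    b = {label: box for label, *box in layout_b}
    diffs = []
    for label in sorted(a.keys() & b.keys()):
        dx, dy = b[label][0] - a[label][0], b[label][1] - a[label][1]
        if abs(dx) > tol or abs(dy) > tol:
            diffs.append((label, dx, dy))
    return diffs

def overlapping_pairs(layout):
    """All pairs of elements whose boxes intersect (possible text overlap)."""
    items = [(label, box) for label, *box in layout]
    return [(la, lb) for i, (la, ba) in enumerate(items)
            for lb, bb in items[i + 1:] if boxes_overlap(ba, bb)]

layout_a = [("title", 10, 10, 100, 20), ("button", 10, 40, 60, 20)]
layout_b = [("title", 10, 10, 100, 20), ("button", 40, 40, 60, 20),
            ("textbox", 80, 45, 50, 15)]
print(alignment_diffs(layout_a, layout_b))  # button shifted 30 px right
print(overlapping_pairs(layout_b))          # button overlaps textbox
```

For pixel-level differences (before element detection), structural-similarity metrics such as SSIM are a common starting point.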
There are two theories that are quite similar in nature but different in substance: the theory of mind and the theory of mentaliz(s)ation - sorry, I'm allergic to American spelling... please don't kill me now :-) My understanding of them is this: both of these concepts, mentalization and the theory of mind, describe processes that are metacognitive in nature. Mentalization mainly concerns the reflection of affective or emotional mental states. In contrast, the theory of mind focuses on things epistemic in nature, such as beliefs, intentions and persuasions. My idea is that these two theories by themselves are incomplete, but combining elements of both gives us a clearer understanding. Cognition and affect can't, in my view, be separated; they are both part of us as human beings and also a part of other animals. What are your thoughts? Am I wrong or right? I can stand criticism, so bring it on...
Biomechanics faces grand challenges due to the intricacy of living things. We need a multidisciplinary approach (mechanical, chemical, electrical, and thermal) to unravel these intricacies. We need to integrate observations from multiple length scales: from the organ level to the tissue level, cell level, molecular level, atomic level, and then to the energy level. On top of these intricacies, their dynamism and the complexity of their responses make it very difficult to correlate empirical data with theoretical models. Among these challenges, which is the most important? If we solve the most important challenge, we could solve most of the other challenges easily.
There have been many emotion space models, be it the circumplex model, the PANA model or Plutchik's wheel. But all of them are used to represent human emotions in an emotion space. The definitions of arousal and valence are easy to interpret for human beings, as we have some understanding of pleasant/unpleasant or intense/non-intense stimuli. However, how can we define the same for a robot? What stimuli should be considered arousing, and what should be pleasant, for a robot? I am interested in reading the responses from researchers in the field and having a discussion in this area. Any reference to relevant literature would also be highly appreciated.
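As a purely illustrative sketch of one pragmatic option (not an established model): valence could be derived from goal congruence and internal state (e.g. battery level, damage), and arousal from stimulus intensity and novelty (prediction error). All weights and sensor names below are assumptions:

```python
# Toy appraisal mapping for a robot: valence from goal progress and
# internal state, arousal from stimulus intensity and novelty. The
# weights are illustrative, not taken from any standard model.

def appraise(stimulus, battery=1.0, goal_progress=0.0):
    """Map a stimulus dict to (valence, arousal) in [-1, 1] x [0, 1]."""
    intensity = stimulus.get("intensity", 0.0)  # 0..1 (loudness, brightness)
    novelty = stimulus.get("novelty", 0.0)      # 0..1 (prediction error)
    harmful = stimulus.get("harmful", False)    # e.g. collision, overheating
    valence = 0.5 * goal_progress + 0.5 * (battery - 0.5) * 2
    if harmful:
        valence -= 1.0                          # harmful events are "unpleasant"
    valence = max(-1.0, min(1.0, valence))
    arousal = max(0.0, min(1.0, 0.6 * intensity + 0.4 * novelty))
    return valence, arousal

v, a = appraise({"intensity": 0.9, "novelty": 0.5, "harmful": True},
                battery=0.8, goal_progress=0.4)
print(round(v, 2), round(a, 2))
```

The substantive question, of course, is which appraisals a robot *should* have; this sketch only shows that, once chosen, they can be computed from measurable quantities.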
Dear fellows,
I am looking for some real-world examples where a cognitive assistant system is used. The system should rely on a user model that follows theoretical assumptions from either Psychology or Cognitive Science, ideally backed by some cognitive architecture.
I have done some literature search but did not come up with actual real-world systems.
It would be great if someone could help.
best regards,
Patrick
Mathematics is fundamental in all sciences, and especially in physics, to which it has made many contributions. It might seem that applicability is the motor of mathematical creation, but that is not what good mathematicians such as Henri Poincaré or G. H. Hardy have said. What is the beauty in mathematics, in theoretical physics, or in other possibly related subjects?
For me there are very beautiful mathematical results which seem difficult to apply, or even run against our sense of reality, yet are full of "beauty" or at least "surprise".
1. The sum of all natural numbers equals a negative number, -1/12 (under zeta-function regularization).
2. Polynomials of degree five or higher have no general analytical expression for their roots.
3. The Banach-Tarski theorem.
4. There cannot exist more than five regular polyhedra in three dimensions.
"The AI takeover is coming" - this is what the news says these days. Is it really a trendsetter for future years?
What is the impact on manual work due to this? I just wanted the audience's thoughts on this, hence I started a conversation.
Your thoughts and expertise are welcome!
Thanks in advance
If I had to list the most generic steps of text analytics, what would be the most commonly used steps for any text-analysis model?
Any help and your expert guidance/ suggestions are welcome.
Thanks in advance
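The steps most commonly listed are: collection, cleaning, tokenization, normalization, stopword removal, feature extraction, and then modeling. A minimal sketch of the preprocessing part (the tiny stopword list is illustrative only):

```python
import re
from collections import Counter

# Minimal sketch of the usual preprocessing steps before any text-analysis
# model: clean -> tokenize -> normalize -> remove stopwords -> count features.

STOPWORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in"}

def preprocess(text):
    text = text.lower()                    # normalize case
    text = re.sub(r"[^a-z\s]", " ", text)  # strip punctuation and digits
    tokens = text.split()                  # whitespace tokenization
    return [t for t in tokens if t not in STOPWORDS]

def term_frequencies(text):
    """Bag-of-words feature extraction: raw term counts."""
    return Counter(preprocess(text))

tf = term_frequencies("The cat sat on the mat; the mat is flat.")
print(tf.most_common(2))  # [('mat', 2), ('cat', 1)]
```

Real pipelines would typically add stemming/lemmatization and TF-IDF weighting, but the order of stages above is what most models share.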
We can imagine devices as agents; maybe then it is better to coin the question that way.
To
The scientists,
How can we get rid of global tracking, i.e. the continuous signal processing to the brain, photo display systems, and the negative use of neural networking to gain money by unfair means?
Kindly write a few lines if you have experience in this field.
I am aware that you can use CBT to actively change your thoughts about memories. However, I am interested in whether there are therapies to actively get rid of bad memories. For example, when a person becomes stressed about a current activity, they may dream about previous bad experiences. Is there a way to actively get rid of these memories so they no longer affect the person?
Well, I am making an expert system using hologram technology. In it, there is a virtual image of a person that gives a recorded lecture, and behind it there is a database with a lot of answers to questions. When someone asks a question, the system picks the best solution, so that it readily gives the best and most optimized answer to the question asked. And if a question is asked that is not in the database, it is directed toward Google through the internet to search for the best answers.
How could we describe the act of "thinking" with mathematical tools? Which paradigm is best suited for? What does "thought" mathematically mean? Is there any alternative to the procedural (linear) conception of neural calculus?
The background subtraction is achieved by a running-average method which continuously updates the background model. Hence, if the hand is still for long enough, it is considered part of the background and the gesture is not detected.
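The running average described above is B ← (1 − α)·B + α·F, so a small α slowly absorbs any stationary object into the background. One common remedy (an assumption here, not necessarily the author's method) is to freeze the update at pixels currently classified as foreground. A toy sketch on a 1-D "image":

```python
# Running-average background model on a toy 1-D "image" (list of pixel
# intensities). alpha controls how fast still objects are absorbed into
# the background -- exactly the failure mode described above.

def update_background(background, frame, alpha=0.05, fg_mask=None):
    """B <- (1 - alpha) * B + alpha * F, optionally skipping foreground pixels."""
    new_bg = []
    for i, (b, f) in enumerate(zip(background, frame)):
        if fg_mask and fg_mask[i]:
            new_bg.append(b)  # freeze the update under the detected hand
        else:
            new_bg.append((1 - alpha) * b + alpha * f)
    return new_bg

def foreground_mask(background, frame, threshold=30):
    """Pixels that differ strongly from the background model."""
    return [abs(f - b) > threshold for b, f in zip(background, frame)]

bg = [10.0, 10.0, 10.0]
frame = [10.0, 200.0, 10.0]  # a "hand" appears at pixel 1
mask = foreground_mask(bg, frame)
bg = update_background(bg, frame, fg_mask=mask)
print(bg)  # pixel 1 stays 10.0 because its update is frozen
```

In OpenCV the unmasked update corresponds to `cv2.accumulateWeighted`; the masking trick trades the still-hand problem for a risk of never updating genuinely changed background, so a timeout on frozen pixels is usually added.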
I've gone through some scenarios, like hospital data process mining and restaurant process mining, but want to find a scenario that is not only new but whose log data is also accessible.
Hello everyone, I would like to assess the concept of "free will" for humans through experiments similar to "ants in the box" for insects, observing their behaviour in artificially created situations. I am already working on assigning this concept to electronic circuitry via two modes: 1. requirement mode and 2. free-will mode. Can anybody suggest simple experiments to be carried out by humans?
I have the intuition that these two types of methodologies are related, but I could not find any references nor any clear explanation of this relationship, besides the fact that they are two types of modern, novel and evolved artificial neural networks.
I propose the following:
How is information encoded within the mind (in the brain)?
What are the principles that determine its organization?
What are the emergent properties?
Are the conceptual and methodological tools that are currently available adequate in addressing the problems of cognition?
This list is certainly incomplete. Do you have any suggestion?
With reference to perception action cycle
We are developing Attention-Aware Systems, which includes the Sensing (Estimation), Modeling and Management of user attention.
In my research activities I was not able to find a generally valid metric, categorization or quantification of attention.
There are different approaches: in the cognitive sciences, attention is usually analyzed as the performance in the fulfillment of given tasks => a percentage scale of an average performance. In HCI publications, researchers often use their own categories or scales, chosen arbitrarily.
In my work I was using scales, as well as attention types as categories...
So my question is whether you know some way of parametrizing human attention, or have any creative approach to suggest?
Thanks.
Text summarization approaches can be broadly classified into two categories: extractive and abstractive. Extractive approaches aim to select the most important pieces of information from an original document, without adding any external material to the generated summary or having any deep understanding of the language. Abstractive approaches require a deep understanding of the language, and we find just a few works in this direction, since the aim is to create a shorter version of the original document that is not restricted to the material present in it. Most of the approaches that have followed an abstractive paradigm rely on predefined templates and cannot be ported to the open domain. So, my question is: do you think it is possible to propose, in the near future, approaches that could deal with abstractive text summarization in the open domain? Or is using templates the best choice?
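For comparison, the extractive side is easy to sketch: score each sentence by the frequencies of its words and keep the top-scoring ones in their original order (a simple frequency heuristic, not a state-of-the-art method):

```python
import re
from collections import Counter

# Minimal frequency-based *extractive* summarizer: score each sentence by
# the frequencies of its words and keep the top-scoring ones in original
# order. Abstractive summarization would have to generate new sentences
# and is not attempted here.

def summarize(text, n_sentences=1):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    scores = [(sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scores, reverse=True)[:n_sentences]
    return " ".join(s for _, i, s in sorted(top, key=lambda t: t[1]))

text = ("Summarization selects important sentences. "
        "Extractive summarization copies sentences from the document. "
        "The weather is nice.")
print(summarize(text))
```

The gap the question points at is exactly what this sketch cannot do: it can only copy sentences, never paraphrase or fuse them.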
I work in industry and I completed my undergraduate degree almost 9 years ago. However, I have some ideas and I want to publish, or even collaborate if possible. What is the best place for someone in my position? I do not have any professors or academic reviewers.
My interests are primarily in AI, logic and knowledge representation.
There are some key principles of gestalt systems like emergence, reification, multistability and invariance. Do any neuronal models exist to explain these properties?
Heuristic approaches often come with cyclic graphs. But to map heuristic approaches onto a Bayesian belief network, the graph would have to be acyclic in nature. How can we do that?
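One standard option (among several) is to run a depth-first search and drop the back edges, which necessarily leaves a DAG; in practice one would rather reverse or drop edges chosen by domain knowledge, since the result depends on traversal order. A sketch:

```python
# Sketch: break cycles in a directed graph by removing DFS back edges,
# yielding a DAG that could serve as a Bayesian-network skeleton. Which
# edges get dropped depends on traversal order, so domain knowledge
# should guide the choice in a real model.

def to_dag(nodes, edges):
    """Return the edge list with DFS back edges removed."""
    graph = {n: [] for n in nodes}
    for u, v in edges:
        graph[u].append(v)
    kept = []
    state = {n: "unvisited" for n in nodes}  # unvisited / active / done

    def dfs(u):
        state[u] = "active"
        for v in graph[u]:
            if state[v] == "active":
                continue                     # back edge: drop it to break the cycle
            kept.append((u, v))
            if state[v] == "unvisited":
                dfs(v)
        state[u] = "done"

    for n in nodes:
        if state[n] == "unvisited":
            dfs(n)
    return kept

edges = [("A", "B"), ("B", "C"), ("C", "A")]  # a 3-cycle
print(to_dag(["A", "B", "C"], edges))         # ("C", "A") is dropped
```

Edges to nodes already marked "done" are forward or cross edges and are safe to keep; only edges into "active" nodes close a cycle.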
I proposed to use a neural network to generalize input-target data from a series of numerical simulations. I intend to use a feed-forward back-propagation neural network, and I just need to understand how best to configure the number of hidden layers and the number of neurons in each. I have 6 input values and 6 discrete corresponding target values for each input-target set.
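A common rule of thumb (a heuristic, not a law) is to start with a single hidden layer whose size lies between the input size and a small multiple of it, then tune by validation error. A minimal forward pass for the 6-input / 6-output case, with an assumed hidden size of 8:

```python
import math
import random

# Tiny one-hidden-layer feed-forward pass for the 6-input / 6-output case.
# The hidden size of 8 is just a starting heuristic (between the input size
# and twice the input size); the right value must come from validation error.

random.seed(0)
N_IN, N_HIDDEN, N_OUT = 6, 8, 6
W1 = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HIDDEN)]
W2 = [[random.uniform(-0.5, 0.5) for _ in range(N_HIDDEN)] for _ in range(N_OUT)]

def forward(x):
    """Input -> tanh hidden layer -> linear output (biases omitted for brevity)."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return [sum(w * h for w, h in zip(row, hidden)) for row in W2]

y = forward([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
print(len(y))  # 6 outputs, one per target value
```

For a real fit one would add biases and back-propagation (or use an existing library) and grow the hidden layer only while the validation error keeps dropping.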
In distributed constraints programming, many researchers have been interested in confidentiality in multi-agent systems. One of the most known techniques is lying and biphasic communication. In this context you can have many ethical problems:
How should an ethical-agent be protected against such reasoning?
How can we save or protect fundamental rights of agents?
Who will be responsible for unexpected consequences of this false information?
How can we deal with non-ethical agents?
To my thinking, this is a process of finding the similarity of our new percepts to the patterns representing concepts and ideas learned during life experience. Together they must create a coherent model of our surroundings and of reality. Does this understanding fully reflect the meaning of the notion "understanding"?
I'm interested in the power of re-authoring stories (often personal narratives) to change behavior or influence action.
For example, two images, one containing a rose flower and the other a lotus flower, have less similarity than two images both containing rose flowers.
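One very simple way to make that notion concrete (a sketch; modern systems would use learned feature embeddings instead) is to compare normalized color histograms, e.g. by histogram intersection. The 4-bin histograms below are invented for illustration:

```python
# Histogram intersection on toy, pre-computed color histograms. In practice
# the histograms would come from the images themselves (e.g. the hue
# channel); deep feature embeddings capture semantic similarity far better.

def normalize(hist):
    total = sum(hist)
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """1.0 = identical distributions, 0.0 = disjoint."""
    return sum(min(a, b) for a, b in zip(normalize(h1), normalize(h2)))

rose_1 = [60, 10, 5, 25]  # toy 4-bin hue histograms (red-heavy)
rose_2 = [55, 15, 5, 25]
lotus = [10, 20, 50, 20]  # pink/green-heavy

sim_same = histogram_intersection(rose_1, rose_2)
sim_diff = histogram_intersection(rose_1, lotus)
print(sim_same > sim_diff)  # True: the two roses are more similar
```

Color histograms ignore shape and layout entirely, which is precisely why two different red objects can fool them; that limitation motivates feature-based similarity.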
You are cordially invited to propose your work and a special session at the 9th ESAS (IEEE). It would be especially interesting for the participants to see how you're handling semantic intelligence concerning ambience, context, and mission planning.
Take a look at the CFP at http://compsac.cs.iastate.edu/esas2014.php.
My problem has 81 input features and 43 targets.
How does our individual intelligence operate on the basis of collective intelligence? And what about measures of individual or collective intelligence?
This is for a software concept design project.
I want to run a controlled experiment to test students' understanding and correlate it with other parameters. I will give them a passage to read and then ask them questions about the passage via a multiple-choice questionnaire. I want to know whether this method will effectively test their understanding.
In clinical decision making, practice guidelines should be stringent. Overall, practice guidelines are built on the results of meta-analyses and randomized trials, on experts' personal opinions, and on other kinds of data in the literature. Should we consider building new semi-quantitative statistical tools for quantifying this "a posteriori" and "a priori" knowledge?
I don't have the book, but it is reportedly presented in Watzlawick's "How Real Is Real?". Without the research reference, I'm afraid this excellent story is an urban myth.
Emotional influence on the integration of sensory modalities in cognitive architectures.
I read something about this complex test and searched for an intelligent system that might take it. Is there one? If not, is it possible to design one?
Formal methods and unified cognitive modeling.
I am interested in explicit representations of the self in an agent. Which features and structures may such representations have?
Take the average person you meet: how do they make decisions? And what about evolution: has the scientific method ever had any influence on genetic mutation, or has any other aspect of evolution had any influence on living beings?
Two things:
1. What if we could move faster than the speed of light?
2. What if we succeeded in raising the body's frequency very high, above the infrared?
Dependency parsing (the dependency-grammar model) is a syntactic model for natural-language text. The result of dependency parsing is a graph of words and the relations (dependencies) between them within a sentence. Examples of such parsers/models are Link Grammar, MaltParser, the Stanford parser, etc.
There are several models built on the results of this syntactic analysis, usually referred to as shallow semantic processing: semantic role labeling, conceptual dependencies, first-order logic, etc. (in the terms of D. Jurafsky's Speech and Language Processing, chapter 17).
As a user of NLP tools, I have the option of using either one level of abstraction (the syntactic parse) or the other (shallow semantic analysis). Since both are usually, mathematically speaking, graphs of some kind, I need to know what the benefit of the more complicated semantic processing might be for my task. Obviously, every additional layer of processing adds more errors, so the use of semantic processing should be justified.
In my research I am trying to measure the benefit of a shallow-semantic processing phase applied to a question answering (IR) task. Therefore I need to define a strict demarcation line between these two layers, placing some methods and tools in the layer of pure syntactic analysis, and others in the shallow-semantic analysis layer.
Is there any agreed definition of such a borderline between syntax and semantics?
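One concrete way to draw the demarcation line: anything that speaks only of grammatical relations between word forms (nsubj, dobj, det, …) is syntactic, while anything that maps those relations onto predicate-argument roles (Agent, Patient, …) is shallow-semantic. A toy sketch of that mapping, with hand-written dependency triples standing in for real parser output (a real pipeline would obtain them from MaltParser or the Stanford parser), and a role table that is a deliberate oversimplification of semantic role labeling:

```python
# Syntactic layer: grammatical relations between words.
# Shallow-semantic layer: predicate-argument roles derived from them.
SYNTAX_TO_ROLE = {"nsubj": "Agent", "dobj": "Patient", "iobj": "Recipient"}

def shallow_semantics(dependencies):
    """dependencies: list of (head, relation, dependent) triples."""
    frames = {}
    for head, rel, dep in dependencies:
        role = SYNTAX_TO_ROLE.get(rel)
        if role:  # relations without a role mapping stay purely syntactic
            frames.setdefault(head, {})[role] = dep
    return frames

# "Mary gave John a book" -- dependency triples written by hand:
parse = [("gave", "nsubj", "Mary"),
         ("gave", "iobj", "John"),
         ("gave", "dobj", "book"),
         ("book", "det", "a")]
print(shallow_semantics(parse))
# {'gave': {'Agent': 'Mary', 'Recipient': 'John', 'Patient': 'book'}}
```

Note that the `det` edge never crosses the line: it carries no predicate-argument content, which is exactly the kind of criterion one can use to sort tools into the two layers. In reality the syntax-to-role mapping is many-to-many (passives, ergatives), which is why semantic role labeling is a learned task rather than a lookup table.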
And what do you think the next big thing is in Machine Learning/AI/NLP?
In order to start a discussion, I would like to ask you all: what would your criteria for thinking be? I mean, is it just giving an output when given a certain input? Is it "learning", as neural networks do? Is it the production of an algorithm? What do you think?
“AI researchers have focused (…) on the production of AI systems displaying intelligence regarding specific, highly constrained tasks. Increasingly, there is a call for a transition back to confronting the more difficult issues of ‘human-level intelligence’ and more broadly artificial general intelligence,” according to the AGI-13 conference to be held in Beijing, July 31 – August 3, 2013.
Do you share this call for a transition?
Can we create an artificial system based on ANN and GA to solve complex cryptography?
It would be interesting to know which prospect is most promising in the attempt to develop AI.
What will tomorrow's AI be like? From what Gödel proved, it seems that the original objectives of AI are a mission impossible, but we have since seen a great many byproducts of AI research as it was originally proposed by Turing and propelled by the early pioneers of World War II: speech recognition, machine learning, expert systems, and so on.
Back to the original primary targets of AI research: what about its future, given that we have not seen much progress in the past decades? Is it possible to bring into being the dream of an intelligent machine, as intelligent as humans? Where will it go in general?
Mind modeling, relevant knowledge base, knowledge representation, cognition, computation
Both are applied in the context of artificial neural networks.
Schwefel 2.22, Schwefel 1.2, Schwefel 2.21, Penalized 1, or H_COM (Hybrid Composition Function), and their rotated and rotated-shifted versions?
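For reference, these are the usual definitions of the three Schwefel functions named above, as found in the evolutionary-computation benchmark literature (all have their minimum f(0) = 0; the rotated and shifted variants compose these with an orthogonal matrix and an offset vector):

```python
import numpy as np

def schwefel_2_22(x):  # sum of |x_i| plus product of |x_i|
    a = np.abs(x)
    return a.sum() + a.prod()

def schwefel_1_2(x):   # sum of squared prefix sums: sum_i (sum_{j<=i} x_j)^2
    return np.square(np.cumsum(x)).sum()

def schwefel_2_21(x):  # maximum absolute component
    return np.abs(x).max()

x = np.array([1.0, -2.0, 3.0])
print(schwefel_2_22(x))  # (1+2+3) + (1*2*3) = 12.0
print(schwefel_1_2(x))   # 1^2 + (-1)^2 + 2^2 = 6.0
print(schwefel_2_21(x))  # 3.0
```

Schwefel 2.22 and 1.2 are unimodal but non-separable (1.2 especially, through its prefix sums), while 2.21 is hard for gradient-free methods because only one coordinate affects the objective at a time, which is why the three are usually benchmarked together.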
Given the lack of ways to evaluate and compare cognitive architectures in agents formally, what are the possibilities? Are competitions such as the BotPrize appropriate? Or do we have to test them empirically in comparison with humans (i.e., classic psychological experiments)?
I have three features of lengths 100, 25, and 128 for every subject. I want to use an SVM. What is the best approach? Of course, scaling/normalization will be done.
Question 1: Should I place these three features in one vector for every subject, or is there some other appropriate way to deal with them?
Question 2: Is feature extraction an art, based more on gut feeling than on engineering?
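On Question 1, the common baseline is early fusion: concatenate the three blocks into one 253-dimensional vector and scale per feature, so that no block dominates just because of its units. A minimal scikit-learn sketch; the randomly generated arrays are stand-ins for the real per-subject features:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 60  # number of subjects (stand-in)
# Three feature blocks of length 100, 25 and 128 per subject (stand-in data).
f1 = rng.normal(size=(n, 100))
f2 = rng.normal(size=(n, 25))
f3 = rng.normal(size=(n, 128))
y = rng.integers(0, 2, size=n)  # binary labels (stand-in)

# Early fusion: one 253-dimensional vector per subject.
X = np.hstack([f1, f2, f3])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(X.shape)  # (60, 253)
```

Alternatives worth trying if concatenation underperforms: scale or whiten each block separately before concatenation, or use late fusion (one SVM per block with combined decision values, or multiple-kernel learning), which prevents the 128-dimensional block from swamping the 25-dimensional one.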
One of the greater challenges in the U.S. (and increasingly elsewhere) is the growing burden of costs associated with health-damaging but modifiable behaviors. I was recently asked by a program director of a national funding agency to suggest researchers in academia who are working on computational predictive models appropriate to capture health-related behaviors and behavior change, and so I am seeking your help in producing such a list.
It seems to me that the area of behavior change in real-world settings is ripe for predictive models after more than 100 years of behavioral science and clinical studies, plus all the recent progress in cognitive models, neurocomputational models, user models, predictive analytics, machine learning, tutoring systems, smart homes, ubiquitous computing, and more.
Please send your suggestions to pirolli@parc.com
It would be great if there were a framework for the Linux environment.
Say I have programmed something on a computer that performs creative thinking; its output could be an idea, a plan, or any artifact. I want to know how we would evaluate this artifact as being creative. Are there any standards for measuring creativity, like the Turing test?
Are questions about death, nothingness, the world's and life's origins, etc. linked to a defined anatomical brain zone (or several)? And in that case, is stimulation of these zones known? Would that contribute to an explanation of people's various reactions when faced with those questions?
Most people I know have an idea of what is known by the general public within their culture, but I've rarely seen someone assert the opposite. Is it possible that people are aware of what everyone knows but are not sure about things which they don't know? Everyone rightly assumes that facts highly specific to one's daily life are not common knowledge, but as one generalizes I've noticed people become less sure about shared knowledge. This becomes clear in the assumptions people make when telling stories.
Has anyone else noticed this phenomenon? Is this something every other person knows and I've just missed? haha
Considering the embodiment process of an organism, in which autopoiesis plays its role across all the body's cells, for Varela and Maturana, "a cognitive system is a system whose organization defines a domain of interactions in which it can act with relevance to the maintenance of itself." (This domain of interactions seems to be the sufficient condition for a system to be considered a cognitive system, so "neurality" seems not to be necessary...)
Most of the concepts investigated are assumed to be consciously available. But what about all the unconscious processes?
In the context of intelligence, the discussion of the mind-brain relation is the fundamental issue in modern cognitive science. Thus, the present discussion stresses the relationship between the mind and the brain from a cognitive point of view.
A dynamic of consciousness, "discovery" is one of the essential driving forces of living entities. Even basic primate behaviors such as the drives for food, sex, and social interplay can be said to be based in the act of "discovery". So, what is the nature of this drive? Could a machine be instilled with it? Is it a simple matter of novelty, or is it a factor of "learning"? It seems to be a blend of feeling and logic resulting in the development of conceptualization, often leading to further investigation or the parsing of root causes (reflection).
Can the impact of learning a second or third language on people's mindset and cognition be scientifically proven or not?