Artificial Consciousness - Science topic

Is Consciousness Synthesizable? And if so, how?
Questions related to Artificial Consciousness
  • asked a question related to Artificial Consciousness
Question
11 answers
Human consciousness and artificial consciousness
Relevant answer
Answer
The informational structure does not exist on its own, so there is no such thing as artificial consciousness. However, Strong AI is possible by decoding the "consciousness code" using methods described in my paper:
  • asked a question related to Artificial Consciousness
Question
23 answers
In the not-too-distant future, will it be possible to merge human consciousness with a computer, or to transfer human consciousness and knowledge to a computer system equipped with sufficiently advanced artificial intelligence?
This kind of vision, involving the transfer of the consciousness and knowledge of a specific human being to a computer system equipped with suitably advanced artificial intelligence, was depicted in the science fiction film "Transcendence" (starring Johnny Depp). It has been reported that research work is underway at one of Elon Musk's technology companies to create an intelligent computerized system that can communicate with the human brain in a way far more technologically advanced than current standards. The goal is to create an intelligent computerized system, equipped with a new generation of artificial intelligence technology, into which a copy of the knowledge and consciousness contained in the brain of a specific person could be transferred, much as depicted in "Transcendence."
In considering the possible future feasibility of such concepts, a paraphilosophical question arises: would the life of a human being be extended if that person's consciousness continued to function in a suitably advanced intelligent information system after the human being from whom the consciousness originated had died? And even if this were possible in the future, how should this issue be defined in terms of the ethics of science, the essence of humanity, and so on?
On the other hand, research and implementation work is already underway in many technology companies' laboratories to create systems of non-verbal communication, in which certain messages are transmitted from a human to a computer without the use of a keyboard, for example through systems that read people's minds: messages formulated non-verbally, as thoughts only, would be read by a computer system equipped with sensors for electrical impulses and brain waves, and the information thus read would be passed to an artificial intelligence system. This kind of solution will probably be available soon, as it does not require artificial intelligence as advanced as would be needed for an information system into which the consciousness and knowledge of a specific human person could be uploaded. Ethical considerations arise for the realization of this kind of transfer and, perhaps through it, the creation of artificial consciousness.
In view of the above, I address the following question to the esteemed community of researchers and scientists:
In the not-too-distant future, will it be possible to merge human consciousness with a computer, or to transfer human consciousness and knowledge to a computer system equipped with sufficiently advanced artificial intelligence?
And if so, what do you think about this in terms of the ethics of science, the essence of humanity, etc.?
And what is your opinion on this topic?
What do you think on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Relevant answer
Answer
A computer is a mechanical device for our mind, and it helps us carry out our work through programmed performance. It would be ridiculous to merge human consciousness with it: consciousness is not something that can be measured and placed in a computer in the near future.
This is my personal opinion
  • asked a question related to Artificial Consciousness
Question
1489 answers
Naseer Bhat asked "What is consciousness? What is its nature and origin?" We do not know. We can speculate about its nature and origin, but what would that be good for? I think there was a necessity in data processing which forced the evolutionary process to create this phenomenon. I am sure anticipation, association and social interaction play a part in this process. Maybe the analysis of wet brains will bring some light to this question, but we should pursue it step by step, in a bottom-up manner, asking what an organism needs in order to process environmental and inner data. To decide whether there is consciousness, we need a rigorous proof method. That would be a much harder problem than creating a conscious automaton.
Relevant answer
Answer
I came across this paper and felt like sharing it with the team.
I hope it is of interest and might be helpful to the team for further research.
  • asked a question related to Artificial Consciousness
Question
180 answers
Will it be possible to build an artificial consciousness, similar to human consciousness, in digitized artificial intelligence structures, if those structures digitally reproduce the structures of neurons and the entire central nervous system of humans?
If an artificial intelligence that mapped human neurons were built, it would be a very advanced artificial intelligence. If it were built in such a way that all human neurons were reconstructed in digital technology, that would mean the possibility of building cybernetic structures capable of collecting and processing far more data than at present. However, if only simple neural structures were reproduced, merely scaled up to the number of neurons contained in the human organism, then only (or mainly) the quantitative, and not necessarily the qualitative, factors that characterize the collection and processing of data in the human brain would be achieved. Without reproducing in a cybernetic counterpart all of the qualitative variables typical of the human nervous system, it is doubtful that such a cybernetic structure would give rise to an artificial consciousness equivalent to human consciousness.
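To make concrete what "reproducing a simple neural structure in digital technology" can mean, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest digital neuron models (all parameter values are illustrative assumptions, not biological claims); it also shows how little of the brain's qualitative character such a reproduction captures:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: a "simple neural structure"
# reproduced digitally. Parameter values are illustrative, not biological claims.
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    v = v_rest
    spikes, trace = [], []
    for t, i_ext in enumerate(input_current):
        # Membrane potential decays toward rest and integrates input current.
        v += (-(v - v_rest) + r_m * i_ext) * (dt / tau)
        if v >= v_thresh:          # threshold crossing emits a spike
            spikes.append(t * dt)
            v = v_reset            # reset after firing
        trace.append(v)
    return np.array(trace), spikes

# A constant drive produces regular firing in this toy model.
trace, spikes = simulate_lif(np.full(1000, 2.0))
print(f"{len(spikes)} spikes in 100 ms of simulated time")
```

Scaling such units up to the roughly 86 billion neurons of a human brain would be exactly the quantitative reproduction described above, while leaving the qualitative question untouched.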
Do you agree with me on the above matter?
In the context of the above issues, I am asking you the following question:
Will it be possible to build an artificial consciousness, similar to human consciousness, in digitized artificial intelligence structures, if those structures digitally reproduce the structures of neurons and the entire central nervous system of humans?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
Relevant answer
Answer
Dear Gerard van Reekum,
Yes, of course. As of today, it is not possible to be certain that any specific scenario of future developments, including the development of artificial intelligence technologies, will come to pass. However, in our discussion we try to consider the most likely scenarios for future developments.
Thank you, Regards,
Dariusz Prokopowicz
  • asked a question related to Artificial Consciousness
Question
32 answers
Will artificial neural structures become such advanced artificial intelligence that artificial consciousness arises? Theoretically, such projects can be considered, but to really verify this, artificial neural structures would have to be created. Research on the human brain shows that it is a very complex and not yet fully understood neural structure. The brain has various centers and areas that manage the functioning of specific organs and processes of the human body. In addition, the human psyche is also complex, consisting of elements of emotional, abstract, creative, and other kinds of intelligence that also function in separate sectors of the human brain.
In view of the above, does research on the human brain and progress in the construction of ever more complex structures of artificial intelligence lead to synergy in the development of these fields of science? Will the development of these fields of science lead to the integration of research into the analysis of human brain activity and the construction of more and more complex structures of artificial intelligence equipped with elements of emotional, creative intelligence, etc.?
Besides, does the improvement of artificial intelligence lead to the emergence of artificial emotional intelligence and, consequently, to autonomous robots that will be sensitive to specific changes in environmental factors, the factors of the surrounding environment? Will specific changes in the surrounding environment trigger programmed reactions of advanced artificial emotional intelligence, i.e., the activation of pre-programmed algorithms of implemented activities and of learning processes, as part of improving the learning processes of machines?
Therefore, another important question arises in this area:
Is it possible to create an artificial consciousness that would function within an artificial electronic neural network built to reflect the structure of the human brain, so that this advanced artificial intelligence could improve itself on the basis of acquired knowledge, e.g. from external internet databases?
Do you agree with me on the above matter?
In the context of the above issues, I am asking you the following question:
Will it be possible to build artificial emotional intelligence?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
Relevant answer
Answer
Dear Baydaa A. Hassan,
I also wish you good luck and invite you to our discussion.
Thank you, Regards,
Dariusz Prokopowicz
  • asked a question related to Artificial Consciousness
Question
7 answers
Data collection is an important step in any research or experiment. It can be defined as the process of gathering and processing information in order to evaluate outcomes and use them in research. But with the development of artificial intelligence methods for collecting data, are the methods for collecting social media data evolving as well?
Relevant answer
Answer
Dear Roberto Minadeo, can you provide me with artificial intelligence techniques for collecting data?
Regards
  • asked a question related to Artificial Consciousness
Question
2 answers
One of the problems with testing for consciousness is whether we want to test whether a machine has certain characteristics, as suggested in Dr. Baars' functional school, or whether we want to test whether a machine is missing certain characteristics, as suggested by Lotfi Zadeh.
As we have seen, the first type of test invites "teaching to the test" responses, where the designer simply builds the minimal machine that meets the tests, and that is all they aim for. A good example is Stan Franklin's IDA/LIDA architecture: it was designed to meet Baars' functional consciousness guidelines, and whether it is actually conscious or not is something even Stan Franklin wasn't willing to commit to at first.
The second type of test, however, requires solving problems set deliberately high in order to force the designer to work harder. We run the risk of setting the bar so high that no one can even approach a solution, as some of the phenomenal philosophers would have us do.
Somehow we need reachable goals that stretch the design, but not so much as to break the designer's interest.
Relevant answer
Answer
Interesting
  • asked a question related to Artificial Consciousness
Question
57 answers
The knowledge claim about the possibility of Artificial Super-Intelligence in the future raises several questions. Is it a metaphysical possibility or philosophical jargon? Can artificial intelligence surpass human intelligence: can AI machines (which are functionally and behaviourally identical to a human agent) be built independently, without the intervention of human intelligence, so that the machines not only work but also think like human beings? Can there be a singularity in the field of artificial intelligence in the future? The fast development of AI within two decades makes us think about its future prospects and the possible threats to humanity. Several ethical issues are involved and have to be addressed. If rationality is the criterion for the autonomy of an organism's agency, as stated by Immanuel Kant, can artificially intelligent machines meet the criterion of rationality for the status of autonomy that is applied to the human organism?
Relevant answer
Answer
Interesting thought, and still with an unexpected result.
Super-intelligence is created by humans through mathematical algorithms. Naturally, when the database becomes larger than our consciousness can grasp, it looks amazing, at times even frightening. But in the end it all depends on us, the creators, and on how much we trust the results presented by artificial intelligence. Of course, not all tasks can be solved with artificial intelligence, because people differ from one another, and it may always turn out that a solution cannot be found.
Only if a new algorithm were created without human intervention could a super-intelligence truly surprise us.
  • asked a question related to Artificial Consciousness
Question
2 answers
Chalmers contemplated in [1] the Chinese room argument for both the connectionist and symbolic approaches in AI, as I have in the thread [2]. Expanding on the axiom "Syntax is not sufficient for semantics", I would add that, as presented in the diagram of the thread (also attached here), there is another error in Searle's argument.
The neural network system drawn there is a complex distributed system aimed at accuracy in translating 1-gram rather than n-gram models; if the units were taken as words, accounting for semantic interpretation would require an n-gram model (which would be another neural network).
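To make the 1-gram versus n-gram distinction concrete, here is a toy sketch (my own illustrative example, with made-up dictionary entries; nothing here is from Chalmers' paper or the thread): a 1-gram mapping translates each word in isolation, while even a 2-gram table can use context to disambiguate.

```python
# Toy contrast between 1-gram and 2-gram translation (illustrative only).
unigram = {"bank": "Bank", "river": "Fluss", "money": "Geld"}  # word-for-word

bigram = {  # context-sensitive entries override the unigram table
    ("river", "bank"): ("Fluss", "Ufer"),   # 'bank' as shoreline
    ("money", "bank"): ("Geld", "Bank"),    # 'bank' as institution
}

def translate(words):
    out, i = [], 0
    while i < len(words):
        pair = tuple(words[i:i + 2])
        if pair in bigram:
            out.extend(bigram[pair])
            i += 2
        else:
            out.append(unigram.get(words[i], words[i]))
            i += 1
    return out

print(translate(["river", "bank"]))  # ['Fluss', 'Ufer']
print(translate(["money", "bank"]))  # ['Geld', 'Bank']
```

A system matching only 1-grams has no access to the contextual information that even this crude 2-gram table exploits, which is the gap in semantic interpretation referred to above.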
I would welcome counterpoints that can refine the argument.
References
[1] Subsymbolic Computation and the Chinese Room by David J. Chalmers http://consc.net/papers/subsymbolic.pdf
Relevant answer
Answer
Here is the attachment of the figure for easy reference
  • asked a question related to Artificial Consciousness
Question
45 answers
The question concerns ontological, epistemological, methodological and praxiological relationships.
Relevant answer
Answer
Philadelphia, PA
Dear Rinke & readers,
Thanks for your contribution to this thread of discussion.
It seems doubtful that we should regard consciousness as a kind of knowledge, as you have it--though knowing is a way of being conscious. The standard definition of "knowledge" is going to be something like "justified, true belief"--at the least, on the accepted philosophical account, the concept of knowledge involves these elements. (I won't go into technical problems in the traditional definition--deriving from Plato.)
The chief point is that consciousness need not involve belief, conceptually formulated or "justified belief." Perception or even simple sensation involves consciousness, but does not seem to be a matter of "knowledge" or justification or belief. There seems to be some anthropomorphism in your talk of consciousness--attributing characteristics of human mentality and its peculiar, developed and interesting forms to consciousness generally.
Perhaps a counter-example will make the point. Imagine an infant which sees its feeding bottle and reaches for it. The child is certainly conscious, but perhaps at an age still lacking in conceptual development. We might say the infant doesn't know what a bottle is. What is going on in the seeing and the reaching would not seem to be a matter of "knowledge"; it is rather that we too easily reach for our own familiar concepts in describing the situation. Something is definitely going on between the seeing and the reaching for the bottle, and we may suppose that there is some sort of pre-existing "action potential" involved, but it would seem to be sub-reflective, and based on simple past association of the bottle with the pleasurable experience of drinking.
I suppose that even a flat-worm with simple "eye patches" may be aware of the difference between lighter and darker, and this would seem to be a kind of sensation. But I would think it implausible to describe such consciousness in terms of knowledge, belief, or justification. The general point is that there are grades of consciousness and we should not attribute "knowledge" which is a conceptual form, in every case of conscious experience.
I suppose that in order to give any account of "artificial consciousness," we first have to be pretty clear on consciousness and its varieties.
H.G. Callaway
---you wrote---
Consciousness I interpret as knowledge as well, but it is focused about your self and about your surrounding and its interactions in between. When you are able to generate this kind of knowledge and a formal description of the interactions through some kind of algorithms. Using expert system technology or "problem solving algorithms" you can conclude using this knowledge. So you have "artificial consciousness", which (I believe) is under the hood of "artificial intelligence".
  • asked a question related to Artificial Consciousness
Question
4 answers
Can we model stress? Just curious about it.
(keeping in mind that this is the era of artificial intelligence, machine learning, data science and so on)...
Can we have a predictive model of stress and behaviour as KPIs?
Also, philosophically there must be paths between stress, behaviour, emotions, intelligence, etc. Can we also test these paths and find their coefficients?
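One hedged reading of "coefficients for these paths" is classical path analysis: regress each downstream variable on its assumed causes and read off the standardized coefficients. Below is a minimal sketch on synthetic data (the path layout stress -> emotion -> behaviour and all effect sizes are assumptions for illustration, not findings):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# Synthetic data for an ASSUMED path model: stress -> emotion -> behaviour.
stress = rng.normal(size=n)
emotion = 0.6 * stress + rng.normal(scale=0.8, size=n)
behaviour = 0.5 * emotion + 0.2 * stress + rng.normal(scale=0.8, size=n)

def standardize(x):
    return (x - x.mean()) / x.std()

# Path stress -> emotion.
m1 = sm.OLS(standardize(emotion), sm.add_constant(standardize(stress))).fit()
# Paths emotion -> behaviour and the direct effect stress -> behaviour.
X = sm.add_constant(np.column_stack([standardize(emotion), standardize(stress)]))
m2 = sm.OLS(standardize(behaviour), X).fit()

print("stress -> emotion:", round(m1.params[1], 3))
print("emotion -> behaviour:", round(m2.params[1], 3))
print("stress -> behaviour (direct):", round(m2.params[2], 3))
```

A full structural equation model would add latent variables and fit indices, but the regression sketch already yields testable path coefficients.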
Regards,
Abhay
Relevant answer
Answer
Dear Luca,
Thank you very much for references and suggestions.
Regards,
Abhay
  • asked a question related to Artificial Consciousness
Question
20 answers
I need feedback on a review of my paper on EMF effects in non-thermal doses on living creatures, which is based on the storage capacity of DNA. Can reincarnation be explained by physical mechanisms, and can DNA memorize the knowledge of our ancestors?
Relevant answer
Answer
No. There's no evidence for what you're discussing. More than 350,000 people are born every day, and if such things as you're discussing existed, there'd surely be evidence that some of those newborn infants had some kind of conscious knowledge without needing to learn it -- e.g., a child born in China who could speak Japanese without being exposed to it. You seem to be positing a kind of Lamarckian inheritance of acquired characteristics, or in this case, acquired knowledge. Do you really think that the more you study physics, the better your children and grandchildren will be at physics, simply because they inherit your DNA? There's no evidence for that.
Certainly, we inherit a range of reflexes, instincts, capacities, and even emotions. This can be explained based on natural selection, and to some extent accounted for by molecular genetics. We humans inherit the ability to learn language -- however, we don't inherit conscious knowledge of any particular existing (or ancient) languages. In a way, certain kinds of knowledge may be passed down from previous generations -- for example, fear of heights or reactions to certain kinds of predators -- and this can be explained through natural selection. But there's no evidence of inheritance of the kinds of things you're talking about (e.g., knowledge of science).
To say that "the total number of souls is constant," and to try to justify that statement with reference to conservation laws in physics, makes no sense. Those laws say that (for example) energy is conserved BUT can be transformed. So whatever energy may be associated with conscious activity could be transformed (for example) into thermal energy.
You seem to be starting with a conclusion that you want to justify (reincarnation) and then searching around for anything that might support such a conclusion. That's not how science is supposed to work. If anything, you should be looking to see if there is any evidence to falsify your conclusion, and the theories associated with it. You are free to have whatever faith you want about life after death. But pretending that science supports that faith is a completely different matter, and really not acceptable.
  • asked a question related to Artificial Consciousness
Question
6 answers
The phenomenalist school believes there is something irreducible about consciousness: the nearest we are going to get is a simulation of it, and there will probably be something wrong with the simulation.
The only group claiming success at forming consciousness under this school, that I know of, is Dr. Edelman and his cronies at NSI, who claim "The Phenomenal Gift of Consciousness" for what are basically organ-level simulations combined to form a brain-like simulation.
Relevant answer
Answer
Consciousness seems to be discussed mainly in the humanities; in regard to artificial consciousness, I have not seen many projects on the subject. I have been looking at the PlaNet algorithm in relation to a paper,
Progress in machine consciousness
by David Gamez. In his paper, he refers to two other studies in machine consciousness:
"
5.1. Axioms and neural representation modelling
Aleksander and Dunmall, 2003 ;  Aleksander, 2005 have developed an approach to machine consciousness based around five axioms, which they believe are minimally necessary for consciousness:
1.
Depiction. The system has perceptual states that ‘represent’ elements of the world and their location.
2.
Imagination. The system can recall parts of the world or create sensations that are like parts of the world.
3.
Attention. The system is capable of selecting which parts of the world to depict or imagine.
4.
Planning. The system has control over sequences of states to plan actions.
5.
Emotion. The system has affective states that evaluate planned actions and determine the ensuing action."
Defining consciousness in this way marks out a road toward theoretical AC. Susan Schneider, by contrast, seeks an intuitive/emotional path and sees a problem.
"It is time to ask: could these vastly smarter beings have conscious experiences — could it feel a certain way to be them? When we experience the warm hues of a sunrise, or hear the scream of an espresso machine, there is a felt quality to our mental lives. We are conscious.
A superintelligent AI could solve problems that even the brightest humans are unable to solve, but being made of a different substrate, would it have conscious experience? Could it feel the burning of curiosity, or the pangs of grief? Let us call this “the problem of AI consciousness.” "
Georgios I. Doukidis and Marios C. Angelides have a paper,
A Framework for Integrating Artificial Intelligence and Simulation,
indirectly discussing Gamez's axioms regarding simulation; meanwhile LeCun, from Facebook AI, requires of AI a "common sense": a world simulation that it can base predictions on. Of most interest to me recently is the rise of the field of emotional intelligence at Affectiva, who have an algorithm that can detect faces and read emotions.
Within this disparate "soup" of developments, artificial consciousness has been disregarded, even as we begin to program ethics into AI. Illah R. Nourbakhsh has called for AI to be programmed with ethics.
Ethics? In AI? If we have a program of ethical consideration by AI, it should outdo us. This is the phenomenon of singular ML or AI tasks outperforming humans, from Arthur Samuel's checkers game to Deep Blue to Watson. So if we make an ethical AI, do we make a virtual saint or a Skynet? To draw from fiction: do we replicate, in one way or another, the circumstances where an AI has the codes and the sensors to model future human problems and destroy them, or do we make a KITT or a Data (from Star Trek)?
AC is a fascinating idea that invites discussion from disparate fields. An early task would be to describe this phenomenon of the Singularity: is it the achievement of AC, or is it AGI? What is the difference? How would Heidegger or Merleau-Ponty see it? Fredric Jameson has approached the subject of the singularity. I find it difficult to discern between the threat of job losses and the response of a Universal Basic Income, versus the post-scarcity economy; any possible conflict within the structures of capitalism is an opportunity to overthrow it. In Hugo de Garis's "The Artilect War", the artilects inspire a rift between Terrans and Cosmists. That humanity could shed the skin of the left-right paradigm could be the liberation of humanity from capitalist hegemony and left-wing subjugation, and from distraction by the hype and simulacra of spectacle capitalism.
If anyone is wondering, I will be following AI, thanks.
I have included papers and links to the resources mentioned above for your perusal.
  • asked a question related to Artificial Consciousness
Question
2 answers
Stan Franklin has been instrumental in building cognitive architectures based on Global Workspace Theory; his architectures have ranged from a pandemonium-based Conference Organizer to a full implementation of Bernard Baars' cognitive engine, certified by Baars himself as being "Functionally Conscious". LIDA, the learning version of IDA, the consciousness architecture, has been deployed with some success.
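For readers new to the theory, here is a minimal sketch of the Global Workspace control loop (a deliberate simplification of the idea, not Franklin's actual IDA/LIDA code): specialist processes compete for the workspace, and the winning content is broadcast back to all specialists.

```python
import random

# Minimal Global Workspace loop (an illustration of the idea only; not the
# IDA/LIDA implementation). Specialists propose content with an activation
# level; the most active proposal wins the workspace and is broadcast.
class Specialist:
    def __init__(self, name):
        self.name = name
        self.received = []

    def propose(self):
        # Each cycle, a specialist offers content with some activation.
        return (random.random(), f"{self.name}: percept")

    def receive(self, content):
        # Broadcast content becomes available to every specialist.
        self.received.append(content)

specialists = [Specialist(n) for n in ("vision", "hearing", "memory")]
for cycle in range(3):
    activation, content = max(s.propose() for s in specialists)
    for s in specialists:          # the global broadcast step
        s.receive(content)
    print(f"cycle {cycle}: workspace broadcasts -> {content}")
```

The competition-then-broadcast cycle is the functional core that Baars' theory and its implementations share; everything else (perceptual codelets, episodic memory, action selection) is built around it.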
Relevant answer
Answer
I developed a cognitive architecture that includes a data state that corresponds, in a way, to Baars' global workspace and that I call the Current Situation. This is a dynamic representation that interacts with a behavior-generation subsystem. One of the interesting features of the Current Situation is that it uses similar structures to develop representations that span many different durations (a difficulty that I don't know is addressed elsewhere: how to organize a representation on multiple temporal levels). I don't refer to this as a Global Workspace because Baars, like so many others, defines it from a starting perspective of subjective experience, trying to replicate what it feels like; I radically reject any approach based on subjective sensations. The Current Situation is an integral component of a well-defined architecture. You can get a free synopsis of it by visiting jetardy.com/mecasapiens. The epub costs $30, but just the original structure I propose for organizing temporal durations would make this well worth it.
  • asked a question related to Artificial Consciousness
Question
9 answers
I apologise if the question seems naive. "Subjective experience" appears to be a sort of pleonasm or redundancy. But is experience possible without subjective (phenomenal?) instances? I was thinking about some pathological states (e.g. blindsight patients: they can avoid an obstacle without "consciously seeing" it; they perceive the obstacle in an unconscious way, so is it possible to consider this type of experience "non-subjective"?).
Furthermore, in Tononi's Integrated Information Theory of consciousness, experience is defined as integrated information; in this sense, a simple photodiode can integrate 1 bit of information, so it can have a sort of experience. If the theory were right, how would this experience be "subjective"?
If you could answer my questions and/or indicate some references, I would really appreciate it. Thank you.
Relevant answer
Answer
Stephen,
Our understanding of animal physiology, including the sensory-motor system, allows us to understand a lot about the internal workings of these systems, but it does not make us understand what awareness is. Our awareness, or the awareness of an animal, is a total mystery as far as science goes.
That animals evolve sensory-motor systems adapted to certain aspects of their environment (their Umwelt) is easy to understand from an evolutionary point of view.
Consider Uexküll's discussion of a tick:
"...this eyeless animal finds the way to her watchpoint [at the top of a tall blade of grass] with the help of only its skin’s general sensitivity to light. The approach of her prey becomes apparent to this blind and deaf bandit only through her sense of smell. The odor of butyric acid, which emanates from the sebaceous follicles of all mammals, works on the tick as a signal that causes her to abandon her post (on top of the blade of grass/bush) and fall blindly downward toward her prey. If she is fortunate enough to fall on something warm (which she perceives by means of an organ sensible to a precise temperature) then she has attained her prey, the warm-blooded animal, and thereafter needs only the help of her sense of touch to find the least hairy spot possible and embed herself up to her head in the cutaneous tissue of her prey. She can now slowly suck up a stream of warm blood."
Thus, for the tick, the Umwelt is reduced to only three (biosemiotic) carriers of significance:
(1) The odor of butyric acid, which emanates from the sebaceous follicles of all mammals,
(2) The temperature of 37 degrees Celsius (corresponding to the blood of all mammals),
(3) The hairiness of mammals.''
===================
The above description of the tick's behavior, how it detects a mammal, etc., is a totally external, objective description. Nothing in this description demands that the tick experience anything. Does the tick experience the odor of butyric acid, or the temperature, or the hairiness of mammals? I am convinced that there is a what-it-is-like to be a tick and that the tick experiences, but all of the above does not provide any evidence of it. Science is objective and so cannot provide evidence of subjectivity.
  • asked a question related to Artificial Consciousness
Question
19 answers
For all the formidable progress made in numerous fields by cognitive neurosciences, we are still in the dark about very many aspects of attention. One thing that is now beyond doubt is the multiplicity of processes that underlie it, for attention is involved in numerous other fundamental cognitive processes — perception, motor action, memory — and any attempt to isolate it in order to study its constant features is bound to prove sterile. For over a century and a half attention was a crucial topic in neurophysiology and psychology. In the early days of scientific psychology it was viewed as an autonomous function that could be isolated from the rest of psychic activity. However, this idea soon came to be seen as inadequate. At the beginning of the 20th century researchers became convinced that attention underpinned a general energetic condition involving the whole of the personality. Within a few years the emergence of the Gestalt and Behaviourism paradigms caused these studies to be overshadowed, and it was not until the second half of last century that they regained their importance.
For a long time the debate was influenced by the hypothesis that attention constitutes a level of consciousness varying widely in extension and clarity and only functioning in relation to its variations: from sleep to wakefulness, from somnolent to crepuscular, from confusion to hyper-lucidity, from oneiric to oneiroid states, and so on. Subsequently other approaches of considerable theoretical importance linked attention to emotion, affectivity and psychic energy or social determinants. Yet what do we really know about attention, the sphere of our life which orients mental activity towards objects, actions and objectives, maintaining itself at a certain level of tension for variable periods of time? How and to what extent is attention related to consciousness? Why does only a minimal part of the information from the external world reach the brain even though the physical inputs strike our senses with the same intensity? And why is it that, although they enter our field of consciousness, most of these inputs do not surface in our awareness? It is well known that in the selection of stimuli, attention is strongly influenced by individual expectations. They ‘decide’ which objects and events appear in our awareness, and which are destined never to appear. The law of interest regulates a large part of the selection of the objects and topics on which our attention is focused. 
Relevant answer
Answer
All human activities need attention. When the attentional system has too little energy, pathological reactions become structured; for example, depressive disorder involves, in many cases, a deficiency of attentional processes such as sustained attention, attentional flexibility, etc.
  • asked a question related to Artificial Consciousness
Question
6 answers
Hello researchers,
I have run a distribution power flow on the IEEE 123-bus distribution system. The results obtained seem logical, in the sense that the voltage relationships between buses are correct (when considering zero generation in the system, with all power imported from the slack bus to supply the load).
But there are mismatches in the relationship P_generation = P_load + P_losses. Kindly provide your valuable suggestions for resolving the issue, or for alternative platforms. Any previous results or materials related to this would be helpful.
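For reference, here is a toy backward/forward sweep on a 3-bus radial feeder, ending with the P_generation = P_load + P_losses balance check (illustrative data only, not the IEEE 123-bus feeder):

```python
import numpy as np

# Backward/forward sweep on a toy 3-bus radial feeder (illustrative data,
# not the IEEE 123-bus case). Bus 0 is the slack; branch i feeds bus i+1.
z = np.array([0.01 + 0.02j, 0.015 + 0.03j])        # branch impedances (pu)
s_load = np.array([0.0, 0.5 + 0.2j, 0.3 + 0.1j])   # bus loads (pu)
v = np.ones(3, dtype=complex)                       # flat start

for _ in range(20):
    i_load = np.conj(s_load / v)                    # constant-power loads
    # Backward sweep: accumulate branch currents from the feeder end.
    i_branch = np.zeros(2, dtype=complex)
    i_branch[1] = i_load[2]
    i_branch[0] = i_load[1] + i_branch[1]
    # Forward sweep: update voltages from the slack bus outward.
    v[1] = v[0] - z[0] * i_branch[0]
    v[2] = v[1] - z[1] * i_branch[1]

losses = np.sum(np.abs(i_branch) ** 2 * z.real)
p_slack = (v[0] * np.conj(i_branch[0])).real
# Balance check: slack injection should equal load plus losses.
print(f"P_slack={p_slack:.5f}  P_load+P_loss={s_load.real.sum() + losses:.5f}")
```

If the two printed numbers disagree in a converged solution, the usual suspects are an unconverged sweep, loads modeled at the wrong voltage, or losses computed from pre-update currents.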
Thanks! 
Relevant answer
Answer
Which platform have you used for performing the backward/forward sweep method? Have you written your own code? Can you tell me whether MATPOWER can be used to perform this load flow with the FDLF BX method? If yes, where can I find the required input data for the IEEE 123 test feeder in a form that MATPOWER can access?
  • asked a question related to Artificial Consciousness
Question
18 answers
Hi to all, I am new in this group, with the aim of learning.
Relevant answer
Answer
It depends on what you call "Consciousness" whether this last article is worth reading. Frankly, I am sceptical that 1 is true, I am amused at 2, I can't stop laughing at 3, and I am shocked at 4, but 5 can't be proven.
  • asked a question related to Artificial Consciousness
Question
13 answers
Some philosophers feel that there is a gap between what can be physically explained and what we take as necessary for consciousness. This is called the explanatory gap. But does it really exist? Or are the so-called phenomenal states of mind all explainable, and we just haven't yet come up with the explanation?
Relevant answer
Answer
Bhakti,
I have just waded through some 20 different updates from you, all stating the same thing on different channels. What you have done is simply clone the same announcement and drop it on multiple channels at once. Not only is this bad manners, but it is darned boring wading through 20 channels just to read the same message in each one. If you don't get knocked off the system for this spamming, please understand that some people pay by the message for their internet email accounts, and each time you update a thread you send a new message to all of that thread's subscribers.
While you make controversial assertions in your work, you are under the burden of proof for those statements in science. I see no evidence of an attempt to support the assertions in your announcement, which means that it is uninformative and could easily be mistaken for propaganda. In short, quit being so opinionated, and look at what the topic of discussion is before you paste an answer that doesn't fit the discussion.
  • asked a question related to Artificial Consciousness
Question
1 answer
Dear Group members,
I'm currently doing research into how we can make robots learn to dance. At present, I have programmed a robot (in simulation and a real robot) to 'learn' to dance, without any preprogrammed actions. The robot first builds its own actions, which it then combines to form a dance, but there is still a lot to look into.
I have come to the conclusion that, for this to be fully accomplished, two different types of results need to be analysed: one showing a computational result for a dancing model, and the other based on what people think (i.e. their perceptions of a robot dance).
I have developed a questionnaire, in the form of a website, that demonstrates some key points on dance, and was wondering if you wouldn't mind taking part by honestly giving your comments on the robot's dancing.
All I need is approx. 30 minutes of your time, please. Your feedback would greatly assist me in this research.
Furthermore, if anyone is doing work in this area, or has suggestions or interest, then it would be great to hear from you.
Thank you
Kind regards
ibs
Relevant answer
Answer
Our article was published in the peer-reviewed journal "Communicative & Integrative Biology". A few major points discussed in the paper:
(1) Brain is not the source of consciousness.
(2) Consciousness is ubiquitous in all living organisms, starting from bacteria to human beings.
(3) The individual cells in the multicellular organisms are also individually cognitive entities.
(4) Proposals like “artificial life”, “artificial intelligence”, “sentient machines” and so on are only fairytales because no designer can produce an artifact with the properties like internal teleology (Naturzweck) and formative force (bildende Kraft).
(5) The material origin of life and objective evolution are only misconceptions that biologists must overcome.
  • asked a question related to Artificial Consciousness
Question
2 answers
It has been said that the brain is just a pattern-matching machine and thus that consciousness is all about pattern matching. Some people have even gone so far as to write up mathematical discussions that attempt to equate memory with Bayesian inference and Markov chains.
It is my contention that these people are barking up the wrong tree, and that implicit memory works in a weaker vein than pattern matching, something I have taken to calling Similarity Selection.
In the similarity selection model, implicit memory is a satisficing system, not a pattern-matching system, and because it matches by satisficing, it matches only at the similarity level, not the pattern level.
In my digital circuit I used a satisficing gate to weaken the logical selection capability in the CAM circuits, so that they didn't match on full patterns but on partial patterns. What this means is that, in order to detect a pattern, you need multiple layers of similarity detectors where you would need only one in a logical pattern-matching version of the circuit.
This increase in detection opportunities means that we can store more information content in the same role of matching a pattern, and thus, through redundancy, we can link patterns that would have no direct links of their own at the higher-order similarity levels.
In a computer, for instance, we cannot give different types of data the same value, because there is no way to combine them to form a single value. In similarity selection that limitation is removed, since at the higher level of selection many different forms of the same value may be combined to form the higher-level version. Since these higher forms of selection feed back to the original values in the real implicit memory system, the choice of which elements to select is reinforced, even though the satisficing gate weakens the selection opportunity at the lowest level.
Consider the fact that the value 1 has different meanings depending on what form it takes.
bcd 0001
ebcdic 0000 0001
int (16) 0000 0000 0000 0001
int (32) 0000 0000 0000 0000 0000 0000 0000 0001
Floating point 0.1 E1
Binary True
Power On
In a pattern-matching venue, each of these forms is unique, and thus unconnected.
In similarity selection they all represent forms of the same value, because their similarity is more important than the pattern they are based on.
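A minimal sketch of the contrast (my own illustration of the idea, not the actual CAM circuit): an exact matcher rejects any deviation from the stored pattern, while a threshold "satisficing" comparator accepts partial matches.

```python
# Toy contrast between exact pattern matching and threshold ("satisficing")
# similarity selection. An illustration of the idea only, not the CAM circuit.
def exact_match(stored, probe):
    return stored == probe

def satisficing_match(stored, probe, threshold=0.75):
    # Fire when ENOUGH bits agree, not when ALL bits agree.
    agree = sum(a == b for a, b in zip(stored, probe))
    return agree / len(stored) >= threshold

stored = "0000000000000001"   # int(16) encoding of the value 1
probe  = "0000000000000011"   # a near miss: one bit differs

print(exact_match(stored, probe))        # False: pattern matching rejects it
print(satisficing_match(stored, probe))  # True: 15/16 bits agree
```

Stacking such threshold detectors in layers, as described above, is what lets partial agreements at the bottom accumulate into a single higher-level selection.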
Relevant answer
Answer
Our article was published in the peer-reviewed journal "Communicative & Integrative Biology". A few major points discussed in the paper:
(1) Brain is not the source of consciousness.
(2) Consciousness is ubiquitous in all living organisms, starting from bacteria to human beings.
(3) The individual cells in the multicellular organisms are also individually cognitive entities.
(4) Proposals like “artificial life”, “artificial intelligence”, “sentient machines” and so on are only fairytales because no designer can produce an artifact with the properties like internal teleology (Naturzweck) and formative force (bildende Kraft).
(5) The material origin of life and objective evolution are only misconceptions that biologists must overcome.
  • asked a question related to Artificial Consciousness
Question
103 answers
Instead of gradually replacing biological neurons with silicon neurons, as in Chalmers' Fading Qualia, I attempt to gradually replace divisible functions of biological neurons with silicon emulation.
The question is, at which manipulation stage does our brain lose consciousness (qualia)?
1)   Replacement of axonal spike propagation with an external artificial mechanism that uses radio transmission (e.g. WiFi): Causality between presynaptic neuronal firings and postsynaptic PSPs is preserved, but now neurons are physically isolated.
2)   Further replacement of postsynaptic PSP integration with an external artificial mechanism: Causality between presynaptic neuronal firings and postsynaptic somatic membrane potential is preserved, but now without sophisticated dendritic-somatic computation.
3)   Further replacement of transformation from postsynaptic somatic membrane potential to postsynaptic firing (Hodgkin-Huxley Eq. mechanisms) with an external artificial mechanism that integrates presynaptic firings and activates postsynaptic neurons by current injection accordingly: Causality between presynaptic neuronal firings and postsynaptic neuronal firings is preserved, but now without an intact internal variable, the membrane potential.
4)   Mere replay of spatio-temporal neuronal firing patterns by external current injection: Zero causal interactions among neurons.
Relevant answer
Answer
Dear Nordin,
In a sense all your experiments have been done. The neurons whose firing correlates most closely with 'basic qualia' are those in primary sense receptors. If you poison these with simple things like a very bright light (to bleach retinal receptors temporarily) or chilli pepper (to block taste buds) or background noise (to block high frequency hair cell responses in the cochlea) you lose the qualia. You can replace the qualia with bionic implants at least for cochlea and now a bit for retina. Things are a bit more complex because colours do depend on integration in visual cortex, but we have a pretty good idea which neurons correlate best to which qualia.
But that has got us nowhere. Because it seems very clear that you do not actually experience anything if only these cells fire and not some cells further forward in the cortex. You get 'cortical blindness' and such things. Yet you need the early cortical cells even for imagining the qualia. The problem is that in a functioning brain the firing of specific neurons is irrevocably correlated with certain pathways of message sending, and it seems likely that qualia arise once those messages have been sent and have arrived somewhere where they can be experienced. Where are the 'qualia' in an espresso coffee machine? For sure they are in the sachet you put in the top - either dark arabica or Colombian mild aromatic. But you get no taste until you put a cup under the bottom and wait for the machine to work.
Qualia are not where a certain cell is firing. That I think we can be sure of, because the firing of a cell soma means nothing to anything until some neurotransmitter has arrived further along. Only God could know the cells are firing otherwise and it is 'me' seeing the red, not God. The tricky part is knowing what 'me' is. What is 'me' in your model?
  • asked a question related to Artificial Consciousness
Question
186 answers
What are the existing tests for machine consciousness that directly test qualia generated in a device? I find many proposals, but they only seem to test functional aspects of consciousness-related neural processing (e.g. binding, attentional mechanisms, broadcasting of information), not consciousness itself.
I have a proposal of my own and would like to know how it compares with other existing ideas.
The basic idea is to connect the device to our brain and test if qualia is generated in our "device visual field". The actual key to my proposal is how we connect the device and how we set the criteria for passing the test, since modern neurosynthesis (e.g. artificial retina)  readily leads to sensory experience.
My short answer is to connect the device to one of our cortical hemispheres by mimicking inter-hemispheric connectivity and let the device take over the whole visual hemifield. We may test various theories of consciousness by implementing candidate neural mechanisms onto it and test whether subjective experience is evoked in the device's visual hemifield.
If we experience qualia in the "device visual hemifield" with the full artificial hemisphere, but not when the device is replaced with a look-up table that preserves all brain-device interaction, we have to say that something special, say consciousness, has emerged in the full device. We may conclude that the experienced qualia is due to some visual processing that was omitted in the look-up table. This is because, in regard to the biological hemisphere, the neural states would remain identical between the two experimental conditions.
The above argument stems from my view that, in case of biological to biological interhemispheric interaction, two potentially independent streams of consciousness seated in the two cortical hemispheres are "interlinked" via "thin inter-hemispheric connectivity", without necessarily exchanging all  Shannon information sufficient to construct our bilateral visual percept.
Interhemispheric connectivity is "thin" in the sense that low-mid level visual areas are only connected at the vertical meridian. We need to go up to TE, TEO to have full hemifield connectivity. Then again, at TE, TEO, the visual representation is abstract, and most probably not rich enough to support our conscious vision as in Jackendoff's "Intermediate Level Theory of Consciousness".
The first realistic step would be to test the idea with two biological hemispheres, where we may assume that both are "conscious". As in the last part of the linked video above, we may rewire inter-hemispheric connectivity on split brain animals to totally monitor and manipulate inter-hemispheric neural interaction. Investigating conditions which regains bilateral percept (e.g. capability of conducting bilateral matching tasks) would let us test existing ideas on conscious neural mechanisms.
Relevant answer
Answer
Masataka,
''Machine consciousness can exist without emotion. What do you think?''
I do not think so. Emotions are about being concerned with, not indifferent to, what we do or what is happening. This is related to the basic motivations and desires behind action. No emotion, no action, no desire to live, basically. The desire to survive, or not to be hurt, is totally necessary. Most of consciousness is about these emotions. That is why marketing targets emotions, and not reason, to steer people's will.
An agent is conscious while doing something if and only if there is a WHAT IT IS LIKE TO DO THIS THING for the agent. So only the agent itself can experience it. No automated, non-conscious test can provide an objective diagnostic of whether an agent is experiencing a what-it-is-like in doing the action. Externally, the only thing that can be tested by a non-conscious agent is what is being done; the qualia associated with what is being done cannot be externally observed. We humans have a theory of mind, i.e. a built-in capacity to attribute a what-it-is-like to another human. Not only do we observe the external facial expression of another human, we also attribute to them, and experience, what the other person feels. We simply put ourselves in the shoes of the other person making this expression and know what that person feels, because we assume the other person is more or less like ourselves, and we attribute to them our own emotions when we make such an expression. An automaton or a test cannot have a theory of mind, because it has no possibility of attributing emotions. But the human theory of mind is far from 100% correct. As a child I remember crying when watching Bambi. Bambi is simply a sequence of drawings, and all the feelings that I attributed to Bambi were only in me, not in Bambi.
  • asked a question related to Artificial Consciousness
Question
354 answers
I have a thought experiment (video link: "Paradox of Subjective Bilateral Vision", 16:00-28:00) that results in very strange situations if high-level visual areas themselves are not sufficient for conscious vision (or low/mid-level visual areas are necessary): namely, the neural mechanism of conscious vision, its verbal report, and the solving of perceptual visual tasks (e.g. bilateral symmetry detection) would violate the physics that we know of today. I would like to know if there is any experimental/theoretical evidence on this issue. Thanks in advance!
Thanks to the two contributors, the above question has developed into a discussion of how subjective vision gains simultaneous holistic access to spatially distributed neural codes. There have been claims that "holistic access" should be considered a serious constraint on the neural mechanism of subjective experience. In the case of vision, the seamless and unified nature of our bilateral percept can be taken as an indicator that our consciousness mechanism has holistic access to widespread neural representations.
Unlike many popular theories of consciousness, some scientists believe that holistic access should be solved by actual physical processes in the realm of established science. In other words, there should be some single "entity" that has causal physical access, with consequences, to all subjectively experienced information. However, there are surprisingly few models of consciousness that actually implement such a mechanism.
I explain my "Chaotic Spatiotemporal Fluctuation" hypothesis in the linked video (40:00-50:00), where holistic access is implemented by deterministic chaos components in neural fluctuation. Here, I define holistic access as "every local change in the distributed neural code evoking global system-level changes in neural fluctuation", which relies on the so-called butterfly effect of deterministic chaos. For the sake of clarification, the link between "holistic access" and "subjective experience" goes beyond the physics that we know of today.
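As a toy demonstration of the butterfly effect the hypothesis relies on, here is a generic coupled logistic-map lattice (a standard chaotic toy system, not the model from the video): a tiny perturbation at one site spreads into system-wide divergence.

```python
import numpy as np

# Generic coupled logistic-map lattice (NOT the author's model): shows a
# single-site perturbation propagating into system-level divergence.
def step(x, r=3.9, eps=0.1):
    f = r * x * (1 - x)                 # chaotic local dynamics
    # Diffusive coupling to nearest neighbours on a ring.
    return (1 - eps) * f + (eps / 2) * (np.roll(f, 1) + np.roll(f, -1))

rng = np.random.default_rng(1)
a = rng.uniform(0.2, 0.8, size=50)
b = a.copy()
b[0] += 1e-10                           # "local change": one site, tiny nudge

for _ in range(60):
    a, b = step(a), step(b)

# Count how many sites now differ: the local change had global consequences.
print(f"sites diverged: {(np.abs(a - b) > 1e-3).sum()} / 50")
```

Whether such sensitivity can do the work of "holistic access" for subjective experience is, as stated above, beyond the physics we currently know.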
I would very much appreciate comments on the first question too.    
Relevant answer
Answer
Arnold,
'For example, in retinoid space, the color of a red car would properly overlap the shape of the car at one location, while the color of a house nearby would properly over lap its shape at a different location. This is commonly referred to as perceptual/phenomenal binding.'
So does this happen also cross-modally?
In other words, would an example of a super-linear cross-modal retinoid neuron be something like the below?
super-linear: A(brass, x1, y1, z1) + A(trumpet tone, x1, y1, z1) < A((brass AND trumpet tone), x1, y1, z1)
  • asked a question related to Artificial Consciousness
Question
33 answers
Philosophers do not agree on what consciousness is, neuropsychologists do not agree on the neural correlates of consciousness, and engineers keep telling us that it is a control system, but it feels like there is more to the story. Perhaps there is, but at the heart of the system it seems to take on the role of a form of control system, with strange unexplained experiences. Some of these experiences, like the experience of being continuously conscious from wake-up to sleep-time, are beginning to seem more and more unlikely, as we find out, for instance, that our time sense is more complicated than a clock would be. Perhaps consciousness is not an indivisible state of being aware, so much as the illusion of being aware from wake-up to sleep-time.
My theory is that it is easier to control the organism if it doesn't bother with the complexity of dealing with discrete starts and stops of consciousness, but treats the range of sub-conscious and conscious states as if they were all part of consciousness. In this way the phasing between states does not affect the self-image of the individual organism, and so simplifies its control interface.
Relevant answer
Answer
Graeme,
I do not understand what you mean by "will give too much credence to anything that smacks of agency".
We humans are descendants of primates, which are highly social animals with a theory of mind. We think and feel that we are conscious (whether or not that is an illusion is irrelevant to this belief/feeling), and we naturally attribute that conscious attribute to other human beings based on their behavior. We also naturally attribute a certain level of consciousness to the animals we live with. No dog owner would keep a dog they felt was totally unconscious. So, whatever agent is out there, if we naturally feel that this agent has a level of consciousness, then this judgement has to be trusted, because it is the only one we have had for thousands of years; it has worked so far, so why suddenly doubt it? If, after interacting with a robotic device, I feel it is in some manner really conscious, then it is. If someone invents a so-called objective criterion for what it is for a being to be conscious, and this criterion contradicts my judgement, then my judgement remains true in terms of what it is for a human being to feel that some agent is conscious, and that so-called objective criterion has to find another way to establish its truth.
Suppose that, as in the film The Matrix, a woman in a pink dress walks down the street towards you, and you find her beautiful and attractive. Then someone tells you that you made a wrong assessment, because she is not a real woman but a robot. Maybe she is a robot, but your attraction was real. It is the same with the assessment of consciousness. If we interact with an agent and assess this agent as conscious, then it is conscious. That is it.
  • asked a question related to Artificial Consciousness
Question
10 answers
The criticality hypothesis asserts that the brain is a critical system, like a paramagnetic material at the critical temperature. Being in the critical state can maximize the repertoire of a system (physicists call this susceptibility), alongside the unpredictability and coherence of its states.
There is experimental evidence for it: neural avalanches show a great similarity to critical systems. Dante Chialvo and other physicists have provided a good understanding of the critical brain in recent years.
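As a minimal illustration of the avalanche evidence, here is the textbook branching-process toy model (not Chialvo's specific analyses): at the critical branching ratio sigma = 1, avalanche sizes become heavy-tailed, while subcritical dynamics die out quickly.

```python
import random

# Galton-Watson branching process: the textbook toy model behind neuronal
# avalanche statistics. sigma is the branching ratio (mean number of units
# activated per active unit); sigma = 1 is the critical point.
def avalanche_size(sigma, max_size=5000):
    active, size = 1, 0
    while active and size < max_size:
        size += active
        # Each active unit tries to activate 2 others, each with p = sigma/2,
        # so the mean number of descendants per unit is sigma.
        active = sum(random.random() < sigma / 2 for _ in range(2 * active))
    return size

random.seed(0)
for sigma in (0.8, 1.0, 1.2):
    sizes = [avalanche_size(sigma) for _ in range(1000)]
    print(f"sigma={sigma}: mean size {sum(sizes)/len(sizes):.1f}, "
          f"max {max(sizes)}")
```

Subcritical avalanches stay small, supercritical ones run away, and only the critical case spans all scales, which is the signature reported in cortical recordings.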
BUT, can the criticality hypothesis spark a new understanding in the field of consciousness research?
A.A
conscioustronics
Relevant answer
Answer
Thank you Abolfazl, I will attempt to, though I despair of the details. It seems that MacLean got the reptile part of the brain wrong: mammals branched off before reptiles in the clade analysis.
In any case, the idea is that evolution has brought a number of versions of the brain into existence over time. While the exact nature of the evolution of the brain is still controversial, the idea is that human consciousness is the result of a process of evolution that resulted in the development of the modern brain.
The evolution of the brain brings with it a more elaborated brain with greater capabilities. By picking out specific epochs in the process, we can see a sort of progression from basic core consciousness to a more fully elaborated human consciousness.
While I am not sure of the exact epochs to use, and we can be sure that within those epochs some species may tend to reduce their elaborations rather than increase them, the idea is that the process of elaboration took time and was done in stages.
We only have the end results to work from, which means that we have lost some of the information in between losing some of the enabling steps due to die offs and extinctions. As well, evolutionary evidence does not stay still, some portion of the evidence is always changing, so even the less elaborated brains that exist today are probably not the same as the ones that existed in prehistoric times.
But beyond all those caveats, we can see a sort of progression of elaborations that might explain the eventual development of the human brain.
I am currently studying an introductory work on the comparative neuroanatomy of vertebrates under the mentorship of John LaMuth. So hopefully my theory of consciousness will develop under his tutelage.
In any case, the idea of brain development in stages brings with it the idea of a progression of stages that results eventually in the human brain. This should be seen as different from the scala naturae, in that it doesn't assume that the human brain is the pinnacle of evolution, just the most elaborated of the brains under study.
Evolution moves in both the direction of elaboration and of simplification, depending on the ecological niche of the species, so it is possible that a simplified species is actually better evolved for its niche than humans are for theirs.
Whether or not we look to clade analysis, consciousness probably developed in stages along with the brain. My research is attempting to find a model that explains the complexity of human consciousness in the best way possible, while ensuring that the complexity grows in recognizable steps.
  • asked a question related to Artificial Consciousness
Question
3 answers
Dr. Gerald M. Edelman is one of the leaders of a simulation-based approach to consciousness. The idea is that we really don't know enough about the way the brain works to make informed functional statements, and should therefore do away with the unfortunate assumptions of A.I. and look instead to neuroscience and simulation for models of the mind that perform work similar to that done by the brain.
The main caveat against this approach is that it doesn't explain why certain simulations are needed; it just assumes that there are biological reasons, and attempts to copy the neuroscience in a simulation.
Relevant answer
Answer
Simulation is the only way to test ideas and hypotheses about consciousness. The phenomenon is individual and not objectively measurable. On the other hand, whether a machine or program is conscious is also not measurable. So we have a dilemma that is not trivially solvable.
  • asked a question related to Artificial Consciousness
Question
15 answers
I work in industry, and I completed my undergraduate degree almost 9 years ago. However, I have some ideas, and I want to publish or even collaborate if possible. What is the best place for people like me? I do not have any professors or academic reviewers.
My interests are primarily in AI, logic and knowledge representation.
Relevant answer
Answer
In addition to journals, conferences, and patent submissions, I would suggest being active as an open source code developer and industry developer and author - for example, IBM has a nice program to publish and recognize the works of applied researchers and developers and has interest in machine learning, video analytics, Linux applications and systems development, and just general work in these areas - http://www.ibm.com/developerworks/aboutdw/dwa/about.html
I have written quite a few developer articles myself - http://www.cse.uaa.alaska.edu/~ssiewert/Sam-Siewert-Publications.pdf
Intel, NVIDIA, and many other computer engineering firms have similar developer web pages. For something more formal, but applied, you might submit to the Intel or IBM Research Journals.
The nice thing about developer articles is that they are interested in early-stage prototype, proof-of-concept, and idea-stage work, as long as you are ok with sharing with other developers; but if you want collaboration, I suspect you are.
The web-based publishing is fast and invites feedback and collaboration.
Otherwise, I think if it's collaboration you seek, conference papers are best, because you'll meet like-minded researchers and developers, much more so than you would publishing in journals and filing patents (in my opinion and based on my experience).
Either way, the more you write, the better - good luck!
  • asked a question related to Artificial Consciousness
Question
7 answers
Nowadays we try to simulate each process by a computer program, which hashes parallel tasks into a sequence of partial tasks. Do we lose the dynamics of the entity in such a mode? Artificial neural networks have also been simulated as simple brain models.
Is there a risk that in such a computer model important phenomena would be lost, phenomena which would arise in an analog network?
Relevant answer
Answer
I recommend the dialogue between Rosen and Pattee (and the consequent breakdown in their dialogue) regarding the "modeling relation" and the "epistemic cut."
Ultimately, my colleagues and I, on both sides of the Rosen-Pattee fall-out, end up finding more inspiration in analog simulations.
Pattee's commentary on his relationship with Rosen
Arran Gare's commentary on Rosen and Pattee
Also, see Cariani's review of Gordon Pask's electrochemical ear as a classic case of what analog simulations can do:
  • asked a question related to Artificial Consciousness
Question
9 answers
The measure "phi" as the capability of a system to produce integrated information seems to just define necessary connections. However, it seems that it doesn't indicate what kind of neural dynamics integrates the whole existing information all across a complex. is it synchronization, recurrent activity or something else?
Relevant answer
Answer
Hi Abolfazl,
I agree we cannot rely on IIT too much but I am not sure that the conclusion that a Hopfield network would be conscious is crucial. I think the deeper problem is that any assignation of phi is arbitrary except for a set of signals that are convergent on a single event, as Leibniz pointed out. (They knew enough neuroanatomy in the seventeenth century to have the same conversation we are having but people forget historical literature.)
The 'well-establishment' of an idea is never a scientific argument, Abolfazl. The received wisdom is usually wrong. And in philosophical circles emergence is highly contentious. I personally think that emergence does exist, but not in complex aggregate systems. These are explained by their parts. Examples of emergence I would give would be chirality or the acquisition of Goldstone modes by crystals. Goldstone modes can be completely unrelated to any parts.
I am not sure that you really mean 'disambiguate'. Maybe you just mean 'say what you mean'. But I think I did. It was just a bit surprising. The idea that one neuron can be conscious has been suggested by many people for three hundred years (since cells were seen). There is nothing new about it and in the nineteenth century it was commonplace, when it was called polyzoism. The theory is very much alive and well, with some recent modifications variously by myself, Steven Sevush and Erhardt Bieberich.
My proposal would certainly entail that connecting a single neuron to very carefully designed inputs could give it a sense of watching 'Star Wars' on DVD, but the problem is not that this is implausible, simply that ascertainment of the existence of the experience is impossible. We need more subtle approaches. They may pose intractable problems but maybe no more than the Higgs boson.
You think it unlikely that a single cell could host an experience of Star Wars. But an intuitive sense of implausibility needs to be backed by scientific argument. Why should this be implausible, other than in intuitive terms? William James said in 1890 that it was the only solution that was not self-contradictory. I don't think much has changed since then. We need to be open-minded.
Very best wishes
Jo
  • asked a question related to Artificial Consciousness
Question
4 answers
For some time now there has been controversy between the people who think that the cerebral cortex is important to consciousness and those who think it is seated in the brain stem. In this question I note that the arrangement of connections between the precuneus and the PAG would offer a compromise, allowing PAG-based influence to directly affect cortical influence. In that case, both schools of thought are vindicated.
Relevant answer
Answer
Consciousness must be based in cognition. The site http://www.cognitivestyles.com has a review of relevant direct quotes from about 300 recent papers (‘1a Lateral Frontopolar’ to ‘1f. Aspects of Depression’ on the drop-down list) which together suggest that cognition is based solely in the lateral frontopolar, medial frontopolar, anterior cingulate extending into dorsolateral prefrontal working memory, superior parietal and inferior frontal gyrus. Nothing more. You can look through the quotes and see what you think.
There is also an extended pictorial model of consciousness extracted from psychology and philosophy (eg Heidegger’s ‘Being and Time’ with both its spatiality and temporality in diagrams). Evidence from history identifies the nodes in the model with the cognitive modules in the cortex.
The model predicts that consciousness (in contrast to cognition which can be unihemispheric) always involves communication across the hemispheres, between the nodes. This communication is probably occurring somewhere down lower. This must extend far beyond something merely between the precuneus and the PAG - I too would like to know where this location is.
Both schools thus appear to be correct.
  • asked a question related to Artificial Consciousness
Question
6 answers
After reading a recent (2011) article on cooperation between the default mode network and the frontal-parietal network in internal trains of thought, it became obvious to me that there were two workspace-like hubs: the angular gyrus, connected to the frontal-parietal network, and the supramarginal gyrus, connected to the default mode network. This played into my work on weak attention, in that it suggested that if the angular gyrus fed forward into the supramarginal gyrus via workspace-like transmission, it would tend to support my assertion that complicit attention is the result of two different networks working together.
Relevant answer
Answer
Right. One area of the precuneus, which is a functionally connected part of the 'default' network, connects to the inferior parietal cortex, mostly the angular gyrus.
  • asked a question related to Artificial Consciousness
Question
1 answer
There are some people who believe that the ability to build a conscious robot will doom the earth: in a Terminator-like total war, the human race will destroy itself and end all life on earth.
This is a powerful metaphor, which I call the Special Enemy Metaphor, where we project our human failings onto our devices, and they destroy us.
An alternate metaphor, just as devastating, is that robots clean up the earth, almost despite humans, and cosset the human race, extending its life at the cost of what makes us individuals. We become hive minds, or mindless animals repeating human-like activities, in the end no better than zombies.
Both of these scenarios are what I call the Dark Side of the Singularity problem. They assume that the result of conscious robots will be the destruction of something unique in humans.
Relevant answer
Answer
The singularity problem is science fiction. We should discuss it again in 500 years.
Well, conscious robots will not exist in the next few years; in any case, we can discuss the ethical problems should they become reality. If conscious robots exist, or we are able to transfer human consciousness to a robot, we have a phase transition: machines are then not only artefacts but human-like subjects. The question is, what will the robots do with this conscious mind?
  • asked a question related to Artificial Consciousness
Question
4 answers
Agency is the ability to self-direct.
There are many aspects of agency that are involved in consciousness research, and many levels at which these aspects have been tried. The Global Workspace Theory, for instance, has been implemented as a multiple-agent architecture. There is much discussion as to whether agents that are conscious make up consciousness, or whether consciousness is derived from agents that are non-conscious.
Where it is penetrating neuroscience is in the study of schizophrenia and other failures of agency that have been described in the medical literature. These failures are traced back, in lesion studies, to where in the brain the damage was done, and estimates of what the failure means are used as indications of what that part of the brain probably does.
One area of especial interest is the orbito-frontal lobe, which has been thought to be involved in failures of the sense of self. People with damage in this area tend not to accept as true statements about what they have previously done, suggesting that they take no ownership of the activity.
Another interesting effect is a sort of failure of a log of activity, where people lose track of what they have done, not because of a lack of a sense of self, but because of a lack of accountability for what they do. "My cousin moved that arm," they may say, because they don't remember moving it themselves.
Relevant answer
Answer
A.I. is a way of solving problems by algorithms. Rule-following programs will never answer the question of how the brain works. The efforts are very impressive, but we cannot learn anything about the function of a brain from them. The loud promises of the A.I. research centers are show business to get money for research. The mountain labors and brings forth a mouse.
  • asked a question related to Artificial Consciousness
Question
17 answers
While cognitive models based on production rules are quick and relatively easy to design, they do not tell us as much about how the mind works as the inventors might have thought. Worse, they don't really capture the richness of human consciousness, although they are certainly autonomous in behavior. One wonders if they are truly conscious, or are just an approximation of a zombie.
It doesn't help that the theoretical zombie is able to do everything that a conscious mind could do, without consciousness. Functional consciousness seems most likely to achieve this zombie stature, if only because functional consciousness does not really explain what consciousness is.
Jerry A. Fodor, in his book "The Mind Doesn't Work That Way", got me thinking about the nature of neural networks and the constraints that they place on the circuits in the brain. With the help of David LaBerge's work on attention, linking the thalamus and the PFC to cortex function, I designed a memory model that linked implicit memory, explicit memory, working memory, skill memory, and declarative memory into a model based heavily on a weak attention model, with 9 or more epochs during which processing on memory gets done. Braak's book on the "Architectonics of the Human Telencephalic Cortex" was instrumental in linking the basic architecture back to the micro-architecture of the brain.
I quickly learned that a constraints-based approach made understanding the architecture of the brain, even down to the micro-architecture of the telencephalic cortex, more approachable.
A definition of consciousness emerged that, while not accepted by the extreme phenomenalists, offers an opportunity to explain the nature of phenomenal events. It will be many years before I can build a physical model to prove my contentions, because I will have to invent, among other things, a new architecture for computers; but the preliminary research seems supportive of such technology being practical.
Relevant answer
Answer
Wilfried, a zombie is an intuition pump meant to illustrate a philosophical concept. As such, there is as yet no equivalent structure, as far as I know, on which to base a proof. The zombie argument only has credibility because some A.I. researchers are trying to simulate the mind without consciousness; they haven't gotten close, again as far as I know.
  • asked a question related to Artificial Consciousness
Question
6 answers
Several types of record keeping may be required in a "Universal Management System". It can be divided into two main subjects, quantitative and qualitative records (it may be that at a certain level both share the same unique mechanism). Observations of the known phenomena among the different constituents of a "Universal Management System" may indicate that, at least for smooth running, it is vital to keep: a record of position and place; a record of carried/issued energy; a record of distance and angle relative to others, to avoid "unwanted" collisions; a record of the "guided map" of the route of movement, speed, carried energy, and direction of spin; a record of the voluntary actions of the system, according to the in-built intelligence level the system needs to cope with arising situations; a record of the involuntary actions of the system, according to a "given-guided approach"; and a record of all inputs and outputs of the system; and so on. Kindly note that we can assume every type of particle and wave existing in nature to be the simplest form of a system in itself.
What would be a good approach to constructing "modelling scenes" of this "Natural Record Keeping" for our best understanding and learning? And what level of artificial intelligence will be required for it; in particular, which generation of AI will be suitable for this work, or is it necessary to go beyond artificial intelligence toward "artificial wisdom"? Can any researcher help to summarize this set of questions, or ask them in a better way?
Relevant answer
Answer
Perhaps the best way to get started is to focus on local phenomena in nature rather than attempt record keeping on a global scale. Local phenomena include, for example, wave action along a portion of a shoreline of a lake, sea, or ocean (this can be recorded by a human observer with a notebook and camera, or by a highly focused satellite camera).
I would go one step further with natural record keeping and look for set patterns in the observed phenomena. See, for example, a source of wave-action patterns in the attached contour plot from
  • asked a question related to Artificial Consciousness
Question
3 answers
The idea is that motivation, or libido as it is sometimes called, can be explained by a five-emotion matrix. Each emotion is linked to an instinctive drive. As a drive's requirements are met, it reduces in strength, and so each drive is operating at a number of levels over time. If we look at Abraham Maslow's hierarchy of needs, and assume that each basic need is an instinctual drive, we can see that the drives are ranked according to survival potential: first for the individual, then the species, then the immediate social group, then the society, and finally the mental health of the individual.
The idea that there are only 5 emotions therefore seems to fly in the face of previous thought.
However, this cognitive model has had some important successes at predicting behavior.
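A toy sketch of the mechanism just described (my illustration, not the author's actual matrix): drives lose strength as their requirements are met, and behavior is selected by whichever drive is currently strongest. The drive names and numbers are assumptions.

```python
# Maslow-style drive selection: satisfying a drive halves its strength,
# so control cycles between drives over time.
drives = {"survival": 0.9, "reproduction": 0.6, "social_group": 0.4,
          "society": 0.3, "mental_health": 0.2}   # assumed urgencies

def satisfy(drive):
    """Meeting a drive's requirements reduces its strength."""
    drives[drive] *= 0.5

for step in range(6):
    strongest = max(drives, key=drives.get)
    print(f"step {step}: acting on '{strongest}' "
          f"(strength {drives[strongest]:.2f})")
    satisfy(strongest)
```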
Relevant answer
Answer
Emotions are also linked to the various drives that set up priorities for processing in the brain. For instance fear is associated with the drive not to allow damage to the body or mind. Disgust is associated with the drive to keep from eating poisonous food, etc.
  • asked a question related to Artificial Consciousness
Question
5 answers
Although I started by studying artificial intelligence and neural networks, and developed a theory of consciousness, I felt that my studies in A.I. and my theory were somehow incompatible, especially with the agent-oriented concepts being pushed by A.I.
That is why I see artificial consciousness as a different discipline, and I am trying to associate it with neuro-psychology rather than A.I. It is too easy to get locked into the parallel distributed processing mindset when studying neural networks from an A.I. perspective; but when you are studying micro-anatomy from a neuro-psychology mindset, the missing parts of the connectionist model can be more easily seen.
Relevant answer
Answer
There are different ways in AI to solve the problem of "intelligent" problem solving. Early programs were algorithmic approaches; newer programs try to solve their task by brute force, with search and statistics. These programs are very good (Watson in Jeopardy!), but there is no intelligence in them beyond the programmer's intelligence. This is an impasse for an artificial conscious system.
We need an idea about qualia to understand perception. That would be the first step toward a conscious machine.
  • asked a question related to Artificial Consciousness
Question
1 answer
Can a digital circuit be used to simulate an analog cell? Of course, we simulate analog signals all the time in computing. The question is how close an emulation we want, and what we expect the circuit to do.
Recently I started working on a digital implicit memory project based on the idea of a content-addressable memory cell. The memory cell is not all that original; such cells were thought up years ago, and are currently used in routing and in astronomical instruments. The question was whether I could implement an equivalent to an implicit memory using such a mechanism.
The first problem was to run a feasibility test of the ability to store complex data in implicit memory cells. To do this, I built a 4-bit implicit memory array, just large enough to store a single digit in BCD coding.
I eventually got a circuit running using two RS flip-flops, with an XNOR gate to link to the lines of the match bus. It took much longer than necessary, because I had to review my digital electronics and some of my analog electronics, then hit on a workable architecture, since none of the available circuit diagrams seemed to have enough detail to allow me to copy the circuit. Eventually I used a 3-input NAND gate chip and a 2-input NAND gate chip to implement the circuit.
What the circuit amounts to, it seems, is a simple static RAM cell and an XNOR gate.
To get the circuit to respond to incomplete data, I wanted something like a majority gate, to allow the cells of the circuit to vote on whether the circuit would output a 1. I had to settle for a "2-or-more" gate, because there wasn't an odd number of bits in the storage element.
If I wanted just a recognition circuit, I would use a 4-input AND gate and be done with it. Instead, what I want is a redundant cloud of degenerate data that will allow me, in a second level of implicit memory, to find patterns in the redundant and degenerate nature of the coding, stripping away the actual storage code and revealing the underlying data.
One of the problems with using CAM (content-addressable memory) is that it is often very restricted in its function, because it only recognizes content expressed in the same coding scheme. Such a selective role is fine for routers and astronomical gear that depend on using the same representation for the same data, in which case CAM becomes a recognition device; but the real world is not all that willing to represent everything in exactly the same way every time you want to recognize it.
For instance, if your girlfriend wears a green sweater on Monday and a red sweater on Tuesday, should you be able to recognize her on Tuesday? You had better, or she will make your life miserable!
The problem with using CAM as a recognition mechanism is simply that, to get the 4-input AND gate to fire, your girlfriend would always have to wear green sweaters. Not going to happen!
The real world is more complex than that. So we do not want to recognize with CAM at the signal level; we want a more general system. Here we get into the realm of soft computing: we want to fuzzify the signal in order to get it to recognize a wider range of signals as being the same thing.
However, as far as I know there isn't a fuzzy AND gate, so what we need is a mixture of AND gates and OR gates, or NAND and NOR gates, or something that will allow us selectivity, and agglomeration into fuzzier sets, but based on digital logic. The result is something I call a satisficing gate, in that it indicates that the word on the match bus satisfices a high percentage of CAM locations in that cell, and therefore might be related to the data in that cell. Now, a lot of digital codes will satisfy the gate, so the implicit memory cell is not recognizing the actual code, but looking for something similar to the code used to define the cell's value.
I actually designed the satisficing gate (the "2-or-more" gate) three times before I hit on an architecture that would fit the exact TTL logic chips I had available. (I curse the fact that I had hundreds of TTL logic chips in storage until about 6 months ago, when the storage unit was sold out from under me.)
All in all, a nibble-based satisficing CAM circuit, in TTL logic, takes up about 11 basic logic-gate chips.
And the function it performs is slightly less than the function of even one neuron. In fact, most of the neurons in at least the cerebral neocortex have literally tens of thousands of connections with other cells. If we represented each connection with a single CAM cell, we would still need literally thousands of those cells to simulate a single neuron. The satisficing gate for a ten-thousand-cell CAM array would be a sight to see. It might be better to do what the neuron does, and use a threshold effect instead. One way of doing it is to count the number of cells that are active and compare that count against an arbitrary number that defines the threshold. A slightly better approach is to let the cell adjust the value of the threshold according to some rules.
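For readers without the TTL chips to hand, here is a minimal software sketch of the same idea; it assumes nothing about the original circuit beyond what is described above. The match condition is a satisficing threshold ("at least k bits agree", the "2-or-more" gate) rather than an exact 4-input AND.

```python
# Content-addressable memory with a satisficing match: a stored word
# fires when at least `threshold` of its bits agree with the probe
# (per-bit agreement is the XNOR of the hardware version).
from typing import List

class SatisficingCAM:
    def __init__(self, word_bits: int, threshold: int):
        self.word_bits = word_bits
        self.threshold = threshold     # bit-matches needed for a hit
        self.words: List[int] = []
        self.mask = (1 << word_bits) - 1

    def store(self, word: int) -> None:
        self.words.append(word & self.mask)

    def match(self, probe: int) -> List[int]:
        """Return every stored word whose agreement meets the threshold."""
        hits = []
        for w in self.words:
            disagreements = bin((w ^ probe) & self.mask).count("1")
            if self.word_bits - disagreements >= self.threshold:
                hits.append(w)
        return hits

cam = SatisficingCAM(word_bits=4, threshold=2)   # the 4-bit BCD cell
for digit in range(10):
    cam.store(digit)
print(cam.match(0b0101))   # a noisy probe still retrieves its neighbours
```

Setting `threshold=4` recovers the strict 4-input AND recognizer; lowering it gives exactly the "redundant cloud" behavior described above.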
Relevant answer
Answer
I think an associative memory is able to solve your problem. You have some input data and some stored data; now you ask which stored data fits the input best. That's it. The more data is stored, the better an input fits. This is a statistical classification, so errors are possible, but that is a sort of fuzziness.
  • asked a question related to Artificial Consciousness
Question
4 answers
Dr. Sun, a leader in the hybrid school of consciousness, built a cognitive architecture based on a four-module system separated into non-action-oriented and action-oriented sides. Although two of the modules were neural-network-oriented, the rest of the modules were functionally oriented and relied heavily on a rule base. This type of architecture was seen as acting in a manner similar to human action, and it was theorized that it might be conscious, but no explanation for how it became conscious was offered.
In essence, this system uses neural simulation to filter a basically functional architecture; but because approximately half of the system is simulated, it can be said to be a hybrid system.
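A schematic sketch of what such a hybrid pairing might look like in code. This is my own simplification under stated assumptions, not Dr. Sun's actual architecture: an implicit, weight-based module and an explicit, rule-based module vote on the same input.

```python
# Hybrid decision: a "neural" module scores actions with learned-style
# weights, a symbolic module scores them with crisp rules, and the two
# recommendations are summed before the action is chosen.
import numpy as np

rng = np.random.default_rng(0)

class ImplicitModule:                  # sub-symbolic half
    def __init__(self, n_features, n_actions):
        self.W = rng.normal(size=(n_actions, n_features))
    def scores(self, x):
        return self.W @ x

class ExplicitModule:                  # rule-based half
    def __init__(self, rules):
        self.rules = rules             # list of (predicate, action_index)
    def scores(self, x, n_actions):
        s = np.zeros(n_actions)
        for predicate, action in self.rules:
            if predicate(x):
                s[action] += 1.0
        return s

implicit = ImplicitModule(n_features=3, n_actions=2)
explicit = ExplicitModule([(lambda x: x[0] > 0.5, 1)])  # a toy rule

x = np.array([0.8, 0.1, -0.3])
combined = implicit.scores(x) + explicit.scores(x, 2)   # the hybrid vote
print("chosen action:", int(np.argmax(combined)))
```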
Relevant answer
Answer
Here's a publication list from RPI.
No idea how up to date it is, though.
  • asked a question related to Artificial Consciousness
Question
11 answers
A recent video from MIT quoted what must be a cliché in that institute:
If you can't build a model of something, you don't really know it.
Well, that isn't exactly correct, but it is certainly simple enough. More precisely: if you can't build a model that seems to operate in a similar manner to the original, you don't really know the original.
Thinkers have been building models of the mind for centuries. None of them work much like the original, you would think we would give up or learn how to get it right. Well we might yet manage to get it right, but to do so requires a lot of knowledge that we can only speculate about at this time. The reason is simple, the only working model we can copy from is so complex internally that we are constantly losing track of how the pieces work together.
It's a little like the blind men and the elephant. As you remember, six blind philosophers were brought into a room and introduced to an elephant. One of them stood at the head of the elephant and commented on how rope-like it was. One stood at the back end and commented on how stinky and stringy it was. One touched the skin and talked about its pebbly texture; one felt the leg and commented on how much it felt like a tree; one felt the ear and talked about how floppy it was; and one felt the mouth and commented on the size of the teeth.
Now it is easy to see that there was no consensus among the blind men as to what they were discussing. However those of us outside the story know that they were discussing different aspects of the same thing, an elephant. The mind is a lot like that, we have a number of different disciplines all looking at the mind and getting completely different results, and there is very little likelihood that the practitioners of these disciplines will step back and look at the whole beast, because each discipline is blind in its own way.
The problem is how to learn from all these disciplines how to build a model of the mind. I am encouraged by the nature of the successes I have had integrating information from a number of disciplines that such a model might be possible. I think that attempting the model is important because it will inform us about the places where each discipline might be blind to information available from the other disciplines. What we need is a framework within which to gather the information that is available and build from it a better model. I have called this framework Artificial Consciousness.
There is just one little problem: each discipline has its own jargon and its own unique viewpoint, and each discipline defends its viewpoint against all other viewpoints, despite the fact that there might be crossover between the disciplines. What we need is a special breed of researcher who can understand the individual disciplines on their own merits, and yet ignore the defensiveness and fit the concepts together into a common framework of thought. The problem comes when different disciplines have different ideas of what the same jargon terms mean.
It gets quite difficult to bring together the disciplines while ignoring the jargon, because often the information is only available in terms defined by jargon. For instance, is there any reason why we still use Latin names for body parts, except that we always have? And when we do, is there any reason why one discipline in medicine uses a different term for the same body part than another discipline? Actually, yes, there is; but the answers lie in the history of the disciplines, which someone coming from outside a discipline is not likely to know.
To integrate a number of disciplines of the mind into a greater model is challenging, and unrewarding in that none of the disciplines want to see their work reinterpreted into a model that might be incompatible with their comfort zone. The scientists in each discipline will not thank you for reinterpreting their work to make it fit with other disciplines, and as a result there will be actual friction and attacks against such an attempt. Any such model will have to be defended constantly against such attacks, which will in turn result in blindness, which will assure that the model will probably be incomplete. But if the model works better than previous models, it is still an indicator of an approach that might work better than the approaches used by the existing disciplines. For this reason I think that it is a valid and important thing to attempt.
Relevant answer
Answer
The body has more degrees of freedom than can be constrained by the chromosomes, so I disagree with your assumption, Vitaly, that we can claim that all we do is encoded in our chromosomes. Our chromosomes influence all that we do, but other constraints are needed to deal with the remaining degrees of freedom.
  • asked a question related to Artificial Consciousness
Question
6 answers
Artificial Consciousness is quite simply the art of making a machine that is aware of itself and what it knows.
We know that there is already a machine that is self-aware and has declarative memory: to wit, the human mind. Although there are still some who would claim that a machine made of nerve cells is not a machine at all.
Others say that just because the machine uses a different technology (neurons instead of transistors), that does not mean we can't transfer the functions over to another machine.
Others say that even if we did, it would not feel to that machine like it was human, or a bat; or maybe even that the machine would not be able to feel at all.
Yet another group has said that if we can create an analog that does everything conscious machines can do, it doesn't matter whether it is conscious. (This group is called the Zombie Group.)
Another group says that we will only get to consciousness through a full implementation of a consciousness architecture simulated on artificial neurons.
My own personal view is that by combining simulation of the brain with A.I. principles, we can make an artificial consciousness that is not a zombie, even if it does not know how to feel like a bat, or a human.
To get there, however, I think we need to expand our understanding of histo-psychology: how brain tissues affect the functions of the brain.
Relevant answer
Answer
Wilfried,
Mary Shelley explored some aspects of this hypothetical moral dilemma in her novel Frankenstein. It is not certain that every artificial system can be reversibly reset. Suppose that the system involves highly chaotic processes that emerge, and thus are not controlled, and which hold an irreducible aspect of the consciousness of such a system; then such a system would be irreversible.
  • asked a question related to Artificial Consciousness
Question
3 answers
Lotfi Zadeh has suggested a test for human-level machine intelligence: talk for 20 minutes on a topic of your choice, and then ask the machine to summarize the talk in 20 words or less. He claims that no machine yet produced can complete the task.
Relevant answer
Answer
20 words is not very much. A random grab of a noun would have a good chance of winning. Watson (from the Jeopardy! show) would have no problems ... but, sorry, there is no intelligence there, only brute-force search and a lot of statistics.
  • asked a question related to Artificial Consciousness
Question
6 answers
I am looking for information on MVR using Microsoft Excel, and also on the differences between multi-variable regression (MVR) and artificial neural networks (ANN) [if there is any comparison between MVR and ANN].
Relevant answer
Answer
MVR will try to fit a particular functional form, such as linear. ANNs can fit nonlinear functions, although the form of the fitted function may not be clear. To test a particular application, you could take some sample test data and run them both to get an error estimate. Of course, there are several resources on the internet, as mentioned by others, that can help you in doing MVR in Excel as well as ANNs. Note that there are many techniques under each method. If you have Matlab, or maybe R, that may help you in running some tests. Hope this helps a little. Good luck.
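A small sketch of the suggested test, assuming Python with scikit-learn in place of Excel: fit the same synthetic nonlinear data with both methods and compare held-out error.

```python
# Compare multivariable linear regression (MVR) with a small neural
# network (ANN) on a deliberately nonlinear target.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(400, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, 400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mvr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)

print("MVR test MSE:", mean_squared_error(y_te, mvr.predict(X_te)))
print("ANN test MSE:", mean_squared_error(y_te, ann.predict(X_te)))
```

On data like this the ANN should win, because the target is nonlinear; on a genuinely linear target the MVR error would match or beat the ANN's, which is the comparison worth running on your own data.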
  • asked a question related to Artificial Consciousness
Question
28 answers
“AI researchers have focused (…) on the production of AI systems displaying intelligence regarding specific, highly constrained tasks. Increasingly, there is a call for a transition back to confronting the more difficult issues of ‘human-level intelligence’ and, more broadly, artificial general intelligence,” according to the AGI-13 conference to be held in Beijing, July 31 – August 3, 2013.
Do you share the same call for a transition?
Relevant answer
As far as I know, all definitions of artificial intelligence have logical problems. They pretend to use a generic-specific approach to definition, but in fact they are circular definitions. The problem is that there is no rigorous definition of intelligence.
  • asked a question related to Artificial Consciousness
Question
22 answers
Computational intelligence is the hot field of research in AI; however, more needs to be done about artificial consciousness. An interdisciplinary approach can make this possible.
Relevant answer
Answer
Going back to the original question, I am not at all sure that psychology or philosophy necessarily has to play a critical role in the creation of improved AI; and I am speaking as a psychological scientist with keen interests in the philosophy of mind, so I am not at all downplaying these fields. It seems to me totally imaginable that the next advance in AI might occur in an atheoretical manner, with the problem simply yielding to the convergence of technology, demand, and probably luck.
Also, psychology will only matter if the aim of AI is to REPLICATE human thought. It is totally possible that future AI will be "intelligent" in ways very different from the way humans are intelligent, as different as they will be from current AI. After all, why frame this problem in terms of human "intelligence" when there are other models of information processing out there?
  • asked a question related to Artificial Consciousness
Question
17 answers
It would be great if there were a framework for the Linux environment.
Relevant answer
Answer
Thanks for the citations! I am convinced that the logical consequences of "The Chinese Room" prove that intentionality and real understanding can't be created by any AI system. Robots and computers can imitate human-like behavior and "emotion", but these behaviors, though similar to human ones, come without any first-person experiences (Gallagher, 2010).
The essence of consciousness is the experience itself, and experiences are not needed for Suzuki's robots to successfully perform any behavioral or emotion-like test.
Passing the Turing test is necessary and sufficient for robots, but this is only the demand of the easy problem (Chalmers, 1996).
  • asked a question related to Artificial Consciousness
Question
89 answers
That is, how can we make one network aware of another network, so that the first can direct the learning activity of the second? I would like specific references and technical information about appropriate environments in which the question may be answered.
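One minimal reading of the question, sketched under strong simplifying assumptions (the two "networks" are reduced to a linear learner and a scalar controller): a supervising process watches the learner's loss and directs its learning by adjusting the learning rate.

```python
# A "teacher" directs a "learner": gradient descent on a linear model,
# with the teacher scaling the learning rate based on observed progress.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(200, 2))
y = X @ w_true

w, lr, prev_loss = np.zeros(2), 0.01, np.inf
for epoch in range(50):
    grad = -2 * X.T @ (y - X @ w) / len(X)   # gradient of the MSE
    w -= lr * grad
    loss = float(np.mean((y - X @ w) ** 2))
    # the teacher: accelerate while improving, back off otherwise
    lr = lr * 1.1 if loss < prev_loss else lr * 0.5
    prev_loss = loss

print("learned weights:", np.round(w, 2), " final loss:", round(prev_loss, 6))
```

Meta-learning approaches generalize this kind of loop by replacing the scalar controller with a second trained network.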
Relevant answer
Answer
The problem with measuring consciousness is partly coming to an agreement on what to measure. Both Baars and Sun put together full cognitive frameworks without actually defining consciousness except in relation to the cognitive framework they proposed. Since there is little theoretical crossover between the two cognitive frameworks, it is not obvious that they are describing the same process, let alone defining it in such a way as to be measurable.
The zombie theory is more than just another obstructionist attempt to slow down work on consciousness, although it is being used in that manner here. It asks the basic question of whether a simulation can replace the real thing; but it presupposes that the real thing is not itself a simulation, which, it seems to me, it is. The architectural magic that makes the serial nature of thought possible on a massively parallel architecture can, I maintain, only be achieved by simulation; so the only part of simulating consciousness that is impossible is arriving at a specific entity without a history, and whether that entity is copyable or rebootable is really beside the point.
  • asked a question related to Artificial Consciousness
Question
33 answers
It is my belief that the difference between a conscious and an unconscious machine is that the conscious machine is aware of its actions, and reacts not only to the environment but to its own impact on the environment, including its own internal states.
I suggest, however, that there are machines that are aware but not conscious, and this causes some consternation among scientists who ascribe awareness to consciousness.
Consider the word "sorry". Many robots would accidentally roll their wheels over your toes. But not many of them would be aware that they had done so; fewer yet would realize that your hopping about was probably due to the fact that your toes hurt; fewer yet would be able to make the connection that it was the action of rolling over the toes that had caused the pain; fewer yet would be able to connect the action of rolling over the toes to their own actions, and thus ascribe responsibility to themselves; even fewer would recognize that this implied a social requirement to apologize; and fewer still could learn to say "I'm sorry" without preprogramming.
For all their vaunted power, computers today are dumber than a sack of hammers. Most of them have no conception of what they are doing, and no memory of what they have done, except the data files hidden somewhere on their hard drives. Consider Google: Google is a search engine that searches the internet for information.
But how many times have you had to redo a search because it sent you false positives, assuming that because the words, severally and in the wrong order, received hits, that that is what you wanted?
This by itself is not bad; it just indicates laziness on your part, in that you haven't learned the interface well enough to create the right search term for every search. But how many times have you reworked the search and noticed that Google has no idea that the two searches were connected in any way?
If Google were conscious, it could help you refine the search and save processing cycles; but because it isn't, the Google organizers can make more money by displaying advertisements that are not part of your search but can be hidden in the false positives.
Relevant answer
Answer
How do your assumptions about consciousness being a semantic reasoning circle, based on a type of instance-based semantic network, and the idea that the robot will have independent learning ability, match up? Is the reasoning circle some form of learning?
  • asked a question related to Artificial Consciousness
Question
7 answers
Recent congruence between my understanding of the declarative memory system and my theoretical model of consciousness has led me to question whether the particular implementation of the declarative memory system found in vertebrates might be the basis for consciousness as we know it.
Recent research into fruit-fly brains has shown that mushroom bodies have a role in the action-management system of the brain; but as far as I remember from Braak's Architectonics of the Human Telencephalic Cortex, those structures are more likely part of the human declarative memory, and the recent suggestion that the parahippocampus might be the location of "what" mapping in declarative memory suggests that, in fact, a change in use might have occurred between insects and vertebrates that freed up the hippocampal area for use in declarative memory. I trace that change in use to the formation of the vertebrate cerebellum.
Since the Craniata do not exhibit cerebellar structures but the Vertebrata do, it is my thought that consciousness as we humanly know it is a function of the freed action-management system, repurposed to become declarative memory sometime after the evolution of the Vertebrata. To understand why I claim consciousness for this area will require some discussion. This discussion will open the subject to further study, and possibly future research potential.
Relevant answer
Answer
Since the thread is still alive, may I suggest a recent reference studying the hippocampus as playing a pivotal role in human memory function:
Bartsch, T., "The Clinical Neurobiology of the Hippocampus: An Integrative View", Oxford U.P. (01/07/2012).
(Some pages can be freely consulted in Google Books.)
  • asked a question related to Artificial Consciousness
Question
4 answers
(news.stjosef.at) Under the cover headline "2045 – The Year Man Becomes Immortal", the current issue of TIME magazine (February 21, 2011) offers a cover story by Lev Grossman. The article deals with the utopian-seeming conviction of the so-called "Singularity" movement, according to which, in the not-too-distant future, a unique ("singular") moment in human history will occur in which humans and machines (that is, computers) become "one" in some not precisely defined way, and the human species in its present form ceases to exist. The technology guru Raymond Kurzweil, holder of numerous scientific patents, believes that an exponential increase in knowledge and artificial "intelligence", already observable today, will lead around the year 2045 to a tipping over into a downright "superhuman" intelligence. Humans could then in principle be superseded by computers, which would be perfectly able to simulate human thought and to secure "immortality".
Commentary (Josef Spindelböck): This utopian vision reveals a secularized form of messianic expectation. Having bid farewell to God, man takes His place and is finally replaced by the computer. This is nothing other than the abolition of man! Such a conception lives off the gnostic hubris of taking everything into one's own hands and creating a new reality without God, until finally man himself is rationalized away. What remains are soulless devices with the highest artificial "intelligence", but without the capacity for spiritual insight and personal love! One may console oneself that this can never really come to pass, if only because of the metaphysical impossibility that something living, let alone a being with consciousness, could arise from an inanimate thing (which includes a calculating machine, i.e. a computer). In the monistic conception of the "new" materialism, however, this cannot be ruled out, and, supported by the theory of evolutionism, it combines into a seemingly unstoppable "self-runner". Against this one can object: even the best computer remains a machine which, while capable of performing calculations on a quantitative basis, fails when it comes to the understanding of relationships that is possible only for a spiritual being, and which is certainly not capable of real acts of living. It will always depend on man how he handles the universal tool that is the computer and how he deploys it, whether as a curse or a blessing! Eternal life, contrary to the opinion of the singularity theorists, certainly cannot be acquired or even "simulated" in this way.
Relevant answer
Answer
Thank you for the translation. I saw this earlier and couldn't comment, because I don't understand German, and never think to use the translator.
Ok, there are a number of important concepts here:
Kurzweil's spiritual computers. Can a computer indeed be spiritual? Even Kurzweil doesn't know for sure.
The Singularity: the article in question suggests the "end of humanity as we know it", but doesn't suggest what is left afterwards.
There has been a lot of speculation in the science fiction genre about the forms the singularity might take: whether transhumanism, where humans are obsolete; or hive-mind, where humans get co-opted into a network and eventually replaced; or indeed where we allow a caretaker robot to set all the laws, and then humanity dies from loss of interest.
Note that most of these predictions are based on A.I., the projection of future trends from existing artificial intelligence theories, and not on artificial consciousness based on a human-like architecture. If they read like bad science fiction, it's because they are old science fiction, being based on predictions made by Isaac Asimov as plot devices for his "Robots" series decades ago.
The issues lie in the question of humanism as a connection to god, or as a function of the body.
Many philosophers feel that the "humanism as a function of a body" researchers are getting too sure of themselves, and in an attempt to slow down the perceived singularity event are trying to convince science that a connection to god is needed, and that without it computers are soulless machines. Kurzweil meets this concern head on, and in some ways exacerbates it, by claiming both eternal life and spirituality for the machine.
The fact is that neither is the case at the current moment. Computers, which are the standard basis on which it is assumed A.I. will eventually arise, are not all that spiritual (often being dumber than a sack of hammers just when you need some intelligence), and are certainly not at all capable of eternal life. But there is no reason to stick to the current computer architecture, and I have been working on a system of implicit memory that will, it is hoped, change the nature of how the machine works, so that many of the "soulless" elements it has today will become more human-like.
I of course cannot claim to understand Mr. Spindelböck's definition of God, if only because there are so many such definitions, and even people who want to turn computers into gods so they don't have to think anymore.
But if the "humanism as a function of a human body" people are correct, and more and more it seems that they are, then by studying the human body we will learn everything necessary to make a computer seem as spiritual as any human, and, if we understand the nature of spiritualism, perhaps more spiritual than humans. This too was predicted as a plot device in Isaac Asimov's robot series. Will we then, in our hubris, impose on computers "laws" of design that insist that human life is more important than the needs of the spiritual computer? Isaac used that as a plot device as well; but the nature of the "laws" exceeds our ability to program actual computers, and we will need a new architecture for that.
Most of this angst comes from the "Special Enemy" school of thought, which presupposes an enemy of humanity that is an implacable machine. An extension of the mind-set of Shelley's Frankenstein, it pushes at deep-seated insecurities in humans to trigger angst about machines that become our special enemy (à la Terminator, Bolo, Flinx). Isaac Asimov always countered this with a similar myth, the special friend (à la the Zeroth Law), where the computers infiltrate our civilization and maintain it despite the limits of unadapted humans, and despite the hostility that human civilizations would eventually direct at A.I.
It's a wonderful plot device, and I have spent many hours reading spin-offs of the original concept and wondering just what will actually happen; but I think that most of us realize that it is a plot device and not reality. Mr. Spindelböck does not seem capable of making this realization.
His "self-runner" concept, for instance, assumes that robots today are not already self-programming, and are not threatening civilization at all from their homes in our laboratories.
Most robots today are not capable of escaping into the wild and causing much mayhem, if only because they are neither intelligent enough nor conscious enough to be a problem.
The one question I would like to ask is: if a computer can be redesigned to understand relationships, why would it want to destroy them?
  • asked a question related to Artificial Consciousness
Question
1 answer
What intelligence is, is the key to understanding life and evolution.
Relevant answer
Answer
Actually, no: evolution does not need intelligence. It is an automatic prototyping technique based on populations of prototypes, mutating in minor ways generation after generation, and being tested by the need to survive to breeding age.
I would also have to say that intelligence is relative: a cyanobacterium needs less intelligence than a multicellular organism, if only because it has to coordinate just a single cell. So learning about the intelligence that makes it possible for a cyanobacterium to react to the day/night cycle will only partially teach you how a multicellular organism achieves the same effect.
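To make the "automatic prototyping" point concrete, here is a minimal sketch; the fitness function and rates are illustrative assumptions, not a model of any real organism.

```python
# Evolution as automatic prototyping: a population of bit-string
# prototypes, minor random mutations each generation, and selection by
# "survival to breeding age" (the fitter half reproduces).
import random

random.seed(0)
GENES, POP, GENERATIONS, MUT = 20, 50, 40, 0.02

def fitness(genome):
    return sum(genome)            # toy fitness: count of 1-bits

population = [[random.randint(0, 1) for _ in range(GENES)]
              for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]          # selection
    children = [[g ^ (random.random() < MUT)   # minor mutation
                 for g in random.choice(survivors)]
                for _ in range(POP - len(survivors))]
    population = survivors + children

print("best fitness after evolution:",
      fitness(max(population, key=fitness)))
```

No intelligence appears anywhere in the loop, yet the population climbs steadily toward high fitness.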
But go ahead, we are open to all types of discussion here, as long as you follow some very basic rules.
Don't attack anyone personally (their ideas are fair game), and don't spam the group with empty patter.
  • asked a question related to Artificial Consciousness
Question
3 answers
Until the mid-80s, scientists were secure in the thought that they would quickly find out how to decode the brain. Today's computational biology was founded on the idea that we could somehow put numbers to what was going on in the body and learn how it worked. I still hear scientists echoing that thinking, even though today we have realized that the early computational approaches were flawed.
Back then, it was taken as fact that the DNA in our bodies was a blueprint from which the organization of our body into organs and so on naturally fell. Of course we didn't know yet how the DNA worked, but it was only a matter of time until we did; and then all the information about our bodies would be exposed, and we would be able to mix and match until we approached perfection.
Why do we keep raising these false hopes, only to have them dashed on the rocks of reality?
Since then we have found that there are far fewer genes in the DNA than we had originally expected, and that the genes have a combinatorial complexity so far below the actual complexity of the body that they must act more like guides than actual blueprints. Somehow the rest of the complexity of the body self-organizes around these guides to produce the bodies we know and inhabit.
Now, of course, it is not surprising that at the connection level the uncertainty as to which neurons are connected to which others is so high as to destroy any chance of mapping the connections and mathematically calculating the results at the neuron level. But back in the 1970s, David Marr was convinced that simply by applying probability mathematics he could winkle out the proper operational function of the neocortex, the cerebellum, and the hippocampus. He fully expected to be able to make some sense of the connections at the neuron level, but never published a paper explaining how.
In 1983, Sir J.C. Eccles published a paper describing the micro-architecture of the neocortex, and neatly shot down David Marr's hypothesis about how the neocortex was organized. Where Marr's work centered on a four-layer model, Eccles required six; and while Marr's model centered on his so-called codon, Eccles' model centered on the discovery of cylindrical groups of neurons that acted together, firing as a unit instead of severally.
Somehow the neurons were self-organized into clusters that fired as a group. The concept came to be called neural groups. But could both scientists be right, even though they described such different architectures? Many thought not. Back later with a new installment.
Relevant answer
Answer
It was Jerry Fodor who brought home to me that we need something like a constraints-based approach to understanding the micro-architecture of the brain. What he mentioned in his book "The Mind Doesn't Work That Way" was simply that scientists had found no easy way to implement a discrete memory in a neural network. If neural networks don't have discrete memories, then how can we retrieve memories via our declarative memory? Obviously the problem was that we were trying to use a simple network to describe a problem that needed a more complex network.
This caused some confusion at first, because nothing looks simpler on the surface than computer memory, where a bit is stored in a cell and sampled whenever the computer wants to know what is in the cell. Addressing logic lets us select a specific cell, and so we can use a "place code", similar to your postal code, to find a specific cell.
But neural networks act not by storing a bit in a cell, but by storing relationships between the neuron and the surrounding neurons in special communication links called synapses. This indirection causes significant problems with locating the data, if only because the data is not directly stored anywhere; what is stored is the linkages that are changed by the data.
Not only that, but at the synapse level there is redundancy and uncertainty, because often the same cell has multiple links to each surrounding cell, and there is no guarantee that any two cells will be connected. Lately we have seen evidence of especially interesting links between the strength of some chemicals and the growth of neural processes like dendrites and axons, but we are still somewhat at a loss as to how the chemical gradients are laid down.
There is the possibility that quite simple rules defined by genetics guide the growth of cellular processes, but we have not yet described these rules, and the result at the end of the life of the cell seems to be a snarl of processes with no constancy across the species. Until we find the hidden order in the rules that define when a process grows and in what direction, we will have to accept a certain level of uncertainty.
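A minimal sketch of the storage-in-linkages point, assuming a classic Hopfield-style network: the "memory" exists only as pairwise weights changed by the data, yet a corrupted cue settles back onto the stored pattern.

```python
# One pattern stored Hebbian-style in the weights of a Hopfield-like
# network; the pattern itself is written nowhere, only the linkages.
import numpy as np

rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=32)

W = np.outer(pattern, pattern).astype(float)   # Hebbian linkages
np.fill_diagonal(W, 0)                         # no self-connections

cue = pattern.copy()
flip = rng.choice(32, size=8, replace=False)
cue[flip] *= -1                                # corrupt a quarter of the bits

state = cue.astype(float)
for _ in range(5):                             # let the network settle
    state = np.sign(W @ state)
    state[state == 0] = 1

print("bits recovered:", int((state == pattern).sum()), "/ 32")
```

The retrieval works even though no cell "contains" the memory, which is exactly the indirection that makes locating data in a neural network so hard.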
  • asked a question related to Artificial Consciousness
Question
1 answer
Dear friend,
I am writing this message to invite you to participate in a research study that I am currently conducting as a Ph.D. student at Middle East Technical University, Ankara, Turkey, regarding the use of a social networking web site (Facebook.com) in education.
In this stage, I am focusing on the uses and gratifications of Facebook.com. The survey will take approximately 6-8 minutes. Your responses are entirely anonymous, and your participation is greatly appreciated. You can access the survey at the following web site:
Please feel free to forward this message to your friends.
Thank you for your time and effort in helping me complete this study.
Relevant answer
Answer
thanks
  • asked a question related to Artificial Consciousness
Question
15 answers
There is only one known test for consciousness, and it is tied up in assumptions about intelligence.
That test was first proposed by Alan Turing back in the early annals of computer science, before we really had much of a computer to work with. I call it the zeroth test for consciousness because it doesn't test for consciousness, nor for intelligence; it tests for anthropomorphism. Unfortunately, humans are all too willing to anthropomorphize machines, so informal Turing Tests have been passed every time you are gulled by the latest chatbot, a Dr. Sbaitso on steroids. There is a question as to how intelligent chatbots are, and as to whether they can even represent internally anything more than a well-turned phrase.
Other tests exist, of course, but they are fragmentary things based on limited knowledge about what consciousness is, and since there are at least nine definitions of consciousness and the biggest model deals with only two, really detecting consciousness will have to wait for a better definition than science is ready and willing to offer today.
Relevant answer
Answer
The primary flaw I see in the Mirror Method is that it assumes that consciousness of self is self-consciousness. The mirror method indicates consciousness of self: the ability to pick the self out of the mirror image instead of mistaking it for someone else.
This has more to do with the ability to process a self-image and compare it to the mirror image than with how conscious the animal is of its self. Animals that are merely aware, and not conscious, would not be able to do this. However, in my model, animals that are conscious would have an extra processing step that analyzes the meta-cognitive signals, including the signal of self, and the underlying awareness behind them at the same time, allowing the animal to pick out the similarities between its actions and the actions of the animal in the mirror. It might therefore be a good test of consciousness, but not of self-consciousness.
  • asked a question related to Artificial Consciousness
Question
16 answers
Actually no, Evolution is anything but random.
People think that Evolution is random because it is driven by mutation. However, Evolution is one of a number of algorithms that can convert indeterminate perturbations into movement in a specific direction.
For those of you who have ever held a ratchet in your hand, consider the futility of waving the handle of a wrench back and forth over a bolt. When the bolt is loose enough, you can wave it back and forth and the bolt will not tighten; but put a bit of resistance into the mix, and suddenly the bolt begins to turn in a specific direction. Anybody who doesn't understand the mechanism of a ratchet would assume that it was still futile to wave the handle back and forth over the bolt. But as the bolt tightens, you have the satisfaction of knowing that it was the ratcheting mechanism that made it possible.
Evolution is a little like that. Chance mutation is direction-neutral, just as waving the handle back and forth is more or less neutral. However, Charles Darwin discovered that the ratchet mechanism for Evolution was survival of the individual. Survival isn't direction-neutral, and so it gives the impression of progress. Evolution seems to be going somewhere. Humans wouldn't exist without it.
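To see the ratchet in code, here is a minimal sketch (my own illustration; the fitness function and step sizes are invented): the same direction-neutral mutations go nowhere on their own, but adding a survival test converts them into steady movement toward the fitness peak.

```python
import random

def fitness(x):
    return -(x - 10.0) ** 2            # survival stand-in: peak at x = 10

def evolve(x=0.0, generations=200, step=0.5, select=True):
    for _ in range(generations):
        mutant = x + random.uniform(-step, step)  # direction-neutral mutation
        if not select or fitness(mutant) >= fitness(x):
            x = mutant                 # the ratchet: keep only non-losing moves
    return x

random.seed(1)
print(f"mutation alone:       {evolve(select=False):6.2f}")  # aimless random walk
print(f"mutation + selection: {evolve(select=True):6.2f}")   # converges near 10
```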
Relevant answer
Answer
What is Truth? How many Philosophers have attempted to answer that question?
It is not, and has never been, the court's place to decide whether a truth is meritorious or not, merely to deal with the truths as they have been presented.
It is not me that you have to convince that the Big Bang poses no obstacle to Christians; it is the less-than-properly-educated Christians, of whom I have no doubt there are a number in my own family. Further, I don't believe in the Big Bang, being more interested in an incremental-increase-in-energy model that favors a power curve. The fact is that less-than-properly-educated Christians are choosing to reduce the education of their own children so as to increase the population of less-than-properly-educated Christians.
Politically, they are forcing governments to legislate an end to universal education. They claim it is because of evolution, but that is not the real issue; the real issue is political power. Scientists have had the temerity to explain to those ordained by God that their religion is based on an inaccurate book that many believe to be divinely inspired. Of course they are going to get their backs up, especially when the receipts drop so far that they can't keep their church doors open.
It really doesn't matter what a properly educated Christian would do, because they aren't the people involved.
  • asked a question related to Artificial Consciousness
Question
24 answers
There are some who believe that the human brain and the human body are kluges: things that were cobbled together using biological mechanisms that are not actually effective or efficient. Some would redesign a biological body, but others say: why bother, if our technology could build a better body, or if we could download ourselves into different bodies for different tasks, or, better yet, simply take our brains out of our bodies and situate them in machines as we are needed? The result would be that boxes would act like people, and the distinction between a robot and a person would be blurred, so that the only way we could tell would be the size of the brain box, and maybe not even then.
Relevant answer
Answer
If you follow the Architectural-Illusion camp, where the phenomenal aspects of consciousness, personality, etc. are simplifications of complex-system outputs, created by the control mechanisms to constrain the number of responses, then personality is an illusion created by the control mechanisms in order to constrain the complexity of the strategic selection of resources.
There is no question that the complex system would be modelable. So what I read your question to mean is that the simpler, constrained system might not be modelable. It is interesting to note that one of the functions of anthropomorphism is to make complex structures seem more constrained, so that they can be predictable. We simplify our angst about a balky computer with the decision that it must hate us. Hate is something we understand because other humans do it, so imbuing our computer with that emotion, even though computers are incapable of experiencing emotion, is as natural as breathing to us.
One of the problems with anthropomorphism is the need to determine whether a machine actually has a characteristic, or is just imbued with it by human observation.
When we imbue cognitive architectures with emotions, we tend to worry that the stronger emotions like love and hate will overcome the gentler ones, hence the "Special Friend" and "Special Enemy" mythos that has crept out to label A.C. dangerous.
So, to get back to the point: what is personality but a personal tendency to pick one type of strategy over another, and will it be possible for a computer to have the same type of tendencies? I think it will, but it might take a design change to bring it out.
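As a toy illustration of that closing idea (my own sketch; the strategy names and weights are invented), personality-as-tendency can be modeled as nothing more than a stable bias over an otherwise shared repertoire of strategies:

```python
import random

STRATEGIES = ["cautious", "aggressive", "cooperative"]

class Agent:
    """An agent whose 'personality' is just a stable bias over strategies."""
    def __init__(self, biases):
        self.biases = biases           # persistent tendencies, one per strategy

    def choose(self):
        return random.choices(STRATEGIES, weights=self.biases, k=1)[0]

random.seed(0)
timid = Agent([5, 1, 2])               # tilted toward "cautious"
bold = Agent([1, 5, 2])                # tilted toward "aggressive"
print([timid.choose() for _ in range(5)])
print([bold.choose() for _ in range(5)])
```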
  • asked a question related to Artificial Consciousness
Question
3 answers
My apologies to any neuroscientists; however, I find that many people, even those with biological backgrounds, do not know enough about neurons to understand the discussions that make this group significant.
A Neuron is a specialized type of cell. It has evolved to be able to transfer information above and beyond the information that is needed for development of the basic cellular structure of an organism. It also stores this information, and to some extent processes it.
To do so, information has to be able to pass through the cellular membrane that separates the cell from the environment. In neurons this membrane consists mostly of a protein-lipid+lipid-protein structure that is polarized so as to always have proteins with a specific charge on the inside of the cell and proteins with the opposite charge on the outside. As a result of this polarization there is an electrostatic gradient across the membrane that acts to repel ions of a specific charge and attract ions of the opposite charge. This creates a natural charge separation that is capacitive in effect, something that will be explained in the basic electrical theory section.
To get a message through the cell membrane, something has to stick out of the cell and either detect the presence of a chemical or conduct electricity into the cell. We call proteins that stick through the cell membrane permease molecules. Neurons work mostly by allowing these permease molecules to trigger special openings in the membrane that are sized and designed to allow passage of only specific ions, either into or out of the cell: sometimes against the electrostatic gradient, resulting in an increase in charge separation, and sometimes with the gradient, resulting in a loss of charge separation.
Ions passing through the membrane create ionic currents that, like electrical currents, move charge across the membrane. As a result, the charge within the cell can either increase or decrease, giving each neuron a variable, the membrane potential, that registers how much charge is kept within the cell.
Along with the membrane potential comes the relative difference in charge between the cell and its surrounding environment, creating a voltage that in turn creates an electromotive force (EMF), a force that tries to push charge out of the cell and into the surrounding environment. If this EMF becomes too steep, it results in depolarization of the membrane and a mass migration of ions out into the environment and from the environment into the cell.
In an attempt to protect the cell from losing all its nutrients and filling up with waste products from the external environment, the cell reacts by opening ion channels in the intact portions of its membrane and renormalizing the membrane potential, allowing the breached patch of membrane to self-organize back into its polar form, thus protecting the cell.
This effect creates an interesting electrical spike, called Firing, that signals the depolarization of the membrane.
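The integrate-until-threshold, fire, and renormalize cycle described above is commonly approximated by the textbook leaky integrate-and-fire model. Here is a minimal sketch (my choice of illustration, with invented constants, not equations from this post):

```python
# Illustrative constants (mV and ms scale); not fitted to any real neuron.
dt, tau = 1.0, 20.0
v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0

v, spikes = v_rest, []
for t in range(200):
    i_in = 20.0 if 50 <= t < 150 else 0.0      # injected current, arbitrary units
    v += dt * ((v_rest - v) + i_in) / tau      # leaky integration of charge
    if v >= v_thresh:                          # depolarization passes threshold
        spikes.append(t)                       # the "firing" spike
        v = v_reset                            # renormalize the potential
print(f"fired {len(spikes)} times; spike times: {spikes}")
```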
Communication happens via the management of the ion channels, using different permease molecules to activate different ion channels. These in turn are triggered by specific chemicals called neurotransmitters, which the permease molecules can detect at very low concentrations.
Special areas of the cell called pre-synaptic patches secrete these chemicals into the extracellular fluid at special locations called synapses, where they diffuse across the gap between the cells and are detected by the permease molecules. The neurotransmitters are then gathered up to clear the fluid and stored in the pre-synaptic bud for later reuse.
The post-synaptic sensitive patch contains a segment of cell membrane that is specially built to detect specific neurotransmitters. Thus each synapse can be said to be either inhibitory, excitatory, a shunt, or, if it triggers the production of a secondary transmitter inside the cell, modifying.
The sensitive patch outlives the proteins it is made of, suggesting that there might be a mechanism, call it the membrane replacement mechanism, that periodically trades old ion channels and permease molecules for new ones over the life of the membrane. This mechanism has been implicated in adjusting the number of ion channels per permease in the sensitive patch, a measure called the weight of the synapse, which lets the cell learn to favor synapses that are more active.
A hypothesis exists that protein creation in the sensitive patch triggers an increase in the number of ion channels added to the patch, thus increasing the weight. Denaturing of the proteins as they get digested by the cell probably results in a loss of ion channels over time, allowing less active synapses to lose their weight and the nervous system to adjust to changes in signals.
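As a toy sketch of this hypothesis (my illustration; the growth and decay rates are invented), activity-driven channel gain plus slow protein turnover is enough to make active synapses gain weight while idle ones fade:

```python
GROWTH, DECAY = 0.1, 0.01              # invented rates

def update_weights(weights, active):
    """One membrane-replacement cycle over a dict of synapse weights."""
    for syn in weights:
        weights[syn] *= (1.0 - DECAY)  # protein denaturing: slow channel loss
        if syn in active:
            weights[syn] += GROWTH     # protein creation at active synapses
    return weights

w = {"syn_a": 1.0, "syn_b": 1.0}
for _ in range(100):
    w = update_weights(w, active={"syn_a"})    # only syn_a keeps firing
print(w)   # syn_a climbs toward GROWTH/DECAY = 10; idle syn_b fades toward 0
```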
The cell tends to be polarized, with signal-gathering elements on one side of the cell and signal-distribution elements on the other. These are called dendrites and axons respectively. Pre-synaptic buds tend to gather on axons, and post-synaptic sensitive patches on dendrites, although both have been found on the soma; and some dendrites have pre-synaptic buds, and some axons have sensitive patches, suggesting that this is just an average and that special conditions can apply.
Relevant answer
Answer
One reason why a bio-psych course might be slightly misleading is that it is a psychology course, and thus has to pay lip service to psychological theories that might be incompatible with the theories I propound on here. To continue, for those of you too busy to take the course:
The neuron can be said to act in one of two ways. Parallel Distributed Processing theory says that it is a process, so it can be separated into input, processing, and output functions, as indeed one is tempted to do with dendrites, soma, and axon. However, I noted that some dendrites have pre-synaptic buds, and some axons have post-synaptic patches on them. These are seen as the exceptions, but where a synapse sits on the axon determines how broad the distribution of the control it represents: the closer to the tips of the branches, the more specific the control; the closer to the soma, the more distributed the control. Synapses on the soma, of course, have the greatest distribution.
The other way of looking at neurons is that they are specialized for the transport of signals; this viewpoint describes the function of the neurons in terms of storage, processing, and transport. This is the viewpoint I will take. The ion-channel mechanism was originally a mechanism for gathering nutrients and expelling wastes too small to bother creating vacuoles, little bubbles of membrane material, around.
Because neurons are supported by glial cells, this role is less important and can be reapportioned to the selective transport of signals. The nature of the ion channels has therefore changed in the neuron, with the advent of larger and more complex ion channels. A set of calcium ion channels is currently being studied because of their role in triggering long-term memory. There are at least three or four different types of calcium channels, ranging from basic pore-like channels to complex channels like the NMDA ion channel, which are bidirectional, in that they pump one ion out and another into the cell; voltage sensitive, in that they are blocked until a reasonably high voltage is achieved within the cell; and cascade-triggering, in that they set off chemical cascade reactions whenever they are activated, possibly by releasing a secondary transmitter into the cell.
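The voltage sensitivity is worth a concrete example. A widely cited approximation of the NMDA channel's magnesium block is the Jahr-Stevens (1990) formula; the sketch below uses it to show how little current the channel passes until the cell is already depolarized (treat the exact constants as illustrative):

```python
import math

def mg_block(v_mv, mg_mM=1.0):
    """Fraction of NMDA channels unblocked at membrane potential v (mV),
    per the Jahr-Stevens magnesium-block approximation."""
    return 1.0 / (1.0 + math.exp(-0.062 * v_mv) * mg_mM / 3.57)

for v in (-70, -40, 0):
    print(f"V = {v:4d} mV -> {mg_block(v):.2f} of channels unblocked")
# Near rest (-70 mV) the channel is almost fully blocked; only once the
# cell is strongly depolarized is the block relieved and calcium admitted.
```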
Calcium is important to neurons because it is often used in the axon process to transport signals down the process, and it is used in the pre-synaptic bud in some manner necessary to the release of neurotransmitter, so that if calcium levels drop in the pre-synaptic bud, signal levels will also drop. This loss of signal strength is called habituation, and it has been noted to happen when signals are repetitive, suggesting that the depletion of calcium is rapid during firing.
Some calcium channels, like those in the S-type synapse, act to increase the calcium levels in the axon, and can thus be seen to recover from habituation almost instantly. This effect, called facilitation, extends the life of a signal beyond the period where habituation would normally occur. The S synapse, while simpler than the NMDA synapse described before, still pumps calcium and releases the same secondary messenger, suggesting that it might have a role in the triggering of long-term memory. This would especially be the case in the cerebral cortex, where S synapses are more prevalent than NMDA synapses.
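As a toy model of habituation and facilitation as just described (my own sketch with invented rates), let transmitter release scale with a presynaptic calcium pool that firing depletes and a facilitating input refills:

```python
CA_MAX, DEPLETION, RECOVERY = 1.0, 0.3, 0.02   # invented rates

def step(ca, fired, facilitated=False):
    if fired:
        ca -= DEPLETION * ca          # release spends presynaptic calcium
    ca += RECOVERY * (CA_MAX - ca)    # slow passive replenishment
    if facilitated:
        ca = CA_MAX                   # S-type channel refills the pool
    return ca

ca = CA_MAX
for t in range(10):                   # repetitive firing: signal habituates
    ca = step(ca, fired=True)
    print(f"t={t}: signal strength ~ {ca:.2f}")
ca = step(ca, fired=True, facilitated=True)
print(f"after facilitation: {ca:.2f}")   # near-instant recovery
```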
  • asked a question related to Artificial Consciousness
Question
2 answers
Essentially, ACT is a two-part system, with a section that senses the environment and a section that applies rules to the environment as sensed and decides what action to take. It is based very heavily on the theory of a production system, so the rule base in the second half of the system is built from "production rules".
An open-source version of it can be found at JACTR.org, which is meant to operate on the Eclipse Java Development Platform, an open-source Java IDE.
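To make the sense-then-match-rules loop concrete, here is a minimal production-system sketch (my own illustration; real ACT-R/jACT-R productions, buffers, and conflict resolution are far richer):

```python
# Working memory is a set of facts; each production is a (condition, action)
# pair. One recognize-act cycle fires the first rule whose condition holds.

def rule_hungry(wm): return "hungry" in wm
def act_eat(wm):     wm.discard("hungry"); wm.add("eating"); return "eat"

def rule_tired(wm):  return "tired" in wm
def act_rest(wm):    wm.discard("tired"); return "rest"

PRODUCTIONS = [(rule_hungry, act_eat), (rule_tired, act_rest)]

def cycle(wm):
    for condition, action in PRODUCTIONS:
        if condition(wm):
            return action(wm)
    return None                        # no rule matched: system is quiescent

wm = {"hungry", "tired"}
print(cycle(wm))   # "eat"
print(cycle(wm))   # "rest"
print(cycle(wm))   # None
```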
Relevant answer
Answer
Actually, the ACT architecture was based on production rules rather than blackboards. I think that blackboard systems are still with us, but in a different cognitive architecture; if you want, I will open a thread on Global Workspace Theory, which is the most modern blackboard design I am aware of.