Science topic
Artificial Consciousness - Science topic
Is Consciousness Synthesizable? And if so, how?
Questions related to Artificial Consciousness
In the not-too-distant future, will it be possible to merge human consciousness with a computer, or to transfer human consciousness and knowledge to a computer system equipped with sufficiently advanced artificial intelligence?
This kind of vision, involving the transfer of the consciousness and knowledge of a specific human being to a computer system equipped with suitably advanced artificial intelligence, was depicted in the science fiction film "Transcendence" (starring Johnny Depp). It has been reported that research is underway at one of Elon Musk's technology companies to create an intelligent computerized system that can communicate with the human brain in a way far more technologically advanced than current standards allow. The goal is an intelligent computerized system, equipped with a new generation of artificial intelligence technology, into which a copy of the knowledge and consciousness contained in the brain of a specific person could be transferred, following a concept similar to the one depicted in "Transcendence".

In considering the possible future feasibility of such concepts, a para-philosophical question arises about extending the life of a human being whose consciousness would continue to function in a suitably advanced intelligent information system after the person from whom that consciousness originated has died. And even if this were possible in the future, how should the issue be framed in terms of the ethics of science, the essence of humanity, and so on?

On the other hand, research and implementation work is already underway in many technology companies' laboratories on systems of non-verbal communication, in which certain messages are transmitted from a human to a computer without the use of a keyboard: in effect, systems that read people's minds. A computer system equipped with specially built sensors for electrical impulses and brain waves would recognize specific messages formulated non-verbally, as thoughts only, and pass the information so read on to the artificial intelligence system. Solutions of this kind will probably be available soon, since they do not require artificial intelligence as advanced as would be needed for an information system into which the consciousness and knowledge of a specific person could be uploaded. Ethical considerations arise for the realization of this kind of transfer and, perhaps through it, the creation of artificial consciousness.
In view of the above, I address the following question to the esteemed community of researchers and scientists:
In the not-too-distant future, will it be possible to merge human consciousness with a computer, or to transfer human consciousness and knowledge to a computer system equipped with sufficiently advanced artificial intelligence?
And if so, what do you think about this in terms of the ethics of science, the essence of humanity, etc.?
And what is your opinion on this topic?
What do you think on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Naseer Bhat asked, "What is consciousness? What is its nature and origin?" We do not know. We can speculate about its nature and origin, but what would that be good for? I think there is a necessity in data processing that forced the evolutionary process to create this phenomenon. I am sure anticipation, association and social interaction play a part in this process. Maybe the analysis of wet brains will shed some light on this question, but we should approach it step by step, in a bottom-up manner, asking what an organism needs in order to process environmental and inner data. To decide whether there is consciousness, we need a rigorous proof method. That would be a much harder problem than creating a consciousness automaton.
Will it be possible to build an artificial consciousness similar to human consciousness in digitized structures of artificial intelligence, if those structures digitally reproduce the structures of neurons and the entire human central nervous system?
An artificial intelligence that mapped human neurons would be a very advanced artificial intelligence. If artificial intelligence were built in such a way that all human neurons were reconstructed in digital technology, it would mean the possibility of building cybernetic structures capable of collecting and processing data with far greater capacity than at present. However, if only simple neural structures were reproduced, scaled up to the number of neurons contained in the human organism, then only, or mainly, the quantitative rather than the qualitative factors that characterize the collection and processing of data in the human brain would be achieved. Without reproducing all the qualitative variables typical of the human nervous system in a cybernetic counterpart, it is doubtful that such a cybernetic structure could give rise to an artificial cybernetic consciousness equivalent to human consciousness.
Do you agree with me on the above matter?
In the context of the above issues, I am asking you the following question:
Will it be possible to build an artificial consciousness similar to human consciousness in digitized structures of artificial intelligence, if those structures digitally reproduce the structures of neurons and the entire human central nervous system?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
Will artificial neural structures become such advanced artificial intelligence that artificial consciousness will arise? Theoretically, such projects can be considered, but to really verify this, artificial neural structures would have to be created. Research on the human brain shows that it is a very complex and still not fully understood neural structure. The brain has various centers and areas that manage the functioning of specific organs and processes of the human body. In addition, intelligence itself is complex, comprising emotional, abstract, creative and other components that also function in separate sectors of the human brain.
In view of the above, does research on the human brain and progress in the construction of ever more complex structures of artificial intelligence lead to synergy in the development of these fields of science? Will their development lead to the integration of research into the analysis of human brain activity with the construction of increasingly complex structures of artificial intelligence equipped with elements of emotional, creative and other intelligence?
Moreover, does the improvement of artificial intelligence lead to the emergence of artificial emotional intelligence and, consequently, to autonomous robots that will be sensitive to specific changes in the factors of the surrounding environment? Will specific changes in the surrounding environment trigger programmed reactions of advanced artificial emotional intelligence, i.e., the activation of pre-programmed algorithms of implemented activities and learning processes, as part of improving machine learning?
Therefore, another important question arises in this area:
Is it possible to create an artificial consciousness that will function within an artificial electronic neural network built in such a way as to reflect the structure of the human brain, so that this advanced artificial intelligence can improve itself on the basis of acquired knowledge, e.g. from external internet databases?
Do you agree with me on the above matter?
In the context of the above issues, I am asking you the following question:
Will it be possible to build artificial emotional intelligence?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
Data collection is an important step in any research or experiment. It can be defined as the process of gathering and processing information to evaluate outcomes and use them in research. But with the development of methods of collecting data by artificial intelligence, are methods of collecting data from social media evolving as well?
One of the problems with testing for consciousness is whether we want to test whether a machine has certain characteristics, as suggested in Dr. Baars's functional school, or whether we want to test whether a machine is missing certain characteristics, as suggested by Lotfi Zadeh.
As we have seen, the first type of test invites "teaching to the test" responses, where the designer simply defines the minimal machine that meets the tests, and that is all they aim for. A good example is Stan Franklin's IDA/LIDA architecture: it was designed to meet Baars's functional-consciousness guidelines, and whether it is actually conscious is something even Stan Franklin wasn't willing to commit to at first.
The second type of test, however, requires solving problems set deliberately high, in order to force the designer to work harder. We run the risk of setting the bar so high that no one can even approach a solution, as some of the phenomenal philosophers would have us do.
Somehow we need reachable goals that stretch the design, but not so much as to break the interest of the designer.
The knowledge claim about the possibility of artificial superintelligence in the future raises several questions. Is it a metaphysical possibility or philosophical jargon? Can artificial intelligence surpass human intelligence? Can A.I. machines (functionally and behaviourally identical to a human agent) be built that operate independently, without the intervention of human intelligence: machines that not only work but also think like human beings? Can there be a singularity in the field of artificial intelligence in the future? The rapid development of A.I. within two decades makes us think about its future prospects and the possible threats to humanity. There are several ethical issues that have to be addressed. If rationality is the criterion for the autonomy of an agent, as stated by Immanuel Kant, can artificially intelligent machines satisfy the criterion of rationality for the status of autonomy that is applied to the human organism?
Chalmers contemplated in [1] the Chinese room argument for both the connectionist and symbolic approaches in AI, as I have in the thread [2]. Expanding upon the axiom 'syntax is not sufficient for semantics', I would comment that, as presented in the diagram of the thread (attached here also), there is another error in Searle's argument.
The neural network system drawn there is a complex distributed system aimed at accuracy in translating 1-gram, not n-gram, models; if the units were taken as words, that would account for semantic interpretation (which would require another neural network).
I would welcome counterpoints that can refine the argument.
References
[1] Subsymbolic Computation and the Chinese Room by David J. Chalmers http://consc.net/papers/subsymbolic.pdf
The question concerns ontological, epistemological, methodological and praxiological relationships.
Can we model stress? Just curious about it.
(keeping in mind that this is the era of artificial intelligence, machine learning, data science and so on)...
Can we have a predictive model of stress and behaviour as KPIs?
Also, philosophically there must be a path between stress, behaviour, emotions, intelligence, etc. Can we also test these paths and estimate their coefficients?
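A minimal sketch of how such path coefficients could be estimated, assuming simulated data and two chained regressions in Python with statsmodels (a dedicated SEM package would add fit indices; all variable names and effect sizes below are illustrative assumptions):

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-ins for real measurements (illustrative only).
rng = np.random.default_rng(1)
stress = rng.normal(size=300)
emotion = 0.6 * stress + rng.normal(0, 0.5, 300)                    # stress -> emotion
behaviour = 0.4 * emotion + 0.2 * stress + rng.normal(0, 0.5, 300)  # both -> behaviour

# Path 1: stress -> emotion
path1 = sm.OLS(emotion, sm.add_constant(stress)).fit()
# Path 2: stress + emotion -> behaviour (direct and mediated paths)
path2 = sm.OLS(behaviour, sm.add_constant(np.column_stack([stress, emotion]))).fit()

print("stress -> emotion:", path1.params[1])
print("stress, emotion -> behaviour:", path2.params[1:])
```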
Regards,
Abhay
I need feedback on a review of my paper on the effects of EMF at nonthermal doses on living creatures, which is based on the storage capacity of DNA. Can reincarnation be explained by physical mechanisms, and can DNA memorize the knowledge of our ancestors?
The phenomenalist school believes there is something irreducible about consciousness; the nearest we are going to get is a simulation of it, and there will probably be something wrong with the simulation.
The only group I know of to claim success at forming consciousness under this school is Dr. Edelman and his colleagues at NSI, who claim "the phenomenal gift of consciousness" for what are basically organ-level simulations that have been combined to form a brain-like simulation.
Stan Franklin has been instrumental in building cognitive architectures based on Global Workspace Theory; his architectures have ranged from a pandemonium-based conference organizer to a full implementation of Bernard Baars's cognitive engine, certified by Baars himself as "functionally conscious". LIDA, the learning version of the IDA consciousness architecture, has been deployed with some success.
I apologise if the question seems naive. "Subjective experience" appears to be a sort of pleonasm or redundancy. But is experience possible without subjective (phenomenal?) instances? I was thinking about some pathological states (e.g. blindsight patients: they can avoid an obstacle without "consciously seeing" it; they perceive the obstacle in an unconscious way, so is it possible to consider this type of experience "non-subjective"?).
Furthermore, in Tononi's Integrated Information Theory of consciousness, experience is defined as integrated information; in this sense, a simple photodiode can integrate 1 bit of information, so it can have a sort of experience. If the theory were right, how would this experience be "subjective"?
If you could answer my questions and/or indicate some references, I would really appreciate it. Thank you
For all the formidable progress made in numerous fields by cognitive neurosciences, we are still in the dark about very many aspects of attention. One thing that is now beyond doubt is the multiplicity of processes that underlie it, for attention is involved in numerous other fundamental cognitive processes — perception, motor action, memory — and any attempt to isolate it in order to study its constant features is bound to prove sterile. For over a century and a half attention was a crucial topic in neurophysiology and psychology. In the early days of scientific psychology it was viewed as an autonomous function that could be isolated from the rest of psychic activity. However, this idea soon came to be seen as inadequate. At the beginning of the 20th century researchers became convinced that attention underpinned a general energetic condition involving the whole of the personality. Within a few years the emergence of the Gestalt and Behaviourism paradigms caused these studies to be overshadowed, and it was not until the second half of last century that they regained their importance.
For a long time the debate was influenced by the hypothesis that attention constitutes a level of consciousness varying widely in extension and clarity and only functioning in relation to its variations: from sleep to wakefulness, from somnolent to crepuscular, from confusion to hyper-lucidity, from oneiric to oneiroid states, and so on. Subsequently other approaches of considerable theoretical importance linked attention to emotion, affectivity and psychic energy or social determinants. Yet what do we really know about attention, the sphere of our life which orients mental activity towards objects, actions and objectives, maintaining itself at a certain level of tension for variable periods of time? How and to what extent is attention related to consciousness? Why does only a minimal part of the information from the external world reach the brain even though the physical inputs strike our senses with the same intensity? And why is it that, although they enter our field of consciousness, most of these inputs do not surface in our awareness? It is well known that in the selection of stimuli, attention is strongly influenced by individual expectations. They ‘decide’ which objects and events appear in our awareness, and which are destined never to appear. The law of interest regulates a large part of the selection of the objects and topics on which our attention is focused.
Hello researchers,
I have run a distribution power flow on an IEEE 123-bus distribution system. The results obtained seem logical, in the sense that the voltage relationships between buses are correct (when considering zero generation in the system, with all power imported from the slack bus to satisfy the load).
But there are mismatches in the relationship Pgeneration = Pload + Plosses. Kindly provide your suggestions for resolving the issue, or for alternative platforms. Any previous results or related materials would be helpful.
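As a first sanity check, one can recompute the balance directly from exported branch results; the sketch below (Python, with made-up placeholder numbers) assumes you can dump the active power measured into each branch at both ends, in which case the per-branch loss is simply their sum. A persistent mismatch then usually points to sign conventions, an unconverged solution, or shunt elements left out of the loss sum:

```python
import numpy as np

# Placeholder values in MW -- replace with the results exported from your run.
p_gen  = np.array([3.60])                  # slack (and any DG) injections
p_load = np.array([1.20, 1.15, 1.10])      # bus loads
p_from = np.array([3.60, 2.35, 1.12])      # active power entering each branch
p_to   = np.array([-3.55, -2.31, -1.10])   # active power at the receiving end

p_loss = (p_from + p_to).sum()             # per-branch loss = p_from + p_to
mismatch = p_gen.sum() - p_load.sum() - p_loss
print(f"total losses = {p_loss:.3f} MW, balance mismatch = {mismatch:.3f} MW")
```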
Thanks!
Some philosophers feel that there is a gap between what can be physically explained and what we take as necessary for consciousness. This is called the explanatory gap. But does it really exist? Or are the so-called phenomenal states of mind all explainable, and we just haven't yet come up with the explanation?
Dear Group members,
I'm currently doing research into how we can make robots learn to dance. At present, I have programmed a robot (in simulation and on a real robot) to 'learn' to dance, without any preprogrammed actions. The robot first builds its own actions, which it then combines to form a dance, but there is still a lot to look into.
I have come to the conclusion that, for this to be fully accomplished, two different types of results need to be analysed: one that shows a computational result for a dancing model, and another based on what people think (i.e. their perceptions of a robot dance).
I have developed a questionnaire in the form of a website that demonstrates some key points on dance, and I was wondering if you wouldn't mind taking part by honestly giving your comments on the robot's dancing.
The website is: http://www-staff.lboro.ac.uk/~coit2/index.html
All I need is approx. 30 minutes of your time. Your feedback would greatly assist me in this research.
Furthermore, if anyone is doing work in this area, or has suggestions or interest, then it would be great to hear from you.
Thank you
Kind regards
ibs
It has been said that the brain is just a pattern-matching machine, and thus that consciousness is all about pattern matching. Some people have even gone so far as to write up mathematical discussions that attempt to equate memory with Bayesian inference and Markov chains.
It is my contention that these people are barking up the wrong tree, and that implicit memory works in a weaker vein than pattern matching, something I have taken to calling similarity selection.
In the similarity-selection model, implicit memory is a satisficing system, not a pattern-matching system, and because it matches by satisficing, it matches only at the similarity level, not the pattern level.
In my digital circuit I used a satisficing gate to weaken the logical selection capability of the CAM circuits, so that they didn't match on whole patterns, but on partial patterns. This means that, in order to detect a pattern, you need multiple layers of similarity detectors where you would need only one in a logical pattern-matching version of the circuit.
This increase in detection opportunities means that we can store more information content in the same role of matching a pattern, and thus, through redundancy, we can link patterns that would have no direct links of their own at the higher-order similarity levels.
In a computer, for instance, we cannot give different types of data the same value, because there is no way to combine them into a single value. In similarity selection that limitation is removed, since at the higher level of selection many different forms of the same value may be combined to form the higher-level version. Since these higher forms of selection feed back to the original values in the real implicit memory system, the choice of which elements to select is reinforced, even though the satisficing gate weakens the selection opportunity at the lowest level.
Consider the fact that the value 1 has different meanings depending on what form it takes.
bcd 0001
ebcdic 0000 0001
int (16) 0000 0000 0000 0001
int (32) 0000 0000 0000 0000 0000 0000 0000 0001
Floating point 0.1 E1
Binary True
Power On
In a pattern-matching venue, each of these forms is unique, and thus unconnected.
In similarity selection they all represent forms of the same value, because their similarity is more important than the pattern they are based on.
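A toy sketch of the distinction, under my own simplifying assumptions (bit vectors, and a fixed agreement threshold standing in for the satisficing gate):

```python
def matches_exact(stored, probe):
    """Pattern matching: fire only when every bit agrees (a 4-input AND)."""
    return stored == probe

def matches_similar(stored, probe, threshold=0.75):
    """Similarity selection: fire when enough bits agree (satisficing)."""
    agree = sum(s == p for s, p in zip(stored, probe))
    return agree / len(stored) >= threshold

stored = [0, 0, 0, 1]                    # BCD "1"
probe  = [0, 1, 0, 1]                    # a noisy or partial version
print(matches_exact(stored, probe))      # False: the pattern matcher rejects it
print(matches_similar(stored, probe))    # True: 3 of 4 bits satisfice the gate
```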
Instead of gradually replacing biological neurons with silicon neurons, as in Chalmers' Fading Qualia, I attempt to gradually replace divisible functions of biological neurons with silicon emulation.
The question is, at which manipulation stage does our brain lose consciousness (qualia)?
1) Replacement of axonal spike propagation with an external artificial mechanism that uses radio transmission (e.g. WiFi): Causality between presynaptic neuronal firings and postsynaptic PSPs is preserved, but now neurons are physically isolated.
2) Further replacement of postsynaptic PSP integration with an external artificial mechanism: Causality between presynaptic neuronal firings and postsynaptic somatic membrane potential is preserved, but now without sophisticated dendritic-somatic computation.
3) Further replacement of transformation from postsynaptic somatic membrane potential to postsynaptic firing (Hodgkin-Huxley Eq. mechanisms) with an external artificial mechanism that integrates presynaptic firings and activates postsynaptic neurons by current injection accordingly: Causality between presynaptic neuronal firings and postsynaptic neuronal firings is preserved, but now without an intact internal variable, the membrane potential.
4) Mere replay of spatio-temporal neuronal firing patterns by external current injection: Zero causal interactions among neurons.
What existing tests for machine consciousness directly test the qualia generated in a device? I find many proposals, but they seem to test only functional aspects of consciousness-related neural processing (e.g. binding, attentional mechanisms, broadcasting of information), not consciousness itself.
I have a proposal of my own and would like to know how it compares with other existing ideas.
The basic idea is to connect the device to our brain and test whether qualia are generated in our "device visual field". The actual key to my proposal is how we connect the device and how we set the criteria for passing the test, since modern neurosynthesis (e.g. an artificial retina) readily leads to sensory experience.
My short answer is to connect the device to one of our cortical hemispheres by mimicking inter-hemispheric connectivity and let the device take over the whole visual hemifield. We may test various theories of consciousness by implementing candidate neural mechanisms onto it and test whether subjective experience is evoked in the device's visual hemifield.
If we experience qualia in the "device visual hemifield" with the full artificial hemisphere, but not when the device is replaced with a look-up table that preserves all brain-device interaction, we have to say that something special, say consciousness, has emerged in the full device. We may conclude that the experienced qualia are due to some visual processing that was omitted from the look-up table. This is because, as regards the biological hemisphere, the neural states would remain identical between the two experimental conditions.
The above argument stems from my view that, in case of biological to biological interhemispheric interaction, two potentially independent streams of consciousness seated in the two cortical hemispheres are "interlinked" via "thin inter-hemispheric connectivity", without necessarily exchanging all Shannon information sufficient to construct our bilateral visual percept.
Interhemispheric connectivity is "thin" in the sense that low-mid level visual areas are only connected at the vertical meridian. We need to go up to TE, TEO to have full hemifield connectivity. Then again, at TE, TEO, the visual representation is abstract, and most probably not rich enough to support our conscious vision as in Jackendoff's "Intermediate Level Theory of Consciousness".
The first realistic step would be to test the idea with two biological hemispheres, where we may assume that both are "conscious". As in the last part of the linked video above, we may rewire inter-hemispheric connectivity in split-brain animals to fully monitor and manipulate inter-hemispheric neural interaction. Investigating the conditions which regain the bilateral percept (e.g. the capability of conducting bilateral matching tasks) would let us test existing ideas on conscious neural mechanisms.
I have a thought experiment (video link: "Paradox of Subjective Bilateral Vision", 16:00-28:00) that results in very strange situations if "high-level visual areas themselves are not sufficient for conscious vision (or low/mid-level visual areas are necessary)": namely, that the neural mechanism of conscious vision, its verbal report and the solving of perceptual visual tasks (e.g. bilateral symmetry detection) would violate the physics that we know of today. I would like to know if there is any experimental/theoretical evidence on this issue. Thanks in advance!
Thanks to the two contributors, the above question has developed into a discussion of how subjective vision gains simultaneous holistic access to spatially distributed neural codes. There have been claims that 'holistic access' should be considered a serious constraint on the neural mechanism of subjective experience. In the case of vision, the seamless and unified nature of our bilateral percept can be thought of as an indicator of our consciousness mechanism having holistic access to widespread neural representation.
Unlike many popular theories of consciousness, some scientists believe that holistic access should be solved by actual physical processes in the realm of established science. In other words, there should be some single 'entity' that has causal physical access, with consequences, to all subjectively experienced information. However, there are a surprisingly small number of models of consciousness that actually implement such a mechanism.
I explain my "Chaotic Spatiotemporal Fluctuation" hypothesis in the linked video (40:00 - 50:00), where holistic access is implemented by deterministic chaos components in neural fluctuation. Here, I define holistic access as 'every local change in the distributed neural code evoking global system-level changes in neural fluctuation', which relies on the so-called 'butterfly effect' of deterministic chaos. For the sake of clarification, the link between 'holistic access' and 'subjective experience' goes beyond physics that we know of today.
I would very much appreciate comments on the first question too.
Philosophers do not agree on what consciousness is, neuropsychologists do not agree on the neural correlates of consciousness, and engineers keep telling us that it is a control system; but it feels like there is more to the story. Perhaps there is, but at the heart of the system it seems to take on the role of a form of control system, with strange, unexplained experiences. Some of these experiences, like the experience of being conscious from wake-up to sleep-time, are beginning to seem more and more unlikely as we find out, for instance, that our time sense is more complicated than a clock would be. Perhaps it is not an indivisible state of being aware so much as the illusion of being aware from wake-up to sleep-time.
My theory is that it is easier to control the organism if it doesn't bother with the complexity of dealing with discrete starts and stops of consciousness, but treats the range of subconscious and conscious states as if they were all part of consciousness. In this way the phasing between states does not affect the self-image of the individual organism, and so simplifies its control interface.
The criticality hypothesis asserts that the brain is a critical system, like a paramagnetic material at the critical temperature. Being in the critical state can maximize the repertoire of a system (physicists call this susceptibility), besides the unpredictability and coherence of states.
There is experimental evidence for it: neural avalanches show a great similarity to critical systems. Dante Chialvo and other physicists have provided a good understanding of the critical brain in recent years.
BUT, can the criticality hypothesis spark a new understanding in the field of consciousness research?
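To make the hypothesis concrete, here is a minimal sketch (my own, in Python) of a critical branching process: at branching ratio sigma = 1 the avalanche sizes follow the heavy-tailed distribution reported for neuronal avalanches, while values below or above 1 give small or runaway cascades:

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma, size_cap=100_000):
    """Total size of one avalanche in a branching process with ratio sigma."""
    active, size = 1, 0
    while active and size < size_cap:
        size += active
        active = rng.poisson(sigma * active)   # mean sigma descendants per unit
    return size

for sigma in (0.8, 1.0, 1.2):                  # sub-, exactly, and super-critical
    sizes = [avalanche_size(sigma) for _ in range(2000)]
    print(sigma, np.mean(sizes))               # mean size grows sharply at 1.0
```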
A.A
conscioustronics
Dr. Gerald M. Edelman is one of the leaders of a simulation-based approach to consciousness. The idea is that we really don't know enough about the way the brain works to make informed functional statements, and should therefore do away with the unfortunate assumptions of A.I. and look instead to neuroscience and simulation for models of the mind that perform work similar to that done by the brain.
The main caveat against this approach is that it doesn't explain why certain simulations are needed; it just assumes that there are biological reasons, and attempts to copy the neuroscience in a simulation.
I work in industry and completed my undergraduate degree almost 9 years ago. However, I have some ideas and I want to publish, or even collaborate if possible. What is the best venue for people in my position? I do not have any professors or academic reviewers.
My interests are primarily in AI, logic and knowledge representation.
Nowadays we try to simulate each process with a computer program, which serializes parallel tasks into a sequence of partial tasks. Do we lose the dynamics of the entity in such a mode? Artificial neural networks have also been simulated as simple brain models.
Is there a risk that, in such a computer model, important phenomena are lost which, in an analog network, would give rise to interesting phenomena?
The measure "phi" as the capability of a system to produce integrated information seems to just define necessary connections. However, it seems that it doesn't indicate what kind of neural dynamics integrates the whole existing information all across a complex. is it synchronization, recurrent activity or something else?
For some time now there has been controversy between the people who think that the cerebral cortex is important to consciousness and those who think it is seated in the brain stem. In this question I note that the arrangement of connections between the precuneus and the PAG would offer a compromise, allowing PAG-based influence to directly affect cortical influence, in which case both schools of thought are vindicated.
After reading a recent (2011) article on cooperation between the default mode network and the frontal-parietal network in internal trains of thought, it became obvious to me that there were two workspace-like hubs: the angular gyrus, connected to the frontal-parietal network, and the supramarginal gyrus, connected to the default mode network. This played into my work on weak attention, in that it suggested that if the angular gyrus fed forward into the supramarginal gyrus via workspace-like transmission, it would tend to support my assertion that complicit attention is the result of two different networks working together.
There are some people who believe that the ability to build a conscious robot will doom the earth. In a Terminator like total war, the human race will destroy itself, and end all life on earth.
This is a powerful metaphor, which I call the Special Enemy Metaphor: we project our human failings onto our devices, and they destroy us.
An alternative metaphor, just as devastating, is that robots clean up the earth almost despite humans, and cosset the human race, extending its life at the cost of what makes us individuals. We become hive minds, or mindless animals repeating human-like activities, in the end no better than zombies.
Both of these scenarios are what I call the dark side of the singularity problem. They assume that the result of conscious robots will be the destruction of something unique in humans.
Agency is the ability to self-direct.
There are many aspects of agency that are involved in consciousness research, and many levels at which these aspects have been tried. The Global Workspace Theory, for instance, has been implemented as a multi-agent architecture. There is much discussion as to whether agents that are conscious make up consciousness, or whether consciousness is derived from agents that are non-conscious.
Where it is penetrating neuroscience is in the study of schizophrenia and other failures of agency that have been described in the medical literature. These failures are traced back, in lesion studies, to where in the brain the damage was done, and estimates of what the failure means are used as indications of what that part of the brain probably does.
One area of special interest is the orbitofrontal lobe, which has been thought to be involved in failures of the sense of self. People with damage in this area tend not to accept as true statements about what they have previously done, suggesting that they take no ownership of the activity.
Another interesting effect is a sort of failure of the log of activity, where people lose track of what they have done, not because of a lack of a sense of self, but because of a lack of accountability for what they do. "My cousin moved that arm," they may say, because they don't remember moving it themselves.
While cognitive models based on production rules are quick and relatively easy to design, they do not tell us as much about how the mind works as their inventors might have thought. Worse, they don't really capture the richness of human consciousness, although they are certainly autonomous in behavior. One wonders if they are truly conscious, or just an approximation of a zombie.
It doesn't help that the theoretical zombie is able to do everything that a conscious mind could do, without consciousness. Functional consciousness seems most likely to achieve this zombie stature, if only because functional consciousness does not really explain what consciousness is.
Jerry A. Fodor's book "The Mind Doesn't Work That Way" got me thinking about the nature of neural networks and the constraints they place on the circuits in the brain. With the help of David LaBerge's work on attention, linking the thalamus and the PFC to cortex function, I designed a memory model that linked implicit memory, explicit memory, working memory, skill memory and declarative memory into a model based heavily on a weak attention model, with 9 or more epochs during which processing on memory gets done. Braak's book "Architectonics of the Telencephalic Cortex" was instrumental in linking the basic architecture back to the micro-architecture of the brain.
I quickly learned that a constraints-based approach made understanding the architecture of the brain, even down to the micro-architecture of the telencephalic cortex, more approachable.
A definition of consciousness emerged that, while not accepted by the extreme phenomenalists, offers an opportunity to explain the nature of phenomenal events. It will be many years before I can build a physical model to prove my contentions, because I will have to invent, among other things, a new architecture for computers, but the preliminary research suggests that such technology is practical.
Several types of record keeping may be required in a "Universal Management System". It can be divided into two main subjects: quantitative and qualitative records (it is possible that at a certain level both share the same unique mechanism). Observations of known phenomena among the different constituents of a "Universal Management System" may indicate that, at least for smooth running, it is vital to keep records of position and place; of carried or issued energy; of distance and angle to others, to avoid unwanted collisions; of the guided map of the route of movement, speed, carried energy and direction of spin; of the voluntary actions of a system, according to the in-built intelligence level the system needs to cope with the situation at hand; of the involuntary actions of a system, according to a given, guided approach; and of all inputs and outputs of the system. Note that we can assume that every type of particle and wave existing in nature may itself be the simplest form of a system.
What would be a good approach to constructing "modelling scenes" of such "natural record keeping" for our understanding and learning? And what level of artificial intelligence would be required for this: which generation of AI would be suitable, or do we need to go beyond artificial intelligence toward "artificial wisdom"? Can any researcher help summarize this set of questions, or ask them in a better way?
The idea is that motivation, or libido as it is sometimes called, can be explained by a 5-emotion matrix. Each emotion is linked to an instinctive drive. As a drive's requirements are met, it reduces in strength, and so each drive operates at a number of levels over time. If we look at Abraham Maslow's hierarchy of needs, and assume that each basic need is an instinctual drive, we can see that the drives are ranked according to survival potential: first for the individual, then the species, then the immediate social group, then the society, and finally the mental health of the individual.
The idea that there are only 5 emotions therefore seems to fly in the face of previous thought.
However this cognitive model has had some important successes at predicting behavior.
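A toy simulation of the mechanism described above; the drive names, weights and update rule are illustrative assumptions, not part of the original model:

```python
# Each drive weakens as its requirement is met; behavior follows the drive
# whose level-times-rank product is currently strongest (Maslow-like ranking).
drives = {
    "survival-self":    {"level": 0.9, "weight": 5},
    "survival-species": {"level": 0.4, "weight": 4},
    "social-group":     {"level": 0.6, "weight": 3},
    "society":          {"level": 0.3, "weight": 2},
    "mental-health":    {"level": 0.5, "weight": 1},
}

def satisfy(name, amount):
    """Meeting a drive's requirement reduces its strength."""
    drives[name]["level"] = max(0.0, drives[name]["level"] - amount)

def dominant_drive():
    return max(drives, key=lambda d: drives[d]["level"] * drives[d]["weight"])

print(dominant_drive())        # survival-self dominates at first
satisfy("survival-self", 0.8)  # ...until its requirement is largely met
print(dominant_drive())        # another drive then takes over behavior
```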
Although I started by studying artificial intelligence and neural networks, and developed a theory of consciousness, I felt that my studies in A.I. and my theory were somehow incompatible, especially with the agent-oriented concepts being pushed by A.I.
That is why I see artificial consciousness as a different discipline, and I am trying to associate it with neuropsychology rather than A.I. It is too easy to get locked into the parallel-distributed-processing mindset when studying neural networks from an A.I. perspective, but when you study micro-anatomy from a neuropsychology mindset, the missing parts of the connectionist model are more easily seen.
Can a digital circuit be used to simulate an analog cell? Of course, we simulate analog signals all the time, in computing. The question is how close an emulation we want, and what we expect the circuit to do.
Recently I started working on a digital implicit memory project based on the idea of a content-addressable memory cell. The memory cell is not all that original; such cells were thought up years ago and are currently used in routing and in astronomical instruments. The question was whether I could implement an equivalent to an implicit memory using such a mechanism.
The first problem was to run a feasibility test on the ability to store complex data in implicit memory cells. To do this, I built a 4-bit implicit memory array, just large enough to store a single digit in BCD coding.
I eventually got a circuit running using two RS flip-flops, with an XNOR gate linking to the lines of the match bus. It took much longer than necessary because I had to review my digital electronics, and some of my analog electronics, and then hit on a workable architecture, since none of the available circuit diagrams seemed to have enough detail to let me copy the circuit. Eventually I used a 3-input NAND gate chip and a 2-input NAND gate chip to implement the circuit.
What the circuit amounts to, it seems, is a simple static RAM cell and an XNOR gate.
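In software terms, the cell just described reduces to a stored bit compared against the match bus by an XNOR per bit; a minimal model of the nibble-wide word (a sketch, not the TTL schematic) is:

```python
def xnor(a, b):
    return int(a == b)          # 1 when the stored bit equals the bus bit

def nibble_match_lines(stored, bus):
    """Per-bit match lines of a 4-bit CAM word: one SRAM bit + XNOR per cell."""
    return [xnor(s, b) for s, b in zip(stored, bus)]

print(nibble_match_lines([0, 0, 0, 1], [0, 0, 0, 1]))  # [1, 1, 1, 1] exact hit
print(nibble_match_lines([0, 0, 0, 1], [0, 1, 0, 1]))  # [1, 0, 1, 1] partial hit
```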
To get the circuit to respond to incomplete data, I wanted something like a majority gate, to allow the cells of the circuit to vote on whether the circuit would output a 1. I had to settle for a 2-or-more gate, because there wasn't an odd number of bits in the storage element.
If I wanted just a recognition circuit, I would use a 4-input AND gate and be done with it. Instead, what I want is a redundant cloud of degenerate data that will allow me, in a second level of implicit memory, to find patterns in the redundancy and degeneracy of the coding, stripping away the actual storage code and revealing the underlying data.
One of the problems with using CAM (content-addressable memory) is that its function is often very restricted, because it only recognizes content expressed in the same coding scheme. Such a selective role is fine for routers and astronomical gear, which depend on the same representation being used for the same data, in which case CAM becomes a recognition device; but the real world is not all that willing to represent everything in exactly the same way every time you want to recognize it.
For instance, if your girlfriend wears a green sweater on Monday and a red sweater on Tuesday, should you be able to recognize her on Tuesday? You had better, or she will make your life miserable!
The problem with using CAM as a recognition mechanism is simply that, for the four-input AND gate to fire, your girlfriend would always have to wear green sweaters. Not going to happen!
The real world is more complex than that, so we do not want to recognize using CAM at the signal level; we want a more general system. Here we get into the realm of soft computing: we want to fuzzify the signal in order to get it to recognize a wider range of signals as being the same thing.
However, as far as I know there isn't a fuzzy AND gate, so what we need is a mixture of AND gates and OR gates, or NAND and NOR gates, or something that will allow us selectivity, and agglomeration into fuzzier sets, but based on digital logic. The result is something I call a satisficing gate, in that it indicates that the memory on the match bus satisfices a high percentage of CAM locations in that cell, and therefore might be related to the data in that cell. Many digital codes will satisfice the gate, so the implicit memory cell is not recognizing the actual code, but looking for something similar to the code used to define the cell's value.
I actually designed the satisficing gate (the 2-or-more gate) three times before I hit on an architecture that would fit the exact TTL logic chips I had available. (I curse the fact that I had hundreds of TTL logic chips in storage until about 6 months ago, when the storage unit was sold out from under them.)
All in all, a nibble-based satisficing CAM circuit in TTL logic takes up about 11 basic logic-gate chips.
And the function it performs is slightly less than the function of even one neuron. In fact, most of the neurons in at least the cerebral neocortex have literally tens of thousands of connections with other cells. If we represented each connection with a single CAM cell, we would still need literally thousands of those cells to simulate a single neuron. The satisficing gate for a 10-thousand-cell CAM array would be a sight to see. It might be better to do what the cell does and use a threshold effect instead. One way of doing it is to count the number of cells that are active and compare the count against an arbitrary number that defines the threshold. A slightly better approach is to let the cell adjust the value of the threshold according to some rules.
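The counting and adaptive-threshold variants from that last paragraph might look like this in software (again a sketch of my own; the adaptation rule is an illustrative assumption):

```python
def thresholded_match(match_lines, threshold):
    """Fire when at least `threshold` match lines are active: the 2-or-more
    gate generalized to arrays far wider than a nibble."""
    return sum(match_lines) >= threshold

class AdaptiveThresholdUnit:
    """Let the cell drift its threshold toward recent activity levels."""
    def __init__(self, threshold=2.0, rate=0.1):
        self.threshold, self.rate = threshold, rate

    def fire(self, match_lines):
        active = sum(match_lines)
        fired = active >= self.threshold
        self.threshold += self.rate * (active - self.threshold)  # slow drift
        return fired

unit = AdaptiveThresholdUnit()
print(unit.fire([1, 0, 1, 1]), round(unit.threshold, 2))  # True 2.1
```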
Dr. Sun, a leader in the hybrid school of consciousness, built a cognitive architecture based on a 4-module system separated into non-action-oriented and action-oriented sides. Although two of the modules were neural-network oriented, the rest were functionally oriented and relied heavily on a rule base. This type of architecture was seen as acting in a manner similar to human action, and it was theorized that it might be conscious, but no explanation of how it became conscious was offered.
In essence this system uses neural simulation to filter a basically functional architecture, but because approximately half of the system is simulated, it can be said to be a hybrid system.
A recent video from MIT quoted what must be a cliché at that institute: if you can't build a model of something, you don't really know it.
Well, that isn't exactly correct, but it is certainly simple enough. More precisely: if you can't build a model that seems to operate in a similar manner to the original, you don't really know the original.
Thinkers have been building models of the mind for centuries. None of them work much like the original; you would think we would give up, or learn how to get it right. We might yet manage to get it right, but doing so requires a lot of knowledge that we can only speculate about at this time. The reason is simple: the only working model we can copy from is so complex internally that we constantly lose track of how the pieces work together.
It's a little like the blind men and the elephant. As you remember, six blind philosophers were brought into a room and introduced to an elephant. One of them stood at the head of the elephant and commented on how rope-like it was. One stood at the back end and commented on how stinky and stringy it was. One touched the skin and talked about its pebbly texture; one felt the leg and commented on how much it felt like a tree; one felt the ear and talked about how floppy it was; and one felt the mouth and commented on the size of the teeth.
Now it is easy to see that there was no consensus among the blind men as to what they were discussing. But those of us outside the story know that they were discussing different aspects of the same thing: an elephant. The mind is a lot like that. We have a number of different disciplines all looking at the mind and getting completely different results, and there is very little likelihood that the practitioners of these disciplines will step back and look at the whole beast, because each discipline is blind in its own way.
The problem is how to learn from all these disciplines how to build a model of the mind. I am encouraged by the nature of the successes I have had integrating information from a number of disciplines that such a model might be possible. I think that attempting the model is important because it will inform us about the places where each discipline might be blind to information available from the other disciplines. What we need is a framework within which to gather the information that is available and build from it a better model. I have called this framework Artificial Consciousness.
There is just one little problem: each discipline has its own jargon and its own unique viewpoint, and each discipline defends its viewpoint against all others, despite the fact that there might be crossover between the disciplines. What we need is a special breed of researcher who can understand the individual disciplines on their own merits, yet ignore the defensiveness and fit the concepts together into a common framework of thought. The problem comes when different disciplines have different ideas of what the same jargon terms mean.
It gets quite difficult to bring the disciplines together while ignoring the jargon, because often the information is only available in terms defined by jargon. For instance, is there any reason why we still use Latin names for body parts, except that we always have? And when we do, is there any reason why one discipline in medicine uses a different term for the same body part than another? Actually yes, there is, but the answers lie in the history of the disciplines, which someone coming from outside is not likely to know.
To integrate a number of disciplines of the mind into a greater model is challenging, and unrewarding in that none of the disciplines want to see their work reinterpreted into a model that might be incompatible with their comfort zone. The scientists in each discipline will not thank you for reinterpreting their work to make it fit with other disciplines, and as a result there will be real friction and attacks against such an attempt. Any such model will have to be defended constantly against those attacks, which will in turn produce blindness that will assure that the model will probably be incomplete. But if the model works better than previous models, it is still an indicator of an approach that might work better than the approaches used by the existing disciplines. For this reason I think it is a valid and important thing to attempt.
Artificial Consciousness is quite simply the art of making a machine that is aware of itself and what it knows.
We know that there is already a machine that is self-aware and has declarative memory, to wit the human mind, although there are still some who would claim that a machine made of nerve cells is not a machine at all.
Others say that just because the machine uses a different technology (neurons instead of transistors), that does not mean we can't transfer the functions over to another machine.
Others say that even if we did, the machine would not feel like it was human, or a bat, or perhaps the machine would not be able to feel at all.
Yet another group has said that if we can create an analog that does everything a conscious machine can do, it doesn't matter whether it is conscious. (This group is called the zombie group.)
Another group says that we will only get to consciousness through a full implementation of a consciousness architecture simulated on artificial neurons.
My own personal view is that by combining simulation of the brain with A.I. principles, we can make an artificial consciousness that is not a zombie, even if it does not know how to feel like a bat, or a human.
To get there, however, I think we need to expand our understanding of histo-psychology: how brain tissues affect the functions of the brain.
Lotfi Zadeh has suggested a test for human-level machine intelligence: talk for 20 minutes on a topic of your choice, and then ask the machine to summarize the talk in 20 words or less. He claims that no machine yet produced can complete the task.
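One could operationalize the test today with an off-the-shelf summarizer, though whether passing it would demonstrate intelligence is exactly what is in dispute. A hedged sketch using the Hugging Face transformers pipeline (the model name and truncation limit are assumptions, and the transcript is a placeholder):

```python
from transformers import pipeline

# Any seq2seq summarization checkpoint would do; this one is an assumption.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

talk = "full transcript of the twenty-minute talk goes here"
out = summarizer(talk[:4000], max_length=40, min_length=5)[0]["summary_text"]

# max_length counts tokens, not words, so enforce Zadeh's 20-word cap by hand.
print(" ".join(out.split()[:20]))
```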
I am looking for information on MVR using Microsoft Excel, and also on the differences between Multi-Variable Regression (MVR) and an Artificial Neural Network (ANN), if there is any comparison between the two.
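Excel's LINEST (or the Analysis ToolPak regression) covers only the MVR side; for a side-by-side comparison, a small sketch in Python with scikit-learn fits both models to the same synthetic data, where the deliberately nonlinear term is an assumption chosen to show where the two can differ:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(0, 0.1, 400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mvr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                   random_state=0).fit(X_tr, y_tr)

print("MVR R^2:", r2_score(y_te, mvr.predict(X_te)))  # misses the squared term
print("ANN R^2:", r2_score(y_te, ann.predict(X_te)))  # can capture nonlinearity
```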
“AI researchers have focused (…) on the production of AI systems displaying intelligence regarding specific, highly constrained tasks. Increasingly, there is a call for a transition back to confronting the more difficult issues of “human-level intelligence” and more broadly artificial general intelligence,” according to the AGI-13 conference to be held in Beijing, July 31 – August 3, 2013.
Do you share the same call for a transition?
Computational intelligence is a hot field of research in AI; however, more needs to be done on artificial consciousness. An interdisciplinary approach can make this possible.
It would be great if there were a framework for the Linux environment.
That is, how can we make one network aware of another network, so that the first can direct the learning activity of the second? I would like specific references and technical information about appropriate environments in which the question may be answered.
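One concrete reading of the question is a controller network that observes a learner's training signal and modulates the learner's updates. A minimal PyTorch sketch under that assumption (the architecture, the loss-as-observation choice, and leaving the controller itself untrained are all simplifications):

```python
import torch
import torch.nn as nn

learner = nn.Linear(4, 1)                      # the network being directed
controller = nn.Sequential(                    # the network "aware" of it
    nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1), nn.Softplus())

x = torch.randn(64, 4)
y = x.sum(dim=1, keepdim=True)                 # toy regression target
loss_fn, base_lr = nn.MSELoss(), 0.05

for step in range(200):
    loss = loss_fn(learner(x), y)
    grads = torch.autograd.grad(loss, list(learner.parameters()))
    with torch.no_grad():
        # The controller sees the learner's loss and scales its next update.
        lr_scale = controller(loss.reshape(1)).squeeze()
        for p, g in zip(learner.parameters(), grads):
            p -= base_lr * lr_scale * g

print(loss_fn(learner(x), y).item())           # loss after directed training
```

Training the controller itself, e.g. by meta-gradients or evolution, is the open part of the question.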
It is my belief that the difference between a conscious and an unconscious machine is that the conscious machine is aware of its actions and reacts not only to the environment but to its own impact on the environment, including its own internal states.
I suggest, however, that there are machines that are aware but not conscious, and this causes some consternation among scientists who ascribe awareness to consciousness.
Consider the word "sorry". Many robots would accidentally roll their wheels over your toes. But not many of them would be aware that they had done so; fewer yet would realize that your hopping about was probably due to the fact that your toes hurt; and fewer yet would be able to make the connection that it was the action of rolling over your toes that had caused the pain. Fewer still would be able to connect the action of rolling over the toes to their own actions, and thus ascribe responsibility to themselves; even fewer would recognize that this implied a social requirement to apologize. And fewer still could learn to say "I'm sorry" without preprogramming.
For all their vaunted power, computers today are dumber than a sack of hammers. Most of them have no conception of what they are doing, and no memory of what they have done, except the data files hidden somewhere on their hard drives. Consider Google: Google is a search engine that searches the internet for information.
But how many times have you had to redo a search because it sent you false positives, assuming that because the words, severally and in the wrong order, received hits, that was what you wanted?
This by itself is not bad; it just indicates laziness on your part, in that you haven't learned the interface well enough to create the right search term for every search. But how many times have you reworked the search and noticed that Google has no idea that the two searches were connected in any way?
If Google were conscious, it could help you refine the search and save processing cycles; but because it isn't, the Google organizers can make more money by displaying advertisements that are not part of your search but can be hidden among the false positives.
A recent congruence between my understanding of the declarative memory system and my theoretical model of consciousness has led me to ask whether the particular implementation of the declarative memory system found in vertebrates might be the basis for consciousness as we know it.
Recent research into fruit fly brains has shown that mushroom bodies have a role in the action management system of the brain; but as far as I remember from Braak's Architectonics of the Human Telencephalic Cortex, those structures are more likely part of the human declarative memory, and the recent suggestion that the parahippocampus might be the location of "what" mapping in declarative memory suggests that a change in use might have occurred between insects and vertebrates, one that would have freed up the hippocampal area for use in declarative memory. I trace that change in use to the formation of the vertebrate cerebellum.
Since the craniates do not exhibit cerebellar structures but the vertebrates do, it is my thought that consciousness as we humanly know it is a function of the freed action management system, repurposed to become declarative memory sometime after the evolution of vertebrates. Explaining why I claim consciousness for this area will require some discussion, which will open the subject to further study and possible future research.
(news.stjosef.at) Under the cover headline "2045 – The Year Man Becomes Immortal", the current issue of TIME magazine (February 21, 2011) offers a cover story by Lev Grossman. The article deals with the utopian-seeming conviction of the so-called "Singularity" movement that, in the not-too-distant future, a unique ("singular") moment in human history will occur in which humans and machines (that is, computers) become "one" in some not precisely defined way, and the human species as it exists today thereby ceases to exist. The technology guru Raymond Kurzweil, holder of numerous scientific patents, believes that an exponential increase in knowledge and artificial "intelligence", already observable today, will lead around the year 2045 to a tipping over into a genuinely "superhuman" intelligence. In principle, man could then be superseded by the computer, which would understand perfectly how to simulate human thought and secure "immortality".
Comment (Josef Spindelböck): This utopian vision reveals a secularized form of messianic expectation. Having bid farewell to God, man takes His place, and is finally replaced by the computer. This is nothing other than the abolition of man! Such a conception lives on the gnostic hubris of taking everything into one's own hands and creating a new reality without God, until finally man himself is rationalized away. What remains are soulless devices with the highest artificial "intelligence", but without the capacity for spiritual insight and personal love! One may console oneself that this can never really come to pass, if only because of the metaphysical impossibility that something living, let alone a being with consciousness, could arise from an inanimate thing (and a computer is such a thing). In the monistic conception of the "new" materialism, however, this cannot be ruled out, and, supported by the theory of evolutionism, it combines into a seemingly unstoppable momentum of its own. Against this one can object: even the best computer remains a machine which, while capable of performing calculations on a quantitative basis, fails when it comes to the understanding of relationships that is possible only for a spiritual being, and which is certainly not capable of genuine acts of living. It will always depend on man how he deals with the universal tool that is the computer and how he uses it, whether as curse or blessing! Contrary to the opinion of the singularity theorists, eternal life certainly cannot be acquired, or even "simulated", in this way.
What intelligence is, is the key to understanding life and evolution.
Until the mid-1980s, scientists were secure in the thought that they would quickly figure out how to decode the brain. Today's computational biology was founded on the idea that we could somehow put numbers to what was going on in the body and learn how it worked. I still hear scientists echoing that thinking, even though we have since realized that the early computational approaches were flawed.
Back then, it was treated as fact that the DNA in our bodies was a blueprint from which the organization of our body into organs and so on naturally fell. Of course we didn't know yet how DNA worked, but it was only a matter of time until we did; then all the information about our bodies would be exposed, and we would be able to mix and match until we approached perfection.
Why do we keep raising these false hopes, only to have them dashed on the rocks of reality?
Since then we have found that there are far fewer genes in our DNA than we had originally expected, and that the combinatorial complexity of those genes falls so far below the actual complexity of the body that they must act more like guides than actual blueprints. Somehow the rest of the complexity of the body self-organizes around these guides to produce the bodies we know and inhabit.
Now, of course, it is not surprising that at the connection level the uncertainty as to which neurons are connected to which others is so high as to destroy any chance of mapping the connections and mathematically calculating the results neuron by neuron. But back in the 1970s, David Marr was convinced that simply by applying probability mathematics he could winkle out the proper operational function of the neocortex, the cerebellum, and the hippocampus. He fully expected to be able to make some sense of the connections at the neuron level, but he never published a paper explaining how.
In 1983, Sir J.C. Eccles published a paper describing the micro-architecture of the neocortex, and neatly shot down David Marr's hypothesis about how the neocortex was organized. Where Marr's work centered on a four-layer model, Eccles required six; and while Marr's model centered on his so-called "codon", Eccles's model centered on the discovery of cylindrical groups of neurons that acted together, firing as a unit instead of severally.
Somehow the neurons were self-organized into clusters that fired as a group, a concept that came to be called neural groups. But could both scientists be right, even though they described such different architectures? Many thought not. I'll be back later with a new installment.
There is only one known test for consciousness, and it is tied up in assumptions about intelligence.
That test was first proposed by Alan Turing in the early annals of computer science, before we really had much of a computer to work with. I call it the zeroth test for consciousness because it tests neither for consciousness nor for intelligence: it tests for anthropomorphism. Unfortunately, humans are all too willing to anthropomorphize machines, so informal Turing Tests have been passed every time you are gulled by the latest chatbot. Even for these programs, Dr. Sbaitso on steroids, there is a question as to how intelligent chatbots are, and whether they can internally represent anything more than a well-turned phrase.
Other tests exist, of course, but they are fragmentary things based on limited knowledge of what consciousness is. Since there are at least nine definitions of consciousness, and the biggest model deals with only two of them, really detecting consciousness will have to wait for a better definition than science is ready and willing to offer today.
Actually, no: evolution is anything but random.
People think that evolution is random because it is driven by mutation. However, evolution is one of a number of algorithms that can convert indeterminate perturbations into movement in a specific direction.
For those of you who have ever held a ratchet in your hands, consider the futility of waving the handle of a wrench back and forth over a bolt. While the bolt is loose you can wave the handle back and forth and the bolt will not tighten; but put a bit of resistance into the mix, and suddenly the bolt begins to turn in a specific direction. Anybody who doesn't understand the mechanism of a ratchet would assume it was still futile to wave the handle back and forth over the bolt. But as the bolt tightens, you have the satisfaction of knowing that it was the ratcheting mechanism that made it possible.
Evolution is a little like that. Chance mutation is direction-neutral, just as your waving the handle back and forth is more or less neutral. But Charles Darwin discovered the ratchet mechanism for evolution: survival of the individual. Survival isn't direction-neutral, and so it gives the impression of progress; evolution seems to be going somewhere. Humans wouldn't exist without it. A toy simulation of this ratchet appears below.
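Here is a minimal sketch of the point, my own illustration rather than anything from the original post: the mutation step is a symmetric random perturbation with no preferred direction, and selection (keep the mutant only if it is at least as fit) plays the role of the ratchet's pawl. The fitness function and step size are arbitrary choices for the demo.

```python
import random

def ratchet_climb(fitness, x0, steps=10_000, noise=0.1):
    """Direction-neutral mutation plus a selection 'ratchet'.

    Each proposed mutation is a symmetric random perturbation, so on
    its own it has no preferred direction. Keeping a mutant only when
    it is at least as fit acts like the pawl of a ratchet: neutral
    wiggles accumulate into movement toward higher fitness.
    """
    x = x0
    for _ in range(steps):
        candidate = x + random.uniform(-noise, noise)  # direction-neutral step
        if fitness(candidate) >= fitness(x):           # the ratchet's pawl
            x = candidate
    return x

# Example: climb toward the peak of a simple one-dimensional landscape.
peak = ratchet_climb(fitness=lambda x: -(x - 3.0) ** 2, x0=0.0)
print(f"ended near x = {peak:.2f}")  # approaches 3.0 despite random moves
```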
There are some who believe that the human brain and the human body are kluges, things cobbled together from biological mechanisms that are not actually effective or efficient. Some would redesign the biological body; others say, why bother, if our technology could build a better body, or if we could download ourselves into different bodies for different tasks, or better yet, simply take our brains out of our bodies and situate them in machines as needed. The result would be that boxes would act like people, and the distinction between a robot and a person would blur, so that the only way to tell would be the size of the brain box, and maybe not even then.
My apologies to any neuroscientists; however, I find that many people, even those with biological backgrounds, do not know enough about neurons to follow the discussions that make this group significant.
A neuron is a specialized type of cell. It has evolved to transfer information above and beyond the information needed to develop the basic cellular structure of an organism. It also stores this information and, to some extent, processes it.
To do so, information has to be able to pass through the cell membrane that separates the cell from its environment. In neurons this membrane consists mostly of a protein-lipid/lipid-protein sandwich that is polarized so that proteins of one charge always face the inside of the cell and proteins of the opposite charge face the outside. As a result of this polarization there is an electrostatic gradient across the membrane that repels ions of one charge and attracts ions of the opposite charge. This creates a natural charge separation that is capacitive in effect, something that will be explained in the basic electrical theory section.
To get a message through the cell membrane, something has to stick out of the cell and either detect the presence of a chemical or conduct electricity into the cell. Proteins that stick through the cell membrane are called permease molecules. Neurons work mostly by letting these permease molecules trigger special openings in the membrane, sized and shaped to pass only specific ions either into or out of the cell: sometimes against the electrostatic gradient, increasing the charge separation, and sometimes with the gradient, reducing it.
Ions passing through the membrane create ionic currents that, like electrical currents, move charge across the membrane. As a result, the charge within the cell can either increase or decrease, giving each neuron a variable, the membrane potential, that determines how much charge is kept within the cell. With the membrane potential comes a relative difference in charge between the cell and its surroundings, a voltage that in turn creates an EMF, a force that tries to push ions out of the cell into the surrounding environment. If this EMF becomes too steep, the result is depolarization of the membrane and a mass migration of ions out of the cell and from the environment into the cell.
To protect the cell from losing all its nutrients and filling up with waste products from the external environment, the cell reacts by opening ion channels in the intact portions of its membrane and renormalizing the membrane potential, allowing the breach in the membrane to self-organize back into its polar form and thus protecting the cell.
This cycle creates a distinctive electrical spike, the action potential or "firing", that signals the depolarization of the membrane.
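The leaky integrate-and-fire model is the standard minimal caricature of this charge-up-and-reset cycle. The sketch below is textbook material rather than anything from the original posts, and the numerical parameters (time constant, rest, threshold, and reset voltages) are typical illustrative values, not measurements.

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire neuron: a minimal model of the
    charge-accumulation and firing cycle described above.

    The membrane potential v leaks back toward v_rest, is pushed up
    by input current, and when it crosses threshold the neuron
    'fires' and v snaps to a reset value, mimicking the
    depolarization/repolarization cycle.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + i_in) / tau  # leak plus input drive
        v += dv * dt
        if v >= v_threshold:               # depolarization threshold reached
            spikes.append(t)               # record the 'firing' event
            v = v_reset                    # membrane re-polarizes
    return spikes

# A constant drive produces a regular spike train.
print(simulate_lif([20.0] * 200))
```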
Communication happens through the management of these ion channels: different permease molecules activate different ion channels, and the permeases in turn are triggered by specific chemicals called neurotransmitters, which they can detect at very low concentrations.
Special areas of the cell called pre-synaptic patches secrete these chemicals into the extracellular fluid at special locations called synapses, where they diffuse across the gap between the cells and are detected by the permease molecules. The neurotransmitters are then taken back up to clear the fluid and stored in the pre-synaptic bud for later reuse.
The post-synaptic sensitive patch contains a segment of cell membrane specially built to detect specific neurotransmitters. Each synapse can thus be said to be inhibitory, excitatory, shunting, or, if it triggers the production of a secondary transmitter inside the cell, modulatory.
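To make the four-way classification concrete, here is a toy sketch of my own; the function name and the millivolt magnitudes are arbitrary choices for illustration, not values from the original post.

```python
def apply_synapse(v_membrane, kind, weight=1.0, v_rest=-65.0):
    """Return the post-synaptic membrane potential after one event
    at a synapse of the given kind (toy magnitudes in millivolts)."""
    if kind == "excitatory":
        return v_membrane + 5.0 * weight   # depolarize: push toward firing
    if kind == "inhibitory":
        return v_membrane - 5.0 * weight   # hyperpolarize: push away from firing
    if kind == "shunting":
        return v_membrane + 0.5 * (v_rest - v_membrane)  # drag back toward rest
    if kind == "modulatory":
        return v_membrane  # no direct voltage effect; alters internal cell state
    raise ValueError(f"unknown synapse kind: {kind}")

v = -60.0
for kind in ("excitatory", "inhibitory", "shunting", "modulatory"):
    print(kind, round(apply_synapse(v, kind), 1))
```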
The sensitive patch outlives the proteins it is made of, suggesting a membrane replacement mechanism that periodically trades old ion channels and permease molecules for new ones over the life of the membrane. This mechanism has been implicated in adjusting the number of ion channels per permease in the sensitive patch, a measure called the weight of the synapse, which allows the neuron to learn to favor synapses that are more active.
One hypothesis is that protein creation in the sensitive patch triggers an increase in the number of ion channels added to the patch, thus increasing the weight, while denaturing of the proteins as they are digested by the cell results in a loss of ion channels over time. Synapses that are less active therefore lose weight over time, allowing the nervous system to adjust to changing signals. A toy model of this idea follows.
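This is a sketch of the turnover hypothesis just described, not an established model; the growth and decay rates are made-up constants chosen only so the effect is visible.

```python
def update_weights(weights, activity, growth=0.05, decay=0.01):
    """Toy model of the channel-turnover hypothesis above.

    Active synapses gain ion channels (weight) as new proteins are
    inserted, while all synapses slowly lose channels as old proteins
    are digested, so inactive synapses drift back toward zero.
    """
    return [
        w + growth * a - decay * w   # insertion minus turnover loss
        for w, a in zip(weights, activity)
    ]

w = [0.5, 0.5, 0.5]
for _ in range(100):
    w = update_weights(w, activity=[1.0, 0.2, 0.0])
print([round(x, 2) for x in w])  # the most active synapse ends up heaviest
```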
The cell tends to be polarized, with signal-gathering elements on one side and signal-distributing elements on the other: dendrites and axons, respectively. Pre-synaptic buds tend to gather on axons and post-synaptic sensitive patches on dendrites, although both have been found on the soma, some dendrites carry pre-synaptic buds, and some axons carry sensitive patches, suggesting that this is just an average and that special conditions can apply.
Essentially, ACT is a two-part system: one section senses the environment, and the other applies rules to the environment as sensed and decides what action to take. It is based very heavily on the theory of production systems, so the rule base in the second half of the system is built from "production rules" (a minimal sketch of the idea follows below).
An open source version of it can be found at JACTR.org, which is meant to operate on the Eclipse Java Development Platform, an open source Java IDE.
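For a feel of the recognize-act loop at the heart of any production system, here is a generic sketch; it illustrates the concept only and is not jACT-R's actual API. The facts, rules, and the `run` helper are all my own invented examples.

```python
# Minimal production system: a working memory of facts plus
# condition -> action rules, matched and fired in a recognize-act cycle.

def run(facts, rules, max_cycles=10):
    facts = set(facts)
    for _ in range(max_cycles):
        fired = False
        for condition, action in rules:
            # Fire a rule when its conditions hold and it adds a new fact.
            if condition <= facts and action not in facts:
                facts.add(action)
                fired = True
        if not fired:        # quiescence: no rule produced anything new
            break
    return facts

rules = [
    ({"light is red"}, "stop"),
    ({"light is green", "road is clear"}, "go"),
]
print(run({"light is green", "road is clear"}, rules))  # -> includes "go"
```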