A Robot Succeeds in 100% Mirror Image Cognition
Junichi Takeno
1-1-1 Higashimita, Tama-ku, Kawasaki-shi, 214-8571 Kanagawa, Japan
Department of Science and Technology, Meiji University Graduate School
TEL +81-44-934-7454
Fax +81-44-934-7912
takeno@cs.meiji.ac.jp (juntakeno@gmail.com)
Abstract - Humans are capable of identifying their own image when reflected in a mirror. The mechanism behind this ability has never been explained. I created a program that achieved such awareness on a robot and conducted three experiments. The first was an experiment in which a self robot imitates its own image reflected in a mirror. In the second, the robot imitated another robot of the same type that was made to perform the same behavior. The last was an experiment in which two robots of the same type and the same functions imitate each other. In these experiments, the program calculated the coincidence rate of behavior between the self robot and the other robot. I found that, compared with the case in which the other robot behaved according to its own judgment, the coincidence rate was always higher in the case of the mirror image. Because the other robot in the second experiment moved according to commands from the self robot, it can be considered a part of the self, much as humans use their own hands and feet. From this result, the mirror image can be judged to "exist closer to the self than a part of the self" and can in fact be considered a "self." I regard the results of these experiments as showing that mirror image cognition by the robot succeeded completely, and as the "first example toward physically explaining the mirror image cognition capability of humans." This paper details the logical and physical explanations behind these results and presents several considerations and prospects derived from them.
Index terms: Conscious system, robot demonstration, mirror test, mirror image cognition, behavior and
cognition, self awareness, human consciousness, cognitivism.
I. INTRODUCTION
Humans can easily identify their own image when reflected in a mirror. This is a rather strange
capability because without the aid of something like a mirror humans are not capable of seeing
their own face directly. Humans often say that they know themselves when looking in a mirror.
This is, however, not an adequate explanation. Humans are said to become aware of their own
image reflected in a mirror when they are about two years old [1].
To investigate the existence of this awareness capability, G. Gallup, Jr. proposed the mirror test, which chimpanzees [2], orangutans, dolphins, Indian elephants, and magpies have all passed.
Among philosophers and psychologists, Jacques Lacan presented his “hypothesis of the mirror
stage” in which this phenomenon was considered to be an important base point in human growth
and development [3].
The phenomenon in which humans become aware that the image reflected in a mirror is their
own image is said to indicate the existence of self-consciousness. This is because humans make
up and dress themselves using a mirror. In other words, one can think that solving the problem of
awareness of self image in a mirror can lead to solving the problem of consciousness.
Any reference to consciousness, however, creates a difficult situation.
At present, there are two camps: those who do not acknowledge the existence of consciousness and those who do.
Many scientists in the former group take an engineering viewpoint, while many in the latter take a scientific viewpoint. This separation arises because the phenomenon of consciousness has not been described definitely; in other words, its definition is not clear.
At one extreme of the former position, some hold that consciousness is a subjective phenomenon that cannot be described mathematically and therefore cannot be accepted as existing. Others do not accept consciousness at present but do acknowledge various types of human recognition and their functions, and hold that consciousness will eventually be explained mathematically as an evolved, integrated connection of these functions (so-called emergence).
The latter group acknowledges the existence of human consciousness and tries to identify it in the mechanism of the human brain.
It is very clear, however, that treating the mechanism for identifying one's own image in a mirror as a problem of consciousness will cause heated debate both for and against. Nevertheless, I decided to pursue this investigation in a direction that had not been attempted much, that is, to consider the research on consciousness that is already known and to define human consciousness physically and mathematically. The reason why such an
approach has not often been taken by other researchers is that there is strong mental resistance to attempts to describe human consciousness physically and mathematically. This resistance stems from the strong belief that humans are fundamentally different from machines. In addition, the prospect of a major finding at the initial stage is doubtful, because a personal definition of consciousness is seen to differ greatly from universal truth.
However, if attempts to define consciousness physically and mathematically in a concrete manner are always put off, we cannot hope to take even a first step toward understanding the mechanism of human consciousness. We could use an evolutionary method that creates something like consciousness by combining recognition processes, and we may obtain various kinds of knowledge about the brain and the body through continued research in brain science, but the method for recognizing consciousness ultimately returns to the "problem of defining consciousness." In other words, even though consciousness has not yet been defined clearly, it is always necessary to keep asking "What is consciousness?" We are certain that this process is an important scientific method for understanding human consciousness.
Why should we promote an understanding of human consciousness?
Understanding human consciousness, that is, the mechanism of thought and action, would naturally be a very attractive achievement in itself, but I wish to place particular emphasis on the following three items.
First, it can contribute to understanding brain-related diseases, including schizophrenia, and to the discovery of methods of medical treatment. Second, it allows the development of methods to aid recovery from loss of consciousness caused by accidents. Third, it allows the development of artificial limbs that could be accepted as one's own by individuals who have lost limbs in accidents.
I installed a program in an existing small robot. The program was based on an architecture that I created from my subjective definition of consciousness. Although it was designed in a top-down manner based on my original idea, the program itself operates through both bottom-up and top-down processes. By driving the robot, the program not only provides a physical explanation of mirror image cognition but also explains most of the known properties of consciousness [4][5].
The robot imitates the behavior of another robot in front of it according to the program and
calculates the coincidence rate of the behavior imitations between itself and the other robot.
Among several experiments performed with the robot, three important experiments provide a completely physical explanation of the problem of mirror image cognition. This is a 100% success. These experiments succeeded on September 1, 2004 [4]. The details were presented on Discovery Channel TV (Web) in the USA on December 21, 2005 [6][7]. Three years have passed since that presentation, and I feel that the importance of this success has become increasingly clear. When writing this paper, I reconsidered the experiments and the results by examining the various comments that have been made during this period.
Herein, I declare the 100% success of the mirror image cognition achieved by a conscious robot.
At the end, I also present several topics suggested by the results obtained from the experiments.
II. WHAT IS MIRROR IMAGE COGNITION?
Mirror image cognition refers to the phenomenon in which humans are aware of their self-image in a mirror. When looking at several images in a mirror, I can identify my own (Figure 1).
I think other people can also identify their own image in a mirror just as I can, because they use a mirror to get dressed or put on makeup.
Many scientists say that this phenomenon is not a topic for scientific study because it is considered a subjective matter. But scientific research is necessary to answer the question, "Why can I identify my own image in a mirror?"
I call this problem "the mystery of mirror image cognition."
I have been trying to solve this mystery using a mechanical system, a robot.
Unlike the many mysteries of humans and animals, all parts and details of a robot are scientifically demonstrable, and the processes involved can be understood universally by humans.
Figure 1. Where am I? (Horta house in Brussels)
I am building a robot that is capable of scientifically demonstrating mirror image cognition.
If we could build such a robot, we would be able to clarify the mirror image cognition of humans by analyzing the robot in detail.
Gallup's mirror test was devised to assess whether animal subjects possess the high-level capability of recognizing their own self image [2].
This mirror test has reportedly been passed by chimpanzees, orangutans, dolphins, Indian elephants, and magpies.
But it is impossible to conduct a scientific investigation into how these animals attained their capability for self-recognition of mirror images.
This is also impossible with humans.
With a robot, however, scientific investigation is possible.
I believe that a demonstration by a robot will enable us to elucidate the self-awareness of humans and also to scientifically demonstrate the existence of consciousness in humans.
I believe the robot is the mirror for scientifically showing the existence of the "I."
III. DEVELOPMENT OF A ROBOT TO DEMONSTRATE
MIRROR IMAGE COGNITION
I believe that two methods are available: an engineering-based approach and a conscious system structure.
The former is an attempt to attain the goal through engineering without elucidating human consciousness. Its proponents have claimed that success in the mirror test has thereby been achieved. It is absolutely impossible, however, for such an approach to account for the human functions of cognition and consciousness (K. Gold [8], P. Haikonen [9]).
For example, a robot that can recognize family members and their smiling faces can clearly be created without employing the human functions of cognition and consciousness. The point I wish to make here is that the apparent functions of human cognition and consciousness can most certainly also be realized as a set of recognition and drive programs that has no relationship with the "consciousness and cognition" shown in this example. Nevertheless, creating a robot with the capability of mirror image cognition is a very difficult task when the points described later are considered (see Section IV.b).
I call this an engineering-based approach. Consequently, I think it is natural that a robot created using this approach cannot explain human "cognition and consciousness" at all. This type of study may well be useful, but it does not go to the root of my research theme.
In the latter, consciousness, a subjective phenomenon that occurs inside the self, is treated as a physical phenomenon (J. Tani [10], M. Kawato [11], I. Aleksander [12]). One then tries to build subjective functions into a robot. The objective is to reveal the truth objectively and physically through a demonstration performed by the robot.
This technique is a form of scientific positivism.
My approach is the latter.
Compared with the engineering-based approach, success with the conscious system structure often becomes a breakthrough. This is because the former produces only a part of all the functions, like a component, whereas the latter has the possibility of explaining the principle of consciousness as a whole. Specifically, while the former can produce almost no expansive hypotheses for the future, the latter has a greater possibility of comprehensively solving unsolved problems and producing more abundant hypotheses. When choosing a method as a scientist, I choose the latter, just as I acknowledge the scientific rationality of the heliocentric theory of Nicolaus Copernicus (1473-1543) over the geocentric theory.
IV. STAGES IN THE DEVELOPMENT OF A CONSCIOUS ROBOT
I will try to construct a conscious machine.
Although consciousness is a subjective matter, we deem it to be a physical phenomenon and construct it on a mechanical system.
Mechanical systems, such as robots, allow us to conduct objective and scientific research and observation. They offer a base for the scientific observation of subjective phenomena.
We will establish the phenomenon of consciousness as an objective reality using mechanical systems.
The stages are:
(a1) Define the meaning of “consciousness.”
(a2) Define a concept model based on the definition of consciousness.
(a3) Replace the concept model with a neural model.
(a4) Incorporate the neural model into a robot.
(a5) Have the robot achieve mirror image cognition of its self.
a. Define the meaning of “consciousness.”
Consciousness can be defined by referring to the widely available base of knowledge in the fields of philosophy, psychology, brain science, and neurology.
(b1) Duality of consciousness.
Humans can be aware that "they are aware" [13]. If duality is achievable, then multiplicity is also achievable. For triplicity and higher orders of multiplicity, the accompanying bodily feeling is missing as a subjective sensation, so we treat them as the result of symbol processing, like the "concept of infinity" (Takeno's Triplicity Hypothesis).
(b2) Consciousness has embodiment.
Consciousness is inseparable from any physical body response [14].
(b3) Consciousness is closely related to imitative behavior.
We agree with the theory that humans recognize the existence of the self and others through
imitative behavior. By repeating this learning behavior, humans establish their own self and
develop their social nature [15][16].
(b4) Discovery of mirror neurons [17].
I made a definition of consciousness using these considerations. I found that an important meaning of consciousness is to be aware that one is behaving, and that to think is identical to behaving.
Finally, I settled on the definition: "consistency of cognition and behavior is the origin of consciousness."
b. Important points when considering the development of a mirror image cognition robot.
Since birth, no human has ever seen his or her own face. Humans possess no prior information
about their own image, in particular their own face.
The first: this means that, at the outset, a robot used for studying mirror image cognition should
never be given any complete information about itself.
Humans cannot discern their own self image in a mirror immediately after birth. But they can do
so at about 2 years of age.
The second: to solve this mystery of humans, we need to account for the process of development
of cognition from the stage of being unable to discern one’s own self image in a mirror to the
stage of being able to do so.
The third: we should also remember that the information reflected in a mirror is never perfect. The reflectivity and flatness of the mirror are never exactly 100%, so even if accurate self-identifying information were available, the self-image information reflected back from a mirror could not, in theory, match it exactly.
And the fourth: the functions enabled by the computer programs embedded in the robot must be able to account for facts that are generally known to be workings of human consciousness. These facts include, for example, self-awareness, the multiplicity of consciousness, and consciousness of others.
V. CONSCIOUS ROBOT AND MIRROR IMAGE COGNITION EXPERIMENTS
Our robot is a small commercial robot, the Khepera II. We incorporated a neural network program into the robot. The program uses recurrent networks, called MoNADs, as basic modules (Figure 2).
In the experiments, these networks are arranged hierarchically with three MoNADs (Figure 3).
The MoNAD system has significant merits.
The system can solve the Symbol Grounding Problem [18] because it can learn the relation between an environment and its symbol representation.
The system can solve the Binding Problem [19] because it can cycle through cognition and behavior.
The MoNAD operating mechanism performs a neural calculation of the current behavior and the current cognition representation using external information from the world together with the behavior and the cognition representation of one step prior. The derived information is used recursively. Relying on the information of one step prior means that past information is used to determine the current behavior (use of experience).
Figure 2. Module of Nerves Advanced Dynamics (MoNAD). A MoNAD has two circulations of information, one through the group Y and one through the group Z, both connecting to the common nerve cell group K. One is somatic, and the other is related to representation.
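To make the recurrent calculation described above concrete, the following is a minimal sketch of a MoNAD-like step, written under assumed weight shapes and a tanh nonlinearity: the current behavior and the current cognition representation are computed from external input together with the behavior and representation of one step prior, via a common group K, and the results are fed back for the next step. It illustrates the idea only and is not the author's original program.

import numpy as np

class MoNADSketch:
    # Illustrative recurrent module: external input plus the previous behavior and
    # cognition representation drive a common group K, which produces the current
    # behavior (somatic path) and the current cognition representation.
    def __init__(self, n_input, n_common, n_behavior, n_repr, seed=0):
        rng = np.random.default_rng(seed)
        n_total = n_input + n_behavior + n_repr
        self.w_k = rng.normal(scale=0.1, size=(n_common, n_total))            # inputs -> K
        self.w_behavior = rng.normal(scale=0.1, size=(n_behavior, n_common))  # K -> behavior
        self.w_repr = rng.normal(scale=0.1, size=(n_repr, n_common))          # K -> representation
        self.prev_behavior = np.zeros(n_behavior)
        self.prev_repr = np.zeros(n_repr)

    def step(self, sensory_input):
        # Combine external information with the behavior and representation of one step prior.
        x = np.concatenate([sensory_input, self.prev_behavior, self.prev_repr])
        k = np.tanh(self.w_k @ x)                  # common nerve-cell group K
        behavior = np.tanh(self.w_behavior @ k)    # current behavior
        representation = np.tanh(self.w_repr @ k)  # current cognition representation
        # Feed the results back recursively (use of experience).
        self.prev_behavior, self.prev_repr = behavior, representation
        return behavior, representation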
Using the MoNADs, the robot mimics the behavior of a second 'partner' robot, or of its own image in the mirror.
The robot recognizes the behavior of the self and the 'partner' simultaneously and calculates the success rate (the coincidence rate) of its imitative behavior.
The success rate was about 70% in our experiments [4].
Although the success rate did not reach 100%, we came to the conclusion that the robot discovers its own mirror image 100% physically.
We call this robot, equipped with hierarchical MoNAD networks, a "conscious robot" because it achieved mirror image cognition of its self.
Figure 3. The networks for the experiments are arranged hierarchically with three
MoNADs. The imitation MoNAD interprets the behavior of the other and instructs
the motors to behave in the same way. The distance MoNAD measures the distance
to the other. It instructs the motors to withdraw if the distance is small and to
advance if the distance is large. The settlement MoNAD restricts the behavior of
related subordinate MoNADs.
a. The conscious robot.
I incorporated the conscious system into the robot.
Three kinds of MoNADs were used in the conscious system (Figure 4).
- Imitation MoNAD
- Distance MoNAD
- Settlement MoNAD
While repeating the imitative behavior, the consciousness system repeats the calculation to
cognize the behavior of the self and the other simultaneously.
Figure 4. Structure of the mirror image cognition robot. While repeating the imitative
behavior, the consciousness system repeats the calculation to cognize the behavior of the
self and the other simultaneously. The blue LED lights up for each successful imitation as
determined by calculation. The coincidence rate of the imitation is recorded. When the
coincidence rate exceeds a threshold value, the other is interpreted as the self.
The blue LED lights up for each successful imitation as determined by calculation.
The coincidence rate of the imitation is recorded.
When the coincidence rate exceeds a threshold value, the other is interpreted as the self.
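The bookkeeping just described can be sketched as follows; the class name, the behavior arguments, and the use of a simple running average are my assumptions, with the 60% threshold taken from the conclusions reported later in this paper. This is an illustration of the described logic, not the robot's actual control code.

class CoincidenceTracker:
    # Illustrative bookkeeping: a per-step LED signal for each successful imitation,
    # a running coincidence rate, and a threshold-based self/other interpretation.
    def __init__(self, threshold=0.6):   # 60% threshold, as reported for robot Rs
        self.threshold = threshold
        self.steps = 0
        self.successes = 0

    def record(self, self_behavior, other_behavior):
        # Record one imitation step; return True (blue LED on) when the behaviors agree.
        self.steps += 1
        led_on = (self_behavior == other_behavior)
        if led_on:
            self.successes += 1
        return led_on

    def coincidence_rate(self):
        return self.successes / self.steps if self.steps else 0.0

    def interpreted_as_self(self):
        # The other is interpreted as the self once the rate reaches the threshold.
        return self.coincidence_rate() >= self.threshold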
The imitation MoNAD interprets the behavior of the other and instructs the motors to behave in
the same way (a simple reasoning system).
The distance MoNAD measures the distance to the other. It instructs the motors to withdraw if
the distance is small and to advance if the distance is large (a simple feelings system).
The settlement MoNAD restricts the behavior of related subordinate MoNADs (a simple association system). This MoNAD is not a 'central control tower' (a homunculus) in this system because its behavior is determined by information from the subordinate MoNADs.
The LED controller compares the representation of the imitation MoNADs and lights up the
LED when the behaviors of the self and the other agree.
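As a rough sketch of how the three modules might be wired together (the arbitration rule, the numeric distance limits, and the behavior labels are my assumptions for illustration, not the paper's specification), the settlement module below merely resolves the proposals handed up by its subordinates rather than acting as a central controller.

def imitation_module(observed_behavior):
    # Propose the same behavior as the observed other (simple reasoning system).
    return observed_behavior

def distance_module(distance_cm, near=5.0, far=15.0):
    # Propose withdrawing when too close and advancing when too far (simple feelings system).
    if distance_cm < near:
        return "withdraw"
    if distance_cm > far:
        return "advance"
    return None  # no preference at a comfortable distance

def settlement_module(imitation_proposal, distance_proposal):
    # Restrict the subordinate proposals (simple association system); assumed rule:
    # a distance preference takes priority, otherwise the imitation proposal passes through.
    return distance_proposal if distance_proposal is not None else imitation_proposal

# Example step: the other is seen advancing at a comfortable distance,
# so the imitation proposal is carried out and the behaviors coincide.
command = settlement_module(imitation_module("advance"), distance_module(10.0))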
Figure 5. Experiment 1: The robot Rs imitates the action of its own image Rm as
reflected in a mirror. You can watch the video (v1) until the year 2013.
b. The experiments
Experiment 1: The robot Rs imitates the action of its own image Rm as reflected in a mirror
(Figure 5). The infrared reflectance of the mirror used in our experiments was 98%.
(Reflectance of mirrors typically used in daily life is normally 85%.)
(e1-1) The self robot Rs is equipped with the conscious system.
(e1-2) The self robot Rs performs imitative behaviors relative to its mirror image Rm.
Figure 6. Experiment 2: This experiment is conducted in an environment where the
other robot Rc is controlled completely via cables from the self-robot Rs to imitate
the behavior. You can watch the video (v2) until the year 2013.
Experiment 2: This experiment is conducted in an environment where the other robot Rc is controlled completely via cables from the self robot Rs to imitate the behavior (Figure 6).
(e2-1) The other robot Rc is placed in front of the self robot Rs without a mirror set in between
them. Robot Rc is physically almost identical to robot Rs.
(e2-2) Both robots are connected by control cables. Commands are transmitted through the
cables to make the other robot behave in the same way as the self robot.
(e2-3) The other robot is equipped with a software program of the simple reflex system to
implement the given command.
(e2-4) The self robot imitates the behavior of the other robot Rc.
Figure 7. Experiment 3: The robot Rs, equipped with the same hardware and
software, imitates the other robot Ro. Both robots repeatedly imitate each other.
Experiment 3: The robot Rs, equipped with the same hardware and software, imitates the other
robot Ro. Both robots repeatedly imitate each other (Figure 7).
(e3-1) The cables that make the other robot behave in the same way as the self robot are removed
from the robots.
(e3-2) The reflex system software is removed from the other robot Rc, and the same conscious system as that of the self robot Rs is incorporated. With this, the self robot Rs and the other robot Ro are exactly the same in terms of both hardware and software, apart from their individual differences.
(e3-3) The self robot and the other robot imitate each other.
c. Observation of results of experiments
Coincidence rate of the mirror image robot Rm is about 70%.
Coincidence rate of the controlled robot Rc is about 60%.
Coincidence rate of the other robot Ro is about 50%.
Each of these values fluctuated without intersecting the others.
Figure 8. Increasing complexity of other robots: Rm is more complex than Rs (a)(b).
Rc is more complex than Rm (c)(d)(e). Ro is the most complex (f).
VI. WHY DO COINCIDENCE RATES DIFFER DEPENDING ON THE ROBOT?
All the robots used in our experiments were the same commercial model, so all of them originally had nearly identical physical characteristics and functional specifications.
Robot Rc used in Experiment 2 was reused as robot Ro in Experiment 3.
I found that the differences in coincidence rate arise from differences in the complexity of each partner robot. This complexity is determined by the physical properties and functional specifications of each robot (Figure 8).
Table 1 Summary of the complexity.
Rs vs. Rm: The physical properties of robots Rm and Rs were identical because robot Rm is simply the mirror image of robot Rs. Rm is, however, more complex than Rs because of the influence of the mirror reflectivity (a), which never reaches 100%, and of external disturbances to the infrared sensor (b).
Rm vs. Rc: The other robot Rc is physically more complex than the mirror image robot Rm. Although Rc, unlike Rm, poses no problem of mirror reflection, it has more complexity due to the different friction (c) that occurs in the actual movement of the robot on the floor surface, the different personality of the robot (d) resulting from the slightly different characteristics of its motors and sensors, and the mounting of the simple reflex system (e). Consequently, its physical complexity can be considered larger in the overall system.
Rc vs. Ro: The other robot Ro is functionally more complex than the other robot Rc.
Ro is more complex than Rc because it is equipped with the conscious system (f), which is a more complex program than the simple reflex system. In other words, the difference between Rc and Ro can be considered to be caused principally by the increased functional complexity.
An overview of the complexity is given in Table 1.
VII. CONCLUSIONS
The cable-controlled robot Rc can be considered to be a part of the self robot Rs because it was
connected by cables and moved according to instructions from Rs.
According to the results of our experiments and physical observations, the success rate of the
mirror image robot Rm was always higher than that of the cable-controlled robot Rc.
Therefore, I conclude that the robot Rs decides that the mirror image Rm is a part of the self, controlled by the self just like the robot Rc.
According to our experiments, the self robot Rs determines whether the other robot is the self or
the other based on the behavior coincidence rate (success rate). The threshold is 60% for the
robot Rs.
Specifically, with a success rate of 60% or above, the self robot judges that the other is the self.
In other words, the robot's judgment of self or other is based upon the behavior coincidence rate of a 'part of the body' of the self.
Therefore we conclude that, on the condition that all the robots used in our experiments have the
same normalized functional specifications, a 100% cognition rate has been achieved.
VIII. INVESTIGATIONS AND PROSPECTS
a. An elucidation.
The mirror image robot Rm is closer to the self robot Rs than robot Rc, which is a part of the self (from the results of our experiments).
Humans sense their mirror image as a part of the self (Self-Body Theory).
In other words, I found a physical meaning in the fact that humans can recognize that an image
in a mirror is their own self.
According to these investigations, the self image in the mirror is an other that is separated from the self. It is, however, a special 'other' that generates the sense of being part of the robot's own body.
It is not that the LED lights up because the behavior coincidence rate between the self robot and its mirror image has reached 100%. Rather, the robot recognizes that the mirror image is closer to the self than being just a part of the self.
In other words, the robot's recognition that the mirror image is closer to the self than a part of the self has reached 100%.
We have thus solved the mystery: the robot ambiguously recognizes its mirror image, and the
mirror image is felt to be a part of the robot’s own body.
b. The mirror box therapy.
The human brain can feel the existence of a lost limb.
When a person loses a limb, for instance in an accident, there is sometimes a feeling that the limb
still exists. This is called the phantom limb phenomenon.
Phantom limbs may be accompanied by ‘phantom pain.’
Dr. V.S. Ramachandran, an American neurologist, has successfully eliminated patients’ phantom
pain using his mirror box.
My theoretical description that "the mirror image of the self is part of the self-body" provides physical grounds for his mirror box therapy.
c. The mirror stage.
This investigation is also a physical demonstration (Self-Body Theory) of the mirror stage hypothesis introduced by the French psychoanalyst Jacques Lacan (1901-1981).
The mirror stage hypothesis asserts that infants, at an early stage in the development of their neural systems, establish the self in stages as they recognize their mirror image and become aware of their integrated physical body.
This is because, in my theory, in order to "cognize the self image using the cognition results for self behavior and the behavior of others," that is, to succeed in mirror image cognition, cognition of the self and of others is necessary in advance. The mirror stage can be considered the stage in which the behavior of the self and the behavior of others come to be cognized separately and the relationship between self and others (the meanings of self and others) comes to be cognized.
The results of our experiments using robots for mirror image cognition support the mirror stage
hypothesis of Lacan in that infants become aware of the self using their mirror images and
develop cognition.
d. Can the self robot discriminate itself from any other robots?
No, it can’t.
If the other possesses performance that physically exceeds the capability of the self (mobility, sensing ability, etc.), the robot will determine that the other is a part of the self.
In truth, no conceivable super-robot can exceed the performance of one's own mirror image.
This observation provides physical grounds for believing that artificial limbs exceeding the capacity of the self, even though they are not one's own real living limbs, are a real part of the self. Namely, the artificial limbs can be judged to be a real part of the self (Artificial Limb Hypothesis).
This hypothesis will become welcome news to persons who have lost limbs and must live their
lives with artificial limbs. This is because the hypothesis provides a physical theory in which an
artificial limb is “accepted by the brain as one’s own limb.”
e. Mysteries of the illusions of reality
From the result of Experiment 1, we may theoretically say that the behavior coincidence rate of a
robot cannot reach 100% due to various natural interferences.
This leads to the hypothesis that human recognition is always ambiguous (Ambiguous
Recognition Hypothesis).
As proved in experiments, two-point discrimination on the skin of a human is not always
successful depending on location [20].
You cannot always discern the sex of humans you might see walking along the street.
Doctors cannot always make an accurate diagnosis of internal disorders.
Although this hypothesis is already considered to be true, the following mystery remains.
The mystery is that humans feel ‘certainty of their existence’ despite any ambiguous senses.
Examples are the phenomena of phantom limbs and phantom pain.
These phenomena are called "illusions of reality."
The human recognition function is built from biological machines.
Because machines are involved in the recognition function of humans, various interferences from
the external world (including physical interferences from the human body) affect the process of
recognition in the brain, and thus the result of recognition is always ambiguous, both
theoretically and physically.
Nevertheless, the brain never fails to realize the existence of reality, even when based upon such
ambiguous recognition.
This mystery is called the illusion of reality.
f. Considering the human brain from the mirror image cognition robot
The MoNAD module that I proposed can explain many phenomena of human consciousness. This is a natural outcome, because my definition of consciousness is derived from knowledge of the phenomena of human consciousness. According to scientific knowledge, the human brain is composed of about 100 billion cells; information is held in each cell, is output from the cells, and is passed from one brain cell to another. To some extent, information is also passed from one part of the brain to another for communication and is circulated according to certain rules. For example, it is known that information passes from the body through the spinal cord and enters the occipital lobe from the lower part of the brain, is processed from the occipital lobe to the parietal lobe and from the parietal lobe to the frontal lobe, is exchanged between the frontal lobe and the center of the brain, and flows from the center of the brain back to the body [21]. Although the possibility that the function of human
consciousness is influenced by unknown substances remains, I think that we should try to
identify the function of consciousness using only scientific knowledge that is already known. In
other words, I estimated that human consciousness was generated by not only information
circulating in the brain itself but also the circulation of information between the brain and the
body. I estimated the existence of the MoNAD from the circulation of such information and
judged that human consciousness may be physically explained using it. Since the robot that used
this MoNAD had successful mirror image cognition, it is natural to think that the MoNAD
structure of the brain can provide the first step toward physically explaining human
consciousness (Brain-MoNAD Hypothesis).
References
[1] B. Amsterdam, "Mirror self-image reactions before age two", Developmental Psychobiology, Vol. 5, Issue 4, pp. 297-305, John Wiley & Sons, 1972.
[2] G. G. Gallup Jr, “Chimpanzees: Self-recognition”, Science 167: 86–87, 1970.
[3] J. Lacan, “Ecrits”, W. W. Norton & Company, October 1982.
[4] J. Takeno, K. Inaba, T. Suzuki, “Experiments and examination of mirror image cognition using a small robot”,
The 6th IEEE International Symposium on Computational Intelligence in Robotics and Automation, pp.493-498,
CIRA 2005, IEEE Catalog: 05EX1153C, ISBN: 0-7803-9356-2, June 27-30, Espoo Finland, 2005.
[5] J. Takeno, “The Self Aware Robot”, HRI-Press, August 2005.
[6] http://www.rs.cs.meiji.ac.jp/Takeno_Archive/DiscoveryNewsAwareRobot211205.pdf (http://dsc.discovery.com/news/briefs/20051219/awarerobot_tec_print.html)
[7] http://www.rs.cs.meiji.ac.jp/Takeno_Archive.html
[8] P. Michel, K. Gold, B. Scassellati, “Motion-Based Robotic Self-Recognition”, The Proceedings of 2004
IEEE/RSJ International Conference on Intelligent Robots and Systems: pp.2763–2768, 2004.
[9] P. O. A. Haikonen, "Reflections of Consciousness: The Mirror Test", AAAI Symposium, Washington DC, 2007 (http://www.consciousness.it/cai/online_papers/haikonen.pdf).
[10] J. Tani, “On the dynamics of robot exploration learning”, Cognitive Systems Research pp.459-470, 2002
[11] M. Kawato, “Using humanoid robots to study human behavior”, IEEE Intelligent Systems: Special Issue on
Humanoid Robotics, Vol.15, pp.46-56, 2000
[12] I. Aleksander, “Impossible Minds, My Neurons, My Consciousness”, Imperial College Press, 1996
[13] E. Husserl, “The Essential Husserl: Basic Writings in Transcendental Phenomenology”, Indiana University
Press.
[14] M. Merleau-Ponty, "Phénoménologie de la perception", Gallimard, 1945.
[15] A. N. Meltzoff, M. K. Moore, “Imitation of facial and manual gestures by human neonate”, American
Association for the Advancement of Science, Vol. 198, pp. 75-78, 1977.
[16] M. Donald, “Origin of the Modern Mind”, Harvard University Press, Cambridge, 1991.
[17] V. Gallese, L. Fadiga, G. Rizzolatti, "Action recognition in the premotor cortex", Brain 119, pp. 593-600, 1996.
[18] S. Harnad, "The Symbol Grounding Problem", Physica D 42, pp. 335-346, 1990.
[19] A. Revonsuo, J. Newman, “Binding and Consciousness”, Consciousness and Cognition 8, pp.123-127, 1999.
[20] S. I. van Nes, C. G. Faber, et al., "Revising two-point discrimination assessment in normal aging and in patients with polyneuropathies", Journal of Neurology, Neurosurgery, and Psychiatry, Vol. 79, pp. 832-834, 2008.
[21] R. Carter, "Consciousness", Weidenfeld & Nicolson - The Orion Publishing Group Ltd, p. 212, 2002.
[v1] http://www.rs.cs.meiji.ac.jp/Robot_Mirror_Image_Cognition.VOB
[v2] http://www.rs.cs.meiji.ac.jp/Part_of_Body.VOB