Acta Astronautica
Research paper
Does artificial intelligence dream of non-terrestrial techno-signatures?
Gabriel G. De la Torre
Neuropsychology and Experimental Psychology Lab, University of Cadiz, Campus Rio San Pedro, Puerto Real, 11510, Spain
ABSTRACT
Today, we live in the midst of a surge in the use of artificial intelligence in many scientific and technological applications, including the Search for Extraterrestrial Intelligence (SETI). However, human perception and decision-making are still the last part of the chain in any data analysis or interpretation of results or outcomes. One of the potential applications of artificial intelligence is not only to assist in big-data analysis but also to help discern possible artificiality or oddities in patterns of radio signals, megastructures or techno-signatures in general. In this study, we review the comparative results of an experiment based on geometric pattern reconnaissance and a perception task, performed by 163 human volunteers and an artificial intelligence convolutional neural network (CNN) computer vision model. To test the model, we used an image of the famous bright spots in the Occator crater on Ceres. We wanted to investigate how the search for techno-signatures or oddities might be influenced by our cognitive skills and consciousness, and whether artificial intelligence could assist in this task. This article also discusses how unintentional human cognitive bias might affect the search for extraterrestrial intelligence and techno-signatures compared with artificial intelligence models, and how such artificial intelligence models might perform in this type of task. We discuss how searching for unexpected, irregular features might prevent us from detecting other nearside or in-plain-sight rare and unexpected signs. The results strikingly showed that a CNN trained to detect triangles and squares scored positive hits on these two geometric shapes, just as some human participants did.
1. Introduction
Among its exobiology funding programme aims for 2019, NASA included a non-radio techno-signature option, while China has built the biggest radio telescope on Earth. However, the old questions remain, as does the silence. The SETI strategy, primarily based on radio signals, has been unsuccessful for decades. Although the goal and spirit remain in good shape, it is the approach itself that may be suffering from tunnel vision, otherwise known as the cosmic gorilla effect [1]. In the 1990s, psychologists Simons and Chabris from Harvard popularised an experiment in which half the observers missed a man in a gorilla costume crossing the scene because they were busy counting how many ball passes men in white t-shirts performed. This was due to what is now called the inattentional blindness effect [2]. Our anthropomorphic, or somehow still Ptolemaic, scientific view of the cosmos is due to our brain structure, senses, evolution and mind. We can only grasp a very limited portion of physical reality, and the scale of the universe is itself mind-challenging. If we agree that science has evolved dramatically since the late '50s and '60s, when the radio signal approach was conceived, then now, in a digital, quantum era, it may be time for us to
update this view with other possibilities. Our concept of what an extraterrestrial is could be wrong. We suppose that 'they' have to travel vast distances, or send radio signals, laser beams or space probes through the immense cosmos. However, other possibilities that challenge the human mind may exist. Independently of the methods or technology they may use, we could focus on the traces, techno-signatures or bio-signatures that they might leave behind. We propose that they could leave behind signs or forms of techno-signatures other than radio signals that we might be able to detect if we look in the right direction and pay attention. Although this may sound obvious, it can become really tricky and confusing.
We humans conceive and model reality to fit our own convenience, experience and concepts, extraterrestrial intelligence being no exception. Hollywood movies influence our lives and shape our view of extraterrestrial civilisations and intelligence, from the pet-like ET to militarised Star Wars and predatory aliens, all very naïve and humane in essence. Frequently, when we, including scientists, talk about extraterrestrials, we tend to see them as somehow akin either to us or to robots, using radio waves and numbers, sending blueprints as an act of goodwill or even living around Dyson sphere-like megastructures [3].
The truth could be quite different. More advanced civilisations may simply be incomprehensible to us. Known classifications of intelligent civilisations to date consider energy consumption a key factor. Such classifications may represent a short-range approach to the problem. Most probably, advanced civilisations will be beyond our technologically comprehensible horizon; they may dominate dark matter or unknown energies and may be multidimensional, but we really do not know.
The fact is that silence in our spectrum persists. Among the reasons for this silence we can mention an array of factors, including (1) the wrong technological approach on our side, (2) human brain/consciousness and (3) 'their' nature and intentions [1]. It is interesting to note that to date some efforts and attention have been directed at factor
https://doi.org/10.1016/j.actaastro.2019.11.013
Received 22 August 2019; Received in revised form 6 October 2019; Accepted 9 November 2019
E-mail address: gabriel.delatorre@uca.es.
Acta Astronautica 167 (2020) 280–285
Available online 15 November 2019
0094-5765/ © 2019 IAA. Published by Elsevier Ltd. All rights reserved.
1, but little or none at factors 2 and 3. For example, in the new nomenclature proposed for SETI, an old term, techno-signature, has been brought back; it involves the detection of radio signals, lasers, atmospheric pollution, radiation leakage from megastructures, or sidereal installations such as Dyson spheres and Shkadov thrusters [4] with the power to alter the orbits of stars around the Galactic Center, etc. Some
authors have postulated the possibility of previous ancient civilisations indigenous to our solar system [5] having left behind some techno-signatures that we might find. However, if we look for these techno-signatures, artificial structures or signs, our minds can easily become confused when confronted with the unexpected. The question is whether our minds are ready and capable of finding and understanding such techno-signatures, or whether we need to wait for our consciousness to be able to apprehend and comprehend these phenomena. In the meantime, perhaps, we could get some help with this task from artificial intelligence.
Searching for unexpected, infrequent elements may prevent us from detecting other nearside infrequent unexpected signs; or as Aristotle put it, “persons do not perceive what is brought before their eyes, if they are at the time in deep thought, or in a fright, or listening to some loud noise”. A real possibility is that factor 2, human brain functioning or our level of consciousness, is limiting our search and our understanding of the universe and of more advanced intelligent life living within it. Our understanding of reality is limited and is determined by the circuitry of our brains, and this puts us in a difficult position when confronted with the unknown and unexpected, such as another advanced intelligence or other cosmological aspects. Factor 3 is also relevant because some advanced intelligences may simply prefer to remain undetected without renouncing interaction. This strategy could be interpreted as a form of ecological/naturalistic research approach, much as we sometimes adopt when interacting with other species in nature. Another possibility is intentional avoidance.
Artificial intelligence (AI) models have been used in various scientific fields to improve prediction power and forecasting accuracy over older methods. Within these AI models, computer vision helps with tasks such as pattern recognition and the classification and reclassification of events or images. Computer vision applications include medicine, agriculture, safety and astronomy [6,7]. In this paper, for our first aim we focused on factor 2, testing our consciousness and cognitive modus operandi against AI by way of a visual perception experiment using a convolutional neural network (CNN) AI computer vision model.
A second aim of this study was to test whether a trained AI CNN
model could help to discover new patterns where such patterns may
have been overlooked, and to categorise and classify the data free of
human influence and cognitive limitations in the least biased way
possible. Possible outcomes of this type of study could be new AI algorithms resulting in standardised tools and methodologies applicable to different types of techno-signature search (radio, image, etc.). These AI models could provide probabilistic data about what type of signal or pattern might be detected and its potential artificiality characteristics.
2. Methods
A comparative visual perception and pattern recognition experiment was performed by humans and by an AI CNN computer vision model. For the experiment we used a section of the NASA Dawn probe image PIA21925 (Fig. 1), specifically a section from the Vinalia Faculae region in Ceres's Occator crater, i.e. part of the popular bright spots.
This particular image was chosen for two main reasons: first, because of the interest of astronomers and the general public in the bright spots and their possible origin, making it a good, controversial candidate to elucidate; and second, because in a pre-pooled selection of planetary images, including several from Mars and Ceres, many volunteers perceived geometric patterns, particularly square patterns, in it. In a popular NASA online questionnaire surveying public opinion on the origin of the bright spots, run before the Dawn spacecraft arrived and closer views of the formations on Occator could be studied, 10% of respondents opted for volcano, 10% geyser, 6% rock, 30% ice and 8% salt deposit, while 38% opted for 'other'. The most likely scientific option was salt formation.
According to NASA, this ‘image was obtained by NASA's Dawn
spacecraft in its second extended mission, from an altitude as low as 21
miles (34 km). The contrast in resolution obtained by the two phases is
visible, reflected by a few gaps in the high-resolution coverage (blurry
parts). This image is superposed to an equivalent scene acquired in the
low-altitude mapping orbit of the mission from an altitude of about 240
miles (385 km)’ (https://solarsystem.nasa.gov/resources/1095/mosaic-
of-the-vinalia-faculae-in-occator-crater/).
2.1. Human perception task
For the task testing human performance, we recruited a sample of 163 participants. All participants were adults (40 men and 123 women) with a mean age of 22.29 years (SD 3.48), and all were volunteers. None of them had training in astronomy, a related specialty or expert satellite imagery analysis. The task consisted of three stages. In the first stage, participants were given an example in which they were shown a satellite picture with a clear geometric form on it (a square) and were asked to draw over any geometric patterns they thought they could detect. After reading the instructions they had to turn the page to where the PIA21925 section of Vinalia Faculae (Occator crater on Ceres) was shown (Fig. 2). Once they felt they had completed that task they could go on to page 3, where two questions had to be answered: one asked whether they had detected a big triangle pattern in the picture (a traced image including this triangle was shown; see Fig. 2); the other asked whether they could now make out the traced triangle if they had not done so previously.
2.2. Convolutional Neural Network Model
Fig. 1. PIA21925 image, Occator crater, Ceres. NASA Dawn probe. Inset upper right: section of the original PIA21925 image used for the experiment. Original image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/PSI.

Fig. 2. Test image (section of PIA21925) (left), example of participant's response (center) and traced image for final questionnaire (right). Original image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/PSI.

Fig. 3. Convolutional Neural Network (CNN) architecture used in the study.

Fig. 4. Sample images used for training the Artificial Intelligence (AI) computer vision Convolutional Neural Network (CNN) model. Upper row: images with geometric properties, including artificially generated images representing shapes in satellite imagery. Lower row: background images, non-targeted geometric figures and unrelated imagery.

The computer vision model used to analyse the image was an AI model based on Convolutional Neural Networks (CNNs) (Fig. 3) [8], a type of computer vision model based on deep learning [9]. CNNs are a category of neural networks that have been successful in areas such as image recognition and classification. Examples of their application include the recognition of faces, objects and traffic lights. In addition, they have revolutionised robotic vision systems and autonomous vehicles. CNNs have to be trained in order to equip them with adaptation skills. During training, the CNN learns from a set of representative images that depict the different objects to be classified or detected. In this study, we focused on images containing geometric shapes; the training database employed contained 10,000 images (Fig. 4), both colour and gray-scale. The following categories were used:
•Artificially generated images representing shapes without borders.
•Artificially generated images representing shapes with borders.
•Artificially generated images representing shapes with distorted
borders, simulating freehand drawings.
•Artificially generated images displaying only a background.
•Natural images displaying different, unrelated objects, obtained from the VOC 2007 dataset [10].
•Artificially generated images representing shapes simulating satellite imagery.
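The artificially generated shape categories above could be produced along the following lines. This is a minimal numpy sketch under our own assumptions, not the generator actually used in the study; the function names and parameters are hypothetical:

```python
import numpy as np

def make_square_mask(size=64, side=24, jitter=0.0, rng=None):
    """Rasterise a filled, centred square; jitter > 0 roughens the
    outline to mimic the 'distorted borders' (freehand-drawing) category."""
    rng = rng or np.random.default_rng()
    img = np.zeros((size, size), dtype=np.uint8)
    lo, hi = (size - side) // 2, (size + side) // 2
    img[lo:hi, lo:hi] = 255
    if jitter > 0:
        # Perturb pixel values near the edge, then re-threshold.
        noise = rng.normal(0.0, jitter, img.shape)
        img = np.where(img.astype(float) + noise > 127, 255, 0).astype(np.uint8)
    return img

def make_triangle_mask(size=64):
    """Rasterise a filled upright triangle row by row: each row's
    half-width grows linearly from the apex down to the base."""
    img = np.zeros((size, size), dtype=np.uint8)
    apex, base, centre = size // 8, 7 * size // 8, size // 2
    for y in range(apex, base + 1):
        half = (y - apex) * (centre - 4) // (base - apex)
        img[y, centre - half:centre + half + 1] = 255
    return img
```

Masks of this kind, after style transfer onto planetary-surface textures and the augmentations described in the text, would form the positive classes; background-only and unrelated natural images form the negatives.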
Since we wanted to analyse images obtained from space probes, the CNN needed similar images in its training database to obtain accurate results. However, it is obviously difficult to obtain real images that include geometric shapes. For this reason, we adapted the rest of the artificially generated images using a CNN able to combine an input image with a style image [11]. During training of the model we applied data augmentation techniques: specifically, images could be rotated up to 45°, flipped horizontally and vertically, and given small variations in lighting and zoom. As for the architecture of the CNN, we used the pre-trained ResNet-34 network with the weights obtained for the ImageNet dataset [12]. We fine-tuned the weights using the fast.ai and PyTorch libraries. The final validation accuracy obtained was 99.49% (Fig. 5). To classify a new image, we used the test-time augmentation technique, which aggregates n predictions obtained using data augmentation on the new image; specifically, we used n = 20.
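The test-time augmentation step can be sketched as follows. This is a hedged numpy illustration of the aggregation idea only: `stub_model` is a hypothetical stand-in for the fine-tuned ResNet-34, and the augmentations are reduced to flips plus a brightness change (the actual pipeline also used rotations up to 45° and zoom variations):

```python
import numpy as np

def augment(img, rng):
    """Apply one random augmentation: horizontal/vertical flips and a
    small brightness variation (a reduced version of the training-time
    augmentations described in the text)."""
    if rng.random() < 0.5:
        img = img[:, ::-1]   # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]   # vertical flip
    return np.clip(img * rng.uniform(0.9, 1.1), 0.0, 255.0)

def tta_predict(model, img, n=20, seed=0):
    """Test-time augmentation: average the probability vectors the model
    assigns to n randomly augmented copies of the input image."""
    rng = np.random.default_rng(seed)
    preds = np.stack([model(augment(img, rng)) for _ in range(n)])
    return preds.mean(axis=0)

def stub_model(img):
    """Hypothetical classifier over (triangle, square, background);
    a trained CNN would go here."""
    brightness = img.mean() / 255.0
    p = np.array([0.3 + 0.1 * brightness, 0.4, 0.3])
    return p / p.sum()
```

Calling `tta_predict(stub_model, image, n=20)` returns one averaged probability vector per image; thresholding its triangle and square components is one plausible way per-image detection percentages like those in Table 2 could be derived.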
3. Results
In Fig. 6 we can see the five patterns most frequently recognised by our human participants. Feature 1 was the pattern most frequently recognised, as predicted, because it represented a well-defined rectangular area in which the space probe Dawn did not obtain its best resolution, conforming to a rectangular section in the image. This pattern was considered our control stimulus because it was obviously artificial. It was followed in popularity by pattern 3, which was perceived to be a circle. Pattern 5 (the big, darker triangle) was the feature least often detected in the first instance. However, after participants performed the reconnaissance task and were asked a posteriori whether they could see a real triangle once it was traced, the percentage detecting it rose from 7.1% to 56% (Table 1). The five patterns most frequently detected and perceived by participants form an interesting overall figure that is highly interrelated geometrically (Fig. 6).
As for the CNN AI model, the results were strikingly similar to those obtained by the humans for the VF-1 image. However, the CNN AI model was also tested with two other Dawn images, for contrast and to observe the model's performance. To our surprise, the model still detected both triangle and square formations on images PIA22626, taken at 58 km altitude, and PIA20653, taken at 385 km, both obtained by Dawn at earlier stages of the Ceres exploration (Table 2). The detection result is consistent for both triangular and square patterns. This confirms that the CNN computer vision AI model detected at least two patterns compatible with two different geometric forms (triangle and square) in the same formation in three different images of the same location at Vinalia Faculae, Occator crater, the bright region on Ceres.
Older imaging of the region of interest by Dawn includes the PIA22626 image, which, unlike previous images of the area, appeared 180° vertically tilted in Dawn's gallery. This image was captured on July 6, 2018 from an altitude of about 36 miles (58 km). The sub-spacecraft position from which this image was taken is about 20.7° north latitude and 242.0° east longitude. This was the last image of the region published by NASA in Dawn's Ceres picture gallery (Fig. 7).
4. Discussion and conclusions
At a time when the search for techno-signatures is about to reshape SETI strategies and goals, a new perceptual and cognitive approach is needed. SETI does not depend exclusively on technology or algorithms but constitutes a cognitive task in itself, a human task limited, as many others are, by aspects of our brain architecture, consciousness and neurophysiology. Here, we have presented some extraordinary examples that perhaps illustrate how our mind can easily
Fig. 5. Validation accuracy curve for trained model.
Table 1
Recognition percentages for the most frequent patterns perceived by human participants. 1: big rectangular formation, left center. 2: big square formation, center. 3: circular shape, center of square. 4: small square formation, upper right. 5A: big triangle before perception test. 5B: big triangle after perception test.

Recognition    5A       5B       1        2        3        4
Yes            11.00    63.20    85.30    47.20    66.90    36.20
No             89.00    36.80    14.70    52.80    33.10    63.80
Fig. 6. Left: section of PIA2195 used for the test. Right: Most frequent patterns
perceived by human participants in the reconnaissance task. Original image
credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/PSI.
experience cognitive dissonance when confronted with the unexpected. This leads us to two possible explanations: (1) cases such as Occator's VF-1 formation discussed here may represent examples where our perception is insufficient because of biological and neurocognitive bias (participants were primed for squares, not triangles), in a Schiaparelli Mars canali or Face on Mars fashion; (2) alternatively, some extraordinary characteristics have appeared in this particular VF-1 case. The presence of possible multiple, interrelated geometries is certainly surprising, but it occurred for both humans and AI, despite the plausible geological nature of this formation (salts, carbonates, etc.).
It has been widely discussed in the field of psychology that inattention produces a failure of conscious perception. According to Mack [13], unattended stimuli to which observers are functionally blind are perceptually as well as cognitively processed. They are analysed, and they consequently produce an implicit percept which is then encoded into an implicit memory store [13].
This percept, which has no presence in conscious awareness, is
memory encoded and stored but there is no conscious access to it unless
it is revealed through priming. If the implicit percept captures
attention, it then becomes an explicit percept; that is, a conscious
percept. If not, it remains an implicit memory. “Implicit or unconscious
perception is fully processed, and is capable of capturing attention and
will do so if it is highly meaningful to the observer when it is viewed
under conditions of inattention” [13].
Cumulative implicit percepts and memories on the specific topic of the possible existence of other non-terrestrial intelligence may be stored for generations, probably based on unattended perceptions, building a not-yet-fully-conscious construct or explicit percept until it becomes meaningful enough to reach consciousness or until technological advances (AI) catalyse it. This type of cognitive phenomenon, similar to how magicians manage to trick our attention, could be the effect of deliberately deceptive actions by advanced intelligences in unbalanced interactions, may happen solely as a result of neurobiological/technological limitations of the species, or may be a combination of both.
In our experiment, an AI CNN model trained to detect triangles and squares obtained results similar to those of humans, but critically different ones in the most controversial part. The results of our study raise some questions that are difficult to answer. First, if we suppose that the
Fig. 7. Vinalia Faculae detail (VF-1): a1) 180° tilted image as published (July 16th) at 58 km altitude by Dawn. a2) Detail of a1, tilted to match the orientation of the other existing images of the same area. b1) Mosaic image detail of VF-1 at 34 km altitude. b2) Image detail of b1. Credit: NASA/JPL, Dawn. c) 3D elevation model of VF-1 produced by the author using the NormalMap freeware tool. Original image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/PSI.
Table 2
CNN AI computer vision model detection rates (%) for the test image and two further images of the same location at Vinalia Faculae, Occator crater, Ceres. Original image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/PSI.

                       PIA21925 (VF-1)          PIA22626                 PIA20653
Date and altitude      July 16th, 2018; 34 km   July 6th, 2018; 58 km    March 26th, 2018; 385 km
% detect (triangle)    52.69                    51.79                    57.52
% detect (square)      86.16                    91.18                    87.30
‘Vinalia Faculae anomaly’ (VF-1) is just a perceptual anomaly, AI did not help to disclose its real nature by acting differently from human bias, but instead gave us a false positive. AI bias may arise as a problem in this field, as has been shown in other scientific domains [14–17]. This may be a concern for the future use of AI model applications in SETI or exobiology searches. Second, AI offered marginal positive detection (triangle), creating a hard-to-resolve cognitive dissonance regarding VF-1 and its possible artificiality.
We did not include segmentation (exact localisation) processing in our analysis, but this could be an interesting line of research for future analysis of VF-1. We believe that using AI systems may improve SETI performance and help overcome human bias in SETI tasks, but AI ethical aspects and human readiness for the possible outcome are yet-to-be-defined factors requiring further research. It may be hard to accept that AI systems can surpass humans in more and more everyday activities, including scientific tasks, not only because of our resulting secondary role in such activities but also because of the new concepts and realities that these AI systems may lead us to. The question here is whether we are ready to accept the outcome.
We do not have to abandon old strategies completely, but we can add new ones, including AI, to bring a different perspective to the search for evidence, not only far away in the cosmos but nearby too, because “life is fascinating: you only have to look at it through the right glasses” [18].
Funding
This study received partial funding support from the Department of Psychology of the University of Cadiz, Spain (ID: 20DPPSOT00).
Declaration of competing interest
The author declares no competing interests.
Acknowledgment
Thanks to Enrique Muñoz from BIT METRICS for his help with the CNN design and training.
References
[1] G.G. De la Torre, M.A. Garcia, The cosmic gorilla effect or the problem of un-
detected non terrestrial intelligent signals, Acta Astronaut. 146 (2018) 83–91.
[2] D.J. Simons, C.F. Chabris, Gorillas in our midst: sustained inattentional blindness
for dynamic events, Perception 28 (1999) 1059–1074.
[3] F.J. Dyson, Search for artificial stellar sources of infrared radiation, Science 131
(3414) (1960) 1667–1668.
[4] D.H. Forgan, On the Possibility of Detecting Class A Stellar Engines Using Exoplanet
Transit Curves, (2013) arXiv preprint arXiv:1306.1672.
[5] J.T. Wright, Prior indigenous technological species, Int. J. Astrobiol. 17 (1) (2018)
96–100.
[6] Y. Zhang, Y. Zhao, Astronomy in the big data era, Data Sci. J. 14 (2015) 11, https://
doi.org/10.5334/dsj-2015-011.
[7] Y.G. Zhang, K.H. Won, S.W. Son, A. Siemion, S. Croft, Self-supervised Anomaly
Detection for Narrowband SETI, (2019) arXiv preprint arXiv:1901.04636.
[8] A. Krizhevsky, I. Sutskever, G.E. Hinton, Imagenet classification with deep con-
volutional neural networks, Adv. Neural Inf. Process. Syst. (2012) 1097–1105.
[9] Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (7553) (2015) 436.
[10] M. Everingham, L. Van Gool, C.K. Williams, J. Winn, A. Zisserman, The PASCAL
Visual Object Classes Challenge 2007 (VOC2007) Results, (2007).
[11] L.A. Gatys, A.S. Ecker, M. Bethge, A Neural Algorithm of Artistic Style, (2015) arXiv preprint arXiv:1508.06576.
[12] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, In
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
2016, pp. 770–778.
[13] A. Mack, Inattentional blindness: reply to, Psyche 7 (2001) 16.
[14] K. Crawford, Artificial intelligence's white guy problem, N. Y. Times 25 (2016).
[15] G.F. Luger, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, Pearson Education, 2005.
[16] A. Verghese, N.H. Shah, R.A. Harrington, What this computer needs is a physician:
humanism and artificial intelligence, Jama 319 (1) (2018) 19–20.
[17] M.A. Williams, Risky bias in artificial intelligence, Australas. Sci. 39 (4) (2018) 43.
[18] A. Dumas, La dame aux camélias, Paris: Le Livre de Poche, (1848).
Gabriel G. De la Torre, PhD. Professor Gabriel G. De la Torre is a clinical neuropsychologist and human factors specialist, and Associate Professor of Psychology at the Department of Psychology of the University of Cádiz in Spain. He obtained his PhD at the Experimental Psychology Department and Human Neuropsychology Laboratory of the University of Seville. He is a member of the International Academy of Astronautics (IAA) (Life Sciences). He has been an expert in study groups SG 3.9 (Global Exploration), SG 3.12 (Space Exploration: The Next Steps) and SG 3.16 (Mars Exploration). He has been coordinator of the Research Topical Team on Psychosocial and Neurobehavioral Aspects of Human Spaceflight funded by the European Space Agency (ESA). He was PI of one Mars-500 experiment on cognition and participated as an expert and collaborator on other international projects and committees, such as the FP7 project THESEUS (Towards Human Exploration of Space: A European Strategy) of the European Science Foundation (ESF). He is a collaborator in the NASA NSCOR team for Evaluating Risk Factors and Biomarkers for Adaptation and Resilience to Spaceflight: Emotional Valence and Social Processes in ICC/ICE Environments. He also participates in other analog and Mars simulation research programs such as AMADEE, Astroland and MDRS. His research interests are cognitive neuroscience, brain injury, performance, space psychology, SETI and consciousness.