Research paper
Does artificial intelligence dream of non-terrestrial techno-signatures?
Gabriel G. De la Torre
Neuropsychology and Experimental Psychology Lab, University of Cadiz, Campus Rio San Pedro, Puerto Real, 11510, Spain
ABSTRACT
Today, we live in the midst of a surge in the use of artificial intelligence in many scientific and technological applications, including the Search for Extraterrestrial Intelligence (SETI). However, human perception and decision-making are still the last part of the chain in any data analysis or interpretation of results or outcomes. One of the potential applications of artificial intelligence is not only to assist in big data analysis but to help to discern possible artificiality or oddities in patterns of radio signals, megastructures or techno-signatures in general. In this study, we review the comparative results of an experiment based on a geometric pattern reconnaissance and perception task, performed by 163 human volunteers and an artificial intelligence convolutional neural network (CNN) computer vision model. To test the model, we used an image of the famous bright spots in the Occator crater on Ceres. We wanted to investigate how the search for techno-signatures or oddities might be influenced by our cognitive skills and consciousness, and whether artificial intelligence could help in this task. This article also discusses how unintentional human cognitive bias might affect the search for extraterrestrial intelligence and techno-signatures compared with artificial intelligence models, and how such models might perform in this type of task. We discuss how searching for unexpected, irregular features might prevent us from detecting other nearside or in-plain-sight rare and unexpected signs. Strikingly, the results showed that a CNN trained to detect triangles and squares scored positive hits on these two geometric shapes, as some humans did.
1. Introduction
In its exobiology funding programme aims for 2019, NASA has included a non-radio techno-signature option, while China has built the biggest radio telescope on Earth. However, the old questions still remain, as does the silence. The SETI strategy, primarily based on radio signals, has been unsuccessful for decades. Although the goal and spirit remain in good shape, it is the approach itself that may be suffering from tunnel vision, otherwise known as the cosmic gorilla effect [1]. In the 1990s, psychologists Simons and Chabris from Harvard popularised an experiment in which half the observers missed a man in a gorilla costume crossing the scene because they were busy counting how many ball passes men in white t-shirts performed. This was due to what is now called the inattentional blindness effect [2]. Our anthropomorphic, or somehow still Ptolemaic, scientific view of the cosmos is due to our brain structure, senses, evolution and mind. We can only grasp a very limited portion of physical reality, and the scale of the universe is itself mind-challenging. If we agree that science has evolved dramatically since the late '50s and '60s, when the radio signal approach was conceived, then now, in a digital, quantum era, maybe it is time for us to update this view with other possibilities. Our concept of what an extraterrestrial is could be wrong. We suppose that 'they' have to travel vast distances, send radio or laser beams, or burst space probes through the immense cosmos. However, other possibilities that challenge the human mind may exist. Independently of the methods or technology they may use, we could focus on the traces, techno-signatures or bio-signatures that they might leave behind. We propose that they could leave behind signs or forms of techno-signatures other than radio signals that we might be able to detect if we look in the right direction and pay attention. Although this may sound obvious, it can become really tricky and confusing.
We humans conceive and model reality to fit our own convenience, experience and concepts, and extraterrestrial intelligence is no exception. Hollywood movies influence our lives and shape our view of extraterrestrial civilisations and intelligence, from the pet-like ET to militarised Star Wars and predatory aliens, all very naïve and humane in essence. Frequently, when we, including scientists, talk about extraterrestrials, we tend to see them as somehow akin either to us or to robots, using radio waves and numbers, sending blueprints as an act of goodwill or even living around Dyson-sphere-like [3] megastructures. The truth could be quite different. More advanced civilisations are perhaps simply incomprehensible to us. Known classifications of intelligent civilisations to date consider energy consumption as a key factor. Such classifications may represent a short-range approach to the problem. Most probably, advanced civilisations will be beyond our technologically comprehensible horizon; they may dominate dark matter or unknown energies and may be multidimensional, but we really don't know.
The fact is that silence in our spectrum persists. Among the reasons for this silence we can mention an array of factors, including (1) the wrong technological approach on our side, (2) human brain/consciousness and (3) 'their' nature and intentions [1]. It is interesting to note that to date some efforts and attention have been directed at factor 1, but less or none at factors 2 and 3. For example, in the new nomenclature proposed for SETI, an old term, techno-signature, has been brought back, which involves the detection of radio signals, lasers, atmospheric pollution, radiation leakage from megastructures or sidereal installations such as Dyson spheres, Shkadov thrusters [4] with the power to alter the orbits of stars around the Galactic Center, etc. Some authors have postulated the possibility of previous ancient civilisations indigenous to our solar system [5] having left behind some techno-signatures that we might find. However, if we look for these techno-signatures, artificial structures or signs, our minds can easily become confused when confronted with the unexpected. The question is whether our minds are ready and capable of finding and understanding such techno-signatures, or whether we need to wait for our consciousness to be able to apprehend and comprehend these phenomena. In the meantime, perhaps, we could get some help with this task from artificial intelligence.
https://doi.org/10.1016/j.actaastro.2019.11.013
Received 22 August 2019; Received in revised form 6 October 2019; Accepted 9 November 2019
E-mail address: gabriel.delatorre@uca.es.
Acta Astronautica 167 (2020) 280–285
Available online 15 November 2019
0094-5765/ © 2019 IAA. Published by Elsevier Ltd. All rights reserved.
Searching for unexpected infrequent elements may prevent us from detecting other nearside infrequent unexpected signs; or as Aristotle put it, "persons do not perceive what is brought before their eyes, if they are at the time in deep thought, or in a fright, or listening to some loud noise". A real possibility is that factor 2, human brain functioning or our level of consciousness, is limiting our search and our understanding of the universe and of more advanced intelligent life living within it. Our understanding of reality is limited and is determined by the circuitry of our brains, and this puts us in a difficult position when confronted with the unknown and unexpected, such as another advanced intelligence or other cosmological aspects. Factor 3 is also relevant because some advanced intelligences may simply prefer to remain undetected without renouncing interaction. This strategy could be interpreted as a form of ecological or naturalistic research approach, as we sometimes adopt when interacting with other species in nature. Another possibility is intentional avoidance.
Artificial intelligence (AI) models have been used in various scientific fields to improve prediction power and forecasting accuracy over older methods. Within these AI models, computer vision helps with tasks such as pattern recognition and the classification and reclassification of events or images. Computer vision applications include medicine, agriculture, safety and astronomy [6,7]. In this paper, for our first aim we focused on factor 2, testing our consciousness and cognitive modus operandi against AI by way of a visual perception experiment using a Convolutional Neural Network (CNN) AI computer vision model.
A second aim of this study was to test whether a trained AI CNN model could help to discover new patterns where such patterns may have been overlooked, and to categorise and classify the data free of human influence and cognitive limitations, in the least biased way possible. Possible applications of this type of study could be new AI algorithms resulting in standardised tools and methodologies applicable to different types of techno-signature search (radio, image, etc.). These AI models could provide probabilistic data about what type of signal or pattern might possibly be detected and its potential artificiality characteristics.
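The probabilistic output envisaged here is, in the simplest case, a softmax over candidate pattern classes. A minimal sketch follows; the class names and logit values are hypothetical and stand in for whatever a real detector would produce:

```python
import math

def softmax(logits):
    """Convert raw classifier scores (logits) into class probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical pattern classes and logits for one analysed image.
classes = ["triangle", "square", "background"]
logits = [2.1, 3.4, 0.2]
probs = softmax(logits)
report = dict(zip(classes, probs))
```

A report of this form ("square: 0.76, triangle: 0.21, ...") is one way such a model could attach an artificiality likelihood to each candidate feature.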
2. Methods
A comparative visual perception and pattern recognition experiment was performed by humans and by an AI CNN computer vision model. For the experiment we used a section of the NASA Dawn probe image PIA21925 (Fig. 1), specifically a section of the Vinalia Faculae region in Ceres's Occator crater, i.e. part of the popular bright spots. This particular image was chosen for two main reasons: first, because of the interest of astronomers and the general public in the bright spots and their possible origin, making it a good, controversial candidate to elucidate; and second, because in a pre-pooled selection of planetary images, including several from Mars and Ceres, many volunteers perceived some geometrical patterns in it, particularly squared patterns. In a popular NASA online questionnaire surveying general public opinion on the origin of the bright spots, conducted before the Dawn spacecraft arrived and closer views of the formations on Occator could be studied, 10% of respondents opted for volcano, 10% for geyser, 6% for rock, 30% for ice and 8% for salt deposit, while 38% opted for 'other'. The more likely scientific option was salt formation.
According to NASA, this 'image was obtained by NASA's Dawn spacecraft in its second extended mission, from an altitude as low as 21 miles (34 km). The contrast in resolution obtained by the two phases is visible, reflected by a few gaps in the high-resolution coverage (blurry parts). This image is superposed to an equivalent scene acquired in the low-altitude mapping orbit of the mission from an altitude of about 240 miles (385 km)' (https://solarsystem.nasa.gov/resources/1095/mosaic-of-the-vinalia-faculae-in-occator-crater/).
2.1. Human perception task
For the task testing human performance, we recruited a sample of 163 participants: 40 men and 123 women, all volunteers, with a mean age of 22.29 years (SD 3.48). None of them had training in astronomy, a related specialty or expert satellite imagery analysis. The task consisted of three stages. In the first stage, an example was given to the participants in which they were shown a satellite picture with a clear geometric form on it (a square), and they were asked to draw over any geometric patterns they thought they could detect. After reading the instructions, they had to turn over the page to where the PIA21925 image section of Vinalia Faculae (Occator crater on Ceres) was shown (Fig. 2). Once they felt they had completed that task they could go on to page 3, where two questions had to be answered: one asked whether they had detected a big triangle pattern in the picture (a traced image including this triangle was shown (see Fig. 2)); the other asked whether, not having done so previously, they could now make out the traced triangle.
2.2. Convolutional Neural Network Model
Fig. 1. PIA21925 image, Occator crater, Ceres. NASA Dawn probe. Inset upper right: section image from original PIA21925 used for the experiment. Original image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/PSI.
Fig. 2. Test image (section of PIA21925) (left), example of participant's response (center) and traced image for the final questionnaire (right). Original image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/PSI.
Fig. 3. Convolutional Neural Network (CNN) architecture used in the study.
Fig. 4. Sample images used for the training of the Artificial Intelligence (AI) CNN computer vision model. Upper row: images with geometric properties, including artificially generated images representing shapes in satellite imagery. Lower row: background images, non-targeted geometric figures and unrelated imagery.
In this study, we focused on images containing geometric shapes; specifically, the database employed contained 10,000 images (Fig. 4), both colour and gray-scale. The computer vision model used to analyse the image was an AI model based on Convolutional Neural Networks (CNNs) (Fig. 3) [8], a
type of computer vision model based on deep learning [9]. CNNs are a category of neural networks that have been successful in areas such as image recognition and classification. Examples of their application include the recognition of faces, objects and traffic lights. In addition, they have revolutionised robotic vision systems and autonomous vehicles. CNNs have to be trained in order to equip them with adaptation skills. During training, the CNN learns by using a set of representative images that depict the different objects to be classified or detected. The following categories were used:
- Artificially generated images representing shapes without borders.
- Artificially generated images representing shapes with borders.
- Artificially generated images representing shapes with distorted borders, simulating freehand drawings.
- Artificially generated images displaying only a background.
- Natural images displaying different objects, obtained from the VOC 2007 dataset [10] and unrelated to the task.
- Artificially generated images representing shapes simulating satellite imagery.
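The paper does not publish its image-generation code, but the "distorted borders, simulating freehand drawings" category could be produced along these lines: render a shape outline into a binary grid while randomly perturbing each edge pixel. Every name and parameter here is a hypothetical illustration, not the authors' actual generator:

```python
import random

def jittered_square(size=64, margin=12, jitter=2, seed=0):
    """Render a square outline with randomly perturbed edges into a
    size x size binary grid, mimicking a freehand drawing."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    lo, hi = margin, size - margin - 1
    for t in range(lo, hi + 1):
        # Walk the four edges of the square, jittering each pixel.
        for (r, c) in ((lo, t), (hi, t), (t, lo), (t, hi)):
            rr = min(size - 1, max(0, r + rng.randint(-jitter, jitter)))
            cc = min(size - 1, max(0, c + rng.randint(-jitter, jitter)))
            grid[rr][cc] = 1
    return grid

img = jittered_square()
n_on = sum(map(sum, img))  # number of "drawn" pixels
```

A real pipeline would rasterise such grids into colour or gray-scale images and apply the style-transfer step described below to make them resemble satellite imagery.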
Since we wanted to analyse images obtained from space probes, for the CNN to obtain accurate results we needed similar images in the training database. However, it is obviously difficult to obtain real images that include geometric shapes. For this reason, we adapted the rest of the artificially generated images using a CNN able to combine an input image with a style image [11]. During the training of the model we applied data augmentation techniques: specifically, the images could be rotated up to 45° and flipped horizontally and vertically, and could undergo small variations in lighting and zoom. As for the architecture of the CNN, we used the pre-trained ResNet-34 network model with the weights obtained for the ImageNet dataset [12]. We fine-tuned the weights using the fast.ai and PyTorch libraries. The final validation accuracy obtained was 99.49% (Fig. 5). To classify a new image, we used the test-time augmentation technique, which aggregates n predictions obtained using data augmentation on the new image; specifically, we used n = 20.
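The test-time augmentation step can be sketched in framework-agnostic terms: run the classifier on n = 20 randomly augmented copies of the input and average the per-class scores. The stub model, augmentation, and class ordering below are hypothetical stand-ins for the actual fast.ai/PyTorch pipeline, which the paper does not publish:

```python
import random

def tta_predict(model, image, augment, n=20, seed=0):
    """Average per-class scores over n randomly augmented copies of the image."""
    rng = random.Random(seed)
    sums = None
    for _ in range(n):
        scores = model(augment(image, rng))
        sums = scores if sums is None else [a + b for a, b in zip(sums, scores)]
    return [s / n for s in sums]

# Stubs standing in for the real CNN and augmentation pipeline.
def flip(image, rng):
    """Randomly mirror the (toy, 1-D) image, as a stand-in for real augmentation."""
    return image[::-1] if rng.random() < 0.5 else image

def toy_model(image):
    """Fixed per-class scores over (triangle, square, background);
    a real model's output would depend on the image content."""
    return [0.5, 0.9, 0.1] if sum(image) > 0 else [0.1, 0.1, 0.8]

image = [0, 1, 1, 0]
avg = tta_predict(toy_model, image, flip)
```

Averaging over augmented views is what makes the reported detection rates (e.g. the triangle/square percentages in Table 2) a proportion over n predictions rather than a single forward pass.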
3. Results
In Fig. 7 we can see the five patterns most frequently recognised by our human participants. Feature 1 was the pattern most frequently recognised, as predicted, because it represented a well-defined rectangular area in which the space probe Dawn did not get the best resolution, conforming to a rectangular section in the image. This pattern was considered our control stimulus because it was obviously artificial. It was followed in popularity by pattern 3, which was perceived to be a circle. Pattern 5 (the big, darker triangle) was the least often detected feature in the first instance. However, after participants performed the reconnaissance task and were asked a posteriori whether they could see a real triangle after it was traced, the percentage detecting it rose from 7.1% to 56% (Table 1). The five patterns most frequently detected and perceived by participants conform to an interesting overall figure that is highly interrelated geometrically (Fig. 6).
As for the CNN AI model, the results are strikingly similar to those obtained by the humans for the images of VF-1. However, the CNN AI model was also tested with two other Dawn images for contrast and observation of the model's performance. To our surprise, the model still detected both triangle and square formations or patterns in images PIA22626, taken at an altitude of 58 km, and PIA20653, taken at 385 km, both obtained by Dawn at earlier stages of the Ceres exploration (Table 2). There is a constant detection result for both triangular and square patterns. This confirms that the CNN computer vision AI model detected at least two patterns compatible with two different geometric forms (a triangle and a square) in the same formation in three different images of the same location at Vinalia Faculae, Occator crater, the bright region on Ceres.
Older imaging of the region of interest performed by Dawn includes the PIA22626 image, which, contrary to previous images of the area, appeared 180° vertically tilted in Dawn's gallery. This image was captured on July 6, 2018 from an altitude of about 36 miles (58 km). The sub-spacecraft position from which this image was taken is about 20.7° north latitude and 242.0° east longitude. This was the last image of the region published by NASA in Dawn's Ceres picture gallery (Fig. 7).
4. Discussion and conclusions
Fig. 5. Validation accuracy curve for the trained model.
Fig. 6. Left: section of PIA21925 used for the test. Right: most frequent patterns perceived by human participants in the reconnaissance task. Original image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/PSI.
Table 1
Recognition percentages for the most frequent patterns perceived by human participants. 1: big rectangular formation, left center. 2: big square formation, center. 3: circular shape, center of square 2. 4: small squared formation, upper right. 5A: big triangle before perception test. 5B: big triangle after perception test.

Recognition    5A       5B       1        2        3        4
yes            11.00    63.20    85.30    47.20    66.90    36.20
no             89.00    36.80    14.70    52.80    33.10    63.80

In a time when the search for techno-signatures is about to reshape SETI strategies and goals, a new perceptual and cognitive approach is needed. SETI does not depend exclusively on technology or algorithms but constitutes a cognitive task in itself, a human task limited, as many others are, by certain aspects dependent on our brain architecture, consciousness and neurophysiology. Here, we have presented some extraordinary examples that perhaps illustrate how our mind can easily experience cognitive dissonance when confronted with the unexpected.
This leads us to two possible explanations: (1) cases such as Occator's VF-1 formation discussed here may represent examples where our perception may be insufficient because of biological and neurocognitive bias (participants were primed for squares, not triangles), in the fashion of Schiaparelli's Mars canali or the Face on Mars; (2) alternatively, some extraordinary characteristics have appeared in this particular VF-1 case. The presence of possibly multiple, interrelated geometries is certainly surprising, but it was observed by both humans and AI, despite the plausible geological nature of this formation (salts, carbonates, etc.).
It has been widely discussed in the field of psychology that inattention produces a failure of conscious perception. According to Mack [13], the unattended stimuli to which observers are functionally blind are perceptually as well as cognitively processed. They are analysed, and they consequently produce an implicit percept which is then encoded into an implicit memory store [13].
This percept, which has no presence in conscious awareness, is memory encoded and stored, but there is no conscious access to it unless it is revealed through priming. If the implicit percept captures attention, it then becomes an explicit percept; that is, a conscious percept. If not, it remains an implicit memory. "Implicit or unconscious perception is fully processed, and is capable of capturing attention and will do so if it is highly meaningful to the observer when it is viewed under conditions of inattention" [13].
Cumulative implicit percepts and memories on the specific topic of the possible existence of other non-terrestrial intelligence may be stored for generations, probably based on unattended perceptions, building a not yet fully conscious construct or explicit percept until it becomes meaningful enough to become conscious or until technological advances (AI) catalyse it. This type of cognitive phenomenon, similar to how magicians manage to trick our attention, could be the effect of deliberately deceptive actions by advanced intelligences in unbalanced interactions, may happen solely as a result of neurobiological/technological limitations of the species, or may be a combination of both.
In our experiment, an AI CNN model trained to detect triangles and squares obtained similar results to those of humans, but critically different ones in the most controversial part. The results of our study raise some questions that are difficult to answer. First, if we suppose that the 'Vinalia Faculae anomaly' (VF-1) is just a perceptual anomaly, AI did not help to disclose its real nature by acting in a way different from human bias but gave us a false positive. AI bias may arise as a possible problem in this field, as has been shown before in other scientific domains [14–17]. This may be a concern for the future use of AI model applications in SETI or exobiology searches. Second, AI offered marginal positive detection (triangle), creating a hard-to-solve and hard-to-accept cognitive dissonance for VF-1 and its possible artificiality.
Fig. 7. Vinalia Faculae detail (VF-1): a1) 180° tilted image as it was published (July 16th), taken at 58 km of altitude by Dawn. a2) Detail of a1, tilted to match the orientation of the other existing images of the same area. b1) Mosaic image detail of VF-1 at 34 km of altitude. b2) Image detail of b1. c) 3D elevation model of VF-1 produced by the author using the NormalMap freeware tool. Credit: NASA/JPL, Dawn. Original image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/PSI.
Table 2
CNN AI computer vision model detection rate (%) for the test image and two further images of the same location at Vinalia Faculae, Occator crater, Ceres. Original image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/PSI.

Image (date, altitude)                      Triangle    Square
PIA21925 (VF-1) (July 16th, 2018; 34 km)    52.69       86.16
PIA22626 (July 6th, 2018; 58 km)            51.79       91.18
PIA20653 (March 26th, 2018; 385 km)         57.52       87.30
We did not include segmentation (exact localisation) processing in our analysis, but this could be an interesting line of research for the future analysis of VF-1. We believe that using AI systems may improve SETI performance and help overcome human bias in SETI tasks, but AI ethical aspects and human readiness for the possible outcome are yet-to-be-defined factors requiring further research. It may be hard to accept that AI systems can surpass humans in more and more everyday activities, including scientific tasks, not only because of our resulting secondary role in such activities but also because of the new concepts and realities that these AI systems may take us to. The question here is whether we are ready to accept the outcome.
We do not have to completely abandon old strategies, but we can add new ones, including AI, to bring a different perspective to the search for evidence, not only far away in the cosmos but nearby too, because "life is fascinating: you only have to look at it through the right glasses" [18].
Funding
This study received partial funding support from the Department of Psychology of the University of Cadiz, Spain (ID: 20DPPSOT00).
Declaration of competing interest
The author declares no competing interests.
Acknowledgment
Thanks to Enrique Muñoz from BIT METRICS for his help with the CNN training and design.
References
[1] G.G. De la Torre, M.A. Garcia, The cosmic gorilla effect or the problem of undetected non-terrestrial intelligent signals, Acta Astronaut. 146 (2018) 83–91.
[2] D.J. Simons, C.F. Chabris, Gorillas in our midst: sustained inattentional blindness for dynamic events, Perception 28 (1999) 1059–1074.
[3] F.J. Dyson, Search for artificial stellar sources of infrared radiation, Science 131 (3414) (1960) 1667–1668.
[4] D.H. Forgan, On the possibility of detecting class A stellar engines using exoplanet transit curves, arXiv preprint arXiv:1306.1672 (2013).
[5] J.T. Wright, Prior indigenous technological species, Int. J. Astrobiol. 17 (1) (2018) 96–100.
[6] Y. Zhang, Y. Zhao, Astronomy in the big data era, Data Sci. J. 14 (2015) 11, https://doi.org/10.5334/dsj-2015-011.
[7] Y.G. Zhang, K.H. Won, S.W. Son, A. Siemion, S. Croft, Self-supervised anomaly detection for narrowband SETI, arXiv preprint arXiv:1901.04636 (2019).
[8] A. Krizhevsky, I. Sutskever, G.E. Hinton, ImageNet classification with deep convolutional neural networks, Adv. Neural Inf. Process. Syst. (2012) 1097–1105.
[9] Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (7553) (2015) 436–444.
[10] M. Everingham, L. Van Gool, C.K. Williams, J. Winn, A. Zisserman, The PASCAL Visual Object Classes Challenge 2007 (VOC2007) results, 2007.
[11] L.A. Gatys, A.S. Ecker, M. Bethge, A neural algorithm of artistic style, arXiv preprint arXiv:1508.06576 (2015).
[12] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
[13] A. Mack, Inattentional blindness: reply to commentaries, Psyche 7 (2001) 16.
[14] K. Crawford, Artificial intelligence's white guy problem, N. Y. Times, 25 (2016).
[15] G.F. Luger, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, Pearson Education, 2005.
[16] A. Verghese, N.H. Shah, R.A. Harrington, What this computer needs is a physician: humanism and artificial intelligence, JAMA 319 (1) (2018) 19–20.
[17] M.A. Williams, Risky bias in artificial intelligence, Australas. Sci. 39 (4) (2018) 43.
[18] A. Dumas, La Dame aux camélias, Le Livre de Poche, Paris, 1848.
Gabriel G. De la Torre, PhD. Professor Gabriel G. De la Torre is a clinical neuropsychologist and human factors specialist, and Associate Professor of Psychology at the Department of Psychology of the University of Cádiz in Spain. He obtained his PhD at the Experimental Psychology Department and Human Neuropsychology Laboratory of the University of Seville. He is a member of the International Academy of Astronautics (IAA) (Life Sciences). He has been an expert in groups SG 3.9 (Global exploration), SG 3.12 (Space exploration: the next steps) and SG 3.16 (Mars exploration). He has been coordinator of the Research Topical Team on Psychosocial and Neurobehavioral Aspects of Human Spaceflight, funded by the European Space Agency (ESA). He was PI of one Mars-500 experiment on cognition and participated as an expert and collaborator on other international projects and committees, such as the FP7 project THESEUS (Towards Human Exploration of Space: A European Strategy) of the European Science Foundation (ESF). He is a collaborator in the NASA NSCOR team for Evaluating Risk Factors and Biomarkers for Adaptation and Resilience to Spaceflight: Emotional Valence and Social Processes in ICC/ICE Environments. He also participates in other analogue and Mars simulation research programmes such as AMADEE, Astroland and MDRS. His research interests are cognitive neuroscience, brain injury, performance, space psychology, SETI and consciousness.
... These bright spots are known as faculae, and previous studies have reported that the faculae are mainly sodium carbonate structures [6] and are suggested to be significantly younger than the impact crater itself [5,7], although low altitude mapping orbit (LAMO) imaging by the Dawn probe was insufficient for a reliable age determination. Some apparently geometric formations in its interior have also been reported [8] (Figure 1). According to [1] "the~17-km-wide and 4-km-high Ahuna Mons has a distinct size, shape, and morphology ( Figure 2). ...
... determination. Some apparently geometric formations in its interior have also been reported [8] (Figure 1). According to [1] "the ~17-km-wide and 4-km-high Ahuna Mons has a distinct size, shape, and morphology ( Figure 2). ...
... This is the case of a recent experiment, where humans and AI models were compared when looking for geometric patterns on Ceres (Vinalia Faculae in the Occator crater). The results of this research showed that both humans and AI-supervised machine learning models identified geometric patterns in one particular feature in this region (a square inside a triangle ( Figure 1)) [8]. Supervised deep learning models where the experimenter has to feed previous sets of stimuli are sensible to bias, while simpler computer vision/feature detection models represent a very efficient, fast, and free-of-bias strategy. ...
Article
Full-text available
Ahuna Mons is a 4 km particular geologic feature on the surface of Ceres, of possibly cryovolcanic origin. The special characteristics of Ahuna Mons are also interesting in regard of its surrounding area, especially for the big crater beside it. This crater possesses similarities with Ahuna Mons including diameter, age, morphology, etc. Under the cognitive psychology perspective and using current computer vision models, we analyzed these two features on Ceres for comparison and pattern-recognition similarities. Speeded up robust features (SURF), oriented features from accelerated segment test (FAST), rotated binary robust independent elementary features (BRIEF), Canny edge detector, and scale invariant feature transform (SIFT) algorithms were employed as feature-detection algorithms, avoiding human cognitive bias. The 3D analysis of images of both features’ (Ahuna Mons and Crater B) characteristics is discussed. Results showed positive results for these algorithms about the similarities of both features. Canny edge resulted as the most efficient algorithm. The 3D objects of Ahuna Mons and Crater B showed good-fitting results. Discussion is provided about the results of this computer-vision-techniques experiment for Ahuna Mons. Results showed the potential for the computer vision models in combination with 3D imaging to be free of bias and to detect potential geoengineered formations in the future. This study also brings forward the potential problem of both human and cognitive bias in artificial-intelligence-based models and the risks for the task of searching for technosignatures.
... The truth could be quite different: more advanced civilizations may simply be incomprehensible to us [5]. However, this vision of extraterrestrial intelligence stands in opposition to the divine interpretations that humans made in the past when confronted with strange, supposedly more advanced civilizations. ...
Conference Paper
Full-text available
Romanticism was an intellectual orientation with an impact on the arts, philosophy, and science from the late 18th to the mid-19th century. It arose as a response to the classicism, neoclassicism, rationalism, and Enlightenment thinking of the previous century. Romanticism favoured the subjective, imaginative, visionary, and transcendental. This had an impact on science, including astronomy, and shows a form of cognitive bias that we can still perceive in current scientific views, especially in the search for extraterrestrial intelligence, because such biases are somehow inherent to human nature.
... Furthermore, considering the agentic self-identification [61], the ontological status of an agent and its actions is partially independent of an observer. Here, research addressing various technosignatures and the general limitations of our search process can produce valuable insights [41,43,66,84] to mitigate identification problems. Combining this approach with an interdisciplinary effort that also theorizes about concepts that are not yet directly observable can reduce the likelihood of false positives and negatives [39]. ...
Article
Despite lacking scientific proof, thinking about extraterrestrials and extraterrestrial intelligence is part of our psychological reality. It is often stated that cultural and scientific reception and representation of these strange entities suffer from anthropocentric bias. To profoundly investigate such bias and the minds of extraterrestrials, we propose a revised definition for the psychological discipline called “exopsychology.” We define exopsychology as a sub-discipline of psychology, which investigates the cognition, behavior, affects, and motives of extraterrestrial agents and their human-specific representation. It is argued that the concept of intelligence is not suited for application in SETI. Thus, inherent in exopsychology is the conception of extraterrestrials as higher-order cognitive agents and as strangest strangers. We discuss the possibilities and limitations of conclusions about extraterrestrials, which leads us to hypothesize that limited statements about them might be possible, even though still influenced by anthropocentrism. We argue that it is possible to utilize anthropocentric knowledge and distinguish between admissible and inadmissible anthropocentrism. Although the first contact between extraterrestrials and humanity might never occur, scientific thinking about extraterrestrials will improve our understanding of ourselves and our place in the universe.
... Both are already being used in technosignature research, e.g., [8,9]. ...
Article
In the spirit of Trimble’s “Astrophysics in XXXX” series, I very briefly and subjectively review developments in SETI in 2020. My primary focus is 75 papers and books published or made public in 2020, which I sort into six broad categories: results from actual searches, new search methods and instrumentation, target and frequency selection, the development of technosignatures, theory of ETIs, and social aspects of SETI.
Preprint
In the spirit of Trimble’s “Astrophysics in XXXX” series, I very briefly and subjectively review developments in SETI in 2020. My primary focus is 74 papers and books published or made public in 2020, which I sort into six broad categories: results from actual searches, new search methods and instrumentation, target and frequency selection, the development of technosignatures, theory of ETIs, and social aspects of SETI.
Article
Full-text available
This article points to a long-standing problem in space research and cosmology: the problem of undetected signs of non-terrestrial life and civilizations. We intentionally avoid the term extraterrestrial, as we consider other possibilities that may arise but not fall strictly within the extraterrestrial scope. We discuss the role of new physics, including dark matter and string theory, in the search for life and other non-terrestrial intelligence. A new classification for non-terrestrial civilizations, with three types and five dimensions, is also provided. We also explain how our own neurophysiology, psychology, and consciousness can play a major role in the search for non-terrestrial civilizations, and how they have been neglected to date. To test this, 137 adults were evaluated using the cognitive reflection test, an attention/awareness questionnaire, and a visuospatial search task with aerial-view images to determine the presence of inattentional blindness.
Article
Full-text available
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
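The core idea in this abstract, that backpropagation indicates how internal parameters should change, can be shown in miniature. The sketch below is a deliberately minimal assumption-laden toy: a single linear layer (deep networks stack many such layers with nonlinearities) fit by gradient descent on synthetic data generated from a known rule. All data and parameter names are hypothetical.

```python
import numpy as np

# Toy data: 100 three-dimensional inputs and targets produced by a
# known linear rule, so we can check that learning recovers it.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w

w = np.zeros(3)          # internal parameters, initially uninformative
lr = 0.1                 # learning rate
for step in range(200):
    pred = X @ w
    err = pred - y
    loss = (err ** 2).mean()          # mean squared error
    grad = 2 * X.T @ err / len(X)     # dLoss/dw via the chain rule
    w -= lr * grad                    # gradient-descent update

print(np.round(w, 3))   # approaches [ 2.  -1.   0.5]
```

The gradient tells the model, layer by layer in the deep case, how each parameter should move to reduce the loss; here, after 200 updates, the learned weights match the generating rule.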
Article
Full-text available
The fields of Astrostatistics and Astroinformatics are vital for dealing with the big-data issues now faced by astronomy. Like other disciplines in the big-data era, astronomy exhibits the many-V characteristics of big data (volume, velocity, variety, and so on). In this paper, we list the different data-mining algorithms used in astronomy, along with data-mining software and tools related to astronomical applications. We present the SDSS, a project often referred to by other astronomical projects, as the most successful sky survey in the history of astronomy, and describe the factors behind its success. We also discuss the success of Astrostatistics and Astroinformatics organizations and the conferences and summer schools held annually on these topics. All of the above indicates that astronomers and scientists from other areas are ready to face the challenges and opportunities presented by massive data volumes.
Article
Full-text available
This report presents the results of the 2006 PASCAL Visual Object Classes Challenge (VOC2006). Details of the challenge, data, and evaluation are presented. Participants in the challenge submitted descriptions of their methods, and these have been included verbatim. This document should be considered preliminary, and subject to change.
Article
The nationwide implementation of electronic medical records (EMRs) resulted in many unanticipated consequences, even as these systems enabled most of a patient’s data to be gathered in one place and made those data readily accessible to the clinicians caring for that patient. The redundancy of notes, the burden of alerts, and the overflowing inbox have led to the “4000 keystrokes a day” problem [1] and have contributed to, and perhaps even accelerated, physician reports of symptoms of burnout. Even though the EMR may serve as an efficient administrative business and billing tool, and even as a powerful research warehouse for clinical data, most EMRs serve their front-line users quite poorly. The unanticipated consequences include the loss of important social rituals (between physicians, and between physicians and nurses and other health care workers) around the chart rack and in the radiology suite, where all specialties converged to discuss patients.
Article
One of the primary open questions of astrobiology is whether there is extant or extinct life elsewhere in the Solar System. Implicit in much of this work is that we are looking for microbial or, at best, unintelligent life, even though technological artifacts might be much easier to find. SETI work on searches for alien artifacts in the Solar System typically presumes that such artifacts would be of extrasolar origin, even though life is known to have existed in the Solar System, on Earth, for eons. But if a prior technological, perhaps spacefaring, species ever arose in the Solar System, it might have produced artifacts or other technosignatures that have survived to present day, meaning Solar System artifact SETI provides a potential path to resolving astrobiology's question. Here, I discuss the origins and possible locations for technosignatures of such a prior indigenous technological species, which might have arisen on ancient Earth or another body, such as a pre-greenhouse Venus or a wet Mars. In the case of Venus, the arrival of its global greenhouse and potential resurfacing might have erased all evidence of its existence on the Venusian surface. In the case of Earth, erosion and, ultimately, plate tectonics may have erased most such evidence if the species lived Gyr ago. Remaining indigenous technosignatures might be expected to be extremely old, limiting the places they might still be found to beneath the surfaces of Mars and the Moon, or in the outer Solar System.
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
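The layer types named in this abstract (convolution, non-saturating activation, max-pooling, fully connected softmax) can be demonstrated at toy scale. The forward pass below is a hypothetical sketch in plain numpy, nowhere near the 60-million-parameter network described, and all sizes and weights are illustrative assumptions.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most
    deep-learning libraries)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def relu(x):
    return np.maximum(x, 0)          # non-saturating activation

def max_pool(x, k=2):
    """Non-overlapping k-by-k max pooling."""
    H, W = x.shape
    return x[:H - H % k, :W - W % k].reshape(H // k, k, W // k, k).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

# Toy forward pass: one conv layer, ReLU, pooling, then a tiny fully
# connected layer ending in a softmax over 3 hypothetical classes.
rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))
kernel = rng.normal(size=(3, 3))
feat = max_pool(relu(conv2d(image, kernel)))     # 3x3 feature map
fc_w = rng.normal(size=(3, feat.size))
probs = softmax(fc_w @ feat.ravel())
print(probs.shape)   # (3,): a probability distribution over classes
```

A real CNN stacks several such conv/pool stages and learns the kernels and weights by backpropagation; the point here is only to make the forward structure of the abstract's architecture concrete.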