Thoughts on thinking
Lev I. Verkhovsky
levver@list.ru
ABSTRACT
The general principles of thinking are
considered. An attempt is made to present a geometric model
reflecting logical and intuitive thinking.
This article was published in the Soviet popular
science magazine "Chemistry and Life" [Химия и жизнь] (1989, No. 7)
in Russian. Below is an English translation with minimal changes.
BIG BANGS ARE NOT ONLY IN COSMOLOGY
A distinctive property of thinking probably lies in the ability to achieve a given goal, that is, to find the desired option among others that are admissible in principle but do not lead to the required result. For example, if a monkey in a cage has a heap of different items at its disposal, but can reach a banana only by choosing a box from the pile to stand on and a stick to knock the banana down, then we judge the monkey's intelligence by how it copes with this choice.
Valid options are combinations of certain elements: actions in
practical matters, inferences in proofs, colors and sounds in art.
Maybe, in order to get the desired combination, you just need
to go through the options one by one and discard all the bad
ones?
The futility of such an approach follows from a simple fact
called in cybernetics a combinatorial explosion. The fact is that
if elements can be freely grouped with each other, then the
total set of combinations grows (with an increase in the
number of elements in the set) extremely quickly,
exponentially. So, with an alphabet of only ten characters, one can compose 10 to the hundredth power texts, each one hundred letters long!
A machine scanning even a billion of these hundred-letter texts per second would need on the order of 10 to the power of 83 years to review them all. Testing every option is therefore beyond the power of the slow human mind and of any computer, however perfect.
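A quick back-of-the-envelope check of this estimate, written as a minimal Python sketch (the alphabet size, text length, and scanning rate are the figures from the paragraph above):

```python
# Sanity check of the combinatorial-explosion estimate above:
# a 10-letter alphabet, texts 100 letters long, and a machine that
# scans one billion texts per second.

ALPHABET_SIZE = 10
TEXT_LENGTH = 100
SCAN_RATE = 10**9                     # texts per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

total_texts = ALPHABET_SIZE ** TEXT_LENGTH        # 10**100 possible texts
seconds_needed = total_texts // SCAN_RATE         # 10**91 seconds
years_needed = seconds_needed // SECONDS_PER_YEAR

print(f"{float(total_texts):.1e} texts in all")        # ~1.0e+100
print(f"{float(years_needed):.1e} years to scan them") # ~3.2e+83
```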
And yet unique texts of many hundreds and thousands of characters do somehow arise (in Mozart's music you cannot touch a single note). The essence of creativity lies in the search for such
new and irreplaceable combinations. "But after all, somewhere
there is it, that one -- the only one, inexplicable, the ingenious
order of sounding notes, the ingenious order of ordinary
words!" (R. Rozhdestvensky).
This means that there must be ways to find the “needle” of the
desired without a complete enumeration of the “haystack” of
the possible.
PYRAMID OF LANGUAGES
It is clear that the construction of the desired combination
would be impossible if it immediately began at the level of
those elements in which it must finally be expressed; let us call this level the implementation language. After all, knowledge of the letters is not enough to compose a novel, and knowing the traffic rules is not enough to reach the right address.
Therefore, we always use not one language but a whole set of them. Using this set, we try to solve the problem in general terms, that is, to reduce it to a number of subproblems, those to still smaller ones, and so on, until each is so simple that it can be expressed in the implementation language. In effect, we successively break one complex task down into a growing number of ever easier ones, just as a set of maps of different scales is used when laying out a route.
Indeed, when determining the path, we start with the roughest map, one covering the entire route. From it we move on to a small set of more detailed maps, and from each of those to several still more detailed ones. At every step we easily find what we need, since each more general map already sharply limits the further search. Thus the hierarchy of languages contains an antidote to the combinatorial explosion.
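A toy calculation, sketched below, shows why such step-by-step narrowing defeats the explosion; the branching factor and number of levels are invented purely for illustration. Choosing one of b refinements at each of d levels means examining roughly b*d candidates, instead of the b^d combinations a flat enumeration would have to face.

```python
# Illustrative comparison: flat enumeration vs. hierarchical narrowing.
# b = how many options each level ("map scale") offers,
# d = how many levels the hierarchy has. Both numbers are arbitrary.

b = 10   # options considered at each level
d = 6    # number of levels

flat_search = b ** d          # examine every full combination at once
hierarchical_search = b * d   # settle one level, then refine the next

print(f"flat enumeration:       {flat_search:,} candidates")         # 1,000,000
print(f"hierarchical narrowing: {hierarchical_search:,} candidates")  # 60
```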
It is clear that the success of the entire multi-stage procedure will depend on how complete the existing set of "maps" is -- whether entire levels or individual maps are missing.
But such completeness is possible only in a well-studied area.
And at the forefront of science, it is the lack of knowledge that
is more characteristic, requiring efforts to expand and
reorganize language tools. To understand the development of
such tools, it is convenient to turn to programming languages.
PROGRAMMING CONCERNS
The scheme of a conventional modern computer embodies the
language of machine commands, consisting of the simplest
arithmetic and logical operations. The primitiveness of this language is the price of its versatility: the machine is meant to serve all kinds of purposes, and from small bricks one can build houses of the most ornate shape, which cannot be said of large blocks.
However, each specific user solves only his own narrow range of tasks and does not need this versatility. On the contrary, he would like to operate with large blocks, which would reduce the amount of enumeration. In other words, he would like a language focused specifically on his problems. How can one get it?
When composing a few of the simplest programs, one notices that certain combinations of commands repeat all the time; they seem to stick together. You can assign a name to such a combination, enter it into memory, and an operator of a higher-level language is ready. (This is analogous to the development of a conditioned reflex: repeated stimuli and reactions become a single whole.) Such a course of action can be called the "bottom-up" path.
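A minimal sketch of this bottom-up step (the primitive commands and the composed operator below are invented for illustration, not taken from the article): a recurring sequence of elementary operations is given a name and from then on is used as a single higher-level operator.

```python
# Hypothetical "machine commands": tiny primitive operations.
def load(x):
    return x            # bring a value in

def add(x, y):
    return x + y        # elementary arithmetic

def double(x):
    return x * 2        # another elementary operation

# This combination of primitives keeps recurring in our programs,
# so we give it a name -- it becomes an operator of a higher-level language.
def scale_and_shift(x, shift):
    return add(double(load(x)), shift)

# From now on we think in terms of the new operator, not the bricks.
print(scale_and_shift(5, 3))   # 13
```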
But there is another way: "from above". We analyze the entire set of our problems and look for a set of parts, as large as possible, from which any desired algorithm could be assembled. Drawing a parallel with construction again, we could say that we determine the set of blocks from which it will be possible to erect all buildings of a specified type.
Here a human uses his advantage over the machine: the richness of his notions about the world. To the computer this large-block language is completely incomprehensible, and each block has to be translated into a set of bricks -- machine commands. For this, a translator program is written (again by hierarchical partitioning). Different areas will have their own sets of blocks; this is how hundreds of algorithmic languages arise, each of them dividing the world in its own way.
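A hedged sketch of such a translator (the block names and the primitive command set are made up for illustration; they are not from the article): each large block is expanded into a fixed sequence of brick-level commands.

```python
# Hypothetical large-block language and its expansion into "machine commands".
BLOCKS = {
    "fetch_record": ["load_address", "read_memory"],
    "update_total": ["load_accumulator", "add", "store_accumulator"],
    "report":       ["load_accumulator", "print"],
}

def translate(program):
    """Expand a program written in large blocks into primitive commands."""
    machine_code = []
    for block in program:
        machine_code.extend(BLOCKS[block])
    return machine_code

# A three-block program expands into seven primitive commands.
print(translate(["fetch_record", "update_total", "report"]))
```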
In these relations the general principle of thinking shows itself: work on the upper floors of the language hierarchy. If we do not have a high-level language at our disposal, we need to create one. As we have already said, the main goal is to avoid large enumerations of options.
The final result -- for example, the justification of some statement -- must be reduced to something well understood: axioms in a formal theory, atomic-molecular notions in chemistry (this is the implementation language). So the task is to descend to this level and then go in the opposite direction (from the bottom up), carrying out logical inference, strict deduction.
In general, since the time of Aristotle, thinking has been closely
connected with logic.
TWO LOGICS. GEOMETRIC ILLUSTRATION
Even at school, in geometry lessons, we learn well the essence of a strict logical system: if we have managed to stretch a chain of inferences from the initial postulates to the required statement, then there is no doubt about its truth (until someone, like Lobachevsky, questions the very foundations). But if the chain of inference is long enough, then, knowing only the axioms, it is impossible to build a proof without a large search.
Therefore, whole blocks of inferences are needed here as well. To obtain them, we first solve very simple problems (their chains are short) and remember each one solved -- they become concepts of a higher level (this is what we called the "bottom-up" path). The most important statements, those reflecting general properties of the entire range of problems, are called theorems -- they must be remembered. Now, faced with a more difficult problem, one no longer has to reduce it to the postulates, but only to present it as a combination of already solved problems and proven theorems (the way down from them has already been traversed).
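A small illustration of this effect, sketched under invented numbers (it is not from the article): think of the problem as a strip of length n to be covered, of axioms as short pieces, and of memorized theorems as longer, pre-assembled pieces. The derivation from theorems is far shorter, and hence the search for it far shallower.

```python
# Covering a "problem" of length n with pieces: a one-dimensional stand-in
# for the parquet picture below. Piece lengths are invented for illustration.
from functools import lru_cache

def min_pieces(n, pieces):
    """Fewest pieces needed to cover a strip of length n exactly."""
    @lru_cache(maxsize=None)
    def best(k):
        if k == 0:
            return 0
        options = [best(k - p) for p in pieces if p <= k]
        return min(options) + 1 if options else float("inf")
    return best(n)

AXIOMS = (1, 2)            # short "axiom" pieces only
THEOREMS = (1, 2, 7, 12)   # plus longer, already-proved blocks

n = 60
print(min_pieces(n, AXIOMS))    # 30 steps from the axioms alone
print(min_pieces(n, THEOREMS))  # 5 steps using theorem blocks
```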
Let us use a geometric model that makes these, and more importantly some more complex, things quite visual. We will depict the axioms as small flat figures:
And the problem we are struggling with as a larger figure:
To solve the problem means to tile the figure-problem with the axiom-figures, as one lays parquet on the floor of a room (finding such a tiling reflects the construction of the proof, that is, the composition and order of inferences):
It is clear that if the problem is large enough, you cannot cope with it right away (again that same huge enumeration). Therefore we should first expand the stock of correct statements. Let us take on simpler problems (the corresponding figures are small):
We easily fill them with axioms:
Now, with these blocks in hand, we can return to the difficult problem. We see that it reduces to the ones already solved:
This is how lessons are built with a textbook or a good teacher: a specially selected series of increasingly complex problems gradually builds up the student's knowledge. But what is one to do in a new, unexplored area?
If there are any established facts there, everything starts from them. Here they are, the extracted facts, appearing as large figures:
We carefully study their structure and try to discover a hidden pattern, some general principle. We pick out similar contours and motifs -- we define for ourselves heuristics that will dramatically narrow the number of admissible hypotheses. Then we sift through the plausible options (partly subconsciously, even during sleep).
Finally, after much thought and many unsuccessful trials, we find -- eureka! -- that all the figure-facts can be represented as combinations of three figure-hypotheses. (Isn't that how the idea of three quarks was born, out of which the whole world of elementary particles is built?) The following figure demonstrates our discovery:
We experience that rare and memorable moment which is called insight.
It is clear that introducing figure-hypotheses is the path "from above" already familiar to us. The catch is that these figures themselves may turn out to be too large, too far from ordinary notions, to be expressed at once in the language of the well known. Often it is just a vague feeling: the author of the guess is already sure of its correctness but still cannot convince others. As Carl Gauss said, "I know my results, I just don't know how I am going to get to them."
And yet, despite the logical gap that has formed, the emergence of such unclear images is a key stage. It corresponds to the intuitive solution, to the posing of the new tasks that determine everything further: the formulation and justification of a hypothesis, and then its transformation into a theory. Each intuitive image -- a "castle in the clouds" -- must be anchored (by further subdivision) to the solid ground of axioms and theorems. Clearly, the development of intuition means finding large blocks, which arise from the movement of thought in breadth, when a special view is formed that simplifies the whole picture.
So we obtain two main stages of theory creation: first (intuitive) -- guessing a language of the highest possible level for describing the available facts, and then (logical) -- its strict justification.
HOW TO CALCULATE IDEAS
At one time Gottfried Leibniz put forward the program of a "universal characteristic" -- a language whose symbols would reflect their meaning, that is, their relations to other concepts: "its signs would be combined depending on the order and connection of things." All thinking, according to his idea, would be reduced simply to calculations in this language according to certain rules. So far this project has been realized only by half: deductive inference has been formalized (a computer can carry it out too), but the logic of invention, the logic of imagination -- not yet.
Perhaps combinatorial geometry, whose purpose is to find optimal combinations of element-shapes, will be useful here (our model belongs to it). The model reflects various situations well: for example, the existence of competing theories -- several systems of figures that fit the same set of facts; or the appearance of a fact that cannot be assembled from the known blocks. Then we have to build a new theory -- to break the habitual figures into parts and arrange them in a new way (performing, respectively, analysis and synthesis).
Besides the purely combinatorial difficulties, another obstacle here is that with long use each image comes to be perceived as an indivisible whole, which is tied to dogmatism in thinking and to bureaucracy in its diverse manifestations. As a rule, a fresh view is needed -- one that an "outsider" often possesses.
Of course, our geometric model is only an illustration of some modes of thinking, and it is too early to speak of a universal approach (first of all, we would need to understand how particular statements correspond to particular figures). And yet such a game to some extent clarifies what Leibniz may have meant when he wrote that there is a calculus more important than the calculations of arithmetic and geometry -- the calculus of ideas.
In the brain, the connections and relationships between images -- memory engrams (which we drew as figures) -- are probably created in some still unclear way, and the thought process itself comes down to rearranging this structure. At the same time a minimization is at work: after all, we are always looking for the shortest representation of the totality of facts; this used to be called the principle of economy of thought.
In general, the development of some new mathematics and logic is overdue. As the fathers of cybernetics and of general systems theory, John von Neumann and Ludwig von Bertalanffy, pointed out, "logic will have to undergo a metamorphosis and turn into neurology to a much greater extent than neurology into a section of logic," and "attempts have long been made to create a 'gestalt mathematics', which would be based not on quantity but on relations, that is, on form and order."
BRAIN AND COMPUTER
A computer can store any amount of information in its memory (even utterly meaningless information) and perform millions of operations on it per second. At first it was hoped that these advantages by themselves guarantee high intellectual potential, but it soon became clear that much knowledge does not necessarily harbor wisdom. After all, as we have seen, intelligence is the ability not to discard bad options but to find good ones, and that cannot be achieved by primitive enumeration.
A person will not memorize a large amount of unorganized information (a telephone directory, say), but the knowledge in his head is well structured and interconnected. It reflects the essential aspects of reality to the greatest possible extent: the sets of route "maps" are linked vertically and horizontally, and each concept is surrounded by its "associative aura" (Academician D. S. Likhachev). This wealth of connections makes it possible to extract only the relevant information and to construct the right solution from it.
A computer, too, must be endowed with knowledge about the world, with a model of the world. For this, a set of "scenarios" is now being introduced into it. A scenario is a general framework, a stereotype, which must be filled with specific content each time. Having recognized the situation, the machine finds the appropriate scenario and then asks questions, clarifying the missing details for itself.
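A minimal sketch of such a scenario mechanism (the "restaurant" scenario, its slots, and the generated questions are invented for illustration; none of it comes from the article): the framework lists the slots, and whatever the recognized situation leaves unfilled becomes a question.

```python
# Hypothetical "scenario": a stereotype with named slots, to be filled anew
# each time a concrete situation is recognized.
RESTAURANT_SCENARIO = {
    "place": None,
    "dish": None,
    "time": None,
    "number_of_guests": None,
}

def apply_scenario(scenario, observed_facts):
    """Fill the known slots from the situation; ask about the missing ones."""
    frame = dict(scenario)
    frame.update({k: v for k, v in observed_facts.items() if k in frame})
    questions = [f"What is the {slot.replace('_', ' ')}?"
                 for slot, value in frame.items() if value is None]
    return frame, questions

frame, questions = apply_scenario(
    RESTAURANT_SCENARIO, {"place": "cafe on the corner", "time": "7 pm"})
print(frame)      # the partially filled stereotype
print(questions)  # ['What is the dish?', 'What is the number of guests?']
```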
This is not easy, considering that a person's stock of such patterns is truly enormous -- the experience of all previous life is crystallized in them. We represent each phenomenon in many cross-sections and from many angles, and some things, spatial relationships for example, are acquired unconsciously in early childhood.
But the most important difference is that the brain operates directly with the capacious images that have arisen in it; that is, it does not need to descend to the simplest operations every time. Apparently, imaginative thinking is not separated from the memory in which these images are somehow imprinted, and as memory is restructured it reorganizes itself as well, adjusting to the newly created language and "processor".
This is very difficult to reproduce, above all because the physical principles of neural memory have not yet been uncovered. The analogy between optical holograms and memory engrams is now popular (distribution over the carrier, huge capacity, associativity). On this similarity attempts are being made to base thinking machines of an unusual, optoelectronic type, in which what is stored and processed would be not numerical codes of concepts but image-holograms.
Another direction is to create something like an analogue of a neural network from a large array of simple computers. Although each of them performs only a simple function, together they manipulate entire complexes of states. Again, the result is something resembling imaginative thinking.
One way or another, computers must learn, in the words of
another patriarch of cybernetics, Claude Shannon, "to perform
natural operations with images, concepts and vague analogies,
and not sequential operations with ten-digit numbers."
MACHINE AND HUMAN
The work of thought is guided by certain goal settings, by motivation. The goal itself becomes the apex image that directs the search for the means of achieving it. We have a need to receive new impressions (a sense of information hunger) and also to compress them, to take them in at a single glance. Probably such settings will have to be built into the machine to make it an active cognizer.
The day will come when intuitive thinking, tied to memory mechanisms as yet unknown, will also be implemented in electronic or some other circuitry. Gradually, artificial intelligence will begin to catch up with and then surpass its creator in solving various problems, playing chess, and the like.
And it will become increasingly obvious that the main difference lies not in the properties of thinking as such, but in the fact that a person is endowed with personal qualities, above all consciousness. "A man knows what he knows."
Will the machine be able to overcome this milestone? When it
learns to form new concepts itself, sooner or later it will come
to the concept of "computer". And then -- the mirror effect:
knowing what a mirror is and seeing its reflection in it, it will
come to understand its own "I".