Book Review
Philosophy of Computing and Information - 5 Questions,
Edited by Luciano Floridi
Gordana Dodig-Crnkovic, School of Innovation, Design and Engineering, Mälardalen
University, Sweden. http://www.idt.mdh.se/personal/gdc
Product Details
Paperback: 204 pages
Publisher: Automatic Press / VIP (July 1, 2008)
Language: English
ISBN-10: 8792130097
ISBN-13: 978-8792130099
CONTRIBUTORS: Margaret A. Boden / Valentino Braitenberg / Brian Cantwell-
Smith / Gregory Chaitin / Daniel C. Dennett / Keith Devlin / Fred Dretske /
Hubert L. Dreyfus / Luciano Floridi / Tony Hoare / John McCarthy / John R. Searle
/ Aaron Sloman / Patrick Suppes / Johan van Benthem / Terry Winograd /
Stephen Wolfram
“Computing and information, and their philosophy in the broad sense, play a
most important scientific, technological and conceptual role in our world. This
book collects together, for the first time, the views and experiences of some of
the visionary pioneers and most influential thinkers in such a fundamental area
of our intellectual development.” (Floridi)
This book is one of the pearls in the 5 Questions Series by Automatic Press / VIP,
which presents answers to five challenging questions by a number of leading
contemporary thinkers, in this case within the Philosophy of Computing and
Information.
The questions the editor, Luciano Floridi, asked are the following:
1. Why were you initially drawn to computational and/or informational issues?
2. What example(s) from your work (or the work of others) best illustrate(s) the
fruitful use of a computational and/or informational approach for foundational
research and/or applications?
3. What is the proper role of computer science and/or information science in
relation to other disciplines?
4. What do you consider the most neglected topics and/or contributions in late
20th century studies of computation and/or information?
5. What are the most important open problems concerning computation and/or
information and what are the prospects for progress?
Given the public interest in the eminent contributors, the answers to the question
about how they got interested in the field of Computing and Information are both
instructive and historically significant. They are highly personal and vivid
reminiscences of the pioneering times and are therefore hard to recapture in this
review; they simply have to be read the way they are told.
For the rest of the answers, I will give a short account for each of the
contributors, often using their own words as illustration.
As the editor points out in the introduction, the contributors had the freedom to
interpret the questions and to answer in whatever format they found suitable. This
resulted in very different individual styles of response, which also adds to the
charm of the book.
Margaret Boden gives us a detailed account of “how computational ideas
can clarify fundamental – philosophical and psychological – questions about the
nature of mind”, with a number of valuable pointers and references. Boden
declares: “My own view is that a naturalistic view must be possible, and that it is
likely to be grounded in evolution”. Boden rightly warns against the “regrettable
hostility” between different approaches in Cognitive Science (symbolic,
connectionist, situated, dynamical, and homeostatic), “because all of them
(and probably more) will be needed to emulate the rich space of possible minds”.
Valentino Braitenberg emphasizes the importance of complexity, “not only in
the brain but generally in living matter everywhere”, and the power of the concept
of information, which, “properly understood, is fully sufficient to do away with
popular dualistic schemes invoking spiritual substances distinct from anything in
physics”.
Brian Cantwell-Smith describes his own long journey through the study of
construals of computing, concluding that “in one way or other, computation involves
an interaction or interplay of meaning and mechanism.” When it comes to meaning
(semantics), Smith is not so much interested in the relation between a program
and the process that results from running it, but rather in the connection
between that process and the task domain that the process is about; in other
words, he is “interested in the semantics of the semantics of programs”. Finally,
Smith claims that ontology and epistemology must be reconstructed together, as
a new metaphysics: “as we can now see a comprehensive theory of
meaning/mechanism dialectic – involves nothing less than a full-fledged assault
on constructing an appropriate metaphysics”. For Smith, computers can help by
serving as “laboratories of middling complexity”, “in terms of which to explore
issues of intentionality, embodiment, and semantics.”
Gregory Chaitin relates information to his algorithmic complexity, to Leibniz’s
argument about the necessity for natural laws to be simple, and to knowledge as
information compression. An important contribution to the field is his
epistemology as information theory: “A scientific theory is only of value to the
extent that it’s a compression” and “Understanding is compression of
information”. Recently he has been applying his ideas about complexity to biology.
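To give the reader a concrete handle on the compression idea (a standard
formulation of algorithmic information theory, not a quotation from the book): the
algorithmic complexity of a string x is the length of the shortest program that
produces it on a fixed universal machine U,

    K_U(x) = min { |p| : U(p) = x },

so a theory is a good compression of a body of data exactly when such a program
is much shorter than the data it reproduces.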
Daniel C. Dennett gives examples from his work, especially his essay “Artificial
Intelligence as Philosophy and Psychology” (1978), as an illustration of the
possibility of demonstrating simplified working models of cognitive processes:
“Computer science keeps cognitive science honest”. As the most important
unsolved problem, Dennett selects the lack of a solid theory of semantic information.
Keith Devlin makes a distinction between information as a semantic concept and
its syntactic representation. His approach is based on Barwise and Perry’s
situation theory. This, of course, has a highly relevant application in the social
domain, in which “the goal is not ‘perfect understanding’ but better (i.e. deeper,
more precise, more illuminating, more useful) understanding.” Devlin concludes
that “we learned more about language by seeing the extent to which real language
both conforms and differs from Chomsky’s mathematical descriptions”.
Fred Dretske came to the topic through epistemology: information is of interest to
him as a building block of knowledge. Since knowledge is by definition true, on this
view its constituent parts must also be true. Dretske adds, however: “If, as I
(once again) suspect, contributors to this volume mean something else by term
“information” then our answers to the questions posed will not only be different,
they will be different because – and, perhaps, only because – they are
understood to be answers to quite different questions.” “The disagreements –
and there are sure to be many – might not run very deep once the merely verbal
differences are sorted out.”
Hubert L. Dreyfus was invited in 1963 to evaluate Allen Newell and Herbert
Simon’s work on Cognitive Simulation. As a philosopher, he readily recognized
that AI scientists were in practice turning rationalist philosophy [Hobbes,
Descartes, Leibniz, Kant, Russell] into the research program of GOFAI (Good
Old-Fashioned AI): an intelligent (expert) system equipped with a set of true
statements and logical rules was supposed to capture the facts of the real world.
Dreyfus’ conclusion was that “the deep problem wasn’t storing millions of facts; it
was knowing which facts were relevant in any given situation”. In the alternative,
Heideggerian/Merleau-Pontian approach to AI suggested by Freeman, which would
solve the problem of relevance and was ontologically sound in a way GOFAI was
not, “a neurodynamic computer model would have to be given a detailed
description of a body and motivations as ours if things were to count as significant
for it so that it could learn to act intelligently in our world.”
Luciano Floridi describes his search for an “epistemology without knowing object”
and the methodological minimalism obtained in epistemology by adopting a
more fundamental level of abstraction: information instead of traditional
knowledge. Floridi characterizes computer and information sciences as “epistemic
enablers”. He claims that Philosophy of Information is becoming our Philosophia
Prima, also “because computational and informational ideas and artifacts are
today so essential for our scientific development”.
“One of the most neglected topics in late twentieth century studies of
computation and information is a philosophy of nature in the widest sense of the
word (that is in the German sense of Naturphilosophie as this was used by
Schelling and Hegel).” – we can only agree.
Tony Hoare focuses his account on the history and the future of the effort to
make software error-free. He ends with the following optimistic vision: “let me
look forward to the day when programming error is a problem from the past;
when computer programmers make fewer mistakes than engineers in any other
profession” – we all look forward to that day. In this context it would be
interesting and highly relevant to learn more about Hoare’s study of Process
Algebra, a mathematical formalism developed to describe systems in continuous
interaction with their environment, which is increasingly important in new
paradigms of computing.
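As a minimal illustration of the kind of description such a formalism affords (my own
example in CSP-like notation, not taken from the book), a hypothetical vending
machine that repeatedly interacts with its environment can be specified as

    VM = coin → (tea → VM □ coffee → VM),

that is, after receiving a coin the machine offers its environment a choice between
dispensing tea or coffee and then returns to its initial state; what the formalism
describes is the ongoing interaction, not a final computed result.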
John McCarthy advises that “Philosophers need to adopt some of the practices
of AI and first study simple variants of phenomena like action, knowledge, belief
and context rather than only looking for the most general definitions.” – a view he
shares with many of the other contributors to the book.
John R. Searle concludes that “The most neglected topics in studies of
computation and information that are psychologically real, that is, that are
relevant to Cognitive Science, have to do with the question of how the brain
actually works as a physical biological system.” Searle insightfully welcomes the
move from computational Cognitive Science to Cognitive Neuroscience.
Aaron Sloman represents the design stance (i.e. “learning how to produce
explanations of working systems (e.g. minds), in particular explanations that are
capable of being tested in working implementations”) by constructing machines
with cognitive capacities. His conclusion is that informational architecture is
central to the understanding of cognition: “I began to think about integrated
information-processing architectures combining many different sorts of
components, and that eventually led me to the design-based analysis of many
other aspects of human minds and animal minds, constantly driven by the
question: what sort of machine could do that?” For the future: “Understanding the
variety of types of virtual machines and the variety of ways in which virtual
machines can be implemented or realized in physical machines or other virtual
machines, will, I suspect, provide much matter for philosophical analysis in
future years.”
Patrick Suppes points out that brain computations at the system level are
electromagnetic, while at the cell level they are chemical; they are probabilistic
and deeply parallel in structure. On the question of the continuous vs. discrete
character of computational mechanisms, Suppes interestingly refers to
Kant’s Second Antinomy, the thesis that the whole consists of indivisible atoms
set against the antithesis that no such atoms exist; on the question of the
relationship between free will and causally bound mechanism, he refers to Kant’s
Third Antinomy, addressing the problem of freedom in relation to universal
causality. As the most important open problem Suppes chooses “the fundamental
nature of space and time”, with an interesting remark: “But there is still a
reluctance to develop what seems to be a natural isomorphism between discrete
space-time and continuous space-time.” Another open question concerns Systems
Neuroscience: how large collections of synchronized neurons compute, with all the
relevant physics and chemistry.
Johan van Benthem describes John Barwise and John Perry’s “situation
semantics” as “a radical alternative to the ancient regime in philosophical and
mathematical logic. On their view (...) logic should study the information available
in rich distributed environments (with both physical and human components),
and the resulting information flow.” “Statics and dynamics come together in
modern logics of what may be called intelligent interaction – and this is no
coincidence. Logic and information should take the systematic Tandem View that
information can not be understood in isolation from the process which conveys
and transforms it. No information without transformation!” [i] Van Benthem also
rightly notes the cohesive force of the concept of information: “interest in
information and computation as themes cutting through old boundaries between
the humanities, social, and natural sciences”. He also emphasizes the interplay
between statics and dynamics, information and process. [ii]
Computer Science, or rather Informatics, in this context provides tools for the
representation of data together with methods of computation over them. Unlike
Turing Machines, which are sequential models of computation, more general
formulations, such as process algebras, provide explicit representations of
concurrency and communication.
Van Benthem concludes: “Taking biological and psychological facts seriously is
not uncontroversial in logical circles, but ‘Information, Computation and
Cognition’ may be the way to go.”
Terry Winograd describes his own fundamental work as a “critical re-examination
of the relationship between symbolic processing and the communicative workings
of ordinary human language.” Winograd characterizes computer science as a mix
of disciplines, unlike the classical sciences, but with new and promising
possibilities, both as models and as tools, opening “a new world of examples and
Gedanken experiments”: “we can see 21st century ascendance of the biological
sciences as a product of being able to deal with extreme complexity in a rigorous
computational way.” As the biggest open problem Winograd mentions “the
relationship between computation, of the kind we understand from digital
computers, and the informational activities of the human brain/mind.” “The
‘decoding of thought’ is a far-off but intriguing goal”, Winograd concludes.
Stephen Wolfram has spent some twenty-five years applying computational ideas
to fundamental science. “The single most fruitful concept has been exploring the
computational universe of possible programs.”
He explains: “Learning about computational universe also informs many old
foundational questions in science and elsewhere. It shows us, at a basic level,
why complexity is so easy for nature to produce, and so widespread. It shows us
that there are fundamental computational limitations to traditional mathematical
science. It gives us insight into how similar phenomena like intelligence are to
natural processes. It shows us how special – and in many ways arbitrary – the
formal systems like mathematics that we have built are.”
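To give a flavour of what exploring this computational universe can look like in
practice, here is a small sketch of my own (not taken from the book): it runs a few
elementary cellular automaton rules, among the simplest possible programs, and
prints the patterns they generate from a single black cell.

def step(cells, rule):
    """Apply an elementary cellular automaton rule (0-255) to one row of cells,
    with wrap-around at the edges."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule, width=31, steps=15):
    """Start from a single black cell and return the space-time pattern as text."""
    row = [0] * width
    row[width // 2] = 1
    picture = []
    for _ in range(steps):
        picture.append("".join("#" if c else "." for c in row))
        row = step(row, rule)
    return "\n".join(picture)

for rule in (250, 90, 30):   # three of the 256 simplest possible programs
    print("Rule", rule)
    print(run(rule))
    print()

Rule 250 yields a simple uniform pattern, Rule 90 a nested fractal, and Rule 30 an
apparently random texture: three qualitatively different behaviours from equally tiny
programs, which is precisely the point Wolfram is making.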
Finally, Wolfram concludes: “Of one thing I am certain: in the computational
universe there is a huge amount that we can mine for human purposes – for
creating technology, or art or other things that we as humans use.”
The above account gives just a few glimpses of this book, which abounds in new
ideas and insights. In sum: this is highly recommendable and truly enjoyable
reading.
[i] Compare to “No information without computation”, as found in the introduction to: Dodig-Crnkovic G. and
Stuart S., eds., Computation, Information, Cognition – The Nexus and The Liminal, Cambridge Scholars
Publishing, Cambridge, 2007.
[ii] In my research in the field I came to the same conclusions. See Gordana Dodig-Crnkovic,
Investigations into Information Semantics and Ethics of Computing,
http://www.diva-portal.org/mdh/theses/abstract.xsql?dbid=153, Mälardalen University Press, September 2006.