Model Validity and Semantics of Information
Gordana Dodig-Crnkovic
Department of Computer Science and Engineering
Mälardalen University, Västerås, Sweden
gordana.dodig-crnkovic@mdh.se
Abstract
Do predictions obtained from models constitute information on which reliable decisions can be made? Must
predictions and other information generated by models be true in order to be of interest? This
paper investigates the relation between model and reality, information and truth. It will argue that
meaningful data need not necessarily be true in order to constitute information. Partially true information
or even completely false information can lead to a desirable outcome such as a technological innovation or
a scientific breakthrough. Sometimes sheer serendipity gives rise to an invention. A combination of true
and false information may result in an epoch-making event such as Columbus’ discovery of America, on his
intended voyage to India. An even more basic problem prevents scientists from thinking exclusively in
terms of “true” information in the research process. In beginning from an existing theory (say Aristotelian
physics), and developing a new theory (say Galilean physics) one can talk about the truth within each
model, but during the transition between the two, there is a mixture of old and new concepts in which truth
is not well defined. Instead of the veridicity of a model, the two basic concepts commonly used in
empirical sciences are a model's correctness (validity) and its appropriateness within a context.
The conclusion is that although empirical models are in general not true but only truthlike, they may
nevertheless produce results from which adequate conclusions can be drawn, and can therefore serve as
the grounds for decision-making. In that sense they can yield information vital for improving our
knowledge about the actual empirical world that is the precondition for technological innovation and
scientific discovery.
1 Introduction
The ubiquity of computers and the constantly increasing availability of computer power
accelerate the use of computer-based representations, simulations and emulations,
modeling and model-based reasoning and contribute to their dynamic development, see
Denning and Metcalfe (1997). Administration, sciences, technology and businesses all
rely on models of systems which they use to describe, understand, predict and control.
This paper focuses on the relation between models, information, truth and reality.
2 System Modeling and Simulation: Validation and Verification
A model is a simplified representation of a complex system or process developed for its
understanding, control and prediction. A model resembles the target system in some
aspects while at the same time it differs in other aspects that are not considered essential,
Johansson (1999). It follows that a model valid for one objective may not be
valid for another. Models are abstracted or constructed on the grounds that they
potentially satisfy important constraints of the target domain.
Model-based reasoning supports conceptual change and facilitates novel insights as
clearly demonstrated in Magnani, Nersessian and Thagard (1999).
When discussing models, two concepts are central: verification and validation, see for
details Irobi, Andersson, and Wall (2004), and Davis (1992).
Model verification is the substantiation that the model is transformed from a problem
formulation into a model specification as intended, with sufficient accuracy. Model
verification deals with building the model right.
Model validation is the substantiation that the model, within its domain of applicability,
behaves with satisfactory accuracy consistent with the objectives. Model validation deals
with building the right model.
Consequently, in using the term ‘valid’, we refer to models that adequately represent their
target systems in their domains of applicability. The issue central to an appropriate
assessment of model validity is that of correctness, not necessarily of truth.
Determining whether or not a model is an appropriate representation of the reality, for a
well specified goal, is the essence of model validation, but there are other significant
factors to be considered such as the relevance of the goal itself, Dodig-Crnkovic (2003).
Experimentation is the backbone of scientific thinking and the sine qua non technique of
Francis Bacon’s scientific method, as presented in his Novum Organum. Conducting
empirical tests allows us to go beyond the limits of Aristotelian logic in our investigation
of the physical reality.
A special case of the use of a model is simulation: time-dependent, goal-directed
experimentation with a dynamic model. When actual experimentation cannot be
performed on the real system, it can be replaced by simulation. Simulation can be used in
addressing analysis, control, and design problems, Wildberger (2000). Simulation is a
tool which facilitates the gaining of insight, the testing of theories, experimentation with
control strategies, and prediction of performance. In the concept of simulation as a
model-based computational activity, the emphasis is on the generation of model
behaviour. Simulation can be interpreted as model-based experimental knowledge
generation, Ören (2001), and can be combined with different types of knowledge
generation techniques such as optimization, statistical inferencing, reasoning and
hypothesis processing.
A simulation depends essentially on the quality of the input data with respect to
correctness, reliability, sufficiency, relevance etc. It is the actual data representation of
the information at hand which makes possible an analysis of the effects of changes in the
underlying process based on changes in the model.
Two questions are of interest: to what degree can simulation results be trusted,
and can simulation be said to generate information at all? The former can be answered in
a pragmatic way, by asking what would be the alternative. In the case of weather
forecasting, for example, we know that the reliability of the prediction is not extremely
high, but it is improving, and it should be compared to a pure guess which is obviously a
less successful prediction method. The output of a model for producing weather forecasts
may be seen as information that is probable but not certain (true), yet necessary and
useful.
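As a minimal sketch of this pragmatic comparison (all forecast probabilities and outcomes below are invented for illustration), one can score a probabilistic forecast against pure guessing with the Brier score, where lower means better predictions:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and
    binary outcomes (1 = event occurred, 0 = it did not)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

# Hypothetical week of rain forecasts and the observed outcomes.
model_forecasts = [0.8, 0.1, 0.7, 0.9, 0.2, 0.6, 0.3]
outcomes        = [1,   0,   1,   1,   0,   0,   0]

# The "pure guess" baseline assigns 50% probability to rain every day.
guess_forecasts = [0.5] * len(outcomes)

print(brier_score(model_forecasts, outcomes))  # ~0.09: probable, not certain
print(brier_score(guess_forecasts, outcomes))  # 0.25: clearly worse
```

Even though no single forecast is certain (true), the model's output systematically outperforms the guess, which is the pragmatic sense in which it constitutes useful information.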
3 Information Theories
Data is generally considered to be a series of disconnected facts and observations. These may be converted
to information by analyzing, cross-referring, selecting, sorting, summarizing, or in some way organizing
the data. Patterns of information, in turn, can be worked up into a coherent body of knowledge. Knowledge
consists of an organized body of information, such information patterns forming the basis of the kinds of
insights and judgments which we call wisdom.
The above conceptualization may be made concrete by a physical analogy (Stonier, 1983): consider
spinning fleece into yarn, and then weaving yarn into cloth. The fleece can be considered analogous to
data, the yarn to information and the cloth to knowledge. Cutting and sewing the cloth into a useful
garment is analogous to creating insight and judgment (wisdom). This analogy emphasizes two important
points: (1) going from fleece to garment involves, at each step, an input of work, and (2) at each step, this
input of work leads to an increase in organization, thereby producing a hierarchy of organization.
Stonier (1997)
In his Open Problems in the Philosophy of Information Floridi (2004) suggests a list of
the eighteen most important problems of PI (Philosophy of Information). Among those,
the most fundamental is the question: “What is information?”.
“Inconsistencies and paradoxes in the conceptualization of information can be found
through numerous fields of natural, social and computer science.” Marijuan (2002)
Or, as Floridi (2005) formulates it, “Information is such a powerful and elusive concept
that it can be associated with several explanations, depending on the requirements and
intentions.”; see also van Benthem and Adriaans (2005). In the same spirit, Capurro and
Hjørland (2003) analyze the term information explaining its role as a constructive tool
and its theory-dependence as a typical interdisciplinary concept.
On the other hand, Capurro, Fleissner and Hofkirchner (1999) discuss the question of whether a
unified theory of information (UTI) is feasible, answering in a cautiously affirmative
way. According to the authors, UTI is an expression of the metaphysical quest for a
unifying principle of the same type as energy and matter.
In the reductionist unification approach, reality is an information-processing
phenomenon. “We would then say: whatever exists can be digitalized. Being is
computation.” (ibid) In other words, at a fundamental level information characterizes the
world itself, for it is through information we gain all our knowledge, and yet we are only
beginning to understand its real meaning. If information is to replace matter as the
primary stuff of the universe, as von Baeyer (2003) suggests, it will provide a new basic
unifying framework for describing and predicting reality in the twenty-first century.
An alternative to a unified theory of information would be the networked structure of
different information concepts, which retain their specific fields of application.
It is interesting to observe that information can be understood in conjunction with its
complementary concept of computation. Cantwell Smith finds the relation between
meaning and mechanism the most fundamental question, Dodig-Crnkovic (2004).
Having said that about the current views of the phenomenon of information, it might be
interesting to briefly review the existing theories of information, following Collier's
account of the subject.
3.1 Syntactic Theories of Information
In the syntactic approaches, information content is determined entirely by the structure of
language and has nothing to do with the meaning of messages.
Statistical (Shannon’s communications theory)
Shannon’s theory gives the probability of transmission of messages with specified
accuracy in the presence of noise, including transmission failure, distortion and
accidental additions. The statistical interpretation of information assumes an ensemble of
possible states each with a definite probability. The information is the sum of the base 2
log of the inverse of the probability of each weighted by the probability of the state,
H = Σi prob(si) log2(1/prob(si))
which is an expression similar to the expression for entropy in Boltzmann's statistical
thermodynamics.
Combinatorial information theory is general and has the same form as the statistical
formulation. The difference is that probability is replaced by frequency,
H = Σi freq(si) log2(1/freq(si))
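As an illustrative sketch (not from the paper), both formulas can be computed directly; the only difference is whether the values come from an assumed distribution or from observed frequencies:

```python
import math
from collections import Counter

def entropy(probs):
    """H = sum over i of p(si) * log2(1 / p(si)), in bits."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# Statistical form: an assumed probability distribution over four states.
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits

# Combinatorial form: probabilities replaced by observed frequencies.
message = "abracadabra"
freqs = [n / len(message) for n in Counter(message).values()]
print(entropy(freqs))  # ~2.04 bits per symbol
```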
Algorithmic information theory (Kolmogorov, Chaitin) combines the ideas of program-
size complexity with recursive function theory. The complexity of an object is measured
by the size in bits of the smallest program for computing it.
It was Kolmogorov who first suggested that program-size complexity provides an
explication of the concept of information content of a string of symbols. Later Chaitin
adopted this interpretation.
The intuitive idea behind this theory is that the more difficult an object is to specify or
describe, the more complex it is. One defines the complexity of a binary string s as the
size of the minimal program that, when given to a Turing machine T, prints s and halts.
To formalize Kolmogorov-Chaitin complexity, one has to specify exactly the types of
programs. Fortunately, it doesn't really matter: one could take a particular notation for
Turing machines, or LISP programs, or Pascal programs, etc.
If we agree to measure the lengths of all objects consistently in bits, then the resulting
notions of complexity will differ only by a constant term: if K1(s) and K2(s) are the
complexities of the string s according to two different programming languages L1 and L2,
then there is a constant c (which depends only on the languages chosen, but not on s) such
that
K1(s) ≤ K2(s) + c
Here, c is the length in bits of an interpreter for L2 written in L1. For more details see
http://en.wikipedia.org/wiki/Information_theory
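K(s) itself is uncomputable, but a general-purpose compressor gives an upper bound on it, which suffices to illustrate the intuition. A minimal sketch, using zlib as an arbitrarily chosen "description language" (my own illustration, not part of the formal theory):

```python
import random
import zlib

def complexity_upper_bound_bits(s: str) -> int:
    """Length in bits of a zlib-compressed encoding of s: an upper
    bound on program-size complexity, since K(s) is uncomputable."""
    return 8 * len(zlib.compress(s.encode("utf-8"), 9))

regular = "ab" * 500  # highly regular: a very short description suffices

random.seed(0)
noise = "".join(random.choice("ab") for _ in range(1000))  # no pattern to exploit

print(complexity_upper_bound_bits(regular))  # small
print(complexity_upper_bound_bits(noise))    # several times larger
```

Swapping in a different compressor changes the numbers, loosely mirroring the invariance result above: the harder a string is to describe, the larger any such bound stays.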
An interesting critical analysis of this approach may be found in Raatikainen’s
Complexity and Information; the main argument being that it is one thing to specify (by
an algorithm) an object, and another thing to give instructions sufficient for finding the
object. This points back to the fact that the information concept itself is under intense
debate.
3.2 Semantic Theories of Information
Although Shannon declared that “semantic aspects of communication are irrelevant to the
engineering problem", Shannon (1948), his approach is often termed a Mathematical
Theory of Information and treated as describing the semantic information content of a
message. Bar-Hillel (1955) notes, “it is psychologically almost impossible not to make
the shift from the one sense of information, i.e. information = signal sequence, to the
other sense, information = what is expressed by the signal sequence."
The semantic theory of information explicitly theorizes about what is expressed by
messages, i.e. about their information content. As a systematic theory it was initiated by
Carnap and Bar-Hillel and has been developed and generalized since then by Hintikka.
Information in the semantic approach is the content of a representation.
Carnap and Bar-Hillel (Bar-Hillel, 1964) used inductive logic to define the information
content of a statement in a given language in terms of the possible states it rules out. The
basic idea is that the more possibilities (possible states of affairs) a sentence rules out, the
more informative it is, i.e. information is the elimination of uncertainty. The information
content of a statement is thus relative to a language. Evidence, in the form of observation
statements, (Carnap's “state descriptions", or Hintikka's “constituents") contains
information through the class of state descriptions the evidence rules out. (The essential
underlying assumption is that observation statements can be related to experience
unambiguously.)
Carnap and Bar-Hillel have suggested two different measures of information. The first
measure of the information content of statement S is called the content measure, cont(S),
defined as the complement of the a priori probability of the state of affairs expressed by S
cont(S) = 1 − prob(S)
Content measure is not additive and it violates some natural intuitions about conditional
information. Another measure, called the information measure, inf(S) in bits is given by:
inf(S) = log2(1/(1 − cont(S))) = −log2 prob(S)
prob(S) here again is the probability of the state of affairs expressed by S, not the
probability of ‘S’ in some communication channel. According to Bar-Hillel, cont(S)
measures the substantive information content of sentence S, whereas inf(S) measures the
surprise value, or the unexpectedness, of the sentence S.
Although inf satisfies additivity and conditionalisation, it has the following property: If
some evidence E is negatively relevant to a statement S, then the information measure of
S conditional on E will be greater than the absolute information measure of S. This
violates a common intuition that the information of S given E must be less than or equal
to the absolute information of S. This is what Floridi (2004) calls the Bar-Hillel semantic
paradox.
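A small numeric sketch (probabilities invented for illustration) shows both measures and the counter-intuitive property just described:

```python
import math

def cont(prob_s):
    """Content measure: cont(S) = 1 - prob(S)."""
    return 1 - prob_s

def inf(prob_s):
    """Information measure in bits: inf(S) = -log2 prob(S)."""
    return -math.log2(prob_s)

prob_s = 0.5           # a priori probability of the state of affairs S expresses
prob_s_given_e = 0.25  # after evidence E that is negatively relevant to S

print(cont(prob_s))         # 0.5
print(inf(prob_s))          # 1.0 bit
print(inf(prob_s_given_e))  # 2.0 bits: inf(S|E) > inf(S), the property
                            # behind the Bar-Hillel semantic paradox
```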
A more serious problem with the approach, however, is the linguistic relativity of
information, and problems with the Logical Empiricist program that supports it, such as
the theory-ladenness of observation, Collier (1990).
For recent semantic theories such as Dretske (1981), Barwise and Perry (1983), Devlin
(1991), see Collier, http://www.nu.ac.za/undphil/collier/information/information.html.
4 The Standard Definition of Information vs. Strongly Semantic Information
In his Outline of a Theory of Strongly Semantic Information as well as in Information
(The Blackwell guide to the philosophy of computing and information) Floridi (2004)
discusses the question of the fundamental nature of information. A standard definition of
information, which is assumed to be declarative, objective and semantic (DOS), is given in
terms of data + meaning. In this context Floridi refers to The Cambridge Dictionary of
Philosophy definition of information:
an objective (mind independent) entity. It can be generated or carried by messages (words, sentences) or by other
products of cognizers (interpreters). Information can be encoded and transmitted, but the information would exist
independently of its encoding or transmission.
It is instructive to compare the above formulation with the Web Dictionary of
Cybernetics and Systems, http://pespmc1.vub.ac.be/ASC/INFORMATION.html that
offers the following definition of information:
that which reduces uncertainty. (Claude Shannon); that which changes us. (Gregory Bateson)
Literally that which forms within, but more adequately: the equivalent of or the capacity of something to perform
organizational work, the difference between two forms of organization or between two states of uncertainty before
and after a message has been received, but also the degree to which one variable of a system depends on or is
constrained by (see constraint) another. E.g., the DNA carries genetic information inasmuch as it organizes or
controls the orderly growth of a living organism. A message carries information inasmuch as it conveys something
not already known. The answer to a question carries information to the extent it reduces the questioner's uncertainty.
A telephone line carries information only when the signals sent correlate with those received. Since information is
linked to certain changes, differences or dependencies, it is desirable to refer to them and distinguish between
information stored, information carried, information transmitted, information required, etc. Pure and unqualified
information is an unwarranted abstraction.
In the background there is the most fundamental notion of information, ascribed to a
number of authors: “a distinction that makes a difference”, MacKay (1969), or “a
difference that makes a difference”, Bateson (1973).
Floridi’s Outline of a Theory of Strongly Semantic Information (2004) contributes to the
current debate by criticizing and revising the Standard Definition of declarative, objective
and semantic Information (SDI). The main thesis defended is that meaningful and well-
formed data constitute information only if they also qualify as contingently truthful. SDI
is criticized for providing insufficient conditions for the definition of information,
because truth-values do not supervene on information. Floridi argues strongly against
misinformation as a possible source of information or knowledge. As a remedy, SDI is
revised to include a truth-condition.
Accordingly, SDI is modified to include a condition about the truth of the data, so that
“σ is an instance of DOS information if and only if:
1. σ consists of n data (d), for n ≥ 1;
2. the data are well-formed (wfd);
3. the wfd are meaningful (mwfd = δ);
4. the δ are truthful.”
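Rendered as a toy predicate (my own illustrative formalization, not Floridi's), the revised definition conjoins the four conditions, so that meaningful but false data fail to qualify:

```python
from dataclasses import dataclass

@dataclass
class Sigma:
    data: list          # the n data (d); the definition requires n >= 1
    well_formed: bool   # condition 2: the data are wfd
    meaningful: bool    # condition 3: the wfd are meaningful (delta)
    truthful: bool      # condition 4: the truth condition added to SDI

def is_dos_information(sigma: Sigma) -> bool:
    return (len(sigma.data) >= 1 and sigma.well_formed
            and sigma.meaningful and sigma.truthful)

# Meaningful, well-formed, but false: information on the standard
# definition, not on Floridi's revised, truth-encapsulating one.
print(is_dos_information(Sigma(["the earth is flat"], True, True, False)))  # False
```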
Floridi’s concept of strongly semantic information from the outset encapsulates truth and
thus can avoid the Bar-Hillel paradox mentioned in the previous section.
It is important to remember that Floridi analyses only one specific type of information,
namely the alethic (pertaining to truth and falsehood) declarative objective and semantic
information which is supposed to have definite truth value. Non-declarative meanings of
“information”, e.g. referring to graphics, music or information processing taking place in
a biological cell or a DNA molecule, such as defined in Marijuán (2004) are not
considered.
Apparently there is a dilemma here: we are supposed to choose between the two
definitions of information, the weaker one that accepts meaningful data as information,
and the stronger one that claims that information must be true in order to qualify as
information. Yet, both approaches will prove to have legitimacy under specific
circumstances, and I will try to illuminate why the general definition of information does
not explicitly require truth from the data.
5 Information, Truth and Truthlikeness
...by natural selection our mind has adapted itself to the conditions of the external world. It has adopted the
geometry most advantageous to the species or, in other words, the most convenient. Geometry is not true, it
is advantageous.
Henri Poincaré, Science and Method
Science is accepted as one of the principal sources of “truth” about the world we inhabit.
It might be instructive to see the view of truth from the scientific perspective, Dodig-
Crnkovic (2005). When do we expect to be able to label some information as “true”? Is it
possible for a theory, a model or a simulation to be “true”? When do we use the concept
of truth and why is it important?
Popper was the first prominent realist philosopher and scientist to proclaim a radical
fallibilism about science (fallibilism claims that some parts of accepted knowledge could
be wrong or flawed), while at the same time insisting on the epistemic superiority of the
scientific method. Not surprisingly, Popper was the first philosopher to abandon the idea
that science is about truth and take the problem of truthlikeness seriously. In his Logik
der Forschung Popper argues that the only kind of progress an inquiry can make consists
in falsification of theories.
Now how can a succession of falsehoods constitute epistemic progress? Epistemic
optimism would mean that if some false hypotheses are closer to the truth than others, if
truthlikeness (verisimilitude) admits of degrees, then the history of inquiry may turn out
to be one of steady progress towards the goal of truth. Oddie (2001)
While truth is the aim of inquiry, some falsehoods seem to realize this aim better than others. Some truths better
realize the aim than other truths. And perhaps even some falsehoods realize the aim better than some truths do. The
dichotomy of the class of propositions into truths and falsehoods should thus be supplemented with a more fine-
grained ordering -- one which classifies propositions according to their closeness to the truth, their degree of
truthlikeness or verisimilitude. The problem of truthlikeness is to give an adequate account of the concept and to
explore its logical properties and its applications to epistemology and methodology.
On those lines, Kuipers (2000) developed a synthesis of a qualitative, structuralist theory
of truth approximation:
In this theory, three concepts and two intuitions play a crucial role. The concepts are confirmation, empirical
progress, and (more) truthlikeness. The first intuition, the success intuition, amounts to the claim that empirical
progress is, as a rule, functional for truth approximation, that is, an empirically more successful theory is, as a rule,
more truthlike or closer to the truth, and vice versa. The second intuition, the I&C (idealization and concretization)
intuition, is a kind of specification of the first.
According to Kuipers the truth approximation is a two-sided affair amounting to
achieving 'more true consequences and more correct models', which obviously belongs to
scientific common sense.
The conclusion from the scientific methodology point of view is that, at best, we can
discuss truthlikeness, but not the truth of a theory. Like Poincaré’s geometry, other
models or theories are more or less correct and advantageous.
6 Conclusion
There are two major approaches to the individuation of scientific theories, that have been called syntactic
and semantic. We prefer to call them the linguistic and non-linguistic conceptions. On the linguistic view,
also known as the received view, theories are identified with (pieces of) languages. On the non-linguistic
view, theories are identified with extralinguistic structures, known as models. We would like to distinguish
between strong and weak formulations of each approach. On the strong version of the linguistic approach,
theories are identified with certain formal-syntactic calculi, whereas on a weaker reading, theories are
merely analysed as collections of claims or propositions. Correspondingly, the strong semantic approach
identifies theories with families of models, whereas on a weaker reading the semantic conception merely
shifts analytical focus, and the burden of representation, from language to models.
Hendry and Psillos (2004)
Here we can refer to Laudan’s Methodological Naturalism (Laudan, p. 110), in Psillos’s
(1997) formulation:
- All normative claims are instrumental: Methodological rules link up aims with methods which will
bring them about, and recommend what action is more likely to achieve one’s favoured aim.
- The soundness of methodological rules depends on whether they lead to successful action, and their
justification is a function of their effectiveness in bringing about their aims. A sound methodological
rule represents our ´best strategy´ for reaching a certain aim (cf. pp 103 and 128 ff)
In the actual process of discovery and in model building, information is the fundamental
entity. Very often information is transformed and it changes its place and physical form.
Depending on context, it also changes its meaning. When dealing with empirical
information we always meet the fact that the real world never perfectly conforms to the
ideal abstract structure (Plato’s stance). Ideal atoms might be represented by ideal
spheres. Real atoms have no sharp boundaries. In the physical world of technological
artefacts and empirical scientific research, situations in which the result of a model can be
sharply divided into two categories (true/false) are rare. However, it is often possible to
set conventional limits for different outcomes that we can label as
“acceptable”/“non-acceptable”, which can be translated into “true”/“false” if we
agree to use the term truth in a very specific sense.
There are cases in the history of science in which false information/knowledge (false for
us here and now) has led to the production of true information/knowledge (true for us
here and now). A classical example is serendipity, making unexpected discoveries by
accident. The pre-condition for the discovery of new scientific ‘truths’ (where the term
‘true’ is used in its limited sense to mean ‘true to our best knowledge‘) is not that we start
with a critical mass of absolutely true information, but that in continuous interaction
(feedback coupling) with the empirical world we refine our set of (partial) truths. With
good reason, truth is not an operative term for scientists.
Christopher Columbus had, for the most part, incorrect information about his proposed
journey to India. He never saw India, but he made a great discovery. The “discovery” of
America was not accidental; it was the result of a combination of many favourable
historical preconditions together with both true and false information about the state of
affairs. Similar discoveries are constant occurrences in science.
“Yet libraries are full of ‘false knowledge’ ”, as Floridi rightly points out in his
Afterword - LIS as Applied Philosophy of Information: a Reappraisal (2004).
Nevertheless we need all that “false knowledge”. Should we throw away all books
containing false information, and all newspapers containing misinformation, what would
be left? And what would our information and knowledge about the real world look like?
In the standard (general) definition of semantic information commonly used in the empirical
sciences, information is defined as meaningful data. Floridi in his new Theory of Strongly
Semantic Information adds the requirement that standard semantic information should
also contain truth in order to avoid the logical paradox of Bar-Hillel’s semantic theory.
This paper argues that meaningful data need not necessarily be true to constitute
information. Partially true information or even completely false information can lead to
an outcome adequate and relevant for inquiry. Instead of insisting on the veridicity of an
empirical model, we should focus on such basic criteria as the validity of the model and
its appropriateness within a given context.
References
von Baeyer H.C., 2003, Information: The New Language of Science, Weidenfeld and Nicolson
Barwise, Jon and John Perry, 1983 Situations and Attitudes. MIT Press, Cambridge, MA
Bateson, G., 1973, Steps to an Ecology of Mind, Paladin. Frogmore, St. Albans
van Benthem J., Adriaans P., eds., 2005, Handbook on the Philosophy of Information, (In Gabbay D., Thagard P., and
Woods J., Handbook of the philosophy of science, to be published by Elsevier
Carnap, R., 1950, Logical Foundations of Probability. University of Chicago Press, Chicago
Carnap, R., Bar-Hillel, Y., 1964, An Outline of Semantic Information. In Bar-Hillel, Y. (Ed.) Language and
Information: Selected Essays on Their Theory and Application. Addison-Wesley, Reading, Mass.
Capurro R., Hjørland B., 2003, The Concept of Information In: Annual Review of Information Science and Technology
(ARIST), Ed. Blaise Cronin, Information Today, Inc. Medford, NJ
Capurro R., Fleissner P., Hofkirchner W., 1999, Is a Unified Theory of Information Feasible? A Trialogue In: W.
Hofkirchner Ed.: The Quest for a Unified Theory of Information. Proceedings of the Second International
Conference on the Foundations of Information Science, Gordon and Breach Publ.
Chaitin, Lecture, Dijon, 2003, http://www.cs.auckland.ac.nz/CDMTCS/chaitin
Collier, John D., 1990, Intrinsic Information In Philip Hanson (ed) Information, Language and Cognition: Vancouver
Studies in Cognitive Science, Vol. 1, University of Oxford Press
Devlin, K., 1991, Logic and Information, Cambridge University Press, Cambridge
Dodig-Crnkovic, G., 2005, System Modeling and Information Semantics, Proc. Conf. for the Promotion of Research in
IT at New Universities and at University Colleges in Sweden, Studentlitteratur, Lund (forthcoming)
Dodig-Crnkovic, G., 2004, Philosophy of Information, a New Renaissance and the Discreet Charm of the
Computational Paradigm, Computing and Philosophy Conference, E-CAP Pavia, Italy
Dodig-Crnkovic, G., 2003, Shifting the Paradigm of the Philosophy of Science: the Philosophy of Information and a
New Renaissance, Minds and Machines: Special Issue on the Philosophy of Information, Volume 13, Issue 4
Davis, P.K., 1992, Generalizing Concepts and Methods of Verification, Validation, and Accreditation (VV&A) for
Military Simulations, RAND, R-4249-ACQ
Denning, P.J. and Metcalfe, R.M., 1997, eds., Beyond Calculation – The Next Fifty Years of Computation. Copernicus
(Springer Verlag). New York
Dretske, F., 1981, Knowledge and the Flow of Information, MIT Press Cambridge, MA
Floridi, L., 2004, Information. In Floridi L., ed., The Blackwell guide to the philosophy of computing and information
Blackwell, Oxford UK
Floridi. L., 2004, Afterword - LIS as Applied Philosophy of Information: a Reappraisal, Library Trends, 52(3)
Floridi, L. (forthcoming), Is Information Meaningful Data?, Philosophy and Phenomenological Research,
http://www.wolfson.ox.ac.uk/floridi/papers.htm.
Floridi, L., 2004, Outline of a Theory of Strongly Semantic Information Minds and Machines, 14.2, pp. 197-222.
Floridi L., 2004, Open Problems in the Philosophy of Information, Metaphilosophy, Volume 35: Issue 4
Hendry R. F. and Psillos S., 2004, How To Do Things With Theories: An Interactive View of Language and Models in
Science in K. Paprzycka and P. Przybysz (eds.) Idealization and Concretization, Rodopi
Irobi I. S., Andersson, J. and Wall A., 2004, Correctness criteria for models’ validation – A philosophical perspective;
in Models, Simulations and Visualization conference (MSV'04), International Multiconferences in Computer
science and Computer Engineering, Las Vegas
Johansson, L-G. , 1999, Introduktion till Vetenskapsteorin (In Swedish), Thales
Kuipers, T.A.F., 2000, From instrumentalism to constructive realism: on some relations between confirmation,
empirical progress, and truth approximation - Dordrecht Kluwer Academic
Kuipers, T. A. F., 1987, ed., What is closer-to-the-truth? A parade of approaches to truthlikeness, Poznan Studies in the
Philosophy of the Sciences and the Humanities, Volume 10, Amsterdam: Rodopi
Kuipers, T. A. F., 2002, Inductive Aspects of Confirmation, Information, and Content, To appear in the Schilpp-
volume The Philosophy of Jaakko Hintikka
Kuipers, T. A. F., (to appear), Empirical and conceptual idealization and concretization. The case of truth
approximation, in (English and Polish editions of) Liber Amicorum for Leszek Nowak
Laudan L., 1996, Beyond Positivism and Relativism: Theory, Method and Evidence, Westview Press
Leyton, M. Process Grammar. http://www.rci.rutgers.edu/~mleyton/freeform1.htm
MacKay, D. M., 1969, Information, Mechanism and Meaning. Cambridge, MA: MIT Press
Magnani L., Nersessian N. J. and Thagard P., 1999, ed., Model-based Reasoning In Scientific Discovery, Kluwer, NY
Marijuan, P. C., 2003, Foundations of Information Science: Selected papers from FIS 2002. Entropy 5, 214-219
Marijuán P. C. 2004, Information and Life: Towards a Biological Understanding of Informational Phenomena, TripleC
2(1): 6-19, ISSN 1726-670X, http://tripleC.uti.at
Oddie, G., 2001, Truthlikeness, The Stanford Encyclopedia of Philosophy, E.N. Zalta (ed.),
http://plato.stanford.edu/archives/fall2001/entries/truthlikeness/
Poincaré, H., 1982, Science and Method, New York: Dover
Psillos S., 1997, Naturalism Without Truth? Stud. Hist. Phil. Sci., Vol. 28, No.4, pp. 699-713
Raatikainen P., Complexity and Information -A Critical Evaluation of Algorithmic Information Theory
http://www.helsinki.fi/collegium/eng/Raatikainen/information.pdf
Shannon, C. E., 1948, A mathematical theory of communication. Bell Sys. Tech. J., 27, 379-423; 623-656.
http://cm.bell-labs.com/cm/ms/what/shannonday/paper.html
Stonier, T., 1983, The Wealth of Information, Thames/Methuen, London
Stonier, T., 1997, Information and Meaning. An Evolutionary Perspective, Springer, Berlin, New York
Wildberger, A.M., 2000, AI & Simulation, Simulation, 74
Ören, T.I., 2001, Impact of Data on Simulation: From Early Practices to Federated and Agent-Directed Simulations. In:
A. Heemink et al. (eds.) Proc. of EUROSIM 2001, June 26-29, Delft, the Netherlands