FTC 2016 - Future Technologies Conference 2016
6-7 December 2016 | San Francisco, United States
TruthSift: A Platform
for Collective
Rationality
Eric B Baum
TruthSift Inc
Berkeley CA USA
eric@truthsift.com
Abstract—TruthSift is a cloud-based platform that
logically combines members’ contributions into a
collective intelligence. Members add statements and
directed connectors to diagrams. TruthSift monitors
which statements have been logically established by
demonstrations for which every challenge has been
refuted by an established refutation, and the
complement: which statements have been refuted by
established refutations. When members run out of
rational objections the result is a converged diagram
succinctly representing the state of knowledge about
a topic, including plausible challenges and how they
were refuted. Previous computer systems for
collaborative intelligence did not have a qualitatively
better solution for combining contributions than
voting, and are subject to group think, interest
group capture, and inability to follow a multi-step
logical argument. They did not settle issues
automatically point by point and propagate the
consequences up. I review indications that many
practically important statements most people believe
to be firmly established will be revealed to be firmly
refuted upon computer assisted scrutiny. TruthSift
also supports construction of powerful probabilistic
models over networks of causes, implications, tests,
and necessary factors.
Keywords—proof verification, rationality, human-computer interaction, crowd intelligence, collective intelligence, fact-finding.
I. INTRODUCTION
What is a proof? According to the first definition at
Dictionary.com a proof is: "evidence sufficient to
establish a thing as true, or to produce belief in its
truth." In mathematics, a proof is equivalent to a proof
tree that starts at axioms, which the participants agree to
stipulate, and proceeds by a series of steps that are
individually unchallengeable. Each such step logically
combines several conclusions previously established
and/or axioms. The proof tree proceeds in this way until
it establishes the stated proved conclusion.
Mathematicians often raise objections to steps of the proof, but if it is subsequently established that all such objections are invalid, or if a workaround for the problem is found, the proof is accepted.
The scientific literature works very similarly. Each paper adds some novel argument or evidence that previous work is true or is not, or extends it to establish new results. When people run out of valid, novel reasons why something is proved or is not proved, what remains is an established theory, or a refutation of it or of all its offered proofs.
TruthSift is a platform for diagramming this process
and applying it to any statements members care to
propose to establish or refute. One may state a topic and
add a proof tree for it, which is drawn as a diagram with
every step and connection explicit. Members may state
a demonstration of some conclusion they want to prove,
building from some statements they assert are self-
evident or that reference some authority they think
trustworthy, and then building useful intermediate
results that rationally follow from the assumptions, and
building on until reaching the stated conclusion. If
somebody thinks they have found a hole in a proof at
any step, or thinks one of the original assumptions
needs further proof, they can challenge it, explaining
the problem they see. Then the writer of the proof (or
others if it’s in collaboration mode) may edit the proof
to fix the problem, or make clearer the explanation if
they feel the challenger was simply mistaken, and may
counter-challenge the challenge explaining that it had
been resolved or mistaken. This can go on recursively,
with someone pointing out a hole in the proof used by
the counter-challenger that the challenge was invalid.
On TruthSift the whole argument is laid out graphically
and essentially block-chained, which should prevent the
kind of edit-wars that happen for controversial topics on
Wikipedia[1,2]. Each challenge or post should state a
novel reason, and when the rational arguments are
exhausted, as in mathematics, what remains is either a
proof of the conclusion or a refutation of it or all of its
proofs.
As statements are added to a diagram (cf Figure 1),
TruthSift keeps track of what is established and what
refuted, drawing established statements' borders and
their outgoing connectors thick, and refuted statements'
borders and their outgoing connectors thin. You can
instantly tell what is established and what refuted.
TruthSift computes this by a simple algorithm that
starts at statements with no incoming assumptions,
challenges, or proofs, which are thus unchallenged as
©2016 IEEE
assertions that prove themselves, are self evident, or
appeal to an authority everybody trusts. These are
considered established. Then it walks up the diagram
rating each statement in turn after all its parents have
been rated. A statement will be established if all its
assumptions are, none of its challenges are, and if it has
proofs, at least one is established. A challenge may
request a proof be added if a statement does not have
one already nor adequately prove itself. If a statement
has an established challenge, or has refuted
assumptions, or all of its proofs are refuted, it is refuted.
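The rating pass described above can be sketched as a topological walk over the diagram. This is a minimal illustrative sketch assuming a simple (source, target, kind) edge encoding, not TruthSift's actual code:

```python
from collections import defaultdict, deque

# Hypothetical minimal model of a TruthSift diagram: statements are ids,
# and each edge is (source_id, target_id, kind) with kind in
# {"assumption", "proof", "challenge"}, pointing INTO the target.

def rate(statements, edges):
    """Return {statement_id: True (established) / False (refuted)}."""
    incoming = defaultdict(list)   # target -> [(source, kind)]
    children = defaultdict(list)   # source -> [targets]
    indegree = {s: 0 for s in statements}
    for src, dst, kind in edges:
        incoming[dst].append((src, kind))
        children[src].append(dst)
        indegree[dst] += 1

    status = {}
    # Statements with no incoming edges are treated as self-evident.
    queue = deque(s for s in statements if indegree[s] == 0)
    while queue:
        s = queue.popleft()
        parents = incoming[s]
        assumptions = [status[p] for p, k in parents if k == "assumption"]
        proofs      = [status[p] for p, k in parents if k == "proof"]
        challenges  = [status[p] for p, k in parents if k == "challenge"]
        # Established iff: all assumptions established, no established
        # challenge, and (no proofs offered, or at least one established).
        status[s] = (all(assumptions)
                     and not any(challenges)
                     and (not proofs or any(proofs)))
        for child in children[s]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    return status
```

For example, if n2 is offered as the only proof of topic n1, and n3 is an unrebutted challenge of n2, then n2 and hence n1 come out refuted, exactly as in Figure 1.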
To understand why a statement is established or
refuted, double-click on it to center focus on it, so that
you see it and its parents in the diagram (cf Figure 2). If
it is refuted, either there is an established challenge of
it, or one of its assumptions is refuted, or all of its
proofs are. If it is not refuted, then it follows that all of
its assumptions are established, all of its challenges are
refuted, and it has an established proof or is accepted as
providing one, and so it is established. Work your way
backward up the diagram, centering on each statement
in turn, and examine the reasons why it is established or
refuted.
It is important to understand that TruthSift directly maps mathematical practice. If, based on published proofs and refutations and proof fixes and counter-refutations, a rational human would consider a proof established by the TruthSift diagram, then so will TruthSift's simple rating algorithm. In mathematical practice, what it means to be established is that there is a proof with all the proposed refutations refuted.
TruthSift implements a mapping of actual human
mathematical practice into a machine-human
collaboration that enables it to extend to all fields of
human endeavor, not just mathematics. The algorithm
decides the same thing a rational human would about
what has been established given the contributions to
date and thus serves as an unemotional arbiter, enabling
human collaboration, even if it is in some cases
adversarial, to achieve superhuman feats of reasoning.
It also forces the discussion to be more precise. If you believe it has been demonstrated that "vaccines are safe", how much residual damage was allowed in the determination? With TruthSift there is a spelled-out topic statement. Careful phrasing of statements is essential to actually establishing them, and enables more to be established than one might have predicted.
The process divides a problem into natural subtopics that get settled point by point (although not sequentially in time; an old conclusion about a subtopic can always be challenged with a new rational argument or evidence). These different subtopics may involve different collaborators or opponents. The process guides people in the essence of critical thinking and fruitful collaboration.
Figure 1: An example topic. The topic statement n0 is currently refuted, because its only proof is refuted. The statement
menu is shown open in position to add a proof to this proof. The topic statement is gold, pro statements are blue, con
statements are red. Proof connectors are black, challenges red, remarks purple, assumptions (not shown) blue. Statements
show the title; to see the body, select “View Statement” or hover the mouse. http://truthsift.com/search_view?topic=Will-TruthSift-succeed?&id=298
Figure 2: View centered on the topic statement: “If Artificial General Intelligence is Built, there will be a
significant chance it will kill or enslave humanity.” The black triangles indicate other edges not shown. For
complex diagrams, it is often best to walk around in focused view centered on each statement in turn. Double-clicking a statement centers the view on it. http://truthsift.com/search_view?topic=If-Artificial-General-Intelligence-is-Built,-there-will-be-a-significant-chance-it-will-kill-or-enslave-humanity-&id=550&nid=5452
For TruthSift to work properly, posters will have to
respect the guidelines and post only proof or challenge
statements that they believe rationally prove or refute
their target and are novel to the diagram (or also novel
additional evidence as proofs or assumptions or remarks
or tests, which are alternative connector types). Posts
violating the guidelines may be flagged and removed,
and consistent violators as well. Posts don’t have to be
correct, that’s what challenges are for, but they have to
be honest attempts, not spam or ad hominem attacks.
However, don't get hung up on whether a statement should be added as a proof or an assumption of another until the matter is challenged. Frequently you want to assemble arguments for a proposition stating something like "the preponderance of the evidence indicates X", and these arguments are neither individually necessary for X nor individually proofs of X. It is safe to simply add them as proofs of the proposition. They are not necessary assumptions, and if not enough of them are established, the target may be challenged on that basis. The goal is a diagram that transparently explains a proof and what is wrong with all the objections people have found plausible, with no one finding more rational objections. Edits that move in that direction are useful and desired.
Sometimes it may also be advisable to edit the topic
statement into a more provable form. The system can be
readily and intuitively used for forward planning or
discovery of what can actually be established about
some topic.
TruthSift supports stipulation of a statement even when it is challenged by an established challenge (cf. Figure 3). Other statements are then rated conditional on the stipulations within the topic. In cases where fundamental assumptions differ, stipulations allow contributors to make explicit what their assumptions are, challengers to make explicit why they don't believe them, and everyone to evaluate the topic based on the stated stipulations.
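The conditional rating produced by a stipulation can be sketched by extending a topological rating walk with a stipulated set. This is a minimal illustrative model (the edge encoding and the conditionality rule are my assumptions, not TruthSift's implementation):

```python
from collections import defaultdict, deque

def rate_with_stipulations(statements, edges, stipulated=frozenset()):
    """Rate a diagram of (source, target, kind) edges, kind in
    {"assumption", "proof", "challenge"}. Statements in `stipulated`
    are treated as established regardless of incoming challenges, and
    everything depending on them is marked conditional."""
    incoming = defaultdict(list)
    children = defaultdict(list)
    indegree = {s: 0 for s in statements}
    for src, dst, kind in edges:
        incoming[dst].append((src, kind))
        children[src].append(dst)
        indegree[dst] += 1

    status, conditional = {}, {}
    queue = deque(s for s in statements if indegree[s] == 0)
    while queue:
        s = queue.popleft()
        parents = incoming[s]
        if s in stipulated:
            status[s], conditional[s] = True, True
        else:
            assumptions = [status[p] for p, k in parents if k == "assumption"]
            proofs      = [status[p] for p, k in parents if k == "proof"]
            challenges  = [status[p] for p, k in parents if k == "challenge"]
            status[s] = (all(assumptions) and not any(challenges)
                         and (not proofs or any(proofs)))
            # A statement is conditional if any parent is conditional.
            conditional[s] = any(conditional[p] for p, _ in parents)
        for child in children[s]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    return status, conditional
```

Mirroring Figure 3: if n4 is stipulated despite an established challenge n5, and n4 proves n2 which proves the topic n1, then n1 rates established but conditional.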
Also, Private Topics are available to restrict contributors
to invitees. A motivated subset, for example academics
or employees of a corporation, may be quite able to
produce highly informative and rational diagrams.
Finally, TruthSift supports the construction of powerful
probabilistic models, as described at the site.
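The paper defers the details of these probabilistic models to the site. As an illustration only (the noisy-OR combination rule, the function name, and the parameters here are my assumptions, not a description of TruthSift's actual models), a standard way to model an effect with several independent causes is:

```python
from math import prod

def noisy_or(cause_probs, active, leak=0.0):
    """P(effect) under a noisy-OR model: each active cause i independently
    triggers the effect with probability cause_probs[i]; `leak` is the
    probability the effect occurs with no active cause. Hypothetical
    illustration of a cause-network combination rule."""
    miss = (1 - leak) * prod(1 - cause_probs[i] for i in active)
    return 1 - miss
```

For two active causes each with probability 0.5, the effect probability is 1 - 0.5 * 0.5 = 0.75.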
II. THIS TECHNOLOGY IS IMPORTANT BECAUSE WE
HAVE SUFFERED FROM ITS LACK
Figure 3: A Topic Illustrating the use of Stipulation. n4 shown in green has been stipulated. It, n2, and n1 have rounded
corners because they have status conditional on the stipulation. The Topic statement n1 is Established conditional on the
stipulation (and would be refuted without it). http://truthsift.com/search_view?topic=Stipulation-Resolves-Intractable-Disputes&id=305
Peer reviewed surveys agree: A landslide majority of
medical practice is not supported by science [3,4,5,6].
Within fields like climate science and vaccines, that
badly desire consensus, no true consensus can be
reached because skeptics raise issues that they feel are
not adequately addressed by the majority (exactly what
Le Bon warned of more than 100 years ago[7]). Widely
consulted sources like Wikipedia are reported to be
largely paid propaganda on many important subjects [1],
or at best the most popular answer rather than an
established one [2]. Quora shows you the most popular
answer, not necessarily the correct one, and the answers
are individual contributions, with little or no
collaboration, and often there is little documentation of
why you should believe them. Existing systems for
crowd sourced wisdom thus largely compound group
think, rather than addressing it.
Corporate or government planning is no better.
Within large organizations, where there is inevitably
systemic motivation to not pass bad news up, leadership
needs active measures to avoid becoming clueless as to
the real problems [8]. Corporate or government plans
are subject to group think, or takeover by employee or
other interests competing with the mission. Individuals
who perceive mistakes have no recourse capable of
rationally persuading the majority, and may anyway be
discouraged from speaking up by various
consequences[7].
Feynman famously described the phenomenon of
Cargo Cult Science[9]. In the South Seas the natives saw
planes landing on air strips and delivering Cargo, so in
an effort to get cargo they built detailed replicas of
landing strips, including wood radios. Feynman
observed that the science that results when practitioners
don't rigorously discuss and address all the opposing
arguments, but instead ignore them or replace them with
strawmen, is Cargo Cult Science. Like the Islanders' radios, it is missing the key ingredient, and so won't function.
We see such examples when the medical literature is assessed. Apparently researchers have been using the wrong cell lines, and know it, but haven't done anything about it. When your doctor prescribes medicine, it may have been mistakenly tested on a different cancer [10].
In the vaccine literature, the apparent systematic
biases corrupting almost all their measurements are
never discussed, nor vast areas of proven danger such as
contaminants, aluminum, timing during development, or
multiple vaccine interactions causing autoimmune
disease, brain damage, or death. You can verify this
personally by simply searching the pdfs of the safety
surveys. Find lots of citations to papers reporting
evidence for these problems here[11,12], and note none
of them are cited or rebutted there. Regarding vaccine
efficacy, the overwhelming historical record
demonstrating that the cowpox vaccine was wholly or
largely ineffective against smallpox is never
discussed[13,14]. And so on for the rest of the case that
vaccines, rather than having been important to public
health, have been detrimental. It is ignored in the
vaccine literature, not rebutted.
III. RESULTS TO DATE RE CARGO CULT SCIENCE
Cargo Cult science bears a similar relationship to
science as a wooden radio model bears to a radio. It may
be a very detailed likeness, but the key ingredient is
missing and it won’t function.
TruthSift was created to prevent Cargo Cult Science. On
TruthSift, if anybody who examines a topic finds an
argument they believe invalidates a proof, we don't rate
the result as established until the challenge is
successfully rebutted. And they and others can explain
and counter-rebut. And in fact we are already observing
Topics that demonstrate the Cargo Cult nature of various
subjects such as Vaccine Safety.
http://truthsift.com/search_view?topic=Are-Vaccines-Safe-?&id=406&nid=4083 and
http://truthsift.com/search_view?statement=The-Available-Evidence-Strongly-Indicates-Flu-Vaccines-Often-Damage-Immune-Systems-of-Recipients,-Especially-in-Children.&id=386&nid=2822 and
http://truthsift.com/search_view?topic=The-Evidence-Is-Weak-That-Vaccines-Have-Saved-More-Lives-than-They-Have-Cost--&id=520
The first of these now has multiple participants and over
130 statements.
We expect that in the future other examples of Cargo
Cult Science will be exposed, which may be hugely
important for the future of humanity.
IV. PRIVATE PLANNING ANECDOTES
Users also report that private diagrams have unique
potential for intuitive yet rigorous personal and business
planning. TruthSift breaks debates down and settles
them point by point, propagating the rational
consequences of one conclusion on to other decisions.
To decide A, create a statement “Evidence for A” with all the arguments you can think of as proof statements, and a statement “Evidence for Not A” with proof statements of that. Then consider the counter-arguments on the specific arguments. Then consider what actual proofs and refutations you can offer to the overall conclusions.
Everybody can contribute their point of view.
Experience indicates users find the process intuitive and
helpful.
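As a toy illustration of the point-by-point knockout this procedure produces (all names and the simplified one-level knockout rule here are hypothetical, not TruthSift's semantics):

```python
def surviving(args, rebuttals):
    """Arguments not knocked out by an unrefuted challenge.
    Illustrative one-level rule: an argument with a rebuttal
    recorded against it is dropped."""
    return [a for a in args if a not in rebuttals]

# Hypothetical pro/con arguments for deciding A.
pro = ["cheaper", "faster to ship"]
con = ["riskier", "less tested"]
rebuttals = {"riskier": "risk is mitigated by a rollback plan"}

print(len(surviving(pro, rebuttals)), len(surviving(con, rebuttals)))  # prints "2 1"
```

After the challenges are weighed, the surviving arguments on each side are what the overall "Evidence for A" and "Evidence for Not A" statements rest on.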
V. PREVIOUS RELATED WORK
Mathematicians usually write proofs to appeal to other
mathematicians, leaving many of the steps somewhat
intuitive and incompletely described. However, in
principle every correct mathematical proof should be
reducible to a series of syntactic checks. Languages like
HOL Light, Coq, PVS, Isabelle, and Mizar have
enabled mathematicians to write computer programs
checking many major results[15], and there is an
ongoing project to verify all of mathematics[16].
TruthSift, by contrast, aims to provide formal
verification for informal rational discourse such as
occurs in the social and physical sciences.
There has been research going back to the middle ages
on formal systems for persuasion dialog, recently
reviewed by Prakken[17]. These to some extent mirror
(but formalize) the intuitive discussion above of the
mathematical process, as does TruthSift. One way I
differ philosophically from some of this work is in
hypothesizing that there is an objective reality and
maybe also a Platonic reality that can help guide
members to discovery and agreement. Another example
of modeling mathematical argument was Lakatos
games[18]. TruthSift appears to be the first to
implement a representation of such a system on a
platform for collaboration and point by point proof and
refutation, with machine verification of what is
established, and to test it on general topics.
Feynman wrote [19]: "that is what science is: the result of the discovery that it is worthwhile rechecking by new direct experience, and not necessarily trusting the [human] race['s] experience from the past. I see it that way. That is my best definition... Science is the belief in the ignorance of experts." TruthSift implements this vision of science.
There have been a number of previous efforts to
achieve crowd intelligence, and/or to map arguments.
Recent reviews are provided by [20], [21], and [22], and a survey of argument maps may be found at [23]. In the
classification of [21], TruthSift may be considered to
have a novel means of aggregation of contributions
compared to any surveyed methods, combining
contributions according to logic, and a novel means of
Quality Control, using the aggregation to compute what
is logically established. Another characteristic used to
classify existing systems by [21] is the motivation of
users to contribute to public collaboration, for which
they list the existing alternatives: Pay, Altruism,
Enjoyment, Reputation, and Implicit Work. In addition
to invoking Altruism, Enjoyment, and Reputation,
TruthSift also offers the motivation for individuals or
organizations that want to publish a verified proof of some proposition, for example advertisers or advocates.
Figure 4: Detail from a topic pointing out a scientific literature apparently often overlooked. Dashed borders indicate a citation into the peer-reviewed literature. No challenge to the conclusion has yet been added, nor is any published one known to me after search. Statements have a “View” page with more details and specification, viewable using the statement menu. http://truthsift.com/search_view?topic=Does-the-available-evidence-indicate-Flu-vaccines-damage-the-immune-system-of-recipients-in-other-ways?&id=386
In the classification of Michelucci and Dickinson [20], the human computation ecosystems of the future are type C systems that "combine the cognitive processing of many human contributors with machine-based computing to build faithful models of the complex, interdependent systems that underlie the world's most challenging problems." However, the existing prototype C systems they list seem to be for specialized applications, and are also not entirely automated, like the Polymath project [24].
Notable efforts to provide crowd sourced question
answering or information include Wikipedia and Quora.
Wikis suffer on controversial topics, and Wikipedia has
been reported to represent paid propaganda on
controversial scientific topics[1,2]. Wikipedia reports
only the last edit, and this is often apparently controlled
by influenced parties. On TruthSift such edit wars
would be transparent, flagged as contrary to the
guidelines, and possibly available as specific alternative
stipulations. Quora reports the most popular answer to
a question, rather than any more powerful
collaboration, together with a list of alternative
answers. Other fact checking or question answering
web sites present some “expert opinion”, which is justified, if at all, by argumentation that demonstrably often consists of logical fallacies such as straw men and ad hominem attacks.
Klein’s review[21] of Online Deliberation
Technologies classifies systems into time-centric,
question-centric, topic-centric, debate-centric, and
argument-centric. Time centric sites like Twitter or
comment threads organize content by when it is
contributed. Question-centric sites like Quora organize
answers by questions. As Klein notes, both of these
types of systems produce voluminous output most of
which is of low quality, but among which some pearls
may be concealed. There is little collaboration beyond
voting on one most popular individual contribution.
Topic-centric sites like wikis organize content by topic,
producing a more concise output. But they suffer on
controversial topics, becoming battlegrounds that are
not won on the basis of logic. Debate centric sites like
whysaurus.com, debatepedia.com, debatewise.org, and
debate.org, allow users to contribute pros and cons.
However, they have no method of breaking down issues
into sub-points and establishing point by point what can
be established. They don’t allow linking of arguments
to arguments, much less automatic update of
consequences. They don’t support reasoning forward on
open-ended problems, as is possible with TruthSift.
Argument-centric systems like Klein’s MIT
Deliberatorium [25] (and TruthSift) have advantages.
Members prepare a concise and informative diagram
summarizing an argument. Solutions are genuinely
collaborative, and thus potentially far more powerful.
The idea of determining statements you can actually
establish and working forward to establish other
statements is integral to science, but missing from
previous argument map systems like the
Deliberatorium. Without that, as with Quora and
Wikipedia, there is no good way of composing the right
answer, as opposed to merely selecting the most
popular answer at each choice. Existing crowd-based
systems mostly compound group think, not correct it.
The best existing system for determining truth using natural-language argumentation and evidence by a large collaboration has been the scientific literature. But the
scientific literature has suffered under questionable
refereeing, the lack of a good mechanism to agree on
what is actually established at any given time, and the
ignorance or disregard of contrary arguments by
authors and referees or editors.
VI. DISCUSSION AND FUTURE WORK
TruthSift is attempting to create a collective
rationality in a way that hasn't been tried before. Use has
been ramping up for months but it remains to be seen
whether it will recruit a huge base of users, or what
would result from large scale public use. The
expectation is that members will explore rational
diagrams as is somewhat achieved in the scientific
literature, but it is possible that too many members will
make irrational posts or that it will be hard to agree on
what is rational.
TruthSift also supports private diagrams which only
invited members may participate in and/or view. Even if
it turns out that the public at large does not use TruthSift
well, teams of rational and committed individuals (eg
employees of a company or members of a club or a
family or a single individual) have been able to produce
useful diagrams.
TruthSift diagrams are currently demonstrating that
there are huge logic holes in things many people believe
such as the safety of vaccines. Assuming this holds up,
the potential impact on society is substantial. Just for
vaccines, the societal cost of the misconception
suggested is staggering. Moreover TruthSift may
demonstrate that there are many similar costly
misconceptions and group think delusions throughout
the corporate, intellectual, and government world. After
a TruthSift diagram is built, it is very rapid to assess the
state of the argument.
A critical reason for the propagation of group-think delusions is that most people don't take the time to understand the issue, but simply assume that others have. With TruthSift, they don't have to invest much time, and so can come to rely on TruthSift as a better authority than the group. Thus a relatively small group of contributors can have a large impact on society.
TruthSift expects to add additional features, including n-choice statements (one and only one of the n statements may be established; negations are the 2-choice case); connectors between diagrams, allowing a big web of verified concepts to be built; and a system allowing rewards to be posted for contributions to topics that are significant (affect the status of the topic) and remain established. This should allow members to motivate researchers toward a question, and also to make claims that they have demonstrated something, backed by a standing reward for disproof.
TruthSift could be used by AIs rather than, or in addition to, humans, provided they could sometimes suggest proofs or refutations or tests of statements. TruthSift is currently experimenting with AI-human collaborative posts. The criticism mechanism of TruthSift may enable a collection of AIs, perhaps with some humans,
to produce a higher level intelligence than any of them
individually could achieve. Even if they sometimes
interject challenges or proofs that are off-base, if they
more often can identify the problem and challenge it,
they may be able to bootstrap to a more rational
intelligence. In the meantime, they can help create a
wider base of interesting and explored topics.
ACKNOWLEDGMENT
I founded/own TruthSift Inc and have filed patent
applications on the technology.
REFERENCES
[1] S Attkisson, "Astroturf and
manipulation of media
messages", TEDx University of
Nevada, (2015)
https://www.youtube.com/watc
h?v=-bYAQ-ZZtEU
[2] Adam M. Wilson , Gene E.
Likens, Content Volatility of
Scientific Topics in Wikipedia:
A Cautionary Tale 2015 DOI:
10.1371/journal.pone.0134454
http://journals.plos.org/plosone/
article?id=10.1371/journal.pone
.0134454
[3] S. A. Greenberg, "How citation
distortions create unfounded
authority: analysis of a citation
network", BMJ 2009;339:b2680
[4] Office of Technology
Assessment, Congress of the
United States (1978)
“Assessing the Efficacy and
Safety of Medical
Technologies,”, .
http://www.fas.org/ota/reports/7
805.pdf
[5] Jeannette Ezzo, Barker Bausell,
Daniel E. Moerman, Brian
Berman and Victoria Hadhazy
(2001). “Reviewing The
Reviews” . International
Journal of Technology
Assessment in Health Care, 17,
pp 457-466.
http://journals.cambridge.org/ac
tion/displayAbstract?
fromPage=online&aid=101041
[6] John S Garrow What to do about CAM?: How much of
orthodox medicine is evidence based?”, BMJ. 2007 Nov 10;
335(7627): 951.doi:10.1136/bmj.39388.393970.1F PMCID:
PMC2071976 http://www.dcscience.net/garrow-evidence-
bmj.pdf
[7] Gustav Le Bon, The Crowd, (1895), (1995) Transaction
Publishers New Edition Edition
[8] Kiira Siitari, Jim Martin & William W. Taylor (2014)
“Information Flow in Fisheries Management: Systemic
Distortion within Agency Hierarchies”, Fisheries, 39:6, 246-
250, http://dx.doi.org/10.1080/03632415.2014.915814
[9] Feynman, Richard P., C CARGO CULT SCIENCE (adapted
from Caltech Commencement Address 1974),
https://www.lhup.edu/~DSIMANEK/cargocul.htm
[10] http://discovermagazine.com/
2014/nov/20-trial-and-error
[11] http://lifeboat.com/blog/
2016/06/the-top-ten-reasons-i-
believe-vaccine-safety-is-an-
epic-mass-delusion#comment-
293149
[12] http://truthsift.com/
search_view?topic=Are-
Vaccines-Safe-?
&id=406&nid=4083
[13] Alfred Russel Wallace, (1898), Vaccination a Delusion, Its
Penal Enforcement a Crime: Proved by the Official Evidence in
the Reports of the Royal Commission.
http://people.wku.edu/charles.smith/wallace/S536.htm
[14] http://truthsift.com/search_view?topic=The-Evidence-Is-Weak-
That-Vaccines-Have-Saved-More-Lives-than-They-Have-Cost--
&id=520
[15] T.P. Hales, "Formal Proof",
Notices of the A.M.S. V55
No11 1355-1380.(2008)
[16] Formalizing 100 Theorems
(2016)
http://www.cs.ru.nl/~freek/100/
[17] Henry Prakken, "Formal systems for persuasion dialogue", The
Knowledge Engineering Review archive Volume 21 Issue 2,
June 2006 pp163 - 188 Cambridge University Press New York,
NY, USA
http://www.cs.uu.nl/groups/IS/archive/henry/dgreview.pdf
[18] Alison Pease, K Budzynska, J Lawrence, and C Reed; "Lakatos
Games for Mathematical Argument",in Parsons, S., Oren, N.,
Reed, C. & Cerutti, F. (eds) Proceedings of the Fifth
International Conference on Computational Models of
Argument (COMMA 2014), IOS Press, Pitlochry, pp59-66. DOI
10.3233/978-1-61499-436-7-59
http://comma2014.arg.dundee.ac.uk/res/pdfs/07-pease.pdf
7 | P a g e
©2016 IEEE
FTC 2016 - Future Technologies Conference 2016
6-7 December 2016 | San Francisco, United States
[19] Richard P. Feynman, What is Science? (1968), http://www-oc.chemie.uni-regensburg.de/diaz/img_diaz/feynman_what-is-science_68.pdf
[20] P. Michelucci, J. L. Dickinson, "Human Computation: The power of crowds", Science, 1 January 2016, Vol. 351, No. 6268, pp. 32-33.
[21] Klein, Mark, "A Critical Review of Crowd-Scale Online Deliberation Technologies" (August 28, 2015). Available at SSRN: http://ssrn.com/abstract=2652888 or http://dx.doi.org/10.2139/ssrn.2652888
[22] A. J. Quinn, B. B. Bederson, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Association for Computing Machinery, New York, 2011; http://doi.acm.org/10.1145/1978942.1979148), pp. 1403-1412.
[23] https://en.wikipedia.org/wiki/Argument_map
[24] T. Gowers, M. Nielsen, Nature 461, 879 (2009).
[25] Mark Klein, "How to Harvest Collective Wisdom for Complex Problems: An Introduction to the MIT Deliberatorium", http://cci.mit.edu/klein/papers/deliberatorium-intro.pdf