
Post Digital Text (PDT) Reads the Readers Instead

Authors:
  • Yohanna Joseph Waliya (Le Village Français du Nigeria, Centre Inter-universitaire Nigérian d'Études Françaises)

Abstract

A Post-Digital Text (PDT) reads the readers instead of the readers reading it. This is because these texts, integrated into new media or built using Artificial Intelligence, display themselves based on the mood and consciousness of the readers. PDT techniques include botification, datafication, predictive modelling, adversarial and physiognomic facial recognition, meta-algorithms exploring humans as data, Big Data, the Internet of Things, Machine Learning, Deep Learning, blockchain and Neural Networks coded in the Post Digital Language (PDL).
The Future of Text ||
Contents
The Future of Text || 10
How to Read this Book 12
Companion Works 14
Acknowledgements 16
Foreword by Ismail Serageldin 17
Introduction by Frode Alexander Hegland 19
Contributor Bios 29
Alexandra Saemmer 33
Writing in the age of computext 33
Ann Bessemans 37
Legibility/Readability & Visibility 37
Bibliography 40
Barbara Tversky 41
The Future is the Past 41
Bibliography 43
Bob Horn 45
Diagrams, Meta-Diagrams and Mega-Diagrams: One Million Next Steps in Thought-Improvement 45
Notes 48
Concepts 50
Bob Stein 53
Print Era: RIP 53
Brendan Langen 55
Thinking with Paper 55
Daniel Berleant 59
Dialogues With the Docuverse Is the First Step 59
Acknowledgment 61
Daveed Benjamin 63
The Bridge to Context 63
The Innovation 64
How Does Bridging Work? 65
Erik Vlietinck 69
Markdown, the ultimate text format? 69
Markdown to the rescue? 70
Fabian Wittel & David Felsmann 73
Breathing Life into Networks of Thoughts 73
The Energy of a Conversation 73
Breathing Life into Text 73
Living in the Future of Text 74
Fabio Brazza 77
Futuro do Texto 77
Future of Text 78
Faith Lawrence 81
A Tale of Two Archives 81
Imogen Reid 85
Notes for a Screenplay Loosely Based on C.K. William’s Poem, The Critic 85
Jad Esber 89
The bookshelf as a record of who I am...and who I was 89
The bookshelf as a source for discovery 89
The bookshelf as context for connection 90
People connect with people, not just content. 90
Jamie Joyce 93
“Web-based Conceptual Portmanteau” 93
Jay Hooper 99
The versatility of text: a letter of love, grief, hope, and rhetoric 99
Jeffrey K.H. Chan 103
Text as persuasive technology: A caveat 103
Jessica Rubart 107
Collaborative-Intelligent Sense-Making 107
Joe Devlin 109
Marginalia Drawings 109
John Hockenberry 113
Text as a Verb, a Noun and The Revenge of Phaedrus 113
Jonathan Finn 117
Meaningful Text For Mindful Devices 117
Infotext 118
Meaning enhanced by knowledge 119
A Humane User Interface 120
Working With Ideas 120
Experimental listing of concepts 121
Karl Hebenstreit Jr. 123
To me, a word is worth a thousand pictures 123
Kyle Booten 127
O Puzzle Box of Inscrutable Desire! 127
Lesia Tkacz 131
Artifact from a Possible Future: A Pamphlet Against Computer Generated Text 131
Luc Beaudoin 135
Beyond the CRAAP test and other introductory guides for assessing knowledge resources: The CUP'A framework 135
Caliber 136
Utility 136
Potency 137
Appealingness 137
Future of information technology and strategies 137
Relevance of psychology 139
Bibliography 140
Mark Anderson 141
Writing for Remediation—Tools and Techniques? 141
Megan Ma 143
Critical Legal Coding: Towards a Legal Codex(t) 143
Niels Ole Finnemann 149
Note on the complexities of simple things such as a timeline 149
Notions of Text 149
Text and hypertext in the binary alphabet 151
Finally - Machine Translation - When did it start? 155
Peter Wasilko 157
Writing for People and Machines 157
Abstract 157
Background On the Nature of Computers and Code 157
Code As a Language to Think With 158
Programming Language Paradigms 159
Writing Programs As Texts For People 160
What Lies Ahead 161
Philippe Bootz 163
Literariness and reading machines 163
1. Convergence towards a question 163
2. Perspectives in programmed digital literature 163
Rafael Nepô 167
Book Reading Rituals and Quirks 167
Richard A. Carter 171
Inscriptions to the Stars: Time, Space, and Extra-terrestrial Textualities 171
Rob Haisfield 175
Programmable text interfaces are the future, not GUIs 175
What Other Graphical Applications Could Have A Programmable Text As An Interface? 181
The Future Is Programmable Text, Not Graphical Applications 183
Bibliography 184
Sam Brooker 187
Concepts without Borders: The Valorisation or Invalidation of Medium 187
Sam Winston 189
One Thinks of Another 189
Act 1 189
Act 2 190
The End (or the missing Act 3) 190
Sarah Walton 193
Truth 193
Stephen Fry 197
Grace 197
Tim Brookes 201
Future of Text: Cursive 201
Vinton G. Cerf 205
The Future of Text Redux 205
Yohanna Joseph Waliya 207
Post Digital Text (PDT) Reads the Readers Instead 207
Concepts 208
The 10th Annual Future of Text Symposium Day 1 211
Welcome by Vint Cerf 212
Introduction by Frode Hegland 213
Overview of the Day 217
Sam Winston 218
Jay Hooper 223
Fabian Wittel 229
Bob Horn 234
Rafael Nepô 242
David Lebow 254
Frode Hegland 261
Discussions 267
Closing 270
The 10th Annual Future of Text Symposium Day 2 275
Notes on Transcription by Danillo de Medeiros Costa 309
History of Text Timeline 312
13.8 Billion Years Ago 313
250 Million–3.6 Million 313
2,000,000-50,000 BCE 314
50,000-3,000 BCE 314
3000 BCE 315
2000 BCE 316
1000 BCE 317
BCE – CE 318
100 CE 318
200 318
300 318
400 319
500 319
600 319
700 319
800 319
900 320
1000 320
1100 320
1200 320
1300 321
1400 321
1500 322
1600 323
1700 324
1800 325
1810 325
1820 326
1830 326
1840 326
1850 326
1860 327
1870 327
1880 328
1890 328
1900 329
1910 329
1920 329
1930 330
1940 331
1950 332
1960 333
1970 336
1980 339
1990 344
2000 349
2010 352
2020 353
Future 354
Contributors 354
Postscript: Digital Text 355
Glossary 357
Endnotes 361
References 368
Visual-Meta Appendix 377
The Future of Text ||
Published December 9th, 2021.
All articles are © Copyright of their respective authors.
This collected work is © Copyright ‘Future Text Publishing’ and Frode Alexander Hegland.
The PDF edition of this work is made available at no cost and the printed book is available
from 'Future Text Publishing' (futuretextpublishing.com), a trading name of 'The Augmented
Text Company LTD', UK.
This work is freely available digitally, permitting any users to read, download, copy,
distribute, print, search, or link to the full texts of these articles, crawl them for indexing,
pass them as data to software, or use them for any other lawful purpose, without financial,
legal, or technical barriers other than those inseparable from gaining access to the internet
itself. The only constraint on reproduction and distribution, and the only role for copyright in
this domain, should be to give authors control over the integrity of their work and the right to
be properly acknowledged and cited.
Typeset in Times New Roman for body text and Avenir Book for headings.
ISBN: 9798780922513
DOI: https://doi.org/10.48197/fot2021
How to Read this Book
This work is distributed as a PDF which will open in any standard PDF viewer. If you choose
to open it in our free 'Reader' PDF viewer for macOS, you will get useful interactions because
of the inclusion of Visual-Meta, including the ability to fold the text into an outline, click on
citations, select text and cmd-f to 'Find' all the occurrences of that text–and if the selected
text has a Glossary entry, that entry will appear at the top of the screen–and more. Visit
https://www.augmentedtext.info for the free download, and http://visual-meta.info to learn
more about Visual-Meta.
Companion Works
The Future of Text series of books is available from https://futuretextpublishing.com
The Future of Text Volume 1 is at DOI https://doi.org/10.48197/fot2020a, ISBN: 9798556866782
The Future of Text Interviews will be available from https://futuretextpublishing.com
The software for Authoring & Reading we are building to help illustrate what we
‘preach’ is available from https://www.augmentedtext.info
Visual-Meta is described at http://visual-meta.info
A group of us meets every Monday and Friday at 4pm UK time, with additional monthly
meetings, to work on the future of text. Join us. Visit https://futuretextpublishing.com for
details and schedules.
Edgar & Frode Hegland, November 2021. Hegland, 2021.
Acknowledgements
In loving memory of Douglas Engelbart.
Vint Cerf and Ismail Serageldin for your constant moral support.
My advisors at the University of Southampton, Dame Wendy Hall, Les Carr and Dave
Millard.
The Future of Text Initiative advisors Dave De Roure and Pip Willcox.
Jacob Hazelgrove quite literally made my literature dreams come true by writing the
software.
The Open Office Hours team, many of whom meet twice a week, particularly: Mark
Anderson, Peter Wasilko, Rafael Nepô, Alan Laidlaw, Brendan Langen, Gyuri Lajos, Adam
Wern and Brandel Zachernuk.
Rafael Nepô for helping me with The Symposium.
Laura Coelho de Almeida, and Julia Wright who helped with social media.
Bruce Horn, Ted Nelson, Keith Martin, Valentina Moressa, Mark Anderson and my brother
Henning for dialog and support.
All the contributors to the first and second editions of The Future of Text.
And last, and most fundamentally, my parents Turid & Ole Hegland, my wife Emily Maki
Ballard Hegland (who took the beautiful picture opposite) and our son Edgar Kazu Ballard
Hegland.
Frode Alexander Hegland
Wimbledon, late 2021
Foreword by Ismail Serageldin
If human language is the greatest human achievement, then the writing of that means of
communication is what has defined societies and civilizations and saved their legacies for
posterity. In that context, the Book appears as much more than a convenient way to record
information or a means of entertainment; rather, it becomes the central
instrument of societal expression, the embodiment of contemporary humanity.
If that tends to exalt the book, I will go further and emphasize that my statement is not
about the book as artifact, the codex that we have all come to know and love; it is rather
about the text, which provides context and content, message and meaning, which engages the
reader with the author and provides so many of us with what it means to be human. The text
is an assemblage of words and sentences of a certain length that can be read off different
platforms, from the scroll to the codex, from the electronic book on a tablet, a laptop or a
smartphone; it can even be in audio format, or in a tabloid or newspaper. The
format matters less than the content. So those of us who value and elevate "The Book" in the
abstract are really celebrating the "Text".
Surely, today there are many other forms of communication that have encroached upon
the special place of the Text. Radio was expected to destroy text-based newspapers and
magazines. It did not. Movies, i.e. moving pictures, whether in the form of film or short
videos, whether accessed in theaters or on television, or streamed through the internet, all of
these have increasingly become the instruments of entertainment as well as the new artistic
form for the communication of narratives. But the text remains. The classics are being
revived and reissued in all these formats. And more people write today than ever before.
There are more titles published every year, and there are more readers every year. And what
is more, even our youth seem to be texting more than they are talking on their mobile phones.
And as we reflect about the future of our societies, our interaction with machines, our
links to each other, we inevitably are drawn to think about the process of creating the text.
That special interaction between the author and the language, and the creative process by
which myriad combinations of letters and words could be formed and some are selected to
produce the Text. But before the author does that, he or she has retrieved the work of others,
studied it, manipulated it, referred to it and derived his or her own unique and innovative
contribution against a background of giving due recognition to the work of others. As a
reader before becoming an author, we value our ability to find and retrieve the stored works
of our predecessors, to cite parts of these works, and to give credit where credit is due. In
Academic circles, we have elaborated a whole system of references and footnotes and
endnotes to give credit where credit is due. The digital revolution and its modern machines
have helped us enormously with digital text, the internet, search engines, hypertext and other
means of interaction with the legacy of others. We can highlight sections of the text of others,
we can copy and cite these sections, we can enrich our thinking process as we develop our
own textual creation. Even more, these machines with the magic of the internet, have enabled
us to cross over to those new realms that also compete with – and complement – the classic
definition of words as the basis of text, to the graph, the image and the video, as devices to
pass on narrative and ideas.
But setting down that text on the platform of choice is what every author does. From
the classic setting of “pen to paper”, to the more modern world where we rely on the ever-
greater assistance of machines, from typewriters to word processors to the still-dim but ever-
more promising future.
The pillars of this community, especially Frode Hegland, have provided, and are
continuously creating, new ways and means of transforming the setting of text to platform.
They are inventing that ever more promising future.
Visual-Meta, Frode Hegland’s creation, is a remarkable contribution to simplify and
improve the means by which Text is identified, stored, retrieved and manipulated. It also
enormously enriches the text itself: enhancing its storability and retrievability, facilitating its
readability through instantaneous contextual glossaries and through our ability to highlight,
copy and transfer parts of it, and anticipating the risks of technical obsolescence by adding
Visual-Meta as an appendix to the PDF version of that text. The "Reader" part of this
operation truly adds to the durability, retrievability and
richness of reading the text of others.
But Visual Meta also does more. It has an “Author” part. It helps the author compose
his or her own material by giving them the ability to diagram their thoughts as they go
through their creative process, establish directional links between these diagrammed items,
call in correct and instantaneous citations and so much more. Writing has never been so
enticing and exciting. The software has never been so helpful. Our modern machines are the
enablers of this enormous in-depth transformation of our interaction with the language as
writers, readers or custodians.
Discussions sparked by the symposium have resulted in a decade-long dialogue among
many creative people about “The Future of Text”. The many short essays in this book reflect
the breadth of the individual interests and the many different directions that members of this
creative community have taken their reflections on “The Future of Text”. There are many
ways to think of “Text” and how we interact with it. Enjoy.
Introduction by Frode Alexander Hegland
It’s an early morning here in Cyprus as I sit and write this introduction to the second volume.
The sky is not bright blue; it is cloudy, with only a few breaks where I can see the sun. This is
poetic since writing survived here after dying out everywhere else during the Late Bronze
Age collapse. But my mind is more focused on the COP26 summit starting in the UK
tomorrow.
From Wikipedia: “The 2021 United Nations Climate Change Conference, also known as
COP26, is the 26th United Nations Climate Change conference. It is scheduled to be held in
Glasgow, Scotland, United Kingdom, between 31 October and 12 November 2021, under the
co-presidency of the United Kingdom and Italy. The conference is the 26th Conference of the
Parties (COP) to the United Nations Framework Convention on Climate Change and the third
meeting of the parties to the Paris Agreement. This conference is the first time that parties are
expected to commit to enhanced ambition since COP21. Parties are required to carry out
every five years, as outlined in the Paris Agreement, a process colloquially known as the
'ratchet mechanism'. The venue for the conference is the SEC Centre in Glasgow. Originally
due to be held in November 2020 at the same venue, the event was postponed for twelve
months because of the COVID-19 pandemic in Scotland.”
Richer Dialogue
I wrote to you, ‘dear reader of the distant future’, in the introduction to the first volume of
The Future of Text. For this second volume it is more important to write to the ‘dear reader of
today’ since if we don’t have a sustainable planet, there won’t be much of a future.
Our technologies for improving physical buildings, transport, commerce, data transfer,
data processing, warfare, entertainment advance at a rapid pace, but what about technologies
to support knowledge and understanding?
We build impressive buildings because we have increased our understanding of
materials and tools. We travel further in cars at lower energy usage–sometimes the cars even
drive themselves–and we travel further into space than ever before. We develop ever more
efficient commerce systems to power a global economy where we can request almost any
item to be delivered to our door within 24 hours–at low cost. We build tremendously
powerful personal and cloud computing systems to crunch numbers at previously unheard of
speeds. We also build drone swarms and cyber warfare capabilities to tear our enemies to
shreds at safe distance. We build virtual worlds, on VR platforms and on ‘old fashioned’
consoles to present visually stunning worlds with incredible freedom of movement and
interaction.
Yet our text remains much the same.
As our phones become smarter, never mind our watches and the (previously passive)
speakers in our homes, our text has generally stayed ‘dumb’, unable to communicate the
richness which went into it–when we publish a document we strip away even such useful
attributes as the section headings in a document and reduce the text to a form where
sentences cannot even be selected, only lines. The documents we publish are for another age,
when humans would only read documents, not for one where humans both read and
manipulate documents.
Looking At Text
The lack of richness in textual communication and the poverty of affordances is an urgent
and important problem. The characteristics of text are known to most school children but they
bear repeating to us since our daily use of text has rendered text near-invisible as a medium.
If you had met me in person, it is not unlikely I would have told you, perhaps more than once,
that talking with most people about text is like talking to someone about the glass in a
window: all they will talk about is the view (the analogy would be the meaning of the
text) or the frame (the font or immediate presentation of the text), since that is what they can
see, not the qualities of what text does for us, such as its symbolic qualities.
Please allow me a few points about the characteristics of text, from my opening
remarks at the 10th Annual Future of Text Symposium, on the following pages:
Text is Simple
‘Text is simple.’ My 4-year-old son Edgar is learning to read and write. It is expected that he
will be proficient enough to read basic books by the time he is 5. Although he has of
course picked up daddy’s iPhone and taken pictures and made the odd wobbly video, it will
take him considerably longer to make a video which presents thoughts beyond what he can
capture immediately in front of him. Beyond ‘text is simple’, a poem of sorts:
Text is frozen yet… Text is alive with potential.
Text can be rendered into many forms. Text is not one thing. Graffiti is text. Code is text.
Text does not move at its own speed, it moves at yours.
Text is free from much of human colour, text will never replace the voice of a loved one. Text
is not trying to be speech. Text was not invented to solve the same communication problems.
Text lasts, images fade. Movies get remade, books don’t get re-written.
Text is timeless, though words are not.
The environment text is expressed into, its substrate, is evolving. Text is now active and
connected.
A TikTok of Doug Engelbart.
Text is clearly expandable. Links, emoji, tapback.
Text can be mind controlling, through propaganda, myths or social media. Or even poetry.
Text can be mind expanding.
Text is neither true nor false; it is simply a conveyor of information, it contains no inherent
judgement, no inherent validity, it can only be judged and validated through other text, and
today we are figuratively drowning in text which is instantly click-copyable and click-
shareable but few clicks are devoted to understanding and evaluation.
Text is cheap. Cheap to author and cheap to read. If someone writes on a modern laptop and
uses a modern smartphone, they do not need further tools to write or to produce a broadcast
quality video. However, producing a video comes with a large amount of external costs, such
as locations, talent, effects, music and so on.
The problems text creates can be expensive to deal with. Fake news, climate change denial,
social justice pushback–Black Lives Do Matter.
My text is not neutral.
Text allows for quick non-linear interaction without having to formulate queries–it’s a lot
quicker to thumb back in a document to look for something you have a vague idea of, rather
than to speak exactly what you are looking for, to a human or AI.
Text is placeable in space: You can write on a napkin, on sand or on a computer screen and
put any text anywhere you want to, and it will remain within eye’s reach for you to think
with, greatly increasing your working memory.
Text is referable. When someone puts something in writing, they can be held to account.
Text is addressable, which means that it is not just citable, it is trail-followable, where an
author can clearly state what is referred to, for the reader to verify.
Text is characterised. My father would seldom say that he disagreed or that someone was
wrong. Instead he would say that he characterised the issue differently. This is what we do
when we write glossary terms. We present our personal definition of something, we make no
assumption of knowing a universal truth. This is powerful.
Text is where we place much of our most important thoughts, in laws, records, diary entries,
and love letters–civilisation was ‘literally’ written into existence.
Text is simply the most advanced symbolic communication media our species has so far
managed to come up with.
Text is more. Much more than I can list.
Text is also largely ignored. This is something we are trying to change with our effort
around The Future of Text book and Symposium, as well as with the software we are
producing (since, as Alan Kay puts it, the best way to predict the future is to invent it) and the
infrastructure we are working on. We live in a perilous time but I believe we can write our
way out.
Thank you for reading this, thank you for being a part of it.
The Environment of Text Has Evolved
Even though I lament that text and how we interact with text has barely changed, it is clear
that the environment text is produced and consumed in has changed:
It’s overwhelming. In the 21st century we do not have the luxury of time to even try to
read every document in our field in fine detail. There is simply too much information
published. We must develop new ways of reading to deal with the volume of material and get
rid of the snobbish notion that we need to read everything deeply. When we come across
work which is important to us, then we do indeed need the opportunity to digest and reflect,
but we equally need to reduce the time wasted on immaterial material. This is not new but it’s
worth noting.
It’s active. Text today is active in social media, where considerable resources have been
invested by those who profit from what is sometimes called ‘engagement’ in social media, as
well as state actors who wish to manipulate the views of whole segments of populations
through posts crafted with thorough knowledge of the population they are targeting. Those
with vast resources have weaponised text, ‘civil defence’ remains weak, and the average
citizen does not have access to comparably powerful text tools.
Text Is Getting Shorter and More Connected
Text authored for linear entertainment needs no more improvement than paintings do;
nobody will complain that a painting is not up to date, and neither will anyone complain that
Shakespeare is too linear.
I would say, however, that text authored for learning or knowledge work in general does
need improvements in how we interact with it.
What we see, as I’m sure you would agree, is that information is getting shorter and
more connected. This goes to the heart of what digital text, or hypertext, is, and it’s not new.
The hypertext community has been historically focused on linking smaller nodes of
information rather than on larger, fixed documents [2] [3] [4] [5]. In contrast, scholarly
communication is centred around the consumption (studying and literature reviews) and
production (publishing and submissions to teachers and journals) of frozen ‘complete’
documents.
We can view it as a cost issue: previously, going to the library to get a book took time
and effort, so it made sense to read the whole thing. Now we can instantly
search and jump to a myriad of documents, so the cost of access is so low that it competes
with the mental cost of concentrating on a large volume in a linear
fashion. It is therefore not enough to decry the lack of long reading or laugh at the
three-minute-maximum videos on TikTok. As text thinkers we need to adapt our thinking about
what text is when it is digital and about what we want text to do for us. And this is our
challenge: to use the power digital technologies afford us without losing the depth and
rigour of academia.
What We Need To Augment
It’s fair to ask what exactly we should build to augment our interactions with text, and the
answer is of course many different things, but I think I can safely tell you about what our
group is looking into:
The most basic writing is noting something down. This can be based on something one
is reading (an annotation), or just a thought. A challenge is to provide the opportunity to note
this down in a myriad of ways, with context/metadata. If you have a thought while walking
down Piccadilly because a building inspired you and you say this to the assistant on your
watch, this note should be accessible based on the text itself as well as the time of day, your
location, what you were doing before, during or after making this note, who you might have
been with/talking to, the weather that day and so on.
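As a rough sketch only, and not a description of any existing product, such a contextual note could be captured as a small data structure; the field names below (location, companions, weather and so on) are illustrative assumptions drawn from the example above:

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class ContextualNote:
        # The note itself, plus the context captured alongside it.
        text: str
        created_at: datetime
        location: Optional[str] = None         # e.g. "Piccadilly, London"
        activity_before: Optional[str] = None  # what the author was doing beforehand
        companions: List[str] = field(default_factory=list)
        weather: Optional[str] = None
        source: Optional[str] = None           # e.g. "watch assistant dictation"

        def matches(self, query: str) -> bool:
            # A note is findable by its text or by any piece of its context.
            haystack = " ".join(
                filter(None, [self.text, self.location, self.activity_before,
                              self.weather, self.source] + self.companions)
            ).lower()
            return query.lower() in haystack

    # The Piccadilly thought, retrievable later by place as well as by wording.
    note = ContextualNote(
        text="The curve of that building could shape the reading view",
        created_at=datetime(2021, 10, 31, 9, 15),
        location="Piccadilly, London",
        weather="overcast",
        source="watch assistant dictation",
    )
    assert note.matches("piccadilly")

Because the context is stored alongside the words, the note can later be found by place, weather or company as easily as by its text.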
And here comes the fun part: We need to create incredible spaces for interaction with
the knowledge we have gained and connections to external knowledge to follow our
curiosity. This is an area seeing much exciting attention from companies including Roam and
Notion. How far can we build these thought spaces, based on the rich information noted
down earlier and how can we publish with this information staying intact? This is the
question we ask, and we ask how we can do it in an open way so that there can be real
competition in this space, with the user owning their data at every point.
And then we come to publishing the work. It’s not enough to simply fill a bucket of
knowledge. At some point it needs to be presented in a coherent form, as a report, academic
paper or even a journal entry for ourselves. The idea here is that it should be possible to
publish using PDF with Visual-Meta appended to the document, at a level of detail
depending on what you want to share, including all the original context, to allow a reader to
expand the document into rich views.
From the perspective of this series of books, I think we should also look at different
ways of binding text together in volumes. For example, this book is the second volume of
The Future of Text. I hope there will be more. An interaction we have built into our PDF
viewer Reader means that you can select text and cmd-f to see all the occurrences of that text,
which can be pretty useful for seeing which other authors referred to the same keywords. This
means that maybe we should make a single volume of The Future of Text 1 & 2, so that you
can quickly and easily see who else is writing about a specific keyword you see and find
interesting. That would make the book massive though. The point of this trivial example is
that you, the reader, should be able to do book-binding and re-binding as you see fit,
specifying that you want to open book one or both in a single binding. Maybe you want to
share a few articles with a friend and maybe that friend wants to read only those or ‘open’ the
book back into a full set, without losing any annotations they might have made to their first-
read articles. I think this is quite evocative.
Maybe. What is definite is that we should experiment with different ways for authors
and readers to package the information, serving, as Ted Nelson so eloquently put it some time
ago, ‘God The Author and God the Reader’.
Going Beneath The Surface
So far the text we share only carries surface meaning.
If we want richer dialogue we need richer means of expression and communication.
And I think it’s fair to say that there has been no time in history when we have had a greater
need than now. Let me tell you what I mean. When we share what we write in the form of a
document, we share only the surface of what we have written. We do not include who we are,
how the document is structured, or who we cite, in any way which is easily and robustly
accessible programmatically.
And we share in document formats which may not last the decade, let alone a hundred
or a thousand years.
I say the solution is simple: Let us write our way outa, let us write our metadata into an
appendix in a form readable by both mankind and machine.
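As a purely illustrative sketch of that idea, the few lines of Python below append a visible, BibTeX-like metadata block to the end of a document's text. The wrapper markers and field names here are assumptions made for the example only; the actual Visual-Meta format is specified at visual-meta.info.

    def append_metadata_appendix(body: str, title: str, author: str, year: int) -> str:
        # Build a plain-text appendix in a BibTeX-like form that both people
        # and programs can read. Markers and fields are illustrative only.
        appendix = "\n".join([
            "",
            "Appendix: Document Metadata",
            "@{metadata-start}",
            "@article{",
            f"  title = {{{title}}},",
            f"  author = {{{author}}},",
            f"  year = {{{year}}},",
            "}",
            "@{metadata-end}",
        ])
        # Because the appendix is ordinary text, it survives any format change
        # that keeps the words themselves readable.
        return body + "\n" + appendix

    document = append_metadata_appendix(
        "The surface content of the document goes here.",
        title="The Future of Text",
        author="Frode Alexander Hegland (ed.)",
        year=2021,
    )
    print(document)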
Since Context Matters & Metadata Matters, Add It, Don’t Hide It or Lose It
Let us write metadata plainly for all to see.
Our group proposes the Visual-Meta approach, as outlined on visual-meta.info but how
we do it is less important than that we do it at all. I’m grateful that the ACM is doing a pilot
study on this with us and I’m grateful that others are joining us in dialog for how best to
make ordinary documents contain rich information.
Let us also do this for web pages, which is what we are working on now, please join us.
And let us try to find a way to do this when we converse using social media. Let any post
refer to any section of a longer text (document or web page), let citations flow through to
help the reader check the veracity of what’s stated. Let’s more explicitly present how we
characterise aspects of our knowledge, rather than cling to arrogant statements of some
notion of ‘truth’, when we should be looking at aligning our perspectives and how we choose
to interpret data.
In the beginning was the word. In the beginning of a new Age of Enlightenment we
need a new kind of word to unleash human thought and understanding. Let’s build this
together.
Augment Everybody
Now is a time when education really matters, for individuals and for society at large. We
simply cannot afford the weight of being dragged down by those who will not engage with
science and reason, yet we certainly do not want people to listen unquestioningly to those
who speak from a perspective of science and reason. The solution is, as
we often mumble to each other over coffee, education. But we only seldom
look at what improving education would really entail, and I accept that I, too, am not qualified
to comment on this, though I certainly have opinions based on my teaching at London
College of Communication and on getting my PhD from the University of Southamptonb.
What I will say is that tools for thought matter. It’s easy to say students need a good
laptop (and they do). It is equally important to focus on the software on these laptops:
while the large software companies have fantastic creative power, the systems that students,
and the rest of us, generally have for reading and writing have not improved measurably over
the last few decades, even as the hardware performance of the machines and the entertainment
software (games and movie CGI) have increased markedly. Some of us are doing what we can
by building what we think are better tools. Join us, even if that simply means complaining
about what the status quo offers.
Let’s augment everybody, let’s leave no mind behind.
Let’s end by going much further back in time than the Bronze Age Collapse and look at
an early pivotal moment in the history of text, when Mesopotamian city states transformed
how they used tokens to manage their agricultural resources. From around 7500 BCE they had
been using tokens, but as Denise Schmandt-Besserat wrote in the first Future of Text [6], around
3300 BCE the tokens began to be wrapped in a clay envelope which had indentations of the
contents to show what was within. This would eventually lead to the full information being written on
the surface clay; the token inside would no longer be necessary.
I am a huge proponent of making metadata visual in order to make it easy for mankind
and machine to read and to ensure it won’t be stripped away if the document format changes–
as long as the ‘surface content’ can be read, this will remain. It is clear that metadata cannot
always be visible however, such as when you copy from a document with Visual-Meta and
the system attaches the full Visual-Meta to the clipboard–something you cannot see or easily
access–so let’s end with thoughts of resurrecting the Mesopotamian tokens as a poetic idea of
what can be behind the text, what context and connections can be embedded within. Let’s
imagine thousands, or hundreds of thousands of modern documents being broken open and
out comes useful units of knowledge.
Let’s shatter frozen, superficial documents, go beneath the surface and create a truly
liquid information environment where knowledge and understanding flows.
Let’s augment everybody, let’s leave no mind behind.
P.S.
As I still sit here in Cyprus by the water, it is getting a little lighter in the sky. As I enjoy
working on a modern laptop, Apple’s 13” M1 with AirPods Pro, listening to what is
sometimes called ‘Melodic Techno & Progressive House Mix’c in hour-long, sometimes
longer, sets on YouTube, I can drown out what are distracting background sounds for me (I
have Misophonia) and enjoy my coffee ‘frappe’, and I wonder how I would have been able to
work here all those thousands of years ago, if at all. Leaving aside the cultural aspects of the
likelihood I would even have been able to afford an education, and focusing just on the tools,
I am so grateful for the augmentations the technology offers me. I can speak a message to Siri
on my Apple Watch so that I don’t have to spend a lot of time coordinating with my wife who
is now coming down to the beach with my beautiful baby boy Edgar who is now four and a
half. I can instantly check my curiosity using our software tool Liquid with Google &
Wikipedia.
And I wonder, if we put some real effort into improving the future of text, what will my
descendants and Edgar’s descendants have to augment how they think and communicate? I
wonder further as I sit and do a few edits at 30,000 feet while flying back to the UK. I have
my impressive, futuristic feeling laptop but the in-flight Wi-Fi has gone down (that is of
course a topic by itself, how we don’t refer to the Internet, or even the Web anymore in
general discourse, we use the means of connection–Wi-Fi, Broadband or 4G/5G). I feel
slightly disconnected, I wonder what the rest of the team are thinking about the book and
about what we are doing. I am not a Phoenician from thousands of years ago sailing through
the Mediterranean, navigating by starlight and the outlines of shores, ever watchful of
changing weather and hostile craft, my mind is in cyberspace, attempting to navigate and
make sense of a new world, as fraught with new dangers, hoping, hoping desperately that we
have the foresight to really look deeply at the tools we build to connect us to our thoughts and
each other.
Back in my home in London doing the final edits to this introduction and working on
putting the book together, I again glance up at the sky. Cold and frosty, nothing like the
balmy Mediterranean sky of just a few days earlier, and I am reminded that this volume is
published the same month as the James Webb Space Telescope is launched. It is amazing
what humanity can do when we just simply get on with it. We will soon be looking into
interstellar space with a precision previously far out of our reach. I hope we can make a
similar effort in how we look at each other and our own minds, through the ways we express
ourselves symbolically and beyond.
Let’s dream about how we can do that. And build to make it a reality.
Frode Alexander Hegland
Editor, frode@hegland.com
Cyprus, 31st of October 2021
Contributor Bios
Alexandra Saemmer is Full Professor of Information and Communication Science and
co-director of the CEMTI laboratory at University of Paris 8, France. Her research focuses on
socio-semiotics of cultural productions (texts, images, videos, websites, platforms, …), and
digital literature. She is also an author of digital literature herself.
Ann Bessemans Professor and post doctoral researcher at PXL-MAD School of Arts /
Hasselt University, research group READSEARCH.
Barbara Tversky is a professor emerita of psychology at Stanford University and a
professor of psychology and education at Teachers College, Columbia University. Author of
Mind in Motion: How Action Shapes Thought.
Robert E. “Bob” Horn is a fellow of the World Academy of Art and Science. For 27
years, he was a Senior Researcher at the Human Science and Technology Advanced Research
Institute (H-STAR), Stanford University, where he worked with international task forces and
governments on wicked problems and social messes. He has taught at Harvard and Columbia
universities and is the author/editor of ten books. www.bobhorn.us
Bob Stein Founder of Criterion, Voyager and the Institute for the Future of the Book.
Brendan Langen is a Conceptual Designer, Researcher + Writer in Chicago.
Daniel Berleant is the author of The Human Race to the Future, 4th ed., 2020, pub. by
Lifeboat Foundation and available on Amazon.
Daveed Benjamin CEO of Bridgit.io. Author of the first-of-its-kind augmented reality book,
Pacha’s Pajamas: A Story Written By Nature, which features Mos Def, Talib Kweli, and Cheech
Marin.
Erik Vlietinck Freelance technology journalist and reviewer/analyst for Byte,
Macworld, Publish, Photoshop User, IT Week, and many others. Marketing writer for Ricoh
Europe, HP, Phoseon, Quark, EFI… Switched from being a lecturer in ICT law 28 years ago
to writing.
Fabian Wittel & David Felsmann. Napkin Co-Founders. David Felsmann: Believer in
knowledge creation, likes to build companies, couldn’t find an easy yet powerful personal
knowledge management system and hence builds Napkin.
Fabian Wittel: Lives at the intersection of code and learning, spent the last years in
organizational development and now builds Napkin, a simple system to connect and sort
thoughts.
Fabio Brazza is a Brazilian composer, rapper and poet.
Faith Lawrence. Data Analyst in The National Archives’ Catalogue Taxonomy and
Data department and Project Manager for Project Omega, TNA’s new pan-archival
catalogue/editorial management project.
Imogen Reid completed a practice-based PhD at Chelsea College of Arts, her practice
being writing. Her thesis focused on the ways in which film has been used by novelists as a
resource to transform their writing practice, and on how the non-conventional writing
techniques generated by film could, in turn, produce alternative forms of readability. Her
work has appeared in: Hotel, LossLit, gorse, Zeno Press, Elbow Room, Sublunary Editions,
IceFloe Press, ToCall, Experiment-O, Soanyway, and The Babel Tower Notice Board. She
has pamphlets with Gordian Projects, Nightjar Press, and Timglaset.
Jad Esber Co-Founder, Koodos. Fellow, Berkman Klein Centre for Internet & Society
at Harvard University.
Jamie Joyce President & Executive Director of The Society Library
www.societylibrary.org
Jay Hooper holds a doctorate in computer science and has over 15 years of experience
in human-computer interaction, sociotechnical research, and user experience. He takes a
mixed methods approach to research, and is a leader in the web science community, bridging
stakeholders from industry and academia. Jay recently served as programme co-chair of the
ACM Web Science conference, and currently works as an independent consultant in Canada.
Jeffrey Chan is an assistant professor at the Singapore University of Technology and
Design. His work focuses on enlarging the possibility of ethics in design.
Jessica Rubart Professor of business information systems, OWL University of Applied
Sciences and Arts.
Joe Devlin is an artist living and working in Manchester and Leeds. His most recent
solo exhibition ‘Gatefolds’ was held at Studio 2, Todmorden, in 2019. His work has appeared
in Cabinet Magazine, Frozen Tears III (edited by John Russell), Text 2 (edited by Tony
Trehy), ToCall Magazine, edited, published, and printed by psw (Petra Schulze-Wollgast),
and No Press (derek beaulieu). His latest publication, Net Reshapes, is published by Non-
Plus-Ultra. He runs the publisher Nuts and Bolts. For more information, please visit
https://nutsandboltspublishing.com
John Hockenberry I spent more than 30 years in print and broadcast journalism. For
the last 45 years I have been a paraplegic. Disability is an experience limited by text. My
career ended with some texts. I want to explore the evolution and future of the VERB: Text.
Jonathan Finn developer of creativity apps (e.g. Sibelius music writing system).
Karl Hebenstreit, Jr. IT Specialist at the US General Services Administration.
Kyle Booten Assistant Professor, Department of English, University of Connecticut,
Storrs; poet/programmer developing tools for noöpolitics; author of "To Pray Without Ceasing"
topraywithoutceasing.com
Lesia Tkacz is a Web Science PhD researcher at the University of Southampton, and
focuses on studying computer generated novels, paratext, and potential readers. Her work
also includes creative AI collaboration projects at the Winchester School of Art.
Luc Beaudoin Adjunct Professor of Cognitive Science & Education at Simon Fraser
University.
Mark Anderson A member of the WAIS (Web and Internet Sciences) Group at
Southampton university and an independent researcher in Hypertext. His focus is on how
hypertext can be used for more efficient retention of knowledge within organisations. He is
also active in the recovery and preservation of old hypertext systems.
Megan Ma is a Residential CodeX Fellow at Stanford Law School. Her research
considers the limits of legal expression, in particular how code could become the next legal
language. Her work reflects on the frameworks of legal interpretation and its overlap in
linguistics, logic, and aesthetic programming. Megan is also the Managing Editor of the MIT
Computational Law Report and a Research Affiliate at Singapore Management University in
their Centre for Computational Law. As well, she is finishing her PhD in Law at Sciences Po
and was a lecturer there, having taught courses in Artificial Intelligence and Legal Reasoning,
Legal Semantics, and Public Health Law and Policy. She has previously been a Visiting PhD
at the University of Cambridge and Harvard Law School respectively.
Niels Ole Finnemann Professor Emeritus. Department for Communication, University
of Copenhagen, Denmark. Former head of Center for Internet Studies and Netlab at
University of Aarhus and professor in Internet History, Digital Cultural Heritage and Digital
Humanities at Copenhagen University.
Peter Wasilko New York State licensed Attorney, Independent Scholar, and
Programmer.
Philippe Bootz, e-poet and Professor, Université Paris 8. https://elmcip.net/person/philippe-bootz
Rafael Nepô is an Information Architect and the founder of Mee, a modular platform
to organize information in a visual way. His side projects involve researching Semiotics,
Reading & Writing, Encyclopaedias, Library & Information Science, Information
Architecture, Iconography. A lot of his work is influenced by Japanese Philosophy. mee.cc
Richard A Carter is an artist and Senior Lecturer in Digital Media at the University of
Roehampton. Carter is interested in examining questions and issues concerning more-than-
human agency within digital art and literature, considering how these generate insights into what
it means to perceive, to articulate, and to act within the world. Carter's research is embedded
within his artistic practice, developing hybrid art objects that meditate on the potentialities of
sensing, knowing, and writing at the intersection between human and machinic actors.
Rob Haisfield Independent Behavior Design and Gamification Consultant / Behavioral
Product Strategist at Spark Wave.
Sam Brooker Assistant Professor in Digital Communication and Convenor, BSc, MSc,
PhD, FImanfUK, F SmeIR, at Richmond, the American International University in London.
Sam Winston His practice is concerned with language not only as a carrier of messages
but also as a visual form in and of itself. Initially known for his typography and artist’s books
he employs a variety of different approaches including drawing, performance and poetry.
https://www.samwinston.com/information
Sarah Walton is an author, digital consultant, lecturer and coach. Founder of
Counterpoint Digital Consulting counterpoint.bz. Founder of Soul Writing & Soul Business
drsarahwalton.com
Stephen Fry is an English actor, broadcaster, comedian, director and writer.
Tim Brookes founded The Endangered Alphabets Project
www.endangeredalphabets.com and The Atlas of Endangered Alphabets at
www.endangeredalphabets.net
Vinton G. Cerf co-inventor of the Internet
Yohanna Joseph Waliya, UNESCO Janusz Korczak Fellow, ELO Research Fellow,
Winner of Janusz Korczak Prize for Global South, Curator of MAELD & ADELD,
University of Calabar.
Alexandra Saemmer
Writing in the age of computext
All digital texts are polyphonic and performative, whether they are written on Word,
Prezi, After Effects, WordPress, Facebook, Instagram, Power Point, or
programmed “by hand”d. Their content and structure are executed live on a machine whose
hardware and software vary from one brand to another, from one generation to another. For
me as an academic and writer of digital literature, the future of text is deeply linked to its
digitalization: I define it as a tensive dialogue between the human and the technical writing
tool, marked by economic and ideological strangleholde, but also by intense moments of
inspirationf and discovery.
In order to actualise, a digital text relies, first of all, on an operating system that
structures the display in advance: the Apple symbol or the Microsoft primary-coloured
window logo should therefore be considered an integral part of the text. Software tools for
writing, editing and publishing are not neutral intermediaries either: they embody the
voices of their creators, from Steve Jobs to Mark Zuckerberg; they materialise points of view,
values and ideologiesg through the proposals they make to the user in menus and icons, and
through the predefined frames they impose on a text. Drop-down menus, for example, are
injunctions to give media content a prescriptive format; forms to be filled out limit the space
for expression, resulting in a way of organising the textual content that, in part, is not within
the remit of the author.
“Architexth” is what the French researchers Yves Jeanneret and Emmanuel Souchier
call the highly structured writing interface of software tools and platforms. Nevertheless, a
prefabricated device can result in active appropriation. The poetics of digital text lie, for me,
in the complex interaction between writers and readers who perceive, interact with and
interpret the contents and structures of the text, and its polyphonic programme that makes the
voice of the author resonate, as well as those of the software designers and manufacturers, and
of the computer that updates the programme. From the infancy of net arti to today’s writings on
social networksj, this balance of power has become a theme for many writers who attempt to
deconstruct it.
Recently, the formatting process of the text has taken a new turn that I refer to as
“computext”. While architext imposes a form on media content, computextk anticipates its
very production, and sometimes even writes instead of the author. Predictive text generators,
like Gmail’s “Smart Compose”, use machine-learning processes that predict what the human
user is about to write according to probability. For example, when they answer an email, they
just have to start their sentences for the system to complete them automatically. The
suggestions are calculated by algorithms that detect expressions used regularly by all Gmail
users, and by the individual.
The probable continuation of the text is calculated as soon as the human starts typing an
email, but the results are nonetheless limited by the programme since, early on, Google had to
deal with violent comments in the generated results. When Gmail writes out what the human
writer may or may not want to express, the result reflects a representation of the brain as a
network of highly routinised connections, moved by almost reflexive habits. But it also
reflects what Google tries to impose as standards of expression on its community of users.
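Google’s actual system is of course far more sophisticated, but a toy sketch of the underlying idea, predicting the next word from how often it has followed the current word in earlier writing, might look like this in Python:

    from collections import Counter, defaultdict

    def train_bigram_model(history: str):
        # Count, for each word, which word most often follows it in past messages.
        follows = defaultdict(Counter)
        words = history.lower().split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
        return follows

    def suggest_next(model, prefix: str) -> str:
        # Offer the most probable continuation of the last word typed so far.
        last = prefix.lower().split()[-1]
        candidates = model.get(last)
        return candidates.most_common(1)[0][0] if candidates else ""

    past_emails = "thank you for your email . thank you for your time . i hope you are well ."
    model = train_bigram_model(past_emails)
    print(suggest_next(model, "Thank you for"))   # -> "your"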
Web apps like “Write with transformerl”, based on GPT-2 technology, give an idea of
what an artificial neural network is capable of when it generates text without these
constraints. The writer first of all chooses an AI model. By clicking “trigger autocomplete”
on the page, options are given for the continuation of each sentence. By setting the
“temperature”, the writer can opt for a varying degree of conventionality; in other words, they
can encourage results that converge with, or deviate from, the regular responses in the
generator’s database. If the writer, for example, has the app complete a list using first a high
level of conventionality, then a low one, it seems as if the app can decide to no longer follow
the schema it detects. The writer can go even further and create a new database, using the
already existing algorithmic structures.
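The same behaviour can be sketched with the open-source transformers library, where a ‘temperature’ parameter plays the role described above. The model name and settings below are simply one plausible configuration, not those of the web app itself:

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "The future of text is"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    for temperature in (0.3, 1.2):  # low = conventional, high = more surprising
        output = model.generate(
            input_ids,
            do_sample=True,            # sample rather than always take the likeliest word
            temperature=temperature,
            max_length=30,
            pad_token_id=tokenizer.eos_token_id,
        )
        print(f"temperature={temperature}:",
              tokenizer.decode(output[0], skip_special_tokens=True))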
In the last few decades, digital literature has invented its own poetics in relation to
software architexts. How will these poetics be reinvented in the age of computext? The
results calculated by the “Write with transformer” neural network give us a foretaste of the
potential of this dialogue. Obviously, the text generator reveals discursive routines, but the
writer can regulate the level of routinisation, and start a lively conversation with the text
library it is based on. Each piece of text proposed comes from the texts in the tool’s database,
and these origins continue to resonate in the new text produced as the author selects, rejects
or rewrites the proposals.
text created with https://transformer.huggingface.co/doc/gpt2-large. Saemmer, 2021.
Writing with machine computext might become a sort of curatorial task: it involves finding
an individual path through the avalanche of content generated by AI. Singular texts emerge
from this mesh while remaining deeply connected to the algorithmic structure of the tool, and
the content of its database.
I imagine the future of writing with neural networks as a process of enquiry that digs into
already existing texts in order to create new paths.
Ann Bessemans
Legibility/Readability & Visibility
Abstract:
Typography and legibility/readability are terms that are often used inappropriately,
without considering their meaning in the context of their origin. After all, what
makes some letters easier to read than others? Is the main purpose of a text or typographic
message to be read, and is legibility/readability therefore one
of the most important criteria? Or is there more to it: can typography
communicate with its audience in other ways, and thus be quantified differently than by
legibility? This text will introduce important definitions and, in relation to them, will give
a quick analysis of the typographic communication process.
The most famous definition of typography, and undoubtedly one of the most pointed, is that of
Stanley Morison (1889-1967) in his First Principles of Typography: “Typography may be
defined as the art of rightly disposing printed materials in accordance with specific purpose:
of so arranging letters, distributing the space as to aid to a maximum the reader's
comprehension of the text. Typography is the efficient means to an essentially utilitarian and
only accidentally aesthetic end, for the enjoyment of patterns is rarely the reader’s chief aim.
Therefore, any disposition of printing material which, whatever the intention, has the effect of
coming between the author and the reader is wrong.” [8]
Beatrice Warde [9] tried to clarify the function of typography through various
metaphors which answer to the same principle as the one from Morison: Reading should not
become a mere viewing experience, where the reading process is hindered by the purely
visual nature of the typographic design.
Many of the contemporary typographic (legibility) definitions in use start from
invisible typography, from convention. Convention, an unavoidable aspect of legibility/
readability, is understood as a form of certainty for mapping out routes. This means that when
you accept the convention, you can reasonably expect to read undisturbed, with the familiar
letterforms and familiar typographic patterns.
The contemporary definitions that emphasize the purely conveying role of typography in
transferring a message were set out throughout the history of the typesetting craft, but also
during the early 20th century, which was characterized by pioneers such as William
Morris, Cobden Sanderson, Emery Walker and St. John Hornby, who carried out a reappraisal
of craftsmanship. According to these typographers, there was a decline in typographic
standards, which they attributed to "the machine," and so they printed their books according
to standards derived from 15th and 16th century printed matter.
Typography was almost synonymous with "the art of printing books" in the 15th and 16th
centuries and the possibilities were very limited. In particular the strict horizontal-vertical
pattern was difficult to detach from. A series of technological developments changed this. As
a result of the age-old tradition, Morris and his peers considered these deviations from the
horizontal-vertical pattern inefficient and inferior.
After the First World War, revolutionary changes took place. In England the "Invisible
Typography" arose, and in Germany the "Neue Typographie" [10]. Both tendencies were a
reaction to the decline of typographic standards, but also to the lavish decorations which were
a staple in the 19th century. The "Invisible Typography" and the "Neue Typographie" have in
common that they prioritize an unimpeded transfer of information. In the 1960s and 1970s
Swiss typography followed, with its strict grids and form schemes. Not the typography
but the content should make the reader think, which is why the designer must act with great
restraint. The dogmatic belief that using "only what is strictly necessary" is essential for
effectively conveying content was underlined during these times.
Today, texts in printed matter or on screens are often overwhelmingly designed, even
noisy. If, according to Gerard Unger, you are willing to get used to such a design or if the text
is interesting enough, it also creates a typographic "silence". The design dissolves and allows
itself to be read, if only for a moment, before it reclaims attention to itself. Visibility is not an
obstacle a priori, and legibility is a flexible concept [11]. It seems necessary that
qualifications such as invisible and visible/notable be more clearly delineated.
Where is the boundary: when do letters and typography become too visible? And is visibility/
noticeability always disadvantageous, or does standing out also have a clear function, a
legibility function?
By the end of this text, it should be clear that typographic reading (legibility/readability) research can focus on both visible and invisible typography. When it comes to legibility within invisible typography, it often concerns disappearing letters, or even disappearing design, as framed within the known definitions. For example, when a newspaper is not designed as a newspaper, it is no longer invisible; this immediately disturbs the reader's expectations of the medium, partly because, through years of reading experience, the reader has built up an unconscious typographic knowledge.
The typographic definitions of legibility and readability are roughly in line with what can be described as micro and macro typography. Legibility, or micro typography, concerns the formal characteristics of the basic shapes of letters and their smallest details (such as letter proportions and contrast), as well as the letters within words, sentences and text. Readability, or macro typography, depends primarily on the typographic layout and secondarily on the motivated choice of typeface.
In addition to legibility and readability, noticeability / visibility can also be important. Until
today visibility, as a theoretical term, has barely been treated as a separate level in type
design and typography next to legibility and readability. The entire discussion of visibility as
a legibility criterion or legibility component within typography balances time and again between influences emanating from technological, theoretical, economic and social developments. In interaction with these, but just as often in a leading way, developments at the design level have been catalysts and driving forces for visibility.
Contrary to legibility & readability, visibility can be situated on both a micro and macro
typographic level.
In my opinion, typographic reading research could be conducted on three levels: legibility, readability and visibility/noticeability.
To give some context in introducing the term ‘visibility’ into legibility in general, it
should be understood that the increased variation within typography has an effect on its
visibility / noticeability. In the long period between the invention of printing (15th century)
and that of the trio of lithography, photography, offset (19th / early 20th century),
experimentation with typography was rare. So many materials and (digital) techniques have
now been developed that a revolution had to take place. The production process now offers us
almost unlimited possibilities.
However, the theoretical approach to typography / to typographic definitions in many
graphic manuals lags behind these developments. Due to the speed and scale of the technical
changes, the new machines, techniques and programs have received more attention than the products. There has been a certain intoxication with the new technologies. In addition to literal illegibility, this can also lead to illegibility of the content of texts. The question, then, is whether the increased variation within typographic design benefits effectiveness as well as visibility, and how these two relate to each other and can thus be studied.
Within this story of visible and invisible typography, it seems paradoxical to treat
legibility/readability & visibility as equals. When visibility is given priority, designs may be made less legible or readable. Should legibility be the only quality label within a typographic design? Legibility has become, and remains, a very flexible concept.
New interpretations are emerging.
This text is based on the lecture Ann Bessemans gave during the ATypI (Association
Typographique Internationale) congress 2020 All Over (online edition).
Bibliography
Morison, Stanley. 1967. First Principles of Typography. London: Cambridge University Press.
Tschichold, Jan. 1928. Die neue Typographie, Ein Handbuch für Zeitgemäss Schaffende. Berlin: Verlag des Bildungsverbandes der Deutschen Buchdrucker.
Unger, Gerard. 1997. Terwijl je leest. Amsterdam: De Buitenkant.
Warde, Beatrice. 1956. The Crystal Goblet: Sixteen Essays on Typography. Cleveland and New York: The World Publishing Company.
Barbara Tversky
The Future is the Past
What is text anyway? It can’t be impressions in clay or ink on parchment or pixels on a
screen. Those are manifestations of text. Text must be more abstract than any instantiation of
it. Perhaps it’s meaningful groups of linguistic characters visible to the eye. But that
canonical way of understanding text shuts out meaningful groups of sounds audible to the
ears or meaningful patterns of dots tangible to the hands. There are many who use their ears
or fingers to read text rather than their eyes. These are all ways to sense language.
Text, in that narrow sense of meaningful groups of visible characters, developed to
represent language. Language, in turn, developed to convey thought, meanings, though it is
not the only means of conveying thought. Spoken language disappears quickly; meaningful
groups of visible characters stay around. They can preserve language, making it permanent
and public. Putting thought permanently into the world doesn’t require written language;
images can do it. Text-like images were created long before writing. They are the earliest
evidence of symbolic thought. Currently, the earliest known public expression of thought is a marvelous depiction of a pig and a buffalo hunt found in a cave in Sulawesi [12] and judged to be 44,000 years old. Images in caves and above ground, strewn all over the world, the Bayeux
Tapestry, Trajan’s column, Chinese scrolls, Egyptian tombs, tell stories of hunts and
conquests and growing wheat and more. These represent thought quite directly, like the early
(5-6,000 years ago) pictographic attempts to represent language, also invented all over the
world, Egyptian and Mayan hieroglyphics and Chinese characters among them.
Mapping meaning directly to pictographic characters encounters problems. One is that
many meanings, if, where, truth, yesterday, Aristotle, don’t have clear visual representations,
a problem partly solved by adding ways to map sound. A messy mapping, and eventually
leading to another mapping, mapping sound rather than meaning to characters. It turns out
that each language has a relatively small (20-40) number of basic sounds from which a
multitude of words can be created. Words and sentences in turn can express a multitude of
meanings. Around 3000 years ago, workers speaking a Semitic language in the Eastern
Mediterranean invented the alphabet, a small set of characters that represent sound directly.
This efficient system was adopted and adapted by many languages. That history is known
from remnants of ancient bits of ink on parchment and impressions in clay and carvings in
rock. Mapping sound to characters also gets messy as readers of English know, though,
thought, tough, through. And throw in threw.
As every student knows or should know, Gutenberg invented moveable type, a cultural
leap, but practical only for alphabetic languages with a manageable number of characters. It
was tried but did not work for Chinese. Moveable type easily prints ink on a page, an
important technological advance. Moveable type enabled mass production of books and
general literacy and put scribes out of business. Words and images parted ways; words were
easy to produce and took over mass communication. Images flourished as art.
What you are reading and I am writing is not ink on a page; it’s pixels on a screen.
Pixels can form meaningful groups of letters but they can almost as easily form depictions of
all sorts--maps, charts, diagrams, cartoons, sketches, comics, photographs, video--robbing
characters of the alphabet of some of their advantages. The rapid rise of emojis, of comics
and graphic books, of Instagram and Pinterest and YouTube, where images, static and
animated, of varying abstraction and purpose, dominate written words, attests to the thirst and
utility for vivid forms that express meaning directly. Journals are filled with research
demonstrating the advantages of maps over verbal directions, of clear diagrams and charts
over explanations in words, of creating visual explanations over verbal ones for learning, of
messy sketches for creative thought and innovation. Journals and books are also increasingly
packed with maps and diagrams and charts and photos and works of art.
Words have powers; images superpowers, and not only because written words are
images. Speakers of any language, young and old, literate or not, can readily recognize a line
drawing of a chair, and a spare armless chair evokes different sensations and associations
than a plush overstuffed one. True, you can imagine both and probably did as you read, but
you may also agree that the name of your child or lover—or your own—pales next to a
photo. Many concepts don’t lend themselves to direct depiction, but may be depicted
metaphorically, just as they are in language, the scales of justice or the White House.
Sometimes only a word or symbol will do, and they find their place in graphics. Images can
be abstract as well. Jump now to the simplest, dots and lines. Klee famously observed: A line
is a dot that went for a walk. Dots are easily understood as places, events, or ideas; lines as
paths or links or relations in a sketch map, time line, or network. A physical walk or a mental
one. Add arrows as asymmetric relations and boxes as containers of stuff of any kind and you
have a small toolkit from which to construct a wealth of maps, diagrams, and charts. The
meanings of these simple abstract forms, dots, lines, boxes, and arrows, are immediately
apparent, at least to educated eyes. As before, backed by research. The same toolkit—and
much more—serves gestures.
Even before there were cave paintings and petroglyphs, communication was face-to-face. We do use words to communicate thought, but we also use gesture, intonation, facial
expressions, actions of the body. We use the world—pointing to things in it, arranging sticks
and stones and salt shakers to represent things in it, drawing in the sand, sketching on
napkins. Canonical face-to-face communication is naturally multi-modal. As is the future of
text.
Bibliography
Tversky, B. (2019) Mind in Motion: How Action Shapes Thought. NY: Basic.
Bob Horn
Diagrams, Meta-Diagrams and Mega-Diagrams One Million Next
Steps in Thought-Improvement
Here is how it started.
A friend sent me this quote from an introduction to a book:
“Therefore, it is suggested that novices first approach this text by going through it from
beginning to end, reviewing only the color graphics and the legends for these graphics.
Virtually everything covered in the text is also covered in the graphics and icons. Once
having gone through all the color graphics in these chapters, it is recommended that the
reader then go back to the beginning of the book and read the entire text, reviewing the
graphics at the same time. Finally, after the text has been read, the entire book can be rapidly
reviewed merely by referring to the various color graphics.”
The author says, “don’t read the text first!”
My background
Having written a book Visual Language: Global Communication for the 21st Century [13],
on the syntax, semantics, and pragmatics of tightly integrating words and visual elements, I
was interested enough to buy the book.
It’s a big book
644 pages
The visual language it contains
538 diagrams
62 Tables
Total: 600 diagrams and tables in 644 pages!
The Book. Stephen M. Stahl [14] Essential Psychopharmacology: Neuroscientific Basis and Practical Applications. Cambridge University Press.
(I used the 2nd edition. It is now in its 7th edition.)
Use. Text. Medical school.
Prerequisites. Psychology; chemistry; biology; medicine
I was amazed. I went around and asked some medical students at Stanford, "What do you do
with your big texts? Like this textbook?"
They said (I’m summarizing): "Oh, we always read the graphics first. We never read the text.
There’s too much to read in medical school. Every few weeks we’ve got a bunch of those 600
page books to cover. The diagrams are faster. You can see the structure of the models. The
tables you can see the data. You don’t have to search around in the text for them."
Is massive use of diagramming part of the future of text? Yes, I think so.
Why? Because we have to build written communication so that learners (and forgetters) can
use it at maximum speed. All of us must be able to scan and skip what we already know. We
must be able instantly to see the structures of the mental models we need to use.
Hypertext will not solve thought-improvement
We live in a world of information overload. Hypertext (without diagramming) will help
somewhat, but will not solve the scanning/skipping problem. Even better hypertext that links
“everything” important and relevant will, of course, be useful. But such better access is not
nearly enough [15].
A major problem is how do we continuously improve our thinking about the world around us.
How do we make sense of it?
Mental models.
How do we make sense of complicated models?
Diagrams.
My general conclusion is: We have to work on the thought-improvement problem. And
integrated sets of diagrams and their meta-theory are immediate next steps.
One other thought-improvement idea. Eco-philosopher Timothy Morton has addressed one of the limitations of our current thought-processes with his invention of the concept of “hyperobjects.” Hyperobjects can be defined as huge phenomena whose concepts are so gigantic in time, space and other characteristics that we have increasing difficulty wrapping our minds around them, and hence cannot easily make sense of them. As examples of hyperobjects, Morton mentions climate change and radioactivity. There is not enough space in this article to extensively describe and discuss his thoughts on this. Look him up (Morton 2013; n.d.) [16].
Crisis in public discussion
We face challenges to our democracy in the public comprehension of the increasingly wicked problems and social messes before us. Again, it is in understanding the complexity of these issues, and the kinds of thought processes they require, that the most progress can be made now.
Importance of diagrams to improving human performance
Two important series of psychological experiments have produced empirical results to
support the conclusion that diagrams improve the efficiency and effectiveness of learning.
The improvements vary in different experiments from 20 to 89 percent over conventional presentation of prose (i.e. the essay form of text) (summarized in [17], [18]).
We can draw two conclusions: (1) using more and better diagrams could significantly
improve learning and, thereby, human performance at many levels of schooling and
subsequent professional work tasks, and (2) we need to create an advanced science and
technology of diagramming because some diagram renderings of a mental model are better
than others for learning, retention, search, and hyper-linking.
Meta-diagramming of diagrams
Are there types of diagrams? Yes, there are the beginnings of a field here, with initial explorations of taxonomies and further attempts to make software that enables different types of problems to be rendered as different kinds of diagrams. I proposed one simple set of meta-categories in my book Visual Language (Horn, 1998).
Proposal: Launch a mega-diagramming project to diagram several complete subject
matters
We need a one-million, integrated-diagrams project for that!
Given the computational capabilities that we have now, it is possible to diagram several
complete and quite different subject matters, fields, or disciplines of science and the
humanities. Does the internet already contain all of the diagrams of one or more subject
matters or discipline? Probably not all, but a considerable amount. It is highly likely that with
artificial intelligence we will find and create the meta-frameworks for much of what is not
currently available.
The mega-diagramming project will not be the “solution” to all our current challenges of
thought-improvement. But it is clearly one of the next steps. And we don’t know what
opportunities the accomplishment of such a project would produce. Just one example might
be an answer to the question: What are the elements of text that do not “fit” into diagrams of
any kind? Another example: the massively diagrammed field of knowledge would permit the
hyper-linking of a fully diagrammed discipline of science. This would permit a new way of
seeing it. New ways of seeing enable new insights, new identification of problems, new
analogs between disciplines, and new ways of redesigning.
Next: Leaders needed
Who wants to lead such an important project? What organization will fund it? I have
suggested elsewhere that it could form the foundation for metaphorically sequencing the
human cognome [19]. A mega-diagramming project is one doorway to that objective.
Notes
1. Information taxonomy. Initial versions of the research on structuring thought referred to here (and its subsequent embodiment in a life-cycle methodology for creating structured documents) were awarded the Diana lifetime achievement award by the Association for Computing Machinery’s SIGDOC (Special Interest Group on Documentation) in 2000.
Mega-Diagramming Project. Horn, 2021.
Concepts
Diagrams. Units of communication that integrate text and visual elements to portray abstract
relationships, changes in time and branching, and internal and external structure of
phenomena
Meta-Diagrams. Study and portrayal of different types of diagrams
Mega-Diagrams. Diagrams that portray very large phenomena and processes. Also known
as information murals or info-murals.
Thought-Improvement. The larger context and one of the goals of the future of text
Graphics. Any single or group of visual elements
Icons. Any relatively small picture or symbol used to identify a thing or an idea: A small
picture or symbol used to identify a tool, document, command etc. on a computer interface
Visual Language. The emerging communication methods that tightly integrate text and
visual elements (images and shapes) that are thought to be increasingly a language.
Hyperobject. Timothy Morton’s word for huge phenomena whose concepts are so gigantic
in time, space, and other characteristics that we have increasing difficulty wrapping our
minds around them, and hence, not easily making sense of them.
Wicked problems. Horst Rittel and Melvin Webber, 1973. Characteristics of wicked
problems:
1. No definitive formulation of the problem
2. No stopping rule
3. Solutions not true-or-false, but good-or-bad
4. No ultimate test of a solution
5. Every solution to a wicked problem is a "one-shot operation"
(no opportunity to learn by trial-and-error) and every attempt counts significantly
6. No enumerable (or an exhaustively describable) set of potential solutions; No well-
described set of permissible operations to get anywhere
7. Every wicked problem essentially unique
8. Every wicked problem a symptom of another problem
9. The choice of explanation determines the nature of the problem's resolution
10. No right to be wrong
Messes. A systematically inter-related group of problems. Russell Ackoff 1974
Social messes. Messes created by human societies.
Bob Stein
Print Era: RIP
If 9/11 marked the beginning of the end of the American Empire,
I think we can say the Covid Pandemic
which moved much of daily life into the virtual world
(at least for the privileged),
marks the definitive end of the print era.
Publishers will continue to put out long-form linear texts with fixed perspectives
but that practice will dwindle as something new is born.
Some will say Baudrillard and others predicted this and it hasn't yet come to pass.
But in fact it has.
Talk to your young children and your grand-children about their media usage.
Some still read books, but most do not.
They watch videos. They make videos.
Most significantly they build and explore worlds.
And they do it with others.
The abstraction of text will always be a useful component, but it no longer rules.
Brendan Langen
Thinking with Paper
The moment you are inspired to work out an idea, what do you do? Do you go somewhere?
Do you pull out your phone and start typing? The initial wave of thought floods our brains
with excitement; do you record it?
I go to my notebook. I have to write.
A small, lined notebook holds these words. This essay, along with other blue sky ideas,
began there. My notebook is the safe space to scratch out the unknown, the home of
deliberate and undistracted thought.
Writing on paper forces me to slow down. The paper turns me into a craftsman. Many studies
suggest similar benefits – the brain’s language, memory, and thinking functions activate when
writing by hand [21] [22] [23]. Away from the distraction of computers, I am fully present,
focused on choosing the exact words that reflect my thoughts. In this way, the paper notebook
affords full focus – a feeling hard to come by on a machine that provides near infinite access.
Of course, not everyone is beholden to the same focusing challenges. Many of you use
different mediums to think. The computer works just fine for some of us. That’s quite the
point! We have all lived different experiences, and we all think in our own ways.
Our thinking changes depending on the surface. My etches in a dot grid notebook are
full of everyday observations. My words in a hardcover mini lined notebook form short
stories. Elaborate drawings and their descriptions fill my sketchbook. My Roam graph is
purely typed, with past references transcluded and queries interspersed. There, I ask myself
questions, talk to my future self, and generate new thoughts. Each has different constraints,
which offer different lenses in which to think.
The goal of a tool is to enable us to achieve something we could not without it. In our
hand, the hammer pounds nails that build structure. The pencil jots notes that build ideas. The
ideal tool augments us in a way that enables us to be greater than we were before. Paper and
pencil have given us this gift.
But herein lies the issue. Once I realized my notes could link to one another digitally, I
had to question my approach. My paper notebooks can’t talk to each other. They can’t even
reopen themselves. So, I moved to the computer. Thinking and writing in my graph was the
best way to build onto my thoughts.
I imagine many of us have fallen into this approach. Surely, computers enable us to
accumulate knowledge. Yet, I was missing the biggest point – improving my ability to think.
Our best thinking is done in a multitude of ways, not only at the computer. For me, it's on
paper.
And yet, to get my paper thoughts to my digital graph, the friction is immense. Digital
tools like OCR work at times, but rarely on my written notes without extensive training. We
all write differently than we type. The medium is different.
So, we stand at an impasse. Either we take time to deal with the friction or we accept
that our words stay in the notebook. Much of the time we default to the latter, and poof!
Notebooks stack up on the bookshelf, our grand thoughts locked away as if they never
existed.
Considering the visceral feeling of losing notes, it’s a bit mad that many of us have
willingly accepted that we probably won’t see many of our handwritten notes again. This
isn’t quite Hemingway losing a suitcase of his life’s work, but still, how gutting a feeling!
Are we satisfied with that reality?
My guess is no. Enough people stand to benefit from a better solution. What might
happen if paper could communicate with computers?
Allow yourself to dream for a moment.
On the simple end, fleeting tasks on post-its might sync to our digital TODOs. Our
sketches might intertwine with related digital notes, helping us link thoughts across time and
space. On the grander end, might we have more access to collective insights? The prolific
notebooks of Leonardo da Vinci and Charles Darwin are chock full of generative text. I am
certain we stand to benefit from interweaving notes from other brilliant minds. And on…
Surely, downsides would arise, as well. An influx of volume equates to clutter, resulting
in greater need for improved organization and retrieval systems. New disputes may occur. I
can easily imagine a bitter legal bout to determine business ownership over a napkin sketch.
We must always consider the downstream effects of our advances.
Attempts to solve this problem have been made here already. Richard Saul Wurman's
thought about paper that updates when held near a power source offers an idea [24]. As do
commercial efforts. Anoto, E-Paper, Livescribe, Moleskine's Smart Writing System, and
others have taken aim at modernizing paper. Today’s efforts show promise. Perhaps they will
come to fruition with a larger audience.
The dreamer in me imagines a more accessible future, though. Perhaps a ubiquitous retinal scan allows a wink to capture a note. Or all future paper, dipped in an electronic coating, pushes text to our personal cloud. Maybe OCR just becomes really good and can be tailored to anyone. Whatever the solution, we must prioritize its accessibility.
Because thinking is important. Thinking is how we solve the problems we create on this
planet. We live in a world where billions of people can access information from anywhere,
yet we create more information waste than ever before. How can we put our thoughts to
better use?
As a child of the 20th century, I was born prior to the age of digital natives, so paper
feels like home to me. Yet, generations from now, will paper still be prominently used?
Musing on human reactions to technology, Douglas Adams claimed, "Anything invented after
you’re thirty-five is against the natural order of things" [25]. Indeed, maybe an emerging
technology may render paper less useful. But today, tracing back to the emergence of cheap paper in the late 1800s, paper remains synonymous with thought.
We live in an incredibly fortunate time to ask these questions. Computing and thinking
pioneers like Engelbart, Licklider and Nelson laid the groundwork more than a half century
ago, and a growing collective has readopted their hopes of augmenting human intellect today.
If our aim is better thinking, we must integrate different mediums of thought.
In a world that prioritizes convenience, it's only sensible we try to eliminate all waste,
especially with our most valuable resource – time. We create things to make life easier. Why
not make it possible to interact with paper?
Thinking back to those moments of inspiration – do you still have the thoughts
somewhere?
Perhaps you do, but many of us sit on countless stacks of inaccessible notes. Even if I
still have the notebook, the thoughts are incomplete, disorganized, or forgotten. Rich detail
lives in those initial thoughts, but most of us have lost that context. Even if you disagree with
Ginsberg's "first thought, best thought" mindset, the origins of our ideas are ripe to revisit. In
that excitement, our body shouts, "This is important!" We should listen.
Whether you prefer to think in analog or digital, the future of text will enable deep
thought for each of us. When we want to write on paper, we won’t have to worry about
misplacing the page. The future of text will allow us to think with paper, with a machine
behind it.
Daniel Berleant
Dialogues With the Docuverse Is the First Step
Tags: chatbot, docuverse, webiverse, knowledge acquisition
In the future, burgerbots will customize your hamburger within an infinite array of
possibilities. Whether you want it medium or well done, with mustard or mayo, plant-based
beef or dead turkey, 200 calories, a belly-busting 600, or 175, 193, or even five 50-calorie
miniburgers with different specifications, no problem. I’ll take mine with the top bun well-
toasted white bread and the bottom bun untoasted whole wheat, if you can do that. “Sure, no
problem,” it says in metallic robo-accent, or any other accent you wish.
Similarly, text will be custom generated from the web docuverse in real time as you need it. Text bots will respond to your search question in any language, or in a hybrid of two languages, so you can effortlessly practice basic Spanish at your precise current level while reading comfortably in familiar English — or vice versa. Generated responses will be at whatever reading level you like, from any perspective, and at length, in brief, or another length of your choice. And just in case the response merely whets your appetite or doesn’t satisfy, there will be several alternative responses as well. Each will be replete with convenient outlinks. Imagine interacting not with one superchatbot but with a whole panel
of them, much as you might like to talk with not just one expert but an entire advisory board
of experts. For example, consider asking a medical question, or seeking advice about or
assistance with your home, car, personal or work life, dinner (burgers anyone?), or anything
else.
This may sound good, but dig under a few rocks and the bad and ugly come crawling
out. There are already concerted trollbot campaigns, thought to be sponsored by certain
foreign governments, as well as special interest groups with plenty of money to pay secretive
PR companies more than happy to hoodwink the public for a price. These trollbots can spew
out disinformation, chaos and confusion on Twitter and pretty much anywhere that reader
comments have sufficient visibility that it’s worth it to those actors.
The technology for sophisticated text generation is not yet capable of fully supporting
everything mentioned, but it’s getting closer with surprising speed. GPT-3 is the name of what is currently the largest text generator powered by connectionist computing technology. The term
“connectionist” here means it has lots of small computing elements connected together in
complex ways. The recent explosion in artificial intelligence is based on a type of
connectionist design called a neural network. It doesn’t really have neurons in it, rather it has
computing circuits that are roughly analogous to neurons. And like neurons connect to other
neurons with synapses, these computing structures contain analogs of synapses called
“parameters.” GPT-3, for “Generative Pretrained Transformer 3,” the successor to GPT-2 and
the precursor to even bigger future textbots, has no less than 175 billion such parameters.
These parameters each need to be assigned a numerical value. Being pretrained, this has
already been done, using algorithms since no person would know what values to give them
even if they had time to do it. Even so, the cost was in the millions of dollars. That’s a tiny
fraction of a cent per parameter, so consider it a bargain. The bottom line: GPT-3 isn’t alive
and it doesn’t understand what it generates, but it can still generate very human-like text. For
example, it “can generate samples of news articles which human evaluators have difficulty
distinguishing from articles written by humans,” according to one evaluation.
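As a rough, back-of-the-envelope illustration of that “tiny fraction of a cent per parameter” remark, the arithmetic looks something like the sketch below; the training-cost figure is an assumed stand-in for “millions of dollars,” not a number reported here.

```python
# Back-of-the-envelope check of the cost-per-parameter remark above.
# The training cost is an assumed illustrative figure ("millions of dollars"),
# not an official number.
assumed_training_cost_usd = 5_000_000       # assumption, for illustration only
num_parameters = 175_000_000_000            # GPT-3's 175 billion parameters

cents_per_parameter = assumed_training_cost_usd / num_parameters * 100
print(f"~{cents_per_parameter:.4f} cents per parameter")
# -> ~0.0029 cents per parameter: indeed a tiny fraction of a cent.
```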
A custom-prepared hamburger is better in front of you than somewhere else. Similarly,
texts should be highly accessible and available when you want them and with a minimum of
user actions like clicks, menus to choose from, verbal requests, fishing of phones out of
pockets, and so on. A computer built into a watch (a “smartwatch”) is a bit more convenient
than a smartphone in your pocket, but even a standard smartphone can and should be held
onto your arm with a convenient strap-on holder that allows access by merely lifting your
forearm a few inches. Better yet is a computer display built into your eyeglasses that displays
text on the lenses, or on a separate tiny display attached to the frame. Google Glass did that
but was a failure as a consumer product, though it lives on as a product for a limited set of
commercial applications. Perhaps the high price tag was the problem. At any rate, other companies have jumped into the fray, with Apple Glass under development and quite a number of other products with varying capabilities currently made by smaller companies.
Displays of this kind are called smartglasses, so watch for more of them in the future.
So far we’ve discussed how text will be sliced, diced and grilled to order. When such
customized text passages, derived from web texts, videos, images or sound tracks, are
presented during a person’s dialogue with the docuverse, that only solves part of a larger
problem. As anyone knows who has searched in vain for bits of information, when surely
others know more than is on the web and even we ourselves have things we could say about
it, dialogue with the docuverse of the web can customize only what is already there. More is
needed. However, the user experience of putting content online requires too many events and
user actions, from account registrations to passwords to clicks, resulting in built-in
disincentives to add more useful content to the webiverse. Better comment functions won’t
solve the problem because so often people don’t make them, and because when they do, so
many comments are vacuous.
Since the search for information on the web will often be a dialogue with its docuverse,
the next step would be to leverage that interactive dialogue to acquire new content. By
cleverly asking the user questions that both clarify what they are looking for and whose
answers fill in gaps in the information available online, new content will be added to the
webiverse in a way that feels natural to the user. Like a good conversationalist, the dialogue
will encourage users to share any knowledge they may have about whatever topic they are
there to find out more about, and feel good about doing so. This chatbot-acquired knowledge
will be organized, indexed and stored for use in future dialogues with other users. That way
new content will be generated by bootstrapping from dialogues with the docuverse that
people would already be having, without requiring people to decide specifically to make
content. That should generate a continually growing body of new content, leveraged by the
web search chatbot assistant of the future.
Acknowledgment
This article benefited significantly from discussion with Chia-Chu Chiang.
Daveed Benjamin
The Bridge to Context
The web has become so ubiquitous that all Internet users stand to benefit from the ability to ensure they are reading and receiving trustworthy information. Representative democracy, in fact, requires an informed citizenry, yet currently there is no way to contain the spread of false news, nor are there effective tools to help discern which news is real and which is false.
Unmet social needs include the inability to know what to believe on the web, the lack
of context on the web, and in particular the lack of access to information that contradicts and/
or supports what the user is focused on. This can all be summed up as the lack of information
integrity for virtually all important aspects of online human development and social
engagement, and especially the news.
News consumers need access to a robust information ecology. Creators need to earn
value for contributing to the enhancement of the information ecology. News organizations
need to protect their brand from misinformation and disinformation, and to ensure that they
are publishing the most accurate information possible. Fact checkers need exposure for their
fact checks.
The anatomy of a bridge: two pieces of content, a relationship, analytics, and metadata.
Benjamin, 2021.
The solution is the bridge, a conceptual deep link in the annotation space over the webpage
that provides insight, context, clarity, and neutrality. Bridging is a revolutionary use of the
annotation space. Today’s knowledge annotation providers (e.g., Hypothes.is and Diigo.com)
enable unstructured textual notes on text content. Bridges connect text, pieces of images, and
segments of video and audio with a relationship. Consider, for example, this scenario: a
contradictory bridge from a written sentence in a news article to a segment of video interview
that directly contradicts what was written in the sentence.
An important aspect of the bridge is accessibility via an annotation ecosystem called the
Overweb. The Overweb is accessible through the Presence browser overlay which has
browser extensions, SDKs, a mobile app, and will be supported natively by browsers that
adopt the protocol. Thus soon many people will have inherent access to bridges, providing
the basis for a more informed citizenry, equipped to effectively participate in all levels of
democracy. Thus, society will have a better sense of what to believe and what not to believe
on the web. This makes the disinformation industry less effective and lucrative, which
reduces the overall amount of false information over time.
The Innovation
The three step process for making a bridge. Benjamin, 2021.
Bridges within the web connect two content snippets of information with a relationship. The
content snippets can be text, pieces of an image, or even segments of video and audio. The
relationships include contradicting, supporting, and citing. Upon submission, the bridge goes
to the Bridge Registry where approved validators confirm whether the relationship between
the content snippets is correct.
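Purely as an illustrative sketch (the class and field names below are assumptions made for this description, not the Overweb's actual data model), a bridge of this kind could be represented as two content snippets joined by a relationship, plus metadata and a validation flag:

```python
from dataclasses import dataclass, field
from typing import Literal

# Illustrative sketch only: names and fields are assumptions, not the Overweb schema.
@dataclass
class Snippet:
    url: str                                          # page the snippet lives on
    media: Literal["text", "image", "video", "audio"]
    selector: str                                     # e.g. a quoted sentence or a time range

@dataclass
class Bridge:
    source: Snippet
    target: Snippet
    relationship: Literal["contradicts", "supports", "cites"]
    metadata: dict = field(default_factory=dict)      # creator, timestamps, analytics
    validated: bool = False                           # set once approved validators confirm it

# Example: a sentence in a news article contradicted by a segment of a video interview.
claim = Snippet("https://example.com/article", "text", "sentence #12")
evidence = Snippet("https://example.com/interview", "video", "00:03:10-00:03:42")
bridge = Bridge(source=claim, target=evidence, relationship="contradicts")
```

In this sketch, validation would simply flip the validated flag before the bridge is added to the knowledge graph.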
Once confirmed, the bridge self-assembles into a universal knowledge graph that
connects online content in the annotation space above the webpage. When the viewer's
attention moves towards a piece of online information with bridged information, the Presence
browser overlay highlights the piece of information. Clicking the highlight activates the
display of all the bridges related to the piece of information.
How Does Bridging Work?
A new kind of Internet Technology is emerging called the Overweb that operates as a trust
layer over the current web. The Overweb crowdsources the creation and validation of bridges
that provide deep layers of context. The trust layer operates as an overlay in the annotation
space on top of the web. This is not new. Annotation was always planned as part of the web
browser but was removed from Netscape in 1995 due to competitive and technological
reasons. The Presence browser overlay enables overlay applications that are accessible
anywhere on the web.
Two primary uses involving bridging are emerging.
Fact-Checking: The Overweb utilizes the efforts of citizen fact-checkers and a
social network to combat misinformation. Using the Presence browser overlay, a community
of participants create bridges that connect fact-checks to false claims to counter the spread of
misinformation.
Bridging Competition: Bridger.live is a knowledge esports platform for building
and curating knowledge by connecting claims and evidence on the web. Competing in a
specific challenge, participants identify claims in a topic area and search for contradictory or
supporting content. Using the Presence browser overlay, participants create bridges that
connect contradictory evidence to false claims, and submit these bridges to challenges to be
eligible to win prizes.
The bridges self-assemble into a public knowledge graph that connects text, images,
segments of video and audio. Benjamin, 2021.
The bridges aggregate into a public knowledge graph with network effects based on the
number of bridge creators, the number of bridge viewers, the number of bridges, and the
connectedness of the graph. The universal knowledge graph creates 360° context for any
claim on the web, including supporting and contradicting evidence.
To ensure the integrity of the knowledge graph, the bridges created by participants are
validated prior to being posted to the ledger. Upon submission, all bridges go to the bridge
registry in which approved validators confirm whether the relationship between the claim and
evidence is correct. Once the integrity of the bridge is verified, the bridge posts to a
distributed ledger (or blockchain) and becomes part of the public knowledge graph, and is
thereby available as context online.
Before going into the knowledge graph, the relationship of the bridge must be validated.
Benjamin, 2021.
The bridge creator receives token rewards for the value a bridge creates in the ecosystem
based on people crossing and upvoting the bridge. Influential curators and bridge validators
also get a token reward. Creators, validators, and curators can choose to take their rewards in
either the utility or the governance token, which confers the right to vote on the future of the
protocol.
In summary, the emergence of bridging on the Overweb – via the Presence browser
overlay – crowdsources the building of robust information ecologies for news content that
combat the spread of misinformation and provide the basis for contextual information
anywhere on the web.
Erik Vlietinck
Markdown, the ultimate text format?
Tags: markdown, software lock-in
Technical evolution has reached the point where even the simplest note-taking app has more layout design capabilities than the average book printer of the nineties could dream of. Traditionally, however, text editing apps lock you in. That makes it impossible to switch from one application to the next without having to export first, which usually makes you lose at least some of the design elements the originating app was capable of. An Apple Pages document, for example, isn’t a single file. It’s a collection of files that contain not plain text but binary data only.
That is not criticism of Apple or the Pages app per se; they do deliver excellent export
capabilities as total lock-in would not be accepted by users.
At some point in recent history, people started buying desktop and portable computers
en masse to make things easier and faster. The crucial reason to use a computer was and still
is that it speeds up tasks you perform when you’re not creating a TikTok or YouTube movie,
chatting on iMessage, Whatsapping, tweeting or making Mark Zuckerberg richer and more
arrogant.
And as computers are now everywhere and always on, we demand streamlined apps.
When one is forced to take more actions than a simple “Open File” in order to get one’s text
in another app without losing any of the styling, one often gets frustrated. Having to export
content before you can import it back into another app takes two steps that are essentially
redundant.
What you need, therefore, is a file format based on a language and a parser that allow
humans to write a bit of simple code (a pound sign for a heading level 1, two for level 2, and
so on) sprinkled throughout their content using the simplest editor there is: a plain text editor.
Such a language has existed for some time. It’s the fabric that holds together the World
Wide Web: HTML or HyperText Markup Language. But writing text in HTML is tedious and
error-prone, and therefore even more frustrating than exporting and importing. Something
else had to be “invented”.
Markdown to the rescue?
In 2004, a new markup language was created for formatting text that was far less convoluted
than HTML and “rot proof”, i.e. in the future someone should be able to read the text without
problems. It was called Markdown. Its key design goal was and still is readability. Since
2004, a number of variants have been developed.
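For illustration, a few lines of the markup described above (a pound sign for a level 1 heading, two for level 2, and so on) look like this, and remain perfectly readable in any plain text editor:

```markdown
# A level 1 heading
## A level 2 heading

A paragraph with *emphasis*, **strong emphasis** and a [link](https://example.com).

- a bullet item
- another bullet item
```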
With Markdown (the original is in capitals throughout this text) you can switch to any
plain text or markdown editor without first exporting and importing back again. Six years later, a host of dedicated markdown editors had appeared, most of them translating the code on the fly into formatting for on-screen presentation and the other way around,
converting simple shortcuts such as Command-1 for a level 1 heading into its markdown
equivalent. That makes markdown not only simple to generate but also appealing to users
who balk at inserting code into their content manually.
Unfortunately, Markdown is basic. Tables exist only in HTML and footnotes or
citations aren’t supported at all. Others jumped on the markdown wagon, though, and added
their own markdown extensions to support features the users of their editors asked for.
The result is that markdown has become a non-standardised markup language that is
inconsistent from one editor to the next and that comes with often incomprehensible
limitations. As a result, users are often surprised to find that a document that renders one way
on one system renders differently on another. To make matters worse, because nothing in
Markdown counts as a “syntax error,” the divergence often isn’t easily discovered. As a
result, John MacFarlane, David Greenspan, Vicent Marti, Neil Williams, Benjamin Dumke-von der Ehe and Jeff Atwood started a new initiative, CommonMark (https://commonmark.org).
CommonMark proposes a standard, unambiguous syntax specification for Markdown,
along with a suite of comprehensive tests to validate markdown implementations against this
specification. It, however, only attempts to standardise Markdown, i.e. the original. The most
often used “standard” by markdown editors that support text elements like footnotes, tables
and citations, though, is MultiMarkdown, developed by Fletcher Penney.
On his web page (https://fletcherpenney.net/multimarkdown/), Penney states:
“MultiMarkdown, or MMD, is a tool to help turn minimally marked-up plain text into well
formatted documents, including HTML, PDF (by way of LaTeX), OPML, or OpenDocument
(specifically, Flat OpenDocument or ‘.fodt’, which can in turn be converted into RTF,
Microsoft Word, or virtually any other word-processing format). MMD is a superset of the
Markdown syntax, originally created by John Gruber. It adds multiple syntax features (tables,
footnotes, and citations, to name a few), in addition to the various output formats listed above
(Markdown only creates HTML). Additionally, it builds in “smart” typography for various
languages (proper left- and right-sided quotes, for example).”
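As a rough illustration of the extra syntax MultiMarkdown layers on top of the original, a pipe-style table and a footnote look something like this (a sketch of common MultiMarkdown-style notation, not an exhaustive reference):

```markdown
| Format   | Tables | Footnotes |
|----------|--------|-----------|
| Markdown | no     | no        |
| MMD      | yes    | yes       |

A claim that needs a source.[^note]

[^note]: The footnote text, rendered at the end of the document.
```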
That makes MultiMarkdown very appealing to use because it gets rid of the need to
export/import when you want to switch from one editor to another and it solves the problem
of Markdown not supporting export formats such as PDF. And although a MultiMarkdown
document dropped on a Markdown editor or a proprietary markdown app may look slightly
different, the essence is that the text itself will still be readable, which means it protects
against what you might call text rot. As long as software exists that reads plain text, future
humans will be able to read what has been written in markdown. At worst, they will have
trouble understanding the glyphs that aren’t part of the content itself.
Fabian Wittel & David Felsmann
Breathing Life into Networks of Thoughts
Can Self-Organizing Networks of Atomic Notes Increase Serendipity and Inspiration?
The Energy of a Conversation
Think about an inspiring conversation you've had. How did you feel? How did the text of that
conversation resonate with you?
Text in a conversation is alive. Thoughts are lined up or contrasted. Side topics show
up, detach, some open loops will be closed, others lead to surprising new standpoints. Written
text, by contrast, feels much more like a monologue, it can only lay down thoughts in one
linear sequence. How can we achieve the feeling of a lively conversation in written text, how
can we breathe life into it?
Breathing Life into Text
A common definition of life is being able to react to stimuli, to evolve, to reproduce and to
grow. What if we applied that definition to the text of the future, what if we would build a
living network of thoughts?
Every living organism needs building blocks that are connected to each other. A good starting
point is the familiar and proven concept of atomic notes. Niklas Luhmann with his
Zettelkasten [30] and Vannevar Bush with the Memex [31] both broke down longer texts into
their smallest meaningful units and created atomic notes to allow for remixing them in
different sequences. From that inspiring conversation mentioned above, we take the key
thoughts and record their atomic notes on index cards.
To further connect those building blocks within our network of thoughts we can add
tags to denote the topics or concepts they touch. Thereby we create a relationship between
cards with similar tags. Our thoughts can now fly freely like a swarm of birds and arrange themselves according to the relations that pull them towards each other. That's the basic structure of our
network of thoughts. However, it does not live yet.
In order to react to stimuli, our network of thoughts needs a suitable interface. The
interface should allow us to navigate our thoughts, get reactions to our actions just as in an
inspiring conversation. Rather than a classic list-based application, the interface should look
like those flying thoughts we imagined earlier.
The technical solution would be a minimalistic force-directed graph allowing us to
move seamlessly from one thought to the next, presenting connected thoughts and always
changing shape according to our interests. A technically easy way to select connected
thoughts is by using the tags we added before. As pre-trained models for natural language processing (NLP) become a commodity, a more sophisticated selection of thoughts, as well as the highlighting of opposing arguments, becomes more and more feasible. Another reaction to stimuli
would be instantaneous reactions to newly added thoughts: while adding a new thought to our
network of thoughts, it presents us with existing thoughts around the same topics.
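A minimal sketch of that tag-based selection might look like the following; the note contents, tags and function names are invented for illustration and do not refer to any particular tool:

```python
from itertools import combinations

# Atomic notes, each carrying a set of tags (illustrative data only).
notes = {
    "n1": {"text": "Conversations feel alive", "tags": {"conversation", "liveness"}},
    "n2": {"text": "Atomic notes can be remixed", "tags": {"atomic-notes", "remix"}},
    "n3": {"text": "Tags relate notes to topics", "tags": {"atomic-notes", "conversation"}},
}

# Connect every pair of notes that shares at least one tag.
edges = [(a, b) for a, b in combinations(notes, 2)
         if notes[a]["tags"] & notes[b]["tags"]]

def related(note_id):
    """Return the notes connected to note_id - the 'reaction to a stimulus'."""
    return [b if a == note_id else a for a, b in edges if note_id in (a, b)]

print(edges)          # [('n1', 'n3'), ('n2', 'n3')]
print(related("n3"))  # ['n1', 'n2']
```

The resulting edge list is exactly what a force-directed graph layout needs in order to pull related thoughts towards each other on screen.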
But how can our network of thoughts evolve? We should let it "dream": while we're not
interacting with the network of thoughts, algorithms can try to spot patterns in our thoughts.
For example, many thoughts could be related to a certain mental model that we hadn't seen as
an abstraction ourselves. While dreaming, our network of thoughts could suggest those tags
and connections, always improving the network's coherence or pointing out our
inconsistencies. With the exponential improvement of NLP, applications like this seem to be
within reach.
However, we're still missing one key element to realize this ambitious vision: our network of thoughts has to be able to grow and reproduce. Only adding our own thoughts isn't enough to fulfill this criterion. What if different networks of thoughts could join, what if it were possible to connect and adapt thoughts from different networks?
That inspiring conversation with someone else could evolve to become an overlap of two
networks of thoughts, enabling both people to branch out into new areas and, potentially, with
the addition of different thoughts, extending each network organically.
Living in the Future of Text
Fast forwarding a decade into the future demonstrates some obvious applications for a living
network of thoughts. The first and most simple application is using it as a personal creativity
assistant: Whenever we start a new project our network of thoughts can produce a brief report
as an aide-mémoire of all the ideas we gathered and that we can build on. Revealing
connections we might not have seen ourselves can foster serendipity in a way that linear text
cannot.
Second, it will become more important than ever to prevent our minds from being
hijacked by algorithms. Ill-tuned platforms will find even more effective ways to hack our
behaviour to get more of our attention [32]. A personal network of thoughts can be the calm
and safe space we use to remind us about what's important. When we become overwhelmed
by the volume of information, instead of distracting ourselves with endless feeds assembled
by opaque algorithms we can consciously dive into our own network of thoughts.
Third, networks of thought might mitigate a limitation inherent in speech. While speech
is an excellent tool to coordinate rationally in a rather linear world, it now traps our thinking
in this rational abstraction that is not very effective in complex dynamics [33]. Our parallel,
intuitive mode of thinking might allow us to cope with complex situations more effectively –
especially when augmented with a living network of thoughts.
Let's build the future of text. With an elegant system for a more conscious and inspired
world!
Fabio Brazza
Futuro do Texto
Fiquei feliz de ser convidado pra falar sobre o futuro do texto, sendo que toda vez que
escrevo minha caneta mira o futuro. Já não sei se a pergunta é qual será o futuro do texto ou
qual será o texto que ficará no futuro?
Acredito que o poeta conversa pelas janelas do tempo e deseja que sua arte seja forte
suficiente para sobreviver por gerações, mas ao mesmo tempo teme que ela se torne só mais
um produto descartável e seja soterrada pela onda de consumo desenfreada que prioriza a
informação rápida e superficial.
Entre o perene e o perecível o texto precisará se adaptar pra conseguir se comunicar
com a nova linguagem, sem perder profundidade ou conteúdo. Esse tem sido um dilema pra
mim, e acho que ao mesmo tempo que a internet trouxe o acesso para todos, ela também
esvaziou o texto.
Muitas pessoas leem notícias e manchetes, mas quantas leem livros e mergulham fundo
no conhecimento? Machado de Assis e Shakespeare estão aí para todos lerem de graça se
quiserem, mas será que é a linguagem que mudou ou o interesse que está faltando?
Como digo num Rap meu “A cada som me cobro por mais sabedoria, mas nesse mundo
sem sentido porque minha música faria? Seria muita utopia viver de poesia, num País onde
pra cada três farmácias que abrem fecha uma livraria?”
Não quero soar fatalista nem ser pessimista dizendo que o texto está se perdendo com a
modernidade, na verdade acho que a internet e as ferramentas novas tem um poderoso papel
na disseminação das ideias e ao mesmo tempo que as pessoas conseguem se formar por
cursos da internet, justiças são denunciadas, artistas independentes conseguem ganhar
visibilidade e se projetar para o mundo, as Fake News ganham força e Presidentes são eleitos
em textos falaciosos que se espalham como vírus.
O futuro do texto depende de como vamos controlar e usar essas novas ferramentas,
pois desde o tempo antigo a única certeza é que “as palavras tem poder,” e textos e ideias
podem eleger um presidente ou derruba-lo, construir um muro, ou derruba-lo, empoderar o
povo ou aliena-lo.
Como poeta sigo tentando achar o equilíbrio entre o popular e o profundo e usando o
Rap a Rede e a Rua para disseminar minha arte. Creio que minha missão é continuar
disseminando conhecimento e reflexão nos meus textos, boto também minhas falhas nas
folhas e deixo para os filhos do amanhã, jogo pro universo como uma mensagem na garrafa,
quem a encontrar verá o eco do tempo que eu vivi, petrificado em palavras.
Cada texto é como uma mensagem deixada na parede da caverna. Só o tempo dirá quais
delas serão encontradas e servirão de bússola para a humanidade no porvir.
Future of Text
I was happy to be invited to talk about the future of text, given that every time I write, my pen aims at the future. I no longer know whether the question is what the future of text will be, or which texts will remain in the future.
I believe that the poet speaks through the windows of time and wants his art to be strong enough to survive for generations, but at the same time fears that it will become just another disposable product, buried by the wave of unbridled consumption that gives priority to fast and superficial information.
Between the perennial and perishable the text will need to adapt to be able to
communicate with this new language, without losing depth or content. This has been a
dilemma for me, and I think that at the same time that the Internet brought access to
everyone, it also made text empty.
Many people read news and headlines, but how many read books and dive deep into
knowledge? Machado de Assis and Shakespeare are there for everyone to read for free if they
want, but is it the language that has changed or the interest that is missing?
As I say in a Rap I wrote "With every beat made, more wisdom is needed, but in this
meaningless world why would my music be needed? Would it be utopian to live on poetry, in
a country where for every three pharmacies that open, a bookstore closes?"
I don't want to sound fatalistic or pessimistic saying that the text is getting lost with
modernity, in fact I think that the internet and new tools have a powerful role in the
dissemination of ideas and at the same time that people manage to graduate from Internet
courses, injustices are denounced, independent artists gain visibility and project themselves into
the world, Fake News gains strength and Presidents are elected in fallacious texts that spread
like viruses.
The future of text depends on how we control and use these new tools, because since ancient times the only certainty is that "words have power": texts and ideas can elect a president or overthrow him, build a wall or tear it down, empower the people or alienate them.
As a poet, I keep trying to find the balance between the popular and the profound, using rap, the network and the street to disseminate my art. I believe my mission is to keep spreading knowledge and reflection through my texts. I also put my flaws on the page and leave them for the children of tomorrow, a message in a bottle thrown into the universe; whoever finds it will see the echo of the time I lived, petrified in words.
Each text is like a message left on a cave wall. Only time will tell which of them will be
found and which will serve as a compass for humanity in the future.
Faith Lawrence
A Tale of Two Archives
History, by convention, requires text. It might be an arbitrary line, but nevertheless before
writing is pre-history; after the invention of writing is history. That is not to say that a
historian’s only primary sources are textual, but textual evidence is a key part of our
understanding of history, even if our understanding and interpretation of any given text
may change.
As a culture we have decided that preserving texts for the future is important. We recognise
past mistakes about what is ‘important’ and ‘worth preserving’ – a hard decision that is often
made from necessity. Looking at the expanding digital world with its proprietary formats and
fragile storage media, we worry that we are repeating those mistakes; that today’s digital text
will not be available for tomorrow’s historians to pore over and debate.
In the 1830s Henry Cole, then a teenager, was working with public records. To his
horror they were so poorly kept that rodents, among other animals, were gnawing their way
through the documents. The story is that he stormed into Parliament brandishing the
mummified remains of one such miscreant to make the case for the public record being
properly maintained. From his activism the National Archives (né the Public Record Office), the "official archive and publisher for the UK Government, and for England and Wales",
was born. The rat still resides at the Archives in the form of record E 163/24/31 –
euphemistically described as “Specimens of decayed documents” – the words that it ate once
more part of the public record, albeit permanently misfiled.
The Archive of Our Own (AO3) was proposed in 2007 by Astolat after a confluence
of events around fan platforms brought the shaky nature of their existence to the fore. It was
humans rather than rats that threatened to put holes in the textual record of the community
but, in much the same way, the community acted to preserve what had been created and what
might be created in the future.
These two archives might seem, at first glance, very different. One the epitome of
official-dom – the record of governance, Empire and still overwhelmingly the words of white
men – the other a rejection and reimagining of canonical narrative, openly embracing the
profane and lascivious and situated within the more female-identified, and frequently
LGBTQ+, side of fan communities (although it must be noted, still predominantly white and
Western focused).
But is it so controversial to say that history and fiction are not so far removed? It is a truism
that history is written by the victors. As recent events have shown, our present is also not
without its own issues around facts and fabrications. History is the shared fiction that we
interpret from the foundation of facts (some of which may, literally, be built of paper by
unreliable narrators). Conversely fiction is the shared history of its own unreality.
And when it comes to subject matter, anyone with even a passing knowledge of history will
be aware that it, especially the history of Empire, is also steeped in the darker side of human
nature, even if we dress it up in formality and hide it in mundanity.
One thing they definitely have in common is that archives preserve the past and
present’s text for the futurey – the archival record.
Context is fundamental to archival thought. For the archive the context of the record –
the provenance of not just itself but its relationship to other records it was stored with, even if
that relationship was just spatial – is of prime importance. The word context comes from the
Latin ‘con-‘ (together) and ‘textere’ (to weave). In the more literal meaning of ‘weaving
words together’ it is documented back to the fifteenth century. Now we are weaving words
together once more, but this time our needles and thread are metadata, hyperlinks, and
machine learning.
Metadata is data. The catalogue is a text. When we think about the future of the archival record we must also think about the future of the catalogue because "an archive without a catalogue is like a room without a door".
The National Archives’ Project Omega is developing a pan-archival catalogue data
model and editorial system designed for the needs of born-physical, born-digital, digital
surrogate and trans-digital records. Expressing archival standards through internationally
recognised vocabularies, the Omega Data Model supports a linked data system which
encodes the catalogue and has the potential to allow for multiple layers of description, each
with its own history of revision (the record of the record). The change is not only to upgrade
the existing editorial system, which is past end of life, but to build a foundation around the
record upon which more complex skeins of information can be wound. From the complex
model we feel our way slowly towards a broader and more inclusive offering. Is the
published description drawn directly from the record itself? Written by an archivist? By a
historian? Automatically generated by a computer? By a member of a community with a
connection to that record? Can we have all these things and offer the reader the path into
the archive which best suits them while still maintaining the official standards and gravitas?
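The Omega Data Model itself is specified elsewhere and is not reproduced in this chapter, so the following is only a minimal illustrative sketch, in Python and with invented names, of the general idea described above: a catalogue entry that can carry several parallel layers of description, each keeping its own revision history, so that the record of the record is preserved alongside the record itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class Revision:
    """One dated change to a layer of description: the record of the record."""
    text: str
    author: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class DescriptionLayer:
    """A single describing voice, e.g. 'archivist', 'historian', 'community', 'machine'."""
    source: str
    revisions: List[Revision] = field(default_factory=list)

    def revise(self, text: str, author: str) -> None:
        self.revisions.append(Revision(text, author))

    @property
    def current(self) -> str:
        return self.revisions[-1].text if self.revisions else ""

@dataclass
class CatalogueEntry:
    """A catalogued record that supports multiple, independently revised descriptions."""
    reference: str
    layers: List[DescriptionLayer] = field(default_factory=list)

# Illustration only: the rat, described by more than one voice.
entry = CatalogueEntry(reference="E 163/24/31")
official = DescriptionLayer(source="archivist")
official.revise("Specimens of decayed documents", author="cataloguer")
community = DescriptionLayer(source="community")
community.revise("Henry Cole's mummified rat, with the words it ate", author="volunteer")
entry.layers.extend([official, community])
```

A linked data implementation would express the same shape as statements rather than objects, but the point here is the layering and the revision history behind each published description.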
From the other direction, AO3, in their Terms of Service, specifically position the tags
associated with a work by the author as part of the work. As such the volunteers who wrangle
the tags hold them inviolate (tags that break the terms of service are a matter for the abuse
team). They are both metadata and text – a part of the record from the start.
AO3’s tagging system attempts to bridge the gap between the computer-processable
power of formalisation and the user-friendliness of tagging. By volunteers identifying
canonical or common tags and their synonyms, parent, child, meta and sub tags, a shared
folksonomy has been codified over time, allowing record creators to use their chosen terminology while still supporting searching/browsing, filtering and autocompletion. This is not a small undertaking: in 2019 these volunteers wrangled 2.7 million tags. Nor has the system been without its detractors, especially in the early days when it was less well developed. However, it has enabled AO3 to rescue a number of older archives which could no longer be maintained and to integrate their original metadata categorisations, so those too would be recorded without difficulty. Across the archive, the tags in their connected structure
are explorable as their own record of the shared conceptual model.
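AO3's actual wrangling tools and data model are not documented here, so the sketch below (Python, with invented tag names) only illustrates the general pattern the paragraph describes: whatever an author types is resolved, via wrangled synonyms, to a canonical tag, which is what makes filtering, browsing and autocompletion work across an otherwise free folksonomy.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Set

@dataclass
class CanonicalTag:
    """A wrangled tag: one canonical name, many author-chosen synonyms."""
    name: str
    synonyms: Set[str] = field(default_factory=set)
    parents: List["CanonicalTag"] = field(default_factory=list)  # broader tags

class TagIndex:
    def __init__(self) -> None:
        self._by_synonym: Dict[str, CanonicalTag] = {}

    def add(self, tag: CanonicalTag) -> None:
        """Index the canonical name and every synonym for lookup."""
        self._by_synonym[tag.name.lower()] = tag
        for synonym in tag.synonyms:
            self._by_synonym[synonym.lower()] = tag

    def resolve(self, user_tag: str) -> Optional[CanonicalTag]:
        """Map whatever the author typed to the shared canonical tag, if wrangled."""
        return self._by_synonym.get(user_tag.lower())

# Invented example: three spellings, one filterable concept.
index = TagIndex()
index.add(CanonicalTag("Alternate Universe", {"AU", "alt universe"}))
for typed in ("AU", "alt universe", "ALTERNATE UNIVERSE"):
    assert index.resolve(typed).name == "Alternate Universe"
```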
Is this the future of the archive? Catalogues which support multiple connected paths
into the records, reflecting the record, archivist, and reader in multiplicity. The intention is to
open the records to a wider audience, to allow new paths and strengthen quieter voices. And
so we aspire to generate texts of how we perceive the records of past endeavours, for future
eyes to read.
Imogen Reid
Notes for a Screenplay Loosely Based on C.K. Williams's Poem, The Critic
Scene One: Writing
Interior: Public Library. Time of day: Unknown
We see a man. We see this man hunched over a desk, his shoulders tensed, his body leaning
forward, his posture wrapped with, and composed by, attentive concentration, books lining
the walls from floor to ceiling around him. We see this man sitting at the same desk, at the
same hour, in a public library, day after day returning his pen again and again to the top left-
hand margin of a battered loose-leaf book. At first glance, from a slightly oblique angle,
looking over his shoulder behind him, he appears to be making nothing more than a
meticulous copy of a text, a scrivener methodically and systematically transcribing.
In the silence of the library, we hear the nib of his pen scoring the paper beneath it, the
persistent sound augmenting, we hear this sound although we cannot see what this man is so
diligently, so conscientiously writing. But the catch word here is not what but how, because
this is a matter of ink on paper, of a mangled and tangled barely recognizable text modified
and undone during the seemingly routine task of rewriting, neither plot nor narrative, but the
material attributes of the page, the repetitions and elisions encountered when attending to the
topographical details of somebody else’s printed, and somewhat conventionally set, page of
perfectly legible writing.
We watch this man trace line after line, his pen over articulating the facets of cut and struck
type, same page, same routine, hour after hour, day in and day out, week after week, progress
is reversed but not halted, each line gradually misaligned, the space between them slowly
diminishing beneath the layers of ink gradually accumulating, erasing any evidence of the
well-worn track that directs the eye from left to right, according to the dictates of Western
European convention. But this man is not simply rewriting words that are already written, he
is overwriting, and in so doing he reconfigures the topography of the standard printed page.
The form of each word is returned as illegible writing.
Cut to: Black Screen
On the soundtrack we hear the sound of a pen on paper, insistently transcribing
Scene 2: Reading
Interior: Public Library. The same scene seen from a different angle. Time of day: Unknown
With red-rimmed eyes you sit hunched over your desk, your posture dictated by the demands
of the unintelligible page of writing that rests on the table before you. A tangled tale that
resists the tongue, a homespun yarn, a printed text(ile) lodged within the texture of the page.
Absorbed in the futile activity of decipherment, you apprehend rather than comprehend this
vocabulary of incoherence, you struggle, you hesitate, you stammer, no matter. You start
again. Same page, same routine, day in and day out, week after week, because the difficulties
posed by this illegible text compel you to invent alternative ways to read
on
the screen a single frame jams on a loose-leaf page of writing, juddering, endlessly repeating
the same words, over and over and on
For me, the future of text lies in its capacity to resist the idea that there is a correct, or
standard way to think, speak, and write, it lies in its capacity to interrupt and disrupt the
deeply ingrained grammatical rules, such as sentence structure, punctuation, and
capitalization, that have come to regulate what we are capable of thinking, feeling, and
becoming. The future of text lies in its ability to reroute the recognizable topography of the
page and the familiar left to right circuit so often inscribed upon it, it lies in its capacity to
provoke and engender alternative interactive forms of readability with the aim of liberating
readers from the normative constraints that are so often imposed upon them.
Image Title: Text(ile)
The image was made by cutting, turning, erasing, repeating, and overprinting a page of
writing in order to weave words together like cloth. In resisting the Western European
convention of writing from left to right, each text(ile) attempts to yield an alternative
physical, tactile kind of readability within which the eye can move freely and in multiple
directions at once.
Text(ile). Reid, 2021.
Jad Esber
Walk into someone’s home, and you’ll find their personal bookshelf. Books lent by friends,
gifted over the course of a certain chapter of life, ordered off Amazon after being referenced
in conversation, picked up spontaneously in the airport duty-free before flights, bought solely
for the aesthetic value of their covers. Books are cultural artifacts and markers of life
moments.
The bookshelf as a record of who I am...and who I was
Inga Chen wrote:
“What books people buy are strong signals of what topics are important to people, or perhaps
what topics are aspirationally important, important enough to buy a book on it that will take
hours to read”.
As I look at the books I’ve accumulated, I’m reminded of how I’ve changed. How my
bookshelf changes is a quiet but powerful commentary on what's happening in my life. A few
months ago, I bought a bunch of baking books—like many people, I had a baking phase
during covid. Recently, I’ve been really into product design and am amassing a bunch of the
canonical books on the topic. In many ways, the bookshelf is an archive of who I am—and
who I want to be.
Beyond my selection of books, the way I organize them on my bookshelf is
opinionated. I chose to showcase Understanding Media by Marshall McLuhan, but hide
Principles by Ray Dalio. I also spent a couple of hours organizing my books based on their
color, and every time I add one, I slot it into the right place. I care a lot about the aesthetic of
my bookshelf, because it’s quite visible - it stands center stage in my living room.
The bookshelf as a source for discovery
When we visit bookstores, we may be looking for a specific book, to seek inspiration for our
next read or maybe we just want to be in the bookstore for the vibes - the aesthetic or what
being in that space signals to ourselves or others. When I visit someone’s house, I’ll always
look at what’s on their bookshelf to see what they’re reading. This is especially the case for
someone I admire or want to get to know better.
Scanning the bookshelf, I’m on the lookout for a spontaneous discovery. The
connection I have with the bookshelf owner provides some level of context and the trust that
it’s somewhat a vetted recommendation. I look through the books for a title that catches my
eye - maybe I’ll leaf through the pages and sample a few sentences, read the book jacket or
the author biography. If something resonates, maybe I’ll ask to borrow it or take a note to buy
it later.
The bookshelf as context for connection
There’s something surprisingly intimate about browsing someone’s bookshelf - a public
display of what they’re consuming, or looking to consume. When I’m browsing someone’s
bookshelf, I’m also on the lookout for books that I’ve read or books that fit my ‘taste’ - and
when I find something, it immediately creates common ground, triggers a sense of belonging
and connection. It might be even more reason for me to opt to dig deeper into their bookshelf
to see what else they’re reading.
Along with discovery, the act of borrowing a book in itself creates a new context
through which we can connect. Recommending a book to a friend is one thing, but sharing a
copy of a book in which you’ve annotated texts that stand out to you, highlighted key parts of
paragraphs—that’s an entirely new dimension for connection.
Last summer, after falling in love with Sally Rooney’s Conversations with Friends, my
friend Aleena mailed an annotated copy to me, and I then mailed it to another friend. As it
passed hands, we kept store of the parts that meant something to us through different colored
pens and highlighters, claiming separate parts of the book as our own. It was like an
intellectual version of the sisterhood of the traveling pants. The book became a shared
collectible that we could use to archive our thoughts, feelings, and emotions, bringing us
closer together in our friendship with a new understanding of how we connect with each
other, and the messages that resonate with us.
People connect with people, not just content.
What’s more powerful than the books and the topics they discuss is the author. The effort I
undertook to source the book, the significance of its edition or how early I got it, whether the book is signed by the author: all of these serve as "proof of work" that signals to myself, and the world, the intensity of my fanship. And in all of this, I am putting out a carrier signal of varying
intensity to other fans.
Take this metaphor of a bookshelf, and apply it to any other space that houses cultural
artifacts, or ‘social objects’. Beyond the books we own, the clothes we wear, the posters we
put on the walls of our bedrooms, the souvenirs we pick up — these are all social objects.
They showcase what we care about and the communities to which we belong. At their core,
social objects have always acted as a shorthand to tell people who we are and functioned as beacons that send out a signal for like-minded people to find us. On the
internet, social objects come in the form of URLs of JPGs, articles, songs, videos.
Pinterest, Goodreads, Spotify and the countless other platforms center discovery and
community around content and creators. What’s missing from our digital experience is this
aspect of ownership that’s rooted in physicality. But that’s changing.
Web3 shifts the balance of ownership. In most of web2, the collection you build up is
tied to a given platform. But on-chain identity lets us tie ‘social objects’ to us, not the specific
platforms or applications. This mirrors what we do with our possessions in real life. To put it
differently, your books are yours - you can take them off the bookshelf and bring them with
you whenever and wherever you want.
As we enter this new era, we’ll see platforms competing for a share of our digital
selves, an abstraction of the time we spend online, and we’ll spend increasing amounts of
time and money developing our digital identities. With that, we’ll see platforms compete less
for a share of our time, but more for a share of our digital identity. Platforms, or protocols,
that become the de facto ‘bookshelf’ for our online lives, where our social objects are placed
and are on display, have a huge opportunity in front of them.
Jamie Joyce
“Web-based Conceptual Portmanteau”
A concept introduced by The Society Library
The word "portmanteau" has many meanings. One definition is "a word formed by
combining two other words," according to the Cambridge Dictionary. Examples of this kind
of portmanteau include the words: intertwingled, cyborg, and netizen. Portmanteau in this
sense operates on text in a two-dimensional way, by combining the letters to craft a new
word. Today, we posit that additional dimensions of meaning (specifically more precise
meaning) can be added to text if it is based in web-based technologies such as those used by the
Society Library.
Instead of combining words to create a composite concept or describe an emergent
meaning, our "web-based conceptual portmanteau" is created using a technology that allows
us to combine multiple forms of media, visualizations, registers, and expressions into one
information packet for the purpose of optimizing for precise meaning and comprehension.
Unlike portmanteau, which operates at the "word-level," web-based conceptual portmanteau
works conceptually at the "claim-level."
For our purposes, we define a claim as: an assertion of truth, which can be a
statement of fact or opinion, wrong or right.
Examples of claims include:
The word cyborg was coined in 1960.
The Wikipedia article for the word cyborg is hilarious.
Wikipedia is always accurate.
Claims can stand alone or be combined together to form arguments, text snippets, compound
sentences, and other sentences, but the minimum application of "web-based conceptual
portmanteau" is at this claim-level, even if the claim is considered a partial sentence.
That being said, we now offer our definition of "Web-Based Conceptual Portmanteau:"
Definition:
"Web-Based Conceptual Portmanteau" is an internet based expression of meaning that relies
on web-based technologies to present a claim, argument, sentence, or text snippet with at
least two of the following features as inherent to the text structure:
embedded definitions, which are inextricably attached to the claim, argument, sentence,
or text snippet. (Shown below)
Figure 1. Joyce, 2021.
variant phrases of similar/different registers, including registers of different reading
levels and technical registers which use jargon, which are inextricably attached to the
claim, argument, sentence, or text snippet. Various phrasings of similar register, but
substituted words, is also a feature.
Figure 2. Joyce, 2021.
the expression of meaning shown visually through images, giphys, or other graphics
referenced from external sources, which are inextricably attached to the claim, argument,
sentence, or text snippet. Formats include: JPEG, TIFF, GIF, BMP, PNG, WebP, SVG, and
others.
Figure 3. Joyce, 2021.
the expression of meaning shown visually through videos referenced from external
sources, which are inextricably attached to the claim, argument, sentence, or text snippet
(video with audio does not count as two distinct features). Formats include: MP4, MOV,
MKV, and others.
Figure 4. Joyce, 2021.
the expression of meaning articulated through audio referenced from external sources,
which are inextricably attached to the claim, argument, sentence, or text snippet (audio
with video does not count as two distinct features). Formats include: MP3, WAV, AIFF,
AU, PCM, and others.
Figure 5. Joyce, 2021.
contextual explainer paragraphs which provide broader context for the claim,
argument, sentence, or text snippet itself or its connection with adjacent content.
Figure 6. Joyce, 2021.
As a standard, it is encouraged, but not required, that a "web-based conceptual portmanteau" contain source metadata for any content that is combined with the claim, argument, sentence, or text snippet. The 'physical' connection of these features can be expressed through buttons,
badges, hyperlinks, or other similar visual indicators on, in, or adjacent to the claim,
argument, sentence, or text snippet.
Figure 7. Joyce, 2021.
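The Society Library's internal node format is not specified in this chapter, so the following is only a hedged sketch, in Python, of how a claim-level information packet with at least two inextricably attached features (embedded definitions, variant phrasings, referenced media, contextual explainers) might be represented; all class and field names are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class MediaReference:
    """An externally hosted image, video, or audio expression of the claim's meaning."""
    kind: str                              # "image", "video", or "audio"
    url: str
    source_metadata: Optional[str] = None  # encouraged, not required

@dataclass
class ConceptualPortmanteauNode:
    """A claim-level packet combining text with inextricably attached features."""
    claim: str
    embedded_definitions: Dict[str, str] = field(default_factory=dict)
    variant_phrasings: Dict[str, str] = field(default_factory=dict)  # register -> phrasing
    media: List[MediaReference] = field(default_factory=list)
    contextual_explainer: Optional[str] = None

    def feature_count(self) -> int:
        """The definition asks for at least two of the listed features."""
        return sum([
            bool(self.embedded_definitions),
            bool(self.variant_phrasings),
            bool(self.media),
            bool(self.contextual_explainer),
        ])

# Invented example built from one of the sample claims above.
node = ConceptualPortmanteauNode(
    claim="The word cyborg was coined in 1960.",
    embedded_definitions={"cyborg": "a being with both organic and engineered parts"},
    variant_phrasings={"plain": "'Cyborg' entered the language in 1960."},
)
assert node.feature_count() >= 2
```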
"Web-based conceptual portmanteau" is not merely conceptual, but an innovation of text that
is already being applied with web-based technologies at the Society Library. "Web-based
conceptual portmanteau" was invented in order to convey the more precise meaning of
claims, arguments, text snippets, and sentences for the purpose of enabling comprehension
and understanding in an educational context.
Since this is the first articulation of “web-based conceptual portmanteau” that we know of,
it is understandably the most clunky and cumbersome version of itself currently. We assume
that more deeply integrated, intuitive, and slick depictions of this concept will be
forthcoming.
We believe that the future of text includes a compound, multi-media representation of a
claim which may be expressed as features in a package, such as our “node” structure. Text is
two dimensional, but with the capabilities enabled by web-based technologies, the future of
text can be multidimensional.
We thank our volunteers Stephen Wicklund, Mike Kissinger, and Presley Pizzo for making
“web-based conceptual portmanteau” possible at the Society Library. The Society Library is a
501(c)3 non-profit digital library that builds educational databases of knowledge by
extracting arguments, claims, sentiments, and evidence from books, academia, news, the web,
and other media. See more at SocietyLibrary.org
Jay Hooper
The versatility of text: a letter of love, grief, hope, and rhetoric
Text, drawn via ink on paper, struck via hard polymer keys, poked and prodded via a glass
screen, enunciated painfully to Siri or Alexa or...
Text as presentation, a stylistic flow of words, synonyms, grammar, emoji, emoticons, font
Text as discovery, peeping beneath flaps in kids' books, metaphors in novels, arguments in
prose
Text as synthesis, card sorting and stickies and meaning emerging from the glorious messy
data
Text as verification, as Siri (or Alexa or...) reflect back what (perhaps) was said
Text as marginalia, following trails, hints of the people who came before
Text as ritual, picking out the precise notepad, pen, beverage, location
Text as writing to externalise memory, to understand oneself
Text as storytelling and emotion, as reason and reasoning
Text as hurry, information, give-me-the-data
Text as learning, who am I? what am I?
Text as self, as mind, as real, as vivid
Text as poetry
Clearly, text is (almost) as flexible and fluid as thought itself.
Let us not skirt the fact that physical text has a history that echoes far further back than any
digital artefact. Let us also, in this piece, consider digital text and its representation,
manipulation, and interpretation.
The future of text is unpredictable, like any future, but surely includes myriad variety and
form. From augmented reality, to quietly overlaid assistive text as we wander the world (names, directions, resources), to improved interfaces (please!) that clearly convey what data is flowing where, to be used when and by whom, to high-quality annotations of our reports and of the underlying data and its context. Not just what was uttered, but how, and in which
context, and what was the body language anyway?
The future of text is not only aspirational and utopian. It includes manipulation, thrusts
of calculated fake news and disinformation, carefully targeted manoeuvres executed across
tools, algorithms, crowds to achieve nefarious intent [42]. We have seen the abuse of data
mining, crowd actions such as dogpiling, Twitter bots to manipulate trends and, most
recently, the use of bots to conduct hate raids on Twitch. Not to mention the manipulations via
text that we have accepted as every day for decades, such as advertising designed to tug and
fray at our self-worth in ways that perhaps we might remedy if only we spent money on that
which is advertised. (Advertising, of course, can be expected to become more insidious as it
is tailored to your demographic data and click history, and obfuscated as it slips into your
online interactions, games and data streams.)
The future of text is awkward, rife with broken interfaces. From apps that drop UI
elements over your iPhone’s status bar, to broken grocery websites that just don’t know how
people browse for food items, to government PDFs that require the use of a mouse to select
options from a dropdown menu. It would be, alas, unreasonable to expect anything but more
of the same in the future.
The future of text is pragmatic and tool-oriented, as scholarly and applied researchers
continue to conduct user research and build tools to capture data and convey the
interpretation of that data. May the future of text include annotations to better capture nuance,
ambiguity, change and discussion.
Oh, and the future of text is, at least for the next while, balkanised across time and
space. A disparate mess, a diaspora of empty 404s and forgotten memories just like the
messy, broken humans who leave these trails in their wake. This arises not only from the
literal balkanisation of the internet [43], but from the plain and simple march of time, as the
ranks of dead websites build up, piled high from dead servers, expired domains, and
bankrupted businesses. It happens offline too: remember all those misplaced files and folders,
piled on your desktop under the revealing name of "temp"? Digital text, like all data, is a
mess.
The future of text is questions. What happens to my Google search history, who can use
that and how, who is not authorised but has a slim chance of getting in there (and who is
liable when they do)? How will the politicisation of text affect us all? How will the value of
my data change and how will my data be used?
Context is everything, and in the present day we often encounter text detached from its
context of origin. This is where and why we need to make sense of it, and where we call on
web scientists and all who hail from data science, network science, sociology, linguistics,
psychology, law. We need to reach our cupped hands into the waters of data, see what we see,
infer as best we can the best way forward.
How else can we understand how to deal with implicit information such as cultural
elements, mis- and disinformation, community values, subtly shifted meanings over time?
(Sometimes, of course, we have to talk to people.)
The tools must grow with us: algorithms, research methods, software.
So our data will remain incomplete, messy, subject to misinterpretation. But we will
continue to collaborate to understand it as best we can, to make sense of this strange world of
ours, and most of all, the humans who express themselves within it.
The future of text is as messy, vibrant and beautiful as it always was.
Jeffrey K.H. Chan
Text as persuasive technology: A caveat
Of the many uses of text, perhaps none is more morally confounding than using it to
persuade. To persuade a person is to change his or her mind using the meanings of the text.
Persuasion presupposes reason, yet reason cannot fully grasp how persuasion works. Even so,
persuasive text is used to change our minds on who to elect, which café to visit, and today,
why people should be vaccinated.
Persuasion often employs facts and reasons, which are framed in ways that invite fair
and critical deliberation. A person is persuaded when he freely accepts the outcome of this
deliberation. The same cannot be said for manipulation. Cass Sunstein is therefore correct to
distinguish between persuasion and manipulation [44]. Manipulation tries to influence people
without sufficiently engaging or appealing to their capacity for reflection and deliberation. A
person is manipulated when he is led to do what he would not freely do of his own volition.
Manipulation operates by veiling the manipulated, and trust is always betrayed with every unveiling.
Nevertheless, it is unclear if persuasion and manipulation can always be clearly
distinguished in practice. Not all persuasions are motivated with the interest of the persuaded
in mind, and there are manipulations driven by noble intentions that ply persuasiveness as
their overall ploy. Unless their respective motivations and intentions are known and publicly
conceded, it may be next to impossible to discern between selfish persuasion and salutary
manipulation. Especially when so much of textual persuasive technology today is in the
hands of Big Corporations and Big Governments (or their indistinguishable syncretism) that
may not always have the best interest of the people that they are trying to persuade in mind,
skepticism of the text as persuasive technology is required.
What then makes the text unique among other persuasive technologies? First, a
persuasive text must be externalized as an artifact. This artifact not only connects the
persuaded to the persuader, who is the likely author of the text, but also, on behalf of the
persuader, continues to perform the task of persuasion. Arguably, the persuasive text might be
the world’s first autonomous technology—bound to the perpetual task of persuading
everyone that encounters it. Every generation, ad infinitum, encounters the persuasive power
of Socratic texts anew.
Second, a series of generative operations can be performed to amplify the power of a
persuasive text. As an artifact, this text can be replicated and shared, either in part or as a
whole; it can be embellished, expanded or layered with other texts; it can constitute the
genesis of an entirely new text. An artifact often invites artistry, and a persuasive text is no
exception. Especially for an interactive persuasive text, this artifact can be designed to
‘invite’, ‘encourage’, ‘nudge’ and even ‘steer’ people: crafted to deliver the precise
‘affordances’ that then constrain human behaviors in the direction preferred by the designers
of this technology [45]. Persuasion is the artful contours cast in the preferred direction of
these designers—where presumably, following, rather than fighting them, demands less
effort.
Third, although connected by the text, the persuaded and the persuader maintain
unequal footings. The persuader usually enjoys full knowledge of his own intention to
persuade, but this intention is often opaque to the person he is trying to persuade. The
persuader also enjoys the luxury of time to refine his persuasive texts, and the more robust
these are, the more likely a person can be persuaded in less time. Conversely, the person to be
persuaded is neither given as much time nor a chance to consent before accepting the
meanings organized to change his mind. As a matter of fact, the two-step convention of first
seeking consent and then intervening to change a person’s mind is conflated in textual
reading: a person’s mind is being changed as he engages the persuasive text. It is impossible
to seek consent while receiving an intervention. With this unequal footing between the
persuader and the persuaded, it is no small wonder that persuasion often tries to sneak in like
a thief in the night.
Compounding all these is the possibility that persuasive text today can be produced by
AI-driven text generation. Granted, text generation is still an incipient technology. But given
time, data, and greater computational powers—especially development in affective
computing—this technology is likely to approximate the persuasive power of human-generated
text even if it might never understand why it is persuading. If persuasive power is a uniquely
human capacity, then making machines that are capable of persuasion is no different than
trying to counterfeit humanity. Counterfeiting humanity, according to AI expert Frank
Pasquale, is not only deceptive but also unfair because it gives the false appearance of human
interest and support where there is none [46]. The emergence of this AI technology is likely
to further blur the line between persuasion and manipulation—if only because a persuasive
machine-generated text that appears to counterfeit humanity can never shake off the suspicion
of being also manipulative.
Of the many futures of text, persuasive text is likely to become more salient in a
fractious world. More persuasion will be seen to become necessary where there is less
solidarity; or to paraphrase Richard Rorty, when people’s self-conception increasingly
appears to bear no relation to others, they have usually been persuaded to change their minds
[47]. Yet despite all the seemingly justifiable things that persuasive texts try to do, it is also
important to keep in mind their proximity to manipulation. If persuasive texts can never be
cleanly dissociated from manipulation, then the next best safeguard may be the caveat of this
reminder.
Jessica Rubart
Collaborative-Intelligent Sense-Making
Concepts
Language: A method of human or system communication
Collaboration: Working together
Sense-making: Giving meaning to something
Shared understanding: A group’s perception
I like this famous quote of linguist Benjamin Lee Whorf: "Language shapes the way we think
and determines what we can think about" [48]. It articulates the importance of a language’s
structure and its expressiveness.
In Germany, for example, gender-neutral forms have been discussed for years in order
to explicitly include women and non-binary people. In particular, during this year’s election
campaign the debate about gender-neutral forms in the German language has highlighted
differences between the parties. The German language, as others, genders words. For
example, the German word for a male author is “Autor” and for a female author it is
“Autorin”. The plural forms are “Autorinnen” (female) and “Autoren” (male). Traditionally, speakers use the so-called “generic masculine” in a general context, in this case “Autor”
(singular) or “Autoren” (plural), and argue that all genders are included. This causes
misunderstandings and linguistic discrimination. There are gender-sensitive solutions
(besides mentioning all different forms), such as using the gender star, colon or underscore, in
our example “Autor*innen”, “Autor:innen”, or “Autor_innen” in the plural form. The inserted marker is pronounced as a short break (an unvoiced glottal stop) and is intended to include all genders explicitly in a short form.
This shows that a gender-inclusive language is important and that language develops
over time.
In the Hypertext community, many languages have been discussed – for efficient and
effective collaboration between humans and machines, between machines themselves, as well
as between networks of humans and machines.
For example, in social media users create, share, comment, rate, and tag content.
Collaborative tagging of information resources by end users and sharing those tags with others was coined “folksonomy” by Thomas Vander Wal in 2004, as a combination of “folk” and “taxonomy” [49]. With tags, people add explicit meanings to items on the Web.
Folksonomies are very useful for information retrieval tasks.
In contrast to such usage-driven languages, the Semantic Web, for example, focuses on
integrating machines. Textual languages, such as the Resource Description Framework (RDF)
and the Web Ontology Language (OWL), are used to describe resources and knowledge
about those. The aim is for machines to be able to interpret and reason about data across applications and organizational boundaries.
In the Hypertext community, there are approaches that map between user-oriented and system-oriented languages. Schema-based hypertext, for example, utilizes typed nodes and
typed links. These types can relate to either system types or user-oriented types. In MacWeb
[50], for example, one can specify an object-oriented structure for system types. In
Compendium [51], semantic types are used. These describe the semantic purpose of a
structure element and are sometimes referred to as “role” [52]. They can represent the users’
language.
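Neither MacWeb nor Compendium is quoted here; the following is just a small Python sketch of the general idea of schema-based hypertext: nodes and links carry types, and a collaboratively configurable mapping can relate user-oriented semantic types to underlying system types. All type names and content are invented for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TypedNode:
    """A hypertext node carrying a semantic (user-oriented) type."""
    node_id: str
    node_type: str        # e.g. "Question", "Idea"
    content: str

@dataclass
class TypedLink:
    """A typed relationship between two nodes."""
    link_type: str        # e.g. "responds-to"
    source: str
    target: str

# A configurable mapping from user-oriented semantic types to system types.
semantic_to_system: Dict[str, str] = {
    "Question": "Node",
    "Idea": "Node",
    "responds-to": "Link",
}

nodes: List[TypedNode] = [
    TypedNode("n1", "Question", "How should gender-inclusive forms be written?"),
    TypedNode("n2", "Idea", "Use the gender colon, e.g. 'Autor:innen'."),
]
links: List[TypedLink] = [TypedLink("responds-to", source="n2", target="n1")]
```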
In [53] a meta-modeling approach is described that allows mapping of system types to
semantic types by collaborative configuration. Spatial hypertext [54] supports the creation of
structure, such as categories and relationships, by means of visual attributes and spatial
proximity. In this way, a visual language develops over time. Spatial parsing or structure
mining algorithms can support the identification of explicit structure and by this means the
knowledge building process of users. Text mining and natural language processing can
identify structure from text [55].
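Spatial hypertext systems and their parsers differ considerably, so this is only a toy Python sketch of the basic intuition spatial parsing relies on: items placed close together on a two-dimensional canvas are mined into implicit groups. The distance threshold and the layout are invented for illustration.

```python
from math import dist
from typing import Dict, List, Tuple

def spatial_parse(items: Dict[str, Tuple[float, float]],
                  threshold: float = 60.0) -> List[List[str]]:
    """Group items into implicit clusters by spatial proximity on a 2-D canvas."""
    clusters: List[List[str]] = []
    for name, pos in items.items():
        for cluster in clusters:
            # Join an existing cluster if close to any of its members.
            if any(dist(pos, items[other]) <= threshold for other in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Invented layout: two notes placed together, one placed far away.
layout = {"note A": (10.0, 12.0), "note B": (40.0, 30.0), "note C": (300.0, 280.0)}
print(spatial_parse(layout))   # [['note A', 'note B'], ['note C']]
```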
I think collaborative-intelligent sense-making in terms of providing shared
understanding between collaborating users as well as intelligent systems is very promising
for the future of text.
Joe Devlin
Marginalia Drawings
Two examples of compressed marginalia drawings, collating all notations made by former readers found in library books onto a single sheet of A4 paper. An ongoing series.
John Hockenberry
Text as a Verb, a Noun and The Revenge of Phaedrus
In a book about written text, I confess the height of cheekiness in beginning an essay with
this question: “Would you prefer oral?” Now, for those of you with some familiarity with the
works of Plato and the controversy in the ancient world concerning written versus spoken
language, you will see the reference. But children of the screen-mediated 21st century will
come to this question from a radically different context. They will see this texted question as
part of an information-gathering ritual for online hook-up sites. Texts in this application deliver and receive information to help users decide whether or not they want to proceed with
a date. It is very much the present moment of TEXT to have this emergent identity as a verb,
a feature that seems to be at the center of an explosive restructuring of the nature of language
and communication. Since its birth in the 1990s, “TEXTING” has been driving language interactions ranging from anonymous and candid sexual discourse that bypasses social conventions of modesty and etiquette, to targeted advertising, to passionate and often hysterical political speech, to weaponized and deliberate propaganda speech meant to delude
and confuse people, to popularized discussions of mindfulness that confront, albeit
superficially, issues in philosophy and religion.
The question is not new, and it has quite another context in the 4th century BCE where
asking “Do You Prefer Oral?” might have led to a heated discussion with Plato (who also
talked a lot about erotic love) concerning the relationship between language and knowledge.
The story of how a civilization that cultivated and transmitted knowledge through oral speech
became a civilization that preferred information set down in text through writing is a
narrative that has been evolving since the emergence of language itself. Imagining the future
of text takes us to the next transition in this story of human consciousness.
In Plato’s time speech was the superior channel for communication, oral language was
the platform and speaking and arguing were the verbs. For Plato, writing was something of a
recording device, soulless and mechanical, text was the platform for facilitating events like
theatre. Plays needed to be written down to create performances. Text was also an important
tool for commerce and “transcribe” or “leaving one’s mark” were among the verbs for this
cursory exercise in communication called writing.
In Plato’s time of papyrus and hand transcribed codices the importance of reading and
writing as an alternative to speaking was already a sophisticated controversy but the
development of printed text centuries later made inevitable the ascension of text over speech.
The supremacy of text has been untouchable for 24 centuries. Now technology has created an
urgency to recognize the limitations of traditional text. This “Future of Text” project as well
as others are inquiries into how reading and writing might become as multidimensional as
other contemporary media. Enhancing text with digital tools is an initiative of our era but the
limitations of text that we encounter every time we interact with websites and live media
were well known by Plato. The curious dialogue Phaedrus is in part an indictment of text and
writing that has always been seen by scholars as either a spirited defense of rhetoric or a
curiously misplaced diatribe against the future. Curious because Plato made sure his
dialogues were carefully written down. Did he anticipate the death of an oral tradition that
had delivered language and agriculture to civilization but would not withstand the arrival of
writing? The fact that Plato’s transcriptions of idealized conversations were written down is
why we know about them at all in our time.
In Phaedrus, Plato argues that the writing of text constitutes an important archiving
function but ultimately it is a degraded platform of knowledge compared to oral language.
Text cannot interact with the reader in any meaningful way. Excessive use of written text can
only impede true internalized learning best stored, Plato insisted, in vigorous and continually
expanded human memory and communicated with the tools of rhetoric to refine and upgrade
truth through argument and persuasion. Relying on inert writing, Plato argues in the dialogue
(in a familiar blog-like rant) can only degrade the memory capacity of the brain and diminish
the authority of knowledge because writing is not generated through continual oral challenge
and questioning. Plato prefers “oral”, this much is clear.
Plato might find personal vindication in our time observing how text itself has crossed a
threshold to become a verb, defying its own limitations even as scholars work hard to design
hypertext enhancements and poly-modalities for text, a system of symbols and syntax that
remains the repository of human knowledge. Plato would see “texting” as the rescue of
rhetorical techniques from a prolonged dormancy. He would recognize other signs such as the
explosive growth of live presentations like TED Talks and virtual conferencing. Plato would
no doubt be appalled at the atrophied qualities of spoken rhetoric in the modern public
square, but he would see texting and apps like TikTok as attempts to reengage with the
spontaneous skills of rhetoric to draw audiences and persuade listeners and viewers. Plato
would undoubtedly be an avid counter of “likes.”
While the superficial impressions of texting by today's critics are that it is a retreat to the
brainstem, language stripped of nuance and punctuation and blunted with sexual energy and
rage or trivialized with meaningless narratives of celebrity, there may be other ways of
thinking about texting. The events of the past 5 years have seen texting emerge as an
influential rhetorical device for heads of state and activists driving politics, the search for
justice and the maintenance of public health. All of these issues were of concern in Plato’s
time and while he would be disgusted with the qualities of execution in Twitter and Snapchat,
he would certainly embrace and encourage the scale of the interactions.
As an active verb, texting drives all kinds of speech and mass communication now. At
scale texting may constitute a re-ascension and restoration of the supremacy of the oral
transmission of language and knowledge. It is an arc that extends from before the time of
Socrates and Homer and makes a steep upward bend with the 20th century development of
electronic media. Does this oral-textual convergence become inevitable as the Internet
reaches a critical mass in this century?
Texting and the subsets of Tweeting, commenting, chatting, subreddits and postings of
all kinds are driven by writing but they are an instantaneous experiential form of
communication that embodies the dynamic interactive qualities of speech. With the
sometimes absurdly sentimental or infantile humorous visual elements such as emojis and
YouTube GIF memes, users seem to be creating a spontaneous hypertext language of
unpunctuated acronyms, and emotional symbols. These user-crafted enhancements of text
grow even as the reading of traditional manuscripts declines.
Worldwide 3 billion people text. Each day nearly 30 billion texts in one form or another
are sent and received. It may have taken 2400 years for the tension between text and
knowledge to re-emerge, but the arguments made in Plato’s Phaedrus about the dangers of
relying on writing seem to predict the growth of texting as a broad cultural platform for
communication while traditional written knowledge drifts to the fringes. In a nation of 220
million adults barely 2 million US citizens read a whole book in a year. Even in the world’s
reading leader, India, the impressive average of 10 hours a week spent reading is dwarfed by
time texting each day for that nation’s nearly 750 million mobile users.
Plato writes in Phaedrus that writing resembles static paintings which, “stand there as if
they are alive, but if anyone asks them anything they remain most solemnly silent. The same
is true of written words.” But the scale and rapidly iterative interactivity of tweeting and
texting may have broken this inertia giving active texting the qualities of rhetorical speech. If
we closely apply Plato’s analysis, texting as a verb may not be a retreat from knowledge but a
return to an even more ancient construct of knowledge, albeit on a vast scale. An important
difference that Plato would articulate as a warning is how the internet functions as a
prosthetic for the human brain. Philosophers such as Franco Berardi and Catherine Malabou
have noted that our modern understanding of neuroplasticity in the brain confirms Plato’s
warnings about the impact of text and speech on consciousness. It also confirms propositions
of earlier philosophers from Spinoza to Heisenberg regarding the impossibility of specifying
any absolute condition of cognitive reality. Irreversible damage to the brain from “Googling”
and the growth of online life and work, as has been argued by critic Nicholas Carr, may be as
unwarranted a pre-judgement as Plato’s suggestion that writing would degrade human
intelligence, but there is no debate that there is a tangible impact on the structure and
organization of the central nervous system from online experience.
Will digital technology further migrate brainpower into static electronic repositories at
the expense of memory and cognitive consciousness? Plato would warn that reliance on
digital tools would threaten to diminish individuals into nodes of emotion and ideology that
merely signify rather than engage in a mission of persuasion and compromise. There is ample
evidence of this static and anger-driven discourse impeding urgent political and economic
reforms. But there is also evidence of individual users adopting texting into an emerging and
profoundly rhetorical multimedia experience in the work of performers such as rapper
Donald “Childish Gambino” Glover, comedian and storyteller Bo Burnham, pop singer Billie
Eilish and visual artists Jason Innocent and Barbara Kruger who all rely on text as a means of
provoking and interacting with their audiences. Kruger precisely references Plato’s ancient
warnings about writing’s potential for spreading untruth and enabling plagiarism when she
said this in a 2021 interview,
“Digital life has been emancipating and liberatory but at the same time it’s haunting and
damaging and punishing and everything in between. It’s enabled the best and the worst of
us.”(fn)
Plato had a profound suspicion of mass culture and a disturbing faith in elites to be the
preferred custodians of knowledge and truth. The future of text is surely in large part a duel
between what academic elites might make possible and what mass culture will make
irrevocable in language and communication. A strong preference for either “oral” or “written”
may ultimately be a hindrance in acquiring full literacy in this century. Kruger’s notion that
the “best and the worst of us” has been enabled in the current turbulent environment for text
and communication may be powerful evidence that Phaedrus is a living ghost of memory,
speech and writing who has haunted 24 centuries of human civilization and is very much with
us today.
Jonathan Finn
Meaningful Text For Mindful Devices
The future of text is meaning.
We seem to be on a journey from WYSIWYG documents in the 1980s via video calls in
the present to AR and VR in the future, during which text, along with the computer, is
gradually fading from view… perhaps in favour of intelligent glasses which enhance the
world with simple labels and the like. But we need full text in an augmented world just as we
do in the real world. How else to express anything beyond what we see in front of us?
Text is currently the most powerful way to transfer meaning between minds. (Those
two words were the same about 6000 years ago: méntis meant thought.) That’s because text is
frozen speech. Speech is single use, evanescent, whereas text is a recording you can replay by
eye at any time, and skip forwards or backwards at high speed. But something is lost in the
conversion from meaning to speech to text. In a speaker’s mind are something like trains of
thought using facts, beliefs, hypotheticals, inferences, analogies, goals and much more, tying
together mental objects and relationships (with a halo of subtle connotations attached). These
structures are converted to words and can be roughly reconstructed in the listener’s mind, but
only if a shared vocabulary and shared background knowledge are assumed. So those too are
part of the meaning. With text you lose more than speech because it isn’t interactive, as
Socrates bemoaned: there’s no speaker who you can ask to elaborate what they mean.
Still, text can be updated to show more of what’s in your mind. This has been
happening for millennia: the original text of the Epic of Gilgamesh isn’t even divided into
words, nor does it have a title to encapsulate it. As of the late 20th century, numerous
haphazard text upgrades had added features such as: symbols for pause and intonation
(punctuation), précis (titles, headings), structural marks (paragraphs, parentheses, bullet lists),
the beginnings of interaction with the writer (footnotes and indexes). Let’s call this pre-web
version Text 1.0. Alongside it a Text 1.1 has developed with a patchwork of graphic
conventions which are text in all but name, just difficult to type. Symbols both concrete and
abstract, special layouts, arrows, boxes, speech bubbles and so on. We see Text 1.1 on road
signs, presentations, cartoon strips, product packaging, animations, everywhere. It’s
haphazard and ill-defined, but it shows what a huge appetite there is for augmenting text.
Infotext
We can have a radical upgrade, a Text 2.0, to communicate much fuller meaning from one
mind to another. We’ll call it Infotext: it annotates plain text with a standardised repertoire of
lines, symbols and colours in a precisely defined way. Infotext is natural and intuitive, partly
based on existing conventions and is learnable in a few minutes. If in doubt you can always
just read the text and ignore the graphics. (It has areas of overlap with some infographics,
emojis, Bob Horn’s structured writing, and numerous other initiatives.) Infotext can be used
on signs and in print, but its fullest form is interactive: on a computing device it could
become a paradigm of what Jef Raskin called the Humane Interface.
The first aim of Infotext is clarity. When skimming lots of information, such as a
message thread, search results or a bibliography, the basic meaning should jump out. When
reading slowly, it shows you subtler structure and meaning. A piece of Infotext starts with one
or more tiny précis – just as news articles have a headline, then a lede (a summary in the first
sentence) – but these are much smaller. A symbol near the start of the text shows its category,
more fundamental than its content: fact (information), fiction, opinion, proposal, intention (a
plan), question, etc. (The categories have logical definitions, and are already deeply
embedded in some languages, such as the subjunctive for non-actual events.) Near that is a
précis no longer than a word: like Recipe, or initials like JF for a person, or one of a
controlled set of emoji-like symbols. Of course some apps do things like this on an ad hoc
basis, but it should be as standard as starting a sentence with a capital letter. More summaries
of increasing length can also appear, which are key to skimming.
Within the main text many types of meaning can be shown. Here’s a taste of them.
There are obvious structures like lists and clauses, but others are more subtle: importance
(shown in Text 1.0 by bold and italic, but in Infotext also by size and colour) is inferred from
the reader’s personal priorities, not just the content alone. There are far-reaching logical
connections such as what I call the So-relation, a kind of cause and effect: in an email thread
you discuss going on holiday somewhere sunny, maybe Key West, which you could reach by
flying to Miami and hiring a car. Here are 3 levels of goals: the 1st (a sunny holiday) can be
achieved by the 2nd (going to Key West), which can be achieved by the 3rd (a chain of 2
goals, the flight and the car). In a long text these connections can be far apart, but form a
hidden network in text (and in life) for Infotext to reveal.
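Infotext is a proposal rather than an implemented standard, so the following is only a speculative Python sketch of the kinds of annotation described above: a span of text tagged with a category (fact, opinion, plan, question, and so on), a précis no longer than a word or two, and a chain of goals linked by the So-relation. All names and values are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Category(Enum):
    FACT = "fact"
    FICTION = "fiction"
    OPINION = "opinion"
    PROPOSAL = "proposal"
    PLAN = "plan"
    QUESTION = "question"

@dataclass
class Annotation:
    """A piece of Infotext: plain text plus a category symbol and a tiny précis."""
    text: str
    category: Category
    precis: str                                   # no longer than a word or two
    so_supports: Optional["Annotation"] = None    # the higher-level goal this serves

# The holiday example: three levels of goals linked by the So-relation.
holiday = Annotation("We want a sunny holiday.", Category.PLAN, "Holiday")
key_west = Annotation("Go to Key West.", Category.PROPOSAL, "KeyWest", so_supports=holiday)
travel = Annotation("Fly to Miami, then hire a car.", Category.PLAN, "Travel", so_supports=key_west)

def goal_chain(annotation: Annotation) -> List[str]:
    """Walk the So-relation upward from means to end."""
    chain: List[str] = []
    node: Optional[Annotation] = annotation
    while node is not None:
        chain.append(node.precis)
        node = node.so_supports
    return chain

print(goal_chain(travel))   # ['Travel', 'KeyWest', 'Holiday']
```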
Writing Infotext on your device isn’t hard work since you just type text normally, or
speak, and most annotations are inferred automatically from the content. Those that aren’t
can be added by keyboard shortcuts for basic human concepts such as ‘interesting’ or ‘my
idea’ or ‘I believe this’, combined with shift keys like ‘not’ and ‘very’. (Shortcuts more
fundamental than cut and paste.) So just as your device always displays text with suitable
layout and fonts, it will show all text as Infotext, no matter its source.
Meaning enhanced by knowledge
More advanced Infotext requires more advanced text analysis. Suppose an email contains text
quoted from earlier in a thread: Infotext can trivially identify this as 'not new information’, so
grey it out or shrink it as unimportant. Email apps do this of course, but it shouldn’t be a
special feature of an app: it’s part of your device’s fundamental duty to give you only what’s
useful to know. But how can it deemphasise information which was previously stated a
different way? The text can be converted internally to a knowledge representation (KR):
something like a concept map, showing the relationships between concepts not words. This is
far from a fully-solved problem but it makes many other semantic features possible:
searching text for ideas (not specific words), or making an auto-précis of any desired length
(say for a title or a 1-line summary).
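As a toy sketch of the idea (not a claim about how Infotext would actually be built), a concept-map KR can be approximated with nodes, relation triples, and a crude novelty check that a display layer might use to grey out text that adds nothing new. Treating lowercased content words as 'concepts' is purely illustrative; real concept extraction is far harder.

    STOPWORDS = {"the", "a", "an", "to", "of", "and", "or", "in", "is", "are", "by", "we"}

    def concepts(text):
        # crude stand-in for real concept extraction
        return {w.strip(".,!?").lower() for w in text.split()} - STOPWORDS

    class ConceptMap:
        def __init__(self):
            self.nodes = set()
            self.edges = set()   # (concept, relation, concept) triples

        def add_text(self, text):
            self.nodes |= concepts(text)

        def relate(self, a, relation, b):
            self.edges.add((a, relation, b))

        def novelty(self, text):
            # fraction of the text's concepts not already known;
            # low values suggest 'not new information', so the display could shrink or grey it
            c = concepts(text)
            return len(c - self.nodes) / max(len(c), 1)

    km = ConceptMap()
    km.add_text("We could fly to Miami and hire a car to reach Key West.")
    km.relate("key west", "reached by", "flight to miami")
    print(km.novelty("Fly to Miami and hire a car."))     # low: already known
    print(km.novelty("Or take the train from Orlando."))  # higher: new concepts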
Of course, if your device can make a KR of one piece of text, it can make a large KR of
all the text documents on your device, effectively containing a version of all its knowledge.
(Not to mention internet sources…) Then it can identify not just what information is new to
this email thread, but probably new to you – and endless other things. This isn't a hand-wavy
appeal to AI where 'anything is possible', as Infotext uses the KR primarily for displaying
information, not for making its own deep inferences or new ideas. Your devices already take
a small step towards this by storing your diary, photos and other documents centrally, tagged
with simple meanings – but apps have hardly scratched the surface of joining the dots
between them, and text is almost ignored.
As a very simple example, your house painter messages you: “I think the kitchen paint
could be too dark, how about Honeysuckle 4 or 5 instead?”. Backed up with a KR, the
Infotext graphics show you this is an opinion, from a known person (Sandy, with lots of
relevant information about him), about a known colour (Honeysuckle 3, identified from a
previous message), written at a specific time (part of the text, not a feature of the app). These
aren’t links but annotations you can expand by tapping or just looking, whereby ‘wall colour’
shows its name, who chose it and when, and your photo of a sample on the wall. You can
explore the KR’s information further from there, search it and of course do a wealth of useful
things. But if this sounds like browsing a website or mind map it’s not really the same:
without relying on explicit tags and contexts, it shows the information that’s relevant to what
you’re doing, what you should think of now. (That means there’s little need for viewing
options – it broadly knows what you’re up to.) And instead of apps with individual
appearances, the whole view of the information dynamically adapts to its content and the
situation.
A Humane User Interface
You can use Infotext to interact. Your house painter was implicitly giving you 3 options,
which can appear explicitly as (say) 3 branches meaning possible scenarios. You can just tap
or blink at one to respond, just as you could speak a single word in reply. This is obviously
equivalent to a dialog box, and indeed questions and messages from your device itself appear
just as if it’s a person – not via buttons on a special window. (After all it is like a person…
who though?) Many other special features of your device’s GUI just become Infotext. I’d say
it starts to be a Humane User Interface or HUI.
Converting apps to use this HUI clarifies their logical essence, and often shows that
their features aren’t really their own. When a satnav app offers 3 alternative routes to a
destination, these would appear just like the 3 choices in a message. After you’ve made your
choice, the Infotext summary of the route indicates that it’s a plan (as opposed to a fact,
opinion, etc.), meaning the details and times can change. But future entries in a calendar app
are also Infotext plans: events that ought to happen, but may not. So these are shown in
exactly the same way as the satnav plan – and why not even in the same place, which is not in
any one app? This isn’t surprising, because you only have one time schedule and only one
mind.
To cut a long story short… your device becomes a store of information: your ideas,
memories, plans, messages, documents and more. Much like now – but radically rearranged.
You access and view this centrally, not inside apps, using Infotext with other media (photos,
video, animation). Many apps partly melt away as they focus on doing, not showing. Some
disappear entirely and we wonder why we ever needed them, as their role is replaced by that
of the interface: to display meaning directly. This system is more than an app, but less than an
OS, and quite suitable for existing devices. It’s far from easy to achieve, but doesn’t require
general AI: it’s not doing your thinking for you. It’s using the right way to help you think.
Working With Ideas
I’m working on a detailed version of this proposal called MindsEye. As well as filling out
much more about Infotext as a medium and as an interface, it shows how to extend it to
editing ideas directly. Infotext can represent trains of thought and other mental constructs,
and show them in some radical ways. Editing these – even if you want simple text to be the
end product – is the true goal to which apps for word processing, mind-mapping and note-
taking only aspire. You can send people the ideas themselves, which even for the simplest
message is better than text. Just as spreadsheets adopted cells linked together as the model of
calculations, MindsEye has a model of thought (and it’s not logical propositions): but that’s
another story.
The aim of working with ideas not text is a very old one, but I think we’ve been lured
for decades in a different direction by the siren song of the desktop metaphor. Your computer
screen became a mini office with documents, folders and a wastebasket, where you type a
WYSIWYG preview of the article you’re about to print – if you ever actually print it. As your
screen has got bigger and better so has this virtual office, and soon with AR or VR it will
almost become real! A brilliant idea, but it contains an insidious error: a half-finished article
shouldn’t look like half an article, as it does under the desktop metaphor. It should look like
all your ideas and aims visualised, some of them detailed enough to be text, most not, but in
some abstract meaningful form unlike notes, mind-maps or anything we have now. It should
look like your mind. And when you’ve finished your article so you can finally imagine how it
would look on paper, only then is it right to see a preview: because logically the whole screen
always represents your mind’s eye, not an office.
Your device’s display seemed like looking through a telescope at the outer world, but
you were looking through the wrong end: it should have shown your inner world all along.
Some things on the screen look realistic, such as a 3D mockup of your painted kitchen, but
not because the screen represents the real world: they’re just reflections in your mind’s eye.
There are times (like walking along the street) when you could well use augmented reality,
but when you’re doing ‘knowledge work’ (which is most of the time) you need an augmented
mindspace. Doug Engelbart’s ground-breaking 1968 demo certainly seems to me more like
the latter.
I feel that much of this is obvious, yet numerous other projects pointing in the same
direction have only gone partway… In 50 years computers have advanced enormously, but
their model of information hasn’t made the same progress towards what I see as its natural
conclusion: text augmented in the right way can express meaning fully, and then be the
humane interface to a device which logically is part of your own mind.
Experimental listing of concepts
meaning: information as it is used by a human mind, taken from words (narrow definition)
or from any source (wide definition)
infographic: a graphical item which conveys information easily
Infotext: a proposed system of infographic annotations of text to enhance its meaning
augmented reality (AR): a digitally-enhanced experience of the real world
virtual reality (VR): a digitally-created experience of a world, not necessarily like the real
world
desktop metaphor: describes user interfaces where the screen is a virtual desk on which is
office equipment (paper, folders, a calendar, a printer etc.)
Karl Hebenstreit Jr.
To me, a word is worth a thousand pictures
The aim of this essay is to consider the future of text in service to people with disabilities by
identifying three prominent topics: enabling real-time participation and communication,
contrasting algorithm-centered artificial intelligence with human-centered AI, and outlining
the implications of this contrast for placing human-centered disciplines within higher
education.
The first topic, in service to championing the civil rights of people with disabilities,
focuses on the crucial role that technology and disruptive innovation play in enabling
everyone to participate and communicate in real-time. This essay’s title was a favorite saying
of the late Joseph Stuart Roeder, a senior access technology specialist at the National
Industries for the Blind. In all seriousness, this quip highlights text’s role in transforming society and the need for text-based interfaces to content, particularly multisensory content being generated in real-time.
For background, to introduce the complexities of the underlying challenges for
disability studies to a general audience, disability advocates typically begin with solutions for
physical impairments. Because they address this most tangible form of disability, these solutions have been widely implemented, so everyone has personal experience of curb-cuts and ramps that enable access to and within buildings. In the wake of the pandemic, the advantages of
motion-sensitive (no-contact) controls for doors and restrooms are apparent. These
innovations provide examples of how disruptive technologies can benefit everyone, an
alternative to widely-held negative connotations of disruptive innovations. There can be
constructive disruptions, innovative breaks that make the conventional obsolete.
In considering the future of text, the core disruptive technology for realizing universal
real-time participation is artificial intelligence, which can enable text-based interfaces to
multimedia stimuli. For translating audio into text for the hearing-impaired, there are well-established technologies such as closed-captioning, for which traditional AI is rapidly improving the accuracy. Once again, advantages for everyone are recognized: knowing what is being presented while in noisy environments, and the same applies to translation among languages.
For the visually-impaired, challenges are more difficult and twofold. First, interpreting static
artifacts: recognizing objects in photographs and explaining infographic charts. Second, for
dynamic artifacts and occurrences, there is an art of descriptive audio, enabling a deeper
understanding of videos and holding out the promise of being able to provide a context of
what is happening, interpreting nonverbal cues. For people with the wide range of cognitive
disabilities, the challenges are even more difficult, depending not only on solutions to the above but also on delivering them in an understandable, cognitively-appropriate way.
The second topic brings attention to the fact that while the examples above highlight
successes of algorithm-centric AI, realizing the potential for people with disabilities requires
a different, expanded orientation. Fortunately, this orientation has been resurging in the form
of human-centered AI (HCAI), a paradigm that Ben Shneiderman [58] provocatively frames
as the Second Copernican Revolution. He advocates for three ideas. First, considering HCAI
as a two-dimensional framework opens up the possibilities for accommodating both AI
paradigms, rather than an either/or situation. Second, a plea for a shift in metaphors from
emulating humans to empowering people. Third, a three-tiered governance model: reliable
systems (software engineering); safety culture (organizational design); and trustworthy
certification (external reviews). These insights are further reinforced in Jan
Auernhammer's article on Human-Centered AI [59], which contrasts two philosophical
perspectives from the early development of AI: the “rationalistic” stance represented by John
McCarthy and “design” represented by Douglas Engelbart. Honoring the contributions of
Engelbart is the basis for presenting HCAI as resurging, that it is a re-energizing of
Engelbart’s intelligence augmentation paradigm. In fact, his work has guided my career in
supporting people with disabilities since discovering it in the mid-1990s. I recognized that
while his framework for augmenting human intellect [60] and system for boosting collective intelligence [61] were focused on enabling people to solve increasingly complex and urgent
problems, his framework and system apply to all human capabilities.
The third major topic concerns how HCAI will gain a stronger foothold within higher
education, bringing attention to the power relationships among disciplines. Considering the
relationship between the dominant algorithm-centered and the resurgent human-centered AI
(HCAI) perspectives, deeper insights can be drawn from the history of cognitive science. In
his call for reforming cognitive science, Ashok Goel [62] notes similarities between cognitive
science and artificial intelligence: “The developmental trajectory of AI has mirrored the
development of cognitive science, in that both started as multidisciplinary fields, and both are
now dominated by a single discipline, psychology in the case of cognitive science and
computer science in the case of AI” (p. 894). This article is based on Goel’s experiences as a
co-chair for the 2019 Cognitive Science (CogSci) conference and his response to a then-just-
published paper [63]. Goel’s article is part of a special issue, one of ten commentaries on the
Núñez article [64]. For the future of text, these conversations within cognitive science will
hopefully extend to strengthening ties between cognitive science and artificial intelligence.
In conclusion, the ideal of enabling people with disabilities to have equitable
opportunities to participate and communicate effectively in real-time is a challenge requiring
both algorithm-centered and human-centered AI. In an earlier paper, Shneiderman [65]
proposes a compromise design, applying algorithm-centered AI for internal processing in
service to human-centered AI for user interfaces. Combined, these technologies can generate
the adaptive text-based interfaces needed to continue progress toward this ideal.
Kyle Booten
O Puzzle Box of Inscrutable Desire!
Today it is still possible to pretend that we write primarily, if not exclusively, for readers who
are human. There are, however, more and more exceptions to this rule. Deft practitioners of
search engine optimization remember that for a web page to attract the favor of Google’s
ranking algorithms it must contain “primary and secondary keywords at the correct densities”
[66]. Students whose writing is subjected to the scrutiny of robo-graders reverse engineer
these algorithms’ often nonsensical expectations in order to earn better marks [67]. On
Twitter, where algorithms routinely caution, obscure, or suspend accounts for tweets that
violate the platform’s speech codes, users figure out how to obfuscate their meaning just
enough to make it machine-unreadable.
As machine learning in general becomes more easily accessible to people who are not
engineers or programmers of any sort, Algorithmic Editors will become more widespread.
After all, there are many situations in which a human needs to know something about a large
amount of text but would prefer not to spend the time reading it themselves. A particularly
popular user on an online dating platform, overwhelmed by a deluge of flirtatious messages,
might apply a textual low-pass filter of sorts, automatically ignoring both the most crass and
the most boring. But Algorithmic Editors will have literary uses as well. Many poetry
magazines receive bushels of submissions for each poem that they will eventually publish,
and often it is left to volunteer editorial assistants or under-remunerated graduate students to
sift through this “slush pile.” No doubt the editors of certain literary periodicals would like to
be able to automatically misplace those submissions that are unpromising candidates for their
pages. The Algorithmic Editor of an experimental journal would move too-tidy sonnets to the
back of the stack; that of a formalist publication would assign the same fate to poems that
seem too outré, a judgement based in part on an overabundance of irregular, unpredictable
line breaks. Already most poetry magazines require hopeful writers to send in their
works via Submittable, a submission management platform. In the background, this platform
could train a classifier to predict whether a poem will be rejected or accepted based on how
editors have handled previous submissions. Yet editors—being creative types themselves—
would naturally also want the ability to specify (in natural language, if possible) the sorts of
poetry that they really want to see—e.g., “Comfortably surreal like Ashbery, but not so
WASPy, and more overtly political.” Or, to return to the example of the dating site:
“Messages that indicate the sender is very funny but not in an overly neurotic way.” Tools for
filtering texts based on reader-defined models could become as common and portable as ad
blockers or spell checkers.
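A hedged sketch of the simplest such Algorithmic Editor, trained on past accept/reject decisions: the data is invented and scikit-learn is used purely for illustration, and nothing here describes an actual feature of Submittable or any other platform.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # toy history of how (hypothetical) editors handled earlier submissions
    past_submissions = [
        "A tidy sonnet about autumn leaves and gentle rain.",
        "Fragmented lines, irregular breaks, an overtly political undertow.",
        "Another neat sonnet, fourteen lines, nothing out of place.",
        "Comfortably surreal images colliding mid-sentence.",
    ]
    editor_decisions = ["reject", "accept", "reject", "accept"]

    algorithmic_editor = make_pipeline(TfidfVectorizer(), LogisticRegression())
    algorithmic_editor.fit(past_submissions, editor_decisions)

    new_poem = "A quiet sonnet in strict iambic pentameter."
    verdict = algorithmic_editor.predict([new_poem])[0]           # "accept" or "reject"
    score = algorithmic_editor.predict_proba([new_poem]).max()    # a real-valued hint, not just a verdict
    print(verdict, round(score, 5))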
Now for a second, more audacious prediction. At present, Algorithmic Editors are still a
somewhat embarrassing fact of our digitized textual reality. The school district that pays an
ed-tech company to implement some automatic grading software wants their students to go
on acting as if each of their precious words is being read by a human. Failure to suspend
disbelief would lead to widespread disaffection or cognitive truancy. Likewise Twitter
wants you to behave, but it doesn’t want you to remember that it is scanning your tweets for
offensive verbiage (or else its bots would congratulate you when you manage to tweet
something within the bounds of acceptable discourse).
But there is nothing inherently shameful about Algorithmic Editors as such. They might
be drawn into the limelight rather than pushed offstage. An example: right now, poetry editors
are defined as people who have the good taste and good sense to pick and publish quality
poems. Yet the editor, in our current human-centric literary workflow, is bound to their tastes
—which, as we know from Bourdieu, are not nearly as unpredictable or unexplainable as we
would like, dependent as they are upon one’s class background and related demographic
factors. In the future, editors who are truly devoted to their art as well as humble in the
recognition of their own limitations will take it upon themselves to design Algorithmic
Editors to replace themselves. However, “replace” here is not quite the right term, since these
Algorithmic Editors would not model the human editors’ desires but rather enact novel
desires, desires that could not (yet) be desired, tastes for which there is not yet a tongue.
Poised over the digital interface that allows for the rapid manufacture of Algorithmic Editors,
the human editor will hammer at statistical language models, bend and buckle them, fold
their edges backwards upon each other, bore holes and whittle notches in them, place one
inside the other, pinion them together with delicate gears and screws until what’s left is an
intricate puzzle box—one that the human editor does not, cannot say exactly how to solve.
This Algorithmic Editor will take as its input a text—a poem—and return a boolean
value signalling whether or not the poem is pleasing to it. Poets will lose sleep, lose years
trying to write a poem to satisfy the obscure and demanding whims of this algorithm.
“False…False…False…False…” it will say to all of them. Or perhaps, to help writers know
when they are on the right track, it will offer a real number. A bad attempt will be scored
0.00201. A better attempt, 0.15083. Perhaps it will mercifully provide some more helpful
feedback—these were the words it liked, these the ones it didn’t. Or complex suggestions in
natural language: “I hate poems that know when they’ve gotten to the end,” “Too chatty, and
yet you don’t say anything,” “Dactyls and iambs and trochees, all jumbled together like
mixed nuts,” or “Reminds me of Whitman. But not his virtues.” On web forums, poets will
share their strategies while cursing the Algorithmic Editor’s creator. They will perform
ablation studies in an attempt to isolate those syntactic, semantic, and prosodic qualities that
seem most auspicious. A literary magazine such as the Kenyon Review will publish the year’s
nearest misses with extensive commentary from poets as well as computer scientists.
Again and again the Algorithmic Editor will return its negatory verdict. Until at last,
one day, a poet—the one the prophecies speak of, not even published yet, just barely started
an MFA—will with blithe or quivering hand submit their newest, most uncertain
composition. The Algorithmic Editor will measure the words with its weights, then print an
unfamiliar reply: “True.” And indeed, what the poet will have written will be something True.
Lesia Tkacz
Artifact from a Possible Future: A Pamphlet Against Computer
Generated Text
“In the modern world, we are increasingly consuming and producing text which is far
removed from its natural origins. Texts are highly processed, imported, monetized by
corporations and are quickly discarded to become cyberspace junk clogging up our
information pipelines and spaces. How could this be healthy? Most of us are so far removed
from traditional text creation that we don’t realize how much of it is reproduced, reprocessed,
manufactured, and calculated by machines.
Did a human write the texts you consume? Are you crafting your own texts, or recycling
ready-made and regurgitated synthetic constructions? Imagine a world where real authors are
replaced by computers.
IT IS ALREADY HAPPENING! Research labs are quietly developing algorithms which can
generate stories and entire novels. The replacement of the novel author with the robot writer
looms ever nearer. Pure, original, imaginative, and individual creative writing is in very real
danger of being supplanted by recycled, mashed-up, statistically predicted, stochastically
screwy and over-processed content owned by mega-corporations.
Is this the textual ecosystem and future that you want?? ACT NOW TO REJECT
COMPUTER GENERATED TEXT!!! Before it is too late.”
Pamphlet Cover, 2021 Collage. 105 x 148 mm. Tkacz, 2021.
Pamphlet Pages 11-12, 2021 Collage. 148 x 210 mm. Tkacz, 2021.
Luc Beaudoin
Beyond the CRAAP test and other introductory guides for assessing
knowledge resources: The CUP'A framework
You are overloaded with documents to read. So I must quickly convince you that this chapter
is sufficiently useful for you to read it. Assessing knowledge resources is a constitutive skill
of knowledge work that requires ongoing attention, and should not be allowed to fossilize
upon graduation.
Here I present the CUP’A framework for assessing knowledge resources factually
and pragmatically. In contrast to frameworks one finds in study skills texts (e.g., the CRAAP
test), philosophy of science and elsewhere, this framework:
1 is developed not only for students but also for professional knowledge workers;
2 is inscribed in integrative design-oriented psychology;
3 includes suggestions for using information-processing software in powerful ways;
and
4 functionally specifies information-processing software (the ‘future of text’).
The CRAAP schema advises you to consider the Currency, Relevance, Authority, Accuracy,
Purpose and Point of view of information. The CUP’A schema assists in assessing the
Caliber, Utility and Potency of sources, while sensitizing one to the seductive dangers of
Appealingness.
CUP'A Assessment Criteria.
Caliber
The caliber of a resource is its objective quality with respect to reasonable expert standards,
irrespective of your particular goals, knowledge or preferences. General standards of caliber
include:
the clarity of its thesis and its overall clarity,
the suitability of research methods used,
the rigor of its arguments, backing and statistics,
the originality of its concepts, claims, findings, etc.
its actual or potential impact and significance,
its grounding in previous literature (e.g., missing or misused references),
its conceptual richness and coherence (relevant to its potency), and other criteria.
A resource can measure well against some standards and poorly against others.
Often a factual resource conveys, or at least should reference, an explanatory theory or
model. Hence criteria for assessing theories are relevant, which include
assessing its generality, parsimony, extensibility, mechanistic plausibility and
practical usefulness; and
determining whether it (a) can account for fine structure and (b) is part of a
progressing or degenerating research programme.
There are other general criteria of caliber, as well as criteria that are specific to particular
domains.
Utility
The utility of a resource is a measure and description of how instrumental it would be to
one’s projects, goals, plans and areas of responsibility — more generally, to one’s motives. A
resource may be of high caliber but irrelevant to one’s intentions, considering its cost (time,
etc.), risks and constraints. Moreover, a resource may be deeply flawed but potentially useful.
We must try to prevent our utility judgments from affecting caliber judgments.
Assessing utility requires explicit knowledge of one’s projects. Personal task/project
management software can help one track and pursue one’s projects, goals, plans and actions.
Ideally software would enable one to:
1 link knowledge resources to specific projects or motives; and
2 quantify the utility of the resource.
Explicitly making such judgments may help one judiciously select and use information.
Potency
A resource’s potency is the extent to which it might affect you as a person: your beliefs,
understanding, attitudes, goals, standards, etc. Potency is inherently subjective to you but
objective as a matter of psychology. A resource may be of high caliber but impotent to you if,
say, you have already mastered its key knowledge. A potent resource is typically difficult to
assimilate: it calls for accommodation (Piaget’s term): elaboration, restructuring, productive
practice etc.
Appealingness
The appealingness of information is how it interacts with our preferences (likes, dislikes)
and other motivators. Appealingness can adversely bias one’s judgments of caliber, utility and
potency. For instance, dubious information (clickbait, idea pathogens, etc) may appeal to
one’s preferences (‘my side’ bias, etc). For example, The Goodness Paradox by Richard
Wrangham exposed anthropologists who rejected high-caliber papers because the papers
clashed with their political attitudes. (Ironically, such clashing is itself often based on
misunderstanding.)
A promising fact is that experiencing mirth and debugging one's software both involve discovering one's errors, and yet both are pleasant. It might be possible to generalize, transfer
and nourish such dispositions (e.g., enjoying having one's flawed ideas corrected).
Future of information technology and strategies
To support assessment of knowledge resources, the following innovations are required. One
needs to be able to
Assign global assessments to resources. Not merely “likes”, but systematic ratings (and
possibly descriptions) of caliber, utility and potency.
Not merely highlight text but tag one’s annotations. For instance, one should be able to tag
text as I disagree or as containing a particular fallacy. Common categories should be built into the information-processing software, and it should be possible to add new tags.
Filter annotations by tag, for instance to list everything in the resource with which one disagrees (a minimal sketch of such tagging and filtering follows this list).
Robustly link entire sources to multiple other resources, such as one’s evaluative notes
about them (“meta-docs”), one’s projects, and related documents (such as others’ reviews).
These other resources may be developed in arbitrary software (outliners, mind mappers,
etc.) and stored locally or remotely. Ubiquitous linking software enables navigating
between a source and metadocs without searching. See “A manifesto for user and
automation interfaces for hyperlinking” and Hook productivity software.
Find previously encountered resources designated as pertinent to a (sub)project.
Share entire sources, meta-docs, and annotations; and links to said information.
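A minimal sketch, with hypothetical names rather than the interface of any existing tool, of what tagged annotations and tag-based filtering could look like:

    from dataclasses import dataclass, field

    @dataclass
    class Annotation:
        resource: str          # the knowledge resource the annotation belongs to
        excerpt: str           # the highlighted text
        tags: set = field(default_factory=set)   # e.g. {"I disagree"}, {"straw man"}
        note: str = ""

    annotations = [
        Annotation("paper-42", "Learning styles improve outcomes.", {"I disagree"}, "Not replicated."),
        Annotation("paper-42", "Sample of 12 students.", {"weak method"}),
        Annotation("book-7", "Clear account of accommodation.", {"potent"}),
    ]

    def filter_by_tag(items, tag):
        # list everything annotated with a given tag, e.g. all disagreements in a resource
        return [a for a in items if tag in a.tags]

    for a in filter_by_tag(annotations, "I disagree"):
        print(a.resource, "-", a.excerpt)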
Figure 1 Related documents that explicitly or implicitly evaluate a source. Beaudoin, 2021.
Relevance of psychology
Assessing knowledge is inherently difficult. Compounding the psychological and technical
challenges summarized above are problematic social trends wherein the humanist ideals of an
open society are rejected. Postmodernism rejects the possibility of separating truth and value
(it lacks the CUP’A framework). Fear of the ‘tyranny of the cousins’ (per Wrangham’s
theory), of being ‘canceled’, can adversely bias CUP judgments. (See Cynical Theories by
Pluckrose and Lindsay for an exposition of epistemological trends). Research in psychology
is required to help us design software and strategies to assess information objectively despite
these issues. It would help us deal with the fact that some truths may look ugly. It might help
us understand and counter the memetic fitness of parasitic information. It would help us
evaluate information analytically, systematically and rigorously.
Bibliography
This chapter is based on Cognitive Productivity books by Luc P. Beaudoin. See https://CogZest.com/projects/cupa for bibliography and supplementary materials.
Mark Anderson
Writing for Remediation—Tools and Techniques?
‘Remediation’? Here I borrow and extend the notion of re-purposing content in a new medium, as introduced in Bolter & Grusin’s book ‘Remediation’ (2000) [72]. Whilst their work used the perspective of literary criticism, here I use the term descriptively to refer to the fact that an original work may subsequently exist in media other than that in which it was originally created, possibly even with the original order or narrative altered.
Today’s text can be easily ‘remediated’ by re-presenting it in different media. This may be
either in its original linear (narrative) form or altered in some way; as multiple (hypertextual)
linked narratives, as an alternative narrative, as abstraction of certain parts of the overall
source, or in some other intertwingled kk manner.
With foreknowledge of possible future remediation—whether by authorial choice or
not, can we write so as to inform remediation in a beneficial manner? How then might our
writing tools assist in making remediated work flow and fit better in its new form? Even
‘just’ re-fitting text together is not easy—even in a single language. If proof is needed,
contrast the elegance of Ted Nelson’s ‘stretchtext’ ll concept and the challenging task of
actually writing mm for such use. What metadata might be of practical use, and how might we
create/edit such metadata alongside the primary (linear) narrative of the text?
Can all text be remediated? Probably most of it, especially if the source media is digital.
But, need it be so, or—pertinently—should it be? I would suggest not. There is an abundance
of writing where the author’s narrative (voice) is important to the understanding of the
author’s intent. Meanwhile, at the more factual side of writing, it is probably best to re-use
reactor shut-down checklist items in the order written. So remediation is not without
consequence, even outwith the lens of literary criticism. Between the above extremes, our
writing—our work—extends beyond formally published output and it is here where the
opportunities, or risks, of remediation lie.
Frode Hegland’s ‘Visual-Meta’ nn (‘VM’) standard made its formal public debut via
implementation in the papers of the ACM Hypertext’21 Conference oo. Having been closely
involved in supporting the launch of VM, I find myself now reflecting on how our present
digital writing tools lack affordances for writing in a manner informative to remediation. A
key design intent of VM is to allow documents to be more self-descriptive at locus of
interaction—much as a book’s front-papers tell us its provenance. If VM describes a whole
document, perhaps similar self-descriptive metadata (not necessarily as VM) could be created
—where pertinent—for smaller sections of a document, such as might then assist with
automated remediation. The task is not necessarily simple, or cleanly linear.
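Purely as an illustration (this is not Visual-Meta, nor any existing tool), per-section remediation metadata and a reverse 'ripple' check might be sketched like this:

    # Hypothetical per-section metadata: each section declares what it depends on,
    # so a change elsewhere can ripple back as a prompt for human (or AI) review.
    sections = {
        "clause-3.2": {
            "remediable": True,
            "depends_on": ["law:PrivacyAct-s12", "law:ContractCode-s4"],
            "remediation_targets": ["summary", "faq"],
        },
        "clause-7.1": {
            "remediable": False,    # author marks this section as order-sensitive
            "depends_on": [],
            "remediation_targets": [],
        },
    }

    def sections_needing_review(changed_item):
        # which sections declare a dependency on the changed item?
        return [name for name, meta in sections.items()
                if changed_item in meta["depends_on"]]

    print(sections_needing_review("law:PrivacyAct-s12"))   # -> ['clause-3.2']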
Consider that sections—not necessarily contiguous—of a document might be relevant
to a certain form of remediation, or the same content might map to more than one such
remediated use; different addressable sections might not be contiguous. Furthermore,
remediation effects might also work in reverse. For example, a statement in an agreement
might depend on the unchanged existence of several laws. With suitable metadata, changes in
any or all of those laws might usefully be able to ripple back through the document, if only as
a prompt for human (or AI?) review of affected text.
Inferred dependencies within text may not be obvious to (current) algorithmic analysis, and
human minds still have agility in areas where algorithms do not. Language captured as text is
complex, not least in its elisions and omissions (where meaning must then be inferred).
An implicit task here is aiding addressability, an issue considered in the earliest
hypertext systems in the 1960s such as Doug Engelbart’s NLS pp. Whilst addressability is
hardly a new issue, as yet it is often enacted with insufficient clarity of focus. The issue is less about being able to address any/all things—today’s software already allows that—and more about addressing the right things (allowing for scope and context).
Nor is the task simply a matter of ‘more (meta)data’. The volume of data, of itself, does
not yield greater insight and auto-adding metadata to every potentially addressable part of
text would produce a data overhead of questionable value (data still needs to be stored;
storage has a cost). Ergo, effective remediation metadata is inherently a deliberate authorial
action.
Forms of remediation are with us already, even if not obvious to the casual author, or
reader. It seems predictable that the degree of remediation (especially non human-controlled)
will increase. That likelihood begs a question: are we ready? Indeed: do we have tools and
techniques that allow an author to write with the ability to inform later remediation? I believe not. Yet with forethought, we ought to be able to adapt (or create new) writing tools that both work in a manner still familiar to the author and which can also seed the
resulting text with relevant metadata for remediation. The added challenge is setting an
elegant sufficiency of metadata to avoid deadweight. Thus, which additional data, attached at
what scope?
The future is not set: there is no textual fate but what we make. Let us gift our future
selves the benison of better tools for remediation.
Megan Ma
Critical Legal Coding: Towards a Legal Codex(t)
There is no legal text without context. That is, legal information exists in a networked
manner; legal documents interact with and reference one another across a temporally
sensitive frame. Therefore, legal texts should be perceived as objects with code as the
semiotic vessel. How these objects interact, how references are made, and how their histories
interrelate must be accounted for. For legal text then to exist beyond natural language and as
computer code, formal languages must necessarily be understood as linguistic mediums.
Formal languages are currently used in a manner that operates largely on efficiency.
This is perhaps owed to a limited regard of the language as strictly syntactic and/or semantic;
a focus on structure and outcomes as opposed to content and means. Analogous with learning
a foreign language for the first time, code has only been acknowledged in a functional,
mechanical sense. Metaphor, irony, fiction, and other complex uses of language have not
been considered because code has yet to be perceived as worthy of interpretation. In
defining, then, techniques of critical analysis, the potential of code as a non-naturalqq but
linguistic medium will be tested against the requirements of legal language. In doing so, I aim
to make a preliminary assessment on the prospect of a legal codex(t)rr.
Mark C. Marino argues that code, like other systems of signification, cannot be
removed from context. Code is not the result of mathematical certainty but “of collected
cultural knowledge and convention (cultures of code and coding languages), haste and
insight, inspirations and observations, evolutions and adaptations, rhetoric and reasons,
paradigms of language, breakthroughs in approach, and failures to conceptualizess”. While
code appears to be ‘solving’ the woes of imprecision and lack of clarity in legal drafting, the
use of code is, in fact, capturing meaning from a different paradigm. Rather, code is
“frequently recontextualized” and meaning is “contingent upon and subject to the rhetorical
triad of the speaker, audience (both human and machine), and messagett.” It follows that code
is not a context-independent form of writing. Having understood the complexities and pitfalls
of natural language, there is now a rising demand to understand the ways code acquires
meaning and how shifting contexts shape and reshape this meaning. The questions become
whether there could be a pragmatics of code, and if so, how could code effectively
communicate legal concepts?
In the “Aesthetics of Generative Code,” Geoffrey Cox et al. advance the notion of a
“poetics of generative codeuu.” They note that the code, frequently ‘read’ and referenced, is
only its written form. This mistakenly reduces code to mere machine-readable notation and
implies that code is limited to expressions of logic. In effect, this falsely conflates form with
function. Alternatively, they argue that to build proper criticisms of code, one must also
understand the code’s actions. Code does not operate in a single moment in time and space,
but as a series of consecutive actions that are repeatablevv.
A comprehensive literacy of code enables plays on its structure, using distinctive
syntactic operators to produce a specific arrangementww. The code’s execution is its
chronotopexx. It materializes the abstract elements and particular design choices in the
arrangements. It is where the meaning and narrative of the code is bridged with its makeup.
Code is shaped by its performance. Subsequently, the analysis of code should consider its
constant shifts in state.
The reading of code, then, requires moving past its static form to understand the effects
caused by symbols during its dynamic engagementyy. Code must be understood in action;
only then are design choices situated and contextual references revealed. To interpret and
develop critical hermeneutics, code must be understood beyond programmatic syntax and
semantics to computational pragmatics. Code “yield[s] meaning to the extent to which we
interrogate their material and sociohistorical context, […] and read their signs and systems
against this backdropzz.” Consequently, code must be read against the backdrop of its own
context vis-à-vis its transposed one.
Code is, therefore, undeniably a form of text. More importantly, its interpretative
practices illustrate that while code is not isomorphic to natural language, code as text is not
inconceivably different from natural language text. Some overlap exists. The test, however, is
not whether text generally is inclusive of code. Rather, the test is whether legal text could be
code; in effect, a legal codex(t). Nevertheless, legal language is rather distinct. Moreover,
legal concepts have relied on natural language for their expression. It is yet to be determined
whether natural language may be the only form of legal writing. That is, can legal writing
exist outside of natural language construction?
Reflecting on the distinctiveness of legal language, the initial task is to determine
whether code could fulfil the demands of the language. Peter Tiersma acknowledged the oft-
arcane qualities of the technical language. Yet, he argues that both the lexical and structural
complexities are intentional. Rather, the language is not merely communicative. Its stylistic
form is not embellishment, but in fact, integral to its function. That said, what Tiersma
alludes to is the law’s conceptual complexity traceable through its linguistic patterns. Other
scholars, such as Brenda Danet and James Boyd White, have noted that these stylistic choices
represent the symbolic significance and ritualistic behavior of the language. The poeticism of
legal language, reinforced by literary devices of metaphor and fiction, is instrumental to its
existence. The legal language is perceivably figurative and requires it to be experienced. It is
a specific imagination of fact and configures narratives as truths. As well, the legal grammar
reveals the law’s “strange retrospective temporalityaaa.” Neither causal nor chronological,
legal language establishes commitments made in the present, for the future, by referring to
the past. This nonlinear interpretation of time is an implicit representation of the
incompleteness of law, its knowledge is interruptible and incapable of total attainment.
It follows that the legal language may be categorized by three distinct markers: (1)
conceptual complexity; (2) poeticism; and (3) temporal specificity. Conceptual complexity
describes the use of specific vocabulary and peculiar sentence constructions for the
communication of legal concepts. Poeticism reflects the use of literary devices and the
heavily figurative quality of the language; and, finally, temporal specificity articulates the
law’s particular relationship with time.
From a critical lens, code is conceivably (1) incomplete; (2) poetic; and (3) temporally
driven. The second and third traits seem rather transferable to legal language. That is, artful
manipulations of syntactic operators can enable duality of meaning and metaphorical
representation. Code is also sensitive to its dynamic engagement, highly mutable and
susceptible to change. Together, these two traits pair well with the second and third
characteristics of legal language.
The first trait, however, is more complicated and perhaps the crux of the investigation.
It places at the forefront whether the lexical and syntactic complexity is inherent to the law’s
performative character. The current difficulty with ‘code-ification’ may be described as
forcing square pegs in round holes. It is an attempt to draft computational legal expressions
by extracting the underlying logic of legal processes. This, in turn, flattens and compresses
the richness of law. Moreover, it assumes that legal norms may be ‘transferred’ from one
container to another. In contrast, accepting that natural language has already impacted the
construction of legal concepts, only one criterion of evaluation is relevant. That is, code should
only be assessed for its ability to inherit natural language’s traits. The most fundamental
being indeterminacy. Should the indeterminacy of the law reflect the indeterminacy of the
language, then code should simply be tested for its inherent incompleteness. In that regard,
code can indeed be indeterminate. Code can be ambiguous. Code can be partial.
Nevertheless, the inquiry becomes: what is the benefit of drafting in code as opposed to
natural language? Why should code even be considered legal text? Prior literature has shown
that arguments for legal code-ification typically fall in line with simplification and efficiency.
In fact, the argument should be one of clarity and accessibility. David Mellinkoff was perhaps
the first to conflate clarity with simplification. This has dangerously implied that legal
complexity should be reduced. Evidently, attempts at simplification have accomplished what
has been akin to reckless extraction and bad translations (i.e., transliterating or decoding). This hurdle is experienced most acutely in current discussions around a domain-specific language for law.
On the other hand, it has been demonstrated that, overriding paradigmatic shifts, or
reconceptualizing entirely away from natural language, runs into problems of
overcomplexitybbb. How then could natural language maintain its signatureccc in code?
Interestingly, Critical Code Studies has provided a fascinating illustration of how code
can inherit and retain its natural language ancestry. Consider the command PRINT. Marino
describes the various evolutions of the term. Historically, printing began as the notion of
putting words on paper (or, parchment). Importantly, print has come to signify a “system of
inscriptionddd.” The word print itself “bears no automatic relationship to what [it] stands
foreee.” It is arbitrary. In programming languages, PRINT is understood as the display of data
on the screen. Just as with most linguistic meaning, programming commands and variables
may be represented using any select combination of characters. PRINT could just as easily
be TNIRP. The intentional choice of PRINT represents a continuity in humanistic tradition,
history, and sociopolitical origins.
Likewise, inherent to the legal language is a preservation of tradition. Though David
Mellinkoff may regard it as “weasel wordsfff,” the persistent use of archaisms (i.e., Middle
and Old English, Latin and French) reflects the same form of continuity. Therefore, a legal
codex(t) is conceivable to the extent that it inherits its natural language roots and embodies
existing complexity. Moreover, there must be mechanisms in place for the legal language to
refer between the analog (natural language) and the digital (code). The legal language must
continue to be seated within a network of its history, relationships, and evolving contexts. In
this way, the integrity of legal norms is maintained, and human-centricity is upheld. It
follows that an associative code for legal writing is premised on establishing first
computational legal understanding – in effect, an infrastructure for clarifying legal
knowledge.
Importantly, there is a significant difference between translation and drafting. To
imagine a legal codex(t) is not to frame it as a question of translation. Instead, it is a
reflection of whether code has the capacity to draft going forward. Rather than rewriting
existing legal texts in code, the exercise should be one of reference. It requires applying
knowledge attained from computational legal understanding to develop an associative code
for legal writing. It is the formation of a computational legal network.
Undoubtedly, the ideas put forth require further examination. For now, it may be
important simply to acknowledge that pragmatics has been, and continues to be, a missing
piece to the LegalTech puzzle. Current uses of formal languages and computational
technology have made strides in ‘clarifying’ the law through simplification. This method,
147
however, treats complexity as a defect and is revealed in the persistent focus on syntactic and
semantic techniques in legal knowledge representation. Importantly, this is not to suggest that
logic and structure are not part of the equation, but that they are not the entire solution. Instead, the
richness of the law should be preserved through methods of representing pragmatics
computationally. This extends into perceptions of code. That is, code should be critically
analyzed for its interpretative potential beyond function. In doing so, the benefits of quantitative methods may be bridged with normativity; in effect, reintroducing the space for argument and indeterminacy.
Niels Ole Finnemann
Note on the complexities of simple things such as a timeline
On the notions of text, e-text, and hypertext, and the origins of machine translationggg.
Keywords/Tags: Notions of text, e-text and hypertext; history of text timeline; pioneering
machine translation?
The composition of a timeline depends on purpose, perspective, and scale – and on the very
understanding of the word, the phenomenon referred to, and whether the focus is the idea or
concept, an instance of an idea or a phenomenon, a process, or an event and so forth.
The main function of timelines is to provide an overview of a long history; it is a kind
of a mnemotechnic device or a particular kind of Knowledge Organization System (KOS)hhh.
The entries in the timeline should be brief and indisputable. Therefore, timelines often
identify the first occurrences rather than the most widespread or most qualified instances
leaving the fuller and more complex, and possibly disputable story out. But even first
occurrences are often difficult to establish.
The first occurrence is most often only the first finding of an instance. Older instances
may be found, and competing definitions may develop either within a field or in different fields.
This is further complicated since the phenomena, their names, and their meanings may
change over time. Former meanings may become redundant, or they must accommodate and
coexist with new meanings. The time and place of the composition of the timeline are to be
considered in interpreting the things listed.
The following note will discuss these issues as they occur in the development of the
notions of text, e-text and hypertext, and the origin of machine translation.
Notions of Text
The word ‘text’ is simple, but the phenomenon referred to has a long and complex history.
the Middle Ages it was used for the main body of a manuscript as distinct from additional
notes and illustrationsiii. Later, it was applied to printed texts rather than written manuscripts.
Over the years different definitions occur in linguistics, in literary studies, critical
bibliographic theory concerned with scholarly editions, among historians, and - after the
invention of e-text - in a variety of fields in computer- and communication sciencesjjj.
In 20th century critical bibliographic theory, the text was understood as an expression
of the intention of the author [73]. In linguistics and literary theory, the focus moved from the
author intention to inner structures of autonomous works based on ‘close reading’ [74].
Linguistic theory maintains the use of text for linguistic expressions, while in literary and
semiotic theories the notion is expanded to include images [75], all sorts of multimedia
expressions [76], dissolved in intertextuality [77], and/or in reader interpretations [78]; [79].
The word ‘text’ furthermore overlaps with wordings such as script, writing, document, linguistic
expression, and other written, externalized expressions. Spoken language is usually excluded.
A History of Text timeline thus depends on both explicit and implicit and ever-changing
ideas of ‘text’ and related wordings. The notion is also influenced by historical changes in the
material dimensions concerning production (carved, hand-written, typed, printed, electronic
and so forth), storage and reproduction (stone, wood, papyrus, parchment, paper, rolls, books
etc.), dissemination, and reception.
Changes in physical dissemination of texts – for instance due to new mechanical and
electrical techniques – are accompanied by the development of new genres, such as the printed daily newspaper made possible by the telegraph and rotary press in the mid-19th century. If we list the first modern newspaper, one might suggest that ‘forerunners’ such as weeklies and non-periodic news media, whether handwritten or printed, should also be listed. But what about texts in
other media and materials, such as runestones and graffiti on city walls? Which aspects
of this broad - and far too short - story should be included in a ‘History of Text Timeline’?
Even if it may be possible to list the major material innovations, genres become really
intriguing. The notion of genre is difficult to define, but useful for our orientation in the huge
universe of texts. Novels, short stories, poetry, essays, historical documents and diplomas,
news, drama, audio, video, and hypertext genres with sub-genres in all categories. The issue
of genre is complicated for at least three reasons. To identify a genre always takes more than one instance, usually a series of texts sharing a set of – possibly also changing –
characteristics. It is a relational term. The second reason is that the same text often can be
included in a hierarchy of genres and sub-genres as well as in a set of network relations to
other texts (intertextuality). We may for some purposes distinguish between the media as
material conveyors of content (shared physical characteristics of a set of texts) and genres
which can be identified only by looking into the content (shared meaning characteristics and
style of a set of texts). Third, a recent shift in both functionalist and cultural historical genre
theory away from focusing on the similarities “between documents” to examining social action
seen as “typified rhetorical actions based in recurrent situations” further complicates the
issues of recording genre history within a history of text timeline [80]; [81]; [82].
Opening the timeline to genres also opens an endless number of issues, which are perhaps more relevant within the humanities than in the sciences, at least until the sciences enter the fields of the humanities and recognize that where you have text, you have ambiguities and troubles.
Text and hypertext in the binary alphabet
Today, ‘text’ has also become a verb, to text a message, which marks the arrival of a new medium of text. Texting refers only to a particular e-text format: written and possibly real-time interactive network communication, rather than longer documents to be read at a later –
possibly unknown – time in the futurekkk. The special form of texting, however, reveals a
more far-reaching transition away from the array of static (written, typed, or printed) texts to
e-texts in which the time dimension is always incorporated as an editable option.
The potentials of this emerge gradually in many different areas. Since there is no
general history of digital materials yet, it’s not possible to give a full overview. It is possible
though to depict a few major steps since Roberto Busa’s pioneering project on digitizing
Thomas Aquinas’ works (Index Thomisticus) in 1949 [83]. A print version (sic) appeared in
the 1970’s, and a digital version in the 1990’s [84]lll.
Efforts to develop a standard for e-texts appeared only in the 1960’s. In 1969 the IBM employee Charles F. Goldfarb coined the notion ‘Mark Up Language’ and created GML, intended to be a general markup language [85]. The idea of establishing standard formats for e-
text was carried further also within the critical scholarly edition community and the now
established Humanities Computing community, resulting in new mark-up languages such as
SGML (1980) and TEI (1990)mmm. There is a gradual change from computational theory to
new sorts of text theory as foundation for these efforts culminating in the development of
OHCO, a general model of text as ‘Ordered Hierarchy of Content Objects’ in the 1990’s
[86]nnn.
Thus, there is a development from the interpretation of text as expression of the
intentions of the author over formal and structural text theory to a modular and hierarchically
ordered theory initiated by the efforts to create digitized versions of static texts whether
written, typed or printed.
Despite the differences all these ideas aimed to provide a digital edition as a copy of the
original. The text would be stored as a file, and could be copied, processed, retrieved, edited,
and searched in a mainframe computer – considered either as a logical machine which would
facilitate the development of more consistent and rational ‘scientific’ text analysis, or as a
toolbox with a range of retrieval features to deal with the text. The sequences of bits in which
the text, as well as the codes and functionalities, was embedded were not considered part of
the content.
In the 1980’s the mainframes were supplemented with distributed terminals allowing
access across distance and - even more far-reaching - with small but high-capacity, stand-alone desktop computers and graphical user interfaces. The door for utilizing the binary sequences, including codes and instructions, for semiotic purposes as part of the work was
opened. The clear distinction between tool and text became an editable variable. The new
perspectives relate most fundamentally to a change in the utilizations and conceptualizations
of hypertext.
The notion hypertext was coined in 1965 by Ted Holm Nelson, who first defined
hypertext as ‘non-sequentially read text, as links were inserted in a primary text as
references.’ [87]; [88]. Later Nelson gave a dynamic version defining hypertext as ‘branching
and responding text. Best read at a computer screen’ [89].
In between, the French author Gérard Genette had introduced the word hypertext for a different type of relation between texts, namely for a later text built upon an earlier text (denoted the hypotext) that serves as its template [90]. According to Genette, James Joyce's Ulysses was a hypertext because it used Homer's Odyssey as its hypotext. However, Nelson and Genette worked in different academic cultures, which were probably not aware of each other; this leaves the question of how such cultural limitations should be manifested in a timeline of text.
Ted Nelson's very influential definitions focus on the reader's perspective and are in accordance with the idea that digital features are external to the content of the text. These notions are still useful in some cases, but the later development is mainly based on the inclusion of hypertext features as part of the content. So the notion of hypertext is gradually widened. Alan Kay and Adele Goldberg [91] focused on the flexibility and the capacity to include and manipulate all sorts of symbolic expressions in the hypertext. Michael Joyce introduced the use of links as a narrative component within the story Afternoon, a story [92], and, together with Jay D. Bolter, produced the software Storyspace for writing hypertext fiction; Bolter also gave an elaborate theoretical analysis of the reconceptualization of the computer in his Writing Space: The Computer, Hypertext, and the History of Writing [93]. Bolter described the computer as a fourth type of writing technology in human history and hypertext as the fundamental semiotic operating mechanism of digital computers, since it was rooted in the editable relation 'between the address of a location in the storage and the value stored at that address', allowing both the address and its content to be edited via the interface. This again provides the computer with an invisible but editable space behind the visible representation of the text. Thus, in Bolter's analysis hypertext replaced the program as the basic operating principle of computers.
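Bolter's point can be sketched in a few lines of Python. The sketch below is purely illustrative (the storage and links dictionaries, the node names, and the follow function are assumptions made for the example, not drawn from Bolter's text), but it shows the editable relation between an address and the value stored at it, and how editing either side changes what the reader sees when a link is followed.

# A minimal, hypothetical sketch: behind the visible text lies an editable
# relation between addresses and the values stored at them. A 'link' is just
# a stored pair of addresses; editing either the address or the value changes
# what is presented when the link is followed.
storage = {
    "node:intro": "Hypertext is branching and responding text.",
    "node:note1": "Best read at a computer screen.",
}

# A link is itself content that refers to addresses in the same storage.
links = {"intro -> note": ("node:intro", "node:note1")}


def follow(link_name: str) -> str:
    """Resolve a link by looking up the value stored at its target address."""
    _source, target = links[link_name]
    return storage[target]


print(follow("intro -> note"))        # -> "Best read at a computer screen."

# Both sides of the relation are editable via the interface:
storage["node:note1"] = "The target text has been rewritten."   # edit the value
links["intro -> note"] = ("node:intro", "node:intro")           # edit the address
print(follow("intro -> note"))        # now resolves to the intro text itself

The design point, on this reading of Bolter, is that the same indirection that makes a program possible also makes the text permanently revisable behind its visible surface.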
George P. Landow focused on the approximation of writer and reader modes, denoted a 'wreader' [94], and Jerome McGann added to the new interpretation of hypertext with his notion of the 'Radiant Text' [95], aiming to include several interpretations of a work in the same critical scholarly edition. A related reinterpretation of hypertext developed within computer science with the rise of Human-Computer Interaction (HCI) studies, utilizing hypertext to give users access to the system architecture and programs via the interface. Later, N. Katherine Hayles [96] summarized the range of signifying components of e-text utilized in this second-generation hypertext as 'including sound, animation, motion, video, kinaesthetic involvement, and software functionality, among others'ooo. Most of these features are based on the inherent time dimension fundamental to digital materials, which, due to this and contrary to printed materials, remain open to change, as links can always be inserted deliberately with instructions to change any sort of content.
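As a purely hypothetical illustration of this last point, the following Python sketch shows a link that, when followed, also executes an instruction which rewrites part of the visible content with a real-time value; all names and the mechanism are assumptions made for the example, not a description of any actual system.

# A hypothetical sketch of a 'second-generation' feature: following a link
# not only resolves a target but also runs an instruction that changes
# another part of the content, here with a real-time timestamp.
import datetime

content = {
    "page": "This page was last regenerated at: (never)",
    "target": "You have followed the link.",
}


def follow_and_update(target_key: str) -> str:
    """Follow a link and run the instruction attached to it."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    content["page"] = f"This page was last regenerated at: {stamp}"
    return content[target_key]


print(follow_and_update("target"))
print(content["page"])   # the visible text has changed as a side effect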
This again leads to ambiguities about what is meant by 'e-text'. Should it refer only to digital materials which are replicas of printed/static materials, possibly including born-digital materials if they are intended and coded to be closed? Or should it be extended to include all sorts of digital materials due to their manifestation in the very same binary alphabet, independent of whether these sequences function as text, images, sounds, processes, programs, instructions, coded links, and so forth, and independent of whether they are intended to stay closed or to be continuously edited, possibly partly based on real-time updating?
Both meanings make sense, and the former can be included as a particular case within the latter, though they have quite different implications. In the first sense the notion includes only what can be made visible as a reproduction or simulation of a text produced in a static material form. The digital representation is external to the text "itself". Hypertext is necessary to access, navigate, search, and read the text, but it is not part of the content. In the second sense the notion may include all sorts of manifestations in the binary alphabet, independent of visual appearance, in which the Latin alphabet, other alphabets, musical scores, speech, and images, as well as the layout, a wide range of processes, and not least the scripts, instructions, and programs are manifested. If manifested, they can also be used as semiotic elements in the composition of a born-digital text, including the invisible parts of the e-text. Hypertext is always necessary for dealing with an e-text, but now it may also be part of its content.
There is no doubt that Ted Nelson's notion of hypertext is still the first known articulation of the term [97] [98]. However, the feature of 'mechanical linking' was already there, included for instance in Paul Otlet's Mundaneum [99] and in Vannevar Bush's idea of a
‘Memex’ [100]. The basic functionality of hypertext includes an anchor point from which a