From Killing Trees to Executing Bits:
A Survey of Computer-Enabled
Reading Enhancements for Evolving Literacy
Michael Cohen
Spatial Media Group; University of Aizu
Aizu-Wakamatsu, Fukushima 965-8580; Japan
e·mail: mcohen@u-aizu.ac.jp
Abstract—The technology-enabled future of reading is broadly surveyed. Through innovations in digital typography and electronic publishing, computers enable new styles of reading. Audio and music, animation, video, multimedia, hypermedia, and live documents extend traditional literacy. Several classes of systems and instances thereof — including commercial projects, research prototypes, and the author's own systems — are considered, spanning scales from the granularity of a subcharacter up through the scope of the internet. Enhanced capabilities include character articulation, word-sized graphs and images, textual animation, spatial browsing, stereographic display, complementary multimedia, dynamic interaction, and duplex reading. A document can be considered a database, through which almost arbitrary slices can be made, reprojecting contents according to reader initiative.
I. INTRODUCTION
Note: Because this document exploits advanced PDF features, it is strongly recommended to download this file and open it in Adobe Reader¹ to appreciate its embedded effects, as less powerful applications (such as OS X's Preview or web browser plug-ins) do not properly render all the multimedia elements.
As outlined by Table I, the earliest computer-human interfaces, the 1st generation, were basically 1D: "glass teletypes" with monostreams of text. The 2nd generation was 2D, introducing the mouse (or trackball or stylus) and GUIs with graphical conventions, such as a cursor (currency indicator), selection, and drag-and-drop. We are now into the 3rd generation of interfaces, informed by a 3D virtual reality (VR) paradigm, which also allows reconsideration of traditionally flat media. Disrupted by computers, the art of reading and the nature of literacy are evolving. Ted Nelson's Xanadu vision [Nelson, 1974: 1], which anticipated the web, reinforced the idea of nonlinear reading, allowing readers to choose their own paths through a document or collection of documents. As illustrated by Figure 1, hypermedia — including, as a particular instance, e·books — reifies information as a space that can be visited, inhabited, navigated, and rearranged.
II. ARTICULATED TEXT AND EXPRESSIVE GLYPHS
Writers have long played with making pictures from text, including ASCII art and, more recently, emoji and emoticons
1http://get.adobe.com/reader
[Figure 1 diagram: data (presented linearly & textually) sits at the upper left; adding non-sequential access (links & cross-references: footnotes, bibliographic citations, marginalia, indices, hot-spots, etc.) yields hypertext; adding audio (voice, sound, & music), graphics (drawings, photos, pictures & images), and animation (motion, gestures, video & movies) yields multimedia; combining the two, via spatial data organization (mixed & augmented reality, cyberspace, virtual reality, metaverse), yields (interactive) hypermedia.]
Figure 1. The simplest kind of data (in the upper left of the diagram) is
1D, supporting only linear reading (from beginning to end without forking,
etc.). Adding non-linear elements (moving rightward in the diagram) creates
hypertext. Adding non-textual information like pictures (moving downward)
makes multimedia. Crossing hypertext with multimedia spawns hypermedia,
information as space (bottom right).
such as :-). Digital typography and electronic publishing enable the inverse, making word-like tokens out of graphical objects. For instance, Edward Tufte has promoted the idea of "sparklines" [Tufte, 2006: 2, p. 45–63]: inline, word-sized micrographics, such as graphs like this (generated by filtering a quantized function with a low-pass filter, resembling a rounded staircase with sloped risers, and representing "scrubbing" pages through a document with continuous transitions between discrete detents).
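Such word-sized graphics need not be hand-drawn images; in a LaTeX workflow like this document's, an inline graph can be drawn programmatically. The following is a minimal sketch only: the coordinates are invented and TikZ is assumed to be available; it is not the code that generated the sparkline above.

    % inline, word-sized graph ("sparkline") drawn with TikZ,
    % scaled to roughly x-height so it sits in the running text like a word
    \documentclass{article}
    \usepackage{tikz}
    \begin{document}
    Scrubbing pages resembles a rounded staircase
    \tikz[baseline=-0.5ex, x=1.2ex, y=0.8ex]
      \draw plot[smooth] coordinates {(0,0) (1,0) (1.5,1) (2.5,1) (3,2) (4,2) (4.5,3) (5.5,3)};
    with sloped risers between discrete detents.
    \end{document}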
Zebrackets [Cohen, 1992: 3] [Cohen, 1994: 4], a description of whose recently refreshed implementation is forthcoming, features small-scale horizontal striations superimposed on parenthetical delimiters. The basic idea is to allow glyphs (characters) to carry extended information, such as functional rôle, logical position, and nesting level. Zebrackets extends parentheses and square brackets according to their position in an expression. Implemented as filters that re-present textual information graphically, pairwise delimiters are "scored," cutting aligned typographical grooves to associate balanced mates and ease visual parsing.
Generation, Dimension | Mode | Input, Control | Output, Display
1st, 1D | textual: command language (line editor) | keyboard | "glass teletype"; monaural sound
2nd, 2D | planar: direct manipulation | mouse (screen editor); trackball, joystick; stylus, touchpad; steering wheel controller | graphical displays; stereo panning & fading; MIDI
3rd, 3D | Aural: natural language communication | speech recognition (ASR); head-tracking | speech synthesis (TTS: text-to-speech); spatial sound (5.1, VBAP, WFS, Ambisonics, etc.)
3rd, 3D | Gestural and Haptic, Tactile, Kinesthetic, and Proprioceptive | 3D joystick, spaceball; multitouch panel; bird, bat, wand, game pad; gesture recognition; handwriting recognition; steering wheel | tactile arrays; Braille devices; force-display or -feedback; motion platform
3rd, 3D | Olfactory | odor detection | smell and fragrance emitters
3rd, 3D | Gustatory | flavor detection | ?
3rd, 3D | Visual | head- and eye-tracking | stereoscopic systems: head-worn or -mounted displays; holograms; volumetric displays
Table I. GENERATIONS AND DIMENSIONS OF I/O MODALITIES

Zebrackets articulates parentheses and brackets to enrich
expressiveness. For example, here is the zebracketed chemical
formula for one of the components of vitamin B12:
carbanide;cobalt; 5- 5,6-dimethylbenzimidazol-1-
yl -4-hydroxy-2- hydroxymethyl tetrahydrofuran-
3-yl 1-methyl-2- 3- 2,13,18-tris 2-
amino-2-oxo-ethyl -7,12,17-tris 3-amino-
3-oxo-propyl -3,5,8,8,13,15,18,19-
octamethyl-2,7,12,17-tetrahydrocorrin-3-
yl propanoylamino ethyl hydrogen phosphate
(Zoom in to more clearly see the striations on the parenthetical
delimiters, which can be difficult to discern at the default font
size.)
“Pretty printers” such as Microsoft Word’s Equation
Builder have some related features: successive, pairwise colored parentheses for static display (as in Excel's formula editor), or a flashing matching delimiter for dynamic display (as in Emacs²). Modern editors such as BBEdit³ and TeXShop⁴ use various kinds of dynamic display: text between matching parentheses can be highlighted, matching parentheses blinked, and unmatched parentheses detected and signaled (via, for instance, a flashed window background or an audio beep). Zebrackets allows static, monochromatic association, and therefore normal printability.
III. 3D TEXT
Traditional text display is basically graphical and 2D, but such conventions are challenged by 3D VR-style interfaces, which encourage three-dimensional text [FL@33 et al., 2011: 5] [Heller and Fili, 2013: 6]. Even though such effects are not appropriate for extended passages, as display fonts they can be used for titles and isolated phrases.
Stereographic 3D (S3D) movies necessarily use depth-modulated text for titles and credits, even if such placement is often at the neutral screen surface. For nontheatrical
2http://www.emacswiki.org/emacs/ShowParenMode
3http://www.barebones.com/products/bbedit
4http://pages.uoregon.edu/koch/texshop
5http://www.wordle.net
environments such as books, alternative stereographic techniques can be used, such as anaglyphic or lenticular displays. Coherent even without special glasses, Chromadepth pictures, as in Figure 2, can be viewed with chromastereoptic [Steenblik, 1993: 7] eyewear⁶ to appreciate stereographic effects, using color-coded binocular parallax to modulate apparent depth.
IV. E·BOOKS
In this millennium, a richly illustrated book design style has become popularized (by publishers such as Dorling Kindersley), preparing readers for supertextual experience. E·book apps and readers — both various native hardware platforms (such as the Amazon Kindle,⁷ Barnes & Noble Nook,⁸ Rakuten Kobo,⁹ and Sony Reader¹⁰) and also software emulators (such as the eponymous apps for tablets, phablets, and smartphones) — offer enhanced reading experiences, including:
• adjustable font size, style, & family;
• e·ink for reflected light, or variable backlit display brightness (including sensitivity to ambient light);
• panning and zooming, including multitouch gestures such as tap or pinch–unpinch (that don't finally rotate the image);
• scrubbing through pages, sometimes with skeuomorphically animated transitions;
• on-demand, multilingual dictionaries, and marginalia;
• hypertextual and hypermedia contents, cross-referencing, & indexing, including reader-initiated electronic highlighting, bookmarks, & gloss.
Tablet-based readers, leveraging integrated microelectromechanical systems (MEMS) sensors — including accelerometers, compasses, and gyroscopes — can automatically switch
6http://www.chromatek.com
7http://www.amazon.com/Kindle-Fire-Amazon- tablet/dp/B0083Q04IQ
8http://www.barnesandnoble.com/u/NOOK-Book-eBook- store/379003094
9http://www.kobo.com/ereaders
10http://www.sony.jp/reader, http://ebookstore.sony.jp
(a) "3DDDDDDD," by Haruki Sato. (b) Wordle⁵ word cloud, composed of words from this document, using a 3-color palette and animated palindromically in a "cinemagraph" style.
Figure 2. Chromadepth pictures, most satisfyingly appreciated with chromastereoptic eyewear⁶
document display orientation between vertical portrait and
horizontal landscape modes.
Contemporary e·book and PDF readers (such as iBooks¹¹ and PDF Connoisseur¹²) have TTS (text-to-speech) capabilities. Modern speech synthesizers have matured beyond the "drunken, Scandinavian robot" accents of the recent past, and sound almost natural (albeit with sometimes strange inflections, such as "upspeak," rising intonation at the end of declarative sentences), including breathing effects. Click here for a TTS rendering of this document's abstract. (Right-click [or Control+Click] > Disable Content, or change page, to turn it off.)
Synthesized speech speed can be adjusted without "chipmunk-voiced" pitch-shifting artifacts. Along with "talking books," recorded with voice actors, such audible words can be aligned with traditional text. "Immersion Reading" refers to hearing a book read in synchrony with its being read visually, including karaoke-style realtime text highlighting. Audible¹³ (the audiobook subsidiary of Amazon) promotes a "Whispersync" feature for integrated alternation of audio and text, allowing readers to enjoy polymorphic books audibly (when exercising or whatever) as well as visually.
Specialized e·books — such as Touchpress'¹⁴ "Disney Animated"¹⁵ & "The Orchestra"¹⁶ and Harper-Collins' Brian Cox's "Wonders of the Universe" [Cox, 2014b: 8] — leverage the power of dynamic presentation to extend literacy: multimedia applications that feature richly animated panning and multitouch zooming to contextualize passages. On networked devices, they can exploit integration with distributed resources, including search engines, web pages (often through CGI, the common gateway interface, for utilities such as Google and Wikipedia), repositories such as YouTube, cartographic and georeferenced data, and service by curated, "computable knowledge" databases such as Wolfram|Alpha.¹⁷
11https://www.apple.com/ibooks
12http://www.kdanmobile.com/pdf-connoisseur, https://itunes.apple.com/
app/id486334370
13http://www.audible.com
14http://www.touchpress.com
15http://disneyanimated.touchpress.com, https://itunes.apple.com/app/
id632312737
16http://orchestra.touchpress.com, https://itunes.apple.com/app/
id560078788
17http://www.wolframalpha.com
V. MULTIMEDIA: AUDIO, VIDEO, AND ANIMATION
Click here for the ambient sound of howling cavern wind accompanying the animation above.
Figure 3. Madefire™ motion comic¹⁹: The coarse stop-action animation shown here doesn't do justice to the multilayered sliding, zooming, and sound effects of the actual contents. (Used with permission of Madefire and IDW Publishing.)
Time is the core of multimedia. Flipbooks are classic paper-based stop-action displays, but electronic documents don't need thumb-riffling to achieve dynamic effects, since e·books can naturally feature animation. For example, cinemagraphs are mostly still pictures enlivened by video animation of selected sections (as in Fig. 2b), made with image editing software¹⁸ to composite motion frames into an otherwise static context. Often published in an animated GIF format, they create the illusion of a video.
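Relatedly, a flipbook-style sequence can be embedded directly in a PDF from LaTeX with the animate package (the same package referenced in footnote 35 of this document). A minimal sketch, assuming a hypothetical set of pre-rendered frames frame-0.png through frame-11.png:

    \documentclass{article}
    \usepackage{animate}  % CTAN: macros/latex/contrib/animate
    \begin{document}
    % play 12 frames at 12 frames per second, looping and starting automatically;
    % signature: \animategraphics[<options>]{<fps>}{<file basename>}{<first>}{<last>}
    \animategraphics[autoplay,loop,width=0.5\linewidth]{12}{frame-}{0}{11}
    \end{document}

As with the other rich-media effects discussed here, the result plays in Adobe Reader but typically degrades to a single static frame in less capable viewers.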
An extreme example of such dynamic trends is Spritz,¹⁹ which animates the basic reading process itself. Featuring rapid serial visual presentation to enhance reading speed, the software centers each word at its "optimal recognition point," slightly left of the word's center, claimed to be the point at which each block of letters is deciphered. (Of course, such presentation precludes phrase-reading and back-skipping.)
Cross-media content adaptations include film comics, which are recastings of animated movies, taking still images from a film and adding word balloons to retell the story. The inverse interpolation is also possible: animated or motion comics start with a graphic novel and augment the originally static medium with multimedia, including animated word balloons, automatic cueing through sequential frames, panning, and even audio. Such books are typically based on manga or anime, but the catalog of the Madefire™ publisher for its proprietary readers²⁰ includes a serialized adaptation of the classic Star Trek "City on the Edge of Forever" TV episode [Ellison, 2014: 9], visual and auditory samples of a page of which can be seen in Figure 3.
Leveraging familiarity with traditional graphic novel con-
ventions [McCloud, 2000: 10], animated comics modernize
such idioms, including dynamic arrangement of irregularly
shaped panels, speech and thought balloons, multilayer com-
posites, and in-scene motion. Individual panels can be slid and
automatically dilated into fuller views — rendered dynamically
for sequential reading — while sound effects, voice acting,
background music, and animation extend original artwork.
VI. SPATIAL BROWSING
Panoramic or turnoramic imagery and 3D models, including image-based rendering of photospheres, can be displayed and viewed on computers, including laptops and tablets. Emerging virtual reality viewers such as the Sony Project Morpheus and the Facebook-acquired Oculus Rift²¹ HMDs (head-mounted displays, or headsets) encourage realtime panoramic browsing of spaces and HUD (head-up display) reading, along with new capture techniques — such as Point Grey spherical vision cameras,²² the Panono Panoramic Ball Camera,²³ Ricoh's Theta camera,²⁴ and Google's Project Tango²⁵ — new stitching
18 Cinemagram: http://cinemagr.am, https://itunes.apple.com/app/
id487225881, https://play.google.com/store/apps/details?id=com.cinemagram.
main, Flixel Cinemagraph: http://flixel.com, https://itunes.apple.com/app/
cinemagraph+/id879724183, https://itunes.apple.com/app/cinemagraph-pro/
id777874532, https://itunes.apple.com/app/flixel-cinemagraph-pro/
id642139481, Fotodanz: https://play.google.com/store/apps/details?id=com.
application.fotodanz, iCinegraph: https://itunes.apple.com/app/id482422595,
Kinotopic: https://itunes.apple.com/app/id493214555, Pictoreo:
http://www.9apps.com/android-apps/Pictoreo
19http://www.spritzinc.com
20http://www.madefire.com/madefire-app, https://itunes.apple.com/app/
id533379666, https://play.google.com/store/apps/details?id=com.madefire.
reader
21http://www.oculus.com
22http://www.ptgrey.com/360-degree-spherical-camera- systems
23http://www.panono.com/ballcamera
24http://theta360.com
25https://www.google.com/atap/projecttango
and modeling applications — such as 360 Panorama,²⁶ Photosynth,²⁷ Kolor's Autopano Giga,²⁸ and Autodesk's Project Memento²⁹ — and virtual tour applications, such as EasyPano.³⁰ Commercial exemplars of such spatial browsing include "Wonders of Life" [Cox, 2014a: 11], which features a breathtakingly swooping interface to navigate geolocated sections; "Solar System,"³¹ which features an orrery with a heliocentric model that can be freely explored; and Theodore Gray's "The Elements"³² and "Molecules,"³³ which feature many turnos (turnoramas, a.k.a. object movies) with which users can inspect samples.
VII. DYNAMIC INTERACTION
Figure 4. This 3D model (procedurally modeled in Mathematica, exported as an STL file, and converted to U3D by MeshLab³⁴) can be actively manipulated in Adobe Reader. The cube features orthogonal profiles of mascot, celebrity, and landmark figures from Aizu-Wakamatsu, Japan: the Akabeko red ox, Hideyo Noguchi, & Tsurugajō Castle. (Assert Enable 3D content in Adobe Reader to inspect. Right-click [or Control+Click] on the figure to Generate Default View, then tumble the model freely.)
A simple example of shallow interaction is tooltips (as in web browsers, e·readers, and other apps) — pop-up explanations revealed when hovering over a control or hotspot — which are a useful help or "peeking" utility. More deeply,
26http://occipital.com/360/app
27http://photosynth.net, https://itunes.apple.com/app/id430065256
28http://www.kolor.com/image-stitching-software-autopano-giga.html
29https://memento.autodesk.com
30http://www.easypano.com
31http://solarsystem.touchpress.com, https://itunes.apple.com/app/
id406795422
32http://apps.theodoregray.com/home/the-elements-a-visual-exploration,
https://itunes.apple.com/app/id364147847
33http://apps.theodoregray.com/home/molecules, https://itunes.apple.com/
app/id923383841
34http://sourceforge.net/projects/meshlab
some modern document formats allow embedding of multimedia (audio, animation,³⁵ video, etc.) and 3D models. For instance, such dynamic media can be embedded in PDF files [Graf, 2012: 12] (using, as this document does, tools such as Alexander Grahn's LaTeX media9³⁶ package), browsable with a suitable viewer such as Adobe Reader³⁷ (with Acrobat-9/X compatibility, including RichMedia Annotation, an extension to the PDF specification), as in Figure 4 above.
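For concreteness, here is a minimal sketch of such an embedding; the model file name is hypothetical, the option names follow the media9 documentation, and this is not the exact code behind Figure 4.

    \documentclass{article}
    \usepackage{media9}  % Alexander Grahn's package for RichMedia annotations
    \begin{document}
    % embed a U3D model as an interactive 3D annotation (Adobe Reader required)
    \includemedia[
      width=0.6\linewidth, height=0.6\linewidth,
      activate=pageopen,        % activate the annotation when the page opens
      3Dtoolbar, 3Dmenu,        % expose Reader's 3D toolbar and context menu
      3Dlights=CAD              % a neutral lighting scheme
    ]{}{model.u3d}              % first argument (poster text) left empty here
    \end{document}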
Similarly, Mathematica [Wolfram Research, Inc., 2014: 13] notebooks are "live" — suitable for traditional seminar-style presentations but also dynamically executable. Such notebooks can be read normally, but contents can also be animated, parameterized, edited, and reevaluated or reexecuted. (For business reasons, Wolfram Research sells Mathematica as a Computable Document Format content creation tool, but provides the "CDF Player"³⁸ freely.)
VIII. DUPLEX READING: CROWDS AND CLOUDS
One interpretation of "prosumer" is as a portmanteau combination of producer and consumer, as with bottom-up "crowd sourcing." In Unix operating system idiom, file permissions are organized as {read, write, execute} × {user, group, other}. Since such permission is articulated as groups of three bits, octal representation is often used to express such access, as in, for example, "chmod 0755 fn" (which makes a program readable and executable by all, but writable only by its owner). The original web, based on a broadcast model, can be thought of as 0644, read-only by non-owners. Crowd-sourced knowledge and social network services (blogs, wikis, Facebook, LinkedIn, Twitter, etc.) imply write permission and feature content creation and remixability: 0666. Contemporary capability is like 0777, since web- and cloud-based apps imply executability as well as contributability [McFedries, 2006: 14].
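Decoding the octal digits into permission bits makes the analogy explicit (standard Unix semantics, not specific to any system discussed here):
\[
0755_8 = 111\,101\,101_2 \equiv \texttt{rwxr-xr-x}, \qquad
0644 \equiv \texttt{rw-r--r--}, \qquad
0666 \equiv \texttt{rw-rw-rw-}, \qquad
0777 \equiv \texttt{rwxrwxrwx}.
\]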
With network-distributed e·books, there is no difference between a "version-up" refresh of software and a revised edition of a book: incremental, "soft" releases can adjust and augment material. Besides such a "subscription-based" multicast model, organized feedback from subscribers back to the publisher or community can be accommodated. Reading becomes almost social, as the normal one-way process is extended with crowd-sourcing media, such as the ongoing public compilation of books' popular highlights.
IX. DOCUMENTS AS DATABASES: SITES AND CITES
Schemas are structured representations of domain-specific information. For instance, IEEE 1599 [Baggi and Haus, 2009: 15] is a representation standard for hyperreferential musical information: scores, audio tracks, images, videos, etc. Any document or collection of documents, though — not just XML-encoded semantic ontologies — can be considered a multidimensional database, through which slices can be taken or projections made.
For example, this manuscript uses the “Multibibliography”
35http://www.ctan.org/tex-archive/macros/latex/contrib/animate
36http://www.tug.org/texlive/Contents/live/texmf-dist/doc/latex/media9
37http://get.adobe.com/reader
38http://www.wolfram.com/cdf-player
39http://ctan.org/pkg/hyperref
package [Cohen et al., 2013: 16], which generates alternative
references sections: alphabetic, sequenced, and also
chronological orderings. Its extended inline citation format, as
seen throughout this document and as depicted by Figure 5,
integrates various call-out styles, and is richly hyperlinked
for electronic browsing, since it is articulated to allow click-
through to particular subbibliographies. The subbibliographies
themselves are fully hypertextually crossed through their
labels, linking among them.
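In LaTeX terms, the wiring just described amounts to little more than loading the right packages; the sketch below shows only the preamble side (the citation key is hypothetical, and the commands that actually print the three reference listings are those documented by the multibibliography package itself, not reproduced here):

    \documentclass{article}
    \usepackage{multibibliography}       % alphabetic, sequenced, & chronological listings
    \usepackage[backref=page]{hyperref}  % back-links from reference entries to citing pages
    \begin{document}
    Xanadu \cite{Nelson1974} anticipated the web.  % ordinary citation keys
    % ... reference-printing commands per the multibibliography documentation ...
    \end{document}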
Especially in this postmodern era of sampling, appropria-
tion, quoting, retweeting, remixing, and mash-ups, connections
can be as important as unanchored original ideas. Eventually
projections will not be statically generated at “compile time”
like the multibibliography example here, but generated “on-
the-fly,” like a spreadsheet or object-oriented database, sortable
by and pivotable around arbitrary fields and keys. The essence
of such anticipation is “late binding,” deferring determination
of form of presentation until “runtime” (reading time).
X. CONCLUSION: THE FUTURE OF LITERACY
Modern e·books have not only words and pictures, but motion and sound as well. Encouraging readers to actively engage with contents can enhance interest, comprehension, learning, and retention. As diagrammed by Figure 6, the process of reading is becoming one of "info-viz," information visualization using interactive data representations (visual and otherwise) [Carlson, 2007: 18, Section 18: Scientific Visualization].⁴⁰ Such instruments blur distinctions between stand-alone applications and graphical readers for external contents, and also between playing and reading, as they can be as much toy as traditional book. Typographic design innovations soften differences between text, numbers, graphs, and images (including maps and diagrams). Spatial effects, such as immersive imagery and live 3D models, nudge modern literacy towards something between reading and looking at a painting or sculpture.
The book is a kind of user interface, as amusingly considered by Isaac Asimov's short story "The Holmes-Ginsbook Device" [Asimov, 1969: 19], the "Introducing the Book" video⁴¹ imagining a medieval help-desk, the "Experience the power of a bookbook™" parody⁴² by Ikea, and "The Book of the Future" comic tracing its regression [Snider, 2012: 20].
Even though such hypermedia-enhanced documents as the one you are reading here are ordinary if modern — disseminable through contemporary digital conference proceedings, journal archives, and digital libraries — they can include advanced features that have not yet diffused into the mainstream. (Of course, for digital formats, archivability is an issue: Will PDF documents still be readable in a century? A millennium?) Even colored text is not yet very popular in traditional publishing (excluding signage and advertising), presumably because of the inertia of habits formed with conventions predicated on the practice of squirting and painting monochromatic ink on "dead trees." Such emerging enhancements represent semiotic opportunities, working at different scales through documents:
40https://design.osu.edu/carlson/history/lesson18.html
41https://www.youtube.com/watch?v=pQHX-SjgQvQ
42https://www.youtube.com/watch?v=MOXQo7nURs0
[Figure 5 diagram: an inline citation block, exemplified by "(Suzuki, 2015: 57)", at the center, linked by name, date, and page/index to the alphabetical, chronological (timeline), and sequence (first appearance in document) reference listings.]
Figure 5. Hyperreferential links across a document and among multibibliographies: Each inline citation, exemplified by the block in the center, is linked to references in subbibliographies, which cross-reference each other and can also point back to the inline callouts. Hollow arrowheads represent links provided by hyperref's backref;³⁹ solid arrowheads represent links provided by the multibibliography package.
[Figure 6 diagram: Data, Visualization, and User stages; boxes for specification S, time-varying display I(t), knowledge K, and the user's perceptual & cognitive abilities P; circles for the processes visualization V and exploration E; flows labeled dS/dt and dK/dt, with apprehension, sophistication, curiosity, and adjusted projection connecting the stages.]
Figure 6. Reading can be thought of as a kind of directive for visualization. Boxes in the figure above denote state containers; circles denote processes that transform inputs into outputs. The central process in the model is visualization V. Data D is transformed according to specification S into a time-varying information display I(t). Specification S includes applied algorithms and parameterization, including filtering of data (interpolation, projection, or smoothing), and visualization comprises mapping the preprocessed, filtered data into display primitives for rendering. Time-dependent information I(t) is perceived by a user, with an increase in knowledge K as a result. The amount of knowledge gained depends on the information I(t), the current knowledge K of the user, and particular properties of the user's perceptual and cognitive abilities P. Current knowledge K(t) is the sum of initial knowledge K0 and the accumulated knowledge gained from displays. Users can interactively explore, E(K), changing the specification S. Visual analytics is the science of analytical reasoning facilitated by interactive interfaces, a combination of data analysis and information visualization. Visual analytics spans disparate components, including analytical reasoning, visual representations, computer-human interactions, data representations and algorithms, and tools for collaboration and communicating results. Model and flow visualization are extended from [van Wijk, 2006: 17].
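The flow sketched in the caption can be written compactly in the notation of [van Wijk, 2006: 17] (a summary of that paper's operational model, using the caption's symbols):
\[
I(t) = V(D, S, t), \qquad
\frac{dK}{dt} = P(I, K, t), \qquad
K(t) = K_0 + \int_0^t P\,dt, \qquad
\frac{dS}{dt} = E(K, t).
\]
Reading, in this view, is the loop in which a display I(t) increases knowledge K, and accumulated knowledge drives exploration E, which in turn adjusts the specification S of what is shown next.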
macro Hypertextual elaboration, like the multibibliography package's unspooling of references databases, suggests other dynamic associations and threading among as well as within documents. The ghost of a failed experiment can be glimpsed in this page's background watermark, intended to be a usable QR code: contrast tuning could not reconcile the tension between human- and camera-readability.
meso Live contents such as audio, sequential stills (as in Figs. 2b, 3, and 7), video, and manipulable 3D models (as in Fig. 4) can be embedded in a document. Such multimedia isn't printable, but it is browsable through interactive readers.
micro Articulated expression of individual characters like zebrackets and sparklines illustrates the power of semi-custom glyph generation. Every pixel of every character can be semi-customized for its particular circumstances. Just as video contents distributed for home entertainment sometimes rely upon viewers' ability to pause playback, dilating time, dense documents are sometimes best appreciated by zooming into detail, dilating resolution. As an example of another failed experiment, Fig. 2b was shrunk (still animated!) into the micrographic period at the end of this sentence, but with unusable resolution. (Try extreme zooming to see it [up to 6400%, the current limit of Adobe Reader].)
As reading systems get “softer” and more “analog”
[Campbell and Kautz, 2014: 21],43 it is both desirable and
inevitable that heretofore static documents will be generalized
into imagery, animation, and dynamic interactive information
interfaces. Eye-trackers will sense gaze, and configure dis-
plays to enrich and enhance reading. Complementing personal
reading devices, large-format screens — billboard- and room-
sized multiscreen configurations — will provide immersive or
at least ambient experience, including collaborative or social
“reading.” The future of literacy is animation, in the sense of
dynamic, colored, arbitrarily displayed contents.
Here is a “finale” example of Zebrackets, with an extended
“stripes and slits” (“poles ’n’ holes”) visual style, this time
colored and animated: , which can
be correspondingly graphed as in Figure 7, and displayed
synchronously, even though they are separate parts of the
document, deployed in a master-slave relationship. Mixed-
scale features distributed across a document can be entangled.
Figure 7. Tree representation of parenthetical expression
In this joint example, a frame rate of 1.5 Hz is used, cyclically flickering the nodes in a double-pulse rhythm. Redundant coding (syntax, glyph articulation, color, synchronous flashing, graph representation) is intended to make the structure more perspicuous. Arbitrary animation is possible, including modulation of frequency, phase, looping, rhythm, etc. HTML5, eclipsing rich media frameworks such as Adobe Flash and Microsoft Silverlight, will make it easier to apply such effects to web pages.
Such trends are admittedly somewhat worrisome, as pa-
tience for ordinary text could atrophy. The “Don’t ask; mul-
titask!” attitude could erode appreciation of literature and
extended non-fiction that reward sustained, concentrated atten-
tion. However, although paper enjoys an endearing physicality
43http://vecg.cs.ucl.ac.uk/Projects/projects fonts/projects fonts.html
impossible to simulate [Nakanishi, 2007: 22], digital “unflat-
tened” media are too convenient not to prevail (Amazon’s
opening a brick-and-mortar outlet notwithstanding). The act
of reading will approach cinematic information exploration,
multimodal “knowledge navigation.” Such predictions might
seem like sacrilege to traditionalists and bibliophilic purists,
but user experience trumps homogeneity of medium. Technol-
ogy has progressed beyond the printing press, and so shall
we.
REFERENCES SORTED BY NAME
[Asimov, 1969: 19] Asimov, I. (1969). The Holmes-Ginsbook Device. In
Opus 100. Houghton Mifflin. 5
[Baggi and Haus, 2009: 15] Baggi, D. and Haus, G. (2009). IEEE 1599:
Music Encoding and Interaction. Computer, 42(3):84–87. 5
[Campbell and Kautz, 2014: 21] Campbell, N. D. F. and Kautz, J. (2014).
Learning a manifold of fonts. ACM Trans. Graph., 33(4):91:1–91:11. 7
[Carlson, 2007: 18] Carlson, W. (2007). A Critical History of Computer
Graphics and Animation. 5
[Cohen, 1992: 3] Cohen, M. (1992). Blush and Zebrackets: Two Schemes
for Typographical Representation of Nested Associativity. Visible Lan-
guage, 26(3-4):436–449. 1
[Cohen, 1994: 4] Cohen, M. (1994). Adaptive character generation and spatial expressiveness. TUGboat: Communications of the TeX Users Group, 15(3):192–198. 1
[Cohen et al., 2013: 16] Cohen, M., Haralambous, Y., and Veytsman, B. (2013). The Multibibliography Package. TUGboat: Communications of the TeX Users Group, 34(3):901–904. http://www.ctan.org/pkg/multibibliography, ftp://ftp.dante.de/tex-archive/macros/latex/contrib/nmbib/nmbib.pdf. 5
[Cox, 2014a: 11] Cox, B. (2014a). Wonders of Life. Harper Collins. 4
[Cox, 2014b: 8] Cox, B. (2014b). Wonders of the Universe. Harper Collins.
3
[Ellison, 2014: 9] Ellison, H. (2014). Star Trek: The City on the Edge of
Forever. Madefire. 4
[FL@33 et al., 2011: 5] FL@33, Vollauschek, T., and Jacquillat, A. (2011).
The 3D Type Book. Laurence King Publishing. 2
[Graf, 2012: 12] Graf, N. A. (2012). 3DPDF: Open Source Solutions for
Incorporating 3D Information in PDF Files. In Proc. Nuclear Science
Symp., Medical Imaging Conf., Anaheim, California. 4
[Heller and Fili, 2013: 6] Heller, S. and Fili, L. (2013). Shadow Type:
Classic Three-Dimensional Lettering. Princeton Architectural Press,
Thames and Hudson Ltd. 2
[McCloud, 2000: 10] McCloud, S. (2000). Reinventing Comics. Peren-
nial/HarperCollins. 4
[McFedries, 2006: 14] McFedries, P. (2006). When Good Clicks Go Bad.
IEEE Spectrum, 43(10):88. 5
[Nakanishi, 2007: 22] Nakanishi, T. (2007). Special Effects: A Book About
Special Printing Effects. AllRightsReserved Ltd. 7
[Nelson, 1974: 1] Nelson, T. H. (1974). Computer Lib/Dream Machines.
Tempus Books of Microsoft Press. 1
[Snider, 2012: 20] Snider, G. (2012). Incidental comics: The book
of the future. New York Times Sunday Book Review. http:
//www.nytimes.com/interactive/2012/03/30/books/review/snider01.html,
http://www.incidentalcomics.com/2012/04/book-of-future.html. 5
[Steenblik, 1993: 7] Steenblik, R. A. (1993). Chromastereoscopy. In McAllister, D. F., editor, Stereo Computer Graphics and Other True 3D Technologies, pages 183–195. Princeton University Press. ISBN 0-691-08741-5. 2
[Tufte, 2006: 2] Tufte, E. R. (2006). Beautiful Evidence. Graphics Press. 1
[van Wijk, 2006: 17] van Wijk, J. J. (2006). Views on visualization. IEEE
Trans. on Visualization and Computer Graphics, 12(4):421–433. 5
[Wolfram Research, Inc., 2014: 13] Wolfram Research, Inc. (2014). Math-
ematica. 5
REFERENCES SEQUENCED BY APPEARANCE
[1: Nelson, 1974] Theodor H. Nelson. Computer Lib/Dream Machines.
Tempus Books of Microsoft Press, 1974. 1
[2: Tufte, 2006] Edward R. Tufte. Beautiful Evidence. Graphics Press, 2006.
1
[3: Cohen, 1992] Michael Cohen. Blush and Zebrackets: Two Schemes
for Typographical Representation of Nested Associativity. Visible
Language, 26(3-4):436–449, 1992. 1
[4: Cohen, 1994] Michael Cohen. Adaptive character generation and spatial expressiveness. TUGboat: Communications of the TeX Users Group, 15(3):192–198, September 1994. 1
[5: FL@33 et al., 2011] FL@33, Tomi Vollauschek, and Agathe Jacquillat.
The 3D Type Book. Laurence King Publishing, 2011. 2
[6: Heller and Fili, 2013] Steven Heller and Louise Fili. Shadow Type:
Classic Three-Dimensional Lettering. Princeton Architectural Press,
Thames and Hudson Ltd., 2013. 2
[7: Steenblik, 1993] Richard A. Steenblik. Chromastereoscopy. In David F. McAllister, editor, Stereo Computer Graphics and Other True 3D Technologies, pages 183–195. Princeton University Press, 1993. ISBN 0-691-08741-5. 2
[8: Cox, 2014b] Brian Cox. Wonders of the Universe. Harper Collins,
February 2014. 3
[9: Ellison, 2014] Harlan Ellison. Star Trek: The City on the Edge of
Forever. Madefire, 2014. 4
[10: McCloud, 2000] Scott McCloud. Reinventing Comics. Peren-
nial/HarperCollins, 2000. 4
[11: Cox, 2014a] Brian Cox. Wonders of Life. Harper Collins, June 2014.
4
[12: Graf, 2012] Norman A. Graf. 3DPDF: Open Source Solutions for
Incorporating 3D Information in PDF Files. In Proc. Nuclear Science
Symp., Medical Imaging Conf., Anaheim, California, Nov. & Dec. 2012.
4
[13: Wolfram Research, Inc., 2014] Wolfram Research, Inc. Mathematica,
2014. 5
[14: McFedries, 2006] Paul McFedries. When Good Clicks Go Bad. IEEE
Spectrum, 43(10):88, October 2006. 5
[15: Baggi and Haus, 2009] Denis Baggi and Goffredo Haus. IEEE 1599:
Music Encoding and Interaction. Computer, 42(3):84–87, March 2009.
5
[16: Cohen et al., 2013] Michael Cohen, Yannis Haralambous, and Boris Veytsman. The Multibibliography Package. TUGboat: Communications of the TeX Users Group, 34(3):901–904, 2013. http://www.ctan.org/pkg/multibibliography, ftp://ftp.dante.de/tex-archive/macros/latex/contrib/nmbib/nmbib.pdf. 5
[17: van Wijk, 2006] Jarke J. van Wijk. Views on visualization. IEEE Trans.
on Visualization and Computer Graphics, 12(4):421–433, 2006. 5
[18: Carlson, 2007] Wayne Carlson. A Critical History of Computer Graph-
ics and Animation, 2007. 5
[19: Asimov, 1969] Isaac Asimov. The Holmes-Ginsbook Device. In Opus
100. Houghton Mifflin, 1969. 5
[20: Snider, 2012] Grant Snider. Incidental comics: The book of the future.
New York Times Sunday Book Review, March 30 2012. http://www.
nytimes.com/interactive/2012/03/30/books/review/snider01.html, http://
www.incidentalcomics.com/2012/04/book-of-future.html. 5
[21: Campbell and Kautz, 2014] Neill D. F. Campbell and Jan Kautz.
Learning a manifold of fonts. ACM Trans. Graph., 33(4):91:1–91:11,
July 2014. 7
[22: Nakanishi, 2007] Taka Nakanishi. Special Effects: A Book About
Special Printing Effects. AllRightsReserved Ltd., 2007. 7
REFERENCES SORTED CHRONOLOGICALLY
[Asimov, 1969: 19] Asimov, I. The Holmes-Ginsbook Device. In Opus 100.
Houghton Mifflin, 1969. ISBN 978-0-395-07351-3. 5
[Nelson, 1974: 1] Nelson, T. H. Computer Lib/Dream Machines. Tempus
Books of Microsoft Press, 1974. ISBN 0-89347-002-3 and 0-914845-
49-7. 1
[Cohen, 1992: 3] Cohen, M. Blush and Zebrackets: Two Schemes for Ty-
pographical Representation of Nested Associativity. Visible Language,
26(3-4):436–449, 1992. 1
[Steenblik, 1993: 7] Steenblik, R. A. Chromastereoscopy. In D. F. McAllister, editor, Stereo Computer Graphics and Other True 3D Technologies, pages 183–195. Princeton University Press, 1993. ISBN 0-691-08741-5. 2
[Cohen, 1994: 4] Cohen, M. Adaptive character generation and spatial expressiveness. TUGboat: Communications of the TeX Users Group, 15(3):192–198, 1994. 1
[McCloud, 2000: 10] McCloud, S. Reinventing Comics. Peren-
nial/HarperCollins, 2000. ISBN 0-06-095350-0. 4
[McFedries, 2006: 14] McFedries, P. When Good Clicks Go Bad. IEEE
Spectrum, 43(10):88, 2006. doi:10.1109/MSPEC.2006.1705780. 5
[Tufte, 2006: 2] Tufte, E. R. Beautiful Evidence. Graphics Press, 2006.
ISBN 0-9613921-7-7. 1
[van Wijk, 2006: 17] van Wijk, J. J. Views on visualization. IEEE Trans.
on Visualization and Computer Graphics, 12(4):421–433, 2006. 5
[Carlson, 2007: 18] Carlson, W. A Critical History of Computer Graphics
and Animation. 2007. 5
[Nakanishi, 2007: 22] Nakanishi, T. Special Effects: A Book About Special
Printing Effects. AllRightsReserved Ltd., 2007. ISBN 978-988-99001-
1-3. 7
[Baggi and Haus, 2009: 15] Baggi, D. and Haus, G. IEEE 1599: Music
Encoding and Interaction. Computer, 42(3):84–87, 2009. doi:10.1109/
MC.2009.85. 5
[FL@33 et al., 2011: 5] FL@33, Vollauschek, T., and Jacquillat, A. The 3D
Type Book. Laurence King Publishing, 2011. ISBN 1856697134. 2
[Graf, 2012: 12] Graf, N. A. 3DPDF: Open Source Solutions for Incorpo-
rating 3D Information in PDF Files. In Proc. Nuclear Science Symp.,
Medical Imaging Conf. Anaheim, California, 2012. 4
[Snider, 2012: 20] Snider, G. Incidental comics: The book of the fu-
ture. New York Times Sunday Book Review, 2012. http://www.
nytimes.com/interactive/2012/03/30/books/review/snider01.html, http://
www.incidentalcomics.com/2012/04/book-of-future.html. 5
[Cohen et al., 2013: 16] Cohen, M., Haralambous, Y., and Veytsman, B. The Multibibliography Package. TUGboat: Communications of the TeX Users Group, 34(3):901–904, 2013. http://www.ctan.org/pkg/multibibliography, ftp://ftp.dante.de/tex-archive/macros/latex/contrib/nmbib/nmbib.pdf. 5
[Heller and Fili, 2013: 6] Heller, S. and Fili, L. Shadow Type: Classic
Three-Dimensional Lettering. Princeton Architectural Press, Thames
and Hudson Ltd., 2013. ISBN 1616892048, 978-1616892043. 2
[Campbell and Kautz, 2014: 21] Campbell, N. D. F. and Kautz, J. Learning
a manifold of fonts. ACM Trans. Graph., 33(4):91:1–91:11, 2014. ISSN
0730-0301. doi:10.1145/2601097.2601212. 7
[Cox, 2014a: 11] Cox, B. Wonders of Life. Harper Collins, 2014a. ISBN 978-0007527625. 4
[Cox, 2014b: 8] Cox, B. Wonders of the Universe. Harper Collins, 2014b. ISBN 978-0062110541, 978-0062110543. 3
[Ellison, 2014: 9] Ellison, H. Star Trek: The City on the Edge of Forever.
Madefire, 2014. 4
[Wolfram Research, Inc., 2014: 13] Wolfram Research, Inc. Mathematica.
2014. 5