Article

Vehicular lifelogging: New contexts and methodologies for human-car interaction


Abstract

This paper presents an automotive lifelogging system that uses in-car sensors to engage drivers in ongoing discoveries about their vehicle, driving environment, and social context throughout the lifecycle of their car. A goal of the design is to extend the typical contexts of automotive user-interface design by (1) looking inward to the imagined character of the car and (2) looking outward to the larger social context that surrounds driving. We deploy storytelling and theatrical strategies as a way of moving our thinking outside the familiar constraints of automotive design. These methods help us to extend the concept of a lifelog to consider the lives of objects and the relationship between humans and non-humans as fruitful areas of design research.
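The abstract does not include a data model, but a minimal sketch of what a location-keyed entry in such a vehicular lifelog might look like is given below; all field names and the revisit query are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: field names and structure are assumptions,
# not the data model described in the paper.
from dataclasses import dataclass
from typing import Optional


@dataclass
class VehicularLifelogEntry:
    timestamp: float                  # Unix time of the logged event
    latitude: float                   # GPS position where the event occurred
    longitude: float
    sensor: str                       # e.g. "rain_sensor", "seat_occupancy"
    value: float                      # raw sensor reading
    annotation: Optional[str] = None  # driver- or system-added note
    passengers: int = 0               # seated passengers detected at the time


def entries_near(log, lat, lon, radius_deg=0.001):
    """Return past entries recorded near a location, so the car can
    surface 'memories' when the driver revisits that place."""
    return [e for e in log
            if abs(e.latitude - lat) < radius_deg
            and abs(e.longitude - lon) < radius_deg]
```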


... . A visual illustration of our design space for the creation of digital content in connected cars characterizing prior work [14,16,24] using five dimensions. ...
... A particular application is "vehicular lifelogging" [13,14,16,24], which consists of collecting and storing various data regarding the driver's performance using the vehicle's built-in sensors and systems. Vehicular lifelogging is a specific context of use for lifelogging [1,11], where data about one's life events and experiences are stored for later retrieval, acting as an external digital memory support. ...
... (2) We demonstrate the utility of our design space by characterizing the features of existing vehicular lifelogging systems and applications, but also by presenting a new prototype, informed by the dimensions of our design space, that employs a smartphone and a pair of smartglasses with an embedded micro video camera. Our contributions fill in missing design and engineering aspects of an application area of lifelogging that has been little researched so far [13,14,16,24]. We hope that our design space will foster new research on vehicular lifelogging and inform new prototypes and applications for the creation, management, and sharing of digital content in relation to connected and hyper-connected cars. ...
Conference Paper
Full-text available
Connected cars can create, store, and share a wide variety of data reported by in-vehicle sensors and systems, but also by mobile and wearable devices, such as smartphones, smartwatches, and smartglasses, operated by the vehicle occupants. This wide variety of driving- and journey-related data creates ideal premises for vehicular logs with applications ranging from driving assistance to monitoring driving performance and to generating content for lifelogging enthusiasts. In this paper, we introduce a design space for vehicular lifelogging consisting of five dimensions: (1) nature and (2) source of the data, (3) actors, (4) locality, and (5) representation. We use our design space to characterize existing vehicular lifelogging systems, but also to inform the features of a new prototype for the creation of digital content in connected cars using a smartphone and a pair of smartglasses.
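As a reading aid only, here is a minimal sketch of how a system could be characterized along the five dimensions named in the abstract; the dimension names follow the abstract, while the record structure and example values are assumptions.

```python
# Sketch of the five-dimensional design space as a simple record.
# Dimension names follow the abstract; candidate values are assumptions.
from dataclasses import dataclass


@dataclass
class VehicularLifeloggingSystem:
    name: str
    data_nature: str      # (1) e.g. "video", "telemetry", "location"
    data_source: str      # (2) e.g. "in-vehicle sensors", "smartglasses"
    actors: str           # (3) e.g. "driver", "passengers", "nearby cars"
    locality: str         # (4) e.g. "in-car", "cloud"
    representation: str   # (5) e.g. "timeline", "map", "tag cloud"


prototype = VehicularLifeloggingSystem(
    "smartphone + smartglasses prototype",
    data_nature="video", data_source="smartglasses camera",
    actors="driver", locality="in-car", representation="timeline")
```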
... A smartphone application could be used to review the photos after the drive. With their Vehicular Lifelogging project [36], McVeigh-Schultz et al. envisioned a system in which the experiences of any driver of the car could be logged. They created an in-car prototype integrating GPS coordinates and car information based on rain sensors (to detect windshield fluid), a sensor for detecting seated passengers, and other sensor data. ...
Conference Paper
We investigate and characterize a design space for in-car games based on a survey of previous work, and identify an opportunity for "cross-car" multiplayer games played among occupants in nearby cars. This is supported by innovations in automotive technology like autonomous driving, full-window heads-up displays, and ad hoc communication between vehicles. In a custom virtual reality driving simulator, we created three games to illustrate design dimensions: Killerball, a competitive free-for-all game; Billiards, a player versus player, massively multiplayer online game with player assists; and Decoration, an idle-style game with multiplayer resource management. A 12-participant evaluation with a semi-structured interview revealed a positive response to input controls and HUDs, and suggests game genres have a similar effect on time for an emergency driving takeover task. We used insights from our process and evaluation to formulate design considerations for future cross-car games.
... If the user's need for a positive emotional experience is satisfied, it is expected that practical sides, such as usability or safety, can also be enhanced [21]. To provide an emotional and rich vehicle experience, some researchers have attempted to log a vehicle-based life log [11]; reinforce the social interaction of the driver and passenger [3]; and explore a voice color for a pleasant auditory system [9]. However, there is still a lack of research suggesting a vision or a practical case that supports an emotional interaction between a driver and a vehicle. ...
... We can imagine a future vehicle-driver interaction model similar to the intimate relationship between a special-purpose pet and a human. The study by McVeigh-Schultz et al. shows that a similar human-car relationship could be achieved when the user projects his or her inner life into the car [11]. ...
Conference Paper
Research on electric vehicles (EVs) has focused on technological issues, such as energy, physical structure, and self-driving. It is important to consider how to enhance the emotional user experience for the new generation of EVs. In this paper, we propose a vehicle-applicable pet-morphic design strategy and a concept validation prototype, 'DooBoo', to support an emotional user experience of a personal EV. DooBoo is a pet-like dashboard system that applies the interactive characteristics of pet dogs. We illustrate three key potential scenarios: personal security, safe driving assistance, and vehicle information guidance. Preliminary user feedback showed the potential to support an emotional driving experience with an EV: participants felt togetherness, as if it were another passenger; affection for the vehicle could grow; and it offered a means of intuitive and visceral communication. We discuss implications of our prototype and study results for guiding emotional interactions with EVs.
... This idea, that material forms have a "life" of their own, takes on particular potency when designed objects accrue interactive features, becoming what Sherry Turkle calls 'relational artifacts' [26]. Within alt.chi, research has similarly explored this imagined "inner life" of objects through projects such as a vehicular lifelog that supports human-car relationships [12] or moving TV screens which orient preferentially towards certain participants over others [13]. ...
... Its intent is to stimulate further discussion and design explorations around animism. In this sense, AniThings draws from the methodological investments of design fiction [5,19,22], and extends previous research within alt.chi that argues for design fiction as an important methodological innovation within HCI [24] [12]. ...
Conference Paper
Full-text available
This paper explores the metaphor of animism as a methodological framework for interaction design and, in particular, advocates for a form of animism the authors term 'heterogeneous multiplicity.' Animism can make valuable contributions within ubiquitous computing contexts, where objects with designed behaviors tend to evoke a perception that they have autonomy, intention, personality and an inner life. Furthermore, animism that supports heterogeneous multiplicity offers unique opportunities to stimulate human creativity through embodied engagement with an ecology of things. To demonstrate the concept of heterogeneous multiplicity, the authors present a speculative design project, AniThings, that intertwines multiple animistic collaborators to position activities of digital resource discovery and curation beyond the narrow domain of recommendation engines and personal feeds. The project illustrates an ecology of six tangible, interactive objects that, respectively, draw from a variety of digital resources and inhabit a range of variously positioned stances towards their human collaborators and each other. This diversity of behaviors, resources, and positionality makes AniThings ideal for supporting open-ended ideation and collaborative imagining activities.
... To achieve autonomous recognition and documentation of landscapes during transit, we integrate the "Human-Vehicle-Environment" (HVE) triad philosophy (Joshua, 2012). This approach emphasizes the interactions among humans, vehicles, and the environment, spanning disciplines like traffic engineering, computer science, and human-computer interaction. ...
Conference Paper
Road trip vlogs have gradually become popular content for sharing among people. This study introduces an artificial intelligence (AI)-based road trip video editing system, designed primarily to prevent traffic accidents caused by drivers recording scenic views with smartphones while driving. To enable the dashcam to automatically capture material of interest, it is crucial to define the starting and ending times and the content. Data monitoring around the Human-Vehicle-Environment (HVE) triad is a critical factor for establishing the capture rules. Guided by audiovisual language theory, the captured video material is used to support intelligent editing. Furthermore, stylization in editing, including narrative lyrical and documentary styles, is another design factor used to achieve diversity in the videos. The results demonstrate that the synergy among the HVE elements is a pivotal factor in capturing key visuals. Using AI to complete road trip videos can reduce the risk of traffic accidents and promote the effective recording of scenic views.
... Several commercial products are available for lifelogging enthusiasts, among which Narrative Clip 2 [86], MeCam [74], and SnapCam [125]. Lifelogging prototypes based on wearable video cameras [16,28,44] are well represented in the scientific literature, which has targeted specific applications, such as food-logging [56], vehicular lifelogging [5,73], thinglogging for the Internet-of-Things [36], logs of computer usage [43] and sleep patterns [75], and applications that monitor and report aspects regarding the quality of life [148]. Among these applications, lifelogging has been used for people with cognitive disabilities [6,14,132]; e.g., Berry and Stix [14,132] discussed two applications for people with Alzheimer's that enabled users to relive recent visual experiences with the help of the data stored in the lifelog, and Al Mahmud et al. [6] proposed an image capturing device for people with aphasia that captured photographs and added tags automatically. ...
Article
Full-text available
We address in this work the topic of broadcasting one’s visual reality, captured by the video cameras embedded in mobile and wearable devices, to a remote audience. We discuss several designs of such life broadcasting systems, which we position at the intersection of lifelogging, Alternate Reality, and Cross-Reality technology. To this end, we introduce the “Alternate Reality Broadcast-Time” matrix for the broadcasting and consumption of alternate realities, in which designs of systems that implement sharing and consumption of personal visual realities can be positioned, characterized, and compared. These design options range from simple video streaming over the web using conventional video protocols to mediated and augmented reality, to audio narration and vibrotactile rendering of concepts automatically detected from video captured by wearable cameras. To demonstrate the usefulness of our broadcast-time matrix, we describe three prototypes implementing lifelogging, concept recognition from video, augmented and mediated vision, and vibrotactile feedback. Our contributions open the way toward new applications that blend lifelogging, consumption of multimedia alternate realities, and XR technology to empower users with richer opportunities for self-expression and new means to connect with the followers of events in their lives.
... Unlike clip-on video cameras, smartglasses can record video from the first-person and eye-level perspective [10,11]. Preliminary work on lifelogging applications has focused on monitoring various aspects of life, such as food-logging [34], computer usage [1], sleep patterns [35], wordometer systems [36], vehicular lifelogging [37,38], thing-logging for the Internetof-Things [39], and applications that collect data for the purpose of evaluating the quality of life [40]. For example Zini et al. [40] proposed a system to monitor four distinct aspects of life quality, such as activities, sleep, fatigue, and mood. ...
... Yim and Kim [17] designed a lifelog-based bus information web application that stores bus locations so that passengers can easily search for a bus. McVeigh-Schultz et al. [18] presented a vehicular lifelogging system for a MINI Countryman that uses the vehicle's internal sensors to engage drivers in ongoing discoveries about their vehicle, social context, and driving environment over the entire life cycle of the car. Memories and sensor-specific notifications, combining location data, annotated lifelog events, and sensor data, can be displayed on the MINI infotainment system when a particular location is revisited. ...
Article
Full-text available
Vehicular lifelogging provides us with the opportunity to digitally record the parts of our lives associated with our journeys on the road. Since driving is an integral part of life, logging life bits on the road deserves attention for many purposes, including safety, entertainment, memory augmentation, and analyzing driving behavior. Various technologies have been used to assist people in capturing life events, such as wearable devices, biometric devices, fitness devices, and non-visual wearable and non-wearable devices. These devices have played a vital role in capturing the life events of individuals. However, little attention has been given to capturing and recording events related to a person's life on the road. By using a vehicle's integrated sensors along with other auxiliary devices, a vehicle can very conveniently capture personal as well as vehicle-related data about the driver. This research aims to design and develop a framework, called AutoLog, which is implemented on the Android platform and can be installed on any Android-based smartphone or tablet or on a vehicle's infotainment system. AutoLog is capable of logging driver activities and vehicle dynamics along with several varying environmental contexts. The proposed solution is evaluated through an empirical study by collecting data from 63 drivers. Different tests, i.e., descriptive tabulation, Cronbach's alpha, Kendall's tau-b, and Principal Component Factor Analysis (PCFA), have been carried out to analyze the data. These tests are significant, and the findings show that the AutoLog application is perceived positively in terms of attitude, ease of use, user satisfaction, and intention to use. In addition, the findings also show that the proposed solution helped in improving risky driving behavior.
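The abstract names the statistical tests but not how they are computed; the following sketch shows two of them, Cronbach's alpha and Kendall's tau-b, applied to synthetic questionnaire data (the data, item count, and scales are assumptions, not the study's).

```python
# Illustrative sketch: Cronbach's alpha and Kendall's tau-b on synthetic
# questionnaire data (rows = respondents, columns = Likert items).
import numpy as np
from scipy.stats import kendalltau


def cronbach_alpha(items: np.ndarray) -> float:
    """items: shape (n_respondents, k_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)


rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(63, 5))   # 63 drivers, 5 Likert items
print("alpha:", round(cronbach_alpha(responses), 3))
tau, p = kendalltau(responses[:, 0], responses[:, 1])  # tau-b by default
print("Kendall tau-b:", round(tau, 3), "p:", round(p, 3))
```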
... A modern smartphone is a sophisticated computing platform that has all the communication and technological capabilities of a mobile phone and also contains a rich set of sensors, including an accelerometer, gyroscope, GPS, proximity sensor, magnetometer, camera, etc. [17][18][19]. Through the integration of sensors and the latest technology, and thanks to contextually rich interfaces and interaction, cars become mobile sensing and computing devices [20]. A modern car is intelligent enough to capture personal as well as car-related experiences. ...
Conference Paper
Full-text available
In this paper, we identify risky driving behavior by analyzing the variations in vehicle dynamics that occur when a smartphone is used while driving. These variations have been effectively captured using smartphone and vehicular sensor technologies. A framework named "AutoLog" is used in this paper, which relies on the On-Board Diagnostics 2 (OBD2) interface and smartphone sensors to capture the driver's smartphone activities along with vehicle data. We captured certain essential parameters that can be used for analyzing driving behavior. An alert message can be generated when risky activities, such as smartphone use, are performed while driving. Results show that performing smartphone activities while driving, i.e., making calls and text messaging, suddenly degrades speed, and abrupt changes of the steering wheel lead to lane deviation, which may cause accidents.
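A minimal sketch of the kind of rule such a system could use to flag risky episodes, combining phone-use events with OBD2 speed and steering samples; the thresholds, field names, and sampling scheme are assumptions, not the AutoLog implementation.

```python
# Minimal sketch (assumed thresholds, not from the paper): flag moments
# where smartphone use coincides with a sharp speed drop or steering jerk.
def flag_risky(samples, speed_drop_kmh=15.0, steering_jump_deg=30.0):
    """samples: list of dicts with 'speed_kmh', 'steering_deg', and
    'phone_in_use', captured at a fixed rate (e.g. 1 Hz via OBD2 + phone)."""
    alerts = []
    for prev, cur in zip(samples, samples[1:]):
        if not cur["phone_in_use"]:
            continue
        speed_drop = prev["speed_kmh"] - cur["speed_kmh"]
        steering_jump = abs(cur["steering_deg"] - prev["steering_deg"])
        if speed_drop > speed_drop_kmh or steering_jump > steering_jump_deg:
            alerts.append(cur)          # candidate for an in-car alert
    return alerts
```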
... Lifelogging occurs for a variety of desiderata, made possible by the potential of lifelog data to inform about how to live one's life [22]. Prior work on applications of lifelogging addressed "food-logging" [33], computer usage [58], sleep patterns [42], "wordometer" systems that compute an estimate of the number of words read in everyday life [5], monitoring aspects regarding the quality of life [62], vehicular lifelogging [2,40], or "thing-logging" for the Internet-of-Things [19]. For example, Kitamura et al. [33] implemented a system that could recognize images containing food with 88% accuracy, estimate the food balance with 73% accuracy, and present users with a visualization of their food log; and Zini et al. [62] proposed a system designed to monitor four distinct aspects of life quality, such as activities, sleep quality, level of fatigue, and mood, which were presented to users as gauge charts on their smartphones. ...
Article
Full-text available
We introduce "Life-Tags," a wearable, smartglasses-based system for abstracting life in the form of clouds of tags and concepts automatically extracted from snapshots of the visual reality recorded by wearable video cameras. Life-Tags summarizes users' life experiences using word clouds, highlighting the "executive summary" of what the visual experience felt like for the smartglasses user during some period of time, such as a specific day, week, month, or the last hour. In this paper, we focus on (i) design criteria and principles of operation for Life-Tags, such as its first-person, eye-level perspective for recording life, passive logging mode, and privacy-oriented operation, as well as on (ii) technical and engineering aspects for implementing Life-Tags, such as the block architecture diagram highlighting devices, software modules, third-party services, and dataflows. We also conduct a technical evaluation of Life-Tags and report results from a controlled experiment that generated 21,600 full HD snapshots from six indoor and outdoor scenarios, representative of everyday life activities, such as walking, eating, traveling, etc., with a total of 180 minutes of recorded life to abstract with tag clouds. Our experimental results and Life-Tags prototype inform design and engineering of future life abstracting systems based on smartglasses and wearable video cameras to ensure effective generation of rich clouds of concepts, reflective of the visual experience of the smartglasses user.
... In our more recent work, a project called PUCK (Place-based Ubiquitous, Connected, and Kinetic) positions the lifelog as a platform for supporting relationships between a building and its occupants [11]. We have also developed an automotive lifelogging system that uses in-car sensors to engage drivers in ongoing discoveries about their vehicle, driving environment, and social context throughout the lifecycle of their car [8]. This work departs from the familiar emphasis on video recording as the primary tool of the lifelog. ...
Conference Paper
In this paper, we describe an approach to lifelogging that positions everyday objects, vehicles, and built environments as worthy of their own lifelog systems. We use this approach to anchor what we call ambient storytelling, and we argue that this methodology opens up new opportunities to design for rich and enduring relationships between humans and non-humans. We will explore this research approach through a series of examples, including: (1) a building that learns about its occupants and reveals its lifelog through playful solicitation and reciprocation, (2) story-objects that reveal backstory to their users, and (3) an automotive-sensor system and lifelog platform that facilitates context-specific playful interaction between a driver and their car. In this last example, we were interested in how a vehicle-based lifelog could augment drivers' existing propensities to project character onto their cars. In each of these examples, we reposition the concept of the lifelog to consider the "lives" of objects and the relationship between humans and non-humans as a worthwhile area of research.
Chapter
Full-text available
Conference Paper
In this paper, we describe Expressing Intent - our initial exploration of rich interactions between human actors and three connected objects -- (1) a bookshelf that learns about taste, (2) a radio that determines mood, and (3) a window that augments visual reality. These objects interpret and express 'intent' in a multitude of ways within the context of a shared office space. Objects with intent, or animistic qualities, can evoke diverse reactions from human actors, depending on how they are designed. To investigate the effects of multiple human and non-human actors interacting with self-interest in mind, we deliberately designed each object with distinct needs and values that complement human behavior when placed in a shared office space. The resultant system of interactions involves cascading relations between object-object, human-object and object-human. Further, after our initial prototype, we discover prime areas in interaction design that warrant further exploration. Specifically, the implications of incorporating animism in object design, objects with needs and values independent of their users, and the implications of designing connected heterogeneous ecosystems (i.e. distinct but cooperative objects) vs. homogenous ecosystems (i.e. uniform and cooperative objects).
Article
Background: Recently, the development of smart products calls for new perspectives on the future role of interactive products around us. One of these perspectives is the analogy of such products with living organisms. Although these visions have been suggested, little has been investigated about how such perspectives can impact practical design. Methods: To guide the design of more emotional and symbiotically interactive products that can be compared to living organisms, we investigated the design characteristics that can trigger impressions of lifelikeness in interactive objects. We collected and analyzed 27 design cases that can be considered to have lifelike characteristics. A case analysis workshop was conducted, composed of sorting design cases according to their impression of lifelikeness and an in-depth interview to identify the characteristics of design that affect these impressions. The collected data were analyzed through repetitive affinity diagramming, and four characteristics of design properties were deduced. Results: Four key characteristics of the design properties of interactive objects were identified: a) similarity in physical properties, b) dynamic behavioral properties, c) independence, and d) user recognition. The participants tended to perceive an interactive object as more naturally lifelike when its physical properties had more similarities with those of living organisms, when its behavioral properties were more dynamic, when it was considered to function independently, and when it had the characteristic of recognizing the user. Conclusions: Our work will provide lessons for designing future products and systems using the analogy of living organisms as an emotional experience. We also discuss the design implications of practically utilizing the identified characteristics. The remaining issues that need to be explored, the limitations of the study, and potential future work are also discussed.
Article
This paper presents a design exploration of full-body interaction games played in cars. It describes how we have designed, implemented, and evaluated the core experiences of three different games, which were all aimed at making sitting properly more fun for players/children while travelling by car. By making the restricted body an integral part of gameplay, we hope to, as a side product of gameplay, bring about the best and also safest body posture for young players/children travelling by car, i.e., sitting reasonably upright and still in their child seat with their head leaning back on the neck rest. Another outcome of this could also be an overall safer situation in the car, in that children not sitting still in their child seats while being driven might be stressful for the driver. By presenting the details of our design efforts in this particular design context, we hope to also add to the knowledge we, in HCI, have about how to design bodily experiences with technology at large.
Article
Full-text available
This article argues that our apprehension of the world is increasingly colored by animistic connotations. Traces of animism – the idea that objects and other nonhuman entities possess a soul, life force, and qualities of personhood – are evident in the way we talk to our computers, cars, and smartphones, and in our expectations that they will reply more or less instantaneously. As the Internet of Things becomes more mainstream, the fact that our phone communicates with our thermostat, car, washing machine, or bathroom scale is no longer a future scenario; it is increasingly a shared reality. Our way of experiencing everyday objects is changing to accommodate their shifting nature, purpose, and agency.
Conference Paper
This paper presents an activity tracking and recall mechanism, called TR, for mobile handheld devices such as smartphones. A goal of TR is to provide a practical and extensible way of tracking, storing, and retrieving information for user-centric mobile devices. We implemented the proposed mechanism on the Android and Tizen mobile platforms and tested its performance in terms of CPU usage and the amount of energy consumed by TR.
Conference Paper
Continuous driving-behavioral data can be converted automatically into sequences of “drive topics” in natural language; for example, “gas pedal operating,” “high-speed cruise,” then “stopping and standing still with brakes on.” In regard to developing advanced driver-assistance systems (ADASs), various methods for recognizing driver behavior have been proposed. Most of these methods employ a supervised approach based on human tags. Unfortunately, preparing complete annotations is practically impossible with massive driving-behavioral data because of the great variety of driving scenes. To overcome that difficulty, in this study, a double articulation analyzer (DAA) is used to segment continuous driving-behavioral data into sequences of discrete driving scenes. Thereafter, latent Dirichlet allocation (LDA) is used for clustering the driving scenes into a small number of so-called “drive topics” according to the emergence frequency of physical features observed in the scenes. Because both DAA and LDA are unsupervised methods, they achieve data-driven scene segmentation and drive topic estimation without human tags. Labels of the extracted drive topics are also determined automatically by using distributions of the physical behavioral features included in each drive topic. The proposed framework therefore translates the output of sensors monitoring the driver and the driving environment into natural language. The efficiency of the proposed method is evaluated using a massive data set of driving behavior, comprising 90 drives, more than 78 hours, and over 3700 km in total.
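A minimal sketch of the drive-topic step under assumptions: the scenes are taken as already segmented and symbolized into discrete behavior words (the paper's DAA stage), and scikit-learn's LDA stands in for the authors' implementation.

```python
# Sketch: cluster pre-segmented driving scenes into "drive topics" with LDA.
# The behavior-word vocabulary and scene data below are assumptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

scenes = [                      # one "document" of behavior words per scene
    "gas_pedal gas_pedal high_speed high_speed steady_steering",
    "brake brake standstill standstill",
    "gas_pedal high_speed lane_change",
]
counts = CountVectorizer().fit_transform(scenes)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
print(lda.transform(counts))    # per-scene drive-topic proportions
```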
Article
Full-text available
This article offers a discussion of the theoretical framework used to describe and develop the so-called “intelligent house”, i.e. the idea of futuristic housing dominated by pervasive technologies. Two discourses related to this framework are identified: The dream of efficiency and the privacy nightmare. The dream of efficiency corresponds with an idea of the intelligent house as a caring, adaptive environment, whereas the privacy nightmare focuses on the new flows of information and monitoring. Together, we argue, these discourses mirror a vertical, hierarchical understanding of surveillance which does not sufficiently unfold the role of the people inhabiting the intelligent house. Using two case studies, we suggest “participatory surveillance” as an alternative framework of understanding what it means to inhabit an intelligent building. Participatory surveillance is horizontal, mutual, potentially empowering and social, and thus, the inhabitant is understood as an active part of the house. In addition we argue that participatory design methods may be ways of involving and positioning inhabitants as active users in more horizontal relationships with designers and surveillance technologies. This, however, requires that we avoid thinking of surveillance technologies, as either vehicles of care or control and recognize the complexity of the social contexts the technologies will operate in. We argue that the concept of participatory surveillance is useful in this regard, and that it can challenge and contribute to the way we conceive of and design intelligent living and working environments.
Article
Full-text available
The storage functions in commercial systems encourage people to keep more and more of their memories in digital form. Commercial systems that support information capture include head-worn video capture and the continuous monitoring and recording of physiological information. Low-cost, abundant storage makes it possible to record most life experiences, including audio, video, and other types of data. Recording, creating, receiving, storing, and accumulating digital materials is easy, but the challenge is to manage and use them sensibly. The primary research challenge in digital memories is to cope with the vast quantities of material. Much thoughtful research is under way in every related area, including privacy, security, user interfaces, sharing, content analysis, data mining, and summarization.
Conference Paper
Full-text available
Car rides are often perceived as dull by the passengers, especially children. Therefore, we aim to introduce a system fostering a collaborative and communicative experience in this environment. This paper presents the design for a game played together by all car occupants, including the driver, according to their abilities and capacities. A fully implemented prototype of our system called nICE: nice In-Car Experience is evaluated under real world conditions in a user study with five families using a qualitative approach.
Article
Full-text available
Passive capture lets people record their experiences without having to operate recording equipment, and without even having to give recording conscious thought. The advantages are increased capture, and improved participation in the event itself. However, passive capture also presents many new challenges. One key challenge is how to deal with the increased volume of media for retrieval, browsing, and organizing. This paper describes the SenseCam device, which combines a camera with a number of sensors in a pendant worn around the neck. Data from SenseCam is uploaded into a MyLifeBits repository, where a number of features, but especially correlation and relationships, are used to manage the data.
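A minimal sketch of sensor-triggered passive capture in the spirit described above; the sensor names, thresholds, and capture interval are assumptions rather than SenseCam's actual logic.

```python
# Hedged sketch of passive capture: take a photo when the sensed context
# suggests the scene changed, without any user action.
import time


def capture_loop(read_sensors, take_photo, light_delta=80, min_interval=30):
    """read_sensors() -> dict with 'light' and 'moving'; take_photo() saves
    a frame. Both callables are assumed to be provided by the device."""
    last_light, last_shot = None, 0.0
    while True:
        s = read_sensors()
        changed = last_light is None or abs(s["light"] - last_light) > light_delta
        if (changed or s["moving"]) and time.time() - last_shot > min_interval:
            take_photo()
            last_shot = time.time()
        last_light = s["light"]
        time.sleep(1)
```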
Conference Paper
Full-text available
In this paper, the theater-system technique, a method for agile designing and testing of system behavior and interaction concepts is described. The technique is based on the Wizard-of-Oz approach, originally used for emulating automated speech recognition, and is extended towards an interactive, user-centered design technique. The paper describes the design process using the theater-system technique, the technical build-up of the theater-system, and an application of the technique: the design of a haptic-multimodal interaction strategy for highly automated vehicles. The use of the theater-system in the design process is manifold: It is used for the concrete design work of the design team, for the assessment of user expectations as well as for early usability assessments, extending the principles of user-centered design towards a dynamically balanced design.
Conference Paper
Full-text available
In this paper, we describe "Experience Prototyping" as a form of prototyping that enables design team members, users and clients to gain first-hand appreciation of existing or future conditions through active engagement with prototypes. We use examples from commercial design projects to illustrate the value of such prototypes in three critical design activities: understanding existing experiences, exploring design ideas and in communicating design concepts.
Article
Full-text available
MyLifeBits is a project to fulfill the Memex vision first posited by Vannevar Bush in 1945. It is a system for storing all of one's digital media, including documents, images, sounds, and videos.
Article
Bruce Sterling, author, journalist, editor, and critic, was born in 1954. He has written eight science fiction novels and three short story collections. He edited the anthology MIRRORSHADES, the definitive document of the cyberpunk movement. He also wrote the nonfiction book THE HACKER CRACKDOWN: LAW AND DISORDER ON THE ELECTRONIC FRONTIER (1992), available on the Internet. He has written regular columns on popular science and literary criticism for The Magazine of Fantasy and Science Fiction, Interzone, and Science Fiction Eye. He has appeared on ABC's Nightline, BBC's The Late Show, CBC's Morningside, on MTV, and in Wired, Wall Street Journal, World Art, Time, Newsweek, Details, Nature, The New York Times, Der Spiegel, and other equally improbable venues.
Conference Paper
A traditional life-log is written from the first-person viewpoint, since a user collects data using sensors worn on the body. The UbiGraphy that we introduce here is a third-person-viewpoint life-log made possible by the spontaneous interaction between a wearable computer and smart objects in a ubiquitous computing environment. A wearable computer uses smart objects in the proximity to capture a user's smiles, poses, and even songs from the third-person viewpoint, and then writes a life-log in which the user appears. This paper presents the design of a protocol that enables UbiGraphy and our first prototyping effort for experiencing UbiGraphy.
Article
As Director of the Office of Scientific Research and Development, Dr. Vannevar Bush has coordinated the activities of some six thousand leading American scientists in the application of science to warfare. In this significant article he holds up an incentive for scientists when the fighting has ceased. He urges that men of science should then turn to the massive task of making more accessible our bewildering store of knowledge. For years inventions have extended man's physical powers rather than the powers of his mind. Trip hammers that multiply the fists, microscopes that sharpen the eye, and engines of destruction and detection are new results, but not the end results, of modern science. Now, says Dr. Bush, instruments are at hand which, if properly developed, will give man access to and command over the inherited knowledge of the ages. The perfection of these pacific instruments should be the first objective of our scientists as they emerge from their war work. Like Emerson's famous address of 1837 on "The American Scholar," this paper by Dr. Bush calls for a new relationship between thinking man and the sum of our knowledge.
Article
A thorough appreciation of physical, social, interactional, and psychological contextual factors is crucial in the design of ubiquitous computing applications. This paper investigates the benefits of a method called bodystorming for carrying out design sessions in the original context, 'in the wild', instead of the office. A location is selected that is identical or similar to the original environment. Innovation, carried out on-site, is based on ethnographical data presented as concrete design questions. Individual solutions to design questions are brainstormed and discussed on-site. Facets of data collection and preparation, formulation of design questions, selection of locations, session administration, and evaluation of design ideas are presented. We found that bodystorming permits immediate feedback for generated design ideas, and can provide a more accurate understanding of contextual factors. Bodystorming sessions were found memorable and inspiring. The method is best suited for designing for activities that are accessible and unfamiliar to the researchers.
Conference Paper
The purpose of this paper is to disclose the operational principles of 'WearCam', the basis for wearable tetherless computer-mediated reality, both in its idealized form and in some practical embodiments of the invention, including one of its latest embodiments. The specific inner workings of WearCam, in particular the details of its optical arrangement, have not previously been disclosed, other than by allowing a small number of individuals to look inside the glasses. General considerations, background, and relevant findings in the area of long-term use of wearable, tetherless computer-mediated reality are also presented. Some general insight (arising from having designed and built more than 100 different kinds of personal imaging systems over the last 20 years) is also provided. Unlike the artificiality of many controlled laboratory experiments, much of the insight gained from these experiences relates to the natural complexity of real-life situations.
Ambient Storytelling: The Million Story Building
  • J Stein
  • S Fisher
Stein, J. and Fisher, S. Ambient Storytelling: The Million Story Building. ACM SIGCHI Conference on Tangible, Embedded, and Embodied Interaction, (2011).
Design Fiction: A short essay on design, science, fact and fiction
  • J Bleecker
Bleecker, J. Design Fiction: A short essay on design, science, fact and fiction. Near Future Laboratory, 2009.
The Ethnographic (U) Turn!: Local Experiences of Automobility
  • A Zafiroglu
  • T Plowman
  • J Healey
  • D Graumann
  • G Bell
  • P Corriveau
Zafiroglu, A., Plowman, T., Healey, J., Graumann, D., Bell, G., and Corriveau, P. The Ethnographic (U) Turn!: Local Experiences of Automobility. Automotive UI, (2011), 47-48.
Million Story Building
  • J Stein
  • J Watson
  • W Carter
Stein, J., Watson, J., Carter, W., et al. Million Story Building. 2009. http://interactive.usc.edu/project/millionstory-building/.
Automotive Software Engineering: Principles, Processes, Methods, and Tools
  • J Schauffele
  • T Zurawka
Schauffele, J. and Zurawka, T. Automotive Software Engineering: Principles, Processes, Methods, and Tools. SAE International, 2005.
1964 Ford Falcon. In Evocative Objects: Things We Think With
  • J Donath
Donath, J. 1964 Ford Falcon. In S. Turkle, ed., Evocative Objects: Things We Think With. MIT Press, Cambridge, MA, 2007.
Participatory Surveillance in the Intelligent Building
  • A Albrechtslund
  • T Ryberg
Albrechtslund, A. and Ryberg, T. Participatory Surveillance in the Intelligent Building. Design Issues 27, 3 (2011).
A Cooperative In-Car Game for Heterogeneous Players
  • N Broy
  • S Goebl
  • M Hauder
Broy, N., Goebl, S., Hauder, M., et al. A Cooperative In-Car Game for Heterogeneous Players. AutomotiveUI, (2011).
Experience prototyping
  • M Buchenau
  • J F Suri
Buchenau, M. and Suri, J.F. Experience prototyping. DIS, ACM Press (2000), 424-433.
Surveillance Society: Monitoring Everyday Life (Issues in Society)
  • D Lyon
Lyon, D. Surveillance Society: Monitoring Everyday Life (Issues in Society). Open University Press, 2001.
Interactive Architecture: Connecting and Animating the Built Environment with the Internet of Things
  • J Stein
  • S Fisher
  • G Otto
Stein, J., Fisher, S., and Otto, G. Interactive Architecture: Connecting and Animating the Built Environment with the Internet of Things. Internet of Things Conference, (2010).
Closing Keynote address, IxDA Interaction Eleven Conference
  • B Sterling
Sterling, B. Closing Keynote address. IxDA Interaction Eleven Conference, (2011).
As we may think
  • V Bush
Bush, V. As we may think. The Atlantic, 1945.