Figure 2 - uploaded by Tim Bergin
Electric Pencil PC advertisement, PC Magazine, vol. 5, no. 21, 9 Dec. 1986, p. 104.  

Source publication
Article
Full-text available
Since Electric Pencil first debuted in 1976, more than 400 other word processing packages have emerged, most fading into oblivion. This article recounts the history of microcomputer word processing software, focusing on three of the earliest packages: Electric Pencil, EasyWriter, and WordStar, which was the mid-1980s leader...

Context in source publication

Context 1
... Although a version of Electric Pencil was created for the IBM PC, the package did not survive the increasingly competitive marketplace. Figure 2 shows a 1986 ad for Electric Pencil. ...

Citations

... Digital word processors have been in use since at least the 1970s (Bergin, 2006). In addition to simply recording and storing text, their primary function has been to offload typical writing tasks, such as editing, from humans to computers. ...
Article
Full-text available
In this paper, we argue that a particular set of issues mars traditional assessment practices. They may be difficult for educators to design and implement; only provide discrete snapshots of performance rather than nuanced views of learning; be unadapted to the particular knowledge, skills, and backgrounds of participants; be tailored to the culture of schooling rather than the cultures schooling is designed to prepare students to enter; and assess skills that humans routinely use computers to perform. We review extant artificial intelligence approaches that–at least partially–address these issues and critically discuss whether these approaches present additional challenges for assessment practice.
... Text editing was once considered a killer app of personal computing (Bergin, 2006). Editing text used to be the first skill a novice computer user mastered, and all personal computers are sold with a word processor. ...
Thesis
Tool use pervades our everyday life. We spontaneously manipulate objects as tools, sometimes for tasks beyond their assigned function, thereby re-purposing them, such as when a knife is used as a screwdriver. The Technical Reasoning hypothesis in cognitive neuroscience posits that humans engage in tool use by reasoning about mechanical interactions among objects. By modeling tool use based on abstract knowledge about object interactions, this theory explains how tools can be re-purposed for tasks beyond their original design as a product of knowledge transfer. In digital environments, user interfaces often provide tools with pre-defined functions, such as formatting, scrolling or zooming, meant to be used for a specific set of tasks. However, the literature offers examples of users re-purposing digital tools in unexpected ways. This motivated me to investigate the Technical Reasoning hypothesis as a theoretical model for digital tool use, based on the users' acquired knowledge of the digital world. First, I studied computer users performing a task in a digital text editor while being constrained to re-purpose some of its commands. While most participants managed to re-purpose at least one command, some experienced difficulty due to biases stemming from their knowledge of procedures and functions learned from similar environments. I relate these observations to phenomena of physical tool use, particularly technical reasoning and functional fixedness. Next, I studied how users perceive the possibilities for action on digital objects through toolbars in the interface. Using an experimental environment whose objects support both graphics- and text-oriented commands, I controlled the visibility of the corresponding toolbars to introduce the environment to participants before they performed tasks with both toolbars available. This resulted in strategies where the preferred command types were those associated with the toolbar presented in the introduction, suggesting a priming effect, which can hinder the exercise of technical reasoning to use alternative and possibly more efficient strategies. Last, I present a collaboration study about extreme users of text editing tools that led to the design of Textlets: interactive objects that reify text selections into persistent tools for text documents. Textlets constitute a generative concept building on principles of Instrumental Interaction. We observed a user re-purposing a Textlet during an evaluation study, supporting the notion that an instrumental approach may contribute to the re-purposing of digital tools. This thesis provides evidence of the relevance of the Technical Reasoning hypothesis as a theoretical model for interaction and opens the way to the design of tool-centric interfaces.
... Text editing was once considered a 'killer app' of personal computing (Bergin, 2006). Editing text is usually the first skill a novice computer user masters, and all personal computers are sold with a word processor. ...
Thesis
Millions of users work with documents for their everyday tasks, but their user interfaces have not fundamentally changed since they were first designed in the late seventies. Today's computers come in many forms and are used by a wide variety of users for a wide range of tasks, challenging the limits of current document interfaces. I argue that by focusing on extreme users and taking on a principled perspective, we can design effective and flexible representations to support document-related knowledge work. I first study one of the most common document tasks, text editing, in the context of technical documents. By focusing on legal professionals, one example of extreme document users, we reveal the limits of current word processors. Legal professionals must rely on their memory to manage dependencies and maintain consistent vocabulary within their technical documents. To address these issues, we introduce Textlets, interactive objects that reify text selections into persistent items. We present a proof-of-concept prototype demonstrating several use cases, including selective search and replace, word count, and alternative wording. The observational evaluation shows the usefulness and effectiveness of textlets, providing evidence of the validity of the textlet concept. During my work with legal professionals in the first project, I was introduced to the domain of patent writing and filing. In the patent process, patent attorneys write patent submissions that describe the invention created by the inventor. Patent examiners review the submission and decide whether the submission can be granted as a patent. In collaboration with a European Patent Office, I studied the patent examiners' search and review process. The study reveals the need to manage text from multiple documents across various interconnected activities, including searching, collecting, annotating, organizing, writing and reviewing, while manually tracking their provenance. I extend Textlets to create Passages, text selection objects that can be manipulated, reused, and shared across multiple tools. Two user studies show that Passages facilitate knowledge workers' practices and enable greater reuse of information. These two projects led to another important aspect of knowledge work: file management. I focus on scientists, another example of extreme knowledge workers, to study their document management practices. In an age where heterogeneous data science workflows are the norm, scientists work across many diverse tools instead of relying on more self-contained environments such as Jupyter Notebooks. They have difficulties using the file system to keep track of, re-find and maintain consistency among related but distributed information. We created FileWeaver, a system that automatically detects dependencies among files without explicit user action, tracks their history, and lets users interact directly with the graphs representing these dependencies and version history. By making dependencies among files explicit and visible, FileWeaver facilitates the automation of workflows by scientists and other users who rely on the file system to manage their data. These three document representations rely on the same underlying theoretical principles: reification, polymorphism and reuse. I reflect on my experience designing and evaluating these representations and propose three new design principles: granularity, individuality and synchronization.
Together with the empirical findings from three examples of extreme users, technological demonstration of three proof-of-concept prototypes and three design principles, this thesis demonstrates fresh new approaches to working with documents, a fundamental representation in GUIs. I argue that we should not accept current desktop interfaces as given, and that by taking on a principled and theory-driven perspective we can contribute innovative interface concepts.
... The content of discourse in UI includes the typical buttons/menus chosen and the reasons why they, not others, were chosen. To compete with other word processors, such as WordStar, SuperWriter, PFS: Write, Multimate, Perfect Writer, and WordPerfect, MS Word showed its strength of "What You See Is What You Get" (WYSIWYG), which was thought to have originated with WordStar (Bergin 2006). MS Word provided two methods for users to apply formatting: the formatting commands and the formatting keys. ...
Chapter
Full-text available
The decision-making processes of bilingual individuals differ based on the language used; this phenomenon is called the "Foreign Language Effect", and in this paper the authors demonstrate just how pervasive and ubiquitous this effect is. To extend the current understanding of this phenomenon, three different games were designed in order to measure risk propensity, preference towards foreign products, and empathy. Eight different first languages were tested against two main second languages, English and Chinese, resulting in the collection of 650 data samples. The research was conducted through questionnaires, and the results showed that different first languages react in different ways to the foreign language effect, that people are less inclined towards risky behaviour when using a second language, and that they generally prefer to buy products when those are presented in their native language. Age and proficiency level do not significantly affect the effect discovered, and there is a reduction in the empathy level when individuals need to take delicate decisions within second language contexts. Some cultural causes have been identified and provide explanations for some of the anomalous behaviour observed in certain first languages, opening future research questions on the role that culture plays in bilingual decision making.
... Before that, almost as a known fact, the term "personal computer" is considered by many to have been coined by Henry Edward Roberts (September 13, 1941 - April 1, 2010), the owner of Micro Instrumentation and Telemetry Systems. Popular Electronics ran a cover story on his Altair 8800 in January 1975 (56)(57)(58)(59), which predated the record in the May 1976 issue of Byte (55). ...
Experiment Findings
Materials and Methods (Zhiwen Hu, Jian Zhang, Yongfeng Huang, Xun Wang. Origins of Terminology Pending Further Discovery, Science, 2019, http://science.sciencemag.org/content/291/5501/39.2/tab-e-letters.)
... During data collection, the killer application [23] is used to scrape various comments relevant to a particular issue on Twitter. A killer app is a program so compelling that it persuades consumers to adopt the hardware and software of a specific platform. ...
Article
Full-text available
Over the last decade, the increased use of social media has led to an increase in hateful activities in social networks. Hate speech is one of the most dangerous of these activities, so users have to protect themselves from such activities on YouTube, Facebook, Twitter, etc. This paper introduces a method that uses a hybrid of natural language processing and machine learning techniques to predict hate speech on social media websites. After hate speech data is collected, stemming, token splitting, character removal and inflection elimination are performed before the hate speech recognition process. The collected data is then examined using a killer natural language processing optimization ensemble deep learning approach (KNLPEDNN). This method detects hate speech on social media websites using an effective learning process that classifies the text into neutral, offensive and hate language. The performance of the system is then evaluated using overall accuracy, f-score, precision and recall metrics. The system attained minimum deviations (mean square error 0.019, cross-entropy loss 0.015 and logarithmic loss 0.0238) and 98.71% accuracy.
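For readers unfamiliar with the preprocessing steps named in that abstract, the following minimal sketch illustrates them with standard NLTK and scikit-learn calls. It is not the authors' KNLPEDNN ensemble: a plain TF-IDF plus logistic-regression classifier stands in as a hypothetical placeholder for the deep learning model, and the example comments and labels are invented for illustration only.

# Illustrative sketch only; assumes nltk and scikit-learn are installed and the
# "punkt" and "wordnet" NLTK data packages have been downloaded.
import re

from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

def preprocess(text: str) -> str:
    """Character removal, token splitting, stemming and inflection elimination."""
    text = re.sub(r"[^a-z\s]", " ", text.lower())         # character removal
    tokens = word_tokenize(text)                          # token splitting
    tokens = [stemmer.stem(t) for t in tokens]            # stemming
    tokens = [lemmatizer.lemmatize(t) for t in tokens]    # inflection elimination
    return " ".join(tokens)

# Hypothetical labelled comments: 0 = neutral, 1 = offensive, 2 = hate
comments = ["have a great day", "you are completely useless", "people like you should disappear"]
labels = [0, 1, 2]

# Placeholder classifier standing in for the KNLPEDNN ensemble described above.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit([preprocess(c) for c in comments], labels)
print(model.predict([preprocess("you are useless")]))     # expected label: 1 (offensive)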
... The first major breakthrough was in accessibility through the development of personal computers (Freiberger and Swaine, 1984) such as the Altair 8800 in 1974 (Roberts and Yates, 1975), Apple II in 1977 (Wozniak, 1977), and IBM PC in 1981 (Morgan, 1981), which were sufficiently low in cost as to make dedication of a computer to the tasks of one interactive user cost-effective. Low-cost personal computers encouraged software entrepreneurs to develop text-processing capabilities that provided a word processor at a lower cost than specialist hardware (Bergin, 2006), and added enhanced capabilities such as WYSIWYG and proportional spacing (Rubinstein, 2006). These, and other innovations such as the spreadsheet (Grad, 2007; VisiCalc, 1984), in turn expanded the market for personal computers, a classical positive feedback process leading to exponential growth. ...
Article
Five decades ago advances in integrated circuits and time-sharing operating systems made interactive use of computers economically feasible. Visions of man-computer symbiosis and the augmentation of man’s intellect became realistic research objectives. The initial focus was on facilitating interactivity through improved interface technology, and supporting its application through good practices based on experience and psychological principles. Within a decade technology advances made low cost personal computers commonly available in the home, office and industry, and these were rapidly enhanced with software that made them attractive in a wide range of applications from games to office automation and industrial process control. Within three decades the Internet enabled human–computer interaction to extend across local, national and international networks, and, within four, smartphones and tablets had made access to computers and networks almost ubiquitous to any person, at any time and any place. Banking, commerce, institutional and company operations, utility and government infrastructures, took advantage of, and became dependent on, interactive computer and networking capabilities to such an extent that they have now been assimilated in our society and are taken for granted. This hyperconnectivity has been a major economic driver in the current millennium, but it has also raised new problems as malevolent users anywhere in the world have become able to access and interfere with critical personal, commercial and national resources. A major issue for human–computer studies now is to maintain and enhance functionality, usability and likeability for legitimate users whilst protecting them from malevolent users. Understanding the issues involved requires a far broader consideration of socio-economic issues than was required five decades ago. This article reviews various models of the role of technology in human civilization that can provide insights into our current problématique.
... Before that, almost as a known fact, the term "personal computer" is considered by many to have been coined by Henry Edward Roberts (September 13, 1941 - April 1, 2010), the owner of Micro Instrumentation and Telemetry Systems. Popular Electronics ran a cover story on his Altair 8800 in January 1975 (56)(57)(58)(59), which predated the record in the May 1976 issue of Byte (55). ...
Article
Full-text available
The origins of the terminology of the humanities, social sciences, and natural sciences are still pending further discovery. Tracking down the earliest usage of a target term could provide an insightful and compelling argument for a rigorous historical account, and finally help us penetrate to the essence of reality. However, many empirical theories extrapolated from a sole information source turn out to be de facto knowledge illusions. Therefore, retaining a clear sense of the pros and cons of any retrospective information source is a necessary prerequisite for such scientific efforts. (Citation: Zhiwen Hu, Jian Zhang, Yongfeng Huang, Xun Wang. Origins of Terminology Pending Further Discovery, Science, 2019, https://www.science.org/doi/10.1126/science.291.5501.39b)
... In a period when the usefulness of PCs for home users was still not entirely clear, word processing programs hinted at a tangible function for these new machines, i.e. the letter and text writing that had been done on traditional typewriters, or even by hand, for decades. The first word processing program for personal computers, called Electric Pencil, was released in 1976 and was the harbinger of a long series of other word processing software (Bergin, 2006). As we saw in Chapter 2, Microsoft also developed a word processing system, simply called Word, whose first version was launched in 1983 but became a standard program on all personal computers only in the late 1980s, following the dissemination of a new version optimized for the Windows operating system. ...
... Urdu, being a subset of the Arabic language, has been of great interest to researchers and scholars since its development, and many problems encompassing its various facets have been, and are still being, addressed. Today, in this technological era where office automation [5] [6] has caused a stir and entire office workloads fit into a single device, people who have a command of the Urdu language prefer text editors in their native language, as this can enhance their productivity in their routine work in their respective disciplines. Many efforts were made in this regard and companies came out with their proprietary solutions. ...
... Barnaby and Fox continued to improve WordStar through several interim versions. A major innovation was the development of the Mail Merge program, which allowed the insertion of names and addresses into a WordStar document from a separate file [5] [6]. ...
Article
Full-text available
Transliteration and word processing in Urdu have come under the limelight in the last decade. Although there has been an immense addition of word processing features in contemporary word processors for English, as far as Urdu word processing is concerned, researchers have mostly focused on single-aspect solutions or optimizations of Urdu language processing problems while, regrettably, giving very little significance to their integration as one single text editing solution. Integrated solutions that fulfil modern Urdu content editing needs are currently very few and feeble. Because there is no compact word processing solution that addresses present-day Urdu word processing needs, there is a great need for an Urdu word processor that can satisfy the contemporary text editing needs of people who can read and write the Urdu language. This paper presents features and requirements for a state-of-the-art Unicode-based Urdu word processor, described as per present-day necessities. It also provides a comparative analysis of present-day word processing solutions, their blemishes, and their strengths over one another, and describes what more can be done in this area to promote Urdu word processing. KEYWORDS: Urdu word processing, Natural Language Processing, Guidelines for Urdu word editor, Urdu Editor.