Communications of the ACM

Published by the Association for Computing Machinery

Online ISSN: 1557-7317 · Print ISSN: 0001-0782

Articles


Figure 1. Smooth body with good joint connections.  
Figure 2. PAR architecture.  
Figure 4. Scene from Jack's MOOse Lodge.  
Figure 5. Virtual trainer for military checkpoints.  
Animation Control for Real-Time Virtual Humans

September 1999 · 241 Reads

The article focuses on animation control for real-time virtual humans. The computation speed and control methods needed to portray 3D virtual humans suitable for interactive applications have improved dramatically in recent years. Real-time virtual humans show increasingly complex features along the dimensions of appearance, function, time, autonomy, and individuality. The virtual human architecture that researchers have been developing at the University of Pennsylvania is representative of an emerging generation of such architectures; it includes low-level motor skills, a mid-level parallel automata controller, and a high-level conceptual representation for driving virtual humans through complex tasks. The architecture, called Jack, provides a level of abstraction generic enough to encompass natural-language instruction representation as well as direct links from those instructions to animation control. Building models of virtual humans involves application-dependent notions of fidelity.
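As a rough illustration of the mid-level layer described above, the sketch below steps several independent motor-skill automata in parallel, one tick per animation frame. The skill names, states, and events are hypothetical and are not taken from the Jack/PAR implementation.

```python
# Illustrative sketch only: a minimal "parallel automata" controller in the
# spirit of the mid-level layer described above. Skill names, states, and
# transitions are invented, not taken from the Jack/PAR system.

class Automaton:
    def __init__(self, name, transitions, state="idle"):
        self.name = name
        self.transitions = transitions  # {(state, event): next_state}
        self.state = state

    def step(self, event):
        # Stay in the current state if the event is not handled.
        self.state = self.transitions.get((self.state, event), self.state)

class ParallelController:
    """Runs several motor-skill automata side by side, one tick per frame."""
    def __init__(self, automata):
        self.automata = automata

    def tick(self, events):
        for a in self.automata:
            a.step(events.get(a.name))
        return {a.name: a.state for a in self.automata}

walk = Automaton("walk", {("idle", "go"): "walking", ("walking", "stop"): "idle"})
reach = Automaton("reach", {("idle", "grasp"): "reaching", ("reaching", "done"): "idle"})
ctrl = ParallelController([walk, reach])

print(ctrl.tick({"walk": "go"}))       # walking starts while reach stays idle
print(ctrl.tick({"reach": "grasp"}))   # both skills now progress in parallel
```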

Unifying Biological Image Formats with HDF5

October 2009 · 132 Reads

The biosciences need an image format capable of high performance and long-term maintenance. Is HDF5 the answer?

Brain-Computer Interfaces for Communication and Control

May 2011 · 1,190 Reads

The brain's electrical signals enable people without muscle control to physically interact with the world.

Reputation Systems for Open Collaboration

August 2011 · 143 Reads

Content creation used to be an activity pursued either individually or in closed circles of collaborators. Books, encyclopedias, and map collections had either a single author or a group of authors who knew each other and worked together; it was simply too difficult to coordinate the work of large, geographically dispersed groups of people when the main means of communication were letters or the telephone. The advent of the Internet has changed all this: it is now possible for millions of people from all around the world to collaborate. The first open-collaboration systems, wikis, focused on text content; the range of content that can be created collaboratively has since expanded to include, for instance, video editing (e.g., MetaVid [5]), documents (e.g., Google Docs, ZOHO), architectural sketching (e.g., Sketchup), and geographical maps (e.g., OpenStreetMaps [10], Map Maker). Open collaboration carries immense promise, as shown by the success of Wikipedia, but it also poses challenges to both content creators and content consumers. At the content-creation end, contributors may be of varying ability and knowledge, and collaborative systems open to all will inevitably be subjected to spam, vandalism, and attempts to influence the information. How can systems be built so that constructive interaction is encouraged and the consequences of vandalism and spam are minimized? How can the construction of high-quality information be facilitated?

(Computer) Vision Without Sight

July 2012 · 618 Reads

Computer vision holds great promise for helping persons with blindness or visual impairments (VI) to interpret and explore the visual world. To this end, it is worthwhile to assess the situation critically by understanding the actual needs of the VI population and which of these needs might be addressed by computer vision. This article reviews the types of assistive technology application areas that have already been developed for VI, and the possible roles that computer vision can play in facilitating these applications. We discuss how appropriate user interfaces are designed to translate the output of computer vision algorithms into information that the user can quickly and safely act upon, and how system-level characteristics affect the overall usability of an assistive technology. Finally, we conclude by highlighting a few novel and intriguing areas of application of computer vision to assistive technology.

Figure 2. Jack reaching for the connectors on top of the power supply.  
Figure 3. Warning spheres indicating contaminated hydraulic disconnects.  
Humans for validating maintenance procedures

August 2002 · 82 Reads

They can be sent to check the human aspects of complex physical systems by simulating assembly, repair, and maintenance tasks in a 3D virtual environment.

Signal processing in SETI

December 1985 · 98 Reads

It is believed that the Galaxy might contain ten billion potential life sites. In view of the physical inaccessibility of extraterrestrial life on account of the vast distances involved, a logical first step in a search for extraterrestrial intelligence (SETI) appears to be an attempt to detect signals already being radiated. The characteristics of the signals to be expected are discussed together with the search strategy of a NASA program. It is pointed out that all presently planned searches will use existing radio-astronomy antennas. If no extraterrestrial intelligence signals are discovered, society will have to decide whether SETI justifies a dedicated facility of much greater collecting area. Attention is given to a multichannel spectrum analyzer, CW signal detection, pulse detection, the pattern detector, and details of SETI system operation.

Can Parallel Algorithms Enhance Serial Implementation? (Extended Abstract)
The broad thesis presented suggests that the serial emulation of a parallel algorithm has the potential advantage of running on a serial machine faster than a standard serial algorithm for the same problem. It is too early to reach definite conclusions regarding the significance of this thesis. However, using some imagination, the validity of the thesis and some arguments supporting it may lead to several far-reaching outcomes: (1) Reliance on “predictability of reference” in the design of computer systems will increase. (2) Parallel algorithms will be taught as part of the standard computer science and engineering undergraduate curricula, irrespective of whether (or when) parallel processing becomes ubiquitous in the general-purpose computing world. (3) A strategic agenda for high-performance parallel computing: a multistage agenda which at no stage compromises the user-friendliness of the programmer's model, and thereby potentially alleviates the so-called “parallel software crisis”. Stimulating a debate is one goal of our presentation.


Expert Simulation For On-line Scheduling
The state-of-the-art in manufacturing has moved toward flexibility, automation and integration. The efforts spent on bringing computer-integrated manufacturing (CIM) to plant floors have been motivated by the overall thrust to increase the speed of new products to market. One of the links in CIM is plant floor scheduling, which is concerned with efficiently orchestrating the plant floor to meet the customer demand and responding quickly to changes on the plant floor and changes in customer demand. The Expert System Scheduler (ESS) has been developed to address this link in CIM. The scheduler utilizes real-time plant information to generate plant floor schedules which honor the factory resource constraints while taking advantage of the flexibility of its components. The scheduler uses heuristics developed by an experienced human factory scheduler for most of the decisions involved in scheduling. The expertise of the human scheduler has been built into the computerized version using the expert system approach of the discipline of artificial intelligence (AI). Deterministic simulation concepts have been used to develop the schedule and determine the decision points. As such, simulation modeling and AI techniques share many concepts, and the two disciplines can be used synergistically. Examples of some common concepts are the ability of entities to carry attributes and change dynamically (simulation—entities/attributes or transaction/parameters versus AI—frames/slots); the ability to control the flow of entities through a model of the system (simulation—conditional probabilities versus AI—production rules); and the ability to change the model based upon state variables (simulation—language constructs based on variables versus AI—pattern-invoked programs). Shannon [6] highlights similarities and differences between conventional simulation and an AI approach. Kusiak and Chen [3] report increasing use of simulation in development of expert systems. ESS uses the synergy between AI techniques and simulation modeling to generate schedules for plant floors. Advanced concepts from each of the two areas are used in this endeavor. The expert system has been developed using frames and object-oriented coding which provides knowledge representation flexibility. The concept of “backward” simulation, similar to the AI concept of backward chaining, is used to construct the events in the schedule. Some portions of the schedule are constructed using forward or conventional simulation. The implementation of expert systems and simulation concepts is intertwined in ESS. However, the application of the concepts from these two areas will be treated separately for ease of presentation. We will first discuss the expert system approach and provide a flavor of the heuristics. The concept of backward simulation and the motive behind it will then be explored along with some details of the implementation and the plant floor where the scheduler is currently being used. We will then highlight some advantages and disadvantages of using the expert simulation approach for scheduling, and, finally, the synergetic relationship between expert systems and simulation.
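The backward-simulation idea mentioned above can be made concrete with a toy example: start from a due date and walk each operation's duration backward to obtain latest start times. The job data and routing below are invented for illustration; ESS itself applies expert heuristics to real-time plant-floor data.

```python
# Toy sketch of "backward" schedule construction: starting from a due date and
# walking operation durations backward to find latest start times. The job and
# routing data are invented; they are not from ESS.

from datetime import datetime, timedelta

def backward_schedule(operations, due):
    """operations: list of (name, duration_hours) in processing order."""
    schedule = []
    finish = due
    for name, hours in reversed(operations):
        start = finish - timedelta(hours=hours)
        schedule.append((name, start, finish))
        finish = start                    # the previous operation must end here
    return list(reversed(schedule))

ops = [("machining", 4), ("assembly", 3), ("inspection", 1)]
for name, start, finish in backward_schedule(ops, datetime(2024, 1, 10, 17, 0)):
    print(f"{name:10s} {start:%m-%d %H:%M} -> {finish:%m-%d %H:%M}")
```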

Design Maintenance Systems

January 1992 · 37 Reads

Summary form only given. Conventional software maintenance is difficult because it is often performed using only source code as a source of design information. Rediscovery of essential design information and assumptions is difficult, if not impossible, from just the code. The author believes that emphasis should rather be placed on capturing and maintaining the design of a program, with source code being derived directly from the design. This approach ensures that the design is retained, rather than the source code. A design maintenance system captures the specification of a program, how the implementation is derived from the specification, and the justification showing why the implementation satisfies the specification, and allows that design to be updated. To make specifications, derivations and justifications concrete, he chose to build a DMS in a formal transformational context. He defines maintenance deltas, which specify a desired change to a program. Delta propagation procedures revise and rearrange the derivation and design histories to remove useless parts. He shows how these procedures integrate into a transformation system to effect repair on the cauterised design history. The end result of the process is another complete design history, from which a program may be generated, and to which another maintenance delta may be applied.

Figure 2: Outline structure for Masters course in Large-Scale Complex IT Systems
Large-scale Complex IT Systems

September 2011 · 1,462 Reads

This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challenges and issues in the development of large-scale complex, software-intensive systems. Central to this is the notion that we cannot separate software from the socio-technical environment in which it is used.

Abstracting Abstract Machines: A Systematic Approach to Higher-Order Program Analysis

May 2011 · 254 Reads

Predictive models are fundamental to engineering reliable software systems. However, designing conservative, computable approximations for the behavior of programs (static analyses) remains a difficult and error-prone process for modern high-level programming languages. What analysis designers need is a principled method for navigating the gap between semantics and analytic models: analysis designers need a method that tames the interaction of complex language features such as higher-order functions, recursion, exceptions, continuations, objects and dynamic allocation. We contribute a systematic approach to program analysis that yields novel and transparently sound static analyses. Our approach relies on existing derivational techniques to transform high-level language semantics into low-level deterministic state-transition systems (with potentially infinite state spaces). We then perform a series of simple machine refactorings to obtain a sound, computable approximation, which takes the form of a non-deterministic state-transition system with a finite state space. The approach scales up uniformly to enable program analysis of realistic language features, including higher-order functions, tail calls, conditionals, side effects, exceptions, first-class continuations, and even garbage collection.
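As a toy illustration of the general move from an infinite-state concrete machine to a finite, sound abstraction (and not the authors' actual machine-refactoring construction), the sketch below abstracts an unbounded counter to the domain {0, 1, "many"} so that the abstract transition system can be explored exhaustively.

```python
# Toy illustration only: abstracting an infinite-state transition system into a
# finite one that soundly over-approximates it. The "program" just keeps
# incrementing a counter; the abstraction collapses all counts >= 2 to "many".

def concrete_step(n):          # concrete states: natural numbers (unbounded)
    return n + 1

def alpha(n):                  # abstraction map from concrete to abstract states
    return n if n < 2 else "many"

def abstract_step(a):          # sound over-approximation of concrete_step
    return {0: 1, 1: "many", "many": "many"}[a]

# Exhaustively explore the abstract state space from the abstract initial state.
seen, frontier = set(), [alpha(0)]
while frontier:
    s = frontier.pop()
    if s not in seen:
        seen.add(s)
        frontier.append(abstract_step(s))

print(seen)   # {0, 1, 'many'}: finite, and it covers every concrete run
```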

Toward Real-Time Performance Benchmarks for Ada

September 1986 · 22 Reads

Benchmarks are developed to measure the Ada notion of time, the Ada features believed important to real-time performance, and other time-related features that are not part of the language, but are part of the run-time system; these benchmarks are then applied to the language and run-time system, and the results evaluated.

Figure 1: Onion routing.
Figure 5: Example of a software-exploit attack 
Figure 6: The Dining Cryptographers approach to anonymous communication. Alice reveals a 1-bit secret to the group, but neither Bob nor Charlie learn which of the other two members sent this message.
Figure 7: Why DC-nets is hard to scale in practice: (1) worst-case N × N coin-sharing matrix; (2) network churn requires rounds to start over; (3) malicious members can anonymously jam the group.
Figure 10: WiNon: using virtual machines to harden anonymity systems against software exploits, staining, and self-identification
Seeking Anonymity in an Internet Panopticon

December 2013 · 366 Reads

Obtaining and maintaining anonymity on the Internet is challenging. The state of the art in deployed tools, such as Tor, uses onion routing (OR) to relay encrypted connections on a detour passing through randomly chosen relays scattered around the Internet. Unfortunately, OR is known to be vulnerable at least in principle to several classes of attacks for which no solution is known or believed to be forthcoming soon. Current approaches to anonymity also appear unable to offer accurate, principled measurement of the level or quality of anonymity a user might obtain. Toward this end, we offer a high-level view of the Dissent project, the first systematic effort to build a practical anonymity system based purely on foundations that offer measurable and formally provable anonymity properties. Dissent builds on two key pre-existing primitives - verifiable shuffles and dining cryptographers - but for the first time shows how to scale such techniques to offer measurable anonymity guarantees to thousands of participants. Further, Dissent represents the first anonymity system designed from the ground up to incorporate some systematic countermeasure for each of the major classes of known vulnerabilities in existing approaches, including global traffic analysis, active attacks, and intersection attacks. Finally, because no anonymity protocol alone can address risks such as software exploits or accidental self-identification, we introduce WiNon, an experimental operating system architecture to harden the uses of anonymity tools such as Tor and Dissent against such attacks.
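The dining-cryptographers primitive shown in Figure 6 can be sketched in a few lines for three members and a single message bit: every pair shares a random coin, each member announces the XOR of its coins (the sender also XORs in the message), and the combined XOR reveals the bit without revealing who sent it. The party names and single-round setting below are illustrative; Dissent's actual protocol adds verifiable shuffles, accountability, and the scaling machinery discussed in the article.

```python
# Minimal DC-net sketch for three members and a 1-bit anonymous message,
# matching the scenario of Figure 6. Not Dissent's full protocol.

import secrets

members = ["Alice", "Bob", "Charlie"]
secret_bit, sender = 1, "Alice"          # Alice wants to reveal a 1 anonymously

# Each pair of members shares one random coin (a pairwise shared secret bit).
coins = {frozenset(p): secrets.randbits(1)
         for p in [("Alice", "Bob"), ("Alice", "Charlie"), ("Bob", "Charlie")]}

def announcement(member):
    # XOR of the member's shared coins, plus the message bit if it is the sender.
    bit = 0
    for pair, coin in coins.items():
        if member in pair:
            bit ^= coin
    if member == sender:
        bit ^= secret_bit
    return bit

bits = {m: announcement(m) for m in members}
combined = 0
for b in bits.values():
    combined ^= b

print(bits)       # individual announcements look random and hide the sender
print(combined)   # every shared coin cancels, leaving the message bit: 1
```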

Computing polynomial resultants: Bezout's determinant vs. Collins' reduced P.R.S. algorithm

February 1969 · 33 Reads

Algorithms for computing the resultant of two polynomials in several variables, a key repetitive step of computation in solving systems of polynomial equations by elimination, are studied. Determining the best algorithm for computer implementation depends upon the extent to which extraneous factors are introduced, the extent of propagation of errors caused by truncation of real coefficients, memory requirements, and computing speed. Preliminary considerations narrow the choice of the best algorithm to Bezout's determinant and Collins' reduced polynomial remainder sequence (p.r.s.) algorithm. Detailed tests performed on sample problems conclusively show that Bezout's determinant is superior in all respects except for univariate polynomials, in which case Collins' reduced p.r.s. algorithm is somewhat faster. In particular, Bezout's determinant proves to be strikingly superior in numerical accuracy, displaying excellent stability with regard to round-off errors. Results of tests are reported in detail.
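For readers unfamiliar with the quantity being compared, the sketch below evaluates a resultant via the determinant of the Sylvester matrix, a textbook formulation closely related to (but not the same as) the Bezout-determinant and reduced-p.r.s. algorithms studied in the article. The example polynomials are arbitrary.

```python
# Sketch: resultant of two univariate polynomials via the Sylvester matrix
# determinant. This is the textbook formulation, not Bezout's determinant or
# Collins' reduced p.r.s. algorithm.

from sympy import Matrix

def sylvester_resultant(f_coeffs, g_coeffs):
    """Coefficient lists are given highest degree first."""
    m, n = len(f_coeffs) - 1, len(g_coeffs) - 1
    size = m + n
    rows = []
    for i in range(n):                       # n shifted copies of f
        rows.append([0] * i + list(f_coeffs) + [0] * (size - m - 1 - i))
    for i in range(m):                       # m shifted copies of g
        rows.append([0] * i + list(g_coeffs) + [0] * (size - n - 1 - i))
    return Matrix(rows).det()                # exact integer/rational arithmetic

# res(x^2 - 1, x - 1) = 0 because the two polynomials share the root x = 1.
print(sylvester_resultant([1, 0, -1], [1, -1]))   # -> 0
```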

Fig. 1. Classification performance of Bot or Not? for four different classifiers. The classification accuracy is computed by 10-fold cross validation and measured by the area under the receiver operating characteristic curve (AUROC). The best score, obtained by Random Forest, is 95%.
Fig. 2. Subset of user features that best discriminate social bots from humans. Bots retweet more than humans and have longer user names, while they produce fewer tweets, replies and mentions, and they are retweeted less than humans. Bot accounts also tend to be more recent.
Fig. 3. Web interface of Bot or Not? (truthy.indiana.edu/botornot). The panels show the likelihood that the inspected accounts are social bots along with individual scores according to six feature classes. Left: The Twitter account of one of the authors is identified as likely operated by a human. Right: A popular social bot is correctly assigned a high bot score.
Fig. 4. Visualizations provided by Bot or Not?. (A) Part-of-speech tag proportions. (B) Language distribution of contacts. (C) Network of co-occurring hashtags. (D) Emotion, happiness and arousal-dominance-valence sentiment scores. (E) Temporal patterns of content consumption and production.
The Rise of Social Bots

July 2014 · 7,489 Reads

The Turing test asked whether one could recognize the behavior of a human from that of a computer algorithm. Today this question has suddenly become very relevant in the context of social media, where text constraints limit the expressive power of humans, and real incentives abound to develop human-mimicking software agents called social bots. These elusive entities wildly populate social media ecosystems, often going unnoticed among the population of real people. Bots can be benign or harmful, aiming at persuading, smearing, or deceiving. Here we discuss the characteristics of modern, sophisticated social bots, and how their presence can endanger online ecosystems and our society. We then discuss current efforts aimed at detection of social bots in Twitter. Characteristics related to content, network, sentiment, and temporal patterns of activity are imitated by bots but at the same time can help discriminate synthetic behaviors from human ones, yielding signatures of engineered social tampering.
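The evaluation described in Figure 1 (Random Forest, 10-fold cross-validation, AUROC) follows a standard recipe, sketched below with scikit-learn on placeholder data. The feature matrix and labels here are synthetic stand-ins, not the Bot or Not? feature set.

```python
# Sketch of the evaluation protocol from Figure 1: 10-fold cross-validation of
# a Random Forest, scored by AUROC. The features and labels are synthetic
# placeholders, not the real Bot or Not? data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))        # placeholder features (retweet rate, account age, ...)
y = rng.integers(0, 2, size=1000)      # 1 = social bot, 0 = human (placeholder labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(f"mean AUROC over 10 folds: {scores.mean():.3f}")
```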

Household Demand for Broadband Internet Service

August 2010 · 814 Reads

As part of the Federal Communications Commission (“FCC”) National Broadband Report to Congress, we have been asked to conduct a survey to help determine consumer valuations of different aspects of broadband Internet service. This report details our methodology, sample, and preliminary results. We do not provide policy recommendations. This draft report uses data obtained from a nationwide survey during late December 2009 and early January 2010 to estimate household demand for broadband Internet service. The report combines household data, obtained from choices in a real market and an experimental setting, with a discrete-choice model to estimate the marginal willingness-to-pay (WTP) for improvements in eight Internet service characteristics.
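In discrete-choice studies of this kind, marginal willingness-to-pay is typically read off as a ratio of estimated coefficients; a standard linear-in-price logit version is sketched below. The notation is generic, and the report's exact specification may differ.

```latex
% Generic linear-in-price discrete-choice specification; the report's exact
% model and variable definitions may differ.
\[
U_{ij} = \alpha\, p_j + \sum_k \beta_k x_{jk} + \varepsilon_{ij},
\qquad
\mathrm{MWTP}_k = -\,\frac{\beta_k}{\alpha}
\]
```

Here $p_j$ is the plan's price, $x_{jk}$ its service characteristics, and the WTP for a one-unit improvement in characteristic $k$ is the price change that leaves utility unchanged.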

On the Categorization of Scientific Citation Profiles in Computer Science

March 2015 · 305 Reads

A common consensus in the literature is that the citation profile of published articles in general follows a universal pattern - an initial growth in the number of citations within the first two to three years after publication followed by a steady peak of one to two years and then a final decline over the rest of the lifetime of the article. This observation has long been the underlying heuristic in determining major bibliometric factors such as the quality of a publication, the growth of scientific communities, impact factor of publication venues etc. In this paper, we gather and analyze a massive dataset of scientific papers from the computer science domain and notice that the citation count of the articles over the years follows a remarkably diverse set of patterns - a profile with an initial peak (PeakInit), with distinct multiple peaks (PeakMul), with a peak late in time (PeakLate), that is monotonically decreasing (MonDec), that is monotonically increasing (MonIncr) and that can not be categorized into any of the above (Oth). We conduct a thorough experiment to investigate several important characteristics of these categories such as how individual categories attract citations, how the categorization is influenced by the year and the venue of publication of papers, how each category is affected by self-citations, the stability of the categories over time, and how much each of these categories contribute to the core of the network. Further, we show that the traditional preferential attachment models fail to explain these citation profiles. Therefore, we propose a novel dynamic growth model that takes both the preferential attachment and the aging factor into account in order to replicate the real-world behavior of various citation profiles. We believe that this paper opens the scope for a serious re-investigation of the existing bibliometric indices for scientific research.
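The proposed growth model combines preferential attachment with an aging factor; a minimal simulation in that spirit is sketched below. The specific attractiveness function, (citations + 1) * exp(-age/tau), and all parameters are illustrative assumptions rather than the paper's fitted model.

```python
# Sketch of a citation growth model combining preferential attachment with
# aging. The attractiveness function and parameters are illustrative, not the
# paper's fitted model.

import math, random

random.seed(1)
tau, papers = 5.0, []              # tau controls how quickly papers "age out"

for year in range(30):
    papers.append({"born": year, "cites": 0})        # one new paper per year
    for _ in range(20):                               # 20 new citations per year
        weights = [(p["cites"] + 1) * math.exp(-(year - p["born"]) / tau)
                   for p in papers]                   # rich-get-richer, damped by age
        target = random.choices(papers, weights=weights)[0]
        target["cites"] += 1

top = sorted(papers, key=lambda p: p["cites"], reverse=True)[:3]
print([(p["born"], p["cites"]) for p in top])   # birth years and counts of the most-cited papers
```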

Recursive computation of certain derivatives - A study of error propagation

February 1970 · 8 Reads

Error propagation in linear first-order difference equations is studied in order to improve the accuracy of the recursive computation of derivatives.

The role of strong and weak ties in Facebook: a community structure perspective

March 2012 · 1,107 Reads

In this paper we report our findings on the analysis of two large datasets representing the friendship structure of the well-known Facebook network. In particular, we discuss a quantitative assessment of Granovetter's strength-of-weak-ties theory, considering the problem from the perspective of the community structure of the network. We describe our findings, which provide some evidence that this theory also holds for a large-scale online social network such as Facebook.

The Process Group Approach to Reliable Distributed Computing. Revision

August 1991 · 309 Reads

The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, more fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

Onward to Petaflops Computing

February 1997 · 42 Reads

With the recent demonstration of a computing rate of one Tflop/s at Sandia National Lab, one might ask what lies ahead for high-end computing. The next major milestone is a sustained rate of one Pflop/s (also written one petaflops, or 10^15 floating-point operations per second). It should be emphasized that we could just as well use the term "peta-ops," since it appears that large scientific systems will be required to perform intensive integer and logical computation in addition to floating-point operations, and completely non-floating-point applications are likely to be important as well. In addition to prodigiously high computational performance, such systems must of necessity feature very large main memories, between ten Tbyte (10^13 bytes) and one Pbyte (10^15 bytes) depending on the application, as well as commensurate I/O bandwidth and huge mass storage facilities. The current consensus of scientists who have performed initial studies in this field is that "affordable" petaflops systems may be feasible by the year 2010, assuming that certain key technologies continue to progress at current rates. A sustained petaflops computing capability, however, is a daunting challenge; it appears significantly more challenging from today's state of the art than achieving one Tflop/s was from the level of one Gflop/s about 12 years ago. Challenges are faced in the arenas of device technology, system architecture, system software, algorithms, and applications. This talk gives an overview of some of these challenges and describes some of the recent initiatives to address them.

Efficient System-Enforced Deterministic Parallelism

May 2010 · 219 Reads

Deterministic execution offers many benefits for debugging, fault tolerance, and security. Running parallel programs deterministically is usually difficult and costly, however - especially if we desire system-enforced determinism, ensuring precise repeatability of arbitrarily buggy or malicious software. Determinator is a novel operating system that enforces determinism on both multithreaded and multi-process computations. Determinator's kernel provides only single-threaded, "shared-nothing" address spaces interacting via deterministic synchronization. An untrusted user-level runtime uses distributed computing techniques to emulate familiar abstractions such as Unix processes, file systems, and shared memory multithreading. The system runs parallel applications deterministically both on multicore PCs and across nodes in a cluster. Coarse-grained parallel benchmarks perform and scale comparably to - sometimes better than - conventional systems, though determinism is costly for fine-grained parallel applications.

Syntax-Directed Documentation For PL360

December 1970 · 23 Reads

PL360 is a phrase-structured programming language which provides the facilities of a symbolic machine language for the IBM 360 computers. An automatic process, syntax-directed documentation, is described which acquires programming documentation through the syntactical analysis of a program, followed by the interrogation of the originating programmer. This documentation can be dispensed through reports of file query replies when other programmers later need to know the program structure and its details. A key principle of the programming documentation process is that it is managed solely on the basis of the syntax of programs.

Lessons learned from modeling the dynamics of software development

January 2006 · 126 Reads

"August 1988." "This paper is a companion piece to CISR WP No. 163, Modeling the dynamics of software project management."

More Natural Programming Languages and Environments

October 2006 · 246 Reads

Over the last six years, we have been working to create programming languages and environments that are more natural, by which we mean closer to the way people think about their tasks. The goal is to make it possible for people to express their ideas in the same way they think about them. To achieve this, we performed various studies about how people think about programming tasks, and then used this knowledge to develop a new programming language and environment called HANDS. This chapter provides an overview of the goals and background for the Natural Programming research, the results of some of our user studies, and the highlights of the language design.

PageRank: Standing on the Shoulders of Giants

February 2010 · 175 Reads

PageRank is a Web page ranking technique that has been a fundamental ingredient in the development and success of the Google search engine. The method is still one of the many signals that Google uses to determine which pages are most important. The main idea behind PageRank is to determine the importance of a Web page in terms of the importance assigned to the pages hyperlinking to it. In fact, this thesis is not new, and has been previously successfully exploited in different contexts. We review the PageRank method and link it to some renowned previous techniques that we have found in the fields of Web information retrieval, bibliometrics, sociometry, and econometrics.
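The core idea (a page's importance is the importance of the pages pointing to it, redistributed along links) is usually computed by power iteration; a minimal sketch on a made-up four-page graph follows, with the commonly used damping factor of 0.85.

```python
# Minimal PageRank power iteration on a toy four-page link graph. The graph is
# invented for illustration; real rankings also handle dangling pages and are
# computed at Web scale.

links = {            # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
n, d = len(pages), 0.85
rank = {p: 1.0 / n for p in pages}

for _ in range(50):                      # power iteration until (near) convergence
    new = {p: (1 - d) / n for p in pages}
    for p, outs in links.items():
        share = rank[p] / len(outs)      # p passes its rank evenly to its outlinks
        for q in outs:
            new[q] += d * share
    rank = new

for p, r in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(p, round(r, 3))                # pages with important in-links rank highest
```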

Privacy Implications of Health Information Seeking on the Web

March 2015 · 335 Reads

This article investigates privacy risks to those visiting health-related web pages. The population of pages analyzed is derived from the 50 top search results for 1,986 common diseases. This yielded a total population of 80,124 unique pages which were analyzed for the presence of third-party HTTP requests. 91% of pages were found to make requests to third parties. Investigation of URIs revealed that 70% of HTTP Referer strings contained information exposing specific conditions, treatments, and diseases. This presents a risk to users in the form of personal identification and blind discrimination. An examination of extant government and corporate policies reveals that users are insufficiently protected from such risks.
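The two measurements reported above (third-party requests and condition-revealing Referer strings) can be sketched as follows; the URLs are made up, and the host comparison is deliberately crude (a real study would compare registered domains and inspect live HTTP traffic).

```python
# Sketch of the two checks described above: (1) does a health page load
# resources from third-party hosts, and (2) would the Referer it sends them
# expose the condition being researched? All URLs are invented examples.

from urllib.parse import urlparse

page = "https://example-health-site.org/conditions/herpes/treatment"
requests = [
    "https://example-health-site.org/static/style.css",
    "https://ads.tracker-network.com/pixel.gif",
    "https://cdn.analytics-co.net/collect.js",
]

page_host = urlparse(page).hostname
condition_leaked = "herpes" in urlparse(page).path.lower()   # the Referer is the page URL

for url in requests:
    host = urlparse(url).hostname
    third_party = host != page_host      # crude: real studies compare registered domains
    if third_party:
        print(f"{host}: third-party request; condition leaked via Referer = {condition_leaked}")
```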

How Reuse Influences Productivity in Object-Oriented Systems

November 1997 · 54 Reads

Although reuse is assumed to be especially valuable in building high-quality software as well as in object-oriented (OO) development, limited empirical evidence connects reuse with productivity and quality gains. The authors' eight-system study begins to define such benefits in an OO framework, most notably in terms of reduced defect density and rework as well as increased productivity.

Table 3: The merger of a computer fraud script with the twenty-five Situational Crime Prevention techniques [8]
Overcoming the insider

January 2006 · 381 Reads

Information security has become increasingly important for organizations, given their dependence on ICT. Not surprisingly, therefore, the external threats posed by hackers and viruses have received extensive coverage in the mass media. Yet numerous security surveys also point to the 'insider' threat of employee computer crime. In 2006, for example, the Global Security Survey by Deloitte reported that 28% of respondent organizations encountered considerable internal computer fraud. This figure may not appear high, but the impact of crime perpetrated by insiders can be profound. Donn Parker argues that 'cyber-criminals' should be considered in terms of their criminal attributes, which include skills, knowledge, resources, access and motives (SKRAM). It is as a consequence of such attributes, acquired within the organization, that employees can pose a major threat. Hence, employees use skills gained through their legitimate work duties for illegitimate gain. Knowledge of security vulnerabilities can be exploited, utilising resources and access provided by companies. It may even be the case that the motive is created by the organization in the form of employee disgruntlement. These criminal attributes aid offenders in the pursuit of their criminal acts, which in the extreme can bring down an organization. In the main, companies have addressed the insider threat through a workforce which is made aware of its information security responsibilities and acts accordingly. Thus, security policies and complementary education and awareness programmes are now commonplace for organizations. That said, little progress has been made in understanding the insider threat from an offender's perspective. As organizations attempt to grapple with the behavior of dishonest employees, criminology potentially offers a body of knowledge for addressing this problem. It is suggested that Situational Crime Prevention (SCP), a relative newcomer to criminology, can help enhance initiatives aimed at addressing the insider threat. In this article, we discuss how recent criminological developments that focus on the criminal act represent a departure from traditional criminology, which examines the causes of criminality. As part of these recent developments we discuss SCP. After defining this approach, we illustrate how it can inform and enhance information security practices. In recent years, a number of criminologists have criticised their discipline for assuming that the task of explaining the causes of criminality is the same as explaining the criminal act. Simply to explain how people develop a criminal disposition is only half the equation. What is also required is an explanation of how crimes are perpetrated. Criminological approaches which focus on the criminal act would appear to offer more to information security practitioners than their dispositional counterparts. Accordingly, the SCP approach can offer additional tools for practitioners in their fight against insider computer crime.

Perceptual Intelligence

January 1999 · 119 Reads

The objects that surround us, such as desks, cars, shoes, and coats, are deaf and blind; this limits their ability to adapt to our needs and thus to be useful. We have therefore developed computer systems that can follow people's actions, recognizing their faces, gestures, and expressions. Using this technology we have begun to make “smart rooms” and “smart clothes” that can help people in day-to-day life without chaining them to keyboards, pointing devices, or special goggles.

Situation Room Analysis in the Information Technologies Market

January 1996 · 88 Reads

It would be interesting and scientifically rewarding to investigate the possibilities of designing a "sheltered" situation room for the Information Technologies and Telecommunications services and products market. The proposed framework might be employed during any phase of the life cycle of an IT&T service or product, i.e., from the early design phases up to its launch into the market; it aims to be utilised by the various actors involved in the IT&T market, such as industry (e.g., software developers, network infrastructure suppliers, horizontal service providers), policy makers, regulation, legislation and standardisation bodies, as well as the R&D community and end users. In this paper an approach is presented which builds on the notion of a situation room; the latter term is broadly used in the context of military operations and has specific semantic connotations. We deliberately exploit the term’s ‘past’ and propose an analytical scheme based on it, which aims to assist planning initiatives and decision making in the application domain of the Information Technologies and Telecommunications (henceforth: IT&T) market.

NASA's TReK Project: A Case Study in Using the Spiral Model of Software Development

February 1998 · 345 Reads

Software development projects face numerous challenges that threaten their successful completion. Whether it is not enough money, too little time, or a case of "requirements creep" that has turned into a full sprint, projects must meet these challenges or face possible disastrous consequences. A robust, yet flexible process model can provide a mechanism through which software development teams can meet these challenges head on and win. This article describes how the spiral model has been successfully tailored to a specific project and relates some notable results to date.

Predicting the Popularity of Online Content

December 2008 · 3,340 Reads

We present a method for accurately predicting the long-term popularity of online content from early measurements of user access. Using two content-sharing portals, YouTube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of YouTube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while YouTube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content will be prone to larger errors.
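The prediction rests on a strong log-linear relation between early and later popularity; the sketch below fits that relation on synthetic view counts and uses it to forecast a new item. The distributions and numbers are invented; the article measures real Digg and YouTube traces.

```python
# Sketch of the log-linear early-to-late popularity relation behind this line
# of work: fit log(views at day 30) against log(early views) on past items,
# then forecast a new one. All data here are synthetic.

import numpy as np

rng = np.random.default_rng(0)
early = rng.lognormal(mean=3.0, sigma=1.0, size=500)          # views after a few hours
late = early * rng.lognormal(mean=2.0, sigma=0.3, size=500)   # views after 30 days

slope, intercept = np.polyfit(np.log(early), np.log(late), 1)

new_item_early = 120.0
predicted_late = np.exp(intercept + slope * np.log(new_item_early))
print(f"log-log slope {slope:.2f}; predicted 30-day views: {predicted_late:.0f}")
```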

Increased security through open source

February 2008 · 295 Reads

The impact of open source software (OSS) on both the security and transparency of software systems is discussed. The security of a system is considered an objective measure of the number and severity of its vulnerabilities. Exposure combines security with the likelihood of attack, and risk combines exposure with the damage sustained by the attack. Open source software is software for which the corresponding source is available for inspection, use, modification, and redistribution by the users. Keeping source code closed prevents attackers from having easy access to information that may help them successfully launch an attack. Open source provides attackers with information to search for vulnerability bugs and potential buffer overflows, and thus increases the exposure of the system. In the worst case, openness of the design may reveal logical errors in the security design.

Information Systems and Organizational Change

February 1979 · 403 Reads

This paper discusses long-term change in organizations in relation to information systems. It reviews causes of social inertia, resistance and counterimplementation. It stresses the pluralistic nature of organizations. Tactics for managing change rely on incremental, facilitative approaches. These limit strategic change which requires coalition-building and careful attention to political mechanisms.

Private Information Retrieval

November 2010 · 51 Reads

In this chapter we turn our attention to the second main subject of the book, namely, private information retrieval. Recall that private information retrieval schemes are cryptographic protocols designed to safeguard the privacy of database users by allowing clients to retrieve records from replicated databases while completely hiding the identity of the retrieved records from the database owners.
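A classic construction that realizes this guarantee with two non-colluding replicas (due to Chor, Goldreich, Kushilevitz, and Sudan) can be sketched in a few lines: each server receives what looks like a random subset of indices and returns the XOR of those records, and the client combines the two answers to recover the record it wanted. One-bit records are assumed below for brevity.

```python
# Sketch of the classic two-server information-theoretic PIR scheme. Each
# replica sees only a random-looking subset of indices, yet the client
# recovers the record it wants. One-bit records, honest non-colluding servers.

import secrets

database = [0, 1, 1, 0, 1, 0, 0, 1]      # replicated at both servers
n, wanted = len(database), 5             # the client secretly wants index 5

# Client: a random subset S for server 1, and S toggled at `wanted` for server 2.
S1 = {i for i in range(n) if secrets.randbits(1)}
S2 = S1 ^ {wanted}                       # symmetric difference flips membership of index 5

def answer(subset):                      # each server XORs the requested records
    bit = 0
    for i in subset:
        bit ^= database[i]
    return bit

recovered = answer(S1) ^ answer(S2)      # everything except index `wanted` cancels
print(recovered == database[wanted])     # True
```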

Smoothed Analysis

September 2003 · 1,466 Reads

In smoothed analysis, one measures the complexity of algorithms assuming that their inputs are subject to small amounts of random noise. In an earlier work (Spielman and Teng, 2001), we introduced this analysis to explain the good practical behavior of the simplex algorithm. In this paper, we provide further motivation for the smoothed analysis of algorithms, and develop models of noise suitable for analyzing the behavior of discrete algorithms. We then consider the smoothed complexities of testing some simple graph properties in these models.
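For reference, the Gaussian-perturbation form of smoothed complexity from the earlier Spielman-Teng work can be written as below; the discrete noise models developed in this paper replace the Gaussian with perturbations suited to discrete inputs, so their exact definitions differ.

```latex
% Smoothed complexity of an algorithm A with running time T_A under Gaussian
% perturbations of magnitude sigma; the paper's discrete noise models differ
% in the choice of perturbation.
\[
C_A(n, \sigma) \;=\; \max_{\hat{x} \in [-1,1]^n}\;
\mathbb{E}_{g \sim \mathcal{N}(0,\, \sigma^2 I_n)}\!\left[\, T_A(\hat{x} + g) \,\right]
\]
```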

Law and disorder

August 2008 · 25 Reads

Sometimes life runs more smoothly when you stop trying to control it. Mark Buchanan goes with the flow

New features for CORBA 3.0

September 2000 · 121 Reads

Since its first publication in 1991, the CORBA Specification has provided abstractions for distributed programming that have served as the basis for a variety of distributed systems. Despite its original flexibility and applicability to various environments, however, CORBA has had to evolve to remain viable as a standard for distributed object-oriented applications. This article explains several new features that are being added to CORBA as part of its continuing evolution towards version 3.0. They are the Portable Object Adapter (POA), support for Asynchronous Method Invocation (AMI), and support for passing Objects By Value (OBV). These new features further extend the suitability of CORBA for the development of diverse distributed systems. Originally published in Communications of the ACM, Vol. 41, No. 10, October 1998.

Systematic Concurrent Object-Oriented Programming (Commun. ACM 36(9), 56-80)

June 2002 · 51 Reads

This is one of the first published descriptions of the SCOOP mechanism (Simple Concurrent Object-Oriented Programming), a general scheme for programming concurrent applications of many different kinds, from multi-threading to distributed computing, multi-processing, Web services. Later developments can be found in the book "Object-Oriented Software Construction, 2nd edition" and at http://se.inf.ethz.ch/scoop.

External Manifestations of Trustworthiness in the Interface (Communications of the ACM, 43(12), 50-56)

September 2000 · 567 Reads

Interaction rituals among humans, such as greetings, small talk, and conventional leave-takings, along with their manifestations in speech and in embodied conversational behaviors, can lead the users of technology to judge the technology as more reliable, competent, and knowledgeable -- to trust the technology more. Trust is essential for all kinds of interpersonal interactions; it is the loom on which is woven the social fabric of society. Trust between humans has to do with credibility, with believing one another, with confidence in another's judgments, and beliefs that another's actions fit our own schemata of how to act. We use the interaction rituals of conversation, in part, to demonstrate our trustworthiness and, more generally, to establish and maintain social relationships where trust is important. Building rapport and common ground through small talk, intimacy through self-disclosure, credibility through the use of technical jargon, social networks through gossip, and "face" through politeness are all examples of this phenomenon. These social uses of language are not important just in purely social settings, but are also crucial. This article is about the kind of trust that is demonstrated in human face-to-face interaction, and approaches to and benefits of having our computer interfaces depend on these same manifestations of trustworthiness. In making technology that is actually trustworthy, your morals can really be your only guide. But, assuming that you're a good person, and have built a technology that does what it promises, or that represents people who do what they promise, then read on. We take as a point of departure our earlier work on the effects of representing the computer as a human body. Here we are going to argue to the establishment and ...

Figure 1: Our video structuring model
Figure 3: A dialog and its face-based classes
Figure 5: Result of video abstracting, compiled into an HTML page
Video Abstracting

September 2000 · 214 Reads

We all know what the abstract of an article is: a short summary of a document, often used to preselect material relevant to the user. The medium of the abstract and the document are the same, namely text. In the age of multimedia, it would be desirable to use video abstracts in very much the same way: as short clips containing the essence of a longer video, without a break in the presentation medium. However, the state of the art is to use textual abstracts for indexing and searching large video archives. This media break is harmful since it typically leads to considerable loss of information. For example it is unclear at what level of abstraction the textual description should be; if we see a famous politician at a dinner table with a group of other politicians, what should the text say? Should it specify the names of the people, give their titles, specify the event, or just describe the scene as if it were a painting, emphasizing colors and geometry? An audio-visual abstract, to be interpreted by a human user, is semantically much richer than a text. We define a video abstract to be a sequence of moving images, extracted from a longer video, much shorter than the original, and preserving the essential message of the original.

Adaptive Interfaces for Ubiquitous Web Access

June 2002 · 50 Reads

The invention of the movable type printing press launched the information age by making the mass distribution of information both feasible and economical. Newspapers, magazines, shopping catalogs, restaurant guides, and classified advertisements can trace their origins to the printing process. Five and a half centuries of technological progress in communications networks, protocols, computers, and user interface design led to the Web, online publishing, and e-commerce. Consumers and businesses have access to vast stores of information. All this information, however, used to be accessible only while users were tethered to a computer at home or in an office. Wireless data and voice access to this vast store allows unprecedented access to information from any location at any time.

Figure 1: LPWA HTTP proxy configuration
Figure 2: New York Times registration page, c. 1997
Consistent yet Anonymous Web Access with LPWA

January 2000 · 131 Reads

This paper describes the Lucent Personalized Web Assistant (LPWA), a novel software system designed to address these user concerns. Users may browse the web in a personalized, simple, private, and secure fashion using LPWA-generated aliases and other LPWA features.

Figure 2: Anomalous signature for a successful syslog exploit of sendmail under SunOS 4.1.4. The normal database was generated with sequences of length 6. The x-axis measures the position in the anomalous trace in units of system calls. The y-axis shows how many mismatches were recorded when the anomalous trace was compared with the normal database, measured as the total number of mismatches over the past 20 system calls in the trace (called the locality frame).
Computer Immunology

March 1998 · 240 Reads

This article argues that the similarities are compelling and could point the way to improved computer security. Improvements can be achieved by designing computer immune systems that have some of the important properties illustrated by natural immune systems. These include multi-layered protection, highly distributed detection and memory systems, diversity of detection ability across individuals, inexact matching strategies, and sensitivity to most new foreign patterns. We first give an overview of how the immune system relates to computer security. We then illustrate these ideas with two examples. The immune system is composed of cells and molecules.
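The sequence-matching scheme behind Figure 2 can be sketched directly: build a database of length-6 system-call windows from normal traces, then, while scanning a new trace, count mismatches over a 20-call locality frame. The traces below are toy stand-ins for real system-call streams.

```python
# Sketch of the system-call-sequence anomaly detection illustrated in Figure 2:
# a "normal" database of length-6 call windows, and a mismatch count over a
# 20-call locality frame while scanning a new trace. Traces are toy examples.

def windows(trace, k=6):
    return {tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)}

normal_trace = ["open", "read", "mmap", "read", "close", "open", "read", "close"] * 10
normal_db = windows(normal_trace)

def anomaly_signal(trace, normal_db, k=6, frame=20):
    mismatches = [0] * len(trace)
    for i in range(len(trace) - k + 1):
        if tuple(trace[i:i + k]) not in normal_db:
            mismatches[i] = 1
    # Signal at position i = mismatches seen over the last `frame` positions.
    return [sum(mismatches[max(0, i - frame + 1):i + 1]) for i in range(len(trace))]

attack_trace = normal_trace[:30] + ["execve", "setuid", "open", "write"] * 3 + normal_trace[:20]
signal = anomaly_signal(attack_trace, normal_db)
print(max(signal))     # spikes where the trace departs from the normal database
```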

Figure 1: Essentials of the impact tracking system  
Tracking Contact and Free Gesture Across Large Interactive Surfaces

May 2003 · 99 Reads

The interactive wall systems described in this article are directed at public settings, where they are used for casual information browsing, interactive retail, and artistic installations or entertainment. Because their activity tends to be highly visible, participants at public interactive walls often become performers. These systems are intrinsically collaborative - crowds tend to gather around to watch, participate, and suggest choices as a user interacts with a large display; essentially all applications attain a social, gamelike quality. Although there are several products available that identify and track objects accurately across large electronic whiteboards and tablets, in order to be usable in public settings it is important that such interactive walls respond to bare hands and do not require the user to wear any kind of active or passive target. At the moment, there are several sensing and tracking approaches that have been used to make large surfaces barehand interactive, many of which are introduced in [4].

From Adaptive Hypermedia to the Adaptive Web

April 2003 · 708 Reads

hypertext in early 1990, it now attracts many researchers from different communities such as hypertext, user modeling, machine learning, natural language generation, information retrieval, intelligent tutoring systems, cognitive science, and Web-based education. model, an adaptable system requires the user to specify exactly how the system should be different, for example, tailoring the sports section to provide information about a favorite team [9]. In different kinds of adaptive systems, adaptation effects could be realized in a variety of ways. Adaptive hypermedia and Web systems are essentially collections of connected information items that allow users to navigate from one item to another and search for relevant items. The adaptation effect in this reasonably rigid context is limited to three major adaptation technologies---adaptive content selection, adaptive navigation support, and adaptive presentation. When the user searches for relevant information, the system can ada

An Adaptive Framework For Developing Multimedia Software Components

January 1998 · 25 Reads

Recent improvements in microprocessor performance have made possible the migration of continuous media processing from specialized hardware, such as decompression and digital signal processing boards, to software. The extensibility and configurability of software libraries allows multimedia applications to access a wider range of multimedia objects, stored in a variety of compressed formats, and to employ an extensible set of tools for processing these objects. Furthermore, configurable software libraries enable applications to take advantage of new audio and video compression standards as they emerge, rather than becoming obsolete. Despite these advantages, there are two fundamental problems that have limited the success of software libraries for processing digital audio and video. The first is the difficulty of developing software components: the task of developing software components (e.g., media players, Netscape plug-ins, ActiveX controls) that de