Arthur M. Keller’s research while affiliated with Mountain View College and other places


Publications (55)


Framework for the Security Component of an Ada* DBMS (Extended Abstract)
  • Article

July 2008 · 38 Reads · Arthur M. Keller · [...]

This paper discusses a framework for the design of a security component for a secure Ada database management system (DBMS). It is part of a development effort to produce prototype technologies for the World Wide Military Command and Control System (WWMCCS) Information System (WIS). In this paper we present a series of criteria for evaluating database security approaches. We develop the high-level framework for the security component of a DBMS and illustrate how it can support several alternative security models, which we compare using these criteria. The security enforced by the DBMS relies on appropriate security mechanisms enforced by the operating system for operating system objects, such as files, used by the DBMS. We also present the security barrier or filter as an alternative or adjunct to the notion of a trusted computing base.
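
To make the "security barrier" idea concrete, here is a minimal sketch, assuming invented row fields and level names, of a filter that sits between queries and the DBMS and suppresses rows whose classification exceeds the subject's clearance. It illustrates the general concept, not the paper's design.

    # Hypothetical security filter in front of a DBMS: rows carry a
    # classification level, and the filter suppresses any row the
    # requesting subject is not cleared to see. All names are invented.
    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

    def filter_rows(rows, clearance):
        """Return only rows whose classification is dominated by `clearance`."""
        limit = LEVELS[clearance]
        return [row for row in rows if LEVELS[row["classification"]] <= limit]

    rows = [
        {"id": 1, "payload": "deployment plan", "classification": "SECRET"},
        {"id": 2, "payload": "mess schedule", "classification": "UNCLASSIFIED"},
    ]
    print(filter_rows(rows, "CONFIDENTIAL"))  # only the UNCLASSIFIED row passes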


The Diana Approach to Mobile Computing

August 2007 · 27 Reads · 5 Citations

DIANA (Device-Independent, Asynchronous Network Access) is a new application architecture that addresses two major difficulties in developing software for mobile computing: diversity of user interfaces and varied communication patterns. Our architecture achieves display and network independence by decoupling the user interface logic and communication logic from the processing logic of each application. Such separation allows applications to operate in the workplace as well as in a mobile environment in which multiple display devices are used and communication can be synchronous or asynchronous. Operation during disconnection is also supported.
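
As a rough illustration of that decoupling, the sketch below, with invented class names and no relation to DIANA's real interfaces, separates processing logic from a swappable display and a swappable transport, so the same logic can run with a queued transport during disconnection.

    # Illustrative-only sketch: processing logic depends on abstract
    # Display and Transport roles, never on a concrete device or network.
    class Display:                      # device-specific rendering, swappable
        def render(self, text): ...

    class TtyDisplay(Display):
        def render(self, text):
            print(text)

    class Transport:                    # synchronous or asynchronous, swappable
        def send(self, message): ...

    class QueuedTransport(Transport):   # stands in for disconnected operation
        def __init__(self):
            self.outbox = []            # held until the link comes back
        def send(self, message):
            self.outbox.append(message)

    def application_logic(display, transport, order):
        """Processing logic knows nothing about the device or the network."""
        display.render(f"Order {order} accepted")
        transport.send({"order": order})

    application_logic(TtyDisplay(), QueuedTransport(), order=42)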


Flexible Relation: An Approach for Integrating Data

October 2003 · 10 Reads

In this work we address the problem of dealing with data inconsistencies while integrating data sets derived from multiple autonomous relational databases. The fundamental assumption in the classical relational model is that data is consistent and hence no support is provided for dealing with inconsistent data. Due to this limitation of the classical relational model, the semantics for detecting, representing, and manipulating inconsistent data have to be explicitly encoded in the applications by the application developer.
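
One way to picture the idea (a simplification, not the paper's formal model, with invented field names): when autonomous sources disagree on an attribute, keep the set of candidate values rather than silently choosing one, so the inconsistency stays visible to applications.

    # Sketch: merging tuples for the same entity from autonomous databases,
    # representing conflicting attribute values as sets instead of scalars.
    def merge_tuples(key, tuples):
        """Merge tuples describing the same real-world entity `key`."""
        merged = {"key": key}
        for t in tuples:
            for attr, value in t.items():
                if attr == "key":
                    continue
                merged.setdefault(attr, set()).add(value)
        return merged

    db1 = {"key": "emp17", "salary": 50000}
    db2 = {"key": "emp17", "salary": 52000}
    print(merge_tuples("emp17", [db1, db2]))
    # {'key': 'emp17', 'salary': {50000, 52000}} -- the conflict stays explicit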


Figure 1: Infomaster Architecture showing the Infomaster Facilitator integration engine, wrappers for ODBC, Z39.50, and custom sources, and user interfaces for WWW, EDI, and ACL.
Table 2: GM Cars source database (attribute names must be mapped to the virtual database; values need no mapping).
Table 4: Mercedes Car Database.
Infomaster: An Information Integration System
  • Article
  • Full-text available

May 2000 · 1,454 Reads · 341 Citations

Infomaster is an information integration system that provides integrated access to multiple distributed heterogeneous information sources on the Internet, thus giving the illusion of a centralized, homogeneous information system. We say that Infomaster creates a virtual data warehouse. The core of Infomaster is a facilitator that dynamically determines an efficient way to answer the user's query using as few sources as necessary and harmonizes the heterogeneities among these sources. Infomaster handles both structural and content translation to resolve differences between multiple data sources and the multiple applications for the collected data. Infomaster connects to a variety of databases using wrappers, such as for Z39.50, SQL databases through ODBC, EDI transactions, and other World Wide Web (WWW) sources. There are several WWW user interfaces to Infomaster, including forms-based and textual. Infomaster also includes a programmatic interface, and it can download results in structur...
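
The wrapper/facilitator split can be sketched as follows; the interfaces are invented for illustration and are not Infomaster's actual code. Each source hides behind a uniform answer() method, and the facilitator consults only the sources that cover the queried relation and merges their results.

    # Illustrative sketch of the wrapper/facilitator architecture.
    class Wrapper:
        def covers(self, relation): ...
        def answer(self, relation, constraints): ...

    class SqlWrapper(Wrapper):          # stands in for an ODBC/SQL source
        def __init__(self, relation, rows):
            self.relation, self.rows = relation, rows
        def covers(self, relation):
            return relation == self.relation
        def answer(self, relation, constraints):
            return [r for r in self.rows
                    if all(r.get(k) == v for k, v in constraints.items())]

    def facilitator(query, wrappers):
        """Ask only the wrappers that cover the queried relation."""
        relation, constraints = query
        results = []
        for w in wrappers:
            if w.covers(relation):
                results.extend(w.answer(relation, constraints))
        return results

    cars = SqlWrapper("car", [{"make": "GM", "model": "Impala"}])
    print(facilitator(("car", {"make": "GM"}), [cars]))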


Penguin: Objects for Programs, Relations for Persistence

January 2000 · 23 Reads · 8 Citations

In this paper, we discuss the principles, architecture, and implementation of the Penguin approach to sharing persistent objects. The primary motivation for using a database management system (DBMS) is to allow sharing of data among multiple customers and multiple applications. To support sharing among independent transactions, DBMSs have evolved services including transaction independence, persistence, and concurrency control. When a database is shared among multiple applications, these applications typically have differing requirements for data access and representation. Such differences are supported by having views, which present diverse subsets of the base data [ChamGT75]. The primary motivation for defining objects is to include sharable semantics and structure in the information. Must all applications sharing objects use the same object schema, or is it better to give each application its own object schema and somehow integrate them? If multiple applications differ in their views, the needed compromise reduces the relevance and effectiveness of the object representation [AbitB91]. For instance, customers will have an orthogonal view of an inventory versus the suppliers. When combining independently developed applications, we do not have the luxury of choosing a common object schema. Many legacy databases and legacy data are still being used. We must retain the investment in existing application software and databases, while building new software using the object approach. When creating a federation of heterogeneous (pre-existing) databases, we must support a degree of interoperation among these databases and their schemas. Consider also that current projects will become legacy a few years hence, but their semantics will remain. Whatever solutions we create in shar...
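
A toy sketch of the view idea described above, with invented relations and no claim to match Penguin's implementation: two applications each get their own object view assembled from the same shared base relations, so programs manipulate objects while persistence stays relational.

    # Shared base relations (the single source of truth).
    ORDERS = [{"order_id": 1, "part_id": 10, "qty": 5}]
    PARTS  = {10: {"part_id": 10, "name": "gasket"}}

    class OrderView:
        """One application's object view, joining ORDERS with PARTS."""
        def __init__(self, row):
            self.qty = row["qty"]
            self.part_name = PARTS[row["part_id"]]["name"]

    class InventoryView:
        """Another application's orthogonal view over the same base data."""
        def __init__(self, part_row):
            self.name = part_row["name"]
            self.on_order = sum(o["qty"] for o in ORDERS
                                if o["part_id"] == part_row["part_id"])

    print(OrderView(ORDERS[0]).part_name)        # 'gasket'
    print(InventoryView(PARTS[10]).on_order)     # 5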


A Classification of Update Methods for Replicated Databases

October 1999 · 19 Reads · 45 Citations

In this paper we present a classification of the methods for updating replicated databases. The main contribution of this paper is to present the various methods in the context of a structured taxonomy, which accommodates very heterogeneous methods. Classes of update methods are presented through their general properties, such as the invariants that hold for them. Methods are reviewed both in their normal and abnormal behaviour (i.e., after a network partition). We show that several methods presented in the literature, sometimes in independent papers with no cross-reference, are indeed very much related, for instance because they share the same basic technique. We also show in what sense they diverge from the basic technique. This classification can serve as a basis for choosing the method that is most suitable to a specific application. It can also be used as a guideline for researchers who aim at developing new mechanisms.
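
For orientation only, the sketch below encodes one pair of classification dimensions that is standard in the replication literature (eager vs. lazy propagation, primary copy vs. update anywhere); the paper's own taxonomy is richer and need not coincide with this.

    # Generic illustration of classifying replicated-update methods along
    # two textbook dimensions. Not the taxonomy of the paper above.
    from dataclasses import dataclass
    from enum import Enum

    class Propagation(Enum):
        EAGER = "update all replicas inside the transaction"
        LAZY = "commit locally, propagate afterwards"

    class Ownership(Enum):
        PRIMARY_COPY = "one master replica serializes updates"
        UPDATE_ANYWHERE = "any replica may accept updates"

    @dataclass
    class UpdateMethod:
        name: str
        propagation: Propagation
        ownership: Ownership

    methods = [
        UpdateMethod("two-phase-commit replication", Propagation.EAGER,
                     Ownership.UPDATE_ANYWHERE),
        UpdateMethod("primary site with async propagation", Propagation.LAZY,
                     Ownership.PRIMARY_COPY),
    ]
    for m in methods:
        print(m.name, "->", m.propagation.name, "/", m.ownership.name)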


The Case for Independent Updates

January 1999 · 10 Reads

We present the case for allowing independent updates on replicated databases. In autonomous, heterogeneous, or large-scale systems, using two-phase commit for updates may be infeasible. Instead, we propose that a site may perform updates independently. Sites that are available can receive these updates immediately. But sites that are unavailable, or otherwise do not participate in the update transaction, receive these updates later through propagation, rather than preventing the execution of the update transaction until sufficient sites can participate. Two or more sites come to agreement using a reconciliation procedure that uses reception vectors to determine how much of the history log should be transferred from one site to another. We also consider what events can initiate a reconciliation procedure.
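
A minimal sketch of the reception-vector mechanism as the abstract describes it, with every detail beyond that invented: each site keeps an append-only log of its own updates plus a vector recording how much of every peer's log it has received, so reconciliation ships only the missing suffix.

    # Illustrative reconciliation with reception vectors.
    class Site:
        def __init__(self, name, peers):
            self.name = name
            self.log = []                             # this site's own updates
            self.received = {p: 0 for p in peers}     # reception vector
            self.applied = []

        def update(self, op):
            self.log.append(op)                       # independent update

        def reconcile_from(self, other):
            """Pull only the suffix of `other`'s log we have not yet seen."""
            start = self.received[other.name]
            for op in other.log[start:]:
                self.applied.append(op)
            self.received[other.name] = len(other.log)

    a, b = Site("A", ["B"]), Site("B", ["A"])
    a.update("x=1"); a.update("y=2")                  # while B is unavailable
    b.reconcile_from(a)                               # ships both entries
    a.update("z=3")
    b.reconcile_from(a)                               # ships only 'z=3'
    print(b.applied)                                  # ['x=1', 'y=2', 'z=3']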


Case Study: Creating a Dataweb of Financial Service Data Using Infomaster

October 1998 · 17 Reads

We have created a "dataweb" of financial services data for Merrill Lynch using Infomaster. Merrill Lynch has about 20 separate systems containing data on commercial customer accounts. Many commercial customers have multiple relationships with Merrill Lynch and therefore have accounts represented in multiple systems. There is common data in multiple customer records across the different systems that can be used to link them. However, the common data is entered manually and does not have referential integrity across the systems. Merrill Lynch is creating the Cross-Reference Utility (CRU) to link the customer data together. CRU is used both for inquiries by account representatives and for cleaning up incorrect cross-system references. Stanford's Center for Information Technology is experimenting with Infomaster as a "shadow" project to CRU. The primary differences between the CRU approach and the Infomaster approach are: (1) CRU assigns CRU IDs to each customer, so there is ...
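
The linking problem can be sketched as follows, with invented field names and a deliberately naive matching rule: because the common data carries no referential integrity, records from different systems are grouped on normalized common fields.

    # Simplified cross-system record linkage (illustrative only).
    def normalize(value):
        return " ".join(value.lower().split())

    def link_customers(systems):
        """Group account records from many systems by normalized name + taxid."""
        linked = {}
        for system_name, records in systems.items():
            for rec in records:
                key = (normalize(rec["name"]), rec["taxid"])
                linked.setdefault(key, []).append((system_name, rec["account"]))
        return linked

    systems = {
        "cash_mgmt": [{"name": "Acme  Corp", "taxid": "12-3", "account": "C-9"}],
        "lending":   [{"name": "ACME Corp",  "taxid": "12-3", "account": "L-4"}],
    }
    print(link_customers(systems))
    # {('acme corp', '12-3'): [('cash_mgmt', 'C-9'), ('lending', 'L-4')]}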


Performance Analysis of Associative Caching Schemes for Client-Server Databases

October 1998 · 4 Reads

This paper presents a detailed performance study of three associative caching schemes for client-server databases. In all three schemes, the client cache loads query results dynamically in the course of transaction execution and supports evaluation of associative queries on the cache, at the cost of refreshing the cached data and maintaining transaction serializability through asynchronous update notifications from the server. The schemes differ in how they handle cache containment reasoning and data updates. In one scheme, the cache derives a description of its current contents from the stored queries and uses predicate-based reasoning to examine and maintain the cache. The second scheme is an optimized version of the first and considers partial containment of cached queries. It also performs writes at the central server in case of a cache miss. The third caching scheme operates in terms of tuples only, requiring trips to the server to determine cache containment of associa...
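
To illustrate containment reasoning in its simplest form, here is a sketch restricted to one-dimensional range predicates (the schemes above handle far more general queries): a new query is answerable from the cache exactly when its range lies inside a cached range.

    # Toy predicate-based cache containment over (lo, hi) ranges.
    def contains(cached, query):
        """Both predicates are (lo, hi) ranges over the same attribute."""
        return cached[0] <= query[0] and query[1] <= cached[1]

    cache = [(0, 100)]                       # ranges already loaded from server
    def answer_locally(query):
        return any(contains(c, query) for c in cache)

    print(answer_locally((10, 20)))          # True: cache hit, no server trip
    print(answer_locally((50, 150)))         # False: must go to the server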


Performance Analysis of an Associative Caching Scheme for Client-Server Databases

August 1998 · 20 Reads · 4 Citations

This paper presents a detailed performance study of the associative caching scheme proposed in [11]. A client cache dynamically loads query results in the course of transaction execution, and formulates a description of its current contents. Predicate-based reasoning is used on the cache description to examine and maintain the cache. The benefits of the scheme include local evaluation of associative queries, at the cost of maintaining the cached query results through update notifications from the server. In this paper, we investigate through detailed simulation the behavior of this caching scheme for a client-server database under different workloads and contention profiles. An optimized version of our basic caching scheme is also proposed and studied. We examine both read-only and update transactions, with the effect of updates on the caching performance as our primary focus. Using an extended version of a standard database benchmark, we identify scenarios where these caching schemes ...
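
The update-notification side of the scheme can be sketched like this, with invented details: on each notification the client re-tests the changed tuple against its cached predicate and inserts or evicts it, keeping the cached query result consistent.

    # Illustrative cache maintenance via server update notifications.
    class PredicateCache:
        def __init__(self, predicate):
            self.predicate = predicate          # e.g. lambda row: row["qty"] > 10
            self.rows = {}                      # cached result, keyed by primary key

        def on_notification(self, key, row):
            """Apply an asynchronous update notification from the server."""
            if row is not None and self.predicate(row):
                self.rows[key] = row            # tuple (still) satisfies the query
            else:
                self.rows.pop(key, None)        # deleted or no longer qualifies

    cache = PredicateCache(lambda r: r["qty"] > 10)
    cache.on_notification(1, {"qty": 15})       # enters the cached result
    cache.on_notification(1, {"qty": 5})        # update pushes it back out
    print(cache.rows)                           # {}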


Citations (34)


... (2)-(3) prepare the rules encoding the strategy that resolves conflicts caused by FDs by deletion (lines 8-15) and by revision (lines 16-33). ...

Reference:

Synthesis of Bidirectional Programs from Examples with Functional Dependencies
On complementary and independent mappings on databases
  • Citing Conference Paper
  • January 1984

... This component is a part of the data warehousing for the plant-wide activities across the plant lifecycle. The object-oriented modeling approach is used to abstract these common elements within the plant-wide conceptual model while the physical data are within the data warehouse framework [22]. The safety common data (SCD) component includes (but is not limited to): the possible sources of data errors, documentation standards (vocabulary), generic cause-consequence for each component type, checklists for operation-type jobs, and standard safety interlock levels. ...

Integrating Data into Objects Using Structural Knowledge
  • Citing Article

... Information integration (aka data integration) has received a significant amount of attention in recent years, and several systems have been developed. These include InfoMaster (Geddis et al., 1995), Information Manifold, Garlic (Haas et al., 1997), TSIMMIS, HERMES, and DISCO (Raschid et al., 1998). The similarities and differences among these systems can be broadly understood in terms of (1) the approach used to relate the mediator and source schemas and (2) the type of application scenario considered by the system. ...

Infomaster: a virtual information system
  • Citing Article
  • January 1995

... The automated mediation approach is also widely used in e-marketplace construction. Classically, two well-known examples of this approach are Smart Catalogues and Virtual Catalogues (Keller and Genesereth, 1996) and NetAcademy (Lincke et al., 1998). The former introduces the facilitator concept to perform routing and translation between distributed product catalogues and catalogue web interfaces based on a set of ontologies (Gruber, 1993). ...

Multivendor catalogs: Smart catalogs and virtual catalogs
  • Citing Article
  • January 1996

... In order to summarize or organize hypertext information in a visual manner, creating an overview map is a good approach. There is much research on aggregating and generalizing hypertext information and on generating a structure from hypertext information [2, 3, 4]. Also, some clustering algorithms have been developed for reducing the complexity and size of the information space [8]. ...

Implementing hypertext database relationships through aggregations and exception

... This is a practical approach that many biomedical information management systems take. At the other end of the spectrum, the virtually integrated approach provides a mechanism for information sources to dynamically respond to specific queries by mediators or agents, which will then restructure and combine the results for the user [2, 3]. ...

Smart catalogs and virtual catalogs
  • Citing Article
  • July 1995

... Besides Mohan and Narang [1994], ARIES-based recovery protocols have been discussed in Deux [1991], Franklin et al. [1992b], Carey et al. [1994a], White and DeWitt [1994], Panagos et al. [1996], and Voruganti et al. [2004]. Algorithms for B-tree management in page-server systems are presented in Gottemukkala et al. [1996], Basu et al. [1997], and Zaharioudakis and Carey [1997]. The performance studies on optimistic detection-based concurrency control and replica-management protocols have not shown how B-tree indexes are managed or how repeatable-read-level isolation could be achieved in a fine-grained manner in the presence of insertions and deletions of objects and scans over object collections. ...

Centralized versus Distributed Index Schemes in OODBMS - A Performance Analysis

... [1], [14], there is an ambiguity in which multiple put_a's have the potential to form well-behaved pairs with the given get_a. By making use of the domain knowledge in the database community [16], [17], [19] to construct a set of templates for well-behaved put_a if get_a is an atomic query, we can encapsulate the ambiguity within templates and employ example-and-template-based synthesis to determine a solution. By utilizing templates, we create the space for efficient synthesis of put_a for get_a while guaranteeing that they are well-behaved. ...
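
For readers unfamiliar with the terminology, "well-behaved" refers to the standard round-trip laws for a get/put pair (a lens), checked below on a toy example unrelated to the cited templates.

    # Toy lens: get projects one field, put writes the view back.
    def get(source):
        return source["name"]

    def put(source, view):
        return {**source, "name": view}

    src = {"name": "Ada", "dept": "db"}

    # GetPut: putting back what you got changes nothing.
    assert put(src, get(src)) == src
    # PutGet: getting after a put returns exactly the view you put.
    assert get(put(src, "Grace")) == "Grace"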

Choosing a View Update Translator by Dialog at View Definition Time
  • Citing Conference Paper
  • January 1986