Article

A functional data base model

... This is known as the functional database model [16,22]. This model provides us with a starting point on our combinator roadmap. ...
... The graph representation of the database schema is a variation of the functional database model [16,22], which gave rise to a number of query languages: FQL [3], DAPLEX [21], GENESIS [1], Kleisli [27] and others; see [13] for a comprehensive survey. Among them, FQL and its derivatives are remarkably close to Rabbit; Example 1.1 is a valid query in both. ...
Article
We introduce Rabbit, a combinator-based query language. Rabbit is designed to let data analysts and other accidental programmers query complex structured data. We combine the functional data model and the categorical semantics of computations to develop denotational semantics of database queries. In Rabbit, a query is modeled as a Kleisli arrow for a monadic container determined by the query cardinality. In this model, monadic composition can be used to navigate the database, while other query combinators can aggregate, filter, sort and paginate data; construct compound data; connect self-referential data; and reorganize data with grouping and data cube operations. A context-aware query model, with the input context represented as a comonadic container, can express query parameters and window functions. Rabbit semantics enables pipeline notation, encouraging its users to construct database queries as a series of distinct steps, each individually crafted and tested. We believe that Rabbit can serve as a practical tool for data analytics.
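As an illustration of the query model described in this abstract, here is a minimal Haskell sketch (not Rabbit's actual syntax; the schema, entity names and combinators are hypothetical) of queries as Kleisli arrows: each navigation step returns a monadic container whose shape reflects its cardinality, and steps are chained with Kleisli composition to form a pipeline.

```haskell
import Control.Monad ((>=>))

-- Toy schema (hypothetical names): departments own employees.
data Dept = Dept { deptName :: String, deptEmps :: [Emp] }
data Emp  = Emp  { empName :: String, empSalary :: Int }

-- "Plural" steps live in the list monad; a query step is a Kleisli arrow a -> [b].
departments :: () -> [Dept]
departments () =
  [ Dept "POL" [Emp "A" 100, Emp "B" 120]
  , Dept "COM" [Emp "C" 150] ]

employees :: Dept -> [Emp]
employees = deptEmps

-- Filtering is just another combinator on Kleisli arrows.
suchThat :: (a -> Bool) -> a -> [a]
suchThat p x = [x | p x]

-- Pipeline: all well-paid employees, built as a series of separate steps.
wellPaid :: () -> [Emp]
wellPaid = departments >=> employees >=> suchThat ((> 110) . empSalary)

main :: IO ()
main = mapM_ (putStrLn . empName) (wellPaid ())
```

The point of the sketch is the shape of the pipeline: navigation, filtering and (by extension) aggregation are ordinary combinators on Kleisli arrows, so each step can be written and tested on its own.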
... Prominent examples of semantic data models are the Entity-Relationship Model (ER) [16], the Semantic Data Model (SDM) [23], and the Functional Data Model (FDM) [27,52]. ...
... The Knowledge/Data Model is a hyper-semantic data model [48,46,44], that has its roots in the Functional Data Model (FDM) [27,52], which is a well known semantic data model. ...
... From the mid-1970s onward, implementation-independent data models have been proposed: the Entity-Relationship Model (ERM) [Che76], Nijssen's Information Analysis Method (NIAM) [Nij77,VB82], Functional Data Models (FDMs) [KP76,Shi81], and Semantic Data Models (SDM, IFO) [HM81,AH87]. These models focus on what kind of information must be stored in the database rather than on how to represent that information in the computer. ...
... The functional data model was introduced by Kerschberg and Pacheco [KP76], and refined by Sibley and Kerschberg [SK77]. Shipman [Shi81] uses DAPLEX as the database modeling and manipulation language to implement the functional data model. ...
Article
Full-text available
The thesis discusses the problems of database development and maintenance, and presents an approach to conceptual tuning realized by conceptual design using the HERM/RADD notation. The RADD design tool was designed to develop HERM specifications graphically. RADD adds semantics and operations to the design that are not directly annotated on the graphical specification, such as "afunctional" dependencies and SQL operations and procedures. The RADD/raddstar system extends the graphical specification of the database schema with the possibility to specify operations, and with invocations for transforming the schema, for evaluating transactions, and for optimizing the schema, each according to the implicit requirements modeled graphically and the explicit requirements specified by means of the conceptual specification language (CSL). CSL is used as the command-line interface of RADD/raddstar. The graphical RADD schema as well as the CSL specifications are compiled by the system into terms of the RADD* data model, and these terms are used for further evaluation actions. The actions performed by RADD/raddstar (schema transformation, transaction and cost evaluation, schema optimization) are based on rules that can be developed and modified by the user using CSL.
... The KDM evolved from the Functional Data Model (FDM) [Kers76,Ship81]. This evolution was motivated by the need for knowledge management facilities to be incorporated with advanced data modeling facilities in a tightly coupled manner. ...
Article
Full-text available
Active KDL (Knowledge/Data Language) is an object-oriented database system. It evolved from earlier work on KDL which has been ongoing since 1986 [Pott86]. The foundations of Active KDL are threefold: object-oriented programming, functional programming, and hypersemantic data modeling. These areas strongly influenced the design of Active KDL's three sublanguages: the schema definition language (SDL), the query language (QL), and the database programming language (DBPL). Because of the capabilities and elegance of these sublanguages, Active KDL is able to support demanding applications (e.g., simulation, model management, CAD, and intelligent database applications such as a university data/knowledge base capable of advising students). The power and versatility of these sublanguages are concretely demonstrated by showing how they can be used to handle a complex application, namely simulation support.
... The use of accumulate functions requires switching from the set-oriented paradigm to the function-oriented paradigm where a model is represented as a number of functions and data operations are described as function definitions (or expressions). We do not explicitly define such a function-oriented approach in this paper but familiarity with major principles of the functional data model (Kerschberg and Pacheco, 1976; Sibley and Kerschberg, 1977) could help in understanding how accumulation works. Accumulate functions have been implemented in the DataCommandr system (Savinov, 2016b) which uses the concept-oriented data model (Savinov, 2016c). ...
Conference Paper
Full-text available
Most of the currently existing query languages and data processing frameworks rely on one or another form of the group-by operation for data aggregation. In this paper, we critically analyze properties of this operation and describe its major drawbacks. We also describe an alternative approach to data aggregation based on accumulate functions and demonstrate how it can solve these problems. Based on this analysis, we argue that accumulate functions should be preferred to group-by as the main operation for data aggregation.
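To make the contrast concrete, here is a hedged Haskell sketch (my own toy example, not the paper's DataCommandr implementation): the group-by version first materializes a list of values per key and then reduces each group, while the accumulate version folds every record directly into the running value attached to its target element, with no intermediate groups.

```haskell
import qualified Data.Map.Strict as M

type OrderId  = Int
data LineItem = LineItem { liOrder :: OrderId, liAmount :: Double }

-- Group-by style: build explicit groups, then reduce each one.
totalsGroupBy :: [LineItem] -> M.Map OrderId Double
totalsGroupBy items =
  fmap sum (M.fromListWith (++) [ (liOrder li, [liAmount li]) | li <- items ])

-- Accumulate style: each item updates the accumulator stored at its target
-- element directly; no intermediate collections are created. The starting
-- value 0 is an assumption suited to summation.
accumulate :: (LineItem -> Double -> Double) -> [LineItem] -> M.Map OrderId Double
accumulate step = foldl add M.empty
  where add m li = M.alter (Just . step li . maybe 0 id) (liOrder li) m

totalsAccumulate :: [LineItem] -> M.Map OrderId Double
totalsAccumulate = accumulate (\li acc -> acc + liAmount li)
```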
... The KDM evolved from the Functional Data Model (FDM) [Kers76,Ship81]. This evolution was motivated by the need for knowledge management facilities to be incorporated with advanced data modeling facilities in a tightly coupled manner. ...
Article
Full-text available
In this paper, we highlight some of our ongoing research on integrating knowledge, data, and models. Providing crisp definitions is a difficult proposition. After all, everything in a computer boils down to data and instructions. If we take an object-oriented viewpoint, we can clarify the picture. Under the object-oriented paradigm, everything is an object, which is itself an encapsulation of data and methods to manipulate and access the data within the object. Thus we can define knowledge, data, and models each as special kinds of objects. The purpose of a data object is to store facts or raw information, and the methods are relatively simple and so uniform in their behavior that often they can be automatically generated (or even pulled out of objects into a database management system). One could define a knowledge object as one that stores a minimal amount of data and is able to derive additional information. Finally, a model object is similar to a knowledge object, except that it can use a general computational procedure to generate additional information.
... The scope of modeling capabilities for hyper-semantic data models includes knowledge, data, and model management [12], [15], [44]. The KDM evolved from the Functional Data Model (FDM) [45], [46]. This evolution was motivated by the need for knowledge management facilities to be incorporated with advanced data modeling facilities in a tightly coupled manner. ...
Article
Full-text available
Because of the difficulty of simulating large complex systems with traditional tools, new approaches have been and are being developed. One group of interrelated approaches attempts to simultaneously make simulation modeling and analysis easier while at the same time providing enough power to handle more complex problems. This group includes the following important (overlapping) approaches: integrated simulation support environments, object-oriented simulation, and knowledge-based simulation. Query driven simulation fits somewhere in the middle of these three approaches. Its fundamental tenet is that simulationists or even naive users should see a system based upon query driven simulation as a sophisticated information system. A system/environment based upon query driven simulation will be able to store information about or to generate information about the behavior of systems that users wish to study. Active KDL (Knowledge/Data Language), which is a functional object-oriented database sys...
... In particular, a COM query may well produce a function (by processing data in other functions) rather than a set. The idea of using functions for data modeling is not new and this branch has a long history of research starting from [6,16]. COM can be viewed as a further development of the functional data modeling paradigm. ...
Preprint
Full-text available
We describe a new logical data model, called the concept-oriented model (COM). It uses mathematical functions as first-class constructs for data representation and data processing as opposed to using exclusively sets in conventional set-oriented models. Functions and function composition are used as primary semantic units for describing data connectivity instead of relations and relation composition (join), respectively. Grouping and aggregation are also performed by using (accumulate) functions providing an alternative to group-by and reduce operations. This model was implemented in an open source data processing toolkit, examples of which are used to illustrate the model and its operations. The main benefit of this model is that typical data processing tasks become simpler and more natural when using functions in comparison to adopting sets and set operations.
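A small Haskell fragment may help fix the idea (an illustrative sketch of my own, not the open-source toolkit's API): links between tables are modeled as ordinary functions, and navigating from employees to their companies is function composition rather than a join over shared key values.

```haskell
-- Links are functions; navigation is composition, not a join.
data Company    = Company    { companyName :: String }
data Department = Department { deptCompany :: Company, deptTitle :: String }
data Employee   = Employee   { empDept :: Department, empName :: String }

-- The "join" Employee -> Department -> Company is just (.) on link functions.
companyOf :: Employee -> Company
companyOf = deptCompany . empDept

-- e.g. companyName (companyOf someEmployee)
```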
... Apart from the well-known hierarchical, network and relational data models, many others have also been proposed. Some of these are the entity-relationship model [Chen 1976], the entity-property-association model [Pirotte 1977], the information space model [Kobayashi 1975], DIAM II [Senko 1975], semantic networks [Roussopoulos 1975], the infological approach [Langefors 1975, Sundgren 1975], the functional model [Kerschberg 1975, Szeto 1977] and the attribute-based model [Kerr 1975]. A good comparison of data models can be found in [Kerschberg 1976a]. ...
Article
We describe how to express constraints in a functional (semantic) data model, which has a working implementation in an object database. We trace the development of such constraints from being integrity checks embedded in procedural code to being something declarative ...
Conference Paper
This paper presents a knowledge-based approach to the specification, design, implementation, and evolution of database applications. The knowledge base consists of 1) facts regarding database objects that are organized into a hierarchy of models, and 2) rules that specify the behavior of objects within a model and among models. The model hierarchy consists of database application data, database schemas, data model definitions, and system-related objects that control the user's interaction with the system. The rules governing the behavior of objects are specified as explicit constraints on those objects. User goals are transformed into conjectures that the inference engine must prove are satisfiable by interpreting all applicable constraints. The semantic architecture of the PRISM system is described, together with the syntax and semantics of the constraint language. PRISM is implemented in the C programming language and runs under the UNIX operating system.
Conference Paper
Datalog is extended to incorporate single-valued "data functions", which correspond to attributes in semantic models, and which may be base (user-specified) or derived (computed). Both conventional and stratified datalog are considered. Under the extension, a datalog program may not be consistent, because a derived function symbol may evaluate to something which is not a function. Consistency is shown to be undecidable, and is decidable in a number of restricted cases. A syntactic restriction, panwise consistency, is shown to guarantee consistency. The framework developed here can also be used to incorporate single-valued data functions into the Complex Object Language (COL), which supports deductive capabilities, complex database objects, and set-valued data functions. There is a natural correspondence between the extended datalog introduced here and the usual datalog with functional dependencies. For families Φ and Γ of dependencies and a family of datalog programs Λ, the Φ-Γ implication problem for Λ asks, given sets F ⊆ Φ and G ⊆ Γ and a program P in Λ, whether for all inputs I, I ⊨ F implies P(I) ⊨ G. The FD-FD implication problem is undecidable for datalog, and the TGD-EGD implication problem is decidable for stratified datalog. Also, the Ø-MVD problem is undecidable (and hence also the MVD-preservation problem).
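The consistency requirement on derived data functions can be pictured with a small Haskell check (my own formulation, not the paper's formal machinery): after rule evaluation has produced a set of (argument, value) facts for a derived function symbol, the extension is a legal single-valued data function only if no argument is mapped to two distinct values.

```haskell
import qualified Data.Map.Strict as M

-- Facts derived for one function symbol, as (argument, value) pairs.
type DerivedFacts arg val = [(arg, val)]

-- True iff the derived facts really define a (single-valued) function.
isFunctional :: (Ord arg, Eq val) => DerivedFacts arg val -> Bool
isFunctional facts = all sameValue (M.elems grouped)
  where
    grouped      = M.fromListWith (++) [ (a, [v]) | (a, v) <- facts ]
    sameValue vs = all (== head vs) vs

-- e.g. isFunctional [("alice", 30), ("alice", 31)] == False
```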
Conference Paper
The IFO model [2, 3] is a formal database model which encompasses the fundamental structural components found in the semantic database modelling literature [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. The IFO model uses a graph-based formalism to represent three basic types of relationships between data: ISA relationships, functional relationships, and relationships arising in the construction of objects from simpler objects (e.g., CONVOYs are objects built from the simpler objects SHIPs). The presence of these types of relationships between data objects leads to intricate types of propagation when updates to the underlying data are made. This extended abstract reports on a development presented in [3], which formally articulates a coherent semantics for updates and update propagation in the IFO model.
Conference Paper
This paper introduces the data structures and graphics-based user interface to the Engineering Support Environment (ESE). This proposed system combines and extends principles of database management, configuration management, and graphics interfaces to support the engineering life cycle activities in connection with the design, analysis, and manufacture of physical objects. ESE can be broken down to three major components: the design-related information storage component, the design data storage component, and a graphics-based user interface. Design-related information is represented in ESE using the semantically oriented IFO database model. Design data in this system is represented using AND/OR DAGs, which are extended in ESE to store historical design information and to provide sophisticated configuration management capabilities. Access to these components is provided through a multi-frame, graphics-based user environment which supports rich browsing capabilities, including interactive data updates during browsing sessions.
Conference Paper
Full-text available
The focus of this paper is on how Active KDL can be used to provide a very powerful simulation support environment. Active KDL (Knowledge/Data Language) is an object-oriented database programming language, which provides access to integrated model, knowledge, and data bases. Simulation inputs and outputs can be stored by Active KDL since it supports complex objects. More importantly, Active KDL also allows users to specify rules to capture heuristic knowledge and methods to specify procedural behavior. Finally, Active KDL provides a simple mechanism for specifying concurrent execution, namely tasks embedded in active objects. These facilities provide a powerful mechanism for building simulation models out of pre-existing model components. These capabilities provide a tight coupling between a SIMODULA-like simulation system and a knowledge/database system, supporting query driven simulation, where model instantiation is used for information generation.
Article
Cactis is an object-oriented, multiuser DBMS developed at the University of Colorado. The system supports functionally-defined data and uses techniques based on attributed graphs to optimize the maintenance of functionally-defined data. The implementation is self-adaptive in that the physical organization and the update algorithms dynamically change in order to reduce disk access. The system is also concurrent. At any given time there are some number of computations that must be performed to bring the database up to date; these computations are scheduled independently and are performed when the expected cost to do so is minimal. The DBMS runs in the Unix/C Sun workstation environment. Cactis is designed to support applications that require rich data modeling capabilities and the ability to specify functionally-defined data, but that also demand good performance. Specifically, Cactis is intended for use in the support of such applications as VLSI and PCB design, and software environments.
Article
The IFO data model was proposed by Abiteboul and Hull [Abiteboul 87] as a formalized semantic database model. It has been claimed by the authors that the model subsumes the Relational model [Codd 70], the Entity-Relationship model [Chen 76], the Functional Data Model [Kerschberg 76] and virtually all of the structured aspects of the Semantic Data Model [Hammer 81], the INSYDE Model [King 85], and the Extended Semantic Hierarchy Model [Brodie 84]. This paper examines the IFO data model as presented in [Abiteboul 87], compares it to other models, and thus concludes that the IFO data model is actually a subset of the Semantic Data Model proposed by Hammer in [Hammer 81]. The paper also shows that the IFO data model has failed to support concepts that are essential to both the E-R model and the Semantic Data Model, which are claimed to be subsumed by the IFO model. Section 2 discusses the three IFO constructs: objects, fragments, and relationships. The mapping of these constructs to constructs in the Semantic Data Model is established as an informal proof of the result that the IFO model is subsumed by the SDM. Section 3 lists constructs supported by the Entity-Relationship model [Chen 76, Teorey 86] as well as constructs supported by SDM [Hammer 81] that the IFO data model fails to support.
Conference Paper
Four database modeling paradigms are compared along a number of dimensions, including their treatment of object identity; issues of redundant structure and/or data; the notions of type and class; their treatment of sets and context-dependent data; and their treatment of ISA relationships. The modeling paradigms are: complex object types (including nested relations); semantic models; complex object models using object identifiers; and the model of the conceptual language Galileo. The presentation is largely informal, with a focus on philosophic issues.
Article
Future database systems must feature extensible data models and data languages in order to accommodate the novel data types and special-purpose operations that are required by nontraditional database applications. In this paper, we outline a functional data model and data language that are targeted for the semantic interface of GENESIS, an extensible DBMS. The model and language are generalizations of FQL [11] and DAPLEX [40], and have an implementation that fits ideally with the modularity required by extensible database technologies. We explore different implementations of functional operators and present experimental evidence that they have efficient implementations. We also explain the advantages of a functional front-end to ¬1NF databases, and show how our language and implementation are being used to process queries on both 1NF and ¬1NF relations.
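As a rough illustration of the flavor of such a functional language (a hedged sketch of my own, not the GENESIS operators themselves), attributes can be treated as functions and queries assembled from a few combinators over streams of entities, here approximated by Haskell lists.

```haskell
-- Attributes are functions; queries are built from combinators over streams.
data Course = Course { title :: String, credits :: Int }

-- "Extension" applies an attribute function across a stream;
-- "restriction" keeps only the members satisfying a predicate.
extend :: (a -> b) -> [a] -> [b]
extend = map

restrict :: (a -> Bool) -> [a] -> [a]
restrict = filter

-- Titles of all courses worth more than three credits.
bigCourseTitles :: [Course] -> [String]
bigCourseTitles = extend title . restrict ((> 3) . credits)
```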
Article
Motivated by Geographical Information Systems (GIS) applications, we introduce a new data model for mutually nested objects. Combining features from relational as well as object-oriented database systems, our data model is efficient for queries involving multiple access patterns. By maintaining symmetrical relationships between entities, we allow nesting to be formulated dynamically at the query level rather than the data model level, thus dissociating the data structure from the access method. In addition, we do not favor one access pattern over another by clustering data in one particular manner, giving therefore flexibility and performance to our system. In order to integrate nesting into the relational algebra, we propose an extension to Relix, which is an academic database management system. We then show how those modifications can be used in a wide variety of queries, provide an algorithm to translate nested queries into flat relational expressions, and finally show that similar improvements can be applied to SQL allowing nested queries to be expressed more naturally.
Conference Paper
A novel approach is described for building intelligent information systems (or knowledge-base management systems). The approach utilizes the knowledge data language, which is a schema specification language developed for the knowledge/data model. The model, referred to as a hypersemantic data model, captures both knowledge semantics, as specified in knowledge-based systems, and data semantics, as represented by semantic data models. Hypersemantic data models facilitate the incorporation of knowledge in the form of heuristics, uncertainty, constraints and other artificial intelligence concepts, together with object-oriented concepts found in semantic data models. The unified knowledge/data modeling features and constructs of the language are used to develop a prototype knowledge base management system, the KDL-advisor.
Article
In this paper a new approach to semantic modeling and view integration is proposed. The underlying data model is graph-based yet completely formalized so that graphical schemas themselves are precise specifications suitable for implementation. The formalism is a kind of graph-object-based generalization of the relational data model: analytical assertions about elements (values) are replaced by synthetic assertions about diagrams of sets (object classes) and functions (references); correspondingly, queries are operations on such diagrams. On the other hand, the approach is an adaptation of a specification framework, familiar in mathematical category theory, based on the so-called sketches. On this ground, a new approach to view integration is suggested. Its distinctive characteristic is the way of specifying correspondence between different views of the same universe of discourse. The specifications are formalized and based on equations, which reduces the integration task to a se...
Conference Paper
Clustering is an effective mechanism for retrieving complex objects. Many object-oriented database management systems have suggested variant clustering schemes to improve their performance. Two issues may compromise the effectiveness of a clustered structure, i.e., object updates and multiple relationships. Updates may destroy the initially clustered structure, and in a multiple relationship environment, clustering objects based on one relationship may sacrifice others. This paper investigates the updating effects and suggests a dynamic reclustering scheme to reorganize related objects on the disk. A cost model is introduced to estimate the benefit and overhead of reclustering. Reorganizations are performed only when the overhead can be justified. For environments in which multiple relationships among objects exist, the paper proposes a leveled clustering scheme to order related objects into a clustering sequence. Our simulation results show that the leveled clustering scheme has a better access time compared with a single-level clustering scheme.
Article
Traditional data modelling techniques of DSS and modern knowledge representation methodologies of ES are inconsistent. A new unifying model is needed for integrating the two systems into a unified whole. After a brief review of data modelling techniques and knowledge representation methodologies, the unifying model will be described and integrated systems will be used to exemplify the usefulness of the unifying model.
Article
Tools and methods that transform higher level formalisms into logical database designs become very important. Rarely if ever do these transformations take into account integrity constraints existing in the “conceptual” model. Yet these become essential if one is forced to introduce redundancies for reasons of e.g. query efficiency. We therefore adopted the Binary Relationship Model (or “NIAM”) that is rich in constraints and built a flexible tool, RIDL*, that graphically captures NIAM semantic networks, analyzes them and then transforms them into relational designs (normalized or not), under the control of a database engineer assisted by a rule base. This is made possible by a rule-driven implementation of a new, stepwise synthesis process, and its benefits are illustrated by its treatment of e.g. subtypes. RIDL* is operational at several industrial sites in Europe and the U.S. on sizeable database projects.
Chapter
This introductory chapter begins by arguing, by means of examples, that the basic idea of a function is quite straightforward and intuitive, when stripped of its mathematical jargon and notation. We further argue that it is particularly applicable to the management of data. We discuss the role and importance of data models and argue that, with modern computer technology, the functional data model has come of age. We then discuss the advantages of integrating database management software with functional programming and the scope this gives for providing flexible user interfaces and for calculation. We then discuss the role and significance of the list comprehension, developed originally in the context of functional programming, but now seen in the wider context of performance optimisation and the integration of internet data. There follows an introduction to new research which is applying the functional approach to web data that is described by an RDF schema. Finally we present a survey of significant previous work and an extensive bibliography. Our aim is that this chapter will aid the reader in understanding the chapters that will follow.
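To illustrate the point about list comprehensions (an assumed toy example of my own, not taken from the chapter), modeling single-valued attributes as Haskell functions lets a comprehension read directly as a database query over the functional schema.

```haskell
-- Entities with single-valued attributes modeled as functions (record fields).
data Staff   = Staff   { staffName :: String, room :: Int }
data Student = Student { studentName :: String, supervisor :: Staff }

students :: [Student]
students = [ Student "Ann" (Staff "Dr X" 101)
           , Student "Bob" (Staff "Dr Y" 202) ]

-- "Names of students whose supervisor sits in room 101", as a comprehension.
inRoom101 :: [String]
inRoom101 = [ studentName s | s <- students, room (supervisor s) == 101 ]
```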
Chapter
The functional approach to computing has an important role in enabling the Internet-based applications such as the Semantic Web, e-business, web services, and agents for managing the evolving distributed knowledge space. This chapter examines the research issues and trends in these areas, and focuses on how the Functional Data Model and a functional approach can contribute to solving some of the outstanding problems. Specifically, the chapter addresses the role of ontologies and the meta-tagging and indexing of resources; the role of search technologies; the Semantic Web and web services; intelligent agents; and knowledge management.
Article
We describe how to express constraints in a functional (semantic) data model, which has a working implementation in an object database. We trace the development of such constraints from being integrity checks embedded in procedural code to being something declarative and self-contained, combining data access and computation, that can be moved around into other contexts in intelligent distributed systems. We see this as paralleling and extending the original vision of functions as values in functional programming systems. It is greatly helped by using a referentially transparent functional formalisation. We illustrate these ideas by showing how constraints can move around within database systems (Colan & Angelic Daplex), being transformed for various uses, or even moved out into other systems and fused into a specification for a configuration problem. We look forward to future directions involving Agents.
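The following Haskell sketch (illustrative only; it is not Colan or Angelic Daplex syntax, and the entities are hypothetical) shows the underlying idea of a constraint as a self-contained, referentially transparent value: a predicate over functional attributes that can be evaluated locally or handed to another system unchanged.

```haskell
-- A constraint is just a pure predicate over the functional schema.
type Constraint a = a -> Bool

data Module = Module { moduleCredits :: Int }
data Degree = Degree { modules :: [Module], minCredits :: Int }

-- "Every degree must offer at least its minimum number of credits."
enoughCredits :: Constraint Degree
enoughCredits d = sum (map moduleCredits (modules d)) >= minCredits d

-- Because the constraint is a value, checking it here or shipping it to a
-- remote configurator is the same operation applied in a different context.
checkAll :: Constraint a -> [a] -> Bool
checkAll c = all c
```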
Article
The goal of the Jupiter system is to provide interoperability services to a federation of autonomous and possibly heterogeneous database systems. Interoperability is the ability to connect software systems in a manner which facilitates transparent access to enterprise data resources, which may be distributed among legacy applications across multiple heterogeneous software and hardware platforms. The participants are free to withdraw at any time, with the result that global integration of the participant schemas is not seen as a feasible solution. Our solution is to provide a multidatabase layer and a suitable interoperator language to allow information providers to construct loosely-coupled interoperable autonomous information systems. The enormous problems faced by information system providers are compounded by the difficulty of integrating key applications which have been developed using traditional methods with applications developed using, for example, object-oriented technology. A goal of our research is to address the area of multidatabase interoperability and, to this end, we have constructed the Jupiter prototype.
Article
Full-text available
This effort was to explore data and knowledge based processing systems with particular attention to: (1) Adequacy of the Information Resource Dictionary Systems (IRDS) to support advanced systems development projects: (2) Conceptual architecture for an Active Data/Knowledge Dictionary System; (3) Exploring in detail intelligent query formulation and processing in heterogeneous database systems; coordinated problem solving with multiple heterogeneous knowledge sources; schema evolution in object-oriented database; Hypermedia requirements for active dictionaries, and providing a better understanding of the role of metadata in the management of knowledge-intensive object-oriented applications.
Conference Paper
The Data Base Management System is now a well established part of information systems technology, but the many architectures and their plethora of data models are confusing to both the practitioner and researcher. In the past, attempts have been made to compare and contrast some of these systems, but the greatest difficulty arises in seeking a common basis. This paper attempts to show how a generalized data system (GDS), represented by two different models, could form such a basis; it then proposes that data policy definitions can restrict the GDS to a specialized model, such as a relational or DBTG-like model. Finally, it proposes that this concept forms a better basis for data structure design of specific system applications.
Article
At last year's Command & Control Research & Technology Symposium (CCRTS) at the Naval Postgraduate School in Monterey, CA, Dr. Richard E. Hayes from EBR and Dr. David T. Signori Jr. from RAND reported on their activities in the Information Superiority Metrics Working Group. Their presentation mentioned the OODA Loop (Observe, Orient, Decide, Act). In the late 1970s, Colonel John Boyd, USAF, wanted to understand why U.S. fighter pilots consistently won air combat engagements with their F-86 fighter aircraft in combat over Korea against pilots that flew MiG-15 aircraft with better maneuverability. His work came to be known as the OODA Loop or Boyd Cycle. For these authors, the thought that a study on how fighter pilot decision-making in air-to-air combat during the Korean War had crept into a presentation on information superiority at the turn of the century was intriguing and suggested a closer look. What we discovered were logical stepping stones that led us from "fighter pilot decision-making" to previous discussions on "information overload" and from that to another CCRTS presentation on the "cognitive hierarchy." The final stepping stone for this paper was the authors' ability to mature this stepping stone process into a discussion of the methodology, applications and uses of Object-Oriented Analysis and Design (OOA&D) techniques to solve the problem of information overload. The authors have long been advocates of common shared data for the description and assessment of architectures, interoperability and information assurance. This paper continues that advocacy.
Article
The proliferation of desktop computing has once again rekindled the interest in making computerized tools available to managers and other decision makers. This paper elaborates on a model that integrates data, knowledge, and model management and shows how decision support systems (DSSs) can be extended to support managers in a truly novel way. The model, the Knowledge/Data Model (KDM), is explained and the significance of its applicability to the management of data, knowledge, and models is illustrated through several examples. KDM continues to evolve and is being applied to domains from computer chip design to production and inventory management systems.
Chapter
Full-text available
Development of today's advanced applications is increasingly being accomplished using multi-faceted modeling. For example, the areas of simulation and workflow modeling generally need data modeling as a foundational capability. In addition, simulation modeling and workflow modeling can be used together, synergistically. Based on the experience of the LSDIS group in developing systems and models, we have found that establishing rich linkages between disparate models works better than having one comprehensive unified model. In addition, we agree with the consensus that two-dimensional models are generally considered to be easier to create and understand than one-dimensional models. Furthermore, just as richly linked text is referred to as hyper-text, richly linked diagrams may be referred to as hyper-diagrams. Two modeling toolkits, METEOR Designer and the JSIM Modeling Toolkit, illustrate the advantages of using such approaches.
Article
This paper presents a methodology for logical design of relational schemas and integrity constraints using semantic binary schemas. This is a top-down methodology. In this methodology, a conceptual description of an enterprise is designed using a semantic binary model. Then, this description is converted into the relational database design. The paper also describes a tool which automates all the busy work of the methodology and provides graphic output. With respect to the intelligent design decisions, the tool accepts instructions from its user, who is a database designer, or, when the user defaults, makes decisions itself based on 'rule-of-thumb' principles.
Article
This paper describes a new area of data modeling, a model in this new area, and the schema specification language for the model. The Knowledge/Data Model captures both knowledge semantics, as specified in Knowledge Based Systems, and data semantics, as represented by Semantic Data Models. The Knowledge/Data Model is an instance of a new class of models, called hyper-semantic data models, which facilitate the incorporation of knowledge in the form of heuristics, uncertainty, constraints and other Artificial Intelligence concepts, together with object-oriented concepts found in Semantic Data Models. The unified knowledge/data modeling features are provided via the constructs of the Knowledge/Data Language.
Article
Object orientation provides a more direct and natural representation of real-world problems. Object-oriented programming techniques allow the development of extensible and reusable modules. The object-oriented concepts are abstract data typing, inheritance, and object identity. Combining object-oriented concepts with database capabilities such as persistence, transactions, concurrency, query, etc. results in powerful systems called object-oriented databases. Object-oriented databases have become the dominant post-relational database management system and are a necessary evolutionary step towards the more powerful intelligent databases. Intelligent databases tightly couple database and object-oriented technologies with artificial intelligence, information retrieval, and multi-media data-manipulation techniques.
Conference Paper
Fundamental notions of relative information capacity between database structures are studied in the context of the relational model. Four progressively less restrictive formal definitions of "dominance" between pairs of relational database schemata are given. Each of these is shown to capture intuitively appealing, semantically meaningful properties which are natural for measures of relative information capacity between schemata. Relational schemata, both with and without key dependencies, are studied using these notions. A significant intuitive conclusion concerns the informal notion of relative information capacity often suggested in the conceptual database literature, which is based on accessibility of data via queries. Results here indicate that this notion is too general to accurately measure whether an underlying semantic connection exists between database schemata. Another important result of the paper shows that under any natural notion of information capacity equivalence, two relational schemata (with no dependencies) are equivalent if and only if they are identical (up to re-ordering of the attributes and relations). The approach and definitions used here can form part of the foundation for a rigorous investigation of a variety of important database problems involving data relativism, including those of schema integration and schema translation.
Conference Paper
A new, formally defined database model is introduced which combines fundamental principles of "semantic" database modeling in a coherent fashion. The model provides mechanisms for representing structured objects and functional and ISA relationships between them. It is anticipated that the model can serve as the foundation for a theoretical investigation into a wide variety of fundamental issues concerning the logical representation of data in databases. Preliminary applications of the model include an efficient algorithm for computing the set of object types which can occur in a given entity set, even in the presence of a complex set of ISA relationships. The model can also be applied to precisely articulate "good" design policies.
Conference Paper
Cacti is a distributed system designed to support derived data in distributed database environments. A series of novel access and optimization policies are used to reduce I/O costs, support the transparent distribution of data, automatically migrate and replicate data, execute computations in parallel, cluster or reblock data, and perform speculative evaluation of derived data. The behavior of the system is dynamically modified on the basis of heuristics, predictive metrics, and user-supplied hints which form a central theme of self-adaptive optimization. In general, Cacti alters its behavior — both locally at a single node and globally across the distributed system — according to the current usage of resources, and typical usage patterns over time.