Article

Integrated Risk Components in Data Modeling for Risk Databases

Article
Full-text available
Spatial data and related technologies have proven to be crucial for effective collaborative decision-making in disaster management. However, there are currently substantial problems with the availability, access and usage of reliable, up-to-date and accurate data for disaster management. This is a very important aspect of disaster response, as timely, up-to-date and accurate spatial data describing the current situation are paramount to successfully responding to an emergency. This includes information about available resources, access to roads and damaged areas, required resources and required disaster response operations, all of which should be available and accessible within a short period of time. Any problem or delay in data collection, access, usage and dissemination has a negative impact on the quality of decision-making and hence on the quality of disaster response. It is therefore necessary to utilize appropriate frameworks and technologies to resolve current spatial data problems for disaster management. This paper addresses the role of Spatial Data Infrastructure (SDI) as a framework for the development of a web-based system as a tool for facilitating disaster management by resolving current problems with spatial data. It is argued that the design and implementation of an SDI model and consideration of SDI development factors and issues, together with the development of a web-based GIS, can assist disaster management agencies to improve the quality of their decision-making and to increase efficiency and effectiveness at all levels of disaster management activity. The paper is based on an ongoing research project on the development of an SDI conceptual model and a prototype web-based system that can facilitate the sharing, access and usage of spatial data in disaster management, particularly disaster response.
Article
Full-text available
Most database textbooks targeting database design and implementation for information systems curricula support the big database systems (Oracle, MS SQL Server, DB2, etc.). With respect to the most important aspects of database management, design and implementation, one should not ignore MySQL, the most widely used open-source relational database management system, developed in Sweden in 1995 and now owned by Oracle Corporation. This paper presents two database-design learning cases. The first deals with a forward-engineering technique used to transform a data model into a physical database. The second shows how to reverse-engineer an existing database into a data model. Both cases utilize MySQL Workbench and MySQL Community Server. By contrast, Microsoft Access can only reverse-engineer a physical database into its relationship diagram.
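The two directions described in the abstract can be sketched in miniature. The following is an illustration, not the paper's own material: it forward-engineers a hypothetical two-entity model (Customer 1:N Order) into a physical database, then reverse-engineers the schema back out of the catalog. SQLite (via Python's standard sqlite3 module) stands in for MySQL here so the sketch needs no server.

```python
import sqlite3

# Forward engineering: each entity becomes a table; the 1:N relationship
# becomes a foreign key on the "many" side. Entity names are illustrative.
DDL = """
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);
CREATE TABLE "order" (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    total       REAL NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)

# Reverse engineering in miniature: read the schema back out of the
# system catalog, much as MySQL Workbench reads information_schema.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['customer', 'order']
```

In MySQL Workbench the same round trip is done graphically: "Forward Engineer" emits the DDL from the EER diagram, and "Reverse Engineer" rebuilds a diagram from a live schema.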
Article
Full-text available
A data model is a plan for building a database and is comparable to an architect's building plans. There are two major methodologies used to create a data model: the Entity-Relationship (ER) approach and the Object Model. This paper discusses only the object model approach. The goal of the data model is to ensure that all data objects required by the database are completely and accurately represented. Ontologies describe the objects of interest (the universe of discourse). The objective of this paper is to compare object models with ontology models. There are similarities between objects in object models and concepts, sometimes called classes, in ontologies, and an ontology can help in building an object model. The object model is the center of data modeling; ontology, on the other hand, provides the concepts that form the basis of a knowledge base. Because ontologies are closely related to modern object-oriented software design, it is a reasonable attempt to adapt existing object-oriented software development methodologies for the task of ontology development. Selected approaches originating from research in artificial intelligence, knowledge representation and object modeling are presented in this paper. Some of the issues discussed concern the connection between the two; others address their similarities and differences directly. The paper also presents the available tools, methods, procedures, languages and reusability that show the cooperation between object modeling and ontologies.
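The correspondence the abstract describes can be made concrete in a few lines. This is a hedged sketch, not the paper's method: each ontology concept becomes a class, each is-a relation becomes inheritance, and each concept property becomes a class attribute. The concept names (Hazard, Flood) and the dictionary encoding are illustrative assumptions.

```python
# A tiny ontology fragment: concepts with is-a relations and properties.
ontology = {
    "Hazard": {"is_a": None, "properties": ["name"]},
    "Flood": {"is_a": "Hazard", "properties": ["water_level"]},
}

def build_classes(onto):
    """Derive an object model (Python classes) from an ontology fragment:
    concept -> class, is-a -> inheritance, property -> class attribute."""
    classes = {}
    for name, spec in onto.items():  # assumes parents are listed before children
        base = classes[spec["is_a"]] if spec["is_a"] else object
        classes[name] = type(name, (base,), {"properties": spec["properties"]})
    return classes

classes = build_classes(ontology)
print(issubclass(classes["Flood"], classes["Hazard"]))  # True
```

The point of the sketch is the structural mapping itself: the subsumption hierarchy of the ontology reappears unchanged as the inheritance hierarchy of the object model.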
Article
Full-text available
We compare the EER and OO data models from the point of view of design quality. Quality is measured in terms of (a) correctness of the conceptual schemas being designed, (b) time to complete the design task, and (c) designers' preferences between the models. Results of an experimental comparison of the two models reveal that the EER model surpasses the OO model for designing unary and ternary relationships, that it takes less time to design EER schemas, and that the EER model is preferred by designers. We conclude that even if the objective is to implement an OO database schema, the recommended procedure is to: (1) create an EER conceptual schema, (2) map it to an OO schema, and (3) augment the target schema with behavioral constructs that are unique to the OO approach.
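Steps (2) and (3) of the recommended procedure can be sketched on a hypothetical example (the entities Employee/Project and the M:N relationship WorksOn are illustrative, not from the study): the EER relationship with its attribute maps to a class of its own, and a behavioral construct is then added to the target OO schema.

```python
from dataclasses import dataclass, field

# Step (1), assumed done: an EER schema with entities Employee and
# Project, and an M:N relationship WorksOn carrying the attribute hours.

# Step (2): map the EER schema to an OO schema. The M:N relationship
# with an attribute becomes a class in its own right.
@dataclass
class Project:
    name: str

@dataclass
class WorksOn:
    project: Project
    hours: float

@dataclass
class Employee:
    name: str
    assignments: list = field(default_factory=list)  # WorksOn links

    # Step (3): augment the target schema with behavior unique to OO.
    def total_hours(self) -> float:
        return sum(a.hours for a in self.assignments)

e = Employee("Ada")
e.assignments.append(WorksOn(Project("Risk DB"), 12.5))
print(e.total_hours())  # 12.5
```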
Article
The ability to perceive and manipulate patterns is a fundamental part of human intelligence in general, and of scientific discovery in particular. In scientific discovery a pattern-characterizing concept must be induced from data, tested against subsequent data, and reformulated until it explains the data observed. This research develops and tests one approach to modeling the process of pattern induction and reformulation as data continually become available. In this approach each new piece of data contributes to the construction of a data model, a description of the observed data that organizes or systematizes them. The data model serves as a bridge between the raw data and the world of pattern concepts, eventually suggesting and providing evidence in support of an explanatory conceptual model of the pattern. The data modeling approach is applied to the domain of integer sequence patterns, a classic inductive domain. Strengths and weaknesses of the approach and comparisons to human performance are made.
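One classic data model for integer sequences, useful as an illustration of the bridging idea above (this is a generic sketch, not the system described in the paper), is the table of successive differences: each new row organizes the raw data, and a constant row suggests an explanatory conceptual model such as "polynomial of degree d".

```python
def difference_model(seq):
    """Build a data model of an integer sequence as rows of successive
    differences, stopping when a row is constant or has one element."""
    rows = [list(seq)]
    while len(rows[-1]) > 1 and len(set(rows[-1])) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return rows

model = difference_model([1, 4, 9, 16, 25])  # the squares
degree = len(model) - 1  # constant second differences -> degree 2
print(degree)  # 2
```

Here the data model (the difference table) mediates between the raw observations and the pattern concept: the constant row [2, 2, 2] is the evidence supporting the conceptual model n².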
Harmonizing forest-related definitions for use by various stakeholders
  • Food and Agriculture Organization
Food and Agriculture Organization, 2002. Harmonizing forest-related definitions for use by various stakeholders. FAO, Rome.
A concept of models for supply chain speculative risk analysis and management
  • E Klosa
Klosa E., 2013. A concept of models for supply chain speculative risk analysis and management, Journal of Economics and Management, 12, 46-59.
Data Modeling and Database Design
  • N S Umanath
  • R W Scamell
Umanath N. S. and Scamell R. W., 2007. Data Modeling and Database Design, Thomson Learning, Boston, MA, USA.
Terminology on Disaster Risk Reduction
  • UNISDR
UNISDR, 2009. Terminology on Disaster Risk Reduction. UNISDR, Geneva, Switzerland.
Developing High Quality Data Models
  • M West
  • J Fowler
West, M. and Fowler, J., 1996. Developing High Quality Data Models. The European Process Industries STEP Technical Liaison Executive (EPISTLE).