How do we measure and improve the quality of a hierarchical ontology?
ABSTRACT: Hierarchical ontologies enable organising information in a form understandable by both humans and machines, but constructing them for reuse and maintainability remains difficult. The supporting tools available often lack a formal methodological underpinning, and ontology developers are not supported by any concomitant quality metrics. This paper presents a formal underpinning that provides quality metrics for a taxonomy (a hierarchical ontology) and proposes a methodology for the semi-automatic building of maintainable taxonomies. Users provide the terms used to describe the different ontological elements, as well as their attributes and ranges of values. The methodology uses the formalised metrics to assess the quality of the users' input and proposes changes according to given quality constraints. The paper illustrates the metrics and the methodology by constructing and repairing two well-known, medium-sized taxonomies.
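The paper's actual metrics are not reproduced in the abstract; as a purely hypothetical illustration of the kind of structural quality measure such a methodology might formalise, the sketch below computes the maximum depth and average fan-out of a small taxonomy (all names and data are invented):

```python
from statistics import mean

# Hypothetical taxonomy, represented as parent -> list of children.
taxonomy = {
    "thing": ["animal", "plant"],
    "animal": ["dog", "cat", "bird"],
    "plant": ["tree"],
}

def max_depth(tax, node="thing"):
    """Number of levels on the longest root-to-leaf path."""
    children = tax.get(node, [])
    if not children:
        return 1
    return 1 + max(max_depth(tax, c) for c in children)

def avg_fanout(tax):
    """Mean number of children over all non-leaf nodes."""
    return mean(len(cs) for cs in tax.values())

print(max_depth(taxonomy))   # 3
print(avg_fanout(taxonomy))  # 2.0
```

A methodology like the one described could compare such measures against given quality constraints (e.g. a maximum fan-out) and suggest restructuring where they are violated.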
ABSTRACT: Codifying expert domain knowledge is a difficult and expensive task. To evaluate the quality of the outcome, the same domain expert, or a colleague of similar expertise, is often relied on, either to undertake a direct evaluation of the knowledge-based system or, indirectly, to prepare appropriate test data. During an incremental knowledge acquisition process, a data stream is available, and the knowledge base is observed and amended by an expert each time it produces an error. Using the recorded performance of the system, we propose an evaluation process that estimates its effectiveness as it evolves. We instantiate this process for an incremental knowledge acquisition methodology, Ripple Down Rules, and estimate the value added by each knowledge base update. Using these values, decision makers in the organisation employing the knowledge-based information system can apply a cost-benefit analysis to the continuation of the incremental knowledge acquisition process and determine when this process, which involves keeping an expert on-line, should be terminated. As a result, the expert is not kept on-line longer than is absolutely necessary, and a major expense of deploying the information system, the cost of keeping a domain expert on-line, is reduced.
Knowledge and Information Systems 35(1).
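The stop-or-continue decision described above can be sketched in a few lines. This is a minimal, hypothetical illustration, assuming the added value of each knowledge-base update is approximated by the error-rate reduction it yields; the function names, the stopping rule, and all numbers are invented, not the paper's:

```python
# Hypothetical cost-benefit sketch: estimate the marginal value of each
# knowledge-base update from recorded error counts, and stop keeping the
# expert on-line once that value falls below the cost of a session.

def marginal_values(errors_per_period):
    """Error-count reduction achieved by each successive KB update."""
    return [prev - cur for prev, cur in zip(errors_per_period, errors_per_period[1:])]

def should_stop(errors_per_period, cost_per_update):
    """Terminate acquisition when the latest gain no longer covers its cost."""
    values = marginal_values(errors_per_period)
    return bool(values) and values[-1] < cost_per_update

# Errors observed per period as the knowledge base evolves (invented data).
history = [40, 25, 15, 10, 8]
print(marginal_values(history))                  # [15, 10, 5, 2]
print(should_stop(history, cost_per_update=3))   # True: last gain (2) < cost (3)
```

The diminishing returns visible in the marginal values are exactly what such a cost-benefit analysis would look for before deciding to release the expert.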
ABSTRACT: This work contributes to the development of ontology-based user models, devised as overlays over conceptual hierarchies derived from domain ontologies. We tackle the problem of propagating user interests in such a conceptual hierarchy. In addition to accounting for the hierarchical structure of the domain and the type and amount of feedback provided by the user, the principal contributions of this work are: (i) horizontal propagation, which enables propagation among siblings in addition to vertical propagation among ancestors and descendants; (ii) anisotropic vertical propagation, which permits user interests to be propagated differently upward and downward; (iii) context-dependence, which introduces the possibility of propagating differently according to the various contexts of specific applications; (iv) support for dynamic ontology maintenance, i.e. preserving user interest values when adding or removing a node from the conceptual hierarchy. Our approach supports finer recommendation modalities and contributes to resolving the cold-start problem, since it allows propagation from a small number of initial concepts to other related domain concepts by exploiting the conceptual hierarchy of the domain. A field evaluation confirmed the effectiveness of our approach compared with traditional vertical propagation.
Information Sciences 11/2013; 250:40-60.
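The propagation scheme outlined in contributions (i) and (ii) can be sketched as follows. This is a hypothetical toy version, not the paper's formulation: the hierarchy, the attenuation factors (`up`, `down`, `side`), and their values are all assumptions chosen only to show anisotropic vertical plus horizontal spreading:

```python
# Hypothetical interest propagation in a small conceptual hierarchy:
# feedback on one concept is spread to ancestors, descendants, and siblings
# with separate attenuation factors (anisotropic vertical + horizontal).

parents = {"dog": "animal", "cat": "animal", "animal": "thing"}
children = {"thing": ["animal"], "animal": ["dog", "cat"]}

def propagate(interests, node, feedback, up=0.5, down=0.3, side=0.2):
    interests[node] = interests.get(node, 0.0) + feedback

    # Upward to ancestors, attenuated by `up` at each level.
    p, w = parents.get(node), feedback * up
    while p is not None:
        interests[p] = interests.get(p, 0.0) + w
        p, w = parents.get(p), w * up

    # Downward to descendants, attenuated by `down` at each level
    # (a different factor than `up`: anisotropic vertical propagation).
    def push_down(n, w):
        for c in children.get(n, []):
            interests[c] = interests.get(c, 0.0) + w
            push_down(c, w * down)
    push_down(node, feedback * down)

    # Horizontal propagation to siblings, attenuated by `side`.
    for s in children.get(parents.get(node), []):
        if s != node:
            interests[s] = interests.get(s, 0.0) + feedback * side

interests = {}
propagate(interests, "dog", 1.0)
print(interests)  # dog: 1.0, animal: 0.5, thing: 0.25, cat: 0.2
```

Note how a single piece of feedback on "dog" gives non-zero interest to the sibling "cat", which purely vertical propagation would leave untouched; this is the mechanism the abstract credits with mitigating the cold-start problem.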