Figure 5
Screenshot of the result form of MaF. Results are generated in a standard format proposed by the OAEI so that they can be used by other software tools and applications
Source publication
In this work, we present our experience when developing the Matching Framework (MaF), a framework for matching ontologies that allows users to configure their own ontology matching algorithms and allows developers to perform research on new complex algorithms. MaF provides numerical results instead of the logic results provided by other kinds of alg...
Context in source publication
Context 1
... developers are those who use the whole functionality of the framework to develop new matching algorithms. The functional specification for an algorithm developer is different from that of an end-user. We have developed MaF using Eclipse, so it would not be difficult to extend it. Moreover, Table 1 shows a summary of the features initially provided in MaF: all the provided algorithms, the different kinds of combinations, and the different thresholding techniques.

To test the usability of our framework we have borrowed a methodology from Brooke [Brooke, 1996]: we asked several undergraduate and graduate students in the field of Computer Science to work with several ontology matching frameworks and to rate a number of key points concerning each of them. In addition to MaF, we considered three other ontology matching frameworks for comparison. We chose COMA++ (Web Edition) [Aumueller et al., 2005], Ontobuilder [Roitman and Gal, 2006], and FOAM [Ehrig and Sure, 2005] for the reasons advanced previously, and we asked our students to solve the OAEI benchmark using them. We did not tell the students that MaF is our software tool. It should also be taken into account that the students had a good knowledge of databases and ontologies, but most of them were not experts in the ontology matching field. The results of this experiment show that COMA++ and MaF are the tools with the highest degree of usability. For example, COMA++ is the system that students would like to use most frequently, the system that needs the least technical support, the most consistent software tool, and the system that needs the least previous knowledge to get started. However, according to our experiment, MaF is the least complex system, the easiest to use, the system with the best integration of its functions, the quickest to learn, and the least cumbersome to use. The tests give us evidence of the benefits of using MaF in matching scenarios and validate the design of our user interface. The results cannot be taken as statistically conclusive, so we will keep working on this in future work.

We have also solved a case study consisting of several tests from the OAEI Benchmark [OAEI, 2008]. This benchmark dataset offers test cases that try to measure the quality of proposed methods and tools on use cases that are common in ontology matching scenarios. Note that we solve the benchmark cases using our understanding of the problem and by appropriately selecting the matchers to address each case. Our purpose is not to compete with optimized algorithms, but to show that it is possible to use our tool for solving common scenarios. Table 2 shows several of the most representative cases of the benchmark dataset [OAEI, 2008], the configuration that we propose, and the results that we obtained. The test has been performed using concepts only.

The working mode is as follows. The process begins when the user selects the two ontologies to be processed. After that, the user has the option of defining the matching algorithms to be used from the first and second layers. Figure 4 shows the screenshot for the selection of algorithms of this kind. The second layer allows the user to choose the hybrid algorithms.
In the third layer, the composition formula and the threshold for filtering the values in the output results are defined. Finally, the tool performs the matching between the two ontologies according to these criteria. Figure 5 shows an example of the output for this step. The output follows a standard format so that it can be used as input for other ...
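The layered working mode described above can be pictured with a small sketch. The following Python fragment is only a hypothetical illustration, not MaF's actual code: the two simple matchers, the weights of the composition formula and the threshold value are assumptions chosen for the example; the resulting correspondences would then be serialized in the OAEI alignment format shown in Figure 5.

```python
# Minimal sketch of a layered matching pipeline in the spirit of MaF.
# Layer 1: simple matchers; Layer 2: their combination (hybrid matcher);
# Layer 3: composition formula plus a threshold over the numerical results.
# All names and values here are illustrative assumptions, not MaF's API.

def levenshtein_sim(a: str, b: str) -> float:
    """Normalized Levenshtein similarity in [0, 1]."""
    if a == b:
        return 1.0
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1,
                         prev[j - 1] + (a[i - 1] != b[j - 1]))
        prev = cur
    return 1.0 - prev[n] / max(m, n)

def trigram_sim(a: str, b: str) -> float:
    """Dice coefficient over character trigrams."""
    grams = lambda s: {s[i:i + 3] for i in range(max(len(s) - 2, 1))}
    ga, gb = grams(a.lower()), grams(b.lower())
    return 2 * len(ga & gb) / (len(ga) + len(gb))

def match(concepts1, concepts2, weights=(0.6, 0.4), threshold=0.8):
    """Return correspondences whose combined similarity passes the threshold."""
    matchers = (levenshtein_sim, trigram_sim)
    alignment = []
    for c1 in concepts1:
        for c2 in concepts2:
            score = sum(w * m(c1, c2) for w, m in zip(weights, matchers))
            if score >= threshold:
                alignment.append((c1, c2, round(score, 3)))
    return alignment

# Example: two tiny sets of concept labels standing in for the ontologies.
print(match(["Author", "Paper"], ["Writer", "Article", "Papers"]))
```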
Citations
... The use of frameworks to perform ontology meta-matching is not new in the area. Approaches like MaF (Martinez-Gil et al. 2012) and AML (Faria et al. 2013) emerged at a time when the volume of articles on this problem was growing year after year, as pointed out by Otero-Cerdeira et al. (2015). ...
Ontology matching has become a key issue for solving problems of semantic heterogeneity. Several researchers propose diverse techniques that can be used in distinct scenarios. Ontology meta-matching approaches are a specialization of ontology matching and have achieved good results on pairs of ontologies with different types of heterogeneities. However, developing a new ontology meta-matcher can be a costly process, and many experiments are often carried out to analyze the behavior of the matcher. This article presents a modularized framework that covers the main stages of the ontology meta-matching evaluation process. This framework aims to aid researchers in developing and analyzing algorithms for ontology meta-matching, mainly metaheuristic-based supervised and unsupervised approaches. As the main contribution of the research, the proposed framework will facilitate the evaluation of ontology meta-matching approaches; as a secondary contribution, a data provenance model that captures the main information generated and consumed throughout experiments is presented as part of the framework.
... However, there are several related works in the field of semantic similarity aggregation. For instance COMA, which provides a library of semantic similarity measures and a friendly user interface to aggregate them [13], or MaF, a matching framework that allows users to combine simple similarity measures to create more complex ones [21]. ...
Semantic similarity measurement aims to determine the likeness between two text expressions that use different lexicographies for representing the same real object or idea. There are a lot of semantic similarity measures for addressing this problem. However, the best results have been achieved when aggregating a number of simple similarity measures. This means that after the various similarity values have been calculated, the overall similarity for a pair of text expressions is computed using an aggregation function of these individual semantic similarity values. This aggregation is often computed by means of statistical functions. In this work, we present CoTO (Consensus or Trade-Off) a solution based on fuzzy logic that is able to outperform these traditional approaches.
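The aggregation step mentioned in this abstract can be made concrete with a very small sketch: several individual similarity values for the same pair of expressions are collapsed into one overall score using ordinary statistical functions. The measure names and values below are invented for illustration only.

```python
from statistics import mean, median

# Individual similarity values produced by different measures for one pair
# of text expressions (illustrative numbers, not real benchmark results).
scores = {"edit_distance": 0.72, "trigram": 0.65, "wordnet_path": 0.90}

# Common statistical aggregation functions used to obtain the overall score.
aggregations = {
    "mean":   mean(scores.values()),
    "median": median(scores.values()),
    "max":    max(scores.values()),
}
print(aggregations)  # e.g. {'mean': 0.756..., 'median': 0.72, 'max': 0.9}
```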
... A possible technique which can be used is semantic matching [29]. This process involves determining how the new information and the existing knowledge interact, how existing knowledge should be modified to accommodate the new information, and how the new information should be modified in light of the existing knowledge [32]. These techniques make it possible to go beyond the literal lexical match of words and operate at the conceptual level, so that comparing specific labels for concepts (e.g., Finance) also yields matches on related terms (e.g., Economics, Economic Affairs, Financial Affairs, etc.). ...
A fundamental challenge at the intersection of Artificial Intelligence and Databases consists of developing methods to automatically manage Knowledge Bases (KBs), which can serve as a knowledge source for computer systems trying to replicate the decision-making ability of human experts. Although most of the tasks involved in the building, exploitation and maintenance of KBs are far from trivial, significant progress has been made during the last years. However, a number of challenges remain open. In fact, some issues still have to be addressed in order to empirically prove that the technology for systems of this kind is mature and reliable.
... 8 There are a lot of semantic similarity measures for identifying semantic similarity in the biomedical field. 17 However, the best results have been achieved when aggregating a number of simple similarity measures. 2 This means that after the various similarity values have been calculated, the overall similarity for a pair of biomedical expressions is computed using an aggregation function of the individual semantic similarity values. ...
... For instance COMA++, which provides a library of semantic similarity measures and a friendly user interface to aggregate them, 2 or MaF, a matching framework that allows users to combine simple similarity measures to create more complex ones. 17 These approaches can be further improved by using weighted means where the weights are automatically computed by means of heuristic and meta-heuristic algorithms. In that case, the most promising measures receive higher weights. ...
Semantic similarity measurement of biomedical nomenclature aims to determine the likeness between two biomedical expressions that use different lexicographies for representing the same real biomedical concept. There are many semantic similarity measures that try to address this issue, and many of them represent an incremental improvement over the previous ones. In this work, we present yet another incremental solution that is able to outperform existing approaches by using a sophisticated aggregation method based on fuzzy logic. Results show that our strategy is able to consistently beat existing approaches when solving well-known biomedical benchmark data sets.
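The "weighted mean with automatically computed weights" idea in the excerpt above can be sketched as follows. The random-search tuner, the toy training pairs and the reference scores are all assumptions made for illustration; real meta-matchers use far more elaborate heuristics and training data.

```python
import random

# Toy training data: individual measure scores for some pairs, plus the
# reference (gold) similarity for each pair. All values are invented.
train = [
    ({"edit": 0.9, "trigram": 0.8, "semantic": 0.4}, 0.9),
    ({"edit": 0.2, "trigram": 0.3, "semantic": 0.9}, 0.8),
    ({"edit": 0.5, "trigram": 0.4, "semantic": 0.1}, 0.2),
]
measures = ["edit", "trigram", "semantic"]

def weighted(scores, weights):
    return sum(weights[m] * scores[m] for m in measures)

def error(weights):
    return sum((weighted(s, weights) - gold) ** 2 for s, gold in train)

# Very small random-search heuristic: sample weight vectors that sum to 1
# and keep the one with the lowest squared error on the training pairs.
random.seed(0)
best, best_err = None, float("inf")
for _ in range(5000):
    raw = [random.random() for _ in measures]
    total = sum(raw)
    w = {m: r / total for m, r in zip(measures, raw)}
    e = error(w)
    if e < best_err:
        best, best_err = w, e
print(best, best_err)
```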
... More precisely, for each CapabilityDefinition in n (line 8) it checks whether there exists a Capability on the boundaries of s such that they exactly match (lines 10-16). The comparison is performed by the match method, which checks whether a CapabilityDefinition and a Capability have the same name and type (lines 24-27). If no capability matches the capability definition under consideration, then the latter is added to a new set of unmatched CapabilityDefinitions (line 17). ...
... More precisely, for each CapabilityDefinition in n (that has not yet been matched, line 6), it checks whether there exists a Capability on the boundaries of s such that they plug-in match (lines 7-14). The comparison is performed by the match method (lines 21-29), which checks whether a Capability c has the same name as CapabilityDefinition cDef (line 22) and whether c either has the same type as or is derived from cDef (lines 23-28). If no capability matches the capability definition under consideration, then the latter is added to the (new) set of unmatched capability definitions (line 15). ...
... The matching definitions given in the previous sections may be implemented by employing ontology-based descriptions of cloud services [31]. To avoid all the ontology-related problems (such as the cross-ontology matchmaking [22,25]), in this section we propose a methodology to manually adapt unmatched plug-in ServiceTemplates so as to exactly match the target NodeTypes. Namely, we show how to exactly match target NodeTypes by non-intrusively adapting flexibly matched ServiceTemplates and by intrusively adapting white-box matched ServiceTemplates. ...
The OASIS TOSCA specification aims at enhancing the portability of cloud applications by defining a language to describe and manage them across heterogeneous clouds. A service template is defined as an orchestration of typed nodes, which can be instantiated by matching other service templates. In this paper, we define and implement the notions of exact and plug-in matching between TOSCA service templates and node types. We then define two other types of matching (flexible and white-box), each permitting larger sets of non-relevant syntactic differences to be ignored when type-checking service templates with respect to node types. The paper also describes how a service template S that plug-in, flexibly, or white-box matches a node type N can be suitably adapted so as to exactly match N.
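For the exact and plug-in matching checks described in the excerpts above, a compact sketch is given below. The class layout and the derived-from chain are assumptions used only to make the two comparisons concrete; the TOSCA implementation discussed in the paper is of course richer.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Type:
    name: str
    parent: Optional["Type"] = None  # "derived from" chain

@dataclass
class CapabilityDefinition:
    name: str
    type: Type

@dataclass
class Capability:
    name: str
    type: Type

def exact_match(cdef: CapabilityDefinition, cap: Capability) -> bool:
    """Exact matching: same name and exactly the same type."""
    return cdef.name == cap.name and cdef.type.name == cap.type.name

def plugin_match(cdef: CapabilityDefinition, cap: Capability) -> bool:
    """Plug-in matching: same name, and the capability's type is the same as
    or derived from the definition's type."""
    if cdef.name != cap.name:
        return False
    t = cap.type
    while t is not None:
        if t.name == cdef.type.name:
            return True
        t = t.parent
    return False

# Illustrative use: a Capability whose type derives from the required one
# plug-in matches but does not exactly match.
base = Type("Endpoint")
spec = Type("HttpsEndpoint", parent=base)
cdef = CapabilityDefinition("endpoint", base)
cap = Capability("endpoint", spec)
print(exact_match(cdef, cap), plugin_match(cdef, cap))  # False True
```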
... These methods come from the field of semantic similarity aggregation. For instance COMA, which provides a library of semantic similarity measures and a friendly user interface to aggregate them [11], or MaF, a matching framework that allows users to combine simple similarity measures to create more complex ones [17]. ...
... In that case, the most promising measures receive higher weights. This means that all the effort is focused on obtaining more complex weighted means that, after some training, are able to recognize the most important atomic measures for solving a given problem [17]. There are two major problems that make these approaches not very appropriate in real environments. The first is that these techniques require a lot of training effort. ...
Semantic similarity measurement aims to determine the likeness between two text expressions that use different lexicographies for representing the same real object or idea. In this work, we describe the way to exploit broad cultural trends for identifying semantic similarity. This is possible through the quantitative analysis of a vast digital book collection representing the digested history of humanity. Our research work has revealed that appropriately analyzing the co-occurrence of words in some periods of human literature can help us to determine the semantic similarity between these words by means of computers with a high degree of accuracy.
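The co-occurrence idea in the abstract above can be illustrated with a very small sketch: counting how often two words appear together in the same text fragments, normalized by their individual frequencies (a PMI-style score). The tiny corpus and the scoring choice are assumptions for illustration, not the method actually used in that work.

```python
import math

# Toy "corpus": each entry stands for a text fragment from some period.
corpus = [
    "the economy and finance ministry announced the budget",
    "finance and economics are closely related disciplines",
    "the museum opened a new exhibition about ancient art",
    "economic affairs and financial affairs overlap in practice",
]

def pmi(word1: str, word2: str, docs) -> float:
    """Pointwise mutual information of two words over fragment co-occurrence."""
    n = len(docs)
    has = lambda w, d: w in d.split()
    p1 = sum(has(word1, d) for d in docs) / n
    p2 = sum(has(word2, d) for d in docs) / n
    p12 = sum(has(word1, d) and has(word2, d) for d in docs) / n
    if p12 == 0 or p1 == 0 or p2 == 0:
        return float("-inf")  # the words never occur or never co-occur
    return math.log(p12 / (p1 * p2))

print(pmi("finance", "economics", corpus))  # co-occur: positive score
print(pmi("finance", "museum", corpus))     # never co-occur: -inf
```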
... In this paper we abstract from a specific implementation of (cross) ontology matchmaking (like, e.g., [8] or [11]). ...
The OASIS TOSCA specification aims at enhancing the portability of cloud-based applications by defining a language to describe and manage service orchestrations across heterogeneous clouds. A service template is defined as an orchestration of typed nodes, which can be instantiated by matching other service templates. In this paper, after defining the notion of exact matching between TOSCA service templates and node types, we define three other types of matching (plug-in, flexible and white-box), each permitting larger sets of non-relevant syntactic differences to be ignored when type-checking service templates with respect to node types. We also describe how service templates that plug-in, flexibly or white-box match node types can be suitably adapted so as to exactly match them.
Purpose
Although new ontology matchers are proposed every year to address different aspects of the semantic heterogeneity problem, finding the most suitable alignment approach is still an issue. This study aims to propose a computational solution for ontology meta-matching (OMM) and a framework designed for developers to make use of alignment techniques in their applications.
Design/methodology/approach
The framework includes some similarity functions that can be chosen by developers; it then automatically sets weights for each function to obtain better alignments. To evaluate the framework, several simulations were performed with a data set from the Ontology Alignment Evaluation Initiative. Simple similarity functions were used, rather than aligners known in the literature, to demonstrate that the results were influenced more by the proposed meta-alignment approach than by the functions used.
Findings
The results showed that the framework is able to adapt to different test cases. The approach achieved better results when compared with existing ontology meta-matchers.
Originality/value
Although approaches for OMM have been proposed, it is not easy to use them during software development. In contrast, this work presents a framework that developers can use to align ontologies. New ontology matchers can be added, and the framework is extensible to new methods. Moreover, this work presents a novel OMM approach, modeled as a linear equation system, which can be easily computed.
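One plausible reading of "OMM modeled as a linear equation system" is sketched below; this is an illustrative assumption, not necessarily the formulation used in the cited work. Each reference correspondence yields one linear equation in the unknown weights of the similarity functions, and the system is solved in the least-squares sense.

```python
import numpy as np

# Each row holds the scores that three similarity functions assign to one
# entity pair; b holds the reference (gold) similarity for that pair.
# All numbers are invented for illustration.
A = np.array([
    [0.9, 0.8, 0.4],
    [0.2, 0.3, 0.9],
    [0.5, 0.4, 0.1],
    [0.7, 0.9, 0.6],
])
b = np.array([0.9, 0.8, 0.2, 0.9])

# Solve A w ≈ b for the weight vector w in the least-squares sense.
w, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print("weights:", w)

# The tuned weights are then used to score new candidate correspondences.
new_pair_scores = np.array([0.6, 0.7, 0.5])
print("combined similarity:", float(new_pair_scores @ w))
```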