March 2014 · 1,415 Reads · 1 Citation
January 2011 · 6 Reads
RosettaNet is an industry-driven e-business process standard that defines common inter-company public processes and their associated business documents. RosettaNet is based on the service-oriented architecture (SOA) paradigm, and all business documents are expressed in DTD or XML Schema. Our "ontologically-enhanced RosettaNet" effort translates RosettaNet business documents into a Web ontology language, allowing business reasoning based on RosettaNet message exchanges. This chapter describes our extension to RosettaNet and shows how it can be used in business integrations for better interoperability. Using a Web ontology language in RosettaNet collaborations can help accommodate partner heterogeneity in the setup phase and can ease back-end integration, enabling, for example, more competition in purchasing processes. It also provides a building block for adopting a semantic SOA with richer discovery, selection and composition capabilities.
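As a rough illustration of the kind of translation described in this abstract, the sketch below flattens a small XML business document into RDF-style triples. The sample purchase order, the element names and the ontology namespace are invented for the example; they are not the actual RosettaNet schemas or the ontology produced by the ontologically-enhanced RosettaNet effort.

```python
# Illustrative sketch only: flatten a RosettaNet-style XML business document
# into RDF triples. The sample document, element names and ontology namespace
# are assumptions for the example, not the real RosettaNet schemas.
import xml.etree.ElementTree as ET
from itertools import count

SAMPLE_PO = """
<PurchaseOrder>
  <orderId>PO-4711</orderId>
  <buyer>ACME Corp</buyer>
  <lineItem>
    <productCode>GTIN-0001</productCode>
    <quantity>25</quantity>
  </lineItem>
</PurchaseOrder>
"""

ONT = "http://example.org/rosettanet-ont#"   # assumed ontology namespace
_node_ids = count(1)

def xml_to_triples(elem, triples, subj=None):
    """Recursively map XML elements to (subject, predicate, object) triples."""
    if subj is None:
        subj = f"_:n{next(_node_ids)}"
        triples.append((subj, "rdf:type", f"<{ONT}{elem.tag}>"))
    for child in elem:
        if list(child):                        # nested structure -> new node
            obj = f"_:n{next(_node_ids)}"
            triples.append((subj, f"<{ONT}has_{child.tag}>", obj))
            triples.append((obj, "rdf:type", f"<{ONT}{child.tag}>"))
            xml_to_triples(child, triples, obj)
        else:                                  # leaf element -> literal value
            triples.append((subj, f"<{ONT}{child.tag}>",
                            f'"{(child.text or "").strip()}"'))

triples = []
xml_to_triples(ET.fromstring(SAMPLE_PO), triples)
for s, p, o in triples:
    print(s, p, o, ".")
```

Once the document is expressed as triples, an OWL/RDFS reasoner can relate the document vocabulary of one partner to that of another, which is the interoperability benefit the abstract refers to.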
July 2010 · 136 Reads · 69 Citations
Semantic Web data exhibits very skewed frequency distributions among terms. Efficient large-scale distributed reasoning methods should maintain load balance in the face of such highly skewed distributions of input data. We show that term-based partitioning, used by most distributed reasoning approaches, has limited scalability due to load-balancing problems. We address this problem with a method for data distribution based on clustering in elastic regions. Instead of assigning data to fixed peers, data flows semi-randomly in the network. Data items "speed-date" while being temporarily collocated in the same peer. We introduce a bias in the routing to allow semantically clustered neighbourhoods to emerge. Our approach is self-organising, efficient and does not require any central coordination. We have implemented this method on the MaRVIN platform and have performed experiments on large real-world datasets, using a cluster of up to 64 nodes. We compute the RDFS closure over different datasets and show that our clustering algorithm drastically reduces computation time, calculating the RDFS closure of 200 million triples in 7.2 minutes.
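The toy simulation below illustrates the "speed-dating" idea from this abstract: items flow semi-randomly between peers, and a routing bias lets items that share a term drift toward the same peer over time. It is a single-process sketch with invented parameters, not the MaRVIN implementation.

```python
# Single-process sketch of biased semi-random routing ("speed-dating").
# Peer count, bias and the toy dataset are assumptions for illustration.
import random
from collections import Counter

random.seed(42)
NUM_PEERS = 8
# toy dataset: (subject, predicate, object) triples with few distinct subjects
triples = [(f"s{random.randint(0, 3)}", "p", f"o{i}") for i in range(400)]
peers = [[] for _ in range(NUM_PEERS)]
for t in triples:                             # initial random placement
    peers[random.randrange(NUM_PEERS)].append(t)

def dominant_key(peer):
    """The subject term this peer currently holds most of (its 'cluster')."""
    counts = Counter(s for s, _, _ in peer)
    return counts.most_common(1)[0][0] if counts else None

def gossip_round(bias=0.8):
    """Each peer forwards a random slice of its items; with probability `bias`
    an item is routed to a peer already clustering on the item's key."""
    for i, peer in enumerate(peers):
        random.shuffle(peer)
        outgoing, peers[i] = peer[: len(peer) // 4], peer[len(peer) // 4:]
        for item in outgoing:
            if random.random() < bias:
                matches = [j for j in range(NUM_PEERS)
                           if j != i and dominant_key(peers[j]) == item[0]]
                target = random.choice(matches) if matches else random.randrange(NUM_PEERS)
            else:
                target = random.randrange(NUM_PEERS)
            peers[target].append(item)

for _ in range(30):
    gossip_round()
# fraction of each peer's items that share that peer's dominant subject term
purity = [Counter(s for s, _, _ in p).most_common(1)[0][1] / len(p)
          for p in peers if p]
print("average cluster purity after gossiping:", round(sum(purity) / len(purity), 2))
```

The unbiased fraction of the traffic keeps the load roughly balanced, while the biased fraction is what lets semantically clustered neighbourhoods emerge, mirroring the trade-off described in the abstract.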
December 2009 · 118 Reads · 104 Citations
Journal of Web Semantics
Many Semantic Web problems are difficult to solve through common divide-and-conquer strategies, since they are hard to partition. We present Marvin, a parallel and distributed platform for processing large amounts of RDF data on a network of loosely coupled peers. We present our divide-conquer-swap strategy and show that this model converges towards completeness. Within this strategy, we address the problem of making distributed reasoning scalable and load-balanced. We present SpeedDate, a routing strategy that combines data clustering with random exchanges. The random exchanges ensure load balancing, while the data clustering attempts to maximise efficiency. SpeedDate is compared against random and deterministic (DHT-like) approaches on performance and load balancing. We simulate parameters such as system size, data distribution, churn rate, and network topology. The results indicate that SpeedDate is near-optimally balanced, performs in the same order of magnitude as a DHT-like approach, and has an average throughput per node that scales with √i for i items in the system. We evaluate our overall Marvin system for performance, scalability, load balancing and efficiency.
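The following sketch mimics the divide-conquer-swap loop at a toy scale: triples are repeatedly partitioned over a few workers, each worker derives what it can see locally (here only rdfs:subClassOf transitivity), and the partitions are reshuffled so that triples that have not yet met can meet in a later round. It illustrates only the convergence idea, not the Marvin platform's code.

```python
# Toy divide-conquer-swap loop: partition, reason locally, reshuffle, repeat.
# Only rdfs:subClassOf transitivity is derived; everything here is illustrative.
import random

random.seed(1)
SUBCLASS = "rdfs:subClassOf"
# a chain C0 subClassOf C1 subClassOf ... subClassOf C9 (full closure: 45 triples)
data = {(f"C{i}", SUBCLASS, f"C{i + 1}") for i in range(9)}

def local_closure(partition):
    """All subClassOf triples derivable from the triples inside one partition."""
    derived = set(partition)
    changed = True
    while changed:
        changed = False
        for (a, _, b) in list(derived):
            for (c, _, d) in list(derived):
                if b == c and (a, SUBCLASS, d) not in derived:
                    derived.add((a, SUBCLASS, d))
                    changed = True
    return derived

def divide_conquer_swap(triples, workers=3, rounds=30):
    triples = set(triples)
    for _ in range(rounds):
        shuffled = list(triples)
        random.shuffle(shuffled)                        # the "swap" step
        parts = [shuffled[i::workers] for i in range(workers)]
        for part in parts:                              # local ("conquer") step
            triples |= local_closure(part)
    return triples

closure = divide_conquer_swap(data)
print(f"{len(closure)} of 45 closure triples derived")
```

No single partition ever sees the whole chain, yet because partitions keep changing, the derived set grows monotonically toward the complete closure, which is the convergence property the abstract claims.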
November 2009 · 213 Reads · 227 Citations
Lecture Notes in Computer Science
We address the problem of scalable distributed reasoning, proposing a technique for materialising the closure of an RDF graph based on MapReduce. We have implemented our approach on top of Hadoop and deployed it on a compute cluster of up to 64 commodity machines. We show that a naive implementation on top of MapReduce is straightforward but performs badly, and we present several non-trivial optimisations. Our algorithm is scalable and allows us to compute the RDFS closure of 865M triples from the Web (producing 30B triples) in less than two hours, faster than any other published approach.
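To make the map/shuffle/reduce shape of this computation concrete, here is a small pure-Python imitation of one pass of a single RDFS rule (rdfs9: if x rdf:type C and C rdfs:subClassOf D, then x rdf:type D). The real system runs on Hadoop with the non-trivial optimisations mentioned above; the dataset and the exact keying below are illustrative.

```python
# Pure-Python imitation of one MapReduce pass for the rdfs9 rule.
# Not the Hadoop implementation from the paper; for illustration only.
from collections import defaultdict

TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"
triples = [
    ("alice", TYPE, "Student"),
    ("bob", TYPE, "Professor"),
    ("Student", SUBCLASS, "Person"),
    ("Professor", SUBCLASS, "Person"),
    ("Person", SUBCLASS, "Agent"),
]

def map_phase(triple):
    """Key instance triples and schema triples on the shared class term."""
    s, p, o = triple
    if p == TYPE:
        yield o, ("instance", s)       # join key: the class C
    elif p == SUBCLASS:
        yield s, ("schema", o)         # join key: the subclass C

def reduce_phase(key, values):
    """Join instances of class `key` with its superclasses."""
    instances = [v for tag, v in values if tag == "instance"]
    supers = [v for tag, v in values if tag == "schema"]
    for x in instances:
        for d in supers:
            yield (x, TYPE, d)

# simulate the shuffle: group map output by key
groups = defaultdict(list)
for t in triples:
    for key, value in map_phase(t):
        groups[key].append(value)

derived = {out for key, values in groups.items()
           for out in reduce_phase(key, values)}
print(sorted(derived))
# One pass derives (alice, rdf:type, Person) and (bob, rdf:type, Person); the
# full closure needs repeated passes (or the paper's optimisations) to a fixpoint.
```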
July 2009 · 663 Reads · 10 Citations
Workflow models have been used and refined for years to execute processes within organisations. To deal with collaborative processes (choreographies), these internal workflow models have to be aligned with the external behaviour advertised through Web service interfaces. However, traditional workflow management systems (WfMS) do not offer this functionality. Simply sharing and merging process models is often not possible, because workflow management lacks a widely accepted standard theory for workflow models. Multiple research and standardisation efforts to integrate different workflow theories have been proposed over the years. XPDL is the most widely used standard for process model interchange and is supported by over 80 systems. However, XPDL also lacks the possibility to relate a workflow model to its possible choreography interface abstractions. To remedy this situation, we propose to abstract the XPDL model to a higher-level model, perform the integration and compaction algorithms at that level, and then ground the result back to the desired choreography models. For this purpose we develop and use an integrated ontology based on the XPDL standard. To facilitate the abstraction and grounding, we present a mapping procedure to automatically translate XPDL and BPMN workflow models into this ontology. After translation, these models are annotated with a parameterised role model and other collaborative properties. We present a compaction procedure that automatically maps the annotated models into external choreography interfaces that expose only the information relevant to a particular partner collaboration. Our procedure is agnostic with respect to the target choreography model. We demonstrate our approach using WSMO choreographies, which enables us to automatically generate interface models from any WfMS that supports XPDL export.
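As an illustration of the abstraction (lifting) step, the sketch below turns a simplified, namespace-free XPDL fragment into triples of a workflow ontology. The ontology IRIs and the selection of elements and attributes are assumptions made for the example, not the integrated ontology or the mapping procedure of the paper.

```python
# Rough sketch: lift a simplified XPDL fragment into workflow-ontology triples.
# The "wf-ont" vocabulary is invented for illustration.
import xml.etree.ElementTree as ET

XPDL_FRAGMENT = """
<WorkflowProcess Id="Purchasing" Name="Purchasing Process">
  <Activities>
    <Activity Id="A1" Name="Send Purchase Order"/>
    <Activity Id="A2" Name="Receive Invoice"/>
  </Activities>
  <Transitions>
    <Transition Id="T1" From="A1" To="A2"/>
  </Transitions>
</WorkflowProcess>
"""

WF = "http://example.org/wf-ont#"   # assumed ontology namespace

def xpdl_to_triples(xml_text):
    root = ET.fromstring(xml_text)
    proc = root.get("Id")
    triples = [(proc, "rdf:type", f"{WF}Process")]
    for act in root.iter("Activity"):
        triples += [
            (act.get("Id"), "rdf:type", f"{WF}Activity"),
            (proc, f"{WF}hasActivity", act.get("Id")),
            (act.get("Id"), f"{WF}label", act.get("Name")),
        ]
    for tr in root.iter("Transition"):
        triples.append((tr.get("From"), f"{WF}flowsTo", tr.get("To")))
    return triples

for s, p, o in xpdl_to_triples(XPDL_FRAGMENT):
    print(s, p, o)
```

Once the model lives at the ontology level, role annotations can be attached to the activity and transition resources, and a compaction step can project out only the triples that a given partner is allowed to see, as the abstract describes.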
January 2009 · 8 Reads
SSRN Electronic Journal
January 2009 · 2 Reads · 1 Citation
RosettaNet is an industry-driven e-business process standard that defines common inter-company public processes and their associated business documents. RosettaNet is based on the service-oriented architecture (SOA) paradigm, and all business documents are expressed in DTD or XML Schema. Our "ontologically-enhanced RosettaNet" effort translates RosettaNet business documents into a Web ontology language, allowing business reasoning based on RosettaNet message exchanges. This chapter describes our extension to RosettaNet and shows how it can be used in business integrations for better interoperability. Using a Web ontology language in RosettaNet collaborations can help accommodate partner heterogeneity in the setup phase and can ease back-end integration, enabling, for example, more competition in purchasing processes. It also provides a building block for adopting a semantic SOA with richer discovery, selection and composition capabilities.
October 2008 · 99 Reads · 44 Citations
Lecture Notes in Computer Science
We present a technique for answering queries over RDF data through an evolutionary search algorithm, using fingerprinting and Bloom filters for rapid approximate evaluation of generated solutions. Our evolutionary approach has several advantages compared to traditional database-style query answering. First, the result quality increases monotonically and converges with each evolution, offering “anytime” behaviour with arbitrary trade-off between computation time and query results; in addition, the level of approximation can be tuned by varying the size of the Bloom filters. Secondly, through Bloom filter compression we can fit large graphs in main memory, reducing the need for disk I/O during query evaluation. Finally, since the individuals evolve independently, parallel execution is straightforward. We present our prototype that evaluates basic SPARQL queries over arbitrary RDF graphs and show initial results over large datasets.
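The sketch below shows the fingerprinting idea in miniature: triples are hashed into a Bloom filter, and candidate variable bindings are scored by approximate membership tests rather than exact lookups. Filter size, hash choices and the scoring function are illustrative, not those of the paper's prototype.

```python
# Bloom-filter-based approximate evaluation of candidate bindings.
# Sizes, hashes and scoring are assumptions for illustration only.
import hashlib

class BloomFilter:
    def __init__(self, num_bits=8192, num_hashes=4):
        self.num_bits, self.num_hashes = num_bits, num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha1(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# index the RDF graph once
graph = [("alice", "knows", "bob"), ("bob", "knows", "carol"),
         ("alice", "worksAt", "VU")]
index = BloomFilter()
for t in graph:
    index.add(t)

def score(binding, pattern):
    """Fraction of pattern triples (with variables substituted) that are
    (probably) in the graph -- the fitness used to rank candidate solutions."""
    hits = 0
    for s, p, o in pattern:
        t = (binding.get(s, s), binding.get(p, p), binding.get(o, o))
        hits += t in index     # rare false positives possible, no false negatives
    return hits / len(pattern)

pattern = [("?x", "knows", "?y"), ("?x", "worksAt", "VU")]
print(score({"?x": "alice", "?y": "bob"}, pattern))   # 1.0: a (probable) answer
print(score({"?x": "bob", "?y": "carol"}, pattern))   # 0.5: partial match
```

Because the graph is reduced to a compact bit array, very large graphs fit in main memory, and the level of approximation can be tuned by changing the filter size, as the abstract notes.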
October 2008 · 92 Reads · 6 Citations
Lecture Notes in Computer Science
RDF is increasingly being used to represent large amounts of data on the Web. Current query evaluation strategies for RDF are inspired by databases, assuming perfect answers on finite repositories. In this paper, we focus on a query method based on evolutionary computing, which allows us to handle uncertainty, incompleteness and unsatisfiability, and deal with large datasets, all within a single conceptual framework. Our technique supports approximate answers with "anytime" behaviour. We present scalability results and next steps for improvement.
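The toy loop below illustrates the "anytime" behaviour mentioned in the abstract: the best-so-far answer can only improve, so the search can be stopped whenever the time budget runs out. The fitness function is a deliberately trivial stand-in for the approximate pattern evaluation used by the actual system.

```python
# Toy anytime evolutionary loop over candidate variable bindings.
# TARGET and the fitness function are stand-ins, not the real query evaluator.
import random

random.seed(7)
TARGET = {"?x": "alice", "?y": "bob"}          # pretend this binding answers the query
VALUES = ["alice", "bob", "carol", "dave"]

def fitness(candidate):
    """Toy fitness: how many variables already match the (unknown) answer."""
    return sum(candidate[v] == TARGET[v] for v in TARGET)

def mutate(candidate):
    child = dict(candidate)
    child[random.choice(list(child))] = random.choice(VALUES)
    return child

population = [{v: random.choice(VALUES) for v in TARGET} for _ in range(10)]
best, best_fit = None, -1
for generation in range(50):                   # stop whenever time runs out
    population = sorted(population, key=fitness, reverse=True)[:5]
    population += [mutate(random.choice(population)) for _ in range(5)]
    top = max(population, key=fitness)
    if fitness(top) > best_fit:                # best-so-far improves monotonically
        best, best_fit = top, fitness(top)
        print(f"gen {generation}: best fitness {best_fit}")
print("final best binding:", best)
```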
... In contrast, many mainstream programming languages like Java or C# do not support either of these features. On the other hand, while data access libraries in interpreted languages like Ruby (e.g., ActiveRDF (Oren, Heitmann, & Decker, 2008)) allow arbitrary properties, they do so in exchange for an increased risk of typos in attribute names (corresponding to properties) in code which may surface at runtime. ...
January 2008
SSRN Electronic Journal
... Workflows are collections of coordinated tasks designed to carry out a well-defined complex process [1]. Both in the business and scientific communities a range of workflow management systems, i.e., generic information systems that support modeling, execution and monitoring of workflows [2], have been devised, and languages and tools made available. Yet from the point of view of any possible end user, what matters most are not the internal workings or the provider-side efficiency of the system, but the accomplishment, in the best possible manner, of the complex task represented by the workflow specification when it is enacted on behalf of the user. ...
January 2005
... Völkel, Schaffert, and Oren [35] argue for a semantic-web solution for KW support and specify the following requirements: ...
January 2008
... The distinction is important because it may have consequences on how annotations are interpreted and treated. In [172], the authors let the users explicitly express whether an annotation refers to a text or its meaning by providing them with a syntax that allows the users to distinguish between these two cases. ...
... To solve the scalability problem of semantic web reasoners, proposed solutions [5][6][7][8][9][10][11][12][13] suggest applying distributed computing techniques. Some of these studies guarantee reaching full closure [5][6][7][8][9][10], while some of them argue that they eventually reach full closure with an infinite loop [11][12][13]. ...
... From a general perspective, the architecture is deployed according to the three logical layers typically used by any application to organize the functional components of the system: the data layer, the engineering layer and the UI layer. This distinction reflects good practice in software engineering and follows the current trend in semantic web applications [6]. Sections 3 to 6 mirror the organization of methods given in Figure 1 and describe how we realized the functional components in Fig. 2. ...
... It is very important to allocate related resources effectively across multiple process instances so that each process instance can obtain the appropriate resources at the appropriate time (Smanchat et al. 2011). Multi-instance process systems, which are composed of multi-process tasks and various kinds of resources, are more complex (Petkov et al. 2005). ...
January 2005
... Faceted browsing has become popular as a user-friendly way to navigate through a wide range of data collections [210]; however, existing faceted interfaces are manually constructed, do not fully support graph-based navigation and are domain-dependent. Among the applications using facets we can mention Flamenco [211], BrowserRDF [212,197], ...
Reference:
Modelos de Base de Datos de Grafo y RDF
January 2006
... In order to illustrate the coverage of MAPI (in terms of functionality) in comparison with the state-of-the-art frameworks, we evaluated several software frameworks with similar functionalities (see Table 1): BioMOBY, Globus [26], UDDI [27], Feta [28], WSMX [29] and SADI [30]. Many frameworks do not support all functionalities. ...
January 2004
... Also worth mentioning are OO-Store [24], which is proposed as a prototype implementation for processing RDF data based on ORDBs, and ActiveRDF [25], which also comes with programming elements for the management of RDF data. ...
January 2006