Proposal for Extending New Linked Data Rules
for the Semantic Web
Rafael Martínez Tomás and Luis Criado Fernández
Dpto. Inteligencia Artificial. Escuela Técnica Superior de Ingeniería Informática,
Universidad Nacional de Educación a Distancia,
Juan del Rosal 16, 28040 Madrid, Spain
rmtomas@dia.uned.es
Abstract. Semantic content annotations are not enough to construct
the Semantic Web; these semantic data need to be linked. This is what
the Linked Data fourth rule covers. Apparently, only the author of the
content can do this task, but in this article we explore the possibilities
of semantic technology and we study whether it is possible to create
some types of semantic links more automatically, without intervention
by the author. Furthermore, we study the problem of guaranteeing that
the annotation in fact represents the content to which it refers, bearing
in mind that the annotation should be as immediate as possible. As
a result of these considerations, two new rules are proposed that make
very clear the need to develop tools that automate the common-ontology
based data link, thereby facilitating the update, security and consistency
of the semantic information.
Keywords: Semantic Web, Semantic Views, RDFa, OWL, SPARQL,
semantic annotation, Linked Data.
1 Introduction
In the last two years a critical mass of formally annotated information has been
generated with semantics, based on the Linked Data concept [1]. The Semantic
Web not only consists of publishing formally annotated data on the Web, but also
of linking them with others, so that people and computer systems can explore the
web of data and obtain related information from other initial data. It is in this
context where the concept of “Linked Data” arises. In fact, it is a logical evolution
of the foundational concept of the Web, the hyperlink, towards the formalisation
and automation that the Semantic Web adds. It is the data link that gives
the semantic web its value and also its power as a distributed computational
knowledge system.
There are four rules that define this concept:
1. Using URIs (Uniform Resource Identifiers) as unique names for the resources.
2. Using the HTTP protocol to name and determine the location of the data identified with these URIs.
3. Offering information on the resources using RDF.
4. Including links to other URIs to locate more linked data.
J.M. Ferrández et al. (Eds.): IWINAC 2011, Part I, LNCS 6686, pp. 531–539, 2011.
© Springer-Verlag Berlin Heidelberg 2011
The first three rules refer to the same elements that must be considered in
the semantic annotation process. The fourth rule, which in fact gives the data
link a name, prescribes using hyperlinks to external semantic information from
the website whose content is being formally represented. This article analyses
the current trend in Linked Data developments, which implies analysing cer-
tain aspects of the semantic annotation strategy. We raise several considerations
(section 2) that start fundamentally from the problem of consistency and updat-
ing of semantic content annotation from the current authorship of the annota-
tions and link. These considerations will later lead us to make some proposals for
extending the Linked Data rules (section 3) that aim to help the aforementioned
problems by improving the update frequency of the semantic data and introducing coherence between content and formally represented data. The evaluation of these ideas (section 4) is based on the results obtained with the sw2sws tool, which populates ontologies that have been identified as similar or approximate from the content of HTML pages. A description is also given of how the generated semantic websites (rules 1, 2 and 3) can be used by a semantic search engine (in our case Vissem) to obtain the data link indirectly (rule 4). Finally, the article ends with some conclusions.
2 Considerations about the Current Four Rules
The Semantic Web has to interconnect the semantic contents, as the Linked
Data fourth rule establishes. Yet in addition to this, several aspects that are
key for the generalised acceptance of this concept and for (extended) migration
towards a Semantic Web should be considered.
Consideration 1: link authorship
The simplest way of producing linked data consists of using one file, one URI, that points to another. When an RDF file is written, for example¹ <http://example.criado.info/smith>, local identifiers can be used in it, so that we could refer to the identifiers #albert, #brian and #carol, which in N3² notation we will express as:
<#albert> fam:child <#brian>, <#carol>.
And in RDF as:
<rdf:Description rdf:about="#albert">
  <fam:child rdf:resource="#brian"/>
  <fam:child rdf:resource="#carol"/>
</rdf:Description>
The current architecture of the WWW provides a global identifier "http://example.criado.info/smith#albert" for "Albert", i.e. anyone can use this identifier to refer to "Albert" and thus provide more information. For example,
¹ http://www.w3.org/DesignIssues/LinkedData.html
² http://www.w3.org/DesignIssues/Notation3
in the document <http://example.criado.info/jones> someone could write:
<#denise> fam:child <#edwin>, <smith#carol>.
or in RDF/XML:
<rdf:Description rdf:about="#denise">
  <fam:child rdf:resource="#edwin"/>
  <fam:child rdf:resource="http://example.criado.info/smith#carol"/>
</rdf:Description>
Just with this we have a basic Semantic Web in the sense that the semantic data
are linked by the author of the contents. As a consideration it could be argued
that this model is an exact reflection of the traditional web pages, where the
authors of the contents decide the links.
Consideration 2: Embedded annotation
The current trend is to approach semantic annotation in an embedded way. Standards like HTML 5³, RDFa⁴ and "XHTML+RDFa 1.1"⁵ support this embedded procedure, and implementations like Linkator [2] and the adoption of ontologies like GoodRelations⁶ by Google⁷, based on the use of "snippets", obviously confirm the firmness of this W3C proposal.
However, embedded annotation has to solve the problem of guaranteeing the
coherence of the content that it represents. To explain this problem, the follow-
ing must be considered: firstly, we have to be able to do a semantic annotation,
i.e. formally represent a content, for example, of an HTML page in accordance
with one or several ontologies related to that content. Secondly, the actor that
uses or exploits this information now with semantics, for example, a search en-
gine, must be sure that the annotations that it processes are coherent with the
original content. If the annotation is embedded, then the HTML page incorpo-
rates semantics. What happens if something changes in the content affecting the
embedded annotation? What happens if something changes in the content not
affecting the embedded annotation? In both instances, if we calculate the HTML
file digital signature with a hash function, it is verified that the signature has
changed, so we can never be sure whether the "snippet" in RDFa representing
its semantic content in fact still matches the HTML page. There is no way of
guaranteeing coherence between content and annotation. In other words, be-
cause the annotation is inseparable from the HTML page the annotation update
is uncertain.
As a consideration, embedded annotation has the disadvantage of not being
able to guarantee coherence between the content and what has formally been
expressed in RDFa. Consequently, developing the Semantic Web on such a weak model, open to inconsistency, does not seem the most appropriate.
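This coherence problem can be illustrated with a short sketch (ours, not part of the paper's tooling): any edit to the page, including embedding the annotation snippet itself, changes the page's digest, so a digest stored inside the page can never be checked against the page that carries it.

```python
import hashlib

def digest(html: str) -> str:
    """MD5 digest of the page content (the paper's examples use MD5)."""
    return hashlib.md5(html.encode("utf-8")).hexdigest()

page = "<html><body><p>bengala is a cat</p></body></html>"
original = digest(page)

# Embed an RDFa-like snippet that records the digest of the original page.
# (The attribute names here are illustrative, not a real RDFa vocabulary.)
snippet = f'<div about="#bengala" data-source-hash="{original}"></div>'
annotated = page.replace("</body>", snippet + "</body>")

# Embedding the snippet changed the content, so the stored digest no longer
# matches the page that now contains it: coherence cannot be verified.
print(original)
print(digest(annotated))
```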
³ http://www.w3.org/TR/html5/
⁴ http://www.w3.org/TR/2010/WD-rdfa-core-20100803/
⁵ http://www.w3.org/TR/2010/WD-xhtml-rdfa-20100803/
⁶ http://www.heppnetz.de/projects/goodrelations/
⁷ http://www.google.com/support/webmasters/bin/answer.py?answer=186036
Consideration 3: Frequency of generating or updating semantic data
Another aspect that must be considered is the update frequency of the
annotation and data link. A Semantic Web that is always obsolete with the
current Web content cannot be presented. For example, the penultimate version of DBpedia when writing this article (January 2011) was DBpedia 3.5⁸, which formalises Wikipedia data up to March 2010 and was active until the first fortnight of January 2011 (approximately a 10-month lag behind Wikipedia). DBpedia version 3.6⁹ was launched in the last fortnight of January 2011 and incorporates data up to November 2010. In other words, processing the data in
the DBpedia project seems to need more than a month in the most optimistic
of scenarios. Such a Semantic Web out of date with the current Web cannot
compete with the current Web and is unlikely to be accepted by the general
public.
Semantic data must be generated as soon as possible, ideally at the same time
as the content. So, this leads us to ask who must generate the semantic data.
3 Proposal for New Linked Data Rules
We could refine the current four rules or add another two rules to avoid the
problems detected in earlier considerations:
Rule 5. Semantic annotation must guarantee coherence with the content and pseudo-immediate updating.
To guarantee coherence between annotation and content, as rule 5 establishes,
a non-embedded annotation strategy must be used. This external annotation
must relate the content source to the very annotation to guarantee coherence
between formal representation and content.
A simple way of doing this is to have a mechanism to generate URIs based,
for example, on hash functions derived from the HTML page where the formal
annotations are to be done, which is the third approach that Martin Hepp anal-
yses [4] in relation to GoodRelations ontology annotation. Thus, it is possible to
obtain a unique URI for the very annotation (xml:base element), i.e. from the
formal representation of the HTML page content.
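A minimal sketch of this URI-generation step follows; the host name example.criado.info and the .owl suffix are taken from the paper's later example, while the helper name is our own.

```python
import hashlib

def annotation_uri(html: str, base: str = "http://example.criado.info/") -> str:
    """Derive a unique URI for an external annotation file from the MD5
    hash of the HTML content that the annotation formally represents."""
    return base + hashlib.md5(html.encode("utf-8")).hexdigest() + ".owl"

page = "<html><body><p>bengala is a cat</p></body></html>"
uri = annotation_uri(page)
print(uri)

# Because the annotation is external, a consumer can recompute the hash of
# the current HTML page and compare it with the annotation's xml:base: a
# mismatch reveals that the content changed after it was annotated.
assert uri == annotation_uri(page)
```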
Note that this procedure is no good when the annotation is embedded, since
it is impossible to calculate a hash function to define an URI and then intro-
duce a “snippet” into the web page without modifying the web page hash again.
This occurs because a hash function obtains a unique identifier given an input,
a file in this instance. When the input changes, although only a symbol changes,
the hash result is an identifier completely different from the former. Therefore,
if we obtain an URI from the hash of the HTML file content and then do an
annotation in RDFa (as an example of embedded annotation) where we use this
calculated URI and include all the code (snippet) in the HTML page, then, as
soon as that snippet is included we have altered the HTML file content. Conse-
quently, if we calculate the URI again from the hash of the HTML file content,
⁸ http://lists.w3.org/Archives/Public/semantic-web/2010Apr/0111.html
⁹ http://lists.w3.org/Archives/Public/semantic-web/2011Jan/0105.html
we will obtain an URI that is completely different from the URI that we had
previously embedded. This implies that the URI, although unique, never allows
us to determine whether the embedded annotation is coherent with the content
that it represents.
The second aspect of this same rule requires the annotation, coherent with the
content, to be as updated as possible. This problem implies developing proce-
dures that are as automatic as possible, in keeping with our earlier works [3][6],
and generally with automatic annotation.
Rule 6. Urge the shared use of ontologies, since this allows the non-explicit or
dynamic data link, the ABOX-TBOX link.
In a Web without semantics, the links are established by the author of the
content (rule 4). Yet in a Web with semantics there is also an approach where the
data link is established dynamically when a SPARQL10 query is made. Declaring
instances of a same ontology by different authors of different webs without any
semantic data links between the webs does not imply that these data cannot be
linked afterwards. If these semantic data, which have been generated from the
same ontology, are collected or grouped by a third person who wishes to exploit
this information, then when it is all centralised in a SPARQL EndPoint¹¹, class
instances can be obtained regardless of their origin. Thus, a SPARQL query
would generate, under these conditions, a data link dynamically.
In an environment where data can be represented formally and where the
data are classified into concepts (classes) that are defined by some character-
istics (properties) and by the relations of all these elements (restrictions), the
auto-link of some semantic data is possible in their exploitation, particularly
the link between instances and classes (ABOX-TBOX), leaving the link between
instances (ABOX-ABOX) to the author of the content (rule 4).
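As an illustration of this ABOX-TBOX auto-link, consider a toy in-memory triple store of our own devising, standing in for a SPARQL EndPoint: instances declared independently on two sites become linked through their shared ontology class once the triples are pooled.

```python
# Toy illustration: triples from two unrelated sites, pooled the way a
# SPARQL EndPoint would pool them, become linked through the common class
# vertebrados:gato without any explicit link between the sites.
GATO = "vertebrados:gato"

site_a = [("smith:bengala", "rdf:type", GATO)]  # annotated by one author
site_b = [("jones:korat", "rdf:type", GATO)]    # annotated by another

store = site_a + site_b  # the third party centralises both graphs

# Equivalent in spirit to: SELECT ?s WHERE { ?s rdf:type vertebrados:gato }
cats = [s for (s, p, o) in store if p == "rdf:type" and o == GATO]
print(cats)
```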
Rule 6 is complementary to rule 4 and could even be seen as a refined extension of rule 4. This rule facilitates the process, since with rule 4 alone it is apparently difficult to automate the data link process, a problem that disappears with this proposal.
To illustrate the meaning of the six rules together (including the proposed ones), we present the following example. Let us assume that on a web page about mascotas (pets)¹² we formally annotate that bengala (bengal) is a gato (cat). We have an ontology, vertebrados (vertebrates)¹³, to do this, but in order to know that this annotation matches the content we apply MD5¹⁴, a hash function, to fulfil rule 5, obtaining: a509d1fdbeba807da648b83d45fd8903. Thus we could represent the annotation formally in OWL as follows (line numbers have been added for the explanations):
¹⁰ http://www.w3.org/TR/rdf-sparql-query/
¹¹ An "EndPoint" indicates a specific location for accessing a Web service using a specific protocol and data format. http://www.w3.org/TR/ws-desc-reqs/#normDefs
¹² http://www.mascotas.org/tag/el-gato-bengala
¹³ http://www.criado.info/owl/vertebrados_es.owl
¹⁴ http://tools.ietf.org/html/rfc1321
 1 <rdf:RDF
 2   xmlns:j.0="http://www.criado.info/owl/vertebrados_es.owl#"
 3   xmlns:protege="http://protege.stanford.edu/plugins/owl/protege#"
 4   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
 5   xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
 6   xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
 7   xmlns:owl="http://www.w3.org/2002/07/owl#"
 8   xmlns="http://www.mascotas.org/tag/el-gato-bengala#"
 9   xml:base="http://example.criado.info/a509d1fdbeba807da648b83d45fd8903.owl">
10   <owl:Ontology rdf:about="">
11     <owl:imports rdf:resource="http://www.criado.info/owl/vertebrados_es.owl#"/>
12   </owl:Ontology>
13   <j.0:gato rdf:ID="bengala"/>
14   <owl:AllDifferent>
15     <owl:distinctMembers rdf:parseType="Collection">
16       <gato rdf:about="#bengala"/>
17     </owl:distinctMembers>
18   </owl:AllDifferent>
19 </rdf:RDF>
The resource defined, bengala, fulfils rule 1, because it has a URI (see row 9) which, of course, is unique. Furthermore, the coherence between content and annotation that rule 5 defines is also fulfilled in row 9, since it uses the MD5 hash derived from the HTML page of row 8. Rule 2 is fulfilled with row 8, because it uses the HTTP protocol to name and locate the data.
Continuing with the example, we need another similar file to show the ABOX-TBOX data link:
 1 <rdf:RDF
 2   xmlns:j.0="http://www.criado.info/owl/vertebrados_es.owl#"
 3   xmlns:protege="http://protege.stanford.edu/plugins/owl/protege#"
 4   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
 5   xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
 6   xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
 7   xmlns:owl="http://www.w3.org/2002/07/owl#"
     xmlns="http://mascotas.facilisimo.com/reportajes/gatos/razas-de-gatos/korat-el-gato-de-la-buena-suerte_185682.html#"
 8   xml:base="http://example.criado.info/f83b88b700dd7389b31887f34a4dde7d.owl">
 9   <owl:Ontology rdf:about="">
10     <owl:imports rdf:resource="http://www.criado.info/owl/vertebrados_es.owl#"/>
11   </owl:Ontology>
12   <j.0:gato rdf:ID="korat"/>
13   <owl:AllDifferent>
14     <owl:distinctMembers rdf:parseType="Collection">
15       <gato rdf:about="#korat"/>
16     </owl:distinctMembers>
17   </owl:AllDifferent>
18 </rdf:RDF>
With both files, we could execute a SPARQL query over the two sources together.
It obtains the breeds of cats from the two different websites, which have not established any explicit semantic data link, but whose data a third party (for example, us) has exploited, for example with Twinkle (a SPARQL query tool), linking the ABOXes with the TBOXes. Rule 6 is thus fulfilled.
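A query of this kind, naming both annotation files explicitly with FROM clauses, might take roughly the following form (a sketch built from the two listings above; the prefix names are our own choice):

```sparql
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX vertebrados: <http://www.criado.info/owl/vertebrados_es.owl#>
SELECT ?gato
FROM <http://example.criado.info/a509d1fdbeba807da648b83d45fd8903.owl>
FROM <http://example.criado.info/f83b88b700dd7389b31887f34a4dde7d.owl>
WHERE { ?gato rdf:type vertebrados:gato }
```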
To generalise the former query, logically we cannot specify FROM for all the OWL files containing cats; "SPARQL EndPoints" must be used. Assuming that at some URL we had a SPARQL EndPoint storing semantic data on cats, fed with a tool such as LDSpider [5], we could make the following query:
PREFIX rdfs99: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX vertebrados: <http://www.criado.info/owl/vertebrados_es.owl#>
SELECT ?s ?v
WHERE { ?s rdfs99:type vertebrados:gato . ?s rdfs99:type ?v }
This obtains the list of all the cats in the different URLs of the Semantic Web that LDSpider has been able to explore and incorporate into a database.
4 Conclusions
In this article we have considered Linked Data and Semantic Web concepts and
their problems. A Semantic Web based on the currently accepted four rules is a Semantic Web in deferred time, since it will always be out of date with respect to the current Web, which is its source of information. We only have to check Linked Data platforms such as DBpedia to discover that they are updated in subsequent versions and thus are not real time¹⁵. This view of a Semantic Web lagging behind the current Web is apparently enough of an obstacle to prevent full implementation.
¹⁵ http://dbpedia.org/About
Obviously, the power of the current Web is primarily due to the fact that it is
very simple to generate web pages, that the information is decentralized and that
the Web is participatory. However, what makes the Web especially powerful and useful is that web pages are linked (the concept of the hyperlink is inseparable from that of the Web) and that publishing information and linking it with other information is very easy (publishing on the web only requires an HTML or text editor; it is not even necessary to generate 100% correct code). Work is currently being done on the idea that the information generated semantically (following the Linked Data rules) is linked by the same actor. This approach is the same as with HTML, although in both paradigms our information can, of course, also be linked by a third party. These are explicit links. This
characteristic of establishing explicit hyperlinks has been directly transferred
to the Semantic Web environment and is the Linked Data fourth rule. Yet, in
an environment where systems understand something of the content, is this as
interesting? Who must establish the links? And what if the actor exploiting the
semantic data could establish the links?
In this work, we have not only studied these problems, but we have also tried
to provide new ideas that help direct the data link process to the most up-to-date
Semantic Web possible with regard to the current Web. We have proposed intro-
ducing two new rules, which are fundamentally aimed at extending automatic
annotation based on shared ontologies, so that the problems of consistency and
updating of content annotation are solved. The proposals are accompanied by an application example of the six resulting rules.
Acknowledgements
The authors are grateful to the CiCYT for financial aid on project TIN2010-
20845-C03-02.
References
1. Bizer, C., Heath, T., Berners-Lee, T.: Linked Data - The Story So Far. Inter-
national Journal on Semantic Web and Information Systems 5(3), 1–22 (2009),
doi:10.4018/jswis.2009081901
2. Araujo, S., Houben, G.J., Schwabe, D.: Linkator: Enriching Web Pages by Auto-
matically Adding Dereferenceable Semantic Annotations. In: Benatallah, B., et al.
(eds.) ICWE 2010. LNCS, vol. 6189, pp. 355–369. Springer, Heidelberg (2010)
3. Criado, L.: Semi-automatic Procedure for Transforming the Web into a Semantic Web. PhD thesis, Universidad Nacional de Educación a Distancia, Escuela Técnica Superior de Ingeniería Informática, Madrid, España (2009)
4. Gomez, M., Preece, A.D., Johnson, M.P., de Mel, G., Vasconcelos, W.W., Gibson,
C., Bar-Noy, A., Borowiecki, K., La Porta, T., Pizzocaro, D., Rowaihy, H., Pear-
son, G., Pham, T.: An ontology-centric approach to sensor-mission assignment. In:
Gangemi, A., Euzenat, J. (eds.) EKAW 2008. LNCS (LNAI), vol. 5268, pp. 347–363.
Springer, Heidelberg (2008)
5. Isele, R., Umbrich, J., Bizer, C., Harth, A.: LDSpider: An open-source crawling framework for the Web of Linked Data. Poster at the International Semantic Web Conference (ISWC 2010), Shanghai (November 2010), http://www.wiwiss.fu-berlin.de/en/institute/pwo/bizer/research/publications/IseleHarthUmbrichBizer-LDspider-Poster-ISWC2010.pdf
6. Fernández, L.C., Martínez-Tomás, R.: The Problem of Constructing General-
Purpose Semantic Search Engines. In: Mira, J., Ferrández, J.M., Álvarez, J.R., de la
Paz, F., Toledo, F.J. (eds.) IWINAC 2009. LNCS, vol. 5601, pp. 366–374. Springer,
Heidelberg (2009)
Gomez, M., Preece, A.D., Johnson, M.P., de Mel, G., Vasconcelos, W.W., Gibson, C., Bar-Noy, A., Borowiecki, K., La Porta, T., Pizzocaro, D., Rowaihy, H., Pearson, G., Pham, T.: An ontology-centric approach to sensor-mission assignment. In: Gangemi, A., Euzenat, J. (eds.) EKAW 2008. LNCS (LNAI), vol. 5268, pp. 347-363. Springer, Heidelberg (2008)