Figure 3 - uploaded by Werner Kuhn
A mapping from PROV's core concepts to PROV-DM types and relations. https://www.w3.org/TR/2013/REC-prov-dm-20130430/ Copyright © 2011-2013 W3C ® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply.


Source publication
Article
Full-text available
Location data from social network posts are attractive for answering many kinds of questions through spatial analysis. However, it is often unclear what this information actually locates. Is it a point of interest (POI), the device at the time of posting, or something else? As a result, locational references in posts may be misinterpreted. For example, a restaur...

Contexts in source publication

Context 1
... PROV documents are a family of provenance specifications for the web produced by the W3C Provenance Working Group. PROV 'describes the use and production of entities by activities which may be influenced in various ways by agents' (Moreau and Missier 2013). Figure 2 depicts a graph of PROV's ten core concepts, covering its types and relations. PROV's foundation is a conceptual data model (PROV-DM), which describes a simple provenance vocabulary. Figure 3 depicts the mapping from PROV's ten core concepts to PROV-DM's core vocabulary types and relations. PROV has several serializations, including PROV-O (the PROV ontology), which allows mapping from an OWL2 ontology onto the PROV data model. For more details on PROV, including an example application, we point readers to the work-in-progress PROV Primer (Gil et al. 2013). In this work, we model the production of a post during posting events by a poster. Some may see applying a formal ontology and a controlled vocabulary to posts as excessive and difficult for users to work with. However, posts will evolve, and we are attempting to build a theory of groupable locatable things. Therefore, logical descriptions are a good means of conveying the long-term intended meaning of locational references (Lauriault et al. ...
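The core pattern described above (an agent associated with an activity that generates an entity) can be sketched in plain Python. This is a minimal illustration of PROV-DM's three core types and two core relations, not one of the W3C serializations; all identifiers (`ex:post42`, `ex:alice`, etc.) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:    # prov:Entity, e.g. a social-network post
    id: str

@dataclass
class Activity:  # prov:Activity, e.g. the posting event
    id: str

@dataclass
class Agent:     # prov:Agent, e.g. the poster
    id: str

@dataclass
class ProvGraph:
    # entity id -> activity id (prov:wasGeneratedBy)
    generated_by: dict = field(default_factory=dict)
    # activity id -> agent id (prov:wasAssociatedWith)
    associated_with: dict = field(default_factory=dict)

# A posting event: an agent (the poster) performs an activity
# that generates an entity (the post).
post = Entity("ex:post42")
posting = Activity("ex:postingEvent")
poster = Agent("ex:alice")

graph = ProvGraph()
graph.generated_by[post.id] = posting.id
graph.associated_with[posting.id] = poster.id

print(graph.generated_by["ex:post42"])  # -> ex:postingEvent
```

In PROV-O the same statements would be expressed as RDF triples using the `prov:` vocabulary; the dictionary lookups here stand in for those relations.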
Context 2
... foundation is a conceptual data model (PROV-DM), which describes a simple provenance vocabulary. Figure 3 depicts the mapping from PROV's ten core concepts to PROV-DM's core vocabulary types and relations. PROV has several serializations, including PROV-O (the PROV ontology), which allows mapping from an OWL2 ontology onto the PROV data model. ...

Similar publications

Technical Report
Full-text available
In connection with the installation of circling lights at Vadsø Airport, damage was caused in 2007 to three automatically protected cultural heritage sites. Riksantikvaren (the Norwegian Directorate for Cultural Heritage) decided to secure the damaged parts of the locality, which involved excavation and documentation, revegetation and restoration of the cultural environment, as well as environmental monitoring of the preservation conditions in a house foundation....
Article
Full-text available
The purpose of this study was to examine the effect of service reliability on patronage of quick-service restaurants in Port Harcourt.

Citations

... Geoparsing is a procedure for detecting geographic information in texts and linking it with gazetteers, databases that store place names and their attributes, including coordinates, population, size, and type [4]. The process generally involves geotagging, which recognizes place names in text, and geocoding, which transforms place names into coordinates [5][6][7]. Geotagging commonly recognizes place names in a text by constructing geographical language models trained on massive corpora of geotagged annotations, such as river, city, etc. [8]. The goal of geocoding is to select the correct coordinates for a place name from a list of candidates drawn from a gazetteer such as GeoNames [9]. ...
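The geocoding step described above can be sketched with a toy gazetteer. The entries and the "highest population wins" disambiguation heuristic are illustrative assumptions, not GeoNames itself or any model from the cited papers.

```python
# Toy gazetteer: one place name may have several candidate locations.
GAZETTEER = {
    "Paris": [
        {"lat": 48.8566, "lon": 2.3522, "country": "FR", "population": 2_148_000},
        {"lat": 33.6609, "lon": -95.5555, "country": "US", "population": 24_000},
    ],
}

def geocode(place_name):
    """Return (lat, lon) of the most populous candidate, or None if unknown."""
    candidates = GAZETTEER.get(place_name)
    if not candidates:
        return None
    # Naive disambiguation: prefer the candidate with the largest population.
    best = max(candidates, key=lambda c: c["population"])
    return best["lat"], best["lon"]

print(geocode("Paris"))  # -> (48.8566, 2.3522), i.e. Paris, France
```

Real geocoders replace the population heuristic with context-aware ranking (e.g., the local and global features discussed in the citing article), but the candidate-selection structure is the same.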
Article
Full-text available
Geocoding is an essential procedure in geographical information retrieval to associate place names with coordinates. Due to the inherent ambiguity of place names in natural language and the scarcity of place names in textual data, it is widely recognized that geocoding is challenging. Recent advances in deep learning have promoted the use of neural networks to improve the performance of geocoding. However, most of the existing approaches consider only the local context, e.g., neighboring words in a sentence, as opposed to the global context, e.g., the topic of the document. Lack of global information may have a severe impact on the robustness of the model. To fill this research gap, this paper proposes a novel global context embedding approach to generate linguistic and geospatial features through topic embedding and location embedding, respectively. A deep neural network called LGGeoCoder, which integrates local and global features, is developed to solve geocoding as a classification problem. The experiments on a Wikipedia place name dataset demonstrate that LGGeoCoder achieves competitive performance compared with state-of-the-art models. Furthermore, the effect of introducing global linguistic and geospatial features in geocoding to alleviate the ambiguity and scarcity problems is discussed.
... This editorial highlights how these issues are discussed and addressed by the articles of this special issue, how the papers highlight emerging technologies, concepts, platforms, debates, methodologies, and techniques within VGI, and how they suggest future research directions. This special issue gathered papers on the topics of crowdsourced geospatial data quality (Ballatore and Arsanjani 2018), thematic uncertainty and consistency across data sources (Hervey and Kuhn 2018), spatial biases (Millar et al. 2018), trust issues within VGI (Severinsen et al. 2019), and contributor behaviour and interactions (Truong et al. 2018). ...
... Ballatore and Arsanjani (2018) looked at the origin and development of Wikimapia and discussed some aspects of the project, including its intellectual property and its strategies for quality management. Hervey and Kuhn (2018) explored uncertainty in locational data obtained from social networks. They presented a taxonomy of things that can be located from social network posts and a means to describe them to users. ...
Article
This paper provides an overview of possibilities for localizing acts of communication and their agents based on digital traces and scrutinizes their advantages and disadvantages. It shows (i) what types of geographic information exist in social media data and to what extent they are available to researchers, (ii) which approaches exist to classify locations, and (iii) what the advantages and disadvantages of the various approaches are. Introducing an approach to automatically classify location information based on the location information in users’ profiles and a multi-step cross-validation with time zone information, we show that this less resource-intensive approach yields high precision comparable to the “gold standard” of human coding, while recall is comparatively low. The discussion of the advantages and limitations of all approaches shows that, depending on the research question, the specific research context and its presumed effect, the desired granularity of location classification, and resource considerations can guide a researcher’s decision.
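The two-step idea in this abstract (match the free-text profile location against a gazetteer, then cross-validate with the account's time zone) can be sketched as below. The gazetteer entries, UTC offsets, and function name are illustrative assumptions, not the authors' actual pipeline.

```python
# Toy gazetteer mapping normalized profile strings to a country and time zone.
GAZETTEER = {
    "berlin": {"country": "DE", "utc_offset_hours": 1},
    "new york": {"country": "US", "utc_offset_hours": -5},
}

def classify_location(profile_text, account_utc_offset_hours):
    """Return a country code only when gazetteer and time zone agree, else None."""
    entry = GAZETTEER.get(profile_text.strip().lower())
    if entry is None:
        return None  # no gazetteer match: unclassified (lowers recall)
    if entry["utc_offset_hours"] != account_utc_offset_hours:
        return None  # time zone disagrees: reject to preserve precision
    return entry["country"]

print(classify_location("Berlin", 1))   # -> DE
print(classify_location("Berlin", -5))  # -> None (fails cross-validation)
```

Rejecting mismatches rather than guessing mirrors the precision/recall trade-off the abstract reports: high precision at the cost of leaving many accounts unclassified.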