Technical Report

Uniform Resource Identifiers (URI), Generic Syntax

Authors:
  • Larry Masinter (https://LarryMasinter.net)

... and URN by name, for instance when using the International Standard Book Number (ISBN) to localise a book (978-972-9347-34-4) 72 . Uniform Resource Identifiers (URIs) - either URLs or URNs - are transferred via an application layer protocol for data communication, namely the Hypertext Transfer Protocol (HTTP), which is only accessible over the Internet 73 (see Berners-Lee, Fielding and Masinter, 2005). That is to say, the universal identifier of every piece of information stored in the web needs to be standardised as a URI. ...
... 73 See also: Hypertext Transfer Protocol (2020). Retrieved September 20, 2020, from https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol ... "of a hierarchical sequence of components referred to as the scheme, authority, path, query, and fragment" (Berners-Lee et al., 2005) and a few examples of web content originated from YouTube, Tumblr, Facebook and Twitter. ...
... Understanding the generic syntax of URIs. According to Berners-Lee et al. (2005), a scheme consists of a sequence of characters beginning with a letter and followed by any combination of letters, digits, plus ("+"), period ("."), or hyphen ("-"), e.g. https, which stands for Hypertext Transfer Protocol Secure, an extension of HTTP used for secure communication 74 . ...
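A minimal Python sketch (ours, not from the cited works) of the generic syntax described above: it splits a URI into scheme, authority, path, query and fragment, and checks the scheme against the letter/digit/"+"/"."/"-" rule.

    # Sketch only: split a URI into the five generic components and check the
    # scheme against the rule quoted above (a letter followed by letters,
    # digits, "+", "-", or ".").
    import re
    from urllib.parse import urlsplit

    SCHEME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9+.-]*$")

    def describe(uri):
        parts = urlsplit(uri)  # scheme, netloc (authority), path, query, fragment
        if not SCHEME_RE.match(parts.scheme):
            raise ValueError("invalid scheme: %r" % parts.scheme)
        return {"scheme": parts.scheme, "authority": parts.netloc,
                "path": parts.path, "query": parts.query,
                "fragment": parts.fragment}

    print(describe("https://example.com/watch?v=abc#t=10"))
    # {'scheme': 'https', 'authority': 'example.com', 'path': '/watch',
    #  'query': 'v=abc', 'fragment': 't=10'}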
Thesis
Full-text available
Digital methods are taken here as a research practice crucially situated in the technological environment that it explores and exploits. Through software-oriented analysis, this research practice proposes to re-purpose online methods and data for social-medium research but not considered as a proper type of fieldwork because these methods are new and still in their process of description. These methods impose proximity with software and reflect an environment inhabited by technicity. Thus, this dissertation is concerned with a key element of the digital methods research approach: the computational (or technical) mediums as carriers of meaning (see Berry, 2011; Rieder, 2020). The central idea of this dissertation is to address the role of technical knowledge, practise and expertise (as problems and solutions) in the full range of digital methods, taking the technicity of the computational mediums and digital records as objects of study. By focusing on how the concept of technicity matters in digital research, I argue that not only do digital methods open an opportunity for further enquiry into this concept, but they also benefit from such enquiry, since the working material of this research practice are the media, its methods, mechanisms and data. In this way, the notion of technicity-of-the-mediums is used in two senses pointing on the one hand to the effort to become acquainted with the mediums (from a conceptual, technical and empirical perspective), on the other hand, to the object of technical imagination (the capacity of considering the features and practical qualities of technical mediums as ensemble and as a solution to methodological problems). From the standpoint of non-developer researchers and the perspective of software practice, the understanding of digital technologies starts from direct contact, comprehension and different uses of (research) software and the web environment. The journey of digital methods is only fulfilled by technical practice, experimentation and exploration. Two main arguments are put forward in this dissertation. The first states that we can only repurpose what we know well, which means that we need to become acquainted with the mediums from a conceptual-technical-practical perspective; whereas, the second argument states that the practice of digital methods is enhanced when researchers make room for, grow and establish a sensitivity to the technicity-of-the-mediums. The main contribution of this dissertation is to develop a series of conceptual and practical principles for digital research. Theoretically, this dissertation suggests a broader definition of medium in digital methods and introduces the notion of the technicity-of-the-mediums and three distinct but related aspects to consider – namely platform grammatisation, cultures of use and software affordances, as an attempt to defuse some of the difficulties related to the use of digital methods. Practically, it presents concrete methodological approaches providing new analytical perspectives for social media research and digital network studies, while suggesting a way of carrying out digital fieldwork which is substantiated by technical practices and imagination. (Thesis available here: https://run.unl.pt/handle/10362/127961)
... Textual representation is convenient for digests because it is limited to characters that do not lead to encoding incompatibility across different software and hardware platforms. These characters are explicitly allowed by the official specification and considered safe for use in uniform resource identifiers (URI) [6]. URIs are a good example of a critical application regarding compatibility as they are at the core of Internet communication. ...
... This is a conservative reference, but reasonable, as both current and proposed methods aim to provide identification - despite the broader context and flexibility of the setting we address. As a conservative reference, hashing small texts through MD5 takes approximately 1.6µs on current hardware 6 . A more realistic reference would consider the computational cost of constantly hashing potentially large contents after each data modification. ...
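For illustration, a small timing sketch of the kind of measurement referenced above; the payload and iteration count are arbitrary, and results vary with hardware (the 1.6 µs figure is the cited paper's, not ours).

    # Micro-benchmark sketch: time MD5 over a small text with the standard library.
    import hashlib
    import timeit

    payload = b"a small text to be digested"
    n = 100_000
    seconds = timeit.timeit(lambda: hashlib.md5(payload).hexdigest(), number=n)
    print("MD5 over %d bytes: %.2f us per call" % (len(payload), seconds / n * 1e6))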
... Let ϕ(v), v ∈ V_{2,3,...} be a multi-valued data object and ϕ(f), f ∈ F_1 be a function that returns k values. Their respective elements x_i ∈ G, 0 ≤ i < k − 1, are calculated according to Equation (6), where element ρ_{+i} ∈ G represents the special i-th element after the reserved element ρ (Section 4.1) in digest-based lexicographic order. ...
Preprint
Full-text available
Universal identifiers and hashing have been widely adopted in computer science from distributed financial transactions to data science. This is a consequence of their capability to avoid many shortcomings of relative identifiers, such as limited scope and the need for central management. However, the current identifiers in use are isolated entities which cannot provide much information about the relationship between objects. As an example, if one has both the identifiers of an object and its transformed version, no information about how they are related can be obtained without resorting to some kind of authority or additionally appended information. Moreover, given an input object and an arbitrarily long sequence of costly steps, one cannot currently predict the identifier of the outcome without actually processing the entire sequence. The capability of predicting the resulting identifier and avoiding redundant calculations is highly desirable in an efficient unmanaged system. In this paper, we propose a new kind of unique identifier that is calculated from the list of events that can produce an object, instead of directly identifying it by content. This provides an identification scheme regardless of the object's current existence, thus allowing to inexpensively check for its content in a database and retrieve it when it has already been calculated before. These identifiers are proposed in the context of abstract algebra, where objects are represented by elements that can be operated according to useful properties, such as associativity, order sensitivity when desired, and reversibility, while simplicity of implementation and strong guarantees are given by well known group theory results.
... Here, we present the bare essentials of URL structure at an appropriate level of granularity to understand our work. 1 Each URL in our corpus has the form: <scheme>://<authority><rest> The scheme component [Berners-Lee et al. 1998, 1994; WHATWG 2019] corresponds to the scheme name, which specifies how to interpret the text following the colon. Common schemes are http, ftp, and file. ...
... Every URL in our corpus uses the https scheme. The authority component specifies a subset of the host, port, username, and password [Berners-Lee et al. 1998; WHATWG 2019]. For URLs in our corpus, the authority component has either the form host or user@host, where host represents the host and user represents the username. ...
... It captures everything following the authority component. The rest component includes the path [Berners-Lee et al. 1998, 1994; WHATWG 2019], which may be empty; it may also include queries, fragments, and accompanying delimiters [Berners-Lee et al. 1998, 1994; WHATWG 2019]. For every URL in our corpus, if the rest component is nonempty, it includes a path that "[identifies] the resource within the scope of [the] scheme and authority" [Berners-Lee et al. 1998], it begins at the first / character following the authority component, and it is the last part of the URL. ...
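A hedged sketch of the decomposition described above, splitting a URL into <scheme>://<authority><rest> and the authority into its optional user and host parts (the example URL is invented).

    # Illustrative only: decompose a URL as <scheme>://<authority><rest>,
    # with the authority being either "host" or "user@host".
    from urllib.parse import urlsplit

    def split_url(url):
        parts = urlsplit(url)
        user = parts.username          # None when there is no user@ part
        host = parts.hostname
        rest = parts.path or ""
        if parts.query:
            rest += "?" + parts.query
        if parts.fragment:
            rest += "#" + parts.fragment
        return parts.scheme, user, host, rest

    print(split_url("https://alice@github.com/org/repo?tab=readme"))
    # ('https', 'alice', 'github.com', '/org/repo?tab=readme')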
... Other papers focus on the quality of metadata 4 . (Footnotes: 4 The baseURL of the NSDL OAI server is http://services.nsdl.org:8080/nsdloai/OAI. 5 Although there are other services in the NSDL, such as an archive service, we will not describe them in this paper. 6 http://lucene.apache.org/) ...
... The URI specification [5] enumerates steps to normalize URLs to determine if they are equivalent. This includes ensuring the scheme and hostname are lower case, the default port is not specified, an empty absolute path is represented as a trailing slash, and so on. ...
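A sketch of a few of the equivalence-oriented normalization steps summarized above; only the lower-casing, default-port and empty-path rules are shown, and the default-port table is an assumption of ours.

    # Sketch: lower-case scheme and host, drop the default port, and represent
    # an empty absolute path as "/". Userinfo, dot-segments and percent-encoding
    # normalization are ignored here.
    from urllib.parse import urlsplit, urlunsplit

    DEFAULT_PORTS = {"http": 80, "https": 443, "ftp": 21}

    def normalize(uri):
        p = urlsplit(uri)
        scheme = p.scheme.lower()
        netloc = (p.hostname or "").lower()
        if p.port is not None and p.port != DEFAULT_PORTS.get(scheme):
            netloc += ":%d" % p.port
        path = p.path or "/"
        return urlunsplit((scheme, netloc, path, p.query, p.fragment))

    print(normalize("HTTP://Example.COM:80"))    # http://example.com/
    print(normalize("https://example.com:8443")) # https://example.com:8443/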
Preprint
Over three years ago, the Core Integration team of the National Science Digital Library (NSDL) implemented a digital library based on metadata aggregation using Dublin Core and OAI-PMH. The initial expectation was that such low-barrier technologies would be relatively easy to automate and administer. While this architectural choice permitted rapid deployment of a production NSDL, our three years of experience have contradicted our original expectations of easy automation and low people cost. We have learned that alleged "low-barrier" standards are often harder to deploy than expected. In this paper we report on this experience and comment on the general cost, the functionality, and the ultimate effectiveness of this architecture.
... The DNS is a tree-shaped hierarchy for names [110,111] consisting of multiple labels delimited by dots [110], with the root of the tree at the end of the name, see also Figure 7. Names that reach up to the root, i.e., have a right-most label that is empty, are also called FQDNs (Fully Qualified Domain Names). The final '.' separating the empty root label is usually omitted when spelling out FQDNs [20]. One most regularly encounters names when included in a URI on the web [20], i.e., in the form of https://www.example.com/. ...
... The final '.' separating the empty root label is usually omitted when spelling out FQDNs [20]. One most regularly encounters names when included in a URI on the web [20], i.e., in the form of https://www.example.com/. A zone can contain names (as leaf nodes) that form RRsets consisting of the name and all resource records of one specific RRtype for that name, and a name can have multiple RRsets for different RRtypes [111]. ...
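A tiny helper (an assumption of ours, not from the cited work) illustrating the conventions described above: the optional trailing dot marks the empty root label and is usually omitted, and the remaining labels are delimited by dots.

    # Split a (possibly fully qualified) domain name into its labels.
    def labels(name):
        name = name.rstrip(".")          # drop the optional root dot
        return name.lower().split(".")   # labels are delimited by dots

    print(labels("www.example.com."))    # ['www', 'example', 'com']
    print(labels("www.example.com"))     # same labels, trailing dot omitted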
Article
Full-text available
With the emergence of remote education and work in universities due to COVID-19, the 'zoomification' of higher education, i.e., the migration of universities to the clouds, reached the public discourse. Ongoing discussions reason about how this shift will take control over students' data away from universities, and may ultimately harm the privacy of researchers and students alike. However, there has been no comprehensive measurement of universities' use of public clouds and reliance on Software-as-a-Service offerings to assess how far this migration has already progressed. We perform a longitudinal study of the migration to public clouds among universities in the U.S. and Europe, as well as institutions listed in the Times Higher Education (THE) Top100 between January 2015 and October 2022. We find that cloud adoption differs between countries, with one cluster (Germany, France, Austria, Switzerland) showing a limited move to clouds, while the other (U.S., U.K., the Netherlands, THE Top100) frequently outsources universities' core functions and services---starting long before the COVID-19 pandemic. We attribute this clustering to several socio-economic factors in the respective countries, including the general culture of higher education and the administrative paradigm taken towards running universities. We then analyze and interpret our results, finding that the implications reach beyond individuals' privacy towards questions of academic independence and integrity.
... parts [50]: (1) The scheme, which defines the method that should be used to address the resource, e.g., the network protocol (cf. Section 4.1.2). ...
... To this end, Fielding defines six properties web services and APIs must fulfill in order to be considered RESTful. Roy Fielding was also part of the development teams for the URI standard [50] as well as the HTTP/1.1 protocol [149]. Although REST constitutes a general design pattern of web interfaces, which is independent of the used protocol and implementation details, it shares common ideas and design principles with both standards. ...
... (Left) shows the distribution of tweets from the UK, while (right) is for all the 11 EU countries, for associating a tweet (represented as sioc:Post) with schema:Place, i.e., its geographical information. sioc:name from the SIOC Core Ontology (Berners-Lee et al. 1998; Bradner 1997) associates a place with its name represented as a text literal. schema:addressCountry specifies the country code of the geographic location of the tweet. ...
... Sect. 3.2), the class sioc_t:Category from the SIOC Type Ontology (Berners-Lee et al. 1998; Bradner 1997) is used to represent the topics, whose property rdfs:label represents the top topic words and sioc:id refers to the id of the regarding topic. UnemployedPopulation is used to specify the population of the unemployment rate. ...
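A hedged rdflib sketch of the modelling described in the two excerpts above; the linking property between the post and the place, the example resources, and the exact namespace URIs are assumptions of ours, not taken from MGKB.

    # Sketch: a tweet (sioc:Post) linked to a schema:Place carrying a
    # sioc:name literal and a schema:addressCountry code.
    from rdflib import Graph, Literal, Namespace, URIRef

    SIOC = Namespace("http://rdfs.org/sioc/ns#")      # assumed namespace URI
    SCHEMA = Namespace("http://schema.org/")          # assumed namespace URI

    g = Graph()
    tweet = URIRef("http://example.org/tweet/1")
    place = URIRef("http://example.org/place/london")

    g.add((tweet, SCHEMA.location, place))                 # assumed linking property
    g.add((place, SIOC.name, Literal("London")))           # place name as text literal
    g.add((place, SCHEMA.addressCountry, Literal("GB")))   # country code

    print(g.serialize(format="turtle"))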
Article
Full-text available
Among other ways of expressing opinions on media such as blogs, and forums, social media (such as Twitter) has become one of the most widely used channels by populations for expressing their opinions. With an increasing interest in the topic of migration in Europe, it is important to process and analyze these opinions. To this end, this study aims at measuring the public attitudes toward migration in terms of sentiments and hate speech from a large number of tweets crawled on the decisive topic of migration. This study introduces a knowledge base (KB) of anonymized migration-related annotated tweets termed as (MGKB). The tweets from 2013 to July 2021 in the European countries that are hosts of immigrants are collected, pre-processed, and filtered using advanced topic modeling techniques. BERT-based entity linking and sentiment analysis, complemented by attention-based hate speech detection, are performed to annotate the curated tweets. Moreover, external databases are used to identify the potential social and economic factors causing negative public attitudes toward migration. The analysis aligns with the hypothesis that the countries with more migrants have fewer negative and hateful tweets. To further promote research in the interdisciplinary fields of social sciences and computer science, the outcomes are integrated into MGKB, which significantly extends the existing ontology to consider the public attitudes toward migrations and economic indicators. This study further discusses the use-cases and exploitation of MGKB. Finally, MGKB is made publicly available, fully supporting the FAIR principles.
... As an example, the hotel search operation in the Amadeus API [6] requires users to provide valid hotel names (e.g., "Hotel California"), hotel chains (e.g., "Hilton"), IATA airport codes (e.g., "BUE" for Buenos Aires), ISO currency codes (e.g., "EUR" for Euro), and ISO language codes (e.g., "FR" for French), among others. Generating meaningful values for these types of parameters randomly ... the syntactic validation generating values with the right format, the chances of constructing API requests that return some results - and therefore exercise the core functionality of the API - would be remote. To address this issue, most test case generation approaches resort to data dictionaries: sets of input values collected by the testers, either manually [10] or, when possible, automatically [11]. ...
... The Web of Data is a global data space in continuous growth that contains billions of interlinked queryable data published following the Linked Data principles [29]. According to these principles, resources are identified using Uniform Resource Identifiers (URIs) [30] and resource relationships are specified using the Resource Description Framework (RDF) [31]. RDF is a standard that specifies how to identify relationships between resources in the form of triples composed of a subject, a predicate, and an object, denoted as <subject, predicate, object>. The predicate specifies the relationship (or link) that holds between the subject and object entities. ...
Article
Automated test case generation for web APIs is a thriving research topic, where test cases are frequently derived from the API specification. However, this process is only partially automated since testers are usually obliged to manually set meaningful valid test inputs for each input parameter. In this article, we present ARTE, an approach for the automated extraction of realistic test data for web APIs from knowledge bases like DBpedia. Specifically, ARTE leverages the specification of the API parameters to automatically search for realistic test inputs using natural language processing, search-based, and knowledge extraction techniques. ARTE has been integrated into RESTest, an open-source testing framework for RESTful APIs, fully automating the test case generation process. Evaluation results on 140 operations from 48 real-world web APIs show that ARTE can efficiently generate realistic test inputs for 64.9% of the target parameters, outperforming the state-of-the-art approach SAIGEN (31.8%). More importantly, ARTE supported the generation of over twice as many valid API calls (57.3%) as random generation (20%) and SAIGEN (26%), leading to a higher failure detection capability and uncovering several real-world bugs. These results show the potential of ARTE for enhancing existing web API testing tools, achieving an unprecedented level of automation.
... The basis of the exchanges in NDN is data. Each content is identified by a name or prefix which has a hierarchical structure like Uniform Resource Identifiers (URI) [75]. This naming has the advantage of having a semantic meaning for users. ...
... In)) | replicationIndex' nt => (twice_plus_one (twice_plus_one Int31.In)) | replicationParameter' nt => (twice (twice (twice Int31.In))) | rootDomain' nt => (twice_plus_one (twice (twice Int31. ...
Thesis
In the Big Data community, MapReduce has been considered as one of the main approaches to answer the permanently increasing computing resources demand imposed by the large data amount. Its importance can be explained by the evolution of the MapReduce paradigm which permits massively parallel and distributed computing over many nodes. Information-Centric Networking (ICN) aims to be the next Internet architecture. It brings many features such as network scalability and in-network caching by moving the networking paradigm from the current host-centric to content-centric where the resources are important, not their location. To benefit from the ICN property, Big Data architecture needs to be adapted to comply with this new Internet architecture. One dominant of these CCN architectures is Named Data Networking (NDN) which has been financed by the American National Science Foundation (NSF) in the scope of the project Future Internet Architecture (FIA). We aim to define a Big Data architecture operating on Named Data Networking (NDN) and capitalizing on its properties. First, we design a fully distributed, resilient, secure and adaptable distributed file system NDFS (NDN Distributed File System) which is the first layer (Data Layer) in the Big Data stack. To perform computation on the data replicated using a Distributed File System, a Big Data architecture must include a Compute Layer. Based on NDN, N-MapReduce is a new way to distribute data computation for processing large datasets of information. It has been designed to leverage the features of the data layer. Finally, through the use of formal verification, we validate our Big Data architecture.
... data to be interpreted directly by humans), but it is also organized to derive new facts from the current ones, that is, it deals essentially with knowledge. The feature of the underlying network devised by its creators, its universality (given by URIs [20] and the hypertext transfer protocol HTTP), became a problem for private companies and organizations as it was not "practical" due to privacy and property rights concerns. Google overcame this and developed the notion of knowledge graph as a "finite", manageable, controlled and usually private Semantic Web. ...
... Formally, an RDF graph is a set of triples (s, p, o) such that s, p, o ∈ Const, so that (s, p, o) represents an edge from s to o with label p. A second important feature of RDF graphs is that Const is considered as a set of Uniform Resource Identifiers (URIs [20,29]), that can be used to identify any resource used by Web technologies. In this way, RDF graphs have a universal interpretation: if a constant c ∈ Const is used in two different RDF graphs, then c is considered to represent the same element. ...
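A toy illustration of the formal model just described: an RDF graph as a plain set of (s, p, o) triples over URI constants, where reusing the same URI in two graphs means referring to the same element, so graphs can simply be unioned (the URIs are invented).

    # An RDF graph as a set of (subject, predicate, object) triples.
    EX = "http://example.org/"

    g1 = {(EX + "alice", EX + "knows", EX + "bob")}
    g2 = {(EX + "bob", EX + "worksAt", EX + "acme")}

    merged = g1 | g2   # set union; the shared URI .../bob denotes the same node
    print(len(merged)) # 2 edges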
Preprint
Full-text available
Graphs have become the best way we know of representing knowledge. The computing community has investigated and developed the support for managing graphs by means of digital technology. Graph databases and knowledge graphs surface as the most successful solutions to this program. The goal of this document is to provide a conceptual map of the data management tasks underlying these developments, paying particular attention to data models and query languages for graphs.
... Resources are uniquely identified using URIs (Uniform Resource Identifiers). Unlike a URL (Uniform Resource Locator), which necessarily has to locate an existing Web page or a position therein, a URI is a universal pointer that uniquely identifies an abstract or a physical resource [14] which does not necessarily have (although it is recommended) a representative Web page. For example, <http://example.com/automobile/Butterfly>
... and ex:Butterfly are URIs. In the latter, the prefix ex denotes a namespace [14], which uniquely identifies the scheme under which all related resources are defined. Thanks to the use of prefixes, the car door type Butterfly of URI ex:Butterfly defined inside the scheme ex can be differentiated from the insect Butterfly of URI ins:Butterfly defined inside the scheme ins. ...
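A minimal sketch of the prefix expansion described above; the namespace URIs bound to ex and ins are invented for illustration.

    # Expand prefixed names to full URIs via a prefix table.
    PREFIXES = {
        "ex":  "http://example.com/automobile/",
        "ins": "http://example.com/insect/",
    }

    def expand(qname):
        prefix, local = qname.split(":", 1)
        return PREFIXES[prefix] + local

    print(expand("ex:Butterfly"))   # http://example.com/automobile/Butterfly
    print(expand("ins:Butterfly"))  # http://example.com/insect/Butterfly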
Thesis
Full-text available
The remarkable advances achieved in both research and development of Data Management as well as the prevalence of high-speed Internet and technology in the last few decades have caused unprecedented data avalanche. Large volumes of data manifested in a multitude of types and formats are being generated and becoming the new norm. In this context, it is crucial to both leverage existing approaches and propose novel ones to overcome this data size and complexity, and thus facilitate data exploitation. In this thesis, we investigate two major approaches to addressing this challenge: Physical Data Integration and Logical Data Integration. The specific problem tackled is to enable querying large and heterogeneous data sources in an ad hoc manner. In the Physical Data Integration, data is physically and wholly transformed into a canonical unique format, which can then be directly and uniformly queried. In the Logical Data Integration, data remains in its original format and form and a middleware is posed above the data allowing to map various schemata elements to a high-level unifying formal model. The latter enables the querying of the underlying original data in an ad hoc and uniform way, a framework which we call Semantic Data Lake, SDL. Both approaches have their advantages and disadvantages. For example, in the former, a significant effort and cost are devoted to pre-processing and transforming the data to the unified canonical format. In the latter, the cost is shifted to the query processing phases, e.g., query analysis, relevant source detection and results reconciliation. In this thesis we investigate both directions and study their strengths and weaknesses. For each direction, we propose a set of approaches and demonstrate their feasibility via a proposed implementation. In both directions, we appeal to Semantic Web technologies, which provide a set of time-proven techniques and standards that are dedicated to Data Integration. In the Physical Integration, we suggest an end-to-end blueprint for the semantification of large and heterogeneous data sources, i.e., physically transforming the data to the Semantic Web data standard RDF (Resource Description Framework). A unified data representation, storage and query interface over the data are suggested. In the Logical Integration, we provide a description of the SDL architecture, which allows querying data sources right on their original form and format without requiring a prior transformation and centralization. For a number of reasons that we detail, we put more emphasis on the virtual approach. We present the effort behind an extensible implementation of the SDL, called Squerall, which leverages state-of-the-art Semantic and Big Data technologies, e.g., RML (RDF Mapping Language) mappings, FnO (Function Ontology) ontology, and Apache Spark. A series of evaluation is conducted to evaluate the implementation along with various metrics and input data scales. In particular, we describe an industrial real-world use case using our SDL implementation. In a preparation phase, we conduct a survey for the Query Translation methods in order to back some of our design choices.
... In particular, parameters like the user and campaign identifier are passed to tracking scripts through the reference URL. The URL structure required to correctly parse the parameters is defined in public standards (Berners-Lee, Fielding, & Masinter, 1998). The parameters are included behind the file name after a question mark, with each key-value pair linked by an equal sign. ...
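A short sketch of the parsing step described above: tracking parameters appear behind the file name after a question mark as key-value pairs joined by equal signs (the URL and parameter names are invented).

    # Extract query parameters from a (hypothetical) tracking-pixel URL.
    from urllib.parse import urlsplit, parse_qs

    ref = "https://news.example.com/open.gif?user=12345&campaign=spring-sale"
    params = parse_qs(urlsplit(ref).query)
    print(params["user"][0], params["campaign"][0])   # 12345 spring-sale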
Preprint
Email tracking allows email senders to collect fine-grained behavior and location data on email recipients, who are uniquely identifiable via their email address. Such tracking invades user privacy in that email tracking techniques gather data without user consent or awareness. Striving to increase privacy in email communication, this paper develops a detection engine to be the core of a selective tracking blocking mechanism in the form of three contributions. First, a large collection of email newsletters is analyzed to show the wide usage of tracking over different countries, industries and time. Second, we propose a set of features geared towards the identification of tracking images under real-world conditions. Novel features are devised to be computationally feasible and efficient, generalizable and resilient towards changes in tracking infrastructure. Third, we test the predictive power of these features in a benchmarking experiment using a selection of state- of-the-art classifiers to clarify the effectiveness of model-based tracking identification. We evaluate the expected accuracy of the approach on out-of-sample data, over increasing periods of time, and when faced with unknown senders.
... It is therefore clear that Linked Data depends on two fundamental technologies for the Web: Uniform Resource Identifier (URI) [2] and HyperText Transfer Protocol (HTTP), which use the Resource Description Framework (RDF) format [3] to create typed links between arbitrary objects. ...
Article
Full-text available
The development of storage standards for databases of different natures and origins makes it possible to aggregate and interact with different data sources in order to obtain and show complex and thematic information to the end user. This article aims to analyze some possibilities opened up by new applications and hypothesize their possible developments. With this work, using the currently available Web technologies, we would like to verify the potential for the use of Linked Open Data in the world of WebGIS and illustrate an application that allows the user to interact with Linked Open Data through their representation on a map. Italy has an artistic and cultural heritage unique in the world and the Italian Ministry of Cultural Heritage and Activities and Tourism has created and made freely available a dataset in Linked Open Data format that represents it. With the aim of enhancing and making this heritage more usable, the National Research Council (CNR) has created an application that presents this heritage via WebGIS on a map. Following criteria definable by the user, such as the duration, the subject of interest and the style of the trip, tourist itineraries are created through the places that host this heritage. New possibilities open up where the tools made available by the Web can be used together, according to pre-established sequences, to create completely new applications. This can be compared to the use of words, all known in themselves, which, according to pre-established sequences, allow us to create ever new texts.
... Technically, the Semantic Desktop aims at bringing Semantic Web [29] technology to users' desktops. In short, information items (files, emails, bookmarks, etc.) are treated as Semantic Web resources, each identified by a URI [28,30] and accessible and queryable as an RDF graph [44,55]. Ontologies [213] allow users to express personal mental models that interconnect these information items with their mental concepts like persons, organizations, locations, projects, topics, tasks, events, etc. ...
Preprint
This paper presents a retrospective overview of a decade of research in our department towards self-organizing personal knowledge assistants in evolving corporate memories. Our research is typically inspired by real-world problems and often conducted in interdisciplinary collaborations with research and industry partners. We summarize past experiments and results comprising topics like various ways of knowledge graph construction in corporate and personal settings, Managed Forgetting and (Self-organizing) Context Spaces as a novel approach to Personal Information Management (PIM) and knowledge work support. Past results are complemented by an overview of related work and some of our latest findings not published so far. Last, we give an overview of our related industry use cases including a detailed look into CoMem, a Corporate Memory based on our presented research already in productive use and providing challenges for further research. Many contributions are only first steps in new directions with still a lot of untapped potential, especially with regard to further increasing the automation in PIM and knowledge work support.
... Its comparability to CORBA is inevitable, as both were developed by OMG and use IDL [28]. Such OMG protocol offers more than 20 QoS options, such as standard configurations; channel security with DTLS for SSL/TLS; allows authentication; uses the decentralized architecture publisher/subscriber; it is a binary protocol; it has no central component; it uses a middleware; and its transport can happen both through UDP and also TCP. (Footnotes: 1 Low Power Wide Area Network - which can provide long-distance connectivity [17] and excellent energy conservation capacity [18]. 2 Uniform Resource Identifier [19]. 3 Topics link to a key that represents a registered quantity [1,20]; in some protocols topics can be nested and have a hierarchy. 4 Quality of Service [24,25].) ...
Preprint
Full-text available
We analyze the utilization of publish-subscribe protocols in IoT and Fog Computing and challenges around security configuration, performance, and qualitative characteristics. Such problems with security configuration lead to significant disruptions and high operation costs. Yet, these issues can be prevented by selecting the appropriate transmission technology for each configuration, considering the variations in sizing, installation, sensor profile, distribution, security, networking, and locality. This work aims to present a comparative qualitative and quantitative analysis of diverse configurations, focusing on Smart Agriculture's scenario and specifically the case of fish-farming. As a result, we applied a data generation workbench to create datasets of relevant research data and compared the results in terms of performance, resource utilization, security, and resilience. Also, we provide a qualitative analysis of use case scenarios for the quantitative data produced. As a contribution, this robust analysis provides a blueprint for decision support for Fog Computing engineers analyzing the best protocol to apply in various configurations.
... There are three components of SPIFFE: a specification and identifier used as a referral for a service (SPIFFE-ID), a SPIFFE Verifiable Identity Document (SVID) for embedding the SPIFFE-ID that is signed by a trusted authority, and the Workload API, which is an agent running within the cloud that provides a method of obtaining the SVID. The SPIFFE-ID is a uniform resource identifier (URI) [5] that includes the scheme "spiffe://," a trust domain, and a path that specifies the name of the service. For example, the foo service is a billing system whose administrative domain is in the production environment. ...
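An illustrative parse of a SPIFFE-ID of the form spiffe://<trust-domain>/<path>, as described above; the example values are hypothetical.

    # Split a SPIFFE-ID URI into its trust domain and workload path.
    from urllib.parse import urlsplit

    def parse_spiffe_id(spiffe_id):
        p = urlsplit(spiffe_id)
        if p.scheme != "spiffe":
            raise ValueError("not a SPIFFE ID")
        return p.netloc, p.path        # (trust domain, path naming the service)

    print(parse_spiffe_id("spiffe://prod.example.org/billing/foo"))
    # ('prod.example.org', '/billing/foo')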
Article
Full-text available
As applications move to multiple clouds, the network has become a reactive element to support cloud consumption and application needs. Through each generation of network architectures, identifiers and the use of dynamic locators evolved in different levels of the protocol stack. The identifiers and locators type is defined by the isolation boundary and how the architecture considers semantic overload in the IP address. Each solution is an outcome of incrementalism, resulting in application delivery outgrowing the underlying network. This paper contributes an industrial retrospective of how the schemes and mechanisms for identification and location of network entities have evolved in traditional data centers and how they match cloud-native application requirements. Specifically, there is an evaluation of each application artifact that forced necessary changes in the identifiers and locators. Finally, the common themes are highlighted from observations to determine the investigation areas that may play an essential role in the future of cloud-native networking.
... So a user is an important entity within the presented system. A user is a person with a number of static attributes presented in the following. URIs are defined as in (Berners-Lee, 1998). URIs are used here for generality as they provide many advantages over URLs, an example of which is persistence. ...
Thesis
Full-text available
The main objective of this work is to address the problem of information overload within small groups, driven by similar goals in a way that would enable the delivery of personalised and non-intrusive browsing recommendations and hints as well as aid the users in their information finding activities. The basic idea upon which this work builds is that information gained and created by a user navigating the information space can be used to assist other users in their navigation and information finding activities. The presented model utilises, extends, and combines ideas from open hypermedia with those from Web assistants and recommender systems to achieve its goals. The result of this combination is manifested in the idea of 'linking in context' which this work presents as a novel way of offering Web users recommendations for concepts related to what they are browsing. The integration of the various concepts is facilitated by the use of a multiagent framework. Creating a flexible and open architecture that can accommodate these goals as well as identifying information finding and recommendation building blocks, is one important dimension of this work. Developing a linking model to embrace context on the user and document level, is another.
... A Data Entity is a top-level container of information (Esteva et al., 2019). From a machine perspective it is most relevant as a data object that has a unique uniform resource identifier (URI) (Berners-Lee et al., 2005) and at least some technical metadata. From the human perspective, the relevance of a data entity is that of a data concept, something that has meaning for humans and hence has some descriptive metadata. ...
Article
Full-text available
This paper presents a lightweight, flexible, extensible, machine readable and human-intelligible metadata schema that does not depend on a specific ontology. The metadata schema for metadata of data files is based on the concept of data lakes where data is stored as they are. The purpose of the schema is to enhance data interoperability. The lack of interoperability of messy socio-economic datasets that contain a mixture of structured, semi-structured, and unstructured data means that many datasets are underutilized. Adding a minimum set of rich metadata and describing new and existing data dictionaries in a standardized way goes a long way to make these high-variety datasets interoperable and reusable and hence allows timely and actionable information to be gleaned from those datasets. The presented metadata schema OIMS can help to standardize the description of metadata. The paper introduces overall concepts of metadata, discusses design principles of metadata schemes, and presents the structure and an applied example of OIMS.
... The Web is built on a set of simple standards: URIs (Uniform Resource Identifiers) as a unique and global identification mechanism [T. Berners-Lee et al. 1998]; HTTP (HyperText Transfer Protocol), the universal access mechanism [R. Fielding 1999]; and HTML (HyperText Markup Language), the widely used content format [D. Raggett & I. Jacobs 1999]. The structuring of data plays a key role in its reuse and even facilitates the creation of tools to reuse the data reliably. ...
Thesis
Full-text available
The Linked Data initiative aims at publishing structured and interlinked data on the Web by using Semantic Web technologies These technologies provide different languages for expressing data as RDF graphs and querying it with SPARQL. Linked data allow the implementation of applications that reuse data distributed on the Web. To facilitate interoperability between these applications, data issued from different providers has to be interlinked. It means that the same entity in different data sets must be identified. One of the key challenges of linked data is to deal with this heterogeneity by detecting links across datasets. In such a dynamic environment, the Web of data evolves: new data are added; outdated data are removed or changed. Then, links between data have to evolve too. Since links should not be recomputed each time a change occurs, the semantic Web needs methods that consider the evolution. Over the time, dead links can appear. Dead links are those pointing at URIs that are no longer maintained, and those that are not being set when new data is published. Too many dead links lead to a large number of unnecessary HTTP requests by applications consumers. A current research topic addressed by the Linked Data community is link maintenance. We propose in this thesis an approach to discover the links between the RDF data based on the link models that appear around the resources and ontology alignment. Our approach also includes a process to maintain links when a data change occurs. The goal of our approach is to detect correct links and erroneous links in the same database (intra-base links) and in a basic set (inter-base links). After the detection process, we propose a link maintenance method. To evaluate the performance of our approach we used the test of the 2012 OAEI evaluation campaign. We compared our approach with other systems. The obtained results show the good performance of our approach.
... Formally, an RDF graph is a set of triples (s, p, o) such that s, p, o ∈ Const, so that (s, p, o) represents an edge from s to o with label p. A second important feature of RDF graphs is that Const is considered as a set of Uniform Resource Identifiers (URIs [17,25]), that can be used to identify any resource used by Web technologies. In this way, RDF graphs have a universal interpretation: if a constant c ∈ Const is used in two different RDF graphs, then c is considered to represent the same element. ...
... Req. 10: All RDFS Literals of type geo:wktLiteral shall consist of an optional URI identifying the coordinate reference system followed by Simple Features Well Known Text (WKT) describing a geometric value. Valid geo:wktLiteral instances are formed by concatenating a valid, absolute URI as defined in [24], one or more spaces (Unicode U+0020 character) as a separator, and a WKT string as defined in Simple Features [25]. ...
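A small sketch of Req. 10 above: a geo:wktLiteral is an optional absolute CRS URI, one or more spaces, then a WKT geometry. Wrapping the CRS URI in angle brackets follows common GeoSPARQL serializations, and the coordinates shown are invented.

    # Build the lexical form of a geo:wktLiteral (CRS URI is optional).
    def wkt_literal(wkt, crs_uri=None):
        return "<%s> %s" % (crs_uri, wkt) if crs_uri else wkt

    print(wkt_literal("POINT(23.7 37.9)",
                      "http://www.opengis.net/def/crs/OGC/1.3/CRS84"))
    # <http://www.opengis.net/def/crs/OGC/1.3/CRS84> POINT(23.7 37.9)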
Preprint
Full-text available
We propose a series of tests that check for the compliance of RDF triplestores with the GeoSPARQL standard. The purpose of the benchmark is to test how many of the requirements outlined in the standard a tested system supports and to push triplestores forward in achieving a full GeoSPARQL compliance. This topic is of concern because the support of GeoSPARQL varies greatly between different triplestore implementations, and such support is of great importance for the domain of geospatial RDF data. Additionally, we present a comprehensive comparison of triplestores, providing an insight into their current GeoSPARQL support.
... After that, he briefly described all his efforts in creating the web as a marriage between Hypertext and the Internet in the form of the Uniform Resource Identifier (Berners-Lee & Fischetti, 2001). URI is an addressing schema that can address every resource type, from a file (with text, image, voice or movie, etc. content), a person (email address), an internet forum or even a running computer program (e.g., a database query; T. Berners-Lee et al., 1998, 2005). ...
Article
Full-text available
How is the future of the Web, as one of the most influential inventions of the twentieth century? Today, there are great conceptual gaps between the Web 2.0 (Social Web), Web 3.0 (Semantic Web), and Web 4.0 (Pragmatic Web) generations. Every generation of Web is merely an independent conceptual branch of Web and the future Web needs to benefit from the combination of concepts from each generation. This paper has three distinct contributions to make for paving the way for the future Web. First, we tried to extract all main concepts of today’s Web, and propose them as the principles for the future Web, which we name the Socio-Pragmatic Web (SPWeb). Then, we use the Activity Theory (AT) as a transdisciplinary theory for human social behavior to provide a conceptual model for SPWeb. Our conceptual model offers generic Knowledge-Embodied Agents (KEA) in three different abstraction levels as the building blocks of SPWeb. As different KEA types, Web Service-KEA, Artificial Intelligent-KEA, and Cognitive-KEA have different levels of capabilities. For each KEA type, we propose different elements and technologies of today’s Web as the candidates for their development to prove the feasibility of these KEAs. Finally, we will show that the Concept Realization and Weaving Functions of the KEA models altogether can generate a powerful future web, as it satisfies SPWeb’s principles.
... URIs by themselves have no mechanism for storing metadata about any objects to which they are supposed to resolve, nor do they have any particular associated persistence policy. However, other identifier schemes with such properties, such as DOIs, are often represented as URIs for convenience (Berners-Lee et al. (1998); Jacobs and Walsh (2004)). ...
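For illustration, the common convention mentioned above of writing a DOI as an HTTPS URI by prefixing the resolver host; the DOI shown is hypothetical.

    # Represent a DOI as a resolvable URI.
    def doi_to_uri(doi):
        return "https://doi.org/" + doi

    print(doi_to_uri("10.1234/example"))   # https://doi.org/10.1234/example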
Preprint
Full-text available
Reproducibility and reusability of research results is an important concern in scientific communication and science policy. A foundational element of reproducibility and reusability is the open and persistently available presentation of research data. However, many common approaches for primary data publication in use today do not achieve sufficient long-term robustness, openness, accessibility or uniformity. Nor do they permit comprehensive exploitation by modern Web technologies. This has led to several authoritative studies recommending uniform direct citation of data archived in persistent repositories. Data are to be considered as first-class scholarly objects, and treated similarly in many ways to cited and archived scientific and scholarly literature. Here we briefly review the most current and widely agreed set of principle-based recommendations for scholarly data citation, the Joint Declaration of Data Citation Principles (JDDCP). We then present a framework for operationalizing the JDDCP; and a set of initial recommendations on identifier schemes, identifier resolution behavior, required metadata elements, and best practices for realizing programmatic machine actionability of cited data. The main target audience for the common implementation guidelines in this article consists of publishers, scholarly organizations, and persistent data repositories, including technical staff members in these organizations. But ordinary researchers can also benefit from these recommendations. The guidance provided here is intended to help achieve widespread, uniform human and machine accessibility of deposited data, in support of significantly improved verification, validation, reproducibility and re-use of scholarly/scientific data.
... The Resource Description Framework (RDF) (Raimond and Schreiber, 2014) is a well-established data model in the Semantic Web community. It is used to express facts about resources identified with uniform resource identifiers (URIs) (Berners-Lee et al., 1998) in the form of statements (subject, predicate and object). Ontologies (Gruber, 1993) are used to model domains and to share formally specified conceptualizations which can be expressed by using RDF Schema (RDFS) (Guha and Brickley, 2004). ...
Preprint
Despite great advances in the area of Semantic Web, industry rather seldom adopts Semantic Web technologies and their storage and query concepts. Instead, relational databases (RDB) are often deployed to store business-critical data, which are accessed via REST interfaces. Yet, some enterprises would greatly benefit from Semantic Web related datasets which are usually represented with the Resource Description Framework (RDF). To bridge this technology gap, we propose a fully automatic approach that generates suitable RDB models with REST APIs to access them. In our evaluation, generated databases from different RDF datasets are examined and compared. Our findings show that the databases sufficiently reflect their counterparts while the API is able to reproduce rather simple SPARQL queries. Potentials for improvements are identified, for example, the reduction of data redundancies in generated databases.
... The Uniform Resource Identifier (URI) is a globally scoped character string that is used to identify digital resources or concepts in the web (Berners-Lee et al. 2005). ...
Article
Full-text available
The technologies behind today's web services, tools, and applications are evolving continually. As a result, the workflows and methods of different business sectors are undergoing constant change. The news industry and journalism are heavily affected by these changes. New technological means for practicing journalism and producing news items are being incorporated in media workflows, challenging well established journalistic norms and practices. The perpetual technological evolution of the web creates a wide range of opportunities. For this reason, both technology companies and media organizations have begun to experiment with semantic web technologies. Our focus in this paper was to discover and define the ways that semantic technologies can contribute to the technological upgrade of everyday journalism. From this perspective, we introduced the term 'semantic journalism' and attempted to investigate the transition of journalism to a semantic-oriented technological framework.
... According to RFC 3986 [43], a URI consists of an optional protocol, an optional host, a sequence of one or more path segments (the absolute path), plus an optional query commonly composed of a sequence of attributes, each of them with an optional value. Thus, a URI or URL has the general form: ... In this step, the URIs from U_I are parsed (box URIparser in Fig. 6) and, following a process similar to [44], three dictionaries are created from the words observed in the URI fields path, attribute and value respectively, as each field has its own semantics. ...
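A rough sketch of the dictionary-building step described above: words are collected separately from the path segments, the attribute names and the attribute values of each URI. The helper name and the sample URI are ours, not taken from [44].

    # Build the three word dictionaries (path, attribute, value) from a URI.
    from urllib.parse import urlsplit, parse_qsl

    def update_dictionaries(uri, path_words, attr_words, value_words):
        parts = urlsplit(uri)
        path_words.update(seg for seg in parts.path.split("/") if seg)
        for attr, value in parse_qsl(parts.query, keep_blank_values=True):
            attr_words.add(attr)
            if value:
                value_words.add(value)

    paths, attrs, values = set(), set(), set()
    update_dictionaries("/search/books?title=quijote&lang=", paths, attrs, values)
    print(sorted(paths), sorted(attrs), sorted(values))
    # ['books', 'search'] ['lang', 'title'] ['quijote']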
Article
Full-text available
The performance of anomaly-based intrusion detection systems depends on the quality of the datasets used to form normal activity profiles. Suitable datasets are expected to include high volumes of real-life data free from attack instances. On account of these requirements, obtaining quality datasets from collected data requires a process of data sanitization that may be prohibitive if done manually, or uncertain if fully automated. In this work, we propose a sanitization approach for obtaining datasets from HTTP traces suited for training, testing, or validating anomaly-based attack detectors. Our methodology has two sequential phases. In the first phase, we clean known attacks from data using a pattern-based approach that relies on tools to detect URI-based known attacks. In the second phase, we complement the result of the first phase by conducting assisted manual labeling in a systematic and efficient manner, setting the focus of expert examination not on the raw data (which would be millions of URIs), but on the set of words contained in three dictionaries learned from these URIs. This dramatically downsizes the volume of data that requires expert discernment, making manual sanitization of large datasets feasible. We have applied our method to sanitize a trace that includes 45 million requests received by the library web server of the University of Seville. We were able to generate clean datasets in less than 84 h with only 33 h of manual supervision. We have also applied our method to some public benchmark datasets, confirming that attacks unnoticed by signature-base detectors can be discovered in a reduced time span.
... As early as 2000, Tim Berners-Lee, the father of the World Wide Web, proposed the concept and hierarchy of the Semantic Web [4]. Based on URI [5] (Uniform Resource Identifier) and XML [6] (Extensible Markup Language), the W3C formally released the first basic ontology modeling language, RDF [7] (Resource Description Framework), in 2004. RDF defines properties (the connection relationships between ontology resources) and resources in the form of triples. ...
Article
Full-text available
The architecture for IoT is the primary foundation for designing and implementing the System of Internet of things. This paper discusses the theory, method, tools and practice of modeling and reasoning the architecture of the Internet of Things system from the dimension of semantic ontology. This paper breaks the way of static ontology modeling, and proposes an implementation framework for real-time and dynamic ontology modeling of the IoT system from the running system. According to the actual needs of the health cabin IoT system and the combination of theory and practice, the system architecture model of the semantic ontology dimension of IoT is built. Then, based on the reasoning rules of the ontology model, the model is reasoned by Pellet reasoning engine which injects the atom of the custom reasoning built-ins into the source code. In this way we have realized the automatic classification and attribute improvement of resources and behaviors of the IoT system, the real-time working state detection and fault diagnosis of the IoT system, and the automatic control of the IoT system and resources.
... Building a tag requires designing a data carrier and encoding information. Common data carriers are 1-D barcodes, 2-D codes, uniform resource identifiers (URI) [6], device memories, and RFID tags. Common 1-D codes are EAN/UPC, ITF-14, UCC/EAN-128, and GS1 DataBar. ...
Article
With the development of the Internet of Things (IoT), the physical space we are living in is experiencing unprecedented digitalization and virtualization. It is an overwhelming trend to achieve the convergence between physical space and cyberspace, where the fundamental problem is to realize the accurate mapping between the two spaces. Therefore, identity modeling and identity addressing, which serve as the main bridge between physical space and cyberspace, are regarded as important research areas. This paper summarizes the related works regarding identity modeling and identity addressing in IoT, and makes a general comparison and analysis based on their respective features. Following that a flexible and low coupling framework, with strong independence between different modules is proposed, where both identity modeling and identity addressing are integrated. Meanwhile, we discuss and analyze the future development and challenges of identity modeling and addressing. It is proved that the identity modeling and identity addressing are extremely significant topics in the era of IoT.
Article
Full-text available
In HTTP video streaming, Flash video is widely deployed to deliver stored media. Owing to TCP's reliable service, picture and sound quality are not degraded by network impairments such as high delay and packet loss. In particular, Adobe's Flash Video (FLV) plays an important role in storing and streaming videos via HTTP over TCP. This paper surveys several HTTP video streaming methods and techniques in the form of a literature review.
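The basic transport pattern referred to here, fetching a stored media file over HTTP on top of TCP, can be sketched in a few lines of Python. The URL and file names below are placeholders, and the example performs a simple ranged progressive download rather than any particular player's logic.

    import requests

    url = "https://example.org/media/sample.flv"   # placeholder URL
    # Request the first megabyte of the stored file; TCP delivers the bytes intact.
    with requests.get(url, headers={"Range": "bytes=0-1048575"}, stream=True, timeout=30) as r:
        r.raise_for_status()
        with open("sample_part.flv", "wb") as out:
            for chunk in r.iter_content(chunk_size=64 * 1024):
                out.write(chunk)
        print("fetched", r.headers.get("Content-Range"))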
Preprint
Full-text available
The number of available genomes of prokaryotic organisms is rapidly growing, enabling comparative genomics studies. Comparing the genomes of organisms with a common phenotype, habitat, or phylogeny often shows that these genomes share some common content. Collecting rules that express common genome traits depending on given factors is useful, as such rules can be used for quality control or for identifying interesting exceptions and formulating hypotheses. Automating rule verification with computational tools requires the definition of a representation schema. In this study, we present EGC (Expected Genome Contents), a flat-text file format for representing expectation rules about the content of prokaryotic genomes. A parser for the EGC format has been implemented using the TextFormats software library, accompanied by a set of related Python packages.
Chapter
Full-text available
Linked Data (LD) emerged as an innovation in libraries over a decade ago. It refers to a set of best practices for publishing and linking structured data using existing Semantic Web technologies. Knowledge organisation in academic libraries can draw on LD technologies to increase the availability of library resources on the world wide web. Existing methods of descriptive cataloguing are based on describing metadata and constructing unique authorised access points as text strings. However, this string-based approach works well in the closed environment of a traditional library catalogue, not in an open environment where data are shared and linked. This chapter investigates the introduction of LD into the organisation of knowledge in academic libraries, as the literature shows that students prefer to search the internet for their information needs. Secondary literature was reviewed and analysed. Findings indicated that libraries that adopted LD increased the visibility of their products on the internet.
Article
This essay studies Chinese spectators' active releasing, searching for, and illicit sharing of imported, voluntarily subtitled, and secretly stored foreign films and videos by offering an etymological study of ziyuan (literally translated into English as 'resource'), the most pervasively used term in the lexicon of contemporary Chinese cinephilia. Delineating the semantic shift of ziyuan from a concept of computer networking to a film and media idiom, I examine how the discursive practice of calling a digitalized film or video 'ziyuan', and the corresponding metadata model of representing a video by its digital identification and location information, provide the mechanism both for locating and retrieving films as digital files from the Internet and for hiding them away from clear recognition and immediate access. As the neologism replacing daoban, the Chinese equivalent of 'piracy', ziyuan as a popular argot, I argue, rehabilitates a Chinese cinephilia thriving on piracy by metaphorically reconceptualizing the global Internet as a vast reservoir and Internet-based media files as untapped natural resources with potential use value. The common use of this term thus mounts a collective resistance to both the unequal global capitalist order and the party-state's intervention in the media market by symbolically exonerating participants in, and beneficiaries of, the making, dissemination, downloading, or streaming of unauthorized films from any blame or criminal charges.
Article
Full-text available
GeoSPARQL is an important standard for the geospatial linked data community, given that it defines a vocabulary for representing geospatial data in RDF, defines an extension to SPARQL for processing geospatial data, and provides support for both qualitative and quantitative spatial reasoning. However, what the community is missing is a comprehensive and objective way to measure the extent of GeoSPARQL support in GeoSPARQL-enabled RDF triplestores. To fill this gap, we developed the GeoSPARQL compliance benchmark. We propose a series of tests that check for the compliance of RDF triplestores with the GeoSPARQL standard, in order to test how many of the requirements outlined in the standard a tested system supports. This topic is of concern because the support of GeoSPARQL varies greatly between different triplestore implementations, and the extent of support is of great importance for different users. In order to showcase the benchmark and its applicability, we present a comparison of the benchmark results of several triplestores, providing an insight into their current GeoSPARQL support and the overall GeoSPARQL support in the geospatial linked data domain.
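For readers unfamiliar with what a GeoSPARQL query looks like in practice, the following Python sketch sends a simple spatial selection to a GeoSPARQL-enabled SPARQL endpoint. The endpoint URL and data are placeholders, and the query exercises only the geof:sfWithin filter function, one of the many requirements such a benchmark checks.

    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("https://example.org/sparql")   # placeholder endpoint
    endpoint.setQuery("""
        PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
        PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
        SELECT ?feature WHERE {
          ?feature geo:hasGeometry ?geom .
          ?geom geo:asWKT ?wkt .
          FILTER(geof:sfWithin(?wkt,
            "POLYGON((23.6 37.8, 23.8 37.8, 23.8 38.0, 23.6 38.0, 23.6 37.8))"^^geo:wktLiteral))
        }
    """)
    endpoint.setReturnFormat(JSON)
    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["feature"]["value"])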
Conference Paper
Full-text available
This work presents a concept for a distributed-ledger-based infrastructure for the identity management of Industrie 4.0 components and introduces the concept of a decentralized registry for Asset Administration Shells (Verwaltungsschalen). Three use cases explain how these approaches can be implemented and applied in practice. The use cases show how Asset Administration Shells and the services they provide can be registered in the decentralized registry and retrieved by users in different asset life-cycle phases. They also show how users of industrial plants can link several Asset Administration Shells to the same asset and how these shells can be found by means of a unique asset ID. The third use case describes how the access information for the Asset Administration Shells can be updated automatically when the owner or operator of an asset changes.
Article
Full-text available
Metadata descriptions are typically monolithic data structures, and their denormalized, text-based nature yields shortcomings such as inconsistencies and heterogeneities. Moreover, the fluidity of research environments, coupled with the single-tenancy of metadata descriptions, impedes effectively enforcing authority over the related datasets. We propose delegation, a novel paradigm for metadata articulation that helps solve these issues. After elaborating on the requirements of this practice, we present two worked-out implementation examples in the domain of geospatial metadata and discuss its advantages with respect to key issues in this domain, namely metadata consistency, interoperability, and mitigating semantic heterogeneity. The technique and the supporting software we present are equally applicable to any XML-based metadata schema and application domain.
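One way to picture this kind of delegation, though not necessarily the mechanism the authors implement, is to let a metadata record pull a shared fragment that is maintained in a single authoritative place. The Python sketch below uses lxml and the standard XInclude mechanism; the element names and file paths are invented for illustration.

    from pathlib import Path
    from lxml import etree

    # Authoritative fragment maintained in one place (illustrative content).
    Path("contact.xml").write_text("<contact><name>Data steward</name></contact>")

    # A record that delegates its contact block to the shared fragment.
    Path("record.xml").write_text(
        '<metadata xmlns:xi="http://www.w3.org/2001/XInclude">'
        '<title>Example dataset</title>'
        '<xi:include href="contact.xml"/>'
        '</metadata>'
    )

    tree = etree.parse("record.xml")
    tree.xinclude()                     # resolve the delegated fragment in place
    print(etree.tostring(tree, pretty_print=True).decode())

Updating contact.xml then updates every record that delegates to it, which is the consistency benefit the abstract highlights.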
Article
Full-text available
When researchers analyze data, significant effort typically goes into data preparation to make the data analysis-ready. This often involves cleaning, pre-processing, harmonizing, or integrating data from one or multiple sources and placing them into a computational environment in a form suitable for analysis. Research infrastructures (RIs) and their data repositories host data and make them available to researchers, but rarely offer a computational environment for data analysis. Published data are often persistently identified, but such identifiers resolve to landing pages that must be (manually) navigated to determine how the data are accessed; this navigation is typically challenging or impossible for machines. This paper surveys existing approaches for improving environmental data access to facilitate more rapid data analyses in computational environments, and thus contribute to a more seamless integration of data and analysis. By analysing current state-of-the-art approaches and solutions implemented by world-leading environmental research infrastructures, we highlight existing practices for interfacing data repositories with computational environments and the challenges moving forward. We found that, while the level of standardization has improved in recent years, it is still challenging for machines to discover and access data based on persistent identifiers. This is problematic with regard to the emerging requirements for FAIR (Findable, Accessible, Interoperable, and Reusable) data in general, and for the seamless integration of data and analysis in particular. There are a number of promising approaches that would improve the state of the art. A key approach presented here involves software libraries that streamline reading data and metadata into computational environments; we describe this approach in detail for two research infrastructures. We argue that developing and maintaining specialized libraries for each RI and for the range of programming languages used in data analysis does not scale well. Based on this observation, we propose a set of established standards and web practices that, if implemented by environmental research infrastructures, will enable the development of RI- and programming-language-independent software libraries with much less implementation and maintenance effort and considerably lower learning requirements for users. To catalyse such advancement, we propose a roadmap and key action points for technology harmonization among RIs that we argue will build the foundation for efficient and effective integration of data and analysis.
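A concrete illustration of the machine-actionable access pattern discussed above is resolving a persistent identifier with HTTP content negotiation instead of scraping its landing page. The short Python sketch below assumes the identifier's resolver supports content negotiation for citation metadata, as DOI resolvers commonly do; the DOI shown is a placeholder.

    import requests

    pid = "https://doi.org/10.5281/zenodo.1234567"   # placeholder persistent identifier
    response = requests.get(
        pid,
        # Ask the resolver for machine-readable citation metadata rather than HTML.
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
        allow_redirects=True,
        timeout=30,
    )
    response.raise_for_status()
    metadata = response.json()
    print(metadata.get("title"), "-", metadata.get("DOI"))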
Chapter
Due to the rapid advancement of mobile communication technologies, the demand for managing mobile devices effectively to fulfill various functionalities is on the rise. It is well known that mobile devices make use of different kinds of modulation approaches to adapt to various channel conditions. In this paper, the authors therefore propose a Modulation Module Update (MMU) framework for updating the modulation module on a mobile device based on OMA DM. The framework defines the management object for updating the modulation module and its associated parameters, as well as three operation phases.
Chapter
This paper explores what can be achieved when the principles and technologies of the web platform are applied to ambient computing. The author presents an experience that realizes some of the goals of an ambient computing system by making use of the technologies and common practices of today's Web Platform. The paper provides an architecture that lowers deployment costs by maximizing the reuse of pre-existing components and protocols, while guaranteeing accessibility, interoperability, and extensibility.
Book
The Art of Feature Engineering, by Pablo Duboue (Cambridge Core: Pattern Recognition and Machine Learning).
Chapter
The semantic web aims at making web content interpretable. It is no less than offering knowledge representation at web scale. The main ingredients used in this context are the representation of assertional knowledge through graphs, the definition of the vocabularies used in graphs through ontologies, and the connection of these representations through the web. Artificial intelligence techniques and, more specifically, knowledge representation techniques, are put to use and to the test by the semantic web. Indeed, they have to face typical problems of the web: scale, heterogeneity, incompleteness, and dynamics. This chapter provides a short presentation of the state of the semantic web and refers to other chapters concerning those techniques at work in the semantic web.
Chapter
Full-text available
Model management is a central activity in Software Engineering. The most challenging aspect of model management is keeping models consistent with each other while they evolve. As a consequence, there has been increasing activity in this area, which has produced a number of approaches to address this synchronization challenge. The majority of these approaches, however, are limited to a binary setting, i.e., the synchronization of exactly two models with each other. A recent Dagstuhl seminar on multidirectional transformations made it clear that further investigation is needed in the domain of general multiple-model synchronization, simply because not every multiary consistency relation can be factored into binary ones. However, with the help of an auxiliary artifact that provides a global view over all models, multiary synchronization can be achieved by existing binary model synchronization means. In this paper, we propose a novel comprehensive system construction to produce such an artifact, using the same underlying base modelling language as the one used to define the models. Our approach is based on the definition of partial commonalities among a set of aligned models. Comprehensive systems can be shown to generalize the underlying categories of graph diagrams and triple graph grammars and can be efficiently implemented in existing tools.