Article

Web Services Description Language (WSDL) 1.1

Authors:
  • Lanka Software Foundation
To read the full-text of this research, you can request a copy directly from the authors.

No full-text available



... The Service attribute is a combination of related ports (lines 47–52). The Port attribute is a single communication endpoint defined as a combination of a binding and a network address (lines 49–51) [4]. ...
... UDDI provides users with the necessary information about deployed services in an intended composition. UDDI [4] locates services and provides access to their WSDL documents. This information is important to users because UDDI allows them to interact with suitable services. ...
... Thus, the causative problem was P1. If the given CSV is 11101, the Hamming distance is (2, 2, 4). Thus, the causative problems are limited to P1 and P2. ...
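The Hamming-distance localization step in the excerpt above can be sketched as follows. This is a minimal illustration, not the cited paper's implementation: the problem signature bit-vectors for P1, P2 and P3 below are hypothetical, chosen only so that the distances to the CSV 11101 come out as (2, 2, 4).

```python
# Sketch: locate causative problems by comparing an observed characteristic
# symptom vector (CSV) against per-problem signatures via Hamming distance.
# The signature vectors are HYPOTHETICAL examples, not from the cited paper.

def hamming(a, b):
    """Number of bit positions in which two equal-length bit strings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def causative_problems(csv, signatures):
    """Return the problems whose signature is nearest to the observed CSV."""
    dists = {p: hamming(csv, s) for p, s in signatures.items()}
    best = min(dists.values())
    return [p for p, d in dists.items() if d == best]

signatures = {"P1": "01100", "P2": "10111", "P3": "00011"}  # hypothetical
csv = "11101"
print({p: hamming(csv, s) for p, s in signatures.items()})  # distances 2, 2, 4
print(causative_problems(csv, signatures))  # P1 and P2 tie at distance 2
```

With two signatures tied at the minimum distance, the candidate set cannot be narrowed further from the CSV alone, which is why the excerpt reports the causative problems as "limited to P1 and P2".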
... The service matching problem is discussed in the literature in different contexts: Internet of Services, SOA, Semantic Web Services and Cloud Services. For service descriptions, several languages have been suggested, each focusing on a different group of service properties depending on the context, such as WSDL [3], Linked-USDL [4], SMI [5] and OWL-S [6]. Based on the context, the scope of matching covers QoS matching, input-output matching, and precondition and postcondition matching. ...
... Due to different opinions on the content of service descriptions and different contexts, a variety of service matching approaches have been suggested. However, these approaches have different problem definitions and have to be examined in detail in order to identify the subproblems solved in each. (Linked-USDL: Linked Unified Service Description Language; SMI: Service Measurement Index; OWL-S: Semantic Markup for Web Services.) Research questions that this survey attempts to answer include: Q1: How do the approaches select the QoS parameters they work with? Based on interviews, non-functional properties research, or datasets? ...
Preprint
Full-text available
Service matching concerns finding suitable services according to the service requester's requirements, which is a complex task due to the increasing number and diversity of cloud services available. Service matching is discussed in the contexts of web service composition and user-oriented service marketplaces. The suggested approaches have different problem definitions and have to be examined more closely in order to identify comparable results and to find out which approaches have built on former ones. One of the most important use cases is service requesters with limited technical knowledge who need to compare services based on their QoS requirements in cloud service marketplaces. Our survey examines service matching approaches in order to find out the relation between their context and their objectives. Moreover, it evaluates their applicability to the cloud service marketplace context.
... The structure of the data captured is likely to change over time (Christensen et al. 2001). XML is also a good format for the computational scientist because it is a simple and extensible way to describe data, and additional functionality can be gained from using it, for example automatic archiving, exchange, transformation into other formats, and publishing on the web using a choice of visual presentations. ...
... SOAP messages). The Web Service Description Language (WSDL) (Christensen et al. 2001) can be used to describe a web service, providing a standard interface. A WSDL document is written in XML and describes a service as a set of endpoints, each consisting of a collection of operations. ...
Thesis
In this thesis we investigate a profile of data management problems, from the initial stages of data capture and storage, through management of evolving data structures, to retrieval and analysis of distributed data. We concentrate on automating the data management process for scientific and engineering users, who are skilled in their field of work but less familiar with database technologies. We first look at data capture, and automatic database generation and data storage, using metadata about the process of engineering design as an example. We use an abstraction layer that is independent of platform, database and data domain. A persistent archive should be flexible enough to cope with data structures that change over time. We show that our architecture naturally facilitates database schema evolution, providing such flexibility automatically. Finally, we demonstrate that scientific simulation results can be retrieved through an automatically generated, customisable web-based interface and analysed with a secure post-processing facility for archived and user-uploaded code.
... Furthermore, the majority of cloud description and modeling languages lack formal documentation or specification documents defining their constructs and elements in detail. By contrast, WSDL and OWL-S are thoroughly documented in their specification documents (Christensen et al., 2001; Martin et al., 2004). Moreover, many legacy systems were based upon these languages, and they provide all the basic concepts required to describe a service and to demonstrate our proposal. ...
... WSDL (Web Services Description Language) (Christensen et al., 2001) is an XML format for describing web services as collections of network endpoints operating on messages. To define network services, WSDL 1.1 mainly uses six elements: Types, Message, PortType, Binding, Port, and Service. ...
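The six WSDL 1.1 constructs named in the excerpt above can be seen in a minimal skeleton. The sketch below is illustrative only: the StockQuote names and example.com address are hypothetical, not taken from the specification. It uses Python's standard ElementTree to list the constructs; note that port nests inside service, so five of the six appear at the top level of the document.

```python
# Sketch: a minimal WSDL 1.1 skeleton showing the six constructs
# (types, message, portType, binding, port, service).
# Service/operation names and the address URL are HYPOTHETICAL.
import xml.etree.ElementTree as ET

WSDL = """<definitions name="StockQuote"
    xmlns="http://schemas.xmlsoap.org/wsdl/"
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns:tns="http://example.com/stockquote"
    targetNamespace="http://example.com/stockquote">
  <types/>
  <message name="GetQuoteInput"/>
  <portType name="StockQuotePortType">
    <operation name="GetQuote">
      <input message="tns:GetQuoteInput"/>
    </operation>
  </portType>
  <binding name="StockQuoteBinding" type="tns:StockQuotePortType"/>
  <service name="StockQuoteService">
    <port name="StockQuotePort" binding="tns:StockQuoteBinding">
      <soap:address location="http://example.com/stockquote"/>
    </port>
  </service>
</definitions>"""

root = ET.fromstring(WSDL)
NS = "{http://schemas.xmlsoap.org/wsdl/}"
# Local names of the top-level WSDL constructs (port is nested under service).
elements = [child.tag.replace(NS, "") for child in root]
print(elements)  # ['types', 'message', 'portType', 'binding', 'service']
```

This mirrors the separation the excerpts describe: portType defines the abstract operations, binding fixes the protocol, and port (inside service) attaches the binding to a concrete network address.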
Conference Paper
Full-text available
Cloud computing is an emerging computing paradigm, which provides high service availability, high scalability, as well as low usage costs. This has encouraged enterprises and individual users to embrace cloud technology. However, the lack of service interoperability (also known as the vendor lock-in issue) still persists. Vendor lock-in is caused by cloud service providers, who aim to prevent clients from switching to other clouds or providers. Existing solutions to overcome vendor lock-in address a specific cloud actor or a specific cloud model, which makes them non-generic. Thus, we present in this paper our Cloud Interoperability Pivot Model (CIPiMo). CIPiMo is a Model-as-a-Service, which standardizes cloud service description languages by transforming them into a Generic Cloud Service Description model (GCSD) to make them interoperable. We rely on MDE techniques to achieve a Model-to-Model transformation. Therefore, we define mappings between the source description languages (OWL-S and WSDL) and the target language (GCSD). Furthermore, we illustrate our proposed meta-models for each language, and we implement our transformations using ATL with OCL. Eventually, we use a static analyzer (AnATLyzer IDE) to validate the correctness of our transformations. We provide use cases to demonstrate the applicability of our approach.
... "Services are computational entities that run on different platforms or are owned by different organizations" [Bucchiarone and Gnesi 2006]. They are described using appropriate service description languages such as the Web Service Description Language (WSDL) [Christensen 2001], and published and discovered according to predefined protocols. ...
... BPEL4Chor offers another internal layer of abstraction by separating choreography technical issues from web service specifications. Finally, at the lowest level, the choreography is realized by interacting real web services, which are defined by WSDL [Christensen 2001], UDDI [OASIS 2004], and XSD [Beech et al. 2012] documents that are understandable by an expert web service programmer. ...
Preprint
For modeling complex, collaborative business environments the need for a comprehensive choreography language is obvious. Since the introduction of WS-CDL by W3C in 2004 as a recommendation, there has been substantial research resulting in various language proposals aimed at correcting its perceived deficiencies. In this paper, we perform a detailed and comprehensive survey of outstanding choreography languages introduced after WS-CDL, tracing the evolution and emphasizing the salient features of each language. We propose a novel categorization scheme that permits meaningful comparison between choreography languages, present existing prominent choreography language evaluation frameworks, determine aspects of choreography languages that were not addressed in previous evaluations, and proceed to carry out a thorough comparative evaluation of these languages, based on previous work and our own discoveries made in this study.
... Shin et al. [80] used the Web Service Description Language (WSDL) to describe the services provided by the REST API of their UAV [101]. WSDL [101], similar to the Web Application Description Language (WADL), which is generally more suitable for Web-based applications [102], is used to describe services and their semantics so that humans and machines can automatically use these services by creating the appropriate requests via TCP/IP calls. ...
Article
Full-text available
As the Internet of Things (IoT) penetrates different domains and application areas, it has recently entered also the world of robotics. Robotics constitutes a modern and fast-evolving technology, increasingly being used in industrial, commercial and domestic settings. IoT, together with the Web of Things (WoT) could provide many benefits to robotic systems. Some of the benefits of IoT in robotics have been discussed in related work. This paper moves one step further, studying the actual current use of IoT in robotics, through various real-world examples encountered through a bibliographic research. The paper also examines the potential of WoT, together with robotic systems, investigating which concepts, characteristics, architectures, hardware, software and communication methods of IoT are used in existing robotic systems, which sensors and actions are incorporated in IoT-based robots, as well as in which application areas. Finally, the current application of WoT in robotics is examined and discussed.
... Historically, one of the most popular syntactic description models for web services is the Web Services Description Language (WSDL) [4], which describes web services by separating the abstract functionality offered by a service from concrete details such as how and where that functionality is offered. It supports descriptions of both SOAP-based services and REST API services, and serves as the standard for the former. ...
... A number of new standards [6][7][8], tools [9], and applications have been developed recently to enhance the use of Web services. Significant progress has been made towards making Web services a scalable solution for distributed computing. ...
... Therefore, service offerings can vary widely regarding the service levels and QoS they can deliver. Service descriptions may also vary, as they may use different description languages such as WSDL (Web Services Description Language) [6], WSOL (Web Services Offering Language) [26], or proprietary languages. The description of their QoS offerings may also be heterogeneous. ...
Preprint
In the service landscape, the issues of service selection, negotiation of Service Level Agreements (SLA), and SLA-compliance monitoring have typically been handled in separate and disparate ways, which affects the quality of the services that consumers obtain from their providers. In this work, we propose a broker-based framework to deal with these concerns in an integrated manner for Software as a Service (SaaS) provisioning. The SaaS Broker selects a suitable SaaS provider on behalf of the service consumer by using a utility-driven selection algorithm that ranks the QoS offerings of potential SaaS providers. Then, it negotiates the SLA terms with that provider based on the quality requirements of the service consumer. The monitoring infrastructure observes SLA-compliance during service delivery by using measurements obtained from third-party monitoring services. We also define a utility-based bargaining decision model that allows the service consumer to express her sensitivity for each of the negotiated quality attributes and to evaluate the SaaS provider's offer in each round of negotiation. A use case with a few quality attributes and their respective utility functions illustrates the approach.
... It was a clear decision to choose XML as the basis for our framework, due to its undeniable success within the Internet community and its acceptance as the basis for nearly every standards movement in the Grid community (e.g. WSDL [5]). ...
Preprint
Full-text available
We present xDGDL, an approach towards a concise but comprehensive Datagrid description language. Our framework is based on the portable XML language and allows syntactic and semantic information to be stored together with arbitrary files. This information can be used to administer, locate, search and process the stored data on the Grid. As an application of the xDGDL approach, we present ViPFS, a novel distributed file system targeting the Grid.
... With this protocol, an application running on a machine anywhere in the world can use algorithms, data and resources stored on different servers [23]. The Web Services Description Language (WSDL) is based on XML and allows one to describe a web service, specifying the abstract interface through which a client may access the service and the details of how to use it [24]. These technologies allowed us to access the KEGG API and to construct a web service client using Java [25] as the programming language and NetBeans [26] 6.8 as the integrated development environment. ...
Preprint
Full-text available
Background: Nowadays, the reconstruction of genome-scale metabolic models is a non-automatized and interactive process based on decision taking. This lengthy process usually requires a full year of one person's work in order to satisfactorily collect, analyze and validate the list of all metabolic reactions present in a specific organism. In order to write this list, one has to manually go through a huge amount of genomic, metabolomic and physiological information. Currently, there is no optimal algorithm that allows one to automatically go through all this information and generate the models taking into account probabilistic criteria of unicity and completeness that a biologist would consider. Results: This work presents the automation of a methodology for the reconstruction of genome-scale metabolic models for any organism. The methodology is the automatized version of the steps implemented manually for the reconstruction of the genome-scale metabolic model of a photosynthetic organism, Synechocystis sp. PCC6803. The steps for the reconstruction are implemented in a computational platform (COPABI) that generates the models from the probabilistic algorithms that have been developed. Conclusions: To validate the robustness of the developed algorithm, the metabolic models of several organisms generated by the platform have been studied together with published models that have been manually curated. Network properties of the models, like connectivity and average shortest mean path, have been compared and analyzed.
... APIs can be described using natural language, informal models, or general-purpose modeling languages. There also exist machine-readable Domain-Specific Languages [14] for describing them, such as RAML [3], WADL [17], WSDL [11], I/O Docs [4], and OpenAPI [1], which gained importance in the last five years by being selected as a standard language for API description. ...
Chapter
Full-text available
The current paper presents a novel Command Line Interface (CLI) tool called ExpressO. This tool is specifically developed for developers who seek to analyze Web APIs implemented using the Express.js framework. ExpressO can automatically extract a specification written in OpenAPI, which is a widely used interface description language. The extracted specification consists of all the implemented endpoints, response status codes, and path and query parameters. Additionally, apart from facilitating automatic documentation generation for the API, ExpressO can also automatically verify the conformity of the Web API interface to its implementation, based on the Express.js framework. The tool has been released on the npm component registry as ‘expresso-api’ and can be globally installed using the command: npm install -g expresso-api. Keywords: OpenAPI Specification, REST API, Express.js, Documentation generation
... interactions over a network. It is an interface described in a machine-processable format (specifically the Web Services Description Language, WSDL [117]). Other systems interact with the Web service in a manner prescribed by its description using SOAP messages, typically conveyed using HTTP with Extensible Markup Language (XML) serialization in conjunction with other Web-related standards. ...
Thesis
The huge and steady growth in terms of the number of distributed devices connected to the Internet, the so-called Internet of Things (IoT), calls for newly developed infrastructure management techniques to deal with the complexity of emerging IoT deployments, especially in light of the growing impact of the sharing economy. In this context, most of the management platforms tackle IoT issues from a high level, where IoT data is managed using Cloud-oriented solutions. In such a scenario, the approach adopted can be categorized under the data-centric approach, where IoT devices are considered as mere data generators uploading data towards Cloud platforms that provide, afterwards, processed data to the users. In order to challenge this mainstream consensus on the relationship between the Cloud and IoT, what is interesting to investigate is the adoption of the Cloud "as-a-Service" approach from a low level when dealing with IoT infrastructure. Indeed, the as-a-Service paradigm provides well-investigated mechanisms for infrastructure and service provisioning; thus the challenge is to adapt this approach to fit the management of a dynamic, possibly virtualized, infrastructure of sensing and actuation resources. Cloud providers can then extend their offering portfolios by providing access to shareable IoT resources according to the utility model, using access at the lowest level where possible. Besides providing access to virtualized IoT nodes, an interesting capability to enable is computing at the network edge (even on the IoT nodes themselves) to meet the requirements of typical IoT applications, such as low processing delays and data privacy. The thesis presents the design and implementation of a set of mechanisms to integrate IoT within the Cloud wisdom. In particular, the approach enables the capability of offering IoT resources (e.g., sensors and actuators) as virtualized resources.
Therefore, the virtual IoT instances can take advantage of the resources (e.g., storage, networking and compute) offered by the Cloud, Fog or the edge-based IoT nodes. The premise then lies in engaging the research from a device-centric perspective using the Stack4Things framework.
... While the Internet has been traditionally used for enabling interactions between humans, providing sites full of information and enabling communication between people, very little had been done until recently to make these features universally available to machines. Web Services are software systems enabling machine-to-machine interactions, encompassing techniques such as data interchange (using XML-based protocols), service discovery (UDDI), machine-readable descriptions of a service (Christensen et al., 2001) and remote invocation of services (World Wide Web Consortium, 2003). As web services have matured and gained in popularity, grid systems have begun to adopt web services techniques. ...
Thesis
With the growth of the Internet over recent years, the use of distributed systems has increased dramatically. Components of distributed systems require a communications infrastructure in order to interact with other components. One such method of communication is a notification service (NS), which delivers notifications of events between publishers and consumers that have subscribed to these events. A distributed NS is made up of multiple NS instances, enabling publishers and consumers to be connected to different NSs and still communicate. The NSs attempt to optimise message flow between them by sharing subscriptions between consumers with similar interests. In many cases, there is a mismatch between the dissemination of notifications from a publisher and the delivery preferences of the consumer in terms of frequency of delivery, quality, etc. Consumers wish to receive a high quality of service, while a service provider acting as a publisher wishes to make its service available to many consumers without overloading itself. Negotiation is applicable to the resolution of this mismatch. However, existing forms of negotiation are incompatible with distributed NSs, where negotiation needs to take into account the preferences of the publisher and consumer, as well as existing subscriptions held by NSs. We introduce the concept of chained negotiation, where one or more intermediaries sit between the client and supplier in a negotiation, as a solution to this problem. Automated chained negotiation can enable a publisher and consumer to find a mutually acceptable set of delivery preferences for a service to be delivered through a distributed NS, while still enabling NSs to share subscriptions between consumers with similar interests.
In this thesis, we present the following contributions: first, we show that by using negotiation over quality of service conditions, a service provider can serve more clients with a lower load on itself, presenting a direct negotiation engine for this purpose. We present chained negotiation as a novel form of negotiation enabling quality of service negotiations to involve intermediaries which may be able to satisfy a client's request without involving the service provider. Finally, we present a distributed notification service with support for chained negotiation, showing the benefit gained from chained quality of service negotiation in a real application.
... The communications usually happen between the service provider, the service requestor, and the registry. The Web Services Description Language (WSDL) [3] provides a recognized, computer-readable description of Web services. WSDL follows the XML format, describing network services as a set of endpoints operating on messages containing either document-oriented or procedure-oriented information. ...
Article
Full-text available
Web services are progressively being used to realize service-oriented architectures. Web services facilitate the integration of applications and simplify interoperability. Additionally, they assist in wrapping existing applications so that developers can access them using standard languages and protocols. The user faces a difficult challenge in selecting the appropriate service in accordance with the user request, as the behavior of the participating service affects the overall performance in discovery, selection, and composition. As a result, it is critical to select a high-quality service provider for these activities. Existing approaches rely on nonfunctional qualities for discovery and selection, but the user cannot always rely on these features, and these QoS values cannot be used to determine the user's quality perspective. Additionally, the user indicates an interest in a high-quality service based on quality attributes or a service with a good reputation throughout the selection process rather than a newly registered service. As a result, a proper bootstrapping mechanism is required to evaluate newly registered services prior to their use by service requestors. This paper proposes a novel bootstrapping mechanism. The contribution of this paper involves (a) a method for evaluating the quality of service (QoS) by focusing on performance-related indicators such as response time, execution time, throughput, latency, and dependability; (b) a methodology for evaluating the QoE attributes based on user reviews that take into account both attributes and opinions; (c) bootstrapping the newly registered service based on quality of service and quality of experience; and (d) building a recommender system that suggests the top-rated service for composition. The evaluation results are used to augment currently available online services by providing up-to-date quality of service and quality of experience attributes for discovery, selection, and composition.
... A typical implementation of SOA is based on web services with interfaces defined in the Web Services Description Language (WSDL). A web service is a loosely coupled software application that can be discovered, described, and accessed based on XML and standard web protocols over intranets, extranets, and the Internet. The WSDL document used in web services can exhibit design problems as it increases in size, which could impact quality attributes like performance and maintainability. ...
Article
Full-text available
Service‐oriented architecture (SOA) has been widely used to design enterprise applications in the past two decades. The services in SOA are becoming complex with the increase in changing user requirements, and SOA is still seen as monolithic from a deployment perspective. Monolithic services make the application complex, and it becomes difficult to maintain. With the evolution of microservices architecture, software architects started migrating legacy applications to microservices. However, existing migration approaches in the literature mostly focus on migrating monolithic applications to microservices. To the best of our knowledge, very few works have been done on migrating SOA applications to microservices. One of the major challenges in the migration process is the extraction of microservices from the existing legacy applications. To address this, we propose an approach to extract the candidate microservices using graph-based algorithms. In particular, four algorithms are defined: (i) construction of a service graph (SG), (ii) construction of a task graph (TG) for each service of a SOA application, (iii) extraction of candidate microservices using the SG of the SOA application, and (iv) construction of a SG for a microservices application to retain the dependencies between the generated microservices. We chose a SOA‐based web application to demonstrate the proposed microservices extraction approach and extracted the microservices. Additionally, we have evaluated the extracted microservices and compared them with SOA‐based services.
... Yet, SOAP does not provide contract descriptions. So, since 2001, the Web Services Description Language (WSDL) [53] has enabled the description of service interfaces and become the de facto standard for syntactic description. ...
Thesis
The World Wide Web is mainly composed of two types of application components: applications and services. Applications, whether they are mobile or Web applications, i.e. intended to be used from a browser, have in common that they are essentially fill-in-the-blank templates, and they communicate with services to customize the application for each user. It is therefore the service that owns and manages the data. To make this communication possible, the services offer APIs following the REST architecture. The management of the life cycle of a REST API is then a central element of the development of systems on the Web. The first step in this life cycle is the definition of the requirements of an API (functionality and software properties). Then, the technologies that will allow it to be designed, implemented and documented are chosen. It is then implemented, documented and put online. From then on, applications can use it. Then follows a phase of maintenance and evolution of the API, in which bugs are fixed and functionalities evolve to adapt to changes in its users' expectations. In this thesis, we review the methods and technologies that accompany the developer during this life cycle. We identify two open challenges. First, there are many technologies for creating and documenting an API. Choosing the most relevant technologies for a project is a difficult task. As a first contribution of this thesis, we establish criteria to compare these technologies. Then, we use these criteria to compare existing technologies and propose three comparison matrices. Finally, to simplify this selection, we have developed an open-source wizard available on the Web, which guides the developer in his choice. The second challenge we have identified is related to the maintenance and evolution of REST APIs. The existing literature does not allow a REST API to evolve freely, without the risk of breaking the applications that use it (their clients).
The second contribution of this work is a new approach to the co-evolution of REST APIs and their clients. We have identified that by following 7 rules governing the documentation of the API and the data it returns to its clients, it is possible to create Web user interfaces capable of adapting to the majority of evolutions of REST APIs without producing bugs or breaking them, and without even requiring the modification of their source code.
... "Services are computational entities that run on different platforms or are owned by different organizations" [3]. They are described using appropriate service description languages such as the Web Service Description Language (WSDL) [4], and published and discovered according to predefined protocols. ...
Preprint
For modeling complex, collaborative business environments the need for a comprehensive choreography language is obvious. Since the introduction of WS-CDL by W3C in 2004 as a recommendation, there has been substantial research resulting in various language proposals and some frameworks for evaluation of these languages. In this paper, we describe three prominent choreography language evaluation frameworks, give an overview of two recent choreography modeling languages, and perform previously nonexistent evaluations of these, highlighting their strong and weak points.
... The W3C defines a Web service as a software system designed to support machine-to-machine (M2M) interaction over a network. It has an interface described in a machine-processable format, specifically WSDL (Web Service Description Language) [10]. Other systems interact with the Web service in a manner prescribed by its description. ...
Thesis
In the context of the Internet of Things, the design of connected services, that is, services carried by connected objects, requires an end-to-end approach, not only to meet the expectations of the beneficiaries of these services but also to adapt the operation of these services to highly varied execution conditions, from the home to the connected city. The semantic approach proposed by this thesis offers a level of abstraction that allows service designers to focus on the functional aspects of services and objects. It fits within a broader architectural framework that addresses, in addition to this semantic level, the more operational aspects of implementing these services (Artefacts level) in potentially heterogeneous technical environments (Resources level). By proposing this semantic design approach, the thesis pursues several objectives that can be grouped into three categories. The first category of objectives is to open up the current world of connected services by decoupling services from connected objects and by allowing objects to be shared by several connected services. The openness induced by these first objectives leads to a second category of objectives concerning the composition of connected services. Each service must be aware of, and adopt a behavior compatible with, the other elements of its execution context. These context elements include, of course, the other services, but also physical phenomena and the actions of the occupants of the spaces concerned. Finally, the third category of objectives is aimed more particularly at the beneficiaries of connected services, in order to optimize the user experience through better-addressed expectations and automation that respects human behavior. 
The theoretical foundation of the semantic approach proposed in this thesis relies on a meta-model that defines the modeling elements needed to model services, connected objects, and service behaviors in a declarative form.
... BPEL is based on classic lightweight web services, typically developed with the SOAP messaging protocol [34]. BPEL processes and their partner interfaces are described in the XML-based Web Service Description Language (WSDL) [35]. However, these technologies are not commonly used in the biomedical informatics domain. ...
Article
Full-text available
Closed-loop neuromodulation control systems facilitate regulating abnormal physiological processes by recording neurophysiological activities and modifying those activities through feedback loops. Designing such systems requires interoperable service composition, consisting of cycles. Workflow frameworks enable standard modular architectures, offering reproducible automated pipelines. However, those frameworks limit their support to executions represented by directed acyclic graphs (DAGs). DAGs need a pre-defined start and end execution step with no cycles, thus preventing the researchers from using the standard workflow languages as-is for closed-loop workflows and pipelines. In this paper, we present NEXUS, a workflow orchestration framework for distributed analytics systems. NEXUS proposes a Software-Defined Workflows approach, inspired by Software-Defined Networking (SDN), which separates the data flows across the service instances from the control flows. NEXUS enables creating interoperable workflows with closed loops by defining the workflows in a logically centralized approach, from microservices representing each execution step. The centralized NEXUS orchestrator facilitates dynamically composing and managing scientific workflows from the services and existing workflows, with minimal restrictions. NEXUS represents complex workflows as directed hypergraphs (DHGs) rather than DAGs. We illustrate a seamless execution of neuromodulation control systems by supporting loops in a workflow as the use case of NEXUS. Our evaluations highlight the feasibility, flexibility, performance, and scalability of NEXUS in modeling and executing closed-loop workflows.
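The DAG restriction described above can be made concrete with a small sketch: a DAG-based scheduler must reject any workflow whose graph contains a cycle, as in this hypothetical record–analyze–stimulate loop. The node names are illustrative, and the cycle check is a plain depth-first search, not NEXUS's actual algorithm.

```python
# Hypothetical closed-loop workflow: record -> analyze -> stimulate -> record.
loop_edges = [("record", "analyze"), ("analyze", "stimulate"),
              ("stimulate", "record")]

def has_cycle(edges):
    """Detect a cycle via depth-first search over the adjacency list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, [])
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in adj}

    def dfs(u):
        color[u] = GRAY
        for v in adj[u]:
            if color[v] == GRAY:            # back edge -> cycle found
                return True
            if color[v] == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in adj)
```

A DAG engine would refuse `loop_edges` outright, which is exactly the gap that a directed-hypergraph representation with explicit control flow is meant to close.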
... WSDL is an XML-based specification for describing a web service [9], including the functional interface information, the interaction rules and the location of the service. In a web service-based business process, all interactions between services are performed through the WSDL specifications exposed externally. ...
Article
Full-text available
Business processes specified in the Business Process Execution Language (BPEL), which integrates existing services into composite services offering more complicated functionality, are error-prone. Verification and testing are necessary to ensure the correctness of business processes. SPIN, whose input language is PROcess MEtaLAnguage (Promela), is one of the most popular tools for detecting software defects and can be used both in verification and testing. In this paper, an automatic approach is proposed to construct a verifiable model of a BPEL-based business process in the Promela language. The business process is translated into an intermediate two-level representation, in which an eXtended Control Flow Graph (XCFG) describes the behavior of the BPEL process at the first level and Web Service Description Models (WSDM) depict the interface information of the composite service and partner services at the second level. From the XCFG of the BPEL process, XCFGs for the partner services are generated to describe their behavior. The Promela model is constructed by defining data types based on the WSDM and defining channels, variables and processes based on the XCFGs. The constructed Promela model is closed, containing not only the BPEL process but also its execution environment. A case study shows that the proposed approach is effective.
... There are many approaches that have been proposed for modeling the quality of web service compositions. In [11] [12] the authors rely on the Web Service Description Language (WSDL) to define the functional and non-functional properties of a service; this approach has some problems, such as not addressing run-time support. In [13], the authors define QoS for web services by using XML schemas that both service consumers and providers apply to define the agreed QoS parameters; this approach also allows for the dynamic selection of web services depending on various QoS requirements. ...
Article
Full-text available
The fast spread of web services in our businesses and day-to-day lives has made QoS an essential aspect for both service providers and consumers. The main problem is how the consumer obtains a composite service of high overall quality when a large number of web services are available; the choice of the optimal path depends on the QoS of every atomic service. Our contribution is studying the influence of the reputation factor in the process of selecting the optimal path in the absence of one of four factors (availability, reliability, response time, and price) and the possibility of covering for this absence. We have used the reputation factor when calculating the QoS, applying an artificial bee colony algorithm to select the optimal web service composition; we then analyzed the impact of reputation on the selection process in terms of QoS and accuracy of the solution. We also studied the impact of the reputation factor in the case of the absence of one of the four factors through three experiments and a set of comparisons. The result was that the reputation factor could cover factors such as availability, response time, and technical support. We used multiple linear regression and polynomial regression to predict the reputation factor from the four other factors. The result had higher confidence with multiple polynomial regression, where the Residual Sum of Squares (RSS) was lower than with multiple linear regression. In addition, we analyzed the association between reputation and the four other factors using an ANOVA test; the result indicates that there is a significant association between reputation and availability, response time, and price, but the association with reliability is not significant.
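To illustrate the kind of aggregation such a selection relies on, a sequential path's QoS can be scored by multiplying availability and reliability along the path, summing response time and price, and averaging reputation. The formula, weights and numbers below are an illustrative textbook model, not the paper's actual fitness function.

```python
def svc(availability):
    # One atomic service; all values except availability are held fixed.
    return {"availability": availability, "reliability": 0.98,
            "response_time": 0.2, "price": 1.0, "reputation": 0.9}

def path_score(services, weights):
    """Aggregate QoS of a sequential composition (illustrative model):
    availability and reliability multiply along the path, response time
    and price add up, reputation is averaged; higher scores are better."""
    avail = rel = 1.0
    rt = price = rep = 0.0
    for s in services:
        avail *= s["availability"]
        rel *= s["reliability"]
        rt += s["response_time"]
        price += s["price"]
        rep += s["reputation"]
    rep /= len(services)
    return (weights["availability"] * avail + weights["reliability"] * rel
            + weights["reputation"] * rep
            - weights["response_time"] * rt - weights["price"] * price)

weights = {"availability": 1.0, "reliability": 1.0, "reputation": 1.0,
           "response_time": 1.0, "price": 1.0}
score_high = path_score([svc(0.99)], weights)
score_low = path_score([svc(0.50)], weights)
```

Dropping one factor's weight to zero and raising the reputation weight is one simple way to model "reputation covering" an absent factor, as studied in the paper.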
... It also uses an XML-centered procedure to exchange data in a localized and concentrated environment. A WSDL file defines a web service [2]. SOAP is a communication protocol that formats the requests sent and the responses received. ...
Conference Paper
Web services are software applications that provide a standardized way of achieving interoperability between disparate applications, communicating with one another through an XML messaging system. In the vast network of web services, it becomes difficult for users to differentiate between valid and invalid services: of the millions of web services available on the internet, only a small proportion are valid. The primary concept of this paper is invoking appropriate web services based on their response time. This is achieved by maintaining a registry of services that is validated with certain checkpoints and assertions.
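A registry validated by response-time checkpoints, as described above, might be sketched as follows. The probe function is a stand-in for a real invocation (e.g. an HTTP ping), and the names and threshold are assumptions, not the paper's implementation.

```python
import time

def validate_registry(registry, probe, max_rt=1.0):
    """Keep only services whose probe call succeeds within max_rt seconds."""
    valid = {}
    for name, endpoint in registry.items():
        start = time.monotonic()
        try:
            ok = probe(endpoint)          # stand-in for a real invocation
        except Exception:
            continue                      # unreachable service: drop it
        elapsed = time.monotonic() - start
        if ok and elapsed <= max_rt:
            valid[name] = elapsed
    return valid

# Demo with a stand-in probe that only accepts http endpoints.
demo = validate_registry({"good": "http://a", "bad": "ftp://b"},
                         lambda ep: ep.startswith("http"))
```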
... Service composition is a process of discovering the required services, reserving them and connecting them to each other. Significant work has been done in this area; initial methods such as the Web Service Description Language (WSDL) [6] focused only on the syntactic information of services for composition. Later approaches took semantic information about services into consideration during composition. ...
Article
Full-text available
The evolution of IoT has revolutionized industrial automation. Industrial devices at every level such as field devices, control devices, enterprise level devices etc., are connected to the Internet, where they can be accessed easily. It has significantly changed the way applications are developed on the industrial automation systems. It led to the paradigm shift where novel IoT application development tools such as Node-RED can be used to develop complex industrial applications as IoT orchestrations. However, in the current state, these applications are bound strictly to devices from specific vendors and ecosystems. They cannot be re-used with devices from other vendors and platforms, since the applications are not semantically interoperable. For this purpose, it is desirable to use platform-independent, vendor-neutral application templates for common automation tasks. However, in the current state in Node-RED such reusable and interoperable application templates cannot be developed. The interoperability problem at the data level can be addressed in IoT, using Semantic Web (SW) technologies. However, for an industrial engineer or an IoT application developer, SW technologies are not very easy to use. In order to enable efficient use of SW technologies to create interoperable IoT applications, novel IoT tools are required. For this purpose, in this paper we propose a novel semantic extension to the widely used Node-RED tool by introducing semantic definitions such as iot.schema.org semantic models into Node-RED. The tool guides a non-expert in semantic technologies such as a device vendor, a machine builder to configure the semantics of a device consistently. Moreover, it also enables an engineer, IoT application developer to design and develop semantically interoperable IoT applications with minimal effort. Our approach accelerates the application development process by introducing novel semantic application templates called Recipes in Node-RED. 
Using Recipes, complex application development tasks such as skill matching between Recipes and existing things can be automated. We will present the approach to perform automated skill matching on the Cloud or on the Edge of an automation system. We performed quantitative and qualitative evaluation of our approach to test the feasibility and scalability of the approach in real world scenarios. The results of the evaluation are presented and discussed in the paper.
... In future, we'd like to extend the service registry in such a way that it makes use of standardized OWL-S [20] and WSDL [21] descriptions, not only making the interop layer easier to use for existing semantically described services, but also making a more flexible service management and discovery possible. ...
Conference Paper
With large scale prototype projects like Dubai, New York or the Smart Cities Mission in India, Smart Cities are no longer a vague idea from the future but are getting more and more momentum in today’s world. While the Internet of Things is emerging as the technology of choice to connect devices and services of all size and complexity to form a Smart City, the problem of interoperability between business layer applications is far from being solved. A remedy for this may come from the world of the Semantic Web: Lifting exchanged data to a semantic level not only allows to abstract from existing interfaces, a vast selection of already available ontologies for different use case scenarios facilitate the creation of an abstraction layer by already providing a commonly available vocabulary to describe functionality and data to be provided. In this paper, we present a light weight, domain specific semantic interoperability layer, and its use in a Smart City environment.
Article
The Metaverse, as a paradigm continuously evolving in the next generation of the Internet, aims to integrate various network applications. However, existing applications on the Internet, such as service computing and edge computing, have highly complex technical requirements. These applications face compatibility issues with the Metaverse in terms of protocols, applications, and services, so they cannot be directly integrated into the Metaverse. How to efficiently deploy service computing in the Metaverse has therefore become a hot research area. Moreover, the Metaverse implements innovative services to offer individuals more immersive experiences, such as virtual reality and augmented reality services. These new services demand substantial computational resources, including computing power, network performance, data security, etc. Ensuring optimal service quality for these new services is another critical aspect of Metaverse research. To address the aforementioned challenges, meta-services, designed to describe, discover, compose, and manage other services, are gradually attracting widespread attention and research. In this paper, we provide a comprehensive review, analysis, and discussion of existing research work. We summarize services computing and novel services in the Metaverse and categorize the meta-services framework into three layers: a meta-bottom layer (meta-data), a meta-middle layer (meta-models, meta-objects, and meta-languages), and a meta-top layer (meta-programming). Based on the meta-services framework, we discuss current challenges and provide future research directions. We hope that this paper enables readers to quickly understand the reasons for each problem and the current research progress, thereby providing guidance and motivation for further research in this field.
Conference Paper
This document describes the creation of the Web Application "Geographic Names Dictionary of the Cuban Republic" using Geospatial Semantic Web. Also describes the creation process of the geographic names ontology. The creation of this ontology is included in the CYTED IDEDES project goals and it is a use case of ontology integration. It starts on a database that contains the information related to a literary publication named "Diccionario de Nombres Geográficos de la República de Cuba". This ontology has been created in a semiautomatic form using Jena semantic framework. The ontology was refined using protégé and linking other ontologies to add axioms and relations with spatial meaning. It is also created a web application based on the Semantic IDE model established by the project CYTED IDEDES.
Chapter
This chapter comprises an overview of model based approaches for the Internet of Things. These models are able to represent the whole life cycle of an IoT application by modeling the physical environment, the application’s business logic, deployment scenarios, execution, monitoring, and adaptation of such applications. We will provide an overview of available models for each step of the introduced lifecycle of IoT applications. These models mostly originate from our previous work in the last years.
Chapter
With the rapid development of sensor and data technologies, the volume of agriculture-related Earth observation data has expanded exponentially. Meanwhile, these data are tremendously diverse in terms of format, sensor, heterogeneity, and quality. Geospatial Web service technologies have shown great potential to build the infrastructure for the collaborative sharing of distributed resources; they are widely used in geospatial data system development to fulfill the requirements of different organizations. In order to maximize the utilization of agro-geoinformation, geospatial data service technologies can play a crucial role in agricultural data planning, collection, curation, transformation, analysis, dissemination, cataloging, discovery, access, mining, visualization, quality control, privacy, provenance, and long-term preservation. This chapter discusses how to design and implement standards-based geospatial web services for the discovery, access, and management of agricultural data interactively at diverse geographic levels. The agricultural data categories and life cycle are discussed. In particular, the architecture of geospatial services, the implementation of service functionalities, and standardization are demonstrated, and a solution for efficient agricultural data retrieval is proposed.
Thesis
Adaptive hypermedia is an emerging technology that changes the conventional way of presenting information online. Instead of displaying a static page, information can now be adapted dynamically to users, where different users with different needs, interests and domain backgrounds are directed to different pieces of information. This idea has attracted the educational community in recent years to develop adaptive systems and applications for learning purposes. Many techniques and methodologies have sprung from these developments for organising the content (domain modeling) and capturing information from the users (user modeling) in order to implement the adaptive mechanisms. This thesis proposes two methodologies that build upon existing methods in both domain modeling and user modeling for adaptive hypermedia, particularly in the educational context. This work demonstrates the use of effective reading speed to present a novel way of modeling the user's browsing history in adaptive hypermedia. Also, in terms of domain modeling, the work proposes the use of a keyword representation technique to free the domain expert from their heavy involvement in the conventional way of organising the content for adaptive hypermedia. As a result, a web-based medical learning application, namely JointZone, is developed to embrace these new methodologies. JointZone's content is made adaptive using two adaptive techniques: knowledge-based link hiding and browsing-history-based link annotation. The thesis also presents a usability study and an evaluation that assess the impact and effectiveness of these adaptive techniques in web-based learning. A secondary objective of this work is to explore how better linkage between information entities on the web can be achieved through the adoption of linkbases.
The thesis hence also demonstrates methods for generating structural, associative and referential links to provide an additional dimension to the conventional methods of linking information in a web-based learning application.
Article
An important aspect of SOA is web service recommendation, which aids in the integration of services in the development of a specific application. Since selecting a web service communication style is a critical design concern for enhancing an application, with influential effects on the development process, recommending the appropriate web service interaction procedure is based on two entities: (1) SOAP (Simple Object Access Protocol), a protocol that ensures data interchange in an integrated language environment, and (2) REST (Representational State Transfer), which enables raw data transfer and communication. This article evaluates the performance of web services for business applications under SOAP and REST. Because web services are delivered over the internet, metrics such as throughput and response time are used to assess them. According to the literature review, the emphasis in recommending web services is placed on interaction style. Nowadays, the number of available services grows in complexity, making it time-consuming and hard to choose among services with the same functions. Features and sets of operations are two terms frequently used to describe web services. A low-quality application or content chosen by a consumer has an impact on the application's overall performance. As a result, web services are recommended on the basis of various quality-of-service (QoS) characteristics. In this paper, different prototypes for web service recommendation are constructed using soft computing approaches, and the performance of various parameters is compared.
Thesis
Today, with advances in Web service technologies, a growing number of Geospatial Web Services (GWS), designed for the interoperability of geospatial information on the Web, have emerged. The Open Geospatial Consortium (OGC) provides a list of standards to ensure a service-oriented environment. The Web Processing Service (WPS) is an OGC standard that provides a syntactic description model for geospatial Web services. Web service discovery identifies the services matching the functionality desired by a user by comparing the user's query with the service description elements. Based on a syntactic description, GWS discovery may be limited to small groups of users. It is generally done either manually, under certain restrictive assumptions, by knowing the identifier or URI of the services, or syntactically, by querying service registries or portals and comparing the query's keywords with the syntactic descriptions of the published services. The objective of this thesis is to improve the GWS discovery process by addressing three research challenges. The first is to propose an approach for the semantic description of GWS that takes into account the description of non-functional properties (Quality of Service, QoS). The second is to propose an approach for the semantic discovery of GWS that exploits the semantic description model. In some cases, no atomic service can meet a user's needs; consequently, several services must be composed to satisfy a query. The third research challenge is therefore to propose an approach for the automatic composition of GWS.
To address these challenges, we propose a coherent process for improving GWS discovery that integrates the following three approaches. A semantic description approach for GWS resolves the problems associated with syntactic GWS descriptions; it is based on adding semantic annotations to bring a semantic layer to the syntactic GWS description model, and these annotations are then exploited to improve the discovery process. The semantic GWS discovery approach is based on computing a functional score that exploits a hybrid semantic matching method, together with a QoS score. For the third challenge, we propose an automatic GWS composition approach that represents the various GWS composition elements in a logic program (Answer Set Programming, ASP). The approach represents a GWS dependency graph, composition rules and composition constraints in a logic program, allowing a solver to search for GWS compositions in response to the user's query.
Chapter
In this chapter, we revisit the problem, summarize the approach to address the problem, review the validation strategy, and identify the main benefits and contributions of this monograph. Based on the summary of this monograph, we speculate about the opportunities for improving PDSIDES by proposing a framework for a Cloud-Based Platform for Decision Support in the Design of engineered Systems (CB-PDSIDES) and identify several challenges and research questions for the realization of CB-PDSIDES.
Article
Consumers evaluate and choose cloud-based services based on the Service Level Agreements (SLA). These agreements list the service terms and metrics to be agreed upon by the service providers and the customers. Current cloud SLAs are text documents that require significant manual effort to parse and determine if providers meet the SLAs. Moreover, due to the lack of standardization, providers differ in the way they define the terms and metrics, making it more difficult to compare different provider SLAs. We have developed a novel framework to significantly automate the process of extracting knowledge embedded in cloud SLAs and representing it in a semantically rich knowledge graph helping the user to make a calculated decision in choosing a provider. Our framework captures the key terms, measures, and deontic rules, in the form of obligations and permissions present in the cloud SLAs. In this paper, we discuss our framework, technique, and challenges in automating the cloud services agreement. We also describe our results and their validation against well-established standards.
Article
Full-text available
Praxeme is an enterprise methodology that combines the SOA and MDA approaches to build an information system. It is based on the principle of separating business concerns into homogeneous sets called "aspects" to better control the usual complexity of a system. The instantiation of the ReLEL requirements model in the Praxeme methodology is an interesting approach to improving the quality of the software development process, because it not only allows the user requirements to be specified but also represents the very precise conceptual level of the future system to be designed. This article therefore deals with the automatic generation of SOAP web services, which represent the software aspect of the Praxeme methodology, from the intentional aspect specified by the ReLEL requirements model. To achieve this, we proceeded in two steps: first, proposing rules for deriving the Praxeme Logical Factory Model (logical aspect) into a Web Service Description Language (WSDL) model (software aspect), classified as a Model-to-Model (M2M) transformation in the MDA approach, and then proposing rules for translating the WSDL model into a WSDL document, known as a Model-to-Text (M2T) transformation. The result obtained from the approach proposed in this paper is a WSDL file, an XML document allowing the complete description of a Web service. We used ATL to implement the M2M transformation rules and the Acceleo template engine for the M2T translation.
Article
More and more attention has been paid to web service classification, as it can improve the quality of service discovery and management in the service repository and can be widely used to locate the services developers desire. Although traditional classification methods based on supervised learning show promising results for this task, they still suffer from the following shortcomings: (i) the performance of conventional machine learning methods depends highly on the quality of manual feature engineering; (ii) some classification methods (such as CNN, RNN, etc.) are usually limited to very shallow models due to the vanishing gradient problem and cannot extract more of the features that have a great impact on the accuracy of web service classification. To overcome these challenges, a novel web service classification model named Residual Attention Graph Convolutional Network (RAGCN) is proposed. Firstly, an attention mechanism is added to the graph convolutional network, which can assign different weights to neighborhood nodes without complicated matrix operations or relying on understanding the entire graph structure. Secondly, residual learning is used to deepen the model so that more features can be extracted. Comprehensive experimental results on a real dataset show that the proposed model outperforms state-of-the-art approaches and demonstrates good interpretability for graph analysis.
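The two ingredients RAGCN combines, attention-weighted neighborhood aggregation and a residual connection, can be sketched in plain Python. This is a generic graph-attention layer in the style of GAT, not the paper's exact architecture; the weight matrix, attention vector and toy graph are invented for the example.

```python
import math

def attention_layer(h, adj, w, a):
    """One graph-attention aggregation with a residual connection.
    h: node feature vectors, adj: neighbour index lists (self included),
    w: weight matrix, a: attention vector over the pair [Wh_i || Wh_j]."""
    def matvec(m, v):
        return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

    def leaky_relu(x, slope=0.2):
        return x if x > 0 else slope * x

    wh = [matvec(w, hi) for hi in h]
    out = []
    for i, neigh in enumerate(adj):
        # Attention coefficients over the neighbourhood: softmax of scores.
        scores = [leaky_relu(sum(ak * x for ak, x in zip(a, wh[i] + wh[j])))
                  for j in neigh]
        top = max(scores)
        exps = [math.exp(s - top) for s in scores]
        alphas = [e / sum(exps) for e in exps]
        agg = [sum(al * wh[j][k] for al, j in zip(alphas, neigh))
               for k in range(len(h[i]))]
        # Residual connection: add the layer input back to its output.
        out.append([hi_k + ag_k for hi_k, ag_k in zip(h[i], agg)])
    return out

# Toy example: two mutually connected nodes, identity weights.
out = attention_layer(h=[[1.0, 0.0], [0.0, 1.0]],
                      adj=[[0, 1], [0, 1]],
                      w=[[1.0, 0.0], [0.0, 1.0]],
                      a=[1.0, 1.0, 1.0, 1.0])
```

Stacking several such layers is where the residual term matters: it lets gradients skip layers, which is how deeper models avoid the vanishing-gradient limit mentioned above.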
Thesis
IP wireless networks are invading our living spaces, and the lack of access security is a serious obstacle to the development of new services within them. In this work we propose using the Java smart card as a security module for access to these networks, as chips are used for GSM mobile telephony. To achieve this despite the limitations of these cards in processing power and storage capacity, we propose a new protocol named EAP-SSC (EAP Secured Smartcard Channel). It provides mutual authentication based on symmetric- or asymmetric-key cryptography. The diversity of authorities administering IP wireless networks requires taking a variety of applicable security policies into account. We therefore propose a platform called OpenEAPSmartcard for any Java card on the market. Its architecture is open and easy to adapt to developers' authentication scenarios. The security of the cryptographic material stored on servers is not guaranteed, because of attacks exploiting flaws and vulnerabilities in operating systems; that of access points within users' reach is even less so. Our solution is to install EAP servers, called authentication micro-servers, inside Java cards. Deploying these micro-servers raises the problem of updating them over time and space. A software architecture named TEAPM (Trusted EAP Module) is proposed. At its heart are the EAP and EAP-TLS protocols, topped by XML and HTTP to facilitate secure remote "Over The Air" administration of Java smart cards.
Chapter
This chapter identifies challenges and requirements for resource sharing to support high performance distributed Service-Oriented Computing (SOC) systems. The chapter draws attention to two popular and important design paradigms: Grid and Peer-to-Peer (P2P) computing systems, which are evolving as two practical solutions to supporting wide-area resource sharing over the Internet. As a fundamental task of resource sharing, the efficient resource discovery is playing an important role in the context of the SOC setting. The chapter presents the resource discovery in Grid and P2P environments through an overview of related systems, both historical and emerging. The chapter then discusses the exploitation of both technologies for facilitating the resource discovery within large-scale distributed computing systems in a flexible, scalable, fault-tolerant, interoperable and security fashion.
Chapter
Although the areas of Service-Oriented Computing (SOC) and Agile and Lean Software Development (LSD) have been evolving separately in the last few years, they share several commonalities. Both are intended to exploit reusability and exhibit adaptability. SOC in particular aims to facilitate the widespread and diverse use of small, loosely coupled units of functionality, called services. Such services have a decided agility advantage, because they allow for changing a service provider at runtime without affecting any of a group of diverse and possibly anonymous consumers. Moreover, they can be composed at both development-time and run-time to produce new functionalities. Automatic service discovery and selection are key aspects for composing services dynamically. Current approaches attempting to automate discovery and selection make use of only structural and functional aspects of the services, and in many situations, this does not suffice to discriminate between functionally similar but disparate services. Service behavior is difficult to specify prior to service execution and instead is better described based on experience with the execution of the service. In this chapter, the authors present a behavioral approach to service selection and runtime adaptation that, inspired by agile software development techniques, is based on behavioral queries specified as test cases. Behavior is evaluated through the analysis of execution values of functional and non-functional parameters. In addition to behavioral selection, the authors’ approach allows for real-time evaluation of non-functional quality-of-service parameters, such as response time, availability, and latency.
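The behavioural selection idea above can be sketched as follows: candidates are first filtered by test cases on their execution outputs, then ranked by a measured non-functional parameter such as response time. The candidate functions and the constant-valued measure are hypothetical stand-ins, not the chapter's implementation.

```python
def select_service(candidates, test_cases, measure):
    """Select the candidate that passes every behavioural test case,
    preferring the lowest measured response time among those that do."""
    best, best_rt = None, float("inf")
    for name, call in candidates.items():
        if all(call(inp) == expected for inp, expected in test_cases):
            rt = measure(name)            # stand-in for a timed invocation
            if rt < best_rt:
                best, best_rt = name, rt
    return best

# Two functionally similar candidates; only one behaves correctly.
candidates = {"looks_similar": lambda x: x * 2 + 1,
              "correct": lambda x: x * 2}
chosen = select_service(candidates, [(1, 2), (3, 6)],
                        measure=lambda name: 0.1)
```

The point of the behavioural queries is visible here: a purely structural match (both candidates take and return an integer) cannot discriminate between the two services, but the test cases can.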
Chapter
This chapter introduces a complete storage and retrieval architecture for a database environment for XML documents. DocBase, a prototype system based on this architecture, uses a flexible storage and indexing technique to allow highly expressive queries without the necessity of mapping documents to other database formats. DocBase is an integration of several techniques that include (i) a formal model called Heterogeneous Nested Relations (HNR), (ii) a conceptual model XER (Extensible Entity Relationship), (iii) formal query languages (Document Algebra and Calculus), (iv) a practical query language (Document SQL or DSQL), (v) a visual query formulation method with QBT (Query By Templates), and (vi) the DocBase query processing architecture. This paper focuses on the overall architecture of DocBase including implementation details, describes the details of the query-processing framework, and presents results from various performance tests. The paper summarizes experimental and usability analyses to demonstrate its feasibility as a general architecture for native as well as embedded document manipulation methods.
Chapter
Dependability assessment is an important aspect of any software system and shows the degree of trust and quality of service that is delivered by a system. Validation and verification techniques commonly employed to ensure that systems are fit for use attempt to remove all faults so that error conditions cannot occur but since it is not feasible to verify all states a system can achieve, it is not possible to completely test a system. Conversely, dependability assumes that failures may occur in a system and that mechanisms exist to mitigate any failures and thus provide a trustworthy system. This chapter discusses the different issues associated with dependability. The different techniques that can be used to assess dependability are discussed and are related to Service Orientated Architectures. A number of cases studies are used to show the practicality of the techniques used.