Publications (74) · 30.06 Total impact
Conference Paper: A Rule-Based Approach for Availability of Web Service
ABSTRACT: The sustained success of service-oriented applications relies on the capability to manage possible service failures. Substituting a failed service with an equivalent service is unavoidable when recovering an application suspended by the failure of a constituent service. In this paper, we report a rule-based approach to Web service substitution that secures the availability of services. Availability provides delivery assurance for each Web service so that Simple Object Access Protocol (SOAP) messages cannot be lost undetectably, especially in a Web service composition. The rules, written in the Semantic Web Rule Language, are a formal representation of a categorization-based scheme for identifying exchangeable Web services. This scheme not only tackles the heterogeneity of the domain ontologies used to describe Web services but also adapts itself by learning newly discovered ontology instances. A technical framework for Web service substitution using rule-based deduction is demonstrated. Experiments on service substitution based on the proposed framework achieve a best precision of 85%.
Conference Paper: Web Service Matching by Ontology Instance Categorization
ABSTRACT: Identifying similar Web services is becoming increasingly important to ensuring the success of dynamically integrated, Web-service-based applications. We propose a categorization-based scheme for matching equivalent Web services that can operate on heterogeneous domain ontologies. Given the upper ontology for services and the domain ontologies, our service matching scheme determines whether a given Web service is a possible replacement using a categorization utility called OnExCat. OnExCat categorizes ontology instances extracted from service descriptions using a probabilistic categorization measurement that incorporates the concept relationships in the upper ontology for services. In addition to tackling the heterogeneity of domain ontologies in service descriptions through categorization, our matching scheme adapts itself by enhancing the known ontologies with newly discovered ontology instances. Experiments on service matching using our scheme based on the OnExCat utility have been performed with promising results: a correct matching rate of over 85%.
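As a rough illustration of the categorization idea in this abstract, the sketch below assigns a service's ontology-instance terms to the best-overlapping known category. The plain term-overlap score and all data are illustrative stand-ins, not OnExCat's actual probabilistic measurement:

```python
# Toy sketch of categorization-based service matching: score a
# candidate service's ontology-instance terms against known
# categories and pick the best-overlapping one. Categories and
# terms here are hypothetical examples.

def categorize(instance_terms, categories):
    """Assign an ontology instance (a set of terms) to the category
    with the highest term overlap; return (category, overlap)."""
    best = max(categories.items(),
               key=lambda kv: len(instance_terms & kv[1]))
    category, terms = best
    return category, len(instance_terms & terms)

categories = {
    "payment": {"invoice", "charge", "credit", "currency"},
    "shipping": {"address", "carrier", "tracking", "weight"},
}
service = {"charge", "currency", "card"}
print(categorize(service, categories))  # ('payment', 2)
```

A real matcher would also weight terms by the concept relationships in the upper ontology rather than counting raw overlap.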
- ABSTRACT: Science gateways require the easy enabling of legacy scientific applications on computing Grids and the generation of user-friendly interfaces that hide the complexity of the Grid from the user. This paper presents the In-VIGO approach to the creation and management of science gateways. First, we discuss the virtualization of machines, networks and data to facilitate the dynamic creation of secure execution environments that meet application requirements. Then we discuss the virtualization of applications, i.e. the execution on shared resources of multiple isolated application instances with customized behavior, in the context of In-VIGO. A Virtual Application Service (VAS) architecture for automatically generating, customizing, deploying, and using virtual applications as Grid services is then described. Starting with a grammar-based description of the command-line syntax, the automated process generates the VAS description and the VAS implementation (code for application encapsulation and data binding) that is deployed and made available through a Web interface. A VAS can be customized on a per-user basis by restricting the capabilities of the original application or by adding to it features such as parameter sweeping. This is a scalable approach to the integration of scientific applications as services into Grids and can be applied to any tool with an arbitrarily complex command-line syntax. Copyright © 2006 John Wiley & Sons, Ltd.
- ABSTRACT: Much effort is being made by the IT industry towards the development of a Web Service infrastructure to enable the discovery and sharing of heterogeneous applications and data resources. The existing implementation of Web Service registries does not have constraint specification and processing capabilities to achieve intelligent service discovery. In this work, we have extended the Web Service Description Language to allow service providers to specify their service constraints, and developed a Constraint-based Web Service Broker capable of matching a service requestor's requirement specification against providers' constraints to find the desired services. This paper presents the extended Web Service Description Language, the architecture and implementation of the Broker, the constraint matching technique, and the results of a performance evaluation.
- ABSTRACT: To best exploit the potential of the grid, it is necessary to "grid-enable" legacy applications that were not originally developed to run on a grid. In this paper, a generic framework to grid-enable legacy applications is presented. In particular, this paper focuses on a general approach to model and represent an application in a way that supports the key properties of the grid-enabling framework: generality, automatic generation and integration of grid applications, plug-and-play deployment, and interoperability. Based on the model, a configuration language is developed to describe a wide range of command-line applications and their execution requirements. It is easy to use and does not require the application enabler to know the details of the grid middleware. A case study is presented to illustrate the automated grid-enabling of a legacy application in In-VIGO (in-virtual information grid organizations), a grid computing infrastructure that makes extensive use of virtualization technology.
- ABSTRACT: The Internet has become the major platform for future inter-organizational knowledge-based applications. There is a need for knowledge modeling and processing techniques to perform event management and rule processing in such a distributed environment. We present an Event-Trigger-Rule (ETR) model, which differs from the conventional ECA rule model in that events and rules can be defined and managed independently by different people/organizations at different sites. Triggers are specifications that link distributed events with potentially complex structures of distributed rules to capture semantically rich and useful knowledge. Triggers can also be specified by different people/organizations in a distributed environment. Based on the ETR model, we have implemented an ETR Server that can be installed at multiple sites over the Internet. The ETR Server provides platform independence, extensibility, processing of event histories and rule structures, dynamic rule change at run-time, and Web-based GUI tools. The ETR model and the implemented ETR Server can support various inter-organizational collaborative knowledge-based applications such as Web-based negotiation systems, supply chains, dynamic workflow management systems, Knowledge Networks, and transnational information systems.
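The separation of events, triggers, and rules described in this abstract can be sketched minimally in Python. The class, method, and rule names below are illustrative assumptions, not the ETR Server's actual API:

```python
# Minimal sketch of the Event-Trigger-Rule (ETR) pattern: events,
# rules, and the triggers that link them are registered
# independently, so different organizations could each contribute
# their own pieces.

class ETRServer:
    def __init__(self):
        self.rules = {}      # rule name -> callable(payload)
        self.triggers = {}   # event name -> list of rule names

    def define_rule(self, name, action):
        self.rules[name] = action

    def define_trigger(self, event_name, rule_names):
        # A trigger links an event to one or more rules.
        self.triggers.setdefault(event_name, []).extend(rule_names)

    def post_event(self, event_name, payload):
        # Fire every rule linked to this event; collect results.
        return [self.rules[r](payload)
                for r in self.triggers.get(event_name, [])]

server = ETRServer()
server.define_rule("notify_buyer", lambda p: f"notify buyer about {p['item']}")
server.define_trigger("price_drop", ["notify_buyer"])
print(server.post_event("price_drop", {"item": "widget"}))
```

Because rules are looked up by name only at event time, rules can be added or replaced at run-time without touching the event or trigger definitions, which is the dynamic-rule-change property the abstract mentions.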
- ABSTRACT: In recent years, there has been increasing interest in automated e-business negotiations. The automation of negotiation requires a decision model to capture the negotiation knowledge of policymakers and negotiation experts so that the decision-making process can be carried out automatically. Current research on automated e-business negotiations has focused on defining low-level tactics (or negotiation rules) so that automated negotiation systems can carry out automated negotiation processes. These low-level tactics are usually defined from a technical perspective, not from a business perspective. There is a gap between high-level business negotiation goals and low-level tactics. In this article, we distinguish the concepts of negotiation context, negotiation goals, negotiation strategy, and negotiation tactics and introduce a formal decision model to show the relations among these concepts. We show how high-level negotiation goals can be formally mapped to low-level tactics that can be used to affect the behavior of a negotiation system during the negotiation process. In business, an organization faces different negotiation situations (or contexts) and determines different sets of goals for different negotiation contexts. In our decision model, a business policymaker sets negotiation goals from different perspectives, which are called goal dimensions. A negotiation policy is a functional mapping from a negotiation context to quantitative measures (or goal values) for the goal dimensions that express how competitively the policymaker wants to pursue that set of goals. A negotiation expert who has the experience and expertise to conduct negotiations would define the negotiation strategies needed for reaching the negotiation goals. Formally, a negotiation strategy is a functional mapping from a set of goal values to a set of decision-action rules that implement negotiation tactics. The selected decision-action rules can then be used to control the execution of an automated negotiation system, which conducts a negotiation on behalf of a business organization.
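The two-level mapping in this decision model (a policy maps a context to goal values; a strategy maps goal values to tactics) can be sketched as below. All contexts, goal dimensions, thresholds, and tactic names are invented examples, not the paper's actual rules:

```python
# Sketch of the decision model's two functional mappings:
#   policy:   negotiation context -> goal values
#   strategy: goal values -> decision-action rules (tactics)

def policy(context):
    """Map a negotiation context to quantitative goal values
    on hypothetical goal dimensions."""
    if context == "repeat_customer":
        return {"profit": 0.3, "relationship": 0.9}
    return {"profit": 0.8, "relationship": 0.4}

def strategy(goals):
    """Map goal values to the tactics an automated negotiation
    system would execute (threshold rules are illustrative)."""
    tactics = []
    if goals["profit"] > 0.5:
        tactics.append("concede_slowly")
    else:
        tactics.append("concede_quickly")
    if goals["relationship"] > 0.5:
        tactics.append("offer_loyalty_discount")
    return tactics

print(strategy(policy("repeat_customer")))
```

Composing the two mappings lets a policymaker adjust high-level goals per context while the negotiation expert's strategy translates them into concrete run-time behavior.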
- ABSTRACT: As the global marketplace becomes more and more competitive, business organisations often need to team up and operate as a virtual enterprise to utilise the best of their resources for achieving their common business goals. As the business environment of a virtual enterprise is highly dynamic, it is necessary to develop a workflow management technology that is capable of handling dynamic workflows across enterprise boundaries. This paper describes a Dynamic Workflow Model (DWM) and a dynamic workflow management system (DynaFlow) for modelling and controlling the execution of inter-organisational business processes. DWM enables the specification of dynamic properties associated with a business process model. It extends the underlying model of the WfMC's WPDL by adding connectors, events, triggers and rules as its modelling constructs. It also encapsulates activity definitions and allows web service (or e-service) requests to be included as a part of the activity specification. Using DWM as the underlying model, DynaFlow makes use of an Event-Trigger-Rule (ETR) server to trigger rules during the enactment of a workflow process to enforce business rules and policies and/or to modify the process model at run-time. A constraint-based, dynamic service binding mechanism is used to dynamically bind web service requests to web services that satisfy the requirements of the requests.
- ABSTRACT: The In-VIGO approach to Grid computing relies on the dynamic establishment of virtual grids on which application services are instantiated. In-VIGO was conceived to enable computational science to take place In Virtual Information Grid Organizations. Having had its first version deployed in July 2003, In-VIGO middleware is currently used by scientists from various disciplines, a noteworthy example being the computational nanoelectronics research community (http://www.nanohub.org). All components of an In-VIGO-generated virtual grid - machines, networks, applications and data - are themselves virtual, and services are provided for their dynamic creation. This article reviews the In-VIGO approach to Grid computing and overviews the associated middleware techniques and architectures for virtualizing Grid components, using services for the creation of virtual grids, and automatically Grid-enabling unmodified applications. The In-VIGO approaches to the implementation of virtual networks and virtual application services are discussed as examples of Grid-motivated approaches to resource virtualization and Web-service creation.
- ABSTRACT: This paper describes a scalable approach to enabling legacy scientific applications on computing grids using a service-oriented architecture. In the context of this paper, grid-enabling means turning an existing application, installed on a grid resource, into a service and generating the application-specific user interfaces to use that application through a Web portal. Scalability is achieved by providing a common abstraction for a category of applications and providing a "generic" application service to wrap those applications as services. The focus of this paper's approach is on grid-enabling "command-oriented" scientific applications. The novel aspect of the approach is that the entire process, from turning an application into a service to generating the user interface for that application, is done automatically, without requiring coding or grid-system downtime. Portlet technology is used to dynamically generate application-specific interfaces. Further, the approach makes it possible to customize the applications for different user groups by simplifying, restricting or composing the functionalities of applications. The approach is useful for building grid portals on which a large number of applications need to be dynamically enabled.
- ABSTRACT: In the business world, the exchange of receipts for business transactions is a common practice. These receipts serve as evidence that the transactions took place, in case of future repudiation and disputes. Likewise, it is critical in e-commerce applications to have a third-party security service which generates, distributes, validates, and maintains information and evidence of an electronic transaction. A number of non-repudiation protocols have been proposed and evaluated based on established evaluation criteria. However, in the context of collaborative e-commerce, there are additional evaluation criteria to be considered: e.g., the recipient's role in the protocol execution, the degree of trust in a third party, and the dependency on the existence of a third party for dispute settlement. In this paper, we identify a number of security requirements in collaborative e-commerce, and propose a new non-repudiation message transfer protocol, which makes use of the techniques of message digest, message encryption, double-encrypted keys, and dual signatures. The protocol satisfies the additional criteria better than the existing protocols. The implementation of the protocol on the Web services platform is also presented.
- ABSTRACT: This poster briefly introduces two resource-virtualization techniques needed for the creation of virtual(ized) grids: virtual networks and virtual application services. The former provides bidirectional network connectivity even in the presence of firewalls, network address translation gateways and proxies by creating virtual routers and virtual IP space. The latter allows automated creation and deployment of legacy applications into grids by generating a virtual application service that allows the execution, on shared resources, of multiple isolated application instances with customized behavior.
- ABSTRACT: The current Web technology is suitable neither for representing knowledge nor for sharing it among organizations over the Web. There is a rapidly increasing need for exchanging and linking knowledge over the Web, especially when several sellers and buyers come together on the Web to form a virtual marketplace. Virtual marketplaces are increasingly being required to become more intelligent and active, thus leading to an active virtual marketplace concept. This paper explains an infrastructure called the knowledge network that enables sharing of knowledge over the Web and thus effectively supports the formation of virtual marketplaces on the Web. The concept of an active virtual marketplace can be realized using this infrastructure by allowing buyers and sellers to effectively specify their knowledge in the form of events, triggers, and rules. The knowledge network can actively distribute and process these knowledge elements to help buyers and sellers easily find each other. An example active virtual marketplace application has been developed using the knowledge network.
- ABSTRACT: This chapter presents the design and implementation of an Event-Trigger-Rule-based electronic supply-chain management system (ESCM). The ESCM is constructed by a network of Knowledge Web Servers, each of which consists of a Web server, an Event Manager, an Event-Trigger-Rule (ETR) Server, a Knowledge Profile Manager, a Persistent Object Manager, a Metadata Manager, a Negotiation Server, and a Cost-Benefit Evaluation Server. Together, they manage the activities and interactions among Manufacturers, Distributors and Retailers. ESCM offers a number of features. First and foremost is the flexibility offered to business entities in defining their own rules according to their own business strategies. Second, since the rules that control the business activities are installed and processed by the multiple copies of the ETR server installed at business entities' individual sites, their privacy and security are safeguarded. Third, ESCM's event, event filtering and event notification mechanisms keep both Buyers and Suppliers better informed with more timely information about business events so that they or their software systems can take the proper actions in different business processes.
- ABSTRACT: Existing grid computing technologies take advantage of underused computing capacity to solve business problems and provide IT-level infrastructure to support business applications. A business grid's ultimate goal, however, is to apply the utility model of grid computing to business applications; that is, provide support services for charging users on a pay-per-use basis, much as a utility company charges for electricity. That way, the vendor takes the responsibility for application maintenance and upgrade. Thus, a business grid provides a virtualized infrastructure to support the transparent use and sharing of business functions on demand.
- ABSTRACT: Much effort is being made by the IT industry towards the establishment of a Web services infrastructure and the refinement of its component technologies to enable the sharing of heterogeneous application resources. Traditional roles of the service provider, service requestor and service broker and their interactions are now being improved upon to enable more effective services. The implementation of the Web service broker is currently limited to being an interface to the service repository for service registration, browsing and/or programmatic access. In this work, we have extended the functionality of the Web services broker to include constraint specification and processing, which enables the broker to find a good match between a service provider's capabilities and a service requestor's requirements. This paper presents the extension made to the Web Services Description Language to include constraint specifications in service descriptions and requests, the architecture of a constraint-based broker, the constraint matching technique, some implementation details, and preliminary evaluation results.
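The broker's constraint matching can be illustrated with a toy check of a requestor's requirements against a provider's advertised capabilities. The attribute/operator representation below is a simplification assumed for illustration, not the extended WSDL syntax the paper describes:

```python
# Toy constraint matcher: a requirement is (operator, value) per
# attribute; a provider satisfies a request if every required
# attribute is advertised and its value passes the comparison.
import operator

OPS = {">=": operator.ge, "<=": operator.le, "==": operator.eq}

def satisfies(capabilities, requirements):
    """Return True if the provider's capabilities meet every
    requestor requirement."""
    for attr, (op, required) in requirements.items():
        if attr not in capabilities:
            return False
        if not OPS[op](capabilities[attr], required):
            return False
    return True

provider = {"max_file_mb": 100, "cost_per_call": 0.05}
request = {"max_file_mb": (">=", 50), "cost_per_call": ("<=", 0.10)}
print(satisfies(provider, request))  # True
```

A real broker would evaluate such predicates in both directions, since providers may also place constraints on acceptable requestors.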
Conference Paper: Adaptive Grid Service Flow Management: Framework and Model
ABSTRACT: Grid computing provides the basic software infrastructure for integrating geographically distributed resources and services through standardized grid services. One of the key challenges in enabling the broader use of grid services beyond the domain of scientific computing is the ability to perform complex tasks that require modeling and coordinating the enactment of a number of distributed grid services. Workflow technology is a good candidate for supporting grid service flow. However, traditional workflow is static and thus unable to exploit the dynamic information available in the grid or respond to the dynamic nature of the grid. In this paper, we present a framework that provides adaptive management of grid service flows. The framework is based on an adaptive grid service flow model and is supported by an event-trigger-rule (ETR) technology that triggers rules in a distributed fashion to adapt a grid service flow to the dynamic grid environment and the changing requirements of a grid application.
- ABSTRACT: The coming generation of Internet applications promises to incorporate a distinctly different view of software, one based on services. Services computing is the evolution of Internet computing toward a service-oriented architecture. By service-oriented, we mean that businesses will purchase functionality in chunks. Rather than buying software for permanent in-house installation, companies will buy services as needed. A services model removes the burden of updates and patches from the IT department, returning such work to its rightful owners: the vendors that sell the software. To support such a scenario, an architecture must embrace a new technology suite that includes Web services and a service-oriented architecture for grid and utility computing, and autonomic computing.
- ABSTRACT: Web services technology is emerging as a promising infrastructure to support loosely coupled, Internet-based applications that are distributed, heterogeneous, and dynamic. It provides a standards-based, process-centric framework for achieving the sharing of distributed heterogeneous applications. While Web services technology provides a promising foundation for developing distributed applications for e-business, additional features are required to make this paradigm truly useful in the real world. In particular, interactions among business organizations need to follow the policies, regulations, security and other business rules of the organizations. An effective way to control, restrict and enforce business rules in the use of Web services is to integrate business event and rule management concepts and techniques into the Web services model. In this paper, we focus on incorporating the business event and rule-management concepts into the Web services model at the service provider side. Based on a code-generation approach, we have developed techniques and implemented tools to generate Web service "wrappers" and other objects required to integrate an Event-Trigger-Rule (ETR) technology with the Web services technology.