Stanley Y. W. Su

University of Florida, Gainesville, Florida, United States

Publications (204) · 59.13 Total Impact Points

  • Source
    ABSTRACT: All nations are facing many global problems, the solutions to which require efficient and effective sharing of distributed, heterogeneous data, knowledge and application systems. We present a way to capture organizations' multi-faceted knowledge by three popular rule types and rule structures, and wrap them as Web services for their registration, discovery and invocation. Distributed data associated with events defined by these organizations are transmitted through a distributed event infrastructure to those sites that contain applicable rules. The processing of heterogeneous rules and application operations specified in rules may add to or modify the event data to produce a dynamic event data set that can be used to support collaborating organizations' decision-making and problem solving. A peer-to-peer architecture of a distributed system and the functions of its components are described. Issues and approaches related to event data evolution and distributed rule and trigger processing are also discussed.
    IEEE International Conference on Information Reuse and Integration (IRI 2005); 09/2005
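The event-to-rule flow described in the abstract above can be sketched minimally as follows. All names here (`ETRServer`, `post_event`, the outbreak event) are illustrative assumptions, not the system's actual API; the point is only to show how applicable rules can add to or modify event data to produce a dynamic event data set:

```python
# Minimal sketch of event-trigger-rule dispatch. An event carries a
# mutable data set; each applicable rule may add to or modify it.
from typing import Callable, Dict, List

class ETRServer:
    def __init__(self) -> None:
        # event name -> rules triggered by that event
        self.triggers: Dict[str, List[Callable[[dict], dict]]] = {}

    def subscribe(self, event_name: str, rule: Callable[[dict], dict]) -> None:
        self.triggers.setdefault(event_name, []).append(rule)

    def post_event(self, event_name: str, data: dict) -> dict:
        # Rules run in registration order; each may enrich the event data,
        # yielding the "dynamic event data set" described in the abstract.
        for rule in self.triggers.get(event_name, []):
            data = rule(data)
        return data

server = ETRServer()
server.subscribe(
    "outbreak_reported",
    lambda d: {**d, "alert_level": "high"} if d["cases"] > 10 else d,
)
result = server.post_event("outbreak_reported", {"cases": 42})
```

In the distributed setting the abstract describes, each collaborating site would run its own rule server and events would be routed only to sites holding applicable rules.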
  • Minsoo Lee, Stanley Y. W. Su, Herman Lam
    ABSTRACT: Current Web technology is suitable neither for representing knowledge nor for sharing it among organizations over the Web. There is a rapidly increasing need for exchanging and linking knowledge over the Web, especially when several sellers and buyers come together on the Web to form a virtual marketplace. Virtual marketplaces are increasingly required to become more intelligent and active, leading to the concept of an active virtual marketplace. This paper describes an infrastructure called the knowledge network that enables sharing of knowledge over the Web and thus effectively supports the formation of virtual marketplaces. The concept of an active virtual marketplace can be realized with this infrastructure by allowing buyers and sellers to specify their knowledge in the form of events, triggers, and rules. The knowledge network can actively distribute and process these knowledge elements to help buyers and sellers easily find each other. An example active virtual marketplace application has been developed using the knowledge network.
    J. Data Semantics. 01/2005; 2:113-135.
  • Seokwon Yang, Stanley Y. W. Su, Herman Lam
    ABSTRACT: In the business world, the exchange of receipts for business transactions is a common practice. These receipts serve as evidence that the transactions did take place, in case of future repudiation and disputes. Likewise, it is critical in e-commerce applications to have a third-party security service that generates, distributes, validates, and maintains information and evidence of an electronic transaction. Quite a number of non-repudiation protocols have been proposed and evaluated against established evaluation criteria. However, in the context of collaborative e-commerce, there are additional criteria to consider: e.g., the recipient's role in the protocol execution, the degree of trust in a third party, and the dependency on the existence of a third party for dispute settlement. In this paper, we identify a number of security requirements in collaborative e-commerce and propose a new non-repudiation message transfer protocol, which makes use of message digests, message encryption, double-encrypted keys, and dual signatures. The protocol satisfies the additional criteria better than existing protocols. An implementation of the protocol on a Web services platform is also presented.
    International Journal of Business Process Integration and Management. 01/2005; 1(1).
  • Source
    Qianhui Althea Liang, Stanley Y. W. Su
    ABSTRACT: This paper formalizes the Web service composition problem as a search problem in an AND/OR graph and presents an algorithm for searching the graph to identify composite services that satisfy a Web service request. Given a service request that can only be satisfied by a composition of Web services, we identify the service categories relevant to the request and dynamically construct an AND/OR graph to capture the input/output dependencies among the Web services of these categories. The graph is then modified based on the information provided in the service request. The search algorithm is used to find a minimal and complete composite service template in the modified AND/OR graph that satisfies the service request. The algorithm can be applied repeatedly to the graph to search for alternative templates until the result is approved by the service requester. We have evaluated the algorithm both analytically and experimentally, and the experimental results are presented.
    Int. J. Web Service Res. 01/2005; 2:48-67.
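The input/output dependencies at the heart of the composition problem above can be illustrated with a toy sketch. Note this uses greedy forward chaining as a simplified stand-in for the paper's AND/OR graph search, and the service names are invented: each service needs all of its inputs (the AND branches), while alternative services producing the same output correspond to OR branches:

```python
# Sketch of input/output-driven service composition (simplified stand-in
# for an AND/OR graph search; service names are illustrative).
def compose(services, have, want):
    """services: {name: (set_of_inputs, set_of_outputs)}.
    Greedy forward chaining: repeatedly fire any service whose inputs
    are all satisfied, until the wanted outputs are available."""
    have = set(have)
    plan = []
    changed = True
    while changed and not set(want) <= have:
        changed = False
        for name, (ins, outs) in services.items():
            # Fire a service once, only if its inputs are met and it
            # contributes at least one new output.
            if name not in plan and ins <= have and not outs <= have:
                plan.append(name)
                have |= outs
                changed = True
    return plan if set(want) <= have else None

services = {
    "geocode": ({"address"}, {"lat", "lon"}),
    "weather": ({"lat", "lon"}, {"forecast"}),
}
plan = compose(services, have={"address"}, want={"forecast"})
```

The paper's algorithm additionally guarantees minimality and supports enumerating alternative templates, which this greedy sketch does not attempt.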
  • Source
    ABSTRACT: Simulations are increasingly used for instruction and training in many application areas, such as commercial and military aviation, battlefield management, building construction, product manufacturing, medical education and others. However, most simulation systems are monolithic, ad hoc and non-reusable. In this work, we apply learning object and e-learning service technologies to modularize an existing Web-based simulation system called the Virtual Anesthesia Machine (VAM). Instructional materials associated with the components of the simulation system are encapsulated as reusable Atomic Learning Objects, each of which consists of content items, practice items, assessment items, meta-information and constraints. Instructional materials associated with the entire simulation system are modeled as a Composite Learning Object having a structure of activities, which is used by a Learning Process Execution Engine to enact the process of delivering content, instructing learners to use the simulation system to practice what they learn, and performing assessment to evaluate how well learners have learned the functions and operations of the simulation system and its components. An event-trigger-rule server is used in an event-driven, rule-based execution of a learning process to make the learning process active, adaptive, customizable and flexible.
    Web Information Systems Engineering - WISE 2005 Workshops, WISE 2005 International Workshops, New York, NY, USA, November 20-22, 2005, Proceedings; 01/2005
  •
    ABSTRACT: This chapter presents the design and implementation of an Event-Trigger-Rule-based electronic supply-chain management system (ESCM). The ESCM is constructed by a network of Knowledge Web Servers, each of which consists of a Web server, an Event Manager, an Event-Trigger-Rule (ETR) Server, a Knowledge Profile Manager, a Persistent Object Manager, a Metadata Manager, a Negotiation Server, and a Cost-Benefit Evaluation Server. Together, they manage the activities and interactions among Manufacturers, Distributors and Retailers. ESCM offers a number of features. First and foremost is the flexibility offered to business entities in defining their own rules according to their own business strategies. Second, since the rules that control the business activities are installed and processed by the multiple copies of the ETR server installed at business entities' individual sites, their privacy and security are safeguarded. Third, ESCM's event, event filtering and event notification mechanisms keep both Buyers and Suppliers better informed with more timely information about business events so that they or their software systems can take the proper actions in different business processes.
    12/2004: pages 299-322;
  • Gilliean Lee, Stanley Y. W. Su
    ABSTRACT: In this work, distributed and sharable learning resources are modeled by two types of Learning Objects (LOs): Atomic Learning Object and Composite Learning Object. LOs are uniformly published as Web-services in a constraint-based Web-service registry and are made sharable and reusable. This paper presents the learning object models for the specification of these two types of LOs and an extended Web-service infrastructure, which provides a standard framework for the registration, discovery, binding and invocation of these objects. An Event-Trigger-Rule Server is integrated with a Learning Process Execution Engine to make Composite Learning Objects active, flexible, customizable and adaptive.
    12/2004: pages 247-280;
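The two learning-object types above can be sketched as plain data structures. Field names are paraphrased from the abstracts (content, practice and assessment items for atomic objects; an activity tree for composite objects) and are not any standard's actual schema:

```python
# Illustrative data-structure sketch of Atomic and Composite Learning Objects.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class AtomicLearningObject:
    name: str
    content_items: List[str] = field(default_factory=list)
    practice_items: List[str] = field(default_factory=list)
    assessment_items: List[str] = field(default_factory=list)

@dataclass
class CompositeLearningObject:
    # An activity tree: children are atomic LOs or nested composites.
    name: str
    children: List[Union[AtomicLearningObject, "CompositeLearningObject"]] = \
        field(default_factory=list)

    def atomic_count(self) -> int:
        # Recursively count the atomic leaves of the activity tree.
        return sum(1 if isinstance(c, AtomicLearningObject) else c.atomic_count()
                   for c in self.children)

course = CompositeLearningObject("Anesthesia 101", [
    AtomicLearningObject("Vaporizer", content_items=["intro.html"]),
    CompositeLearningObject("Breathing circuit", [AtomicLearningObject("Valves")]),
])
count = course.atomic_count()
```

A Learning Process Execution Engine would walk such a tree to deliver content, practice and assessment in order, with event-trigger-rules customizing the traversal.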
  • Yu Long, H. Lam, S.Y.W. Su
    ABSTRACT: Grid computing provides the basic software infrastructure for integrating geographically distributed resources and services through standardized grid services. One of the key challenges to enable the broader use of grid services beyond the domain of scientific computing is the ability to perform complex tasks that require the modeling and coordination of the enactment of a number of distributed grid services. Workflow technology is a good candidate for supporting grid service flow. However, traditional workflow is static, thus unable to exploit the dynamic information available in the grid and respond to the dynamic nature of the grid. In this paper, we present an adaptive framework that provides adaptive management of grid service flows. The framework is based on an adaptive grid service flow model and is supported by an event-trigger-rule (ETR) technology that will be used to trigger rules in a distributed fashion to adapt a grid service flow to the dynamic grid environment and the changing requirements of a grid application.
    IEEE International Conference on Web Services (ICWS 2004); 08/2004
  • S. Degwekar, S.Y.W. Su, H. Lam
    ABSTRACT: Much effort is being made by the IT industry towards the establishment of a Web services infrastructure and the refinement of its component technologies to enable the sharing of heterogeneous application resources. Traditional roles of the service provider, service requestor and service broker and their interactions are now being improved upon to enable more effective services. The implementation of the Web service broker is currently limited to being an interface to the service repository for service registration, browsing and/or programmatic access. In this work, we have extended the functionality of the Web services broker to include constraint specification and processing, which enables the broker to find a good match between a service provider's capabilities and a service requestor's requirements. This paper presents the extension made to the Web Services Description Language to include constraint specifications in service descriptions and requests, the architecture of a constraint-based broker, the constraint matching technique, some implementation details, and preliminary evaluation results.
    IEEE International Conference on Web Services (ICWS 2004); 08/2004
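The broker's core task described above, matching a provider's advertised capabilities against a requestor's constraints, can be illustrated with a small sketch. The attribute names, predicates and providers are invented for illustration and are not the paper's WSDL constraint syntax:

```python
# Illustrative constraint matching between a requestor's requirements
# and providers' advertised capabilities.
def satisfies(capability, requirement):
    """requirement maps an attribute to a predicate; a capability matches
    when every required attribute exists and passes its predicate."""
    return all(attr in capability and pred(capability[attr])
               for attr, pred in requirement.items())

providers = {
    "ShipFast":  {"max_weight_kg": 50, "delivery_days": 2, "price": 30},
    "ShipCheap": {"max_weight_kg": 20, "delivery_days": 7, "price": 10},
}
requirement = {
    "max_weight_kg": lambda v: v >= 40,   # must handle 40 kg packages
    "delivery_days": lambda v: v <= 3,    # deliver within three days
}
matches = [name for name, cap in providers.items()
           if satisfies(cap, requirement)]
```

In the paper's architecture, such constraints would be expressed in an extended WSDL and evaluated by the broker itself rather than by the requestor.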
  • Source
    Q. Liang, S.Y.W. Su, H. Li, J.-Y. Chung
    ABSTRACT: Web services technology has been making steady progress since its initial emergence at the beginning of this century. Since multimedia data have become ubiquitous on the Internet, it is not surprising that multimedia Web services have been receiving attention from the Web services community. On the Web services platform, UDDI is the current de facto service discovery approach. However, researchers have noticed that the UDDI business model has not really achieved its designated goal. We have proposed an approach that complements UDDI with WSIL in Web services discovery. The idea behind Unified Web Service Discovery (UWSD) is to use both a brokering-based approach and a trust-based approach in Web services discovery. Further, UWSD is designed to handle multimedia service discovery with specific QoS considerations. The services discovered by UWSD are separated into two groups. The first group contains a relatively limited number of services that are trustworthy and guaranteed. The second group contains a large number of services whose content is not guaranteed to be trustworthy. A markup language is also designed to facilitate the discovery process. We believe that the UWSD approach can better meet the current demand for multimedia Web services discovery.
    Fifth International Symposium on Multimedia Software Engineering, 2003; 01/2004
  • Karthik Nagarajan, Herman Lam, Stanley Y. W. Su
    Int. J. Web Service Res. 01/2004; 1:41-57.
  • Source
    ABSTRACT: In the post-genomic era, biologists interested in systems biology often need to import data from public databases and construct their own system-specific or subject-oriented databases to support complex analysis and knowledge discovery. To facilitate analysis and data processing, customized and centralized databases are often created by extracting and integrating heterogeneous data retrieved from public databases. A generalized methodology for accessing, extracting, transforming and integrating the heterogeneous data is needed. This paper presents a new data integration approach named JXP4BIGI (Java XML Page for Biological Information Gathering and Integration). The approach provides a system-independent framework that generalizes and streamlines the steps of accessing, extracting, transforming and integrating data retrieved from heterogeneous sources to build a customized data warehouse. It allows the data integrator of a biological database to define the desired bio-entities in XML templates (or Java XML pages) and to use embedded extended SQL statements to extract structured, semi-structured and unstructured data from public databases. By running the templates in the JXP4BIGI framework and using a number of generalized wrappers, the required data can be efficiently extracted and integrated to construct the bio-entities in XML format without having to hard-code the extraction logic for different data sources. The constructed XML bio-entities can then be imported into either a relational database system or a native XML database system to build a biological data warehouse.
    JXP4BIGI has been integrated and tested in conjunction with the IKBAR system (http://www.ikbar.org/) in two integration efforts: to collect and integrate data for about 200 human genes related to cell death from HUGO, Ensembl, and SWISS-PROT (Bairoch and Apweiler, 2000), and for about 700 Drosophila genes from FlyBase (FlyBase Consortium, 2002). The integrated data have been used in comparative genomic analysis of x-ray-induced cell death. As explained later, JXP4BIGI is a middleware framework to be integrated with biological database applications and cannot run as stand-alone software for end users. For demonstration purposes, a demonstration version is accessible at http://www.ikbar.org/jxp4bigi/demo.html.
    Bioinformatics 01/2004; 19(18):2351-8. · 5.32 Impact Factor
  • Source
    ABSTRACT: With the popularity of Web services technology, more and more software systems' functionalities become available by being published and registered as Web services. Registered Web services need to be dynamically combined to form composite services when individual simple services fail to meet service requestors' complex service needs. In this article, we propose a semi-automatic approach to composite Web service discovery, description and invocation. We present an intelligent registry with constraint-matching capabilities to support composite service discovery and description. It provides a user interface for interactively composing a service request, then uses a semi-automatic mechanism and a search algorithm to construct a composite service template that satisfies the request. The operations of the template are subsequently bound to registered service operations by constraint matching. The resulting composite service is specified in the Web Services Flow Language. A composite service processor is designed to execute composite services by invoking the component service operations of various service providers.
    Int. J. Web Service Res. 01/2004; 1:64-89.
  • Gilliean Lee, Xu Zhang, Stanley Y. W. Su
    Proceedings of the 7th IASTED International Conference on Computers and Advanced Technology in Education, August 16-18, 2004, Kauai, Hawaii, USA; 01/2004
  • Source
    ABSTRACT: This paper discusses the detection and management of a soybean rust outbreak in the context of agricultural homeland security. An Event-Trigger-Rule system is used for event registration, filtering and notification, and for process coordination and enforcement of agencies' policies, constraints, regulations and data integrity/security/privacy. A 'Response and Action Plan' for combating the disease proposed by one of the 12 member states of the Southern Plant Diagnostic Network is used in a prototype implementation to demonstrate the utility of the system.
    01/2004;
  • Source
    Nicky Joshi, Kushal Thakore, Stanley Y. W. Su
    ABSTRACT: This paper presents the design and implementation of an Event-Trigger-Rule-based auction system called IntelliBid. IntelliBid consists of a network of Knowledge Web Servers, each comprising a Web server, an Event-Trigger-Rule (ETR) Server, an Event Engine, a Knowledge Profile Manager, and Bid Servers and their proxies. Together, they provide auction-related services to the creator of an auction site and to the bidders and suppliers of products. IntelliBid offers a number of desirable features. First and foremost is the flexibility offered to bidders to define their own rules to control their bids in an automatic bidding process, which frees bidders from having to be on-line to place bids. By using different rules, bidders can apply different bidding strategies. Second, it furnishes valuable statistical information about past auctions to both suppliers (or sellers) and bidders. The information can assist a bidder in bidding and a seller in setting a reasonable base price and/or minimum incremental price. Third, since the rules that control automatic bidding are installed and processed by the ETR servers at bidders' individual sites, bidders' privacy and security are safeguarded. The statistical information released by IntelliBid only depicts the trend of the bidding prices of a product; the information about bidders is kept completely secret, thus safeguarding their privacy. Fourth, IntelliBid's event, event-filtering and event-notification mechanisms keep both bidders and suppliers informed of auction events in a timely manner so that they or their software systems can take the proper actions in the auction process. Fifth, any registered user of IntelliBid, bidder or supplier, can monitor the bids placed on any product being auctioned in IntelliBid. Sixth, IntelliBid allows bidders to do both on-line (or manual) bidding and automatic bidding.
    It also allows a bidder to participate in several auctions at the same time, in both manual and automated modes. The bidding of a product can depend on the result of the bidding of another product. Last, but not least, IntelliBid allows a person or organization to play both the role of bidder and the role of supplier simultaneously; the Knowledge Profile Manager keeps a user's bidder information and supplier information separate. Moreover, IntelliBid's architecture uses a parallel event management system for event registration and notification. This paper also reports the results of a performance study on the implications of using such a parallel system to achieve scalability.
    World Wide Web 01/2004; 7:181-210. · 1.20 Impact Factor
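A bidder-defined rule of the kind IntelliBid supports can be sketched as an event-condition-action triple: the event is a new high bid, the condition checks the bidder's ceiling, and the action places a counter-bid. The function names, ceiling and increment values below are illustrative only:

```python
# Sketch of a bidder-defined automatic bidding rule.
def make_auto_bid_rule(ceiling, increment):
    """Returns a rule fired on each 'new high bid' event."""
    def rule(current_high, my_last_bid):
        # Condition: someone else leads and we can stay within the ceiling.
        next_bid = current_high + increment
        if my_last_bid < current_high and next_bid <= ceiling:
            return next_bid   # action: place a counter-bid
        return None           # otherwise stay silent
    return rule

rule = make_auto_bid_rule(ceiling=100, increment=5)
bid = rule(current_high=90, my_last_bid=80)      # counters with 95
no_bid = rule(current_high=98, my_last_bid=80)   # 103 would exceed ceiling
```

Because such rules run on the ETR server at the bidder's own site, the ceiling and strategy never leave the bidder's machine, which is the privacy property the abstract emphasizes.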
  • Seokwon Yang, S.Y.W. Su, H. Lam
    ABSTRACT: In the business world, the exchange of signatures or receipts is a common practice, in case of future dispute. Likewise, it is critical in e-commerce applications to have a security service that generates, distributes, validates, and maintains the evidence of an electronic transaction. Quite a number of non-repudiation protocols have been proposed for distributed systems and evaluated against established evaluation criteria. However, in the context of e-commerce, there are additional criteria to consider: fairness to both the message sender and the message receiver with respect to their control over the completion of a transaction, the degree of trust in a third party, and the dependency on the continued existence of a third party for dispute settlement on a committed transaction. We identify the set of requirements for a message transfer protocol in e-commerce and propose a new non-repudiation message transfer protocol that meets these additional criteria. Our protocol protects the confidentiality of message contents so that no unauthorized intermediary is able to see them. Moreover, the protocol is superior to other protocols in that the continued existence of the third-party authority is not needed beyond the completion of a message transfer. Furthermore, with respect to control over the commitment of a transaction, our protocol is fair to both the message sender and the receiver.
    IEEE International Conference on E-Commerce (CEC 2003); 07/2003
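The evidence-generation idea underlying non-repudiation (a digest of the message, signed independently by each party) can be shown with a toy sketch. This is not the paper's actual protocol: `hmac` stands in for real public-key signatures, and the keys and message are invented:

```python
# Toy illustration of non-repudiation evidence: hash the message, then
# have each party sign the digest. HMAC is a stand-in for an RSA/DSA
# signature primitive; a real protocol would use asymmetric keys so a
# judge can verify evidence without the signer's secret.
import hashlib
import hmac

def digest(message: bytes) -> bytes:
    return hashlib.sha256(message).digest()

def sign(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

message = b"order #123: 10 units"
d = digest(message)
sender_evidence = sign(b"sender-key", d)      # non-repudiation of origin
receiver_evidence = sign(b"receiver-key", d)  # non-repudiation of receipt

# Anyone holding the message can recompute the digest and check that
# a piece of evidence matches it.
valid = hmac.compare_digest(sender_evidence, sign(b"sender-key", digest(message)))
```

The paper's contribution lies in how such evidence is exchanged fairly, with message encryption, double-encrypted keys and dual signatures, so that neither party can abort with an advantage.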
  • Karthik Nagarajan, Herman Lam, Stanley Y. W. Su
    Proceedings of the International Conference on Web Services, ICWS '03, June 23 - 26, 2003, Las Vegas, Nevada, USA; 01/2003
  • Source
    Youzhong Liu, Fahong Yu, Stanley Y.W. Su, Herman Lam
    ABSTRACT: Business organizations are often faced with decision situations in which the costs and benefits of some competing business specifications such as business offers, product specifications, or negotiation proposals need to be evaluated in order to select the best or desirable ones. In e-business, there is a need to automate the cost–benefit evaluation process to support decision making. This paper presents a general-purpose Cost–Benefit Evaluation Server (CBES) and its underlying Cost–Benefit Decision Model (CBDM), which models benefits in terms of costs and logical scoring and aggregation of preferences associated with products and services. The Server provides build-time tools for users to specify preference and cost information and a run-time engine to perform cost–benefit evaluations. A business scenario involving supplier selection and automated negotiation is given to illustrate the application of the Server and its four evaluation modes.
    Decision Support Systems. 01/2003;
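The core of a cost-benefit evaluation like CBDM's can be sketched as follows: elementary preference scores in [0, 1] are aggregated by weights, and the aggregate is divided by cost to rank competing offers. The weighted arithmetic mean here is only one simple instance of logical scoring of preferences, and the attributes, weights and offers are invented for illustration:

```python
# Minimal sketch of cost-benefit scoring for competing business offers.
def global_score(scores, weights):
    """Weighted arithmetic mean of elementary preference scores in [0, 1]."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

offers = {
    # offer: ([quality_score, delivery_score], cost)
    "A": ([0.9, 0.6], 120.0),
    "B": ([0.7, 0.8], 100.0),
}
weights = [2, 1]  # quality matters twice as much as delivery

# Rank offers by benefit per unit cost, highest first.
ranked = sorted(offers,
                key=lambda o: global_score(offers[o][0], weights) / offers[o][1],
                reverse=True)
```

Here offer A has the higher raw preference score (0.8 vs. about 0.73), but B wins on benefit per unit cost, which is the kind of trade-off the server's evaluation modes are designed to surface.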
  • Source
    Stanley Y. W. Su, Gilliean Lee
    ABSTRACT: This paper describes an on-going effort to investigate problems and approaches for achieving Web-service-based, dynamic and collaborative e-learning. In this work, a Learning Content Definition Model is used to model distributed and sharable learning resources as content objects. Distributed and sharable software systems/components for supporting e-learning are modeled as software objects. Both types of objects are uniformly published as Web-services in a constraint-based Web-service registry and made sharable and reusable. An extended Web-service infrastructure provides a standard framework for the modeling, registration, discovery, binding and invocation of these objects. In this work, we also introduce a Learning Process Definition Model and a Learning Process Execution Engine for specifying and executing learning process models, which represent instructional modules in the forms of activity trees. An Event-Trigger-Rule Server is integrated with the Learning Process Execution Engine to make learning process models active, flexible, customizable and adaptable. It is also used to facilitate the interaction and coordination among learners, administrators, authors, and other personnel involved in collaborative e-learning.
    PGLDB'2003, Proceedings of the I PGL Database Research Conference, Rio de Janeiro, Brazil, April 10-11, 2003; 01/2003

Publication Stats

2k Citations
59.13 Total Impact Points

Institutions

  • 1971–2011
    • University of Florida
      • Department of Computer and Information Science and Engineering
      • Database Systems Research and Development Center
      • Department of Electrical and Computer Engineering
      Gainesville, Florida, United States
  • 2002
    • IBM
      Armonk, New York, United States
  • 1995
    • Tatung Institute of Commerce and Technology
      Taipei, Taiwan
  • 1992
    • Texas Instruments Inc.
      Dallas, Texas, United States
  • 1991
    • Bull HN Information Systems Inc.
      Chelmsford, Massachusetts, United States
  • 1988
    • Honeywell
      Morristown, New Jersey, United States
  • 1986
    • University of Michigan
      • Department of Electrical Engineering and Computer Science (EECS)
      Ann Arbor, Michigan, United States