Conference Paper

Management of Distributed and Redundant Storage in High Demand Web Servers for Heterogeneous Networks Access by Agents.

DOI: 10.1007/978-3-540-85863-8_16 Conference: International Symposium on Distributed Computing and Artificial Intelligence, DCAI 2008, University of Salamanca, Spain, 22nd-24th October 2008
Source: DBLP

ABSTRACT To improve the performance of a web server, the most common solution is to build a distributed architecture in which a set of nodes offers the web service. A web site is composed of a set of elements or resources, each of which can be of a specific type. This article describes a new architecture that adds dynamic replication of resources, supporting high-demand file provisioning for web-based access arriving from heterogeneous (mobile and fixed) networks.
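The dynamic-replication idea in the abstract can be sketched as a planner that decides how many copies of each resource to keep, based on observed demand. This is a minimal illustrative sketch, not the paper's architecture; the names `Resource` and `plan_replicas` and the capacity model are assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    requests_per_sec: float  # observed demand for this resource
    size_mb: float

def plan_replicas(resources, node_capacity_rps, max_replicas):
    """Give each resource enough replicas to cover its observed demand,
    capped by the number of nodes available to host copies.
    (Illustrative policy only; the paper's replication logic may differ.)"""
    plan = {}
    for r in resources:
        needed = math.ceil(r.requests_per_sec / node_capacity_rps)
        plan[r.name] = max(1, min(needed, max_replicas))
    return plan
```

A hot resource such as a popular page would receive several replicas, while a rarely requested file keeps a single copy.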

  • ABSTRACT: The integration of the computer and telecommunication products of academia and industry is giving a strong impulse to systems, languages, applications, and services. The performance community has caught this train only in part. There has been an increase in monitoring and workload-characterization activity, but the real core, represented by modeling and resolution activities, seems confined to the lower layers of the system and network infrastructure. Few have ventured into the top layers of the applications, which are of main interest to the real world. The main focus will not be on performance models for the design stage, but on the phases following the realization of these systems and services: tuning and testing, operation, and the post-production stage, including capacity planning. The good news is that there is huge potential for involving performance models and modelers in each of these phases.
    First International Conference on Quantitative Evaluation of Systems (QEST 2004), 10/2004
  • ABSTRACT: A dependable middleware should be able to adaptively share the distributed resources it manages in order to meet diverse application requirements, even when the quality of service (QoS) is degraded due to uncertain variations in load and unanticipated failures. We have addressed this issue in the context of a dependable middleware that adaptively manages replicated servers to deliver a timely and consistent response to time-sensitive client applications. These applications have specific temporal and consistency requirements, and can tolerate a certain degree of relaxed consistency in exchange for better response time. We propose a flexible QoS model that allows clients to specify their timeliness and consistency constraints. We also propose an adaptive framework that dynamically selects replicas to service a client's request based on the prediction made by probabilistic models. These models use the feedback from online performance monitoring of the replicas to provide probabilistic guarantees for meeting a client's QoS specification. The experimental results we have obtained demonstrate the role of feedback and the efficacy of simple analytical models for adaptively sharing the available replicas among the users under different workload scenarios.
    IEEE Transactions on Parallel and Distributed Systems 12/2003; 14(11):1112-1125. DOI: 10.1109/TPDS.2003.1247672
  • ABSTRACT: We study approximate algorithms for placing a set of documents into M distributed Web servers. We define the load of a server as the sum of the loads induced by all documents stored on it; the size of a server is defined similarly. We propose five algorithms. Algorithm 1 balances the loads and sizes of the servers by limiting the loads to k_l and the sizes to k_s times their optimal values, where 1/(k_l - 1) + 1/(k_s - 1) = 1. This result improves the bounds on load and size of servers in (L.C. Chen et al., 2001). Algorithm 2 further reduces the load bound on each server by using partial document replication, and Algorithm 3 by sorting. Algorithm 4 employs both partial replication and sorting. Finally, without using sorting or replication, we give Algorithm 5 for dynamic placement at the cost of a factor O(log M) in time complexity.
    IEEE Transactions on Parallel and Distributed Systems 07/2005; 16(6):489-496. DOI: 10.1109/TPDS.2005.63
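The feedback-driven replica selection described in the middleware abstract above can be sketched as follows: monitor each replica's recent response times and route a request to the replica most likely to meet the client's deadline. This sketch assumes roughly normal response times, which is a simplification of that paper's probabilistic models; all names are illustrative.

```python
import math
import statistics

def prob_meet_deadline(samples, deadline):
    """Estimate P(response_time <= deadline) from recent samples,
    assuming a normal distribution (a simplifying assumption, not
    necessarily the cited paper's model)."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples) or 1e-9  # avoid division by zero
    z = (deadline - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def select_replica(history, deadline):
    """history maps replica name -> list of recent response times (ms);
    pick the replica most likely to answer within the deadline."""
    return max(history, key=lambda name: prob_meet_deadline(history[name], deadline))
```

Because the estimate is refreshed from online monitoring, a replica that slows down under load is naturally deselected on subsequent requests.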
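The document-placement problem from the last abstract can be illustrated with a simple greedy heuristic: assign documents, heaviest load first, to the currently least-loaded server. This is a minimal sketch of the problem setting only, not the paper's five bounded-approximation algorithms.

```python
def place_documents(docs, num_servers):
    """docs: list of (name, load, size) tuples. Greedily assign each
    document, heaviest load first, to the least-loaded server so far.
    (A classic greedy heuristic, not the cited paper's algorithms.)"""
    servers = [{"docs": [], "load": 0.0, "size": 0.0} for _ in range(num_servers)]
    for name, load, size in sorted(docs, key=lambda d: -d[1]):
        target = min(servers, key=lambda s: s["load"])
        target["docs"].append(name)
        target["load"] += load
        target["size"] += size
    return servers
```

Sorting before placement is exactly the refinement the abstract attributes to its Algorithm 3: processing heavy documents first tightens the achievable load bound compared with arbitrary-order greedy assignment.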