Mesaac Makpangou’s research while affiliated with National Institute for Research in Computer Science and Control and other places


Publications (38)


Replica Divergence Control Protocol Based on Predicted Profile (PDPTA 2002)
  • Article

January 2008 · 12 Reads

Ahmed Jebali · Mesaac Makpangou

The Divergence Control Protocol Based on Predicted Profile allows cooperative processes deployed in a weakly connected environment to reconcile low response times for clients with the guarantee of bounded consistency, despite the characteristics of the underlying communication infrastructure. This paper presents how to decompose a global divergence bound into a set of local criteria that are checked locally at every invocation. We propose a protocol relying on these criteria to incrementally construct histories that preserve, at all times, the global divergence bound between any replica and the "ideal state" of the replicated object. An implementation of the protocol is discussed. The paper reports a number of experiments conducted to evaluate the rationale and the performance of the proposed protocol. These experiments show that the protocol benefits substantially from the provision of a correct profile and outperforms intuitive cooperation protocols in large-scale settings. Keywords: Distributed System, Network Service, Replication Protocol, Divergence Control, Bounded Consistency.
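The abstract does not give the concrete local criteria; as a rough illustration only, the Python sketch below (all names and the weighting scheme are assumptions, not taken from the paper) shows the general shape of checking a per-replica share of a global divergence bound before accepting a tentative update.

```python
# Hypothetical illustration of splitting a global divergence bound into
# per-replica local criteria checked at each invocation.

class ReplicaDivergenceGuard:
    def __init__(self, global_bound: float, predicted_share: float):
        # A predicted profile could assign each replica a share of the global
        # bound proportional to its expected update rate (predicted_share in [0, 1]).
        self.local_bound = global_bound * predicted_share
        self.local_divergence = 0.0

    def can_apply_locally(self, op_weight: float) -> bool:
        """Local criterion: accept a tentative operation only if the local
        divergence stays within this replica's share of the global bound."""
        return self.local_divergence + op_weight <= self.local_bound

    def apply(self, op_weight: float) -> None:
        if not self.can_apply_locally(op_weight):
            raise RuntimeError("local divergence bound exceeded; synchronize first")
        self.local_divergence += op_weight

    def on_synchronized(self) -> None:
        # After reconciling with the "ideal state", local divergence resets.
        self.local_divergence = 0.0
```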


FRACS: A Hybrid Fragmentation-Based CDN for E-commerce Applications (FRACS : un CDN hybride pour les applications de commerce électronique basé sur la fragmentation)

June 2007 · 35 Reads

In order to accelerate access to Web applications, content providers are increasingly relying on Content Delivery Networks (CDNs). Currently, CDNs serve dynamic content from the edge in two major ways: page assembly or edge computing. Page assembly assumes that the proportion of cacheable content is high, that the cached fragments are reusable, and that they do not change very often, while edge computing generally assumes that the whole application is replicated on the edge, which is not always suitable. Besides, current CDNs do not provide a means of scaling the database component of a Web application. In this study we propose a hybrid CDN called FRACS, which combines page assembly and edge computing in order to address the needs of relatively static applications as well as more dynamic ones. FRACS automatically determines the replicable fragments of a Web application, then modifies the application's code so as to generate fragmented pages in ESI format and to enable the server to serve the fragments separately. Moreover, FRACS maintains the consistency of all the manipulated fragments. Using the TPC-W benchmark we were able to achieve up to 60% savings in bandwidth and more than an 80% reduction in response time.
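The abstract mentions rewriting pages into ESI (Edge Side Includes) format; as a hedged illustration of what a fragmented page can look like, the sketch below emits a page skeleton whose reusable parts become ESI include tags. The fragment names and URLs are invented for the example and are not FRACS's actual output.

```python
# Illustration only: a dynamically generated page rewritten so that reusable
# fragments become <esi:include> tags served (and cached) separately by the edge.

def render_fragmented_product_page(product_id: str) -> str:
    """Return the page skeleton; the edge cache resolves each ESI include,
    fetching or reusing the corresponding fragment independently."""
    return f"""
    <html>
      <body>
        <!-- Slow-changing fragment: cacheable for a long time -->
        <esi:include src="/fragments/header" />
        <!-- Product details change on catalog updates; cached until invalidated -->
        <esi:include src="/fragments/product/{product_id}" />
        <!-- Per-user content stays dynamic and is always fetched from the origin -->
        <esi:include src="/fragments/cart?session=$(HTTP_COOKIE{{sid}})" />
      </body>
    </html>
    """
```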


A response time-driven server selection substrate for application replica hosting systems

February 2006 · 18 Reads · 1 Citation

This paper presents a server selection approach that relies on a response time estimator and a scalable monitoring infrastructure that provides utilization measures for replica server hosts. The main challenge arises from the difficulty of capturing, both accurately and efficiently, the dynamic state of host and network resources in a large-scale replication context. The monitoring infrastructure can be tuned to minimize network traffic while keeping the estimation accuracy high.
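As a hedged sketch, not the paper's algorithm, the snippet below shows the selection step in its simplest form: pick the replica host whose estimated response time is lowest, using utilization measures supplied by a monitoring substrate. The estimator passed in is a placeholder (a toy in the example) for a function such as the one described in the companion paper further down this list.

```python
# Hypothetical server-selection sketch; names and the toy estimator are assumptions.

from typing import Callable, Dict

def select_server(
    measures: Dict[str, dict],                        # host -> latest utilization measures
    estimate_response_time: Callable[[dict], float],  # estimator fed with those measures
) -> str:
    """Rank candidate replica hosts by estimated response time and return the best."""
    return min(measures, key=lambda host: estimate_response_time(measures[host]))

# Toy example; the freshness of these measures is what the tunable monitoring
# infrastructure trades against network traffic.
measures = {
    "replica-a": {"cpu_util": 0.35, "net_mbps_free": 40.0},
    "replica-b": {"cpu_util": 0.80, "net_mbps_free": 5.0},
}
best = select_server(measures, lambda m: m["cpu_util"] / max(m["net_mbps_free"], 0.1))
```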


Efficient and Transparent Web-Services Selection

December 2005 · 13 Reads · 24 Citations

Lecture Notes in Computer Science

Web services technology standards enable the description, publication, discovery of and binding to services distributed across the Internet. However, current standards do not address the service selection issue: how does a consumer select the service that matches its functional (e.g. operations' semantics) and non-functional (e.g. price, reputation, response time) requirements? Most projects advocate automatic selection mechanisms that involve adapting or modifying the web-services model and its entities (UDDI, WSDL, Client, Provider). These proposals also fail to take advantage of the state of the art in distributed systems, particularly with respect to the collection and dissemination of services' QoS. This paper presents an extension of the initial model that permits automatic service selection, late binding, and the collection of metrics that characterize the quality of service. The extension consists of a web-service access infrastructure made of web service proxies and a peer-to-peer network of QoS metric repositories (the proposal does not impose modifications on UDDI registries or services). The proxies interact with common UDDI registries to find suitable services for selection and to publish descriptions. They collect QoS metrics and store them in a P2P network.
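The following sketch is only an illustration of the proxy-side flow the abstract describes (the helper names, repository API, and weighting scheme are assumptions): find functionally suitable services in a UDDI-like registry, rank them by QoS metrics fetched from a shared repository, and bind to the best match at call time.

```python
# Hypothetical late-binding selection performed by a web-service proxy.

def select_and_bind(registry, qos_repository, operation: str, weights: dict):
    """Late binding: rank functionally suitable services by a weighted QoS score."""
    candidates = registry.find_services(operation)   # functional match (UDDI-like lookup)

    def score(svc):
        qos = qos_repository.lookup(svc.key)         # e.g. metrics stored in a P2P overlay
        # Lower response time and price are better, higher reputation is better;
        # the linear weighting here is purely illustrative.
        return (weights["rt"] * qos["response_time"]
                + weights["price"] * qos["price"]
                - weights["reputation"] * qos["reputation"])

    return min(candidates, key=score)
```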


Exploiting Application Workload Characteristics to Accurately Estimate Replica Server Response Time

October 2005 · 17 Reads · 6 Citations

Lecture Notes in Computer Science

This paper proposes a function that estimates the response time and a method for applying it to different application workloads. The function combines the application's demands for various resources (such as CPU, disk I/O, and network bandwidth) with the capabilities and availability of those resources on the replica servers. The main benefits of our approach include: simplicity and transparency, from the perspective of clients, who do not have to specify resource requirements themselves; estimation accuracy, by considering the application's real needs and the current degree of resource usage determined by concurrent applications; and flexibility, with respect to the precision with which the resource parameters are specified. The experiments we conducted show two positive results. First, our estimator provides a good approximation of the real response time obtained by measurement. Second, the ordering of servers according to our estimation function matches, with high accuracy, the ordering determined by the real response times.
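The paper's actual function is not reproduced here; as a rough illustration of the kind of estimator the abstract describes, the sketch below models per-resource service time as demand divided by the capacity left over by concurrent load, and sums across resources (the additive combination is an assumption for the example).

```python
# Hedged sketch of a response-time estimator combining application demands
# with replica-host capacities and current utilization.

def estimate_response_time(demands: dict, capacities: dict, utilization: dict) -> float:
    """demands[r]: application's demand on resource r (e.g. CPU seconds, MB of I/O).
    capacities[r]: raw capability of the replica host for resource r.
    utilization[r]: fraction of r currently used by concurrent applications."""
    total = 0.0
    for resource, demand in demands.items():
        available = capacities[resource] * max(1e-6, 1.0 - utilization[resource])
        total += demand / available          # time spent on this resource
    return total

# Example: CPU-bound workload on a half-loaded host.
t = estimate_response_time(
    demands={"cpu": 0.4, "disk_io": 0.1, "net": 0.2},
    capacities={"cpu": 1.0, "disk_io": 2.0, "net": 10.0},
    utilization={"cpu": 0.5, "disk_io": 0.2, "net": 0.1},
)
```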


Caching Dynamic Content with Automatic Fragmentation.
  • Conference Paper
  • Full-text available

January 2005 · 87 Reads · 2 Citations

In this paper we propose a fragment-based caching system that aims at improving the performance of Web-based applications. The system fragments dynamic pages automatically. Our approach consists in statically analyzing the programs that generate the dynamic pages rather than their output. This approach has the considerable advantage of reducing the overhead due to fragmentation. Furthermore, we propose a mechanism that increases the reuse rate of the stored fragments, so that the site response time can be improved, among other benefits. We validate our approach using TPC-W as a benchmark.
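The sketch below is a minimal illustration (invented names, not the paper's implementation) of why fragment-level caching increases reuse: each fragment of a dynamic page is looked up independently, so unchanged fragments are reused even when the page as a whole differs between requests.

```python
# Minimal illustrative fragment cache with per-fragment lookup and invalidation.

class FragmentCache:
    def __init__(self):
        self._store = {}

    def get_or_generate(self, key: str, generate):
        if key not in self._store:
            self._store[key] = generate()      # miss: run the page-generation code
        return self._store[key]                # hit: reuse the stored fragment

    def invalidate(self, key: str) -> None:
        self._store.pop(key, None)             # called when the underlying data changes

def render_page(cache: FragmentCache, product_id: str, gen_header, gen_product):
    header = cache.get_or_generate("header", gen_header)
    product = cache.get_or_generate(f"product:{product_id}",
                                    lambda: gen_product(product_id))
    return header + product
```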


Pandora: An Efficient Platform for the Construction of Autonomic Applications

January 2005 · 23 Reads · 2 Citations

Lecture Notes in Computer Science

Autonomic computing has been proposed recently as a way to address the difficult management of applications whose complexity is constantly increasing. Autonomic systems will have to diagnose the problems they face on their own, devise solutions, and act accordingly. Consequently, they require a very high level of flexibility and the ability to constantly monitor themselves. This work presents a framework, Pandora, which eases the construction of applications that satisfy this double goal. Pandora relies on an original application programming pattern, based on stackable layers and message passing, to obtain a minimalist model and architecture that allow it to control the overhead imposed by the full reflexivity of the framework. A prototype of the framework has been implemented in C++ and is freely available for download on the Internet. A detailed performance study is given, together with examples of use, to assess the usability of the platform in real usage conditions.
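As a hedged sketch of the stackable-layer, message-passing programming pattern the abstract describes (this is not Pandora's C++ API, just the general model in Python), each component receives a message, may observe or transform it, and pushes it to the next layer in the stack.

```python
# Illustrative stackable-layer / message-passing pattern; names are invented.

class Component:
    """A processing layer: receives a message, may act on it, passes it on."""
    def __init__(self, downstream=None):
        self.downstream = downstream

    def push(self, message: dict) -> None:
        if self.downstream:
            self.downstream.push(message)

class Counter(Component):
    def __init__(self, downstream=None):
        super().__init__(downstream)
        self.count = 0                       # self-monitoring state, readable at runtime

    def push(self, message: dict) -> None:
        self.count += 1
        super().push(message)

class Printer(Component):
    def push(self, message: dict) -> None:
        print("observed:", message)

# Stacks are built by composing layers; being able to rearrange them at runtime
# is what provides the flexibility the abstract calls for.
stack = Counter(Printer())
stack.push({"event": "request", "url": "/index.html"})
```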


A Generic and Flexible Model for Replica Consistency Management

December 2004 · 7 Reads · 7 Citations

Lecture Notes in Computer Science

This paper presents a flexible consistency model built around a parameterized representation common to all models along the spectrum between strong consistency and eventual consistency. A specific model, required by a particular Data Object, is derived from this representation by selecting and combining the appropriate consistency parameter values.
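As an illustration only (the parameter names are assumptions, not the paper's), the sketch below shows what such a parameterized representation can look like and how the two ends of the spectrum fall out as particular parameter choices.

```python
# Hypothetical parameterized consistency description; specific models are
# derived by fixing parameter values per Data Object.

from dataclasses import dataclass

@dataclass(frozen=True)
class ConsistencyModel:
    max_staleness_s: float      # how old a read value may be
    max_pending_updates: int    # how many unpropagated updates a replica may hold
    ordering: str               # e.g. "total", "causal", "none"

# Derived instances at the two ends of the spectrum:
STRONG = ConsistencyModel(max_staleness_s=0.0, max_pending_updates=0, ordering="total")
EVENTUAL = ConsistencyModel(max_staleness_s=float("inf"),
                            max_pending_updates=10**9, ordering="none")
# Anything in between is obtained by choosing intermediate values.
```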


Pandora : une plate-forme efficace pour la construction d'applications autonomes (Pandora: an efficient platform for the construction of autonomic applications)

November 2004 · 20 Reads

Autonomic computing has been proposed recently as a way to address the difficult day-to-day management of applications whose complexity is constantly increasing. Autonomous applications will have to be especially flexible and able to monitor themselves permanently. This work presents a framework, Pandora, which eases the construction of applications that satisfy this double goal. Pandora relies on an original application programming pattern, based on the composition of stackable layers and message passing, to obtain a minimalist model and architecture that allow it to control the overhead imposed by the full reflexivity of the framework. A functional prototype of the framework has also been implemented in C++. A detailed performance study, together with examples of use, completes this presentation.


A Configuration Tool for Caching Dynamic Pages

Lecture Notes in Computer Science

The efficacy of a fragment-based caching system fundamentally depends on the definition of the fragments and on mechanisms that improve reuse and guarantee the consistency of the cache content (notably "purification" and invalidation mechanisms). Existing caching systems assume that the administrator provides the required configuration data manually, which is likely to be a heavy, time-consuming task prone to human error. This paper proposes a tool that helps the administrator cope with these issues by automating the systematic tasks and proposing a default fragmentation with the prerequisite reuse and invalidation directives, which may be augmented or overwritten if necessary.
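The following is a hypothetical example of the kind of configuration such a tool could generate automatically; the directive names and format are invented for illustration and are not the tool's actual output. The point is that defaults come with reuse and invalidation directives that the administrator may augment or overwrite rather than write from scratch.

```python
# Invented configuration format: default fragmentation with reuse and
# invalidation directives, overridable by the administrator.

DEFAULT_FRAGMENTATION = {
    "header": {
        "reuse": "shared",              # same fragment served to all users
        "invalidate_on": [],            # static: never invalidated by data changes
    },
    "product_detail": {
        "reuse": "by-parameter:product_id",
        "invalidate_on": ["UPDATE item", "UPDATE inventory"],  # templated DB writes
    },
    "shopping_cart": {
        "reuse": "none",                # per-session, not cached
        "invalidate_on": ["*"],
    },
}

# The administrator can override a generated default instead of writing it by hand:
DEFAULT_FRAGMENTATION["product_detail"]["reuse"] = "shared"
```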


Citations (23)


... It does not compare results for differences in content. Patarin and Makpangou's Pandora platform [24] can measure the efficiency of proxy caches. A stack is located in front of the tested cache to catch the traffic between the clients and the cache; another one is located after the tested cache to get the traffic between the cache and origin servers. ...

Reference:

Implementing a web proxy evaluation architecture
On-line Measurement of Web Proxy Cache Efficiency
  • Citing Article
  • January 2003

... Future research will include the specification of network interconnection and the exploration of application facilities. Contact: Marc Shapiro, INRIA, B.P. 105, 78153 Le Chesnay Cedex, France References: [303], [304], [305], [306], [307] 2.48 Sprite ...

Un recueil de papiers sur le système d'exploitation réparti à objets SOS (A collection of papers on the SOS distributed object-based operating system)
  • Citing Article
  • January 1987

Vadim Abrossimov · P. Gautron · [...] · Mesaac Makpangou

... Among these domains, Web services based on P2P computing require special attention from collaboration and interoperability in a distributed computing environment [8]. Web Services technology is considered as a revolution for the web in which a network of heterogeneous software components interoperate and exchange dynamic information [4], [5], [6]. In the last few years, other technologies have been used to improve the automatic discovery of Web services. ...

A Scalable Peer-to-Peer Approach To Service Discovery Using Ontology
  • Citing Article

... A document's primary server is responsible for evaluating the current strategy assigned to the document. The primary server evaluates its choice by collecting the document's most recent trace data and simulating several alternative strategies using a modified version of Saperlipopette [22] that can replay the trace files and calculate a number of metrics. The primary informs the secondary servers when it chooses a new strategy. ...

Saperlipopette!: a Distributed Web Caching Systems Evaluation Tool

... Another use of this technology is to use object based encapsulations of operating system services in order to represent operating system services internally in different ways, invisibly to services users. Examples of such uses are the internally parallel operating system servers offered in the Eden system [134] or in CHAOS [213,92] and Presto [24,23], the association of protection boundaries with certain objects as intended in Psyche [80], or the internally fragmented objects offered by Shapiro [227,229,98] for distributed systems, in 'Topologies' [211] for hypercube machines, and in 'Distributed Shared Abstractions' [57] for multiprocessor engines. ...

Distributed Abstractions, Lightweight References.
  • Citing Conference Paper
  • January 1992

... The system does not support partitioning, although the key concept of conits could be used in a partitioned context. In CoRe [12] the principle of specifying consistency is extended to allow the programmer to define consistency using a larger set of parameters. AQua [8] approaches the solution from the other direction: configuration of the allowed consistency in order to increase availability; that is, by allowing availability requirements to be specified. ...

A Generic and Flexible Model for Replica Consistency Management
  • Citing Conference Paper
  • December 2004

Lecture Notes in Computer Science

... Challenger et al. [8] analyzed the data dependencies between dynamic web pages and the back-end database to maintain the coherence of the dynamic content in proxy caches. Challenger et al. [9] later extended their work to determining the dependencies between fragments of dynamic web pages and the backend database, and in [7] Chabbouh and Makpangou showed how to use an offline analysis similar to ours to determine the dependencies between a web application's templated database requests and the dynamic page fragments they affect. Finally, we have previously studied several problems related to database query result caches in CDN-like environments. ...

Caching Dynamic Content with Automatic Fragmentation.

... Therefore, BPM tends to face dynamicity and heterogeneity in using such services, which increasingly become part of BPs. While trust can be conceived when services are backed by BC, nonfunctional requirements like QoS are still the major decisive factors for service selection [54,135]. In this circumstance, QoS, e.g., response time, availability, error rate, reputation, etc., will be the major determinants in service selection strategies. ...

Efficient and Transparent Web-Services Selection
  • Citing Conference Paper
  • December 2005

Lecture Notes in Computer Science

... But we currently have no general predictive theory for what kinds of tasks can and can not be tackled in this way so we have relied on incremental simulation models. Dynamic protocol platforms have been developed that allow for rapid run-time protocol adaptation without disturbing running applications [10]. These kinds of platforms could provide the essential packet-level infrastructure on which meta-protocols such as ASB would execute. ...

Pandora: An Efficient Platform for the Construction of Autonomic Applications
  • Citing Conference Paper
  • January 2005

Lecture Notes in Computer Science