Christophe Diot

Technicolor, Paris, Île-de-France, France

Publications (265) · 55.19 Total Impact

  • ABSTRACT: The success of over-the-top (OTT) services reflects users' demand for personalization of digital services at home. ISPs propose fulfilling this demand with a cloud delivery model, which would simplify the management of the service portfolio and bring them additional revenue streams. We argue that this approach has many limitations that can be fixed by turning the home gateway into a flexible execution platform. We define requirements for such a "service-hosting gateway" and build a proof of concept prototype using a virtualized Intel Groveland system-on-a-chip platform. We discuss remaining challenges such as service distribution, security and privacy, management, and home integration.
    ACM SIGCOMM Computer Communication Review 09/2012; 42(5):37-43. · 0.91 Impact Factor
  • Source
    Asher Levi, Osnat Mokryn, Christophe Diot, Nina Taft
    ABSTRACT: Online hotel searching is a daunting task due to the wealth of online information. Reviews written by other travelers replace word-of-mouth, yet turn the search into a time-consuming task. Users do not rate enough hotels to enable collaborative-filtering-based recommendation, so a cold-start recommender system is needed. In this work we design a cold-start hotel recommender system which uses the text of the reviews as its main data. We define context groups based on reviews extracted from TripAdvisor.com and Venere.com. We introduce a novel weighted algorithm for text mining. Our algorithm imitates a user that favors reviews written with the same trip intent, by people of similar background (nationality), and with similar preferences for hotel aspects, which are our defined context groups. Our approach combines numerous elements, including unsupervised clustering to build a vocabulary for hotel aspects, semantic analysis to understand sentiment towards hotel features, and the profiling of intent and nationality groups. We implemented our system, which was used by the public to conduct 150 trip-planning experiments. We compare our solution to the top suggestions of the aforementioned web services and show that users were, on average, 20% more satisfied with our hotel recommendations. We outperform these web services even more in cities where hotel prices are high.
    ACM RecSys; 09/2012
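
The context-group weighting described in the abstract above can be illustrated with a small sketch. This is a hypothetical illustration, not the paper's algorithm: the Review fields, the sentiment values, and the weights `w_intent` / `w_nationality` are assumptions chosen for readability.

```python
# Hypothetical sketch of context-weighted review scoring for a cold-start
# hotel recommender: reviews from users with the same trip intent and
# nationality as the active user are weighted more heavily. All weights,
# group names, and sentiment scores are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class Review:
    hotel_id: str
    trip_intent: str    # e.g. "business", "family", "romantic"
    nationality: str    # reviewer background
    sentiment: float    # aggregated aspect sentiment in [-1, 1]

def score_hotels(reviews, user_intent, user_nationality,
                 w_intent=2.0, w_nationality=1.5, w_base=1.0):
    """Rank hotels by a weighted average of review sentiment."""
    totals, weights = {}, {}
    for r in reviews:
        w = w_base
        if r.trip_intent == user_intent:
            w += w_intent
        if r.nationality == user_nationality:
            w += w_nationality
        totals[r.hotel_id] = totals.get(r.hotel_id, 0.0) + w * r.sentiment
        weights[r.hotel_id] = weights.get(r.hotel_id, 0.0) + w
    return sorted(((totals[h] / weights[h], h) for h in totals), reverse=True)

if __name__ == "__main__":
    reviews = [
        Review("hotel_a", "business", "FR", 0.8),
        Review("hotel_a", "family", "US", -0.2),
        Review("hotel_b", "business", "FR", 0.4),
    ]
    print(score_hotels(reviews, "business", "FR"))
```
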
  • Source
    Anna-Kaisa Pietilänen, Christophe Diot
    ABSTRACT: Epidemic content dissemination in opportunistic social networks (OSN) has been analyzed in depth, theoretically and empirically. Most related works have studied the pairwise contact history among nodes in conference or campus environments. We claim that given the nature of these networks, this approach leads to a biased understanding of the content dissemination process. We design a methodology to break OSN traces down into 'temporal communities', i.e., groups of people who meet periodically during an experiment. We show that these communities correlate with people's social communities. As in previous works, we observe that efficient content dissemination is mostly due to high contact rate nodes. However, we show that high contact rate nodes that are more frequently involved in temporal communities contribute less to the dissemination process, leading us to conjecture that social communities tend to limit dissemination in OSNs.
    Proceedings of the thirteenth ACM international symposium on Mobile Ad Hoc Networking and Computing; 06/2012
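
One plausible way to extract the "temporal communities" mentioned in the abstract above is sketched below. It is an illustration under simplifying assumptions (fixed time windows, connected components of each window's contact graph as candidate groups, exact recurrence of the same group), not the paper's methodology.

```python
# Illustrative sketch: slice a contact trace into fixed time windows, take the
# connected components of each window's contact graph as candidate groups, and
# keep groups that recur in several windows as "temporal communities".
from collections import defaultdict

def window_graphs(contacts, window):
    """contacts: iterable of (timestamp, node_a, node_b). Returns per-window adjacency."""
    graphs = defaultdict(lambda: defaultdict(set))
    for t, a, b in contacts:
        w = int(t // window)
        graphs[w][a].add(b)
        graphs[w][b].add(a)
    return graphs

def components(adj):
    """Connected components of an adjacency dict {node: set of neighbors}."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(frozenset(comp))
    return comps

def temporal_communities(contacts, window=3600, min_recurrence=3):
    counts = defaultdict(int)
    for adj in window_graphs(contacts, window).values():
        for comp in components(adj):
            if len(comp) > 1:
                counts[comp] += 1
    return [c for c, n in counts.items() if n >= min_recurrence]
```
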
  • Giuseppe Reina, Ernst Biersack, Christophe Diot
    ABSTRACT: Massively multiplayer online games have become popular in recent years. Scaling with the number of users is challenging due to the low latency requirements of these games. Peer-to-peer techniques naturally address the scalability issues at the expense of additional complexity to maintain consistency among players. We design and implement Quiver, a middleware that allows an existing game to be played in peer-to-peer mode with minimal changes to the engine. Quiver focuses on achieving scalability by distributing the game state. It achieves consistency by keeping the state synchronized among all the players. We have built a working prototype of Quake II using Quiver. We analyze the changes necessary to Quake II and discuss how generic a middleware like Quiver can be.
    Proceedings of the 22nd international workshop on Network and Operating System Support for Digital Audio and Video; 06/2012
  • Source
    ABSTRACT: Opportunistic ad-hoc communication enables portable devices such as smartphones to effectively exchange information, taking advantage of their mobility and locality. The nature of human interaction makes information dissemination using such networks challenging. We use three different experimental traces to study fundamental properties of human interactions. We break our traces down into multiple areas and classify mobile users in each area according to their social behavior: Socials are devices that show up frequently or periodically, while Vagabonds represent the rest of the population. We find that in most cases the majority of the population consists of Vagabonds. We evaluate the relative role of these two groups of users in data dissemination. Surprisingly, we observe that under certain circumstances, which appear to be common in real-life situations, the effectiveness of dissemination predominantly depends on the number of users in each class rather than their social behavior, contradicting some of the previous observations. We validate and extend the findings of our experimental study through a mathematical analysis.
    INFOCOM, 2011 Proceedings IEEE; 05/2011
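
The Socials-versus-Vagabonds split described in the abstract above can be sketched as follows. The classification criterion (a device counts as Social if it is seen on at least a given fraction of observation days in an area) and the threshold are assumptions for illustration, not the paper's exact definition.

```python
# Hedged sketch: within one area, devices seen frequently across the
# observation period are labeled Socials; everyone else is a Vagabond.
from collections import defaultdict

def classify_devices(sightings, total_days, social_fraction=0.5):
    """sightings: iterable of (device_id, day_index) observed in a single area."""
    days_seen = defaultdict(set)
    for dev, day in sightings:
        days_seen[dev].add(day)
    socials = {d for d, days in days_seen.items()
               if len(days) / total_days >= social_fraction}
    vagabonds = set(days_seen) - socials
    return socials, vagabonds
```
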
  • Source
    ABSTRACT: Social virtual worlds such as Second Life (SL) are digital representations of the real world where human-controlled avatars evolve and interact through social activities. Understanding the characteristics of virtual worlds can be extremely valuable in order to optimize their design. In this paper, we perform an extensive analysis of SL. We exploit standard avatar capabilities to monitor the virtual world, and we emulate avatar behaviors in order to evaluate user experience. We make several surprising observations. We find that 30% of the regions are never visited during the six-day monitoring period, whereas less than 1% of the regions have large peak populations. Moreover, the vast majority of regions are static, i.e., objects are seldom created or destroyed. Interestingly, we show that avatars interact similarly to humans in real life, gathering in small groups of 2-10 avatars. We also show that user experience is poor. Most of the time, avatars have an incorrect view of their neighbor avatars, and inconsistency can last several seconds, impacting interactivity among avatars.
    IEEE/ACM Transactions on Networking 01/2011; 19:80-91. · 2.01 Impact Factor
  • Source
    Ítalo Cunha, Renata Teixeira, Christophe Diot
    ABSTRACT: Since Paxson’s study over ten years ago, the Internet has changed considerably. In particular, routers often perform load balancing. Disambiguating routing changes from load balancing using traceroute-like probing requires a large number of probes. Our first contribution is FastMapping, a probing method that exploits load balancing characteristics to reduce the number of probes needed to measure accurate route dynamics. Our second contribution is to reappraise Paxson’s results using datasets with high-frequency route measurements and complete load balancing information. Our analysis shows that, after removing dynamics due to load balancing, Paxson’s observations on route prevalence and persistence still hold.
    Passive and Active Measurement - 12th International Conference, PAM 2011, Atlanta, GA, USA, March 20-22, 2011. Proceedings; 01/2011
  • Source
    ABSTRACT: The success of broadband residential Internet access is changing the way home users consume digital content and services. Currently, each home service requires the installation of a separate physical box (for instance, the NetFlix box or IPTV set-top boxes). Instead, we argue for deploying a single box in the home that is powerful and flexible enough to host a variety of home services. In addition, this box is managed by the Internet service provider and is able to provide service guarantees. We call such a box a service-hosting gateway (SHG), as it combines the functionalities of the home gateway managed by the network service provider with the capability of hosting services. Isolation between such services is ensured by virtualization. We demonstrate a prototype of our SHG, based on the hardware platform that will be used for future home gateways. We illustrate the features of the SHG with multiple use cases ranging from simple service deployment scenarios to complex media distribution services and home automation features.
    Proceedings of the ACM SIGCOMM 2011 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, Toronto, ON, Canada, August 15-19, 2011; 01/2011
  • Source
    ABSTRACT: This paper investigates to what extent it is possible to use traceroute-style probing for accurately tracking Internet path changes. When the number of paths is large, the usual traceroute based approach misses many path changes because it probes all paths equally. Based on empirical observations, we argue that monitors can optimize probing according to the likelihood of path changes. We design a simple predictor of path changes using a nearest neighbor model. Although predicting path changes is not very accurate, we show that it can be used to improve probe targeting. Our path tracking method, called DTrack, detects up to two times more path changes than traditional probing, with lower detection delay, as well as providing complete load-balancer information.
    Proceedings of the ACM SIGCOMM 2011 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, Toronto, ON, Canada, August 15-19, 2011; 01/2011
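
The idea of spending a limited probing budget on the paths most likely to have changed, as described in the abstract above, can be sketched with a simple nearest-neighbour predictor. The feature choice, the use of scikit-learn, and the budget-based ranking are assumptions for illustration; they do not reproduce DTrack's actual model.

```python
# Illustrative sketch: fit a k-nearest-neighbour regressor on past
# (path features, observed change) pairs, then probe the paths with the
# highest predicted likelihood of having changed.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def rank_paths_for_probing(past_X, past_y, current_features, budget):
    """past_X/past_y: feature vectors and 0/1 change indicators from earlier rounds.
    current_features: {path_id: feature_vector}.
    Returns the `budget` paths with the highest predicted change likelihood."""
    model = KNeighborsRegressor(n_neighbors=min(5, len(past_y))).fit(past_X, past_y)
    paths = list(current_features)
    scores = model.predict(np.array([current_features[p] for p in paths], dtype=float))
    order = np.argsort(scores)[::-1]          # most likely to have changed first
    return [paths[i] for i in order[:budget]]
```
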
  • Source
    R. Gass, C. Diot
    ABSTRACT: Wi-Fi access points can be found in any urban environment and have the potential to be a source of useful bandwidth even for users that are only in range for short periods of time. In this paper, we describe how in-motion networking is capable of exploiting short connection opportunities and transferring significant amounts of data by using only open or community access points. We describe the issues facing the in-motion user of today and present a protocol that is capable of eliminating backhaul bottlenecks at the access point's connection to the Internet. We present the results of our implementation with measurements in an urban area.
    Vehicular Technology Conference (VTC 2010-Spring), 2010 IEEE 71st; 06/2010
  • Source
    F. Silveira, C. Diot
    ABSTRACT: Traffic anomaly detection has received a lot of attention over recent years, but understanding the nature of these anomalies and identifying the flows involved is still a manual task, in most cases. We introduce Unsupervised Root Cause Analysis (URCA) which isolates anomalous traffic and classifies alarms with minimal manual assistance and high accuracy. URCA proceeds by successive reduction of the anomalous space, eliminating normal traffic based on feedback from the anomaly detection method. Classification is done by clustering a new anomaly with previously labeled events. We validate URCA using manually analyzed real anomalies as well as synthetic anomaly injection. Our validation shows that URCA can accurately diagnose a large range of anomaly types, including network scans, DDoS attacks, and major routing changes.
    INFOCOM, 2010 Proceedings IEEE; 04/2010
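
The successive-reduction idea summarized in the abstract above can be sketched as follows: repeatedly drop flow groups whose removal does not make the anomaly disappear, keeping only the groups the detector still needs in order to fire. The `detector` callable and the grouping of flows are placeholders, not URCA's implementation.

```python
# Minimal sketch of anomalous-space reduction driven by detector feedback.
def isolate_anomalous_flows(flow_groups, detector):
    """flow_groups: list of flow subsets (e.g. grouped by source prefix).
    detector(groups) -> True if an anomaly is still detected in the traffic
    composed of those groups."""
    suspects = list(flow_groups)
    changed = True
    while changed and len(suspects) > 1:
        changed = False
        for g in list(suspects):
            remaining = [x for x in suspects if x is not g]
            if detector(remaining):      # anomaly persists without g,
                suspects = remaining     # so g looks like normal traffic
                changed = True
                break
    return suspects                      # groups that carry the anomaly
```
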
  • Source
    ABSTRACT: When many flows are multiplexed on a non-saturated link, their volume changes over short timescales tend to cancel each other out, making the average change across flows close to zero. This equilibrium property holds if the flows are nearly independent, and it is violated by traffic changes caused by several, potentially small, correlated flows. Many traffic anomalies (both malicious and benign) fit this description. Based on this observation, we exploit equilibrium to design a computationally simple detection method for correlated anomalous flows. We compare our new method to two well known techniques on three network links. We manually classify the anomalies detected by the three methods, and discover that our method uncovers a different class of anomalies than previous techniques do.
    Proceedings of the ACM SIGCOMM 2010 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, New Delhi, India, August 30 -September 3, 2010; 01/2010
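
A minimal sketch of an equilibrium-based check in the spirit of the abstract above: when many independent flows share a non-saturated link, their volume changes should average out to roughly zero, so a mean change far from zero (measured in standard-error units) suggests correlated flows. The alarm threshold is an illustrative assumption, not the paper's calibrated value.

```python
# Flag a time bin when the mean per-flow volume change deviates from zero
# by more than `threshold` standard errors.
import math

def equilibrium_alarm(prev_volumes, curr_volumes, threshold=3.0):
    """prev_volumes/curr_volumes: {flow_id: bytes in consecutive time bins}."""
    flows = set(prev_volumes) | set(curr_volumes)
    deltas = [curr_volumes.get(f, 0) - prev_volumes.get(f, 0) for f in flows]
    n = len(deltas)
    if n < 2:
        return False
    mean = sum(deltas) / n
    var = sum((d - mean) ** 2 for d in deltas) / (n - 1)
    if var == 0:
        return abs(mean) > 0
    return abs(mean) / math.sqrt(var / n) > threshold
```
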
  • Richard Gass, Christophe Diot
    ABSTRACT: Mobile Internet users have two options for connectivity: pay premium fees to utilize 3G, or wander around looking for open Wi-Fi access points. We perform an experimental evaluation of the amount of data that can be pushed to and pulled from the Internet on 3G and open Wi-Fi access points while on the move. This side-by-side comparison is carried out at both driving and walking speeds in an urban area using standard devices. We show that significant amounts of data can be transferred opportunistically without the need to always be connected to the network. We also show that Wi-Fi mostly suffers from not being able to exploit short contacts with access points, but performs comparably to 3G when downloading and significantly better when uploading data.
    Passive and Active Measurement, 11th International Conference, PAM 2010, Zurich, Switzerland, April 7-9, 2010. Proceedings; 01/2010
  • Source
    ABSTRACT: When many flows are multiplexed on a non-saturated link, their volume changes over short timescales tend to cancel each other out, making the average change across flows close to zero. This equilibrium property holds if the flows are nearly independent, and it is violated by traffic changes caused by several correlated flows. We exploit this empirical property to design a computationally simple anomaly detection method.
    SIGMETRICS 2010, Proceedings of the 2010 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, New York, New York, USA, 14-18 June 2010; 01/2010
  • Source
    ABSTRACT: In opportunistic networks, end-to-end paths between two communicating nodes are rarely available. In such situations, the nodes might still copy and forward messages to nodes that are more likely to meet the destination. The question is which forwarding algorithm offers the best trade-off between cost (number of message replicas) and rate of successful message delivery. We address this challenge by developing the PeopleRank approach, in which nodes are ranked using tunable, weighted social information. Similar to the PageRank idea, PeopleRank gives higher weight to nodes if they are socially connected to other important nodes of the network. We develop centralized and distributed variants for the computation of PeopleRank. We present an evaluation using real mobility traces of nodes and their social interactions to show that PeopleRank manages to deliver messages with near-optimal success rate (close to Epidemic Routing) while reducing the number of message retransmissions by 50% compared to Epidemic Routing.
    INFOCOM 2010. 29th IEEE International Conference on Computer Communications, Joint Conference of the IEEE Computer and Communications Societies, 15-19 March 2010, San Diego, CA, USA; 01/2010
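
A centralized PageRank-style ranking over the social graph, as described in the abstract above, can be sketched as follows. The damping factor, iteration count, and the rank-based forwarding rule shown here are illustrative assumptions rather than the paper's exact parameters.

```python
# Illustrative centralized PeopleRank-style computation plus a simple
# forwarding rule: hand a message to an encountered node if it is the
# destination or is ranked at least as high as the current carrier.
def people_rank(social_graph, d=0.85, iterations=50):
    """social_graph: {node: set of socially connected neighbors}."""
    nodes = list(social_graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {}
        for n in nodes:
            incoming = sum(rank[m] / max(len(social_graph[m]), 1)
                           for m in nodes if n in social_graph[m])
            new[n] = (1 - d) / len(nodes) + d * incoming
        rank = new
    return rank

def should_forward(carrier, encountered, destination, rank):
    return encountered == destination or rank[encountered] >= rank[carrier]
```
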
  • Source
    ABSTRACT: Second Life (SL) is currently the most popular social virtual world, i.e., a digitalization of the real world where avatars can meet, socialize and trade. SL is managed through a Client/Server (C/S) architecture with a very high cost and limited scalability. A scalable and cheap alternative to C/S is to use a Peer-to-Peer (P2P) approach, where SL users rely only on their own resources (storage, CPU and bandwidth) to run the virtual world. We develop a SL client that allows its users to take advantage of a P2P network structured as a Delaunay overlay. We compare the performance of a P2P and C/S architecture for Second Life, executing several instances of our client over PlanetLab and populating a SL region with our controlled avatars. Avatar mobility traces collected in SL are used to drive avatar behaviors. The results show that P2P improves user experience by about 20% compared to C/S (measured in terms of consistency). Avatar interactivity is also 5 times faster in P2P than in C/S.
    Network and Systems Support for Games (NetGames), 2009 8th Annual Workshop on; 12/2009
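
Deriving overlay neighbours from a Delaunay triangulation of avatar positions, as used by the P2P design described above, can be sketched as follows. The use of scipy and the dictionary-based interface are assumptions; the client's actual overlay maintenance protocol is not shown.

```python
# Illustrative sketch: each peer's overlay neighbours are the avatars it shares
# a Delaunay triangle with. Requires at least three non-collinear positions.
import numpy as np
from scipy.spatial import Delaunay

def delaunay_neighbors(positions):
    """positions: {avatar_id: (x, y)}. Returns {avatar_id: set of neighbour ids}."""
    ids = list(positions)
    pts = np.array([positions[i] for i in ids], dtype=float)
    tri = Delaunay(pts)
    neighbors = {i: set() for i in ids}
    for simplex in tri.simplices:          # each simplex is a triangle of indices
        for a in simplex:
            for b in simplex:
                if a != b:
                    neighbors[ids[a]].add(ids[b])
    return neighbors
```
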
  • Source
    ABSTRACT: The pocket switched network (PSN) communication paradigm builds on opportunistic contacts between mobile devices and human mobility to disseminate content without relying on any infrastructure. We propose MobiClique, a mobile social software that builds and maintains an ad hoc social network overlay over a pocket switched network and allows users to exchange messages with each other and with groups of users sharing similar interests. We describe the MobiClique implementation and initial experimental results.
    INFOCOM Workshops 2009, IEEE; 05/2009
  • Source
    ABSTRACT: The automatic detection of failures in IP paths is an essential step for operators to perform diagnosis or for overlays to adapt. We study a scenario where a set of monitors send probes toward a set of target end-hosts to detect failures in a given set of IP interfaces. Unfortunately, there is a large probing cost to monitor paths between all monitors and targets at a very high frequency. We make two major contributions to reduce this probing cost. First, we propose a formulation of the probe optimization problem which, in contrast to the established formulation, is not NP-complete. Second, we propose two linear programming algorithms to minimize probing cost. Our algorithms combine low-frequency per-path probing to detect per-interface failures at a higher frequency. We analyze our solutions both analytically and experimentally. Our theoretical results show that the probing cost increases linearly with the number of interfaces in a random power-law graph. We confirm this linear increase in Internet graphs measured from PlanetLab and RON. Hence, Internet graphs belong to the most costly class of graphs to probe.
    INFOCOM 2009, IEEE; 05/2009
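
A toy linear program in the spirit of the abstract above: choose per-path probing frequencies so that every interface of interest is covered at some minimum aggregate rate while the total probing cost is minimized. The constraints shown here only illustrate the general LP shape (using scipy) and do not reproduce the paper's formulation.

```python
# Minimize total probing frequency subject to each interface receiving at
# least `min_freq` aggregate probing from the paths that traverse it.
import numpy as np
from scipy.optimize import linprog

def min_probing_frequencies(paths, interfaces, min_freq):
    """paths: list of sets of interface ids traversed by each monitor-target path.
    interfaces: list of interface ids to watch."""
    # For each interface i: -sum_{paths containing i} f_p <= -min_freq
    A = np.array([[-1.0 if i in p else 0.0 for p in paths] for i in interfaces])
    b = np.full(len(interfaces), -min_freq)
    c = np.ones(len(paths))                  # total probing frequency to minimize
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * len(paths))
    return res.x if res.success else None

# Example: three paths covering interfaces a, b, c at >= 1 probe per minute.
freqs = min_probing_frequencies(
    [{"a", "b"}, {"b", "c"}, {"a", "c"}], ["a", "b", "c"], min_freq=1.0)
```
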
  • Source
    ABSTRACT: In wireless mesh routing, it is common to use periodic estimation of link quality to identify high throughput paths. While extensive work has been devoted to the design and static throughput evaluation of path metrics, evaluating the accuracy of the underlying link cost metrics under dynamic conditions has not received due attention. We introduce an experimental and analytical methodology that quantifies the ability of link cost metrics to accurately estimate link capacity. We use this methodology on a wireless mesh testbed to evaluate network layer and cross-layer estimation approaches of ETT, a popular wireless link cost metric. Our results show that the network layer approach exhibits low correlation with link capacity across time and across multiple links. On the other hand, cross layer information obtained by our methodology can significantly improve accuracy and can aid in identifying the dominant factors that lead to link cost metric inaccuracies.
    Wireless On-Demand Network Systems and Services, 2009. WONS 2009. Sixth International Conference on; 03/2009
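
The ETT metric evaluated in the paper above is commonly defined (following Draves et al.) as ETX × (S / B), where ETX = 1 / (d_f × d_r) is the expected transmission count derived from forward and reverse probe delivery ratios, S is the probe packet size, and B the estimated link bandwidth. A small sketch of that standard formula:

```python
# Standard ETX/ETT computation from measured delivery ratios and an
# estimated link bandwidth (values in the example are illustrative).
def expected_transmission_count(delivery_fwd, delivery_rev):
    """delivery_fwd/delivery_rev: measured probe delivery ratios in (0, 1]."""
    if delivery_fwd <= 0 or delivery_rev <= 0:
        return float("inf")
    return 1.0 / (delivery_fwd * delivery_rev)

def expected_transmission_time(delivery_fwd, delivery_rev,
                               packet_size_bits, bandwidth_bps):
    """ETT in seconds: ETX scaled by the time to send one packet on this link."""
    etx = expected_transmission_count(delivery_fwd, delivery_rev)
    return etx * packet_size_bits / bandwidth_bps

# e.g. 90%/80% delivery ratios, 1500-byte probe, 6 Mbit/s estimated link rate
ett = expected_transmission_time(0.9, 0.8, 1500 * 8, 6e6)
```
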
  • Source
    ABSTRACT: We consider a mobile ad hoc network setting where Bluetooth-enabled mobile devices communicate directly with other devices as they meet opportunistically. We design and implement a novel mobile social networking middleware named MobiClique. MobiClique forms and exploits ad hoc social networks to disseminate content using a store-carry-forward technique. Our approach distinguishes itself from other mobile social software by removing the need for a central server to conduct exchanges, by leveraging existing social networks to bootstrap the system, and by taking advantage of the social network overlay to disseminate content. We also propose an open API to encourage third-party application development. We discuss the system architecture and three example applications. We show experimentally that MobiClique successfully builds and maintains an ad hoc social network, leveraging contact opportunities between friends and people sharing interest(s) for content exchanges. Our experience also provides insight into some of the key challenges and shortcomings that researchers face when designing and deploying similar systems.
    Proceedings of the 2nd ACM Workshop on Online Social Networks, WOSN 2009, Barcelona, Spain, August 17, 2009; 01/2009
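
A hedged sketch of a store-carry-forward exchange over a social overlay, in the spirit of the MobiClique description above. The decision rule (hand over messages addressed to the encountered user, to one of their friends, or tagged with one of their interests) and the data structures are illustrative assumptions, not the middleware's actual API.

```python
# Illustrative store-carry-forward exchange at contact time.
from dataclasses import dataclass, field

@dataclass
class Message:
    dst: str            # destination user, or "" for interest-based content
    topic: str = ""     # interest tag for group messages

@dataclass
class Node:
    uid: str
    friends: set = field(default_factory=set)
    interests: set = field(default_factory=set)
    buffer: list = field(default_factory=list)

def on_contact(carrier: Node, peer: Node):
    """Copy messages the peer is a good next hop for; drop delivered copies."""
    for msg in list(carrier.buffer):
        if (msg.dst == peer.uid
                or msg.dst in peer.friends
                or (msg.topic and msg.topic in peer.interests)):
            peer.buffer.append(msg)
            if msg.dst == peer.uid:          # delivered: remove local copy
                carrier.buffer.remove(msg)
```
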

Publication Stats

10k Citations
55.19 Total Impact Points

Institutions

  • 2010–2011
    • Technicolor
      Paris, Île-de-France, France
  • 2009
    • University of Adelaide
      Adelaide, South Australia, Australia
    • Princeton University
      Princeton, New Jersey, United States
  • 2008
    • University of California, Irvine
      • Department of Electrical Engineering and Computer Science
      Irvine, CA, United States
  • 2007
    • University of Missouri
      Columbia, Missouri, United States
    • Mountain View Pharmaceuticals, Inc.
      Menlo Park, California, United States
  • 2006–2007
    • Thomson Reuters
      New York City, New York, United States
  • 2005
    • University of Pennsylvania
      • Department of Electrical and Systems Engineering
      Philadelphia, PA, United States
    • University of Cambridge
      Cambridge, England, United Kingdom
  • 2004–2005
    • University of Massachusetts Amherst
      • School of Computer Science
      Amherst Center, Massachusetts, United States
    • Intel
      Santa Clara, California, United States
    • Concordia University–Ann Arbor
      Ann Arbor, Michigan, United States
  • 2003–2005
    • Cancer Research UK Cambridge Institute
      Cambridge, England, United Kingdom
    • University of Minnesota Duluth
      Duluth, Minnesota, United States
  • 2003–2004
    • Stanford University
      • Department of Electrical Engineering
      Stanford, CA, United States
  • 2001–2003
    • Ecole Normale Supérieure de Paris
      Paris, Île-de-France, France
    • Georgia Institute of Technology
      Atlanta, Georgia, United States
  • 2002
    • University College London
      London, England, United Kingdom
  • 1998
    • University of Technology Sydney 
      Sydney, New South Wales, Australia
  • 1996–1998
    • University of Nice-Sophia Antipolis
      Nice, Provence-Alpes-Côte d'Azur, France
  • 1997
    • National Institute for Research in Computer Science and Control
      Le Chesnay, Île-de-France, France