M. Mellia

Politecnico di Torino, Torino, Piedmont, Italy

Publications (213) · 62.58 Total Impact

  • ABSTRACT: The goal of this paper is to investigate rate control mechanisms for unstructured P2P-TV applications that adopt UDP as the transport protocol. We focus on a novel class of Hose Rate Controllers (HRC), which aim at regulating the aggregate upload rate of each peer. This choice is motivated by the peculiar needs of P2P-TV: video content is not elastic but subject to real-time constraints, so the epidemic chunk exchange mechanism is much burstier for P2P-TV than for file-sharing applications. Furthermore, the peer up-link (e.g., ADSL/Cable) is typically the bottleneck shared by flows in real scenarios. We compare two classes of aggregate rate control mechanisms: Delay-Based (DB) less-than-best-effort mechanisms, which aim at tightly controlling the chunk transfer delay, and loss-based Additive Increase Multiplicative Decrease (AIMD) rate controllers, which are designed to be more aggressive and can compete with other AIMD congestion controls, i.e., TCP. Both families of mechanisms are implemented in a full-fledged P2P-TV application that we use to collect performance results. Only actual experiments – conducted both in a controlled test-bed and over the wild Internet, and involving up to 1800 peers – are presented to assess performance in realistic scenarios. Results show that DB-HRC tends to outperform AIMD-HRC when tight buffering time constraints are imposed on the application, while AIMD-HRC tends to be preferable in severely congested scenarios, especially when the buffering time constraints are relaxed.
    Computer Networks 08/2014; 69:101–120. · 1.23 Impact Factor
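The loss-based AIMD family of hose rate controllers compared in this entry can be conveyed with a minimal sketch. The function name, constants, and units below are illustrative assumptions, not the values or interface used in the paper:

```python
def aimd_update(rate, loss_detected, alpha=50.0, beta=0.5, floor=100.0):
    """One AIMD step for an aggregate (hose) upload rate in kbit/s.

    Additive increase by `alpha` per update interval when no loss is
    observed; multiplicative decrease by factor `beta` on loss, never
    dropping below `floor`. All constants are illustrative.
    """
    if loss_detected:
        return max(floor, rate * beta)
    return rate + alpha

# Example trajectory: the rate grows until a loss event halves it.
rate = 1000.0
for loss in [False, False, True, False]:
    rate = aimd_update(rate, loss)
```

The delay-based (DB) alternative described in the abstract would instead back off as soon as the measured chunk transfer delay grows, yielding a less aggressive, less-than-best-effort behavior.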
  • ABSTRACT: YouTube is the most popular service in today's Internet. Its success forces Google to constantly evolve its operation to cope with the ever-growing number of users watching YouTube. Understanding the characteristics of YouTube's traffic, as well as the way YouTube flows are served from the massive Google CDN, is paramount for ISPs, especially for mobile operators, who must handle the huge surge of traffic within the capacity constraints of mobile networks. This paper presents a characterization of the YouTube traffic accessed through mobile and fixed-line networks. The analysis especially considers YouTube content provisioning, studying the characteristics of the hosting servers as seen from both types of networks. To the best of our knowledge, this is the first paper presenting such a simultaneous characterization from mobile and fixed-line vantage points.
    European Conference on Networks and Communications, Bologna, Italy; 06/2014
  • ABSTRACT: Access and aggregation networks account nowadays for a large share of the energy consumed in communication networks, and actions to reduce their energy cost are under investigation by the research community. In this work, we present a study of the possible savings that could be achieved if such technologies were in place. We take advantage of large datasets of measurements collected from the network of FASTWEB, a nation-wide Internet Service Provider in Italy. We first perform a detailed characterization of the energy consumption of Points of Presence (PoPs), investigating how factors such as external temperature, cooling technology and traffic load influence the consumed energy. Our measurements precisely quantify how the power consumption in today's networks is practically independent of the traffic volume and correlated only with the external temperature. We then narrow down our analysis to the traffic generated by each household. More specifically, by observing about 10,000 ADSL customers, we characterize the typical traffic patterns generated by users who access the Internet. Using the available real data, we then investigate whether the energy consumption can be significantly reduced by applying simple energy-efficient policies that are currently under study. We investigate energy-to-traffic proportionality and resource consolidation technologies for the PoP, while sleep-mode policies are considered for the ADSL lines. All these energy-efficient policies, even if not yet available, are being widely investigated by both manufacturers and researchers. At the PoP level, our dataset shows that it would be possible to save up to 50% of the energy, and that even simple mechanisms would easily save 30%. Considering the ADSL lines, it turns out that sleep-mode policies can be effectively implemented, reducing the energy consumption of ADSL modems with marginal impact on the Quality of Service offered to users. We make all datasets used in this paper available so that other researchers can benchmark their proposals on actual traffic traces.
    Computer Networks 06/2014; · 1.23 Impact Factor
  • ABSTRACT: In this paper we present methodological advances in anomaly detection tailored to discovering abnormal traffic patterns in the presence of seasonal trends in the data. In our setup we impose specific assumptions on the traffic type and nature; our study features VoIP call counts, for which several traces of real data have been used, but the methodology can be applied to any data following, at least roughly, a non-homogeneous Poisson process (think of highly aggregated traffic flows). A performance study of the proposed methods, covering situations in which the assumptions are fulfilled as well as violated, shows good results in great generality. Finally, a real-data example is included showing how the system could be implemented in practice.
    Computer Networks 01/2014; 60:187–200.
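The core idea of detecting anomalies against a non-homogeneous Poisson baseline can be sketched very simply: compare each observed count with a seasonal expected rate and flag large deviations. The normal approximation, the threshold `k`, and the toy baseline below are illustrative assumptions, not the paper's actual statistical procedure:

```python
import math

def is_anomalous(count, expected_rate, k=4.0):
    """Flag a count as anomalous if it deviates from the (seasonal)
    non-homogeneous Poisson rate by more than k standard deviations.
    Uses the normal approximation (std = sqrt(lambda)); k is illustrative.
    """
    return abs(count - expected_rate) > k * math.sqrt(expected_rate)

# Hypothetical hourly call-count baseline with a daily seasonal shape.
baseline = [100, 80, 60, 300, 500, 400]   # expected calls per hour
observed = [110, 85, 62, 310, 700, 390]   # hour 4 shows a surge

flags = [is_anomalous(o, e) for o, e in zip(observed, baseline)]
```

Because the threshold scales with sqrt(lambda), the same absolute deviation is tolerated during busy hours but flagged during quiet ones, which is what makes the method robust to seasonal trends.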
  • Marco Mellia
    ABSTRACT: Dr. Antonio Nucci is the chief technology officer of Narus and is responsible for setting the company's direction with respect to technology and innovation. He oversees the entire technology innovation lifecycle, including incubation, research, and prototyping, and is responsible for ensuring a smooth transition to engineering for final commercialization. Antonio has published more than 100 technical papers and has been awarded 38 U.S. patents. In 2009 he authored a book on advanced network analytics, "Design, Measurement and Management of Large-Scale IP Networks: Bridging the Gap Between Theory and Practice". In 2007 he was recognized for his vision and contributions with the prestigious InfoWorld CTO Top 25 Award. In 2013, Antonio was honored by Info Security Products Guide's 2013 Global Excellence Awards as "CTO of the Year" and as a Gold winner in the "People Shaping Info Security" category. He served as a technical lead member of the Enduring Security Framework (ESF) initiative, sponsored by various U.S. agencies to produce a set of recommendations, policies, and technology pilots to better secure the Internet (Integrated Network Defense). He is also a technical advisor for several venture capital firms. Antonio holds a Ph.D. in computer science, and master's and bachelor's degrees in electrical engineering, from Politecnico di Torino, Italy.
    ACM SIGCOMM Computer Communication Review 12/2013; 44(1):53-55. · 0.91 Impact Factor
  • ABSTRACT: Content caching is a fundamental building block of the Internet. Caches are widely deployed at network edges to improve performance for end-users and to reduce the load on web servers and the backbone network. In mobile 3G/4G networks, however, the bottleneck is at the access link, where bandwidth is shared among all mobile terminals, so per-user capacity cannot grow to cope with the traffic demand. Unfortunately, caching policies cannot reduce the load on the wireless link, which still has to carry multiple copies of the same object when it is downloaded by multiple mobile terminals sharing the same access link. In this paper we investigate whether it is worth pushing the caching paradigm even farther. We hypothesize a system in which mobile terminals implement a local cache, to which popular content can be pushed/pre-staged. This exploits the peculiar broadcast capability of wireless channels to replicate content "for free" on all terminals, saving the cost of transmitting multiple copies of those popular objects. Relying on a large data set collected from a European mobile carrier, we analyse the content popularity characteristics of mobile traffic and quantify the benefit that the push-to-mobile system would produce. We find that content pre-staging, by proactively and periodically broadcasting "bundles" of popular objects to devices, both i) greatly improves users' performance and ii) reduces the downloaded volume (number of requests) by up to 20% (40%) in optimistic scenarios with a bundle of 100 MB. However, some technical constraints and content characteristics could question the actual gain such a system would reach in practice.
    Proceedings of the ninth ACM conference on Emerging networking experiments and technologies; 12/2013
  • ABSTRACT: Internet data collected via passive measurement are analyzed to obtain localization information on nodes by clustering (i.e., grouping together) nodes that exhibit similar network path properties. Since traditional clustering algorithms fail to correctly identify clusters of homogeneous nodes, we propose NetCluster, a novel framework suited to analyzing Internet measurement datasets. We show that the proposed framework correctly analyzes synthetically generated traces. Finally, we apply it to real traces collected at the access link of the Politecnico di Torino campus LAN and discuss the network characteristics as seen at the vantage point.
    Computer Networks 12/2013; 57(17):3300–3315. · 1.23 Impact Factor
  • ABSTRACT: Personal cloud storage services are data-intensive applications that already produce a significant share of Internet traffic. Several solutions offered by different companies attract more and more people. However, little is known about each service's capabilities, architecture and – most of all – the performance implications of its design choices. This paper presents a methodology to study cloud storage services. We apply our methodology to compare 5 popular offers, revealing different system architectures and capabilities. The performance implications of the different designs are assessed by executing a series of benchmarks. Our results show no clear winner, with all services suffering from some limitations or having potential for improvement. In some scenarios, the upload of the same file set can take seven times longer and waste twice as much capacity. Our methodology and results are thus useful both as a benchmark and as a guideline for system design.
    Proceedings of the 2013 conference on Internet measurement conference; 10/2013
  • ABSTRACT: Nowadays two main approaches are being pursued to reduce the energy consumption of networks: the use of sleep modes, in which devices enter a low-power state during inactivity periods, and the adoption of energy-proportional mechanisms, in which the device architecture is designed to make energy consumption proportional to the actual load. Common to all the proposals is the evaluation of energy-saving performance by means of simulation or experimental evidence, which typically considers a limited set of benchmarking scenarios. In this paper, we do not focus on a particular algorithm or procedure offering energy-saving capabilities; rather, we formulate a theoretical model based on random graph theory that estimates the potential gains achievable by adopting sleep modes in networks where energy-proportional devices are deployed. Intuitively, when some devices enter sleep mode, some energy is saved. However, this saving could vanish because of the additional load (and power consumption) induced on the devices that remain active. The impact of this effect depends on the degree of load proportionality, so it is not simple to foresee which scenarios make sleep modes or energy proportionality more convenient. Instead of conducting detailed simulations, we consider simple models of networks in which devices (i.e., nodes and links) consume energy proportionally to the handled traffic, and in which a given fraction of nodes are put into sleep mode. Our model predicts how much energy can be saved in different scenarios. The results show that sleep modes can be successfully combined with load-proportional solutions. However, if the static power consumption component is one order of magnitude smaller than the load-proportional component, sleep modes are no longer convenient. Thanks to random graph theory, our model gauges the impact of different properties of the network topology; for instance, highly connected networks tend to make the use of sleep modes more convenient.
    Computer Networks: The International Journal of Computer and Telecommunications Networking 10/2013; 57(15):3051–3066.
  • ABSTRACT: A branch of green networking research is consolidating. It aims at routing traffic with the goal of reducing network energy consumption and is usually referred to as Energy-Aware Routing. Previous works in this branch focused only on pure IP networks, e.g., assuming an Open Shortest Path First (OSPF) control plane and best-effort packet forwarding on the data plane. In this work, we instead consider Generalized Multi-Protocol Label Switching (GMPLS) backbone networks, where optical technologies allow the design of "circuit switching" network management policies with strict bandwidth reservation. We define a simple and generic framework that generates a family of routing algorithms based on an energy-aware weight assignment. In particular, routing weights are functions of both the energy consumption and the actual load of network devices. Using such weights, a simple minimum-cost routing finds the currently least expensive circuit, minimising the additional energy cost. Results obtained on realistic case studies show that our weight assignment policy favours a consistent reduction of the network power consumption without significantly affecting network performance. Furthermore, the framework allows energy efficiency to be traded against network performance, a desirable property for ISPs. Simple and robust parameter settings reach a win-win situation, with excellent performance in terms of both energy efficiency and network resource utilization.
    2013 25th International Teletraffic Congress (ITC), Shanghai; 09/2013
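The energy-aware weight assignment described above can be sketched as follows: links that are already carrying load (and thus already paying their energy cost) get cheap weights, while idle links that would have to be powered up get expensive ones, and a plain minimum-cost routing then picks the least expensive circuit. The weight function and constants below are illustrative assumptions; the paper's actual weight definition differs:

```python
import heapq

def link_weight(base_energy, load, capacity, alpha=1.0):
    """Illustrative energy-aware weight: cheap when the link is already
    loaded (its energy cost is sunk), expensive when a new circuit would
    wake an idle link. Not the weight function used in the paper."""
    return base_energy * (1.0 - alpha * load / capacity)

def shortest_path(graph, src, dst):
    """Plain Dijkstra over weighted directed edges: graph[u] = [(v, w), ...]."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

# An idle direct link A-B competes with an already-loaded two-hop path A-C-B.
w_idle = link_weight(10.0, load=0.0, capacity=10.0)
w_busy = link_weight(10.0, load=8.0, capacity=10.0)
graph = {"A": [("B", w_idle), ("C", w_busy)], "C": [("B", w_busy)]}
```

With these weights the routing prefers the longer path over already-active links, which is exactly the behavior that lets idle equipment stay powered down.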
  • ABSTRACT: Twitter has attracted millions of users who generate a humongous flow of information at a constant pace. The research community has thus started proposing tools to extract meaningful information from tweets. In this paper, we take a different angle from the mainstream of previous works: we explicitly target the analysis of the timeline of tweets from "single users". We define a framework – named TUCAN – to compare the information offered by a target user over time and to pinpoint recurrent topics or topics of interest. First, tweets belonging to the same time window are aggregated into "bird songs". Several filtering procedures can be selected to remove stop-words and reduce noise. Then, each pair of bird songs is compared using a similarity score to automatically highlight the most common terms, thus revealing recurrent or persistent topics. TUCAN can be naturally applied to compare bird-song pairs generated from the timelines of different users. By showing actual results for both public profiles and anonymous users, we show how TUCAN is useful for highlighting meaningful information in a target user's Twitter timeline.
    The 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining; 08/2013
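The aggregate-then-compare pipeline of the entry above can be illustrated with a toy sketch: aggregate a window of tweets into a term-frequency "bird song", filter stop-words, and score pairs of songs. The cosine similarity, the stop-word list, and all names here are illustrative assumptions; TUCAN's actual filtering and scoring may differ:

```python
import math
from collections import Counter

STOP_WORDS = {"the", "a", "to", "is", "of"}  # toy stop-word list

def bird_song(tweets):
    """Aggregate a time window of tweets into one term-frequency vector,
    dropping stop-words (a toy version of the filtering step)."""
    terms = [w for t in tweets for w in t.lower().split() if w not in STOP_WORDS]
    return Counter(terms)

def similarity(a, b):
    """Cosine similarity between two bird songs (illustrative score)."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

week1 = bird_song(["off to the gym", "gym session done"])
week2 = bird_song(["gym again today", "new gym record"])
```

A high score between consecutive windows signals a persistent topic ("gym" above); comparing songs from two different users' timelines works the same way.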
  • ABSTRACT: Network visibility is a critical part of traffic engineering, network management, and security. Recently, unsupervised algorithms have been envisioned as a viable alternative for automatically identifying classes of traffic. However, the accuracy achieved so far does not allow their use for traffic classification in practical scenarios. In this paper, we propose SeLeCT, a Self-Learning Classifier for Internet Traffic. It uses unsupervised algorithms along with an adaptive learning approach to let classes of traffic emerge automatically, so that they can be identified and (easily) labeled. SeLeCT automatically groups flows into pure (or homogeneous) clusters, alternating simple clustering and filtering phases to remove outliers. SeLeCT uses an adaptive learning approach to boost its ability to spot new protocols and applications. Finally, SeLeCT also simplifies label assignment (which still requires some manual intervention) so that proper class labels can be easily discovered. We evaluate the performance of SeLeCT using traffic traces collected in different years from various ISPs located on 3 different continents. Our experiments show that SeLeCT achieves an overall accuracy close to 98%. Unlike state-of-the-art classifiers, the biggest advantage of SeLeCT is its ability to help discover new protocols and applications in an almost automated fashion.
    The 5th IEEE International Traffic Monitoring and Analysis Workshop (IEEE INFOCOM - TMA 2013); 04/2013
  • ABSTRACT: Optimizing the tradeoff between power saving and Quality of Service (QoS) in the current Internet is a challenging research objective, whose difficulty also stems from the dominant presence of TCP traffic and its elastic nature. In a previous work we showed that an intertwining exists between capacity-scaling approaches and TCP congestion control. In this paper we investigate the reasons for this intertwining and evaluate how, and how much, the dynamics of the two algorithms affect each other's performance. More specifically, we show that the interaction is essentially due to the overlap of the two closed-loop controls, which have different time constants.
    2013 22nd ITC Specialist Seminar on Energy Efficient and Green Networking (SSEEGN); 01/2013
  • F. Khuhawar, M. Mellia, M. Meo
    ABSTRACT: In this paper, we model and investigate the interaction between the TCP protocol and rate adaptation at intermediate routers. Rate adaptation aims at saving energy by controlling the offered capacity of links and adapting it to the amount of traffic. However, when TCP is used at the transport layer, the control loop of rate adaptation and that of the TCP congestion control mechanism may interact and disturb each other, compromising throughput and Quality of Service (QoS). Our investigation is led through mathematical modeling, depicting the behavior of TCP and of rate adaptation with a set of Delay Differential Equations (DDEs). The model is validated against simulation results and shown to be accurate. A sensitivity analysis of the system performance with respect to the control parameters shows that rate adaptation can be effective, but careful parameter setting is needed to avoid undesired, disruptive interactions among controllers at different levels that would impair QoS.
    2013 25th International Teletraffic Congress (ITC); 01/2013
  • ABSTRACT: This paper presents a characterization of Amazon's Web Services (AWS), the most prominent cloud provider offering computing, storage, and content delivery platforms. Leveraging passive measurements, we explore the EC2, S3 and CloudFront AWS services to unveil their infrastructure, the pervasiveness of the content they host, and their traffic allocation policies. Measurements reveal that most of the content residing on EC2 and S3 is served by one Amazon datacenter, located in Virginia, which appears to be the worst-performing one for Italian users. This causes traffic to take long and expensive paths through the network. Since AWS offers no automatic migration or load-balancing policies across different locations, content is exposed to the risk of outages. The CloudFront CDN, on the contrary, shows much better performance thanks to an effective cache selection policy that serves 98% of the traffic from the nearest available cache. CloudFront also exhibits dynamic load-balancing policies, in contrast to the static allocation of instances on EC2 and S3. The information presented in this paper will be useful for developers considering AWS for deploying their content, and for researchers aiming to improve cloud design.
    INFOCOM, 2013 Proceedings IEEE; 01/2013
  • ABSTRACT: A careful perusal of the Internet's evolution reveals two major trends: the explosion of cloud-based services and of video streaming applications. In both cases, the owner of the content (e.g., CNN, YouTube, or Zynga) and the organization serving it (e.g., Akamai, Limelight, or Amazon EC2) are decoupled, making it harder to understand the association between the content, the owner, and the host where the content resides. This has created a tangled world wide web that is very hard to unwind, impairing ISPs' and network administrators' capabilities to control the traffic flowing in their networks. In this paper, we present DN-Hunter, a system that leverages the information provided by DNS traffic to discern the tangle. Parsing DNS queries, DN-Hunter tags traffic flows with the associated domain name. This association has several applications and reveals a large amount of useful information: (i) it provides fine-grained traffic visibility even when the traffic is encrypted (i.e., TLS/SSL flows), thus enabling more effective policy controls; (ii) it identifies flows even before they begin, thus providing superior network management capabilities to administrators; (iii) it understands and tracks (over time) the different CDNs and cloud providers that host content for a particular resource; (iv) it discerns all the services/content hosted by a given CDN or cloud provider in a particular geography and time interval; and (v) it provides insights into all applications/services running on any given layer-4 port number. We conduct extensive experimental analysis and show results from real traffic traces (including FTTH and 4G ISPs) that support our hypothesis. Simply put, the information provided by DNS traffic is one of the key components required for understanding the tangled web and bringing the ability to effectively manage network traffic back to the operators.
    Proceedings of the 2012 ACM conference on Internet measurement conference; 11/2012
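The core mechanism of DN-Hunter, tagging flows with the domain name learned from preceding DNS answers, can be sketched in a few lines. The class, method names, and data layout below are illustrative assumptions about a much simpler system than the one in the paper:

```python
class DnsTagger:
    """Toy sketch of DN-Hunter's core idea: learn (client IP, server IP)
    -> domain name from observed DNS answers, then label later flows to
    that server even when their payload is encrypted."""

    def __init__(self):
        self._map = {}  # (client_ip, server_ip) -> domain name

    def observe_dns(self, client_ip, domain, answer_ips):
        # A client that just resolved `domain` to these addresses will
        # likely open a flow toward one of them next.
        for ip in answer_ips:
            self._map[(client_ip, ip)] = domain

    def tag_flow(self, client_ip, server_ip):
        # Label a new flow using the learned association, or admit defeat.
        return self._map.get((client_ip, server_ip), "unknown")

tagger = DnsTagger()
tagger.observe_dns("10.0.0.1", "video.example.com", ["1.2.3.4", "1.2.3.5"])
label = tagger.tag_flow("10.0.0.1", "1.2.3.4")
```

Because the DNS answer precedes the data flow, the tag is available before the first payload byte, which is what enables the "identify flows before they begin" property claimed in the abstract.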
  • ABSTRACT: In this paper we consider mesh-based P2P streaming systems, focusing on the problem of regulating peer transmission rate to match the system demand without overloading each peer's upload link capacity. We propose Hose Rate Control (HRC), a novel scheme to control the speed at which peers offer chunks to other peers, ultimately controlling peer uplink capacity utilization. This is of critical importance in heterogeneous scenarios like the Internet, where peer upload capacity is unknown and varies widely. HRC nicely adapts to the actual peer available upload bandwidth and system demand, so that Quality of Experience is greatly enhanced. To support our claims we present both simulations and actual experiments involving more than 1000 peers to assess performance in real scenarios. Results show that HRC consistently achieves better Quality of Experience than non-adaptive schemes.
    Computer Communications 11/2012; 35(18):2237–2244.
  • ABSTRACT: In this work, we face the problem of reducing the power consumption of Internet backbone networks. We propose a novel algorithm, called GRiDA, that selectively switches off links in an Internet Service Provider IP-based network to reduce the system energy consumption. Differently from approaches proposed in the literature, our solution is completely distributed and thus does not require any centralized oracle. It leverages a link-state protocol like OSPF to share a limited amount of information and to reduce the problem complexity. Another key feature of GRiDA is that it does not require knowledge of past, current, or future traffic matrices, so it can run in real time where this information is not available. Results obtained on realistic case studies show that GRiDA achieves performance comparable to several existing centralized algorithms, guaranteeing energy savings of up to 50%.
    Computer Networks 09/2012; 56(14):3219–3232.

Publication Stats

2k Citations
62.58 Total Impact Points


  • 1997–2014
    • Politecnico di Torino
      • DET - Department of Electronics and Telecommunications
      Torino, Piedmont, Italy
  • 2012
    • Consorzio Nazionale Interuniversitario per le Telecomunicazioni
      Genova, Liguria, Italy
  • 2011
    • Federal University of Juiz de Fora
      Juiz de Fora, Minas Gerais, Brazil
  • 2010
    • Politecnico di Bari
      Bari, Apulia, Italy
    • AGH University of Science and Technology in Kraków
      Kraków, Lesser Poland Voivodeship, Poland
  • 2009–2010
    • France Télécom
      Paris, Île-de-France, France
    • Federal Technological University of Parana
      Curitiba, Paraná, Brazil
  • 2007
    • Blekinge Institute of Technology
      Karlskrona, Blekinge, Sweden
  • 2005
    • Università degli Studi di Trento
      Trento, Trentino-Alto Adige, Italy
  • 2003
    • Carnegie Mellon University
      • Computer Science Department
      Pittsburgh, Pennsylvania, United States