Liming Lu

National University of Singapore, Singapore

Publications (8)

  • ABSTRACT: Lookup is crucial for locating peers and resources in structured P2P networks. In this paper, we measure and analyze the traffic characteristics of lookup in Kad, a widely used DHT network. Some previous works studied the user behavior of Kad, yet we believe that investigating its traffic characteristics is also beneficial: it gives feedback for fine-tuning system parameters, helps to uncover abnormalities or misuse, and provides solid ground for synthesizing P2P traffic to evaluate future designs. To track lookup requests more efficiently and from more peers in Kad, we develop an active traffic monitor named Rememj. From the one-week data it collected, we uncover some interesting phenomena. Moreover, we distill traffic characteristics from the collected data in a form that can be used to construct representative synthetic workloads for evaluating DHT optimizations and designs. In particular, the analysis exposes heterogeneous behavior across geographical regions (i.e., Europe, Asia, and America) and across different periods of the day. The workload measures include the distribution of peers, the distribution of request load, the distribution of targets, and the similarity among targets (a small post-processing sketch follows this entry).
    Computer Communications. 01/2011; 34:1622-1629.
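    The workload measures above lend themselves to simple post-processing. Below is a minimal Python sketch, assuming a hypothetical record format (timestamp, requesting peer, target Kad ID, region) rather than Rememj's actual log layout; the prefix_similarity helper is likewise illustrative.
      # Hypothetical post-processing of collected lookup records.
      # Assumed record format: (timestamp, requesting_peer_id, target_kad_id, region).
      from collections import Counter
      from datetime import datetime, timezone

      records = [
          (1300000000, "peer-a", 0x3F2A9C0D1B4E5F60718293A4B5C6D7E8, "Europe"),
          (1300000042, "peer-b", 0x3F2A9C0D1B4E5F60718293A4B5C6D7E9, "Asia"),
          (1300003600, "peer-a", 0xA1B2C3D4E5F60718293A4B5C6D7E8F90, "America"),
      ]

      # Distribution of request load: how many lookups each peer issued.
      load_per_peer = Counter(peer for _, peer, _, _ in records)

      # Distribution of targets: popularity of each looked-up Kad ID.
      target_popularity = Counter(target for _, _, target, _ in records)

      # Per-region, per-hour activity, exposing geographic and diurnal heterogeneity.
      activity = Counter(
          (region, datetime.fromtimestamp(ts, tz=timezone.utc).hour)
          for ts, _, _, region in records
      )

      def prefix_similarity(a: int, b: int, bits: int = 128) -> int:
          """Common-prefix length of two Kad IDs, a simple target-similarity measure."""
          diff = a ^ b
          return bits if diff == 0 else bits - diff.bit_length()

      print(load_per_peer, target_popularity, activity)
      print(prefix_similarity(records[0][2], records[1][2]))   # 127: near-identical targets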
  • ABSTRACT: We propose a framework that uses environment information to enhance computer security. We apply the framework to enhance IDS performance and to enrich the expressiveness of access and rate controls. The environment information is gathered by sensors external to the host and transmitted via an out-of-band channel, so it is hard for adversaries without physical access to compromise it, and it remains intact even if malware uses rootkit techniques to hide its activities. Because of user-privacy requirements, the information gathered may be coarse and simple. We show that even such simple information is useful in several experimental evaluations: for instance, a binary indication of user presence at a workstation can help detect DDoS zombies and illicit email spam (see the sketch after this entry). Our framework takes advantage of the growing popularity of multimodal sensors and physical security information management systems, and trends in sensor costs suggest that it will be cost-effective in the near future.
    International Journal of Information Security. 01/2011; 10:285-299.
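    As a rough illustration of the presence-based detection example, the sketch below correlates an assumed out-of-band presence feed with per-bucket traffic counts and flags heavy activity at an unattended workstation. The Event type, user_present helper, and threshold are hypothetical, not the paper's implementation.
      # Illustrative check: flag bot-like activity (spam bursts, flood traffic)
      # that occurs while the coarse presence sensor reports nobody at the seat.
      from dataclasses import dataclass

      @dataclass
      class Event:
          timestamp: float
          kind: str        # e.g. "smtp_conn" or "http_req"
          count: int       # events observed in this time bucket

      def user_present(presence_log: list[tuple[float, bool]], t: float) -> bool:
          """Most recent presence reading at or before time t (assumed sorted sensor feed)."""
          state = False
          for ts, present in presence_log:
              if ts > t:
                  break
              state = present
          return state

      def suspicious(events: list[Event],
                     presence_log: list[tuple[float, bool]],
                     unattended_threshold: int = 20) -> list[Event]:
          """Flag buckets with heavy outbound activity while the seat is empty."""
          return [e for e in events
                  if e.count > unattended_threshold
                  and not user_present(presence_log, e.timestamp)]

      presence = [(0.0, True), (3600.0, False)]           # the user leaves after one hour
      traffic = [Event(1800.0, "smtp_conn", 5), Event(5400.0, "smtp_conn", 500)]
      print(suspicious(traffic, presence))                # only the unattended burst is flagged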
  • ABSTRACT: Web servers today suffer from flash crowds and application-layer DDoS attacks that can severely degrade the availability of their services. Such traffic is difficult to filter because it complies with the communication protocol. Peer-to-peer (P2P) networks have been exploited to amplify DDoS attacks, but we believe their available resources, such as distributed storage and network bandwidth, can instead be used to mitigate both flash crowds and DDoS attacks. In this paper, we propose a server-initiated approach that employs deployed P2P networks as distributed web caches, so that the workload directed to web servers is reduced (a simplified sketch of the caching path follows this entry). In our experiments, we use Kad as the P2P network for realizing a large-scale distributed web cache. We evaluated the feasibility, efficiency, and robustness of our scheme comprehensively, through experiments and simulations on the prototype we implemented. The results show that our scheme increases the capacity of the protected web servers by at least 10 times at the same connection and bandwidth cost. The web contents cached in Kad remain reachable even under peer churn and targeted DoS attacks, access latency is comparable to normal direct access to the web servers, and the scheme achieves good load balancing under the heavy-tailed distribution of object popularity.
    39th International Conference on Parallel Processing, ICPP 2010, San Diego, California, USA, 13-16 September 2010; 01/2010
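    The caching path can be sketched under simplifying assumptions: a generic put/get DHT stands in for Kad, URLs are hashed to 128-bit keys, and replication, expiry, and chunking are ignored. The DHT, publish, and serve names are illustrative, not the prototype's API.
      # Toy sketch of a server-initiated P2P web cache.
      import hashlib

      class DHT:
          """In-memory stand-in for a Kad-style DHT."""
          def __init__(self):
              self.store = {}
          def put(self, key: bytes, value: bytes) -> None:
              self.store[key] = value
          def get(self, key: bytes) -> bytes | None:
              return self.store.get(key)

      def kad_key(url: str) -> bytes:
          # Kad keys are 128 bits; MD5 of the URL is a convenient illustrative mapping.
          return hashlib.md5(url.encode()).digest()

      def publish(dht: DHT, url: str, body: bytes) -> None:
          """Server-initiated: the origin server pushes hot objects into the DHT cache."""
          dht.put(kad_key(url), body)

      def serve(dht: DHT, url: str, fetch_origin) -> bytes:
          """Clients (or a front end) try the DHT cache first, falling back to the origin."""
          cached = dht.get(kad_key(url))
          return cached if cached is not None else fetch_origin(url)

      dht = DHT()
      publish(dht, "http://example.com/index.html", b"<html>hot page</html>")
      print(serve(dht, "http://example.com/index.html", lambda u: b"origin copy"))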
  • ABSTRACT: We consider website fingerprinting over encrypted and proxied channels. It has been shown that packet-size information alone is sufficient to achieve good identification accuracy. Recently, traffic morphing [1] was proposed to thwart website fingerprinting by changing the packet size distribution to mimic some other website while minimizing bandwidth overhead. In this paper, we point out that packet-ordering information, though noisy, can be utilized to enhance website fingerprinting (a toy example follows this entry). Moreover, traces of the ordering information remain even under traffic morphing and can be extracted for identification. When web access is performed over OpenSSH with 2000 profiled websites, the identification accuracy of our scheme reaches 81%, 11% better than the scheme of Liberatore and Levine presented at CCS'06 [2]. We identify 78% of morphed traffic among the 2000 websites, whereas their scheme identifies only 52%. Our analysis suggests that an effective countermeasure to website fingerprinting should not only hide the packet size distribution but also aggressively remove ordering information.
    Computer Security - ESORICS 2010, 15th European Symposium on Research in Computer Security, Athens, Greece, September 20-22, 2010. Proceedings; 01/2010
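    The sketch below illustrates why ordering helps: two sites with identical packet-size multisets are indistinguishable by a size histogram but separable by ordered size bigrams. The metrics, weight, and nearest-profile rule are illustrative and are not the classifier evaluated in the paper.
      # Toy nearest-profile identification from packet-size sequences.
      from collections import Counter

      def hist_similarity(a: list[int], b: list[int]) -> float:
          """Similarity of size histograms only (the information morphing tries to equalize)."""
          ca, cb = Counter(a), Counter(b)
          union = sum((ca | cb).values())
          return sum((ca & cb).values()) / union if union else 0.0

      def order_similarity(a: list[int], b: list[int]) -> float:
          """Similarity of ordered size bigrams; ordering partially survives morphing."""
          grams = lambda s: Counter(zip(s, s[1:]))
          ga, gb = grams(a), grams(b)
          union = sum((ga | gb).values())
          return sum((ga & gb).values()) / union if union else 0.0

      def identify(trace: list[int], profiles: dict[str, list[int]], w: float = 0.5) -> str:
          score = lambda p: w * hist_similarity(trace, p) + (1 - w) * order_similarity(trace, p)
          return max(profiles, key=lambda site: score(profiles[site]))

      # siteA and siteB have the same size distribution but different ordering.
      profiles = {"siteA": [52, 1500, 1500, 64], "siteB": [52, 64, 1500, 1500]}
      print(identify([52, 1500, 1500, 64, 52], profiles))   # ordering picks siteA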
  • ABSTRACT: Application-layer DDoS attacks prevent legitimate users from accessing services; network-layer defenses do not apply, because the attackers are indistinguishable at the level of packets or protocols. In this paper, we propose Trust Management Helmet (TMH), a lightweight mitigation mechanism that uses trust to differentiate legitimate users from attackers, as a partial solution to this problem. Its key insight is that, during an application-layer DDoS attack, a server should give priority to protecting the connectivity of good users rather than identifying all the attack requests. A client's trust is evaluated from its visiting history and used to schedule service for its requests. We introduce a license for identifying users (even behind NATs) and for storing trust information at the clients; the license is cryptographically secured against forgery and replay attacks (a minimal sketch of the trust-and-license idea follows this entry). We realize this mitigation mechanism as a Java package and use it for simulation. Through simulation, we show that TMH is effective in mitigating session flooding attacks: even with 20 times the number of attackers, more than 99% of the sessions from legitimate users are accepted with TMH, whereas fewer than 18% are accepted without it.
    Scalable Information Systems, 4th International ICST Conference, Infoscale 2009, Hong Kong, June 10-11, 2009, Revised Selected Papers; 01/2009
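    A minimal sketch of the trust-and-license idea, under assumed details: trust is a number carried in an HMAC-protected token held by the client, and under overload the server admits requests in decreasing order of verified trust. The token format, key handling, and scheduling policy here are illustrative, not TMH's actual design.
      # Toy license issuance, verification, and trust-based admission.
      import hashlib, heapq, hmac

      SERVER_KEY = b"server-secret"     # illustrative key, known only to the server

      def issue_license(client_id: str, trust: float) -> str:
          payload = f"{client_id}:{trust:.3f}"
          tag = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
          return f"{payload}:{tag}"

      def verify_license(token: str):
          """Return (client_id, trust) if the token is authentic, else None."""
          try:
              client_id, trust, tag = token.rsplit(":", 2)
          except ValueError:
              return None
          expected = hmac.new(SERVER_KEY, f"{client_id}:{trust}".encode(),
                              hashlib.sha256).hexdigest()
          return (client_id, float(trust)) if hmac.compare_digest(tag, expected) else None

      def schedule(requests: list[tuple[str, str]], capacity: int) -> list[str]:
          """Admit up to `capacity` requests, highest verified trust first."""
          verified = [v for _, token in requests if (v := verify_license(token))]
          return [cid for cid, _ in heapq.nlargest(capacity, verified, key=lambda v: v[1])]

      good = issue_license("alice", 0.9)      # long, well-behaved visiting history
      new = issue_license("mallory", 0.0)     # new or misbehaving clients start with low trust
      print(schedule([("req-1", good), ("req-2", new)], capacity=1))   # ['alice']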
  • ABSTRACT: In this paper, we model Probabilistic Packet Marking (PPM) schemes for IP traceback as an identification problem over a large number of markers. Each potential marker is associated with a distribution on tags, which are short binary strings. To mark a packet, a marker follows its associated distribution in choosing the tag to write into the IP header. Since there are a large number of markers (for example, over 4,000), what the victim receives are samples from a mixture of distributions; essentially, traceback aims to identify the individual distributions contributing to the mixture. Guided by this model, we propose Random Packet Marking (RPM), a scheme that uses a simple but effective approach (a toy model follows this entry). RPM does not require sophisticated structure or relationships among the tags, and employs a hop-by-hop reconstruction similar to AMS [16]. Simulations show improved scalability and traceback accuracy over prior works. For example, in a large network with over 100K nodes, 4,650 markers induce a 63% false positive rate in edge identification under the AMS marking scheme, while RPM lowers it to 2%. The effectiveness of RPM demonstrates that, with prior knowledge of neighboring nodes, a simple and properly designed marking scheme suffices to identify a large number of markers with high accuracy.
    Proceedings of the 2008 ACM Symposium on Information, Computer and Communications Security, ASIACCS 2008, Tokyo, Japan, March 18-20, 2008; 01/2008
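    The mixture-of-distributions view can be made concrete with a toy model: each marker owns a few short tags (its distribution over tags), probabilistically overwrites the marking field, and the victim accepts a candidate neighbor whose tags are well represented among received packets. The tag width, marking probability, and acceptance threshold are illustrative, not RPM's parameters.
      # Toy probabilistic packet marking as tag-mixture identification.
      import hashlib, random
      from collections import Counter

      SLOTS = 4     # each marker owns a few single-byte (8-bit) tags, i.e. a distribution over tags

      def tags_of(router_id: str) -> list[int]:
          """The tags a marker may write, derived by hashing for this example."""
          return [hashlib.sha256(f"{router_id}:{i}".encode()).digest()[0] for i in range(SLOTS)]

      def mark(field: int, router_id: str, p: float = 0.5) -> int:
          """With probability p, overwrite the marking field with one of this router's tags."""
          return random.choice(tags_of(router_id)) if random.random() < p else field

      # Victim side: received packets carry samples from a mixture of marker distributions.
      path = ["r1", "r7", "r9"]
      observed = Counter()
      for _ in range(10_000):
          field = 0
          for r in path:                       # the packet traverses the attack path
              field = mark(field, r)
          observed[field] += 1

      def likely_marker(router_id: str, threshold: int = 100) -> bool:
          """Accept a candidate neighbor if all of its tags appear often enough.
          Tag collisions can still cause false positives, which RPM aims to keep low."""
          return all(observed[t] >= threshold for t in tags_of(router_id))

      print([r for r in ["r1", "r2", "r7", "r9"] if likely_marker(r)])   # typically the true path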
  • ABSTRACT: We study the impact of malicious synchronization on computer systems that serve customers periodically. Systems supporting automatic periodic updates are common in web servers providing regular news updates, sports scores, or stock quotes. Our study focuses on the possibility of launching an effective low-rate attack on the server to degrade performance, measured in terms of longer processing times and request drops due to timeouts. The attackers are assumed to behave like normal users and send one request per update cycle; the only parameter utilized in the attack is the timing of the requests sent. By exploiting the periodic nature of the updates, a small number of attackers can herd users' update requests into a cluster that arrives within a short period of time. Herding can be used to discourage new users from joining the system and to modify the user arrival distribution, so that a subsequent burst attack will be effective. While herding-based attacks can be launched with few resources, they can easily be prevented by adding a small random component to the length of the update interval (see the sketch after this entry).
    Applied Cryptography and Network Security, 4th International Conference, ACNS 2006, Singapore, June 6-9, 2006, Proceedings; 01/2006
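    The countermeasure noted at the end of the abstract is simple to state: add a small random component to each update interval so that request phases cannot be herded into a predictable burst. A short sketch with an assumed base interval and jitter fraction:
      # Randomized periodic update scheduling (illustrative parameters).
      import random

      def next_update_delay(T: float = 60.0, j: float = 0.1) -> float:
          """Delay until a client's next request: the base period T plus a small random term."""
          return T + random.uniform(0.0, j * T)

      # Without jitter a client that first polled at t0 polls at exactly t0 + k*T,
      # which is what the herding attack exploits; with jitter, client phases drift apart.
      t, times = 0.0, []
      for _ in range(5):
          t += next_update_delay()
          times.append(round(t, 1))
      print(times)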
  • Jie Yu, Liming Lu, Zhoujun Li
    ABSTRACT: Flash crowds and application-layer DDoS attacks can severely degrade the availability of websites. Peer-to-peer (P2P) networks have been exploited to amplify DDoS attacks, but we believe their available resources, such as distributed storage and network bandwidth, can be used to mitigate both flash crowds and DDoS attacks. In this poster, we propose a server-initiated approach that employs the P2P network as a distributed web cache, so that the workload directed to web servers is reduced. An experiment using Kad demonstrates the feasibility and robustness of our approach: access latency is comparable to normal direct access to web servers, and the web contents cached in Kad remain reachable despite the dynamic departure of peers.