Jose Nazario’s research while affiliated with F5 Networks and other places


Publications (15)


Internet Infrastructure Security
  • Article

July 2012 · 37 Reads · 3 Citations

IEEE Security and Privacy Magazine

Jose Nazario · John Kristoff

The Internet's crucial role in modern life, commerce, and government underscores the need to study the security of the protocols and infrastructure that comprise it. For years, we've focused on endpoint security and ignored infrastructure weaknesses. Recent discoveries and initiatives highlight a simple fact: the core is just as vulnerable as the edge. This special issue brings together Internet infrastructure security articles that focus on IPv6, multinational policy, and wireless protocols.


Phishing by form: The abuse of form sites
  • Conference Paper
  • Full-text available

Figure 1: An email lure for a "mailbox over quota" theme. The link in this example goes to a Google Spreadsheets page that collects the information.
Figure 2: Google Spreadsheets landing page for an email account phish. The attacker in this case has done some basic work to theme the page so it does not look like a typical spreadsheet page.

October 2011 · 1,524 Reads · 3 Citations

The evolution of phishing methods has resulted in a plethora of new tools and techniques to coerce users into providing credentials, generally for nefarious purposes. This paper discusses the relatively recent emergence of an evolutionary phishing technique called phishing by form that relies on the abuse of online forms to elicit information from the target population. We evaluate a phishing corpus of emails and over a year's worth of phishing URLs to investigate the methodology, history, spread, origins, and life cycle as well as identifying directions for future research in this area. Our analysis finds that these hosted sites represent less than 1% of all phishing URLs, appear to have shorter active lifetimes, and focus mainly on email account credential theft. We also provide defensive recommendations for these free application sites and users.
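
The central measurement in this abstract, separating form-hosted phishing URLs from the rest of a corpus, can be illustrated with a short sketch. This is not the authors' tooling; the host list and sample URLs below are hypothetical placeholders.

```python
# Illustrative sketch only, not the paper's tooling: tally how many URLs in a
# phishing corpus point at free hosted-form services.
from collections import Counter
from urllib.parse import urlparse

# Hypothetical set of free form/spreadsheet hosts an analyst might watch.
FORM_HOSTS = {
    "spreadsheets.google.com",
    "docs.google.com",
    "jotform.com",
}

def form_host(url):
    """Return the matching hosted-form service for a URL, or None."""
    host = (urlparse(url).hostname or "").lower()
    for known in FORM_HOSTS:
        if host == known or host.endswith("." + known):
            return known
    return None

def tally(phish_urls):
    """Count phishing URLs per hosted-form service."""
    counts = Counter()
    for url in phish_urls:
        match = form_host(url)
        if match:
            counts[match] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "http://spreadsheets.google.com/viewform?formkey=abc",  # hypothetical
        "http://example.com/webmail/login.php",                 # hypothetical
    ]
    print(tally(sample))  # Counter({'spreadsheets.google.com': 1})
```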


WebPatrol: Automated collection and replay of web-based malware scenarios

March 2011 · 3,695 Reads · 24 Citations

Kevin Zhijie Chen · Guofei Gu · Jianwei Zhuge · [...] · Xinhui Han

Traditional remote-server-exploiting malware is quickly evolving and adapting to the new web-centric computing paradigm. By leveraging the large population of (insecure) web sites and exploiting vulnerabilities in modern (complex) client-side browsers (and their extensions), web-based malware has become one of the most severe and common infection vectors today. While traditional malware collection and analysis focus mainly on binaries, it is important to develop new techniques and tools for collecting and analyzing web-based malware; these should capture the complete web-based malicious logic, reflecting the dynamic, distributed, multi-step, and multi-path web infection trails, rather than just the binaries executed at end hosts. This paper is a first attempt in this direction to automatically collect web-based malware scenarios (including complete web infection trails) to enable fine-grained analysis. Based on the collections, we provide the capability for offline "live" replay, i.e., an end user (e.g., an analyst) can faithfully experience the original infection trail in her current client environment, even when the original malicious web pages are no longer available or have already been cleaned. Our evaluation shows that WebPatrol can collect and cover much more complete infection trails than state-of-the-art honeypot systems such as PHoneyC [11] and Capture-HPC [1]. We also provide several case studies on the analysis of web-based malware scenarios we collected from a large national education and research network that contains around 35,000 web sites.
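
A minimal record/replay sketch in the spirit of the scenario replay described above is shown below. It is not WebPatrol's implementation: it records only one response body per URL path and ignores redirects, headers, and the scripted requests that real infection trails depend on.

```python
# Record a set of pages, then replay the saved bodies from a local server so
# they can be revisited offline. A rough sketch only, not WebPatrol itself.
import json
import urllib.request
from urllib.parse import urlparse
from http.server import BaseHTTPRequestHandler, HTTPServer

def record(urls, out_path="scenario.json"):
    """Fetch each URL once and save its body, keyed by path, for later replay."""
    scenario = {}
    for url in urls:
        key = urlparse(url).path or "/"
        with urllib.request.urlopen(url, timeout=10) as resp:
            scenario[key] = resp.read().decode("utf-8", errors="replace")
    with open(out_path, "w") as fh:
        json.dump(scenario, fh)

class ReplayHandler(BaseHTTPRequestHandler):
    scenario = {}  # populated by replay()

    def do_GET(self):
        # Serve the recorded body for this path, even if the original site
        # has since been cleaned or taken down.
        body = self.scenario.get(self.path, "not recorded").encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def replay(path="scenario.json", port=8080):
    """Serve previously recorded responses from a local replay server."""
    with open(path) as fh:
        ReplayHandler.scenario = json.load(fh)
    HTTPServer(("127.0.0.1", port), ReplayHandler).serve_forever()
```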


POLITICALLY MOTIVATED DENIAL OF SERVICE ATTACKS

January 2010 · 613 Reads · 68 Citations

Cyberwarfare has been waged for well over a decade, utilizing methods such as website defacement, data leakage, and distributed denial of service (DDoS) attacks. This paper focuses on the latter: attacks that are easily carried out and designed to overwhelm a victim's network with wasted traffic. The goal of a DDoS attack is to make use of the network impossible for internal or external users. Through a brief examination of the history of these attacks, we find that they were previously designed to inflict punitive damage on the victim but have since grown into sophisticated censorship tools. Our approach measures such attacks by looking at Internet backbone traffic, botnet activities, BGP routing changes, and community chatter about such attacks to provide a robust picture of politically targeted DDoS attacks. Our analysis indicates that most of the attackers are non-state actors who are able to fluidly utilize a growing botnet population to launch massive denial of service attacks. This finding has broad ramifications for the future of these attacks.
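
One ingredient of the measurement approach sketched in the abstract, spotting sudden volumetric anomalies toward a monitored target, can be illustrated in a few lines. The thresholds and the traffic series below are invented for illustration, not taken from the paper.

```python
# Hedged sketch: flag intervals whose traffic toward a monitored target far
# exceeds its recent baseline. Window size and factor are arbitrary choices.
def flag_spikes(bits_per_interval, window=6, factor=10.0):
    """Return indexes of intervals exceeding `factor` times the mean of the
    preceding `window` intervals."""
    flagged = []
    for i in range(window, len(bits_per_interval)):
        baseline = sum(bits_per_interval[i - window:i]) / window
        if baseline > 0 and bits_per_interval[i] > factor * baseline:
            flagged.append(i)
    return flagged

# Synthetic example: quiet background traffic followed by a sudden flood.
series = [5, 6, 5, 7, 6, 5, 6, 480, 520, 510]  # e.g. Mbit/s per 5-minute bin
print(flag_spikes(series))                      # -> [7] (onset of the flood)
```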


PhoneyC: A virtual client honeypot

May 2009 · 450 Reads · 102 Citations

The number of client-side attacks has grown significantly in the past few years, shifting focus away from defendable positions to a broad, poorly defended space filled with vulnerable clients. Just as honeypots enabled deep research into server-side attacks, honeyclients can permit the deep study of client-side attacks. A complement to honeypots, a honeyclient is a tool designed to mimic the behavior of a user-driven network client application, such as a web browser, and be exploited by an attacker's content. These systems are instrumented to discover what happened and how. This paper presents PhoneyC, a honeyclient tool that can provide visibility into new and complex client-side attacks. PhoneyC is a virtual honeyclient, meaning it is not a real application but rather an emulated client. By using dynamic analysis, PhoneyC is able to remove the obfuscation from many malicious pages. Furthermore, PhoneyC emulates specific vulnerabilities to pinpoint the attack vector. PhoneyC is a modular framework that enables the study of malicious HTTP pages and understands modern vulnerabilities and attacker techniques.
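
As a very rough illustration of the vulnerability-module idea (PhoneyC itself drives a JavaScript engine and emulates the vulnerable objects dynamically), the sketch below only performs a static scan of fetched HTML for CLSIDs of known-vulnerable ActiveX controls. The CLSID table is a hypothetical placeholder, not PhoneyC's module set.

```python
# Static-scan sketch: report which known-vulnerable ActiveX controls a page
# tries to instantiate. Much weaker than dynamic emulation, but illustrative.
import re

VULNERABLE_CLSIDS = {
    # "clsid": "human-readable label"  (entries here are placeholders)
    "00000000-0000-0000-0000-000000000000": "example vulnerable control",
}

CLSID_RE = re.compile(r"classid\s*=\s*[\"']?clsid:([0-9a-fA-F-]{36})",
                      re.IGNORECASE)

def scan_page(html):
    """Return labels of known-vulnerable controls instantiated in the page."""
    hits = []
    for match in CLSID_RE.finditer(html):
        clsid = match.group(1).lower()
        if clsid in VULNERABLE_CLSIDS:
            hits.append(VULNERABLE_CLSIDS[clsid])
    return hits
```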


As the net churns: Fast-flux botnet observations

November 2008 · 172 Reads · 176 Citations

While botnets themselves provide a rich platform for financial gain for the botnet master, using the infected hosts as web servers can provide an additional botnet use. Botnet herders often use fast-flux DNS techniques to host unwanted or illegal content within a botnet. These techniques constantly shift the mapping of the domain name to different bots within the botnet, while the bots simply relay content back to a central server. This gives the attackers additional stepping stones to thwart takedown and can obscure their true origins. Evidence suggests that more attackers are adopting fast-flux techniques, but very little data has been gathered to discover what these botnets are being used for. To address this gap in understanding, we have been mining live traffic to discover new fast-flux domains and then tracking those botnets with active measurements for several months. We identified over 900 fast-flux domain names from early to mid 2008 and monitored their use across the Internet to discern fast-flux botnet behaviors. We found that the active lifetimes of fast-flux botnets vary from less than one day to months, that domains used in fast-flux operations are often registered but dormant for months prior to activation, that these botnets are associated with a broad range of online fraud and crime activities, and that we can identify distinct botnets across multiple domain names. We support our findings through an in-depth examination of Internet-scale data continuously collected for hundreds of domain names over several months.
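
A minimal version of the active DNS measurement described above might look like the following sketch, which uses the third-party dnspython package. The polling schedule and thresholds are illustrative guesses; the paper's detection pipeline is considerably richer (ASN diversity, registration data, long-term tracking, etc.).

```python
# Poll a domain and flag it as a fast-flux candidate if it returns many
# distinct A records with short TTLs. Requires dnspython (pip install dnspython).
import time
import dns.resolver

def looks_fast_flux(domain, polls=10, interval=60, min_ips=10, max_ttl=300):
    """Poll `domain` and report whether its A records churn like fast flux."""
    seen_ips = set()
    ttls = []
    for _ in range(polls):
        try:
            answer = dns.resolver.resolve(domain, "A")
        except Exception:
            time.sleep(interval)
            continue
        ttls.append(answer.rrset.ttl)
        seen_ips.update(rr.address for rr in answer)
        time.sleep(interval)
    short_ttl = bool(ttls) and max(ttls) <= max_ttl
    return len(seen_ips) >= min_ips and short_ttl
```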


Towards an Understanding of Anti-virtualization and Anti-debugging Behavior in Modern Malware

July 2008 · 765 Reads · 292 Citations

Many threats that plague today's networks (e.g., phishing, botnets, denial of service attacks) are enabled by a complex ecosystem of attack programs commonly called malware. To combat these threats, defenders of these networks have turned to the collection, analysis, and reverse engineering of malware as mechanisms to understand these programs, generate signatures, and facilitate cleanup of infected hosts. Recently however, new malware instances have emerged with the capability to check and often thwart these defensive activities - essentially leaving defenders blind to their activities. To combat this emerging threat, we have undertaken a robust analysis of current malware and developed a detailed taxonomy of malware defender fingerprinting methods. We demonstrate the utility of this taxonomy by using it to characterize the prevalence of these avoidance methods, to generate a novel fingerprinting method that can assist malware propagation, and to create an effective new technique to protect production systems.
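
To make one branch of that taxonomy concrete, the sketch below shows two widely known Linux-side virtualization checks (the DMI product name and the CPU "hypervisor" flag). It is illustrative only and does not reproduce the paper's fingerprinting methods; real malware combines many such checks, plus anti-debugging tricks, across platforms.

```python
# Linux-only sketch of common virtualization artifacts a program might probe.
def virtualization_hints():
    hints = []
    # The DMI product name often names the hypervisor (VMware, VirtualBox, ...).
    try:
        with open("/sys/class/dmi/id/product_name") as fh:
            product = fh.read().strip()
        if any(v in product for v in ("VMware", "VirtualBox", "KVM", "QEMU")):
            hints.append("DMI product name: " + product)
    except OSError:
        pass
    # The CPU "hypervisor" flag is set when running under most hypervisors.
    try:
        with open("/proc/cpuinfo") as fh:
            if " hypervisor" in fh.read():
                hints.append("CPU hypervisor flag set")
    except OSError:
        pass
    return hints

if __name__ == "__main__":
    print(virtualization_hints() or "no obvious virtualization artifacts")
```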


DDoS attack evolution

July 2008 · 573 Reads · 83 Citations

Network Security

DDoS attacks are constantly evolving as the nature of technology used and the motivations of the attackers are changing. Even today, perpetrators are being caught and charged with DDoS attacks launched via botnets that cause tens of thousands of dollars of damage to the victims. Last year's massive attack on Estonian Government web sites brought this attack method squarely into the public eye. What motivates these attacks, and how are they carried out? What are the unique characteristics of these attacks, and how can their effects be mitigated? A distributed denial of service (DDoS) attack is designed to overwhelm victims with traffic and prevent their network resources from working correctly for their legitimate clients. DDoS attacks require a significant amount of bandwidth to successfully attack a big adversary, such as a Web-based media company, so they often command thousands of hosts in a botnet to simultaneously send traffic to a victim. This action has the effect of aggregating bandwidth to match or surpass the victim's network resources, as well as making specific host filtering difficult, since the attack is coming from so many places all at once.
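
The bandwidth-aggregation point can be made concrete with a back-of-the-envelope calculation; the figures below are hypothetical, not measurements from the article.

```python
# Rough illustration of how many modest bots outweigh a well-provisioned victim.
bots = 10_000           # hosts in the botnet (hypothetical)
per_bot_mbps = 1.0      # modest upstream per infected home host (hypothetical)
victim_gbps = 1.0       # victim's provisioned capacity (hypothetical)

attack_gbps = bots * per_bot_mbps / 1000
print(f"aggregate attack traffic: {attack_gbps:.1f} Gbit/s "
      f"vs. victim capacity: {victim_gbps:.1f} Gbit/s")  # 10.0 vs 1.0
```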


Automated Classification and Analysis of Internet Malware

September 2007 · 384 Reads · 564 Citations

Lecture Notes in Computer Science

Numerous attacks, such as worms, phishing, and botnets, threaten the availability of the Internet, the integrity of its hosts, and the privacy of its users. A core element of defense against these attacks is anti-virus (AV) software--a service that detects, removes, and characterizes these threats. The ability of these products to successfully characterize these threats has far-reaching effects--from facilitating sharing across organizations, to detecting the emergence of new threats, and assessing risk in quarantine and cleanup. In this paper, we examine the ability of existing host-based anti-virus products to provide semantically meaningful information about the malicious software and tools (or malware) used by attackers. Using a large, recent collection of malware that spans a variety of attack vectors (e.g., spyware, worms, spam), we show that different AV products characterize malware in ways that are inconsistent across AV products, incomplete across malware, and that fail to be concise in their semantics. To address these limitations, we propose a new classification technique that describes malware behavior in terms of system state changes (e.g., files written, processes created) rather than in sequences or patterns of system calls. To address the sheer volume of malware and diversity of its behavior, we provide a method for automatically categorizing these profiles of malware into groups that reflect similar classes of behaviors and demonstrate how behavior-based clustering provides a more direct and effective way of classifying and analyzing Internet malware.
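
A toy version of behavior-based grouping along these lines is sketched below: each sample is reduced to a set of state-change strings, samples are compared with Jaccard similarity, and similar samples are greedily grouped. The profiles, threshold, and greedy strategy are simplified illustrations, not the paper's algorithm.

```python
# Minimal behavior-based grouping sketch over made-up state-change profiles.
def jaccard(a, b):
    """Similarity between two sets of state-change strings."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

def cluster(profiles, threshold=0.6):
    """Greedily assign each sample to the first cluster it resembles."""
    clusters = []  # list of (representative_set, [sample_names])
    for name, changes in profiles.items():
        for rep, members in clusters:
            if jaccard(changes, rep) >= threshold:
                members.append(name)
                break
        else:
            clusters.append((set(changes), [name]))
    return [members for _, members in clusters]

# Hypothetical state-change profiles (files written, processes created, ...).
samples = {
    "sample_a": {"write:C:\\win\\svc.exe", "proc:svc.exe", "reg:Run\\svc"},
    "sample_b": {"write:C:\\win\\svc.exe", "proc:svc.exe"},
    "sample_c": {"write:C:\\tmp\\bot.exe", "conn:irc:6667"},
}
print(cluster(samples))  # -> [['sample_a', 'sample_b'], ['sample_c']]
```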



Citations (15)


... For the aforesaid issues, none of the notably known phishing tools provides an optimum solution against novel phishes [4], [14]-[23]. Targeting academic and industry researchers, this survey aims to highlight the issues behind the problem of novel phishes and the detection incapability of anti-phishing tools against them by characterizing the elements of the problem and presenting the potential areas for further study. ...

Reference:

Survey of anti-phishing tools with detection capabilities
A SERIES OF METHODS FOR THE SYSTEMATIC REDUCTION OF PHISHING

... Although the Internet gives users a bright life and good businesses, it also has an unknown dark side. Since many new Internet services, devices, and hosts are being developed, the number of vulnerabilities in user smartphones, computers, and servers is also increasing [2]. The more computers are connected to the Internet, the greater the possibility that attacks take place. ...

Internet Infrastructure Security
  • Citing Article
  • July 2012

IEEE Security and Privacy Magazine

... The VM was connected to the Internet through a home DSL router, and only the network packets from the VM were captured. This test bed was created because current botnets usually target this type of home computer, looking for its high bandwidth and personal sensitive information (Corrons, PandaLabs, 2010), (Nazario, 2006). The malware in the Botnet1 capture sent 37,389 TCP flows during a little more than eleven hours. ...

Botnet Tracking: Tools, Techniques, and Lessons Learned
  • Citing Article
  • January 2007

... Russia significantly escalated its military involvement in cyber operations with major DDoS attacks on Estonia in 2007 and Georgia in 2008. These attacks targeted and disrupted critical services, demonstrating Russia's capability to combine cyberwarfare with traditional military tactics to weaken and destabilize its adversaries [9][10][11][12]. Russia learned from its cyber operations in Estonia and Georgia that international responses typically involve sanctions and criticism rather than effective solutions [13]. ...

POLITICALLY MOTIVATED DENIAL OF SERVICE ATTACKS
  • Citing Article
  • January 2010

... Research on malicious code detection has been extensively explored in many related works. For instance, early works employed conventional methods [57]-[59], such as signature-based malicious code manually analyzed by security experts [60], [61]. This approach relied on detecting malicious signatures as a baseline for identifying new cases. ...

PhoneyC: A virtual client honeypot
  • Citing Article
  • May 2009

... These sensors are valuable for detecting and studying such threats, how they spread across the Internet, and how attackers select their targets. However, [19] shows that the size of the blocks allocated for this purpose affects their capabilities. The difficulty of effectively scanning the IPv6 address space often leads attackers to prefer IPv4. ...

The Internet motion sensor: A distributed global scoped Internet threat monitoring system