Mansour Alsaleh

King Abdulaziz City for Science and Technology, Riyadh, Saudi Arabia

Publications (17) · 3.52 Total impact

  • ABSTRACT: Convenience and the ability to perform advanced transactions encourage bank clients to use online banking. As security and usability are two growing concerns for online banking users, banks have invested heavily in improving the security, user experience, and trustworthiness of their web portals. Despite considerable efforts to evaluate particular security and usability features in online banking, a dedicated security and usability evaluation framework that can guide online banking development remains much less explored. In this work, we first extract security and usability evaluation metrics from a review of the literature. We then add several other evaluation metrics not previously identified in the literature. We argue that the online banking security and usability evaluation frameworks proposed in the literature, as well as the existing standards of security best practices (e.g., NIST and ISO), are by no means comprehensive and lack key evaluation metrics that are of particular interest to online banking portals. To demonstrate the inadequacy of existing frameworks, we use several of them to evaluate five major banks. The evaluation reveals shortcomings in identifying both missing and incorrectly implemented security and privacy features. Our goal is to encourage other researchers to build upon our work.
    11th International Conference on Web Information Systems and Technologies (WEBIST), Lisbon, Portugal; 05/2015
  • ABSTRACT: The growing prevalence and severity of application-layer vulnerabilities have led to a dramatic increase in the corresponding attacks. In this paper, we present an extension to PHPIDS, an open source intrusion detection and prevention system for PHP-based web applications, to visualize its security log. Our use of security data visualization is motivated by the fact that most security defense systems record security-related events in text-based logs, which are difficult to analyze and correlate. The proposed extension analyzes PHPIDS logs, correlates them with the corresponding web server logs, and plots the security-related events. We use a set of tightly coupled visual representations of HTTP server requests containing known and suspected malicious content to provide system administrators and security analysts with fine-grained visual querying capabilities. We present multiple case studies to demonstrate the ability of our PHPIDS visualization extension to support security analysts in analytic reasoning and decision making in response to ongoing web server attacks. Experiments with the proposed extension on real-world datasets show promise for providing complementary information for effective situational awareness. (A minimal log-correlation sketch follows this entry.) Copyright © 2014 John Wiley & Sons, Ltd.
    Security and Communication Networks 10/2014; DOI:10.1002/sec.1147 · 0.72 Impact Factor
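    The extension's core step, correlating IDS alerts with web server requests, can be illustrated with a short Python sketch. This is a hypothetical reconstruction: the file names and the comma-separated field layout are assumptions for illustration and do not reproduce the actual PHPIDS or web server log formats.

    from datetime import datetime, timedelta

    def parse_ids_log(path):
        """Yield (time, ip, impact, tags) from a simplified CSV-like IDS log."""
        with open(path) as f:
            for line in f:
                ts, ip, impact, tags = line.rstrip("\n").split(",", 3)
                yield datetime.fromisoformat(ts), ip, int(impact), tags

    def parse_access_log(path):
        """Yield (time, ip, request) from a simplified access log."""
        with open(path) as f:
            for line in f:
                ts, ip, request = line.rstrip("\n").split(",", 2)
                yield datetime.fromisoformat(ts), ip, request

    def correlate(ids_events, access_events, window=timedelta(seconds=2)):
        """Pair each IDS alert with same-IP requests seen within `window`."""
        access_by_ip = {}
        for ts, ip, request in access_events:
            access_by_ip.setdefault(ip, []).append((ts, request))
        for ts, ip, impact, tags in ids_events:
            nearby = [r for t, r in access_by_ip.get(ip, [])
                      if abs(t - ts) <= window]
            yield {"time": ts, "ip": ip, "impact": impact,
                   "tags": tags, "requests": nearby}

    # Joined records like these would feed the plotting front end.
    for event in correlate(parse_ids_log("phpids.log"),
                           parse_access_log("access.log")):
        print(event)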
  • ABSTRACT: Today, users share large amounts of information about themselves on their online social networks. Besides the intended information, this sharing process often also "leaks" sensitive information about the users, and by proxy about their peers. This study investigates the effect of awareness of such information leakage on user behavior. In particular, taking inspiration from "second-hand smoke" campaigns, this study creates a "social awareness" campaign in which users are reminded of the information they are leaking about themselves and their friends. The results indicate that the number of users disallowing the access permissions doubles with the social awareness campaign as compared to a baseline method. The findings are useful for system designers who consider privacy a holistic social challenge rather than a purely technical issue.
    HCI International 2014; 06/2014
  • ABSTRACT: Spam in Online Social Networks (OSNs) is a systemic problem that poses a threat to these services by undermining their value to advertisers and potential investors, as well as negatively affecting user engagement. In this work, we present a unique analysis of spam accounts in OSNs viewed through the lens of their behavioral characteristics (i.e., profile properties and social interactions). Our analysis includes over 100 million tweets collected over the course of one month, generated by approximately 30 million distinct user accounts, of which over 7% were suspended or removed due to abusive behaviors and other violations. We show that there exist two behaviorally distinct categories of Twitter spammers and that they employ different spamming strategies. The users in these two categories demonstrate different individual properties as well as social interaction patterns. As Twitter spammers continuously create new accounts upon being caught, a behavioral understanding of their spamming activity will be vital in the design of future social media defense mechanisms. (An illustrative clustering sketch follows this entry.)
    Proceedings of the 2014 ACM Conference on Web Science, Bloomington, Indiana, USA; 06/2014
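    The central finding, two behaviorally distinct spammer categories, can be illustrated with a toy clustering sketch. This is a hypothetical illustration only: the feature set and the use of k-means are assumptions, not the paper's actual analysis pipeline.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical per-account behavioral features (assumed for illustration):
    # [tweets_per_day, followers_to_following_ratio,
    #  fraction_of_tweets_with_urls, fraction_of_replies]
    accounts = np.array([
        [150.0, 0.01, 0.95, 0.02],  # high-volume, URL-heavy broadcasting
        [180.0, 0.02, 0.90, 0.01],
        [40.0,  0.05, 0.10, 0.80],  # reply-centric social interaction
        [35.0,  0.04, 0.15, 0.75],
    ])

    # Partition the accounts into two behavioral groups.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(accounts)
    for features, label in zip(accounts, labels):
        print(f"cluster {label}: {features}")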
  • ABSTRACT: Complex enterprise environments consist of globally distributed infrastructure with a variety of applications and a large number of activities occurring on a daily basis. This increases the attack surface and narrows the view of ongoing intrinsic dynamics. Thus, many malicious activities can persist under the radar of conventional detection mechanisms long enough to achieve critical mass for full-fledged cyber attacks. Many typical detection approaches are signature-based and are thus expected to fail in the face of zero-day attacks. In this paper, we present the building blocks for developing a Malicious Activity Detection System (MADS). MADS employs predictive modeling techniques for the detection of malicious activities. Unlike traditional detection mechanisms, MADS detects both network-based intrusions and malicious user behaviors. The system utilizes a simulator to produce a holistic replication of activities, both benign and malicious, flowing within a given complex IT environment. We validate the performance and accuracy of the simulator through a case study of a Fortune 500 company, comparing the simulated infrastructure against the physical one in terms of resource consumption (i.e., CPU utilization), the number of concurrent users, and response times. We also evaluate the detection algorithms with varying hyper-parameters and compare the results.
    2014 IEEE 11th Consumer Communications and Networking Conference (CCNC); 01/2014
  • ABSTRACT: Web spam techniques aim to mislead search engines so that web spam pages are ranked higher than they deserve. This leads to misleading search results, as spam pages may appear in the results even though their content is unrelated to the search terms. Although search engines deploy various techniques to detect web spam pages and filter them out of their search results, spammers continue to develop new tactics to evade these detection mechanisms. In this paper, we study the effectiveness and accuracy of the newly developed anti-spamming techniques in the Google search engine. Focusing on Arabic spam pages, our results show that Google's anti-spamming techniques are ineffective against spam pages with Arabic content. We explore various types of web spam detection features to obtain a set of detection features that yields reasonable detection accuracy. To build and evaluate our classifier, we collect and manually label a dataset of Arabic web pages, including both benign and spam pages. We believe this Arabic web spam corpus will help researchers conduct sound measurement studies. We also develop a browser plug-in that utilizes our classifier and, upon a click on a search result, warns the user about web spam pages before they are accessed. The plug-in can also filter search engine results. (A minimal classifier-training sketch follows this entry.)
    Proceedings of the 2013 12th International Conference on Machine Learning and Applications - Volume 02; 12/2013
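    As a sketch of the classifier-building step described above: the features below (word count, link count, keyword-repetition ratio, hidden-text ratio) and the random-forest model are illustrative assumptions; the paper's selected feature set and labeled corpus are not reproduced here.

    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report

    # Each row: [word_count, link_count, keyword_repetition_ratio, hidden_text_ratio]
    X = [
        [5000, 900, 0.40, 0.30],  # spam-like: stuffed keywords, hidden text
        [6100, 850, 0.50, 0.40],
        [4200, 700, 0.35, 0.25],
        [800,  25,  0.05, 0.00],  # benign-like
        [1200, 40,  0.04, 0.01],
        [300,  10,  0.06, 0.00],
    ]
    y = [1, 1, 1, 0, 0, 0]  # 1 = spam, 0 = benign (manually labeled)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.33, random_state=0, stratify=y)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))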
  • ABSTRACT: The growing prevalence and severity of application-layer vulnerabilities have led to a dramatic increase in the corresponding attacks. In this paper, we present an extension to PHPIDS, an open source intrusion detection and prevention system for PHP-based web applications, to visualize its security log. The proposed extension analyzes PHPIDS logs, correlates them with the corresponding web server logs, and plots the security-related events. We use a set of tightly coupled visual representations of HTTP server requests containing known and suspected malicious content to provide system administrators and security analysts with fine-grained visual querying capabilities. We present multiple case studies to demonstrate the ability of our PHPIDS visualization extension to support security analysts in analytic reasoning and decision making in response to ongoing web server attacks. Experiments with the proposed extension on real-world datasets show promise for providing complementary information for effective situational awareness.
    Proceedings of the Tenth Workshop on Visualization for Cyber Security; 10/2013
  • ABSTRACT: Open Source Software (OSS) solutions are growing rapidly as they become more mature. Countries have focused their efforts on supporting OSS initiatives and fostering their development through government support in the form of laws, legislation, and education. Because of the growing national interest in OSS, we surveyed the efforts of the twenty major world economies known as the Group of Twenty (G-20). Within these twenty countries, we examined over forty-five national initiatives and identified seven distinctive common strategies applied within the past ten years. Each strategy has been adopted by at least three countries. The survey results show significant growth in the interest of major economies in supporting OSS. Based on these results, we present a stepwise process to align the seven strategies with national objectives and market needs, and provide a prioritization scheme for strategy implementation.
    8th International Conference on Standardization and Innovation in Information Technology, Sophia Antipolis, France; 09/2013
  • Z. Alshaikh, A. Alarifi, M. Alsaleh
    ABSTRACT: The objective of security data visualizations is to help cyber analysts perceive trends and patterns and gain insights into security data. Sound and systematic evaluations of security data visualizations are rarely performed, in part due to the lack of proper quantitative and qualitative measures. In this paper, we present a novel evaluation approach for security visualization based on Christopher Alexander's fifteen properties of living structures. Alexander's fifteen properties are derived from various visual patterns that appear in nature, and each represents guidelines for good design. We believe that using these fundamental properties has the potential to yield a more robust evaluation, as each property offers essential qualities that enable better analytical reasoning. We demonstrate how to use Alexander's properties to evaluate security-related visualizations, deriving a set of visualization-specific evaluation properties based on Alexander's original ones.
    Intelligence and Security Informatics (ISI), 2013 IEEE International Conference on; 01/2013
  • ABSTRACT: The richness and effectiveness of client-side vulnerabilities have contributed to an accelerated shift toward client-side Web attacks. To understand the volume and nature of malicious Web pages, we perform a detailed analysis of a subset of the most visited Web sites according to Google Trends. Our study is limited to Arabic Web content, so only the top Arabic search terms are considered. To carry out this study, we analyze more than 7,000 distinct domain names by traversing all the visible pages within each domain. To identify different types of suspected phishing and malware pages, we use the APIs of the Sucuri SiteCheck, McAfee SiteAdvisor, Google Safe Browsing, Norton, and AVG website scanners. The study shows the existence of malicious content across a variety of types of Web pages. The results indicate that a significant number of these sites carry known malware, are blacklisted, or run out-of-date software. Throughout our analysis, we characterize the impact of the detected malware families and speculate as to how the Web servers that scanners flagged as malicious were infected. (A minimal URL-lookup sketch follows this entry.)
    Advanced Communication Technology (ICACT), 2013 15th International Conference on; 01/2013
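    As one concrete illustration of querying a scanner API, here is a sketch against the current Google Safe Browsing v4 Lookup API. Note that the v4 interface postdates the 2013 study, and the API key and test URL below are placeholders.

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder; obtain one from Google Cloud
    ENDPOINT = ("https://safebrowsing.googleapis.com/v4/threatMatches:find"
                f"?key={API_KEY}")

    def check_url(url):
        """Return the list of Safe Browsing threat matches for `url`."""
        body = {
            "client": {"clientId": "example-scanner", "clientVersion": "0.1"},
            "threatInfo": {
                "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
                "platformTypes": ["ANY_PLATFORM"],
                "threatEntryTypes": ["URL"],
                "threatEntries": [{"url": url}],
            },
        }
        resp = requests.post(ENDPOINT, json=body, timeout=10)
        resp.raise_for_status()
        # An empty JSON object means no match; otherwise "matches" lists threats.
        return resp.json().get("matches", [])

    print(check_url("http://example.com/"))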
  • Mansour Alsaleh, Paul C. van Oorschot
    ABSTRACT: Network scanning is a common, effective technique for locating vulnerable Internet hosts and for exploring the topology and trust relationships between hosts in a target network. Given that the purpose of scanning is to find responsive hosts and network services, behavior-based scan detection techniques that track the state of inbound connection attempts remain effective against evasion. Many of today's network environments, however, are dynamic and transient, with network hosts and services added or stopped (either permanently or temporarily) over time. In this paper, working with recent network traces from two different environments, we re-examine the Threshold Random Walk (TRW) scan detection algorithm and show that its number of false positives is proportional to the transiency of the offered services. To address this limitation, we present a modified algorithm, the Stateful Threshold Random Walk (STRW) algorithm, which utilizes active mapping of network services to take into account benign causes of failed connection attempts. The STRW algorithm eliminates a significant portion of TRW's false positives (e.g., 29% and 77% in the two datasets studied). (A minimal TRW-style sketch follows this entry.) Copyright © 2012 John Wiley & Sons, Ltd.
    Security and Communication Networks 12/2012; 5(12). DOI:10.1002/sec.416 · 0.72 Impact Factor
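    The TRW algorithm of Jung et al. is a sequential hypothesis test: each failed or successful connection attempt from a source multiplies a per-source likelihood ratio until it crosses a "scanner" or "benign" threshold. The sketch below adds a simplified stateful twist in the spirit of STRW, ignoring failures to recently active services; the parameter values are illustrative, and the service set is a stand-in for the paper's active mapping.

    # Hedged sketch of Threshold Random Walk (TRW) scan detection, plus a
    # simplified "stateful" twist: failed attempts to recently active
    # services are not held against the source.
    THETA0 = 0.2   # Pr[connection attempt fails | benign source]
    THETA1 = 0.8   # Pr[connection attempt fails | scanner]
    ALPHA, BETA = 0.01, 0.99          # target false-alarm / detection rates
    ETA1 = BETA / ALPHA               # declare "scanner" above this ratio
    ETA0 = (1 - BETA) / (1 - ALPHA)   # declare "benign" below this ratio

    class ScanDetector:
        def __init__(self, recently_active_services):
            self.ratio = {}                         # per-source likelihood ratio
            self.recent = recently_active_services  # set of (dst_ip, dst_port)

        def observe(self, src, dst_ip, dst_port, failed):
            """Update the walk for src; return a verdict or None (pending)."""
            if failed and (dst_ip, dst_port) in self.recent:
                return None  # benign cause: the service existed until recently
            r = self.ratio.get(src, 1.0)
            r *= (THETA1 / THETA0) if failed else (1 - THETA1) / (1 - THETA0)
            if r >= ETA1 or r <= ETA0:
                self.ratio.pop(src, None)
                return "scanner" if r >= ETA1 else "benign"
            self.ratio[src] = r
            return None

    detector = ScanDetector(recently_active_services={("10.0.0.5", 80)})
    verdict = None
    for port in range(20, 30):  # repeated failures drive the ratio upward
        verdict = detector.observe("203.0.113.9", "10.0.0.7", port, failed=True)
        if verdict:
            break
    print(verdict)  # "scanner" after a handful of failed attempts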
  • Mansour Alsaleh, P. C. van Oorschot
    ABSTRACT: Although network reconnaissance through scanning has been well explored in the literature, new scan detection proposals with various detection features and capabilities continue to appear. To our knowledge, however, there is little discussion of reliable methodologies for evaluating network scan detectors. In this paper, we show that establishing ground truth labels for scanning activity in non-synthetic network traces is a harder problem than labeling conventional intrusions. The main problem stems from the lack of absolute ground truth (AGT), and we identify the specific types of errors this admits. In real-world network traffic, many events can be equally interpreted as legitimate or as intrusions, so establishing AGT is infeasible since it depends on unknowable intent. We explore how an estimated ground truth based on discrete classification criteria can be misleading, since typical detection accuracy measures depend strongly on the chosen criteria. We also present a methodology for evaluating and comparing scan detection algorithms that classifies remote addresses based on continuous scores, providing a more accurate reference for evaluation. The challenge of conducting a reliable evaluation in the absence of AGT applies to other areas of network intrusion detection, and the corresponding requirements and guidelines apply there as well.
    International Journal of Information Security 04/2012; 12(2). DOI:10.1007/s10207-012-0178-1 · 0.94 Impact Factor
  • ABSTRACT: Brute force and dictionary attacks on password-only remote login services are now widespread and ever increasing. Enabling convenient login for legitimate users while preventing such attacks is a difficult problem. Automated Turing Tests (ATTs) continue to be an effective, easy-to-deploy approach for identifying automated malicious login attempts at a reasonable cost in user inconvenience. In this paper, we discuss the inadequacy of existing and proposed login protocols designed to address large-scale online dictionary attacks (e.g., from a botnet of hundreds of thousands of nodes). We propose a new Password Guessing Resistant Protocol (PGRP), derived by revisiting prior proposals designed to restrict such attacks. While PGRP limits the total number of login attempts from unknown remote hosts to as few as a single attempt per username, legitimate users in most cases (e.g., when attempts are made from known, frequently used machines) can make several failed login attempts before being challenged with an ATT. We analyze the performance of PGRP on two real-world data sets and find it more promising than existing proposals. (A simplified decision sketch follows this entry.)
    IEEE Transactions on Dependable and Secure Computing 03/2012; 9(1):128-141. DOI:10.1109/TDSC.2011.24 · 1.14 Impact Factor
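    A simplified sketch of the PGRP-style decision: hosts with past successful logins get a few free failed attempts before an ATT is demanded, while unknown hosts are challenged almost immediately. The thresholds and state handling below are assumptions that compress the full protocol.

    from collections import defaultdict

    K_KNOWN = 3    # failed attempts allowed from known machines before an ATT
    K_UNKNOWN = 1  # failed attempts allowed from unknown hosts

    class LoginGuard:
        def __init__(self):
            self.known_hosts = set()        # hosts with past successful logins
            self.failed = defaultdict(int)  # (host, username) -> failure count

        def requires_att(self, host, username):
            """Decide whether this attempt must first solve an ATT."""
            limit = K_KNOWN if host in self.known_hosts else K_UNKNOWN
            return self.failed[(host, username)] >= limit

        def record_attempt(self, host, username, success):
            if success:
                self.known_hosts.add(host)
                self.failed.pop((host, username), None)
            else:
                self.failed[(host, username)] += 1

    guard = LoginGuard()
    guard.record_attempt("198.51.100.4", "alice", success=True)  # known host now
    guard.record_attempt("198.51.100.4", "alice", success=False)
    print(guard.requires_att("198.51.100.4", "alice"))  # False: under known limit
    guard.record_attempt("203.0.113.9", "alice", success=False)
    print(guard.requires_att("203.0.113.9", "alice"))   # True: unknown host limit hit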
  • A. Alarifi, M. Alsaleh
    ABSTRACT: Although search engines have deployed various techniques to detect and filter out Web spam, Web spammers continue to develop new tactics to influence the results of search engine ranking algorithms for the purpose of obtaining undeservedly high ranks. In this paper, we study the effect of page language on spam detection features. We examine how the distributions of a set of selected detection features change according to the page language, and we study the effect of the page language on the detection rate of a given classifier using a selected set of detection features. The analysis results show that selecting suitable features for a classifier that segregates spam pages depends heavily on the language of the examined Web page, due in part to the different sets of Web spam mechanisms used by spammers of each language.
    Machine Learning and Applications (ICMLA), 2012 11th International Conference on; 01/2012
  • Mansour Alsaleh, Paul C. van Oorschot
    ABSTRACT: Network scanning reveals valuable information about accessible hosts on the Internet and their offered network services, allowing significant narrowing of potential targets to attack. Making network scan detection more appealing in practice requires addressing and balancing a set of sometimes competing desirable properties: 1) fast detection of scanning activity to enable prompt response by intrusion detection and prevention systems; 2) an acceptable rate of false alarms, keeping in mind that false alarms may lead to legitimate traffic being penalized; 3) a high detection rate with the ability to detect stealthy scanners; 4) efficient use of monitoring system resources; and 5) immunity to evasion. In this paper, we present a scan detection algorithm designed to accommodate all of these goals. LQS is a fast, accurate, and lightweight scan detection algorithm that leverages key properties of the monitored network environment as variables that affect how the detection algorithm operates. We also present what is, to our knowledge, the first automated way to estimate a reference baseline in the absence of ground truth, for use as an evaluation methodology for scan detection. Using network traces from two sites, we evaluate LQS and compare its scan detection results with those obtained by the state-of-the-art TRW algorithm. Our empirical analysis shows significant improvements over TRW in all of these properties.
    Proceedings of the 6th ACM Symposium on Information, Computer and Communications Security, ASIACCS 2011, Hong Kong, China, March 22-24, 2011; 01/2011
  • ABSTRACT: Graphical analysis of network traffic flows helps security analysts detect patterns or behaviors that would not be obvious in a text-based environment. The growing volume of network data generated and captured makes it increasingly difficult to detect increasingly sophisticated reconnaissance and stealthy network attacks. We propose a network flow filtering mechanism that leverages the exposure maps technique of Whyte et al. (2007), reducing the traffic passed to the visualization process according to the network services being offered. This allows focus to be limited to selected subsets of the network traffic, for example what might be categorized (correctly or otherwise) as the unexpected or potentially malicious portion. In particular, we use this technique to filter out traffic from sources that have not gained knowledge from the network in question. We evaluate the benefits of our technique on different visualizations of network flows. Our analysis shows a significant decrease in the volume of network traffic to be visualized, resulting in visible patterns and insights not previously apparent. (A minimal exposure-map sketch follows this entry.)
    Twenty-Fourth Annual Computer Security Applications Conference, ACSAC 2008, Anaheim, California, USA, 8-12 December 2008; 01/2008
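    A minimal sketch of the filtering step, assuming simplified flow records: the exposure map is learned as the set of (host, port) pairs observed answering connections, and only flows targeting services outside that map are passed on for visualization. This is an illustration in the spirit of exposure maps, not the paper's implementation.

    from typing import NamedTuple

    class Flow(NamedTuple):
        src: str
        dst: str
        dport: int
        answered: bool  # did the internal host respond to this flow?

    def build_exposure_map(training_flows):
        """The exposure map: internal services observed answering."""
        return {(f.dst, f.dport) for f in training_flows if f.answered}

    def unexpected_flows(flows, exposure_map):
        """Keep only flows aimed at services the network does not offer."""
        return [f for f in flows if (f.dst, f.dport) not in exposure_map]

    history = [Flow("198.51.100.7", "10.0.0.5", 80, True),
               Flow("198.51.100.8", "10.0.0.6", 22, True)]
    exposure = build_exposure_map(history)

    live = [Flow("203.0.113.9", "10.0.0.5", 80, True),     # expected: offered
            Flow("203.0.113.9", "10.0.0.5", 8080, False),  # unexpected port
            Flow("203.0.113.9", "10.0.0.99", 445, False)]  # unexpected host
    for f in unexpected_flows(live, exposure):
        print(f)  # only the unexpected flows reach the visualization stage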
  • Mansour Alsaleh