ABSTRACT: Software process improvement (SPI) approaches are highly mature within both product and service development enterprises. Among SPI approaches, particularly within large enterprises, CMMI is widely used and has demonstrated its influence in leading the application of performance improvement methods in industry. In this paper, we test the readiness of government agencies by conducting internal CMMI appraisals on two selected government agencies. Using CMMI as a benchmark, we aim to determine how well these agencies apply sound processes. We chose a sample of two large organizations from Saudi Arabia, a young but fast-growing software market. We found that process areas such as requirements development, technical solution, configuration management, and verification follow well-defined processes, with some areas of weakness. Although both agencies have very limited knowledge of process improvement approaches and are relatively new software development establishments, they demonstrated unexpected areas of strength. Our results indicate early signs of market maturity. We argue that the market is becoming more mature and that SPI approaches may need to be enhanced to reach higher levels of maturity.
ABSTRACT: Convenience and the ability to perform advanced transactions encourage bank clients to use online banking. As security and usability are two growing concerns for online banking users, banks have invested heavily in improving the security and usability of their web portals and in strengthening user trust in them. Despite considerable efforts to evaluate particular security and usability features in online banking, a dedicated security and usability evaluation framework that can be used as a guide in online banking development remains much less explored. In this work, we first extract security and usability evaluation metrics from a literature review. We then include several other evaluation metrics that were not previously identified in the literature. We argue that the online banking security and usability evaluation frameworks proposed in the literature, as well as the existing standards of security best practices (e.g., NIST and ISO), are by no means comprehensive and lack essential evaluation metrics that are of particular interest to online banking portals. To demonstrate the inadequacy of existing frameworks, we use some of them to evaluate five major banks. The evaluation reveals several shortcomings in identifying both missing and incorrectly implemented security and privacy features. Our goal is to encourage other researchers to build upon our work.
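As a concrete illustration of the kind of automatable metrics such a framework can include, the following sketch checks a handful of well-known transport-security indicators on a portal's landing page. The header list, cookie checks, and example URL are illustrative assumptions, not the paper's framework:

```python
import requests

# Header-based metrics: each absent header is a failed check.
SECURITY_HEADERS = [
    "Strict-Transport-Security",  # enforce HTTPS on return visits
    "Content-Security-Policy",    # mitigate XSS / content injection
    "X-Frame-Options",            # mitigate clickjacking
    "X-Content-Type-Options",     # disable MIME sniffing
]

def evaluate_portal(url):
    """Return a dict mapping each metric to a pass/fail boolean."""
    response = requests.get(url, timeout=10)
    results = {h: h in response.headers for h in SECURITY_HEADERS}
    # Cookie-based metrics: session cookies should be Secure and HttpOnly.
    for cookie in response.cookies:
        results[f"cookie:{cookie.name}:Secure"] = bool(cookie.secure)
        results[f"cookie:{cookie.name}:HttpOnly"] = cookie.has_nonstandard_attr("HttpOnly")
    return results

if __name__ == "__main__":
    # "https://bank.example" is a placeholder, not a real portal.
    for metric, passed in evaluate_portal("https://bank.example").items():
        print(f"{'PASS' if passed else 'FAIL'}  {metric}")
```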
ABSTRACT: Fake identities and user accounts (also called 'Sybils') in online communities represent today a treasure for adversaries seeking to spread fake product reviews, malware and spam on social networks, and astroturfing political campaigns. State-of-the-art defense mechanisms include Automated Turing Tests (ATTs, such as CAPTCHAs) and graph-based Sybil detectors. Sybil detectors in social networks leverage the assumption that Sybils will find it hard to befriend real users, which leads to Sybils being connected to each other and forming strongly connected subgraphs that can be detected using graph theory. However, the large majority of Sybils are in fact successful in integrating themselves into real user communities (as is the case on Twitter and Facebook). In this paper, we first study and compare the current detection mechanisms for Sybil accounts. We also explore various types of detection features for Twitter Sybil accounts with the objective of building an effective and practical classifier. In order to build and evaluate our classifier, we collect and manually label a dataset of Twitter accounts, including human users, bots, and hybrids (i.e., accounts whose tweets are posted by both humans and bots). We believe this Twitter Sybils corpus will help researchers in conducting sound measurement studies. We also develop a browser plug-in (that we call Twitter Sybils Detector, or TSD for short) that utilizes our classifier and warns the user about possible Sybil accounts upon clicking on a Twitter account, before accessing it.
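A minimal sketch of the classification step described above, assuming a labeled CSV of per-account features; the feature names and file name are illustrative stand-ins, not the paper's exact feature set:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical per-account features; the paper's actual feature set differs.
FEATURES = ["followers_count", "friends_count", "tweets_per_day",
            "url_ratio", "mention_ratio", "account_age_days"]

# labeled_accounts.csv is assumed to hold one row per account with a
# "label" column taking the values human / bot / hybrid.
df = pd.read_csv("labeled_accounts.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df[FEATURES], df["label"], test_size=0.2, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```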
ABSTRACT: User location information represents a core dimension of user context, and understanding that context is a prerequisite for providing human-centered services that generally improve quality of life. In comparison with outdoor environments, sensing location information in indoor environments requires higher precision and is a more challenging task, due in part to the various objects (such as walls and people) that reflect and disperse signals. In this paper, we survey the related work in the field of indoor positioning by providing a comparative analysis of the state-of-the-art technologies, techniques, and algorithms. Unlike previous studies and surveys, our survey presents new taxonomies, reviews major recent advances, and discusses the area's open problems and future potential. We believe this paper will spur further exploration by the research community of this challenging problem space.
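For a flavor of the algorithms such surveys cover, the sketch below implements basic least-squares trilateration, estimating a 2-D position from measured distances to anchors at known coordinates; the anchor layout and ranges are illustrative:

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares 2-D position from ranges to anchors at known coords."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtract the last anchor's circle equation from the others to
    # linearize the system into A @ [x, y] = b.
    A = 2.0 * (anchors[:-1] - anchors[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(anchors[-1] ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Three anchors at known positions; ranges measured from the point (5, 5).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
print(trilaterate(anchors, [7.071, 7.071, 7.071]))  # ~ [5. 5.]
```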
ABSTRACT: Today, users share large amounts of information about themselves on their online social networks. Besides the intended information, this sharing process often also "leaks" sensitive information about the users, and by proxy, about their peers. This study investigates the effect of awareness of such information leakage on user behavior. In particular, taking inspiration from "second-hand smoke" campaigns, this study creates a "social awareness" campaign where users are reminded of the information they are leaking about themselves and their friends. The results indicate that the number of users disallowing access permissions doubles under the social awareness campaign compared to a baseline method. The findings are useful for system designers considering privacy as a holistic social challenge rather than a purely technical issue.
ABSTRACT: Spam in Online Social Networks (OSNs) is a systemic problem that poses a threat to these services by undermining their value to advertisers and potential investors, as well as negatively affecting users' engagement. In this work, we present a unique analysis of spam accounts in OSNs viewed through the lens of their behavioral characteristics (i.e., profile properties and social interactions). Our analysis includes over 100 million tweets collected over the course of one month, generated by approximately 30 million distinct user accounts, of which over 7% are suspended or removed due to abusive behaviors and other violations. We show that there exist two behaviorally distinct categories of Twitter spammers and that they employ different spamming strategies. The users in these two categories demonstrate different individual properties as well as social interaction patterns. As Twitter spammers continuously create new accounts upon being caught, a behavioral understanding of their spamming behavior will be vital in the design of future social media defense mechanisms.
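The two-category finding suggests a simple unsupervised analysis; the sketch below clusters suspended accounts into two behavioral groups. The feature names and input file are assumptions for illustration, not the study's pipeline:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical behavioral features per suspended account.
FEATURES = ["tweets_per_day", "followers_per_friend",
            "reply_ratio", "url_ratio", "account_age_days"]

spam = pd.read_csv("suspended_accounts.csv")[FEATURES]
X = StandardScaler().fit_transform(spam)          # normalize feature scales
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

spam["category"] = labels
print(spam.groupby("category").mean())            # contrast the two profiles
```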
ABSTRACT: In recent years, indoor positioning has emerged as a critical function in many end-user applications, including military, civilian, disaster relief, and peacekeeping missions. To cope with this surge of interest, much research effort has focused on meeting the needs of these applications and overcoming their shortcomings. Ultra WideBand (UWB) is an important technology in the field of indoor positioning and has shown great performance compared to other technologies. In this work, we identify and analyze existing ultra wideband positioning technologies and present a detailed comparative survey. We also provide a Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis, a method generally used in management science to evaluate the strengths, weaknesses, opportunities, and threats involved in a product or technology, to analyze the present state of UWB positioning technologies.
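As background on why UWB ranges so well, the sketch below shows the time-of-arrival arithmetic that underlies most UWB positioning: nanosecond-scale timestamp resolution translates directly into sub-meter ranging granularity. The values are illustrative:

```python
C = 299_792_458.0  # speed of light in m/s

def toa_distance(t_transmit_s, t_receive_s):
    """One-way time of arrival; requires synchronized clocks."""
    return C * (t_receive_s - t_transmit_s)

def twr_distance(round_trip_s, reply_delay_s):
    """Two-way ranging: no clock sync; halve the corrected round trip."""
    return C * (round_trip_s - reply_delay_s) / 2.0

# A ~1 ns timestamp error corresponds to ~0.3 m of ranging error, which
# is why UWB's fine time resolution enables sub-meter indoor accuracy.
print(toa_distance(0.0, 33.4e-9))       # ~10 m
print(twr_distance(100e-9, 33.2e-9))    # ~10 m
```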
ABSTRACT: Complex enterprise environments consist of globally distributed infrastructure with a variety of applications and a large number of activities occurring on a daily basis. This increases the attack surface and narrows the view of ongoing intrinsic dynamics. Thus, many malicious activities can persist under the radar of conventional detection mechanisms long enough to achieve critical mass for full-fledged cyber attacks. Many typical detection approaches are signature-based and are thus expected to fail in the face of zero-day attacks. In this paper, we present the building blocks for developing a Malicious Activity Detection System (MADS). MADS employs predictive modeling techniques for the detection of malicious activities. Unlike traditional detection mechanisms, MADS covers the detection of both network-based intrusions and malicious user behaviors. The system utilizes a simulator to produce a holistic replication of activities, both benign and malicious, flowing within a given complex IT environment. We validate the performance and accuracy of the simulator through a case study of a Fortune 500 company, comparing the simulated infrastructure against the physical one in terms of resource consumption (i.e., CPU utilization), the number of concurrent users, and response times. We also evaluate the detection algorithms under varying hyper-parameters and compare the results.
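One way to realize signature-free detection of the kind MADS targets is an unsupervised anomaly detector trained on baseline activity; the sketch below uses an isolation forest, with feature and file names as illustrative assumptions rather than MADS's actual design:

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-user/host activity features.
FEATURES = ["bytes_out", "connections_per_min", "distinct_ports",
            "failed_logins", "off_hours_actions"]

# Train on (simulated) benign baseline activity, then flag deviations
# in live activity; both CSVs are illustrative placeholders.
baseline = pd.read_csv("simulated_benign_activity.csv")[FEATURES]
live = pd.read_csv("live_activity.csv")[FEATURES]

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)
live["alert"] = model.predict(live[FEATURES]) == -1   # -1 marks anomalies
print(live[live["alert"]])
```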
ABSTRACT: Web spam techniques aim to mislead search engines so that web spam pages get ranked higher than they deserve. This leads to misleading search results, as spam pages might appear in search results even though their content is unrelated to the search terms. Despite the effort of search engines to deploy various techniques to detect and filter out web spam pages from their search results, spammers continue to develop new tactics to evade search engines' detection mechanisms. In this paper, we study the effectiveness and accuracy of newly developed anti-spamming techniques in the Google search engine. Focusing on Arabic spam pages, our study shows that Google's anti-spamming techniques are ineffective against spam pages with Arabic content. We explore various types of web spam detection features to obtain an appropriate set of detection features that yields a reasonable detection accuracy. In order to build and evaluate our classifier, we collect and manually label a dataset of Arabic web pages, including both benign and spam pages. We believe this Arabic web spam corpus will help researchers in conducting sound measurement studies. We also develop a browser plug-in that utilizes our classifier and warns the user about web spam pages before they are accessed, upon clicking on a search result. The plug-in also has the ability to filter out search engine results.
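A minimal sketch of content-based feature extraction of the sort such a classifier consumes; the specific features shown are common in the web-spam literature and are illustrative here, not the paper's exact set:

```python
from bs4 import BeautifulSoup

def extract_features(html):
    """Content-based features commonly used in web-spam detection."""
    soup = BeautifulSoup(html, "html.parser")
    words = soup.get_text(" ", strip=True).split()
    anchor_words = sum(len(a.get_text(" ", strip=True).split())
                       for a in soup.find_all("a"))
    title = soup.title.get_text(strip=True) if soup.title else ""
    n = max(len(words), 1)
    return {
        "num_words": len(words),
        "title_length": len(title.split()),
        "anchor_text_ratio": anchor_words / n,   # spam pages skew link-heavy
        "avg_word_length": sum(map(len, words)) / n,
    }

page = "<html><title>مثال</title><body><a href='#'>رابط رابط</a> نص تجريبي</body></html>"
print(extract_features(page))
```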
ABSTRACT: The prevalence and severity of application-layer vulnerabilities have led to a dramatic increase in the corresponding attacks. In this paper, we present an extension to PHPIDS, an open source intrusion detection and prevention system for PHP-based web applications, that visualizes its security log. The proposed extension analyzes PHPIDS logs, correlates these logs with the corresponding web server logs, and plots the security-related events. We use a set of tightly coupled visual representations of HTTP server requests containing known and suspicious malicious content to provide system administrators and security analysts with fine-grained, visual-based querying capabilities. We present multiple case studies to demonstrate the ability of our PHPIDS visualization extension to support security analysts with analytic reasoning and decision making in response to ongoing web server attacks. Experiments with the proposed PHPIDS visualization extension on real-world datasets show promise for providing complementary information for effective situational awareness.
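The correlation step can be as simple as joining alerts and access-log entries on the client IP; in the sketch below the PHPIDS field layout is an assumption, so the indices should be adjusted to the actual logger configuration:

```python
import csv
import re
from collections import defaultdict

# Apache common-log line: ip ident user [timestamp] "request" status ...
APACHE_LINE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "([^"]*)" (\d{3})')

def load_access_log(path):
    """Group web server log entries by client IP."""
    by_ip = defaultdict(list)
    with open(path) as f:
        for line in f:
            m = APACHE_LINE.match(line)
            if m:
                ip, ts, request, status = m.groups()
                by_ip[ip].append((ts, request, status))
    return by_ip

def correlate(phpids_log, access_log):
    access = load_access_log(access_log)
    with open(phpids_log) as f:
        for row in csv.reader(f):   # assumed layout: ip, date, impact, ...
            ip, date, impact = row[0], row[1], row[2]
            for ts, request, status in access.get(ip, []):
                print(f"{ip} impact={impact} [{ts}] {request} -> {status}")

correlate("phpids.log", "access.log")
```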
ABSTRACT: Open Source Software (OSS) solutions are growing rapidly as they become more mature. Countries have focused their efforts on supporting OSS initiatives and fostering their development through government support in the form of laws, legislation, and education. Because of the growing national interest in OSS, we surveyed the efforts of twenty major world economies, otherwise known as the Group of Twenty (G-20). Within the twenty countries, we examined over forty-five national initiatives and identified seven distinctive common strategies applied within the past ten years. Each strategy has been adopted by at least three countries. The results of the survey show significant growth in interest in supporting OSS among major economies. Based on the results of our survey, we present a stepwise process to align the seven strategies with national objectives and market needs, and provide a prioritization scheme for strategy implementation.
ABSTRACT: The objective of security data visualizations is to help cyber analysts perceive trends and patterns and gain insights into security data. Sound and systematic evaluations of security data visualizations are rarely performed, in part due to the lack of proper quantitative and qualitative measures. In this paper, we present a novel evaluation approach for security visualization based on Christopher Alexander's fifteen properties of living structures. Alexander's fifteen properties are derived from visual patterns that appear in nature, and each represents a guideline for good design. We believe that using these fundamental properties has the potential to produce a more robust evaluation, as each property offers essential qualities that enable better analytical reasoning. We demonstrate how to use Alexander's properties to evaluate security-related visualizations, deriving a set of visualization-specific evaluation properties based on Alexander's original ones.
ABSTRACT: The richness and effectiveness of client-side vulnerabilities have contributed to an accelerated shift toward client-side Web attacks. In order to understand the volume and nature of such malicious Web pages, we perform a detailed analysis of a subset of top visited Web sites identified using Google Trends. Our study is limited to Arabic content on the Web, and thus only the top Arabic search terms are considered. To carry out this study, we analyze more than 7,000 distinct domain names by traversing all the visible pages within each domain. To identify different types of suspected phishing and malware pages, we use the APIs of the Sucuri SiteCheck, McAfee SiteAdvisor, Google Safe Browsing, Norton, and AVG website scanners. The study shows the existence of malicious content across a variety of types of Web pages. The results indicate that a significant number of these sites carry some known malware, are in a blacklisted status, or run some out-of-date software. Throughout our analysis, we characterize the impact of the detected malware families and speculate as to how the Web servers reported as positive became infected.
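As an example of querying one of these scanners programmatically, the sketch below checks a URL against the Google Safe Browsing Lookup API (v4); the request shape follows the public v4 documentation, but field names and threat types should be verified against the current docs, and the API key is a placeholder:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; obtain one from Google Cloud
ENDPOINT = ("https://safebrowsing.googleapis.com/v4/"
            f"threatMatches:find?key={API_KEY}")

def check_url(url):
    """Return Safe Browsing matches for a URL (empty list = not listed)."""
    body = {
        "client": {"clientId": "example-scanner", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    resp = requests.post(ENDPOINT, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json().get("matches", [])

print(check_url("http://example.com/"))
```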
ABSTRACT: Although search engines have deployed various techniques to detect and filter out Web spam, Web spammers continue to develop new tactics to influence the results of search engines' ranking algorithms for the purpose of obtaining undeservedly high ranks. In this paper, we study the effect of the page language on spam detection features. We examine how the distribution of a set of selected detection features changes according to the page language. We also study the effect of the page language on the detection rate of a given classifier using a selected set of detection features. The analysis results show that selecting suitable features for a classifier that segregates spam pages depends heavily on the language of the examined Web page, due in part to the different set of Web spam mechanisms used by each type of spammer.
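A per-feature distribution comparison of the kind described can be run with a two-sample Kolmogorov-Smirnov test; the column names and input file below are illustrative assumptions:

```python
import pandas as pd
from scipy.stats import ks_2samp

# pages_features.csv is an illustrative file: one row per page with a
# "language" column and precomputed detection features.
df = pd.read_csv("pages_features.csv")
arabic = df[df["language"] == "ar"]
english = df[df["language"] == "en"]

for feature in ["anchor_text_ratio", "num_words", "avg_word_length"]:
    stat, p = ks_2samp(arabic[feature], english[feature])
    print(f"{feature}: KS={stat:.3f}, p={p:.3g}")  # small p => distributions differ
```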
ABSTRACT: Although network reconnaissance through scanning has been well explored in the literature, new scan detection proposals with various detection features and capabilities continue to appear. To our knowledge, however, there is little discussion of reliable methodologies for evaluating network scanning detectors. In this paper, we show that establishing ground truth labels of scanning activity on non-synthetic network traces is a more difficult problem than labeling conventional intrusions. The main problem stems from the lack of absolute ground truth (AGT), and we identify the specific types of errors this admits. For real-world network traffic, many events can typically be interpreted equally as legitimate or as intrusions; establishing AGT is therefore infeasible, since it depends on unknowable intent. We explore how an estimated ground truth based on discrete classification criteria can be misleading, since typical detection accuracy measures depend strongly on the chosen criteria. We also present a methodology for evaluating and comparing scan detection algorithms that classifies remote addresses based on continuous scores, designed to provide a more accurate reference for evaluation. The challenge of conducting a reliable evaluation in the absence of AGT applies to other areas of network intrusion detection, where corresponding requirements and guidelines apply.
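To make the continuous-score idea concrete, the sketch below scores each remote address by combining destination fan-out with connection-failure ratio; the two indicators are standard scan signals, but their weighting here is illustrative rather than the paper's formula:

```python
from collections import defaultdict

def score_remotes(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port, succeeded) tuples.
    Returns a continuous scan-likelihood score in [0, 1] per source."""
    dests = defaultdict(set)
    attempts = defaultdict(int)
    failures = defaultdict(int)
    for src, dst, port, ok in flows:
        dests[src].add((dst, port))
        attempts[src] += 1
        failures[src] += (not ok)
    max_fanout = max((len(d) for d in dests.values()), default=1)
    return {src: 0.5 * len(dests[src]) / max_fanout        # destination fan-out
                 + 0.5 * failures[src] / attempts[src]      # failure ratio
            for src in dests}

# A wide, failing scanner should outscore a host making repeated
# successful connections to a single service.
flows = ([("1.2.3.4", f"10.0.0.{i}", 22, False) for i in range(50)]
         + [("5.6.7.8", "10.0.0.1", 80, True)] * 10)
print(sorted(score_remotes(flows).items(), key=lambda kv: -kv[1]))
```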
Article · Apr 2012 · International Journal of Information Security