Article

Provable Protection against Web Application Vulnerabilities Related to Session Data Dependencies


Abstract

Web applications are widely adopted and their correct functioning is mission critical for many businesses. At the same time, Web applications tend to be error prone and implementation vulnerabilities are readily and commonly exploited by attackers. The design of countermeasures that detect or prevent such vulnerabilities or protect against their exploitation is an important research challenge for the fields of software engineering and security engineering. In this paper, we focus on one specific type of implementation vulnerability, namely, broken dependencies on session data. This vulnerability can lead to a variety of erroneous behavior at runtime and can easily be triggered by a malicious user by applying attack techniques such as forceful browsing. This paper shows how to guarantee the absence of runtime errors due to broken dependencies on session data in Web applications. The proposed solution combines development-time program annotation, static verification, and runtime checking to provably protect against broken data dependencies. We have developed a prototype implementation of our approach, building on the JML annotation language and the existing static verification tool ESC/Java2, and we successfully applied our approach to a representative J2EE-based e-commerce application. We show that the annotation overhead is very small, that the performance of the fully automatic static verification is acceptable, and that the performance overhead of the runtime checking is limited.
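The three ingredients the abstract names (development-time annotation, static verification, runtime checking) can be illustrated with a minimal, self-contained sketch. Everything here is hypothetical: the `CheckoutComponent` class, the `SessionStub` stand-in for `javax.servlet.http.HttpSession`, and the `"cart"` attribute are illustrative names, not taken from the paper.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a component declares (in a JML-style //@ comment) which session
// attribute it depends on, and a runtime check guards against the dependency
// being broken, e.g. by a user forcefully browsing to checkout before
// adding anything to the cart.
public class CheckoutComponent {
    // Stand-in for javax.servlet.http.HttpSession (illustrative only).
    public static class SessionStub {
        private final Map<String, Object> attrs = new HashMap<>();
        public Object getAttribute(String name) { return attrs.get(name); }
        public void setAttribute(String name, Object value) { attrs.put(name, value); }
    }

    //@ requires session.getAttribute("cart") != null;  // statically verifiable annotation
    public static String checkout(SessionStub session) {
        Object cart = session.getAttribute("cart");
        if (cart == null) {
            // Runtime check: the dependency on "cart" is broken.
            throw new IllegalStateException("broken session data dependency: cart");
        }
        return "order placed for " + cart;
    }

    public static void main(String[] args) {
        SessionStub session = new SessionStub();
        session.setAttribute("cart", "3 items");
        System.out.println(checkout(session)); // dependency satisfied
    }
}
```

In the paper's approach the static verifier (ESC/Java2) discharges the annotation at development time, so the runtime check only has to fire for paths the verifier could not cover.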


... We need a single interface that is the only entry point to create or update shared account data. Some of the data we have in the database might be app specific and therefore should stay within each app; anything that is shared across apps should be moved behind the new interface [4]. Often a centralized authentication system will store the following information: ...
Conference Paper
Single sign-on (SSO) is an identity management technique that gives users the ability to use multiple Web services with one set of credentials. However, when the authentication server is down or unavailable, users cannot access Web services, even if the services are operating normally. Therefore, enabling continuous use is important in single sign-on. In this paper, we present a security framework to overcome credential problems when accessing multiple web applications. We explain the system's functionality with respect to authorization and authentication, and we consider these methods from the viewpoints of continuity, security, and efficiency, which make the framework highly secure.
... The important aims within secured software are to confirm the presence of security attributes in the software and to increase the delivered reliability of the software to the user as in [35], [36] and [37]. As the vulnerability population is vast, it is impossible to examine each of them individually [37]. ...
Article
Full-text available
This article describes the process of simplifying software security classification. The inputs of this process include a reference model from a previous researcher and the existing Common Vulnerabilities and Exposures (CVE) database. An interesting aim is to find out how we can make the secured software framework implementable in practice. In order to answer this question, some inquiries were set out regarding the reference model and the meta-process for classification to become a workable measurement system. The outputs of the process are a discussion of the experimental results and the experts' validation. The experiments use the existing CVE database, which serves as the basis for analysis when a) the framework is applied on three mixed datasets, and b) the framework is applied on two focused datasets. The first explains the result when the framework is applied on CVE data drawn randomly from a mix of vendors, and the latter on CVE data drawn randomly but from selected vendors. The metrics used in this assessment are precision and recall. The results show a strong indication that the framework can produce acceptable output accuracy. Apart from that, several experts' views were discussed to show the correctness and eliminate the ambiguity of the classification rules and to validate the whole framework process. © 2018 International Journal of Advanced Computer Science and Applications.
... The verification of traditional, non-MVC web applications, has also been investigated in recent years [15,9,10,11]. In this paper, by focusing on MVC style web applications we are able to exploit the modularity in the MVC architecture and extract formal data models from existing applications. ...
Conference Paper
The use of scripting languages to build web applications has increased programmer productivity, but at the cost of degrading dependability. In this paper we focus on a class of bugs that appear in web applications that are built based on the Model-View-Controller architecture. Our goal is to automatically discover data model errors in Ruby on Rails applications. To this end, we created an automatic translator that converts data model expressions in Ruby on Rails applications to formal specifications. In particular, our translator takes Active Records specifications (which are used to specify data models in Ruby on Rails applications) as input and generates a data model in Alloy language as output. We then use bounded verification techniques implemented in the Alloy Analyzer to look for errors in these formal data model specifications. We applied our approach to two open source web applications to demonstrate its feasibility.
Preprint
Full-text available
Marketplaces for distributing software products and services have been getting increasing popularity. GitHub, which is most known for its version control functionality through Git, launched its own marketplace in 2017. GitHub Marketplace hosts third-party apps and actions to automate workflows in software teams. Currently, this marketplace hosts 440 Apps and 7,878 Actions across 32 different categories. Overall, 419 third-party developers released their apps on this platform, which 111 distinct customers adopted. The popularity and accessibility of GitHub projects have made this platform and the projects hosted on it one of the most frequent subjects for experimentation in software engineering research. A simple Google Scholar search shows that 24,100 research papers have discussed GitHub within the software engineering field since 2017, but none have looked into the marketplace. The GitHub Marketplace provides a unique source of information on the tools used by practitioners in the Open Source Software (OSS) ecosystem for automating their project's workflow. In this study, we (i) mine and provide a descriptive overview of the GitHub Marketplace, (ii) perform a systematic mapping of research studies in automation for open source software, and (iii) compare the state of the art with the state of the practice on the automation tools. We conclude the paper by discussing the potential of GitHub Marketplace for knowledge mobilization and collaboration within the field. This is the first study on the GitHub Marketplace in the field.
Conference Paper
Full-text available
Web applications are widely adopted in everyday life. At the same time, web applications tend to be error prone, and implementation vulnerabilities are readily exploited by attackers. Therefore, it is increasingly important to ensure the reliability and security of web applications. In this paper, we focus on the specific problem of broken authentication and session management attacks against web applications. We present a study of this kind of attack and its preventive measures.
Conference Paper
As web applications become more complex, automated techniques for their testing and verification have become essential. Many of these techniques, such as ones for identifying security vulnerabilities, require information about a web application’s control flow. Currently, this information is manually specified or automatically generated using techniques that cannot give strong guarantees of completeness. This paper presents a new static analysis based approach for identifying control flow in web applications that is both automated and provides stronger guarantees of completeness. The empirical evaluation of the approach shows that it is able to identify more complete control flow information than other approaches with comparable analysis run time.
Article
In many applications, SQL injection attacks are becoming a serious threat to the integrity and security of web database systems. These attacks succeed because queries provided by users can be modified by unethical users, causing the system to execute the attackers' own query structures. Careful and vigilant coding is therefore required of legitimate developers when writing SQL queries, since the developer has to validate user input (a source of possible injection attacks). However, this type of validation becomes tedious for large projects where many users write program code and the relevant queries for the same application. This paper introduces an intelligent dynamic query intent evaluation technique to learn and predict the intent of SQL queries provided by users and to compare the identified query structure with the query structure generated from user input, in order to detect possible attacks by unethical users automatically. This type of evaluation helps reduce the care the user must take when SQL queries are written. The work has been fully automated by implementing intelligent validation techniques in order to minimize user intervention. The main advantage of this system is that it applies a decision tree classification algorithm, enhanced with temporal rules, to find unethical users intelligently at the query execution points, so that the database manager of the system can be informed of new possible query execution points with an intent to attack, thereby preventing SQL injection attacks.
Article
Web applications have become crucial components in providing services on the internet. At the same time, vulnerabilities are severely affecting the functioning of web applications, and web applications may expose organizations to significant risk if they are not properly protected. Several hardware-dependent solutions exist, but hardware maintenance is a significant problem. This paper proposes a new approach of locking restricted or confidential pages behind an authentication page. This prevents unauthorized direct access to the restricted pages by a malicious user, thereby keeping confidential information secure.
Conference Paper
Full-text available
The enforcement of navigation constraints in web applications is challenging and error prone due to the unrestricted use of navigation functions in web browsers. This often leads to navigation errors, producing cryptic messages and exposing information that can be exploited by malicious users. We propose a runtime enforcement mechanism that restricts the control flow of a web application to a state machine model specified by the developer, and use model checking to verify temporal properties on these state machines. Our experiments, performed on three real-world applications, show that 1) our runtime enforcement mechanism incurs negligible overhead under normal circumstances, and can even reduce server processing time in handling unexpected requests; 2) by combining runtime enforcement with model checking, navigation correctness can be efficiently guaranteed in large web applications.
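The enforcement idea in this abstract can be sketched as a small state machine consulted on each request. The page names and the `NavigationEnforcer` API below are hypothetical illustrations, not the authors' implementation.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Runtime enforcement of a developer-specified navigation state machine:
// a request is allowed only if the model has an edge from the current
// page to the requested page; anything else is rejected cheaply.
public class NavigationEnforcer {
    private final Map<String, Set<String>> edges = new HashMap<>();
    private String current;

    public NavigationEnforcer(String startPage) { current = startPage; }

    // Declare a legal transition in the model.
    public void allow(String from, String to) {
        edges.computeIfAbsent(from, k -> new HashSet<>()).add(to);
    }

    // Advance only along modeled edges; return whether the request was accepted.
    public boolean request(String target) {
        if (edges.getOrDefault(current, Set.of()).contains(target)) {
            current = target;
            return true;
        }
        return false; // unexpected request (e.g. via browser back/forward)
    }

    public static void main(String[] args) {
        NavigationEnforcer nav = new NavigationEnforcer("login");
        nav.allow("login", "home");
        nav.allow("home", "checkout");
        System.out.println(nav.request("home"));  // prints true
        System.out.println(nav.request("admin")); // prints false
    }
}
```

Temporal properties ("checkout is never reachable without login") can then be model-checked on the same state machine offline, as the abstract describes.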
Article
Full-text available
The Java Modeling Language (JML) can be used to specify the detailed design of Java classes and interfaces by adding annotations to Java source files. The aim of JML is to provide a specification language that is easy to use for Java programmers and that is supported by a wide range of tools for specification typechecking, runtime debugging, static analysis, and verification. This paper gives an overview of the main ideas behind JML, details about JML's wide range of tools, and a glimpse into existing applications of JML.
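A minimal flavour of such annotations, using a hypothetical `BoundedCounter` class: JML specifications live in `//@` comments, so the file still compiles with a plain Java compiler while JML tools read the pre-/postconditions and the invariant.

```java
// Illustrative JML-annotated class (names are ours, not from the paper).
public class BoundedCounter {
    //@ public invariant 0 <= count && count <= max;
    private int count = 0;
    private final int max;

    //@ requires limit > 0;
    public BoundedCounter(int limit) { this.max = limit; }

    //@ requires count < max;                 // precondition
    //@ ensures count == \old(count) + 1;     // postcondition
    public void increment() { count++; }

    public int value() { return count; }

    public static void main(String[] args) {
        BoundedCounter c = new BoundedCounter(3);
        c.increment();
        System.out.println(c.value()); // prints 1
    }
}
```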
Conference Paper
Full-text available
It is not easy to build Internet applications with common techniques, such as CGI scripts and Perl. Therefore, we designed the DiCons language, which supports the development of a range of Internet applications at the appropriate level of abstraction. In this paper we discuss the design of DiCons, we give an overview of the tool support and we explain the language by means of an example.
Conference Paper
Full-text available
Smart card applications often handle privacy-sensitive information, and therefore must obey certain security policies. Such policies are usually described by high-level security properties, stating for example that no authentication must take place within a transaction.
Conference Paper
Full-text available
The Java Modeling Language (JML) is widely used in academic research as a common language for formal methods tools that work with Java. JML is a design by contract language that can be used to specify detailed designs of Java programs, frameworks, and class libraries. Over twenty research groups worldwide have built several tools for checking code and finding bugs (see jmlspecs.org). This tutorial will give background for researchers and practitioners interested in doing formal methods research and in using JML for specifying the sequential behavior of Java classes and interfaces. Attendees will write JML specifications for a data type, including pre- and postconditions for methods and object invariants. They will also learn how to use the most important JML tools. In addition, they will learn how to use model fields to hide the actual field declarations in classes, and how JML supports modular reasoning about subtypes with behavioral subtyping.
Conference Paper
Full-text available
Developing safe multithreaded software systems is difficult due to the potential unwanted interference among concurrent threads. This paper presents a flexible methodology for object-oriented programs that protects object structures against inconsistency due to race conditions. It is based on a recent methodology for single-threaded programs where developers define aggregate object structures using an ownership system and declare invariants over them. The methodology is supported by a set of language elements and by both a sound modular static verification method and run-time checking support. The paper reports on preliminary experience with a prototype implementation.
Article
Full-text available
Given a network that deploys multiple firewalls and network intrusion detection systems (NIDSs), ensuring that these security components are correctly configured is a challenging problem. Although models have been developed to reason independently about the effectiveness of firewalls and NIDSs, there is no common framework to analyze their interaction. This paper presents an integrated, constraint-based approach for modeling and reasoning about these configurations. Our approach considers the dependencies among the two types of components, and can reason automatically about their combined behavior. We have developed a tool for the specification and verification of networks that include multiple firewalls and NIDSs, based on this approach. This tool can also be used to automatically generate NIDS configurations that are optimal relative to a given cost function.
Article
Full-text available
Smart card applications often handle privacy-sensitive information, and therefore must obey certain security policies. Typically, such policies are described as high-level security properties, stating for example that no pin verification must take place within a transaction. Behavioural interface specification languages, such as JML (Java Modeling Language), have been successfully used to validate functional properties of smart card applications. However, high-level security properties cannot directly be expressed in such languages. Therefore, this paper proposes a method to translate high-level security properties into JML annotations. The method synthesises appropriate annotations and weaves them throughout the application. In this way, security policies can be validated using existing tools for JML. The method is general and applies to a large class of security properties. To validate the method, it has been applied to several realistic examples of smart card applications. This allowed us to find violations against the documented security policies for some of these applications.
Article
Full-text available
When separately written programs are composed so that they may cooperate, they may instead destructively interfere in unanticipated ways. These hazards limit the scale and functionality of the software systems we can successfully compose. This dissertation presents a framework for enabling those interactions between components needed for the cooperation we intend, while minimizing the hazards of destructive interference. Great progress on the composition problem has been made within the object paradigm, chiefly in the context of sequential, single-machine programming among benign components. We show how to extend this success to support robust composition of concurrent and potentially malicious components distributed over potentially malicious machines. We present E, a distributed, persistent, secure programming language, and CapDesk, a virus-safe desktop built in E, as embodiments of the techniques we explain.
Article
Full-text available
Design by contract is a lightweight technique for embedding elements of formal specification (such as invariants, pre and postconditions) into an object-oriented design. When contracts are made executable, they can play the role of embedded, online oracles. Executable contracts allow components to be responsive to erroneous states and, thus, may help in detecting and locating faults. In this paper, we define vigilance as the degree to which a program is able to detect an erroneous state at runtime. Diagnosability represents the effort needed to locate a fault once it has been detected. In order to estimate the benefit of using design by contract, we formalize both notions of vigilance and diagnosability as software quality measures. The main steps of measure elaboration are given, from informal definitions of the factors to be measured to the mathematical model of the measures. As is the standard in this domain, the parameters are then fixed through actual measures, based on a mutation analysis in our case. Several measures are presented that reveal and estimate the contribution of contracts to the overall quality of a system in terms of vigilance and diagnosability
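The role of executable contracts as embedded, online oracles can be sketched as explicit pre- and postcondition checks. The `Account` example is illustrative, not from the paper: the precondition detects an erroneous call (vigilance), and the postcondition localizes the fault to this method (diagnosability).

```java
// Design-by-contract sketch: contracts compiled into runtime checks.
public class Account {
    private int balance;

    public Account(int initial) { balance = initial; }

    public void withdraw(int amount) {
        // Precondition: executable contract, rejects erroneous states early.
        if (amount <= 0 || amount > balance)
            throw new IllegalArgumentException("precondition violated: withdraw");
        int old = balance;
        balance -= amount;
        // Postcondition: an online oracle pointing at this method if it fails.
        if (balance != old - amount)
            throw new IllegalStateException("postcondition violated: withdraw");
    }

    public int balance() { return balance; }

    public static void main(String[] args) {
        Account a = new Account(100);
        a.withdraw(40);
        System.out.println(a.balance()); // prints 60
    }
}
```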
Conference Paper
Full-text available
Spec# is the latest in a long line of work on programming languages and systems aimed at improving the development of correct software. This paper describes the goals and architecture of the Spec# programming system, consisting of the object-oriented Spec# programming language, the Spec# compiler, and the Boogie static program verifier. The language includes constructs for writing specifications that capture programmer intentions about how methods and data are to be used, the compiler emits run-time checks to enforce these specifications, and the verifier can check the consistency between a program and its specifications.
Article
Full-text available
JML is a behavioral interface specification language tailored to Java. It also allows assertions to be intermixed with Java code, as an aid to verification and debugging. JML is designed to be used by working software engineers, and requires only modest mathematical training. To achieve this goal, JML uses Eiffel-style assertion syntax combined with the model-based approach to specifications typified by VDM and Larch. However, JML supports quantifiers, specification-only variables, frame conditions, and other enhancements that make it more expressive for specification than Eiffel. This paper discusses the goals of JML, the overall approach, and prospects for giving JML a formal semantics through a verification logic. 1 Introduction JML [23], which stands for "Java Modeling Language," is a behavioral interface specification language (BISL) [44] designed to specify Java [2, 9] modules. Java modules are classes and interfaces. A behavioral interface specification describes both ...
Article
Firewalls and network intrusion detection systems (NIDSs) are widely used to secure computer networks. Given a network that deploys multiple firewalls and NIDSs, ensuring that these security components are correctly configured is a challenging problem. Although models have been developed to reason independently about the effectiveness of firewalls and NIDSs, there is no common framework to analyze their interaction. This paper presents an integrated, constraint-based approach for modeling and reasoning about these configurations. Our approach considers the dependencies among the two types of components, and can reason automatically about their combined behavior. We have developed a tool for the specification and verification of networks that include multiple firewalls and NIDSs, based on this approach. This tool can also be used to automatically generate NIDS configurations that are optimal relative to a given cost function.
Article
Policy-based confinement, employed in SELinux and specification-based intrusion detection systems, is a popular approach for defending against exploitation of vulnerabilities in benign software. Conventional access control policies employed in these approaches are effective in detecting privilege escalation attacks. However, they are unable to detect attacks that "hijack" legitimate access privileges granted to a program, e.g., an attack that subverts an FTP server to download the password file. (Note that an FTP server would normally need to access the password file for performing user authentication.) Some of the common attack types reported today, such as SQL injection and cross-site scripting, involve such subversion of legitimate access privileges. In this paper, we present a new approach to strengthen policy enforcement by augmenting security policies with information about the trustworthiness of data used in security-sensitive operations. We evaluated this technique using 9 available exploits involving several popular software packages containing the above types of vulnerabilities. Our technique successfully defeated these exploits.
Book
Component Software: Beyond Object-Oriented Programming explains the technical foundations of this evolving technology and its importance in the software market place. It provides in-depth discussion of both the technical and the business issues to be considered, then moves on to suggest approaches for implementing component-oriented software production and the organizational requirements for success. The author draws on his own experience to offer tried-and-tested solutions to common problems and novel approaches to potential pitfalls. Anyone responsible for developing software strategy, evaluating new technologies, buying or building software will find Clemens Szyperski's objective and market-aware perspective of this new area invaluable.
Article
Usability is a key concern in the development of verification tools. In this paper, we present a usability extension for the verification tool ESC/Java2. This enhancement is not achieved through extensions to the underlying logic or calculi of ESC/Java2; instead we focus on its human interface facets. User awareness of the soundness and completeness of the tool is vitally important in the verification process, and lack of information about this is one of the most requested features from ESC/Java2 users, and a primary complaint from ESC/Java2 critics. Areas of unsoundness and incompleteness of ESC/Java2 exist at three levels: the level of the underlying logic; the level of translation of program constructs into verification conditions; and the level of the theorem prover. The user must be made aware of these issues for each particular part of the source code analysed in order to have confidence in the verification process. Our extension to ESC/Java2 provides clear warnings to the user when unsound or incomplete reasoning may be taking place.
Article
Nowadays, software systems are evolving towards modularly composed applications, in which existing, loosely-coupled software components are reused in new compositions. In practice, these loosely-coupled software components quite often have a set of hidden dependencies on other components in software systems. In this report, we illustrate the complexity of inter-component dependencies in loosely-coupled software systems by exploring the dependencies in an existing component-based webmail application, GatorMail. We identify four types of dependencies in the GatorMail webmail application, resulting in more than 2000 dependencies. By creating a better understanding of dependencies in software compositions, we hope to come to a better management of dependencies and to achieve more reliable software compositions. Two versions of this report are available: a technical report and a shortened version without the appendices.
Conference Paper
Firewalls are the de facto core technology of today's network security and defense. However, the management of firewall rules has been proven to be complex, error-prone, costly, and inefficient for many large networked organizations. These firewall rules are mostly custom-designed and hand-written, and thus in constant need of tuning and validation, due to the dynamic nature of traffic characteristics, the ever-changing network environment, and its market demands. One of the main problems that we address in this paper is how useful, up to date, well organized, or efficient the firewall rules are in reflecting the current characteristics of network traffic. In this paper, we present a set of techniques and algorithms to analyze and manage firewall policy rules: (1) a data mining technique to deduce efficient firewall policy rules by mining the network traffic log based on frequency, (2) filtering-rule generalization (FRG) to reduce the number of policy rules by generalization, and (3) a technique to identify any decaying rule and a set of few dominant rules, to generate a new set of efficient firewall policy rules. The anomaly detection based on the mining exposes many hidden anomalies that are not detectable by analyzing only the firewall policy rules, resulting in two new types of anomalies. As a result of these mechanisms, network security administrators can automatically review and update the rules. We have developed a prototype system and demonstrated the usefulness of our approaches.
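The frequency-mining step from technique (1) can be sketched in a few lines. The rule names and the log format here are hypothetical: each log entry records which rule matched a packet, and rules that never appear are candidates for decay while frequent ones are dominant.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of mining a firewall traffic log for rule frequencies.
public class RuleMiner {
    // Count how often each rule matched; the input is a log of matched-rule
    // names (illustrative format, not the paper's actual log schema).
    public static Map<String, Integer> frequencies(List<String> matchedRuleLog) {
        Map<String, Integer> freq = new HashMap<>();
        for (String rule : matchedRuleLog)
            freq.merge(rule, 1, Integer::sum);
        return freq;
    }

    public static void main(String[] args) {
        Map<String, Integer> f = frequencies(List.of("allow80", "allow80", "deny23"));
        System.out.println(f.get("allow80")); // prints 2
    }
}
```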
Conference Paper
The future of programming languages is not what it used to be. From the 50's to the 90's, richer, more flexible, and more robust structures were imposed on raw computation. Generally, new models of data and control managed to subsume older ones. But now, as programs and applications expand beyond a single local network and a single administrative domain, the very nature of data and control changes, and many long-lasting conceptual invariants are disrupted. We discuss three of these disruptive changes, which seem to be happening all at the same time, and for related reasons: asynchronous concurrency, semistructured data, and (in much less detail) security abstractions. We outline research project that address issues in those areas, mostly as examples of much larger territories yet to explore.
Conference Paper
Injection vulnerabilities pose a major threat to application-level security. Some of the more common types are SQL injection, cross-site scripting and shell injection vulnerabilities. Existing methods for defending against injection attacks, that is, attacks exploiting these vulnerabilities, rely heavily on the application developers and are therefore error-prone. In this paper we introduce CSSE, a method to detect and prevent injection attacks. CSSE works by addressing the root cause why such attacks can succeed, namely the ad-hoc serialization of user-provided input. It provides a platform-enforced separation of channels, using a combination of assignment of metadata to user-provided input, metadata-preserving string operations and context-sensitive string evaluation. CSSE requires neither application developer interaction nor application source code modifications. Since only changes to the underlying platform are needed, it effectively shifts the burden of implementing countermeasures against injection attacks from the many application developers to the small team of security-savvy platform developers. Our method is effective against most types of injection attacks, and we show that it is also less error-prone than other solutions proposed so far. We have developed a prototype CSSE implementation for PHP, a platform that is particularly prone to these vulnerabilities. We used our prototype with phpBB, a well-known bulletin-board application, to validate our method. CSSE detected and prevented all the SQL injection attacks we could reproduce and incurred only reasonable run-time overhead.
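The channel-separation idea behind CSSE can be approximated with per-character taint metadata. `TaintedString` and its API below are illustrative only, and unlike real CSSE this sketch lives at the application level rather than inside the platform: user-provided characters carry "untrusted" metadata that survives string operations, and the SQL sink rejects untrusted metacharacters instead of serializing them ad hoc.

```java
import java.util.Arrays;

// Sketch of context-sensitive string evaluation via per-character metadata.
public class Csse {
    public static class TaintedString {
        final String text;
        final boolean[] untrusted; // one taint flag per character

        public TaintedString(String text, boolean untrustedAll) {
            this.text = text;
            this.untrusted = new boolean[text.length()];
            Arrays.fill(this.untrusted, untrustedAll);
        }

        // Metadata-preserving string operation.
        public TaintedString concat(TaintedString other) {
            TaintedString r = new TaintedString(text + other.text, false);
            System.arraycopy(untrusted, 0, r.untrusted, 0, untrusted.length);
            System.arraycopy(other.untrusted, 0, r.untrusted, untrusted.length, other.untrusted.length);
            return r;
        }
    }

    // Context-sensitive check at the SQL sink: untrusted quote/separator
    // characters would change the query structure, so they are rejected.
    public static boolean safeForSql(TaintedString s) {
        for (int i = 0; i < s.text.length(); i++)
            if (s.untrusted[i] && "'\";".indexOf(s.text.charAt(i)) >= 0) return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(safeForSql(new TaintedString("bob", true))); // prints true
    }
}
```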
Conference Paper
Security remains a major roadblock to universal acceptance of the Web for many kinds of transactions, especially since the recent sharp increase in remotely exploitable vulnerabilities has been attributed to Web application bugs. Many verification tools are discovering previously unknown vulnerabilities in legacy C programs, raising hopes that the same success can be achieved with Web applications. In this paper, we describe a sound and holistic approach to ensuring Web application security. Viewing Web application vulnerabilities as a secure information flow problem, we created a lattice-based static analysis algorithm derived from type systems and typestate, and addressed its soundness. During the analysis, sections of code considered vulnerable are instrumented with runtime guards, thus securing Web applications in the absence of user intervention. With sufficient annotations, runtime overhead can be reduced to zero. We also created a tool named WebSSARI (Web application Security by Static Analysis and Runtime Inspection) to test our algorithm, and used it to verify 230 open-source Web application projects on SourceForge.net, which were selected to represent projects of different maturity, popularity, and scale. 69 contained vulnerabilities. After notifying the developers, 38 acknowledged our findings and stated their plans to provide patches. Our statistics also show that static analysis reduced potential runtime overhead by 98.4%.
Conference Paper
To maintain loose coupling and facilitate dynamic composition, components in a pipe-and-filter architecture have a very limited syntactic interface and often communicate indirectly by means of a shared data repository. This severely limits the possibilities for compile time compatibility checking. Even static type checking is made largely irrelevant due to the very general types given in the interfaces. The combination of pipe-and-filter and a shared data repository is widely used, and in this paper we study this problem in the context of the Struts framework. We propose simple, but formally specified, behavioural contracts for components in such frameworks and show that automated formal verification of certain semantical compatibility properties is feasible. In particular, our verification guarantees that indirect data sharing through the shared data repository is performed consistently.
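The consistency property described here is close in spirit to the broken-dependency problem of the main paper: every read of the shared repository must be preceded by a matching write. A minimal sketch of such a behavioural contract check, with invented names (`Filter`, `check_pipeline`), might look like:

```python
from dataclasses import dataclass, field

# Illustrative behavioural contract: each filter declares which repository
# keys it requires (reads) and which it provides (writes).
@dataclass
class Filter:
    name: str
    reads: set = field(default_factory=set)
    writes: set = field(default_factory=set)

def check_pipeline(filters, initially_present=frozenset()):
    """Verify that every key a filter reads was written by an earlier filter
    (or was present initially) -- i.e., no broken data dependency."""
    available = set(initially_present)
    errors = []
    for f in filters:
        for key in sorted(f.reads - available):
            errors.append(f"{f.name} reads '{key}' before it is written")
        available |= f.writes
    return errors
```

For example, a pipeline `[Filter("Login", writes={"user"}), Filter("Checkout", reads={"user", "cart"})]` yields one error for the missing `cart` write, and none once `cart` is declared initially present.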
Conference Paper
The use of web applications has become increasingly popular in our routine activities, such as reading the news, paying bills, and shopping on-line. As the availability of these services grows, we are witnessing an increase in the number and sophistication of attacks that target them. In particular, SQL injection, a class of code-injection attacks in which specially crafted input strings result in illegal queries to a database, has become one of the most serious threats to web applications. In this paper we present and evaluate a new technique for detecting and preventing SQL injection attacks. Our technique uses a model-based approach to detect illegal queries before they are executed on the database. In its static part, the technique uses program analysis to automatically build a model of the legitimate queries that could be generated by the application. In its dynamic part, the technique uses runtime monitoring to inspect the dynamically-generated queries and check them against the statically-built model. We developed a tool, AMNESIA, that implements our technique and used the tool to evaluate the technique on seven web applications. In the evaluation we targeted the subject applications with a large number of both legitimate and malicious inputs and measured how many attacks our technique detected and prevented. The results of the study show that our technique was able to stop all of the attempted attacks without generating any false positives.
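A much-simplified sketch of the static-model-plus-runtime-monitor idea follows. The real AMNESIA derives query models by program analysis; the `shape` and `monitor` helpers below are invented for illustration and only compare a query's token structure against a hand-written model:

```python
import re

# Tokenize SQL into string literals, words, and punctuation.
TOKEN = re.compile(r"'[^']*'|\w+|[^\s\w]")

def shape(sql):
    """Collapse literals and numbers to a placeholder so that only the
    structural skeleton of the query remains."""
    return tuple(
        "?" if t.startswith("'") or t.isdigit() else t.upper()
        for t in TOKEN.findall(sql)
    )

# Stand-in for the statically-built model: shapes of all legitimate queries.
MODEL = {shape("SELECT * FROM users WHERE name = 'x'")}

def monitor(sql):
    """Runtime check: reject a query whose structure is not in the model."""
    if shape(sql) not in MODEL:
        raise ValueError("query violates the statically-built model")
    return sql
```

A legitimate query such as `SELECT * FROM users WHERE name = 'alice'` matches the model, while an injected `... name = '' OR '1'='1'` changes the token skeleton and is rejected before reaching the database.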
Conference Paper
The number and the importance of web applications have increased rapidly over the last years. At the same time, the quantity and impact of security vulnerabilities in such applications have grown as well. Since manual code reviews are time-consuming, error-prone and costly, the need for automated solutions has become evident. In this paper, we address the problem of vulnerable web applications by means of static source code analysis. To this end, we present a novel, precise alias analysis targeted at the unique reference semantics commonly found in scripting languages. Moreover, we enhance the quality and quantity of the generated vulnerability reports by employing a novel, iterative two-phase algorithm for fast and precise resolution of file inclusions. We integrated the presented concepts into Pixy (14), a high-precision static analysis tool aimed at detecting cross-site scripting vulnerabilities in PHP scripts. To demonstrate the effectiveness of our techniques, we analyzed three web applications and discovered 106 vulnerabilities. Both the high analysis speed as well as the low number of generated false positives show that our techniques can be used for conducting effective security audits.
Conference Paper
Characteristic problem areas experienced in the past are considered here, as well as some of the challenges that must be confronted in trying to achieve greater trustworthiness in computer systems and networks and in the overall environments in which they must operate. Some system development recommendations for the future are also discussed.
Conference Paper
Most web applications contain security vulnerabilities. The simple and natural ways of creating a web application are prone to SQL injection attacks and cross-site scripting attacks as well as other less common vulnerabilities. In response, many tools have been developed for detecting or mitigating common web application vulnerabilities. Existing techniques either require effort from the site developer or are prone to false positives. This paper presents a fully automated approach to securely hardening web applications. It is based on precisely tracking taintedness of data and checking specifically for dangerous content only in parts of commands and output that came from untrustworthy sources. Unlike previous work in which everything that is derived from tainted input is tainted, our approach precisely tracks taintedness within data values.
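A toy illustration of tracking taintedness within a data value: a string is kept as a sequence of (fragment, tainted) segments, and only the untrusted fragments are escaped on output. The segment representation and helper names (`trusted`, `untrusted`, `concat`, `render`) are invented for this sketch:

```python
import html

def untrusted(s):
    """A fragment that came from an untrustworthy source."""
    return [(s, True)]

def trusted(s):
    """A fragment written by the application developer."""
    return [(s, False)]

def concat(*parts):
    # Concatenation preserves the per-fragment taint information.
    segs = []
    for p in parts:
        segs.extend(p)
    return segs

def render(segs):
    """Check/escape dangerous content only in the tainted fragments,
    leaving trusted markup untouched."""
    return "".join(html.escape(frag) if tainted else frag
                   for frag, tainted in segs)
```

Rendering `concat(trusted("<b>Hi "), untrusted("<script>x</script>"), trusted("</b>"))` keeps the developer's `<b>` markup but neutralizes the user-supplied script tag.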
Conference Paper
The right software architecture is critical to achieving essential quality attributes, but these qualities are only realized if the program as implemented conforms to its intended architecture. Previous techniques for enforcing architecture are either unsound or place significant limitations on either architectural design or on implementation techniques. This paper presents the first system to statically enforce complete structural conformance between a rich, dynamic architectural description and object-oriented implementation code. We extend previous work to (1) explain what full structural conformance means in an object-oriented setting, and (2) enforce architectural structure in the presence of shared data. We show that the resulting system can express and enforce important structural constraints of an architecture, while still supporting key object-oriented implementation techniques. As a result of our conformance property, developers can be assured that their intended architecture is realized in code, so the system will exhibit the desired quality attributes.
Conference Paper
Improperly validated user input is the underlying root cause for a wide variety of attacks on Web-based applications. Static approaches for detecting this problem help at the time of development, but require source code and report a number of false positives. Hence, they are of little use for securing fully deployed and rapidly evolving applications. We propose a dynamic solution that tags and tracks user input at runtime and prevents its improper use to maliciously affect the execution of the program. Our implementation can be transparently applied to Java classfiles, and does not require source code. Benchmarks show that the overhead of this runtime enforcement is negligible and can prevent a number of attacks.
Conference Paper
Summary form only given. The future of programming languages is not what it used to be. From the 50's to the 90's, richer, more flexible, and more robust structures were imposed on raw computation. Generally, new models of data and control managed to subsume older ones. But now, as programs and applications expand beyond a single local network and a single administrative domain, the very nature of data and control changes, and many long-lasting conceptual invariants are disrupted. We discuss three of these disruptive changes, which seem to be happening all at the same time, and for related reasons: asynchronous concurrency, semistructured data, and (in much less detail) security abstractions. We outline research projects that address issues in those areas, mostly as examples of much larger territories yet to explore.
Conference Paper
Web software applications are increasingly being deployed in sensitive situations. Web applications are used to transmit, accept and store data that is personal, company confidential and sensitive. Input validation testing (IVT) checks user inputs to ensure that they conform to the program's requirements, which is particularly important for software that relies on user inputs, including Web applications. A common technique in Web applications is to perform input validation on the client with scripting languages such as JavaScript. An insidious problem with client-side input validation is that end users can bypass this validation. Bypassing validation can cause failures in the software, and can also break the security on Web applications, leading to unauthorized access to data, system failures, invalid purchases and entry of bogus data. We are developing a strategy called bypass testing to create client-side tests for Web applications that intentionally violate explicit and implicit checks on user inputs. This paper describes the strategy, defines specific rules and adequacy criteria for tests, describes a proof-of-concept automated tool, and presents initial empirical results from applying bypass testing.
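A sketch of how bypass tests might be generated from a form's client-side constraints. The form-spec dictionary format and the `bypass_cases` helper are invented for illustration; the idea is simply to produce inputs that deliberately violate each declared check, to be sent directly to the server:

```python
def bypass_cases(field):
    """Given a description of one form field, produce request payloads
    that violate its client-side validation rules."""
    name = field["name"]
    cases = []
    if "maxlength" in field:
        cases.append({name: "x" * (field["maxlength"] + 1)})  # too long
    if field.get("required"):
        cases.append({})                                      # field omitted
    if field.get("type") == "number":
        cases.append({name: "not-a-number"})                  # wrong type
    return cases
```

For a field like `{"name": "qty", "type": "number", "required": True, "maxlength": 3}` this yields an over-length value, a missing-field request, and a non-numeric value; a server that relies solely on client-side JavaScript validation would accept all three.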
Conference Paper
Many data-intensive applications dynamically construct queries in response to client requests and execute them. Java servlets, e.g., can create string representations of SQL queries and then send the queries, using JDBC, to a database server for execution. The servlet programmer enjoys static checking via Java's strong type system. However, the Java type system does little to check for possible errors in the dynamically generated SQL query strings. Thus, a type error in a generated selection query (e.g., comparing a string attribute with an integer) can result in an SQL runtime exception. Currently, such defects must be rooted out through careful testing, or (worse) might be found by customers at runtime. In this paper, we present a sound, static, program analysis technique to verify the correctness of dynamically generated query strings. We describe our analysis technique and provide soundness results for our static analysis algorithm. We also describe the details of a prototype tool based on the algorithm and present several illustrative defects found in senior software-engineering student-team projects, online tutorial examples, and a real-world purchase order system written by one of the authors.
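A greatly simplified sketch of flagging type errors in a generated selection query: each `attr = literal` comparison is checked against a schema. The paper's analysis works statically on the string expressions in the program; the regex-based `check_query` and the `SCHEMA` below are invented and only inspect one concrete query string:

```python
import re

# Illustrative relation schema: attribute name -> expected literal type.
SCHEMA = {"name": str, "age": int}

# Match simple comparisons of the form: attr = 'string' | attr = 123
COMPARISON = re.compile(r"(\w+)\s*=\s*('[^']*'|\d+)")

def check_query(sql):
    """Return a list of type-error messages for the comparisons in sql."""
    errors = []
    for attr, literal in COMPARISON.findall(sql):
        if attr not in SCHEMA:
            errors.append(f"unknown attribute '{attr}'")
            continue
        literal_type = str if literal.startswith("'") else int
        if literal_type is not SCHEMA[attr]:
            errors.append(f"'{attr}' compared with {literal_type.__name__}")
    return errors
```

Checking `WHERE age = 7` passes, while `WHERE name = 42` (comparing a string attribute with an integer, the paper's example) is reported instead of surfacing as an SQL runtime exception.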
Conference Paper
Architecture description languages (ADLs) are emerging as viable tools for formally representing the architectures of systems. While growing in number, they vary widely in terms of the abstractions they support and analysis capabilities they provide. Further, many languages not originally designed as ADLs serve reasonably well at representing and analyzing software architectures. This paper summarizes a taxonomic survey of ADLs that is in progress. The survey characterizes ADLs in terms of: the classes of systems they support; the inherent properties of the languages themselves; and the process and technology support they provide to represent, refine, analyze, and build systems from an architecture. Preliminary results allow us to draw conclusions about what constitutes an ADL, and how contemporary ADLs differ from each other
Article
Software architectures shift the focus of developers from lines-of-code to coarser-grained architectural elements and their overall interconnection structure. Architecture description languages (ADLs) have been proposed as modeling notations to support architecture-based development. There is, however, little consensus in the research community on what is an ADL, what aspects of an architecture should be modeled in an ADL, and which of several possible ADLs is best suited for a particular problem. Furthermore, the distinction is rarely made between ADLs on one hand and formal specification, module interconnection, simulation and programming languages on the other. This paper attempts to provide an answer to these questions. It motivates and presents a definition and a classification framework for ADLs. The utility of the definition is demonstrated by using it to differentiate ADLs from other modeling notations. The framework is used to classify and compare several existing ADLs, enabling us, in the process, to identify key properties of ADLs. The comparison highlights areas where existing ADLs provide extensive support and those in which they are deficient, suggesting a research agenda for the future
Article
Methodological guidelines for object-oriented software construction that improve the reliability of the resulting software systems are presented. It is shown that the object-oriented techniques rely on the theory of design by contract, which underlies the design of the Eiffel analysis, design, and programming language and of the supporting libraries, from which a number of examples are drawn. The theory of contract design and the role of assertions in that theory are discussed.
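Design by contract as described here can be approximated in most languages with plain assertions standing in for Eiffel's require/ensure clauses. A minimal sketch (the `Account` class is an invented example, not drawn from the cited work):

```python
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        # require: preconditions the caller must establish
        assert amount > 0, "precondition: amount must be positive"
        assert amount <= self.balance, "precondition: sufficient funds"
        old_balance = self.balance
        self.balance -= amount
        # ensure: postcondition the supplier guarantees
        assert self.balance == old_balance - amount, "postcondition violated"
        return self.balance
```

The division of responsibility is the point: a failed precondition blames the caller, a failed postcondition blames the supplier, mirroring the contract metaphor of the Eiffel approach.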
Article
Chairs of the Supervisory Committee: Professor Craig Chambers and Professor David Notkin, Computer Science and Engineering. Software architecture describes the high-level structure of a software system, and can be used for design, analysis, and software evolution tasks. However, existing tools decouple architecture from implementation, allowing inconsistencies to accumulate as a software system evolves. Because of the potential for inconsistency, engineers evolving a program cannot fully trust the architecture to accurately describe the properties or structure of the implementation.
Article
The Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems. It is a generic, stateless, protocol which can be used for many tasks beyond its use for hypertext, such as name servers and distributed object management systems, through extension of its request methods, error codes and headers [47]. A feature of HTTP is the typing and negotiation of data representation, allowing systems to be built independently of the data being transferred. HTTP has been in use by the World-Wide Web global information initiative since 1990. This specification defines the protocol referred to as "HTTP/1.1", and is an update to RFC 2068 [33].
Article
The purpose of this paper is to build the foundation for software architecture. We first develop an intuition for software architecture by appealing to several well-established architectural disciplines. On the basis of this intuition, we present a model of software architecture that consists of three components: elements, form, and rationale. Elements are either processing, data, or connecting elements. Form is defined in terms of the properties of, and the relationships among, the elements --- that is, the constraints on the elements. The rationale provides the underlying basis for the architecture in terms of the system constraints, which most often derive from the system requirements. We discuss the components of the model in the context of both architectures and architectural styles and present an extended example to illustrate some important architecture and style considerations. We conclude by presenting some of the benefits of our approach to software architecture, summarizing our ...
Article
It is not easy to build Internet applications with common techniques, such as CGI scripts and Perl. Therefore, we designed the DiCons language, which supports the development of a range of Internet applications at the appropriate level of abstraction. In this paper we discuss the design of DiCons, we give an overview of the tool support and we explain the language by means of an example.