Vijayalakshmi Atluri

National Institute of Standards and Technology, Maryland, United States


Publications (44) · 11.96 Total Impact

  • Arindam Roy · Shamik Sural · Arun Kumar Majumdar · Jaideep Vaidya · Vijayalakshmi Atluri
    ABSTRACT: Large systems are complex and typically need automatic configuration to be managed effectively. In any organization, numerous tasks have to be carried out by employees. However, due to security needs, it is not feasible to simply assign any task to the first available employee. To meet additional security requirements, constraints such as separation of duty, cardinality and binding have to be taken into consideration. Meeting these requirements imposes an extra burden on organizations, which is nevertheless unavoidable to ensure security. While a trivial way of ensuring security is to assign each user a single task, business organizations typically want to minimize costs and keep staffing requirements to a minimum. To reconcile these contradictory goals, we define the Cardinality-Constrained Mutually Exclusive Task Minimum User Problem (CMUP), which seeks the minimum number of users who can carry out a set of tasks while satisfying the given security constraints. We show that CMUP is equivalent to a constrained version of the weak chromatic number problem in hypergraphs, which is NP-hard, and therefore propose a greedy solution. Our experimental evaluation shows that the proposed algorithm is both efficient and effective.
    No preview · Article · Sep 2015 · ACM Transactions on Management Information Systems
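The greedy idea behind assigning mutually exclusive tasks to a minimum number of users can be illustrated with a small sketch. This is not the paper's algorithm (which works on hypergraphs with cardinality constraints); it handles only pairwise exclusion, and the input format is hypothetical:

```python
# Illustrative greedy assignment of tasks to users under pairwise
# mutual-exclusion constraints (not the paper's hypergraph algorithm).

def greedy_min_users(tasks, exclusions):
    """Assign each task a user id so that no two mutually exclusive
    tasks share a user, greedily reusing the lowest feasible id."""
    conflict = {t: set() for t in tasks}
    for a, b in exclusions:
        conflict[a].add(b)
        conflict[b].add(a)
    assignment = {}
    # Process high-conflict tasks first, as in greedy graph coloring.
    for t in sorted(tasks, key=lambda t: -len(conflict[t])):
        used = {assignment[c] for c in conflict[t] if c in assignment}
        user = 0
        while user in used:
            user += 1
        assignment[t] = user
    return assignment

tasks = ["approve", "audit", "pay", "record"]
exclusions = [("approve", "audit"), ("pay", "audit")]
plan = greedy_min_users(tasks, exclusions)
```

Here two users suffice: the auditing task conflicts with both payment tasks, but the latter may share one user.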
  • D. Lorenzi · E. Uzun · J. Vaidya · S. Sural · V. Atluri
    ABSTRACT: Text-based CAPTCHAs are the de facto method of choice for ensuring that humans (rather than automated bots) are interacting with websites. Unfortunately, users often find it inconvenient to read characters and type them in. Image CAPTCHAs provide an alternative that is often preferred to text-based implementations. However, image CAPTCHAs have their own set of security and usability problems. A key issue is their susceptibility to Reverse Image Search (RIS) and Computer Vision (CV) attacks. In this paper, we present a generalized methodology for transforming existing images, by applying various noise generation algorithms, into variants that are resilient to such attacks. To evaluate the usability/security tradeoff, we conduct a user study to determine whether the method can provide “usable” images that meet our security requirements, thus improving the overall security provided by image CAPTCHAs.
    No preview · Article · Jan 2015
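A toy version of noise-based image transformation, assuming a grayscale image represented as a list of pixel rows (the paper's actual noise generation algorithms are more sophisticated, and the amplitude parameter here is invented):

```python
# Toy illustration (not the paper's method): perturb an image's pixels
# with bounded random noise, so exact-match reverse image search fails
# while the distortion stays small enough for humans to tolerate.
import random

def add_noise(pixels, amplitude=12, seed=0):
    """Return a noisy copy of a grayscale image (rows of 0-255 ints)."""
    rng = random.Random(seed)
    noisy = []
    for row in pixels:
        noisy.append([min(255, max(0, p + rng.randint(-amplitude, amplitude)))
                      for p in row])
    return noisy

image = [[120] * 8 for _ in range(8)]
noisy = add_noise(image)
```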
  • David F. Ferraiolo · Vijayalakshmi Atluri · Serban I. Gavrila
    ABSTRACT: The ability to control access to sensitive data in accordance with policy is perhaps the most fundamental security requirement. Despite over four decades of security research, the limited ability of existing access control mechanisms to generically enforce policy persists. While researchers, practitioners and policy makers have specified a large variety of access control policies to address real-world security issues, only a relatively small subset of these policies can be enforced through off-the-shelf technology, and an even smaller subset can be enforced by any one mechanism. In this paper, we propose an access control framework, referred to as the Policy Machine (PM), that fundamentally changes the way policy is expressed and enforced. Employing the PM helps in building high assurance enforcement mechanisms in three respects. First, only a relatively small piece of the overall access control mechanism needs to be included in the host system (e.g., an operating system or application). This significantly reduces the amount of code that needs to be trusted. Second, it is possible to enforce the precise policies of resource owners, without compromising on enforcement or resorting to less effective administrative procedures. Third, the PM is capable of generically imposing confinement constraints that can be used to prevent leakage of information to unauthorized principals within the context of a variety of policies, including the commonly implemented Discretionary Access Control and Role-Based Access Control models.
    No preview · Article · Apr 2011 · Journal of Systems Architecture
  • Samrat Mondal · Shamik Sural · Vijayalakshmi Atluri
    ABSTRACT: Security analysis is a formal verification technique to ascertain certain desirable guarantees on an access control policy specification. Given a set of access control policies, a general safety requirement in such a system is to determine whether a desirable property is satisfied in all the reachable states. Such an analysis calls for the use of formal verification techniques. While formal analysis of traditional Role Based Access Control (RBAC) has been done to some extent, recent extensions to RBAC lack such analysis. In this paper, we consider the temporal RBAC extensions and propose a formal technique using timed automata to perform security analysis, covering both safety and liveness properties: safety properties ensure that something bad never happens, while liveness properties ensure that some good state is eventually reached. GTRBAC is a well-accepted generalized temporal RBAC model that can handle a wide range of temporal constraints when specifying different access control policies. Analysis of such a model involves mapping a GTRBAC-based system into a state transition system. Different reduction rules are proposed to simplify the modeling process depending upon the constraints supported by the system. The effect of different constraints on the modeling process is also studied.
    No preview · Article · Mar 2011 · Computers & Security
  • Source
    Samrat Mondal · Shamik Sural · Vijayalakshmi Atluri
    ABSTRACT: An access control system is often viewed as a state transition system. Given a set of access control policies, a general safety requirement in such a system is to determine whether a desirable property is satisfied in all the reachable states. Such an analysis calls for formal verification. While formal analysis of traditional RBAC has been done to some extent, its extensions lack such analysis. In this paper, we propose a formal technique to perform security analysis on the Generalized Temporal RBAC (GTRBAC) model, which can express a wide range of temporal constraints on different RBAC components such as roles, users and permissions. In the proposed approach, the GTRBAC system is first mapped to a state transition system built using timed automata. Characteristics of each role, user and permission are captured with the help of timed automata, and a single global clock is used to express the various temporal constraints supported in a GTRBAC model. Next, a set of safety and liveness properties is specified using computation tree logic (CTL). Model checking based formal verification is then carried out to verify the properties against the model and determine whether the system is secure with respect to a given set of access control policies. Both time and space analyses have been performed to study the performance of the approach under different configurations.
    Preview · Conference Paper · Jun 2009
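The essence of verifying a safety property by model checking can be sketched as plain reachability over an untimed transition system. The paper's timed automata additionally carry clock constraints, which are omitted here, and the RBAC state names below are invented for illustration:

```python
# Minimal reachability sketch (untimed stand-in for timed-automata model
# checking): a safety property holds when no "bad" state is reachable
# from the initial state.
from collections import deque

def safe(initial, transitions, bad):
    """Breadth-first search; return False iff a bad state is reachable."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if state in bad:
            return False
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# Hypothetical role life-cycle states for one role/user pair.
transitions = {
    "disabled": ["enabled"],
    "enabled": ["assigned", "disabled"],
    "assigned": ["enabled"],
}
```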
  • Vandana Pursnani Janeja · Vijayalakshmi Atluri
    ABSTRACT: Spatial outlier detection approaches identify outliers by first defining a spatial neighborhood. However, existing approaches suffer from two issues: (1) they primarily consider autocorrelation alone in forming the neighborhood, ignoring heterogeneity among spatial objects; and (2) they consider attributes independently (whether single or multiple) rather than the interrelationships among them when identifying how distinct an object is from its neighbors. As a result, one may not identify truly unusual spatial objects and may also end up with frivolous outliers. In this paper, we revisit the computation of the spatial neighborhood and propose an approach to address the above two issues. We begin by identifying a spatially related neighborhood, capturing autocorrelation. We then consider interrelationships between attributes and multiple, multilevel distributions within these attributes, thus accounting for autocorrelation and heterogeneity in various forms. Subsequently, we identify outliers in these neighborhoods. Our experimental results on various datasets (North Carolina SIDS data, New Mexico Leukemia data, etc.) indicate that our approach indeed correctly identifies outliers in heterogeneous neighborhoods.
    No preview · Article · Jan 2009 · Intelligent Data Analysis
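A much-simplified neighborhood-based outlier score, illustrating only the general idea of scoring an object against its spatial neighbors (the paper's approach additionally models attribute interrelationships and multilevel distributions):

```python
# Illustrative single-attribute neighborhood outlier score: how many
# neighbor standard deviations an object's value lies from the
# neighbor mean. Input format is an assumption for this sketch.
def outlier_scores(values, neighbors):
    scores = {}
    for i, nbrs in neighbors.items():
        vals = [values[j] for j in nbrs]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        std = var ** 0.5 or 1.0          # guard against zero spread
        scores[i] = abs(values[i] - mean) / std
    return scores

values = {0: 1.0, 1: 1.2, 2: 9.0, 3: 0.9}
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
scores = outlier_scores(values, neighbors)
```

Object 2 stands far from its (homogeneous) neighbors, so it scores highest.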
  • Li Qin · Vijayalakshmi Atluri
    ABSTRACT: It is natural for ontologies to evolve over time. These changes can occur at the structural and semantic levels. Due to changes to an ontology, its data instances may become invalid and, as a result, non-interpretable. In this paper, we address precisely this problem: the validity of data instances under ontology evolution. Towards this end, we make three novel contributions to the area of the Semantic Web. First, we propose formal notions of structural validity and semantic validity of data instances, and present approaches to ensure them. Second, we propose the semantic view as part of an ontology, and demonstrate that it is sufficient to validate a data instance against the semantic view rather than the entire ontology. We discuss how the semantic view can be generated through an implication analysis, i.e., how semantic changes to one component imply semantic changes to other components in the ontology. Third, we propose a validity identification approach that maintains a hash value of the semantic view locally at the data instance.
    No preview · Article · Jan 2009 · Information and Software Technology
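The hash-based validity check can be sketched as follows; the dictionary representation of a semantic view and the use of SHA-256 are assumptions made for illustration, not details from the paper:

```python
# Sketch: a data instance stores a hash of its semantic view; validity
# is re-checked by hashing the current view and comparing. The view is
# serialized canonically so key order cannot change the hash.
import hashlib
import json

def view_hash(semantic_view):
    canonical = json.dumps(semantic_view, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def still_valid(current_view, stored_hash):
    """True while the semantic view relevant to the instance is unchanged."""
    return view_hash(current_view) == stored_hash

# Hypothetical semantic view: a concept and its relevant properties.
stored = view_hash({"Person": ["name", "email"]})
```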
  • Source
    Vandana P. Janeja · Vijayalakshmi Atluri
    ABSTRACT: Often, it is required to identify anomalous windows reflecting an unusual rate of occurrence of a specific event of interest. The spatial scan statistic approach moves a scan window over the region, computes a statistic of the parameter(s) of interest, and identifies anomalous windows. While this approach has been successfully employed, earlier proposals suffer from two limitations: (i) in general, the scan window is regular shaped (e.g., circle, rectangle), identifying anomalous windows of fixed shapes only, whereas the region of anomaly is not necessarily regular shaped; recent proposals to identify windows of irregular shapes identify windows larger than the true anomalies, or penalize large windows. (ii) These techniques account for autocorrelation among spatial data, but not spatial heterogeneity, often resulting in inaccurate anomalous windows. We propose a random walk based Free-Form Spatial Scan Statistic (FS3). We construct a Weighted Delaunay Nearest Neighbor (WDNN) graph to capture spatial autocorrelation and heterogeneity. Using random walks, we identify natural free-form scan windows that are not restricted to a predefined shape, and prove that they are not random. Applying FS3 to real datasets has shown that it identifies more refined anomalous windows, with a better likelihood ratio of being an anomaly, than earlier spatial scan statistic approaches.
    Preview · Article · Nov 2008 · IEEE Transactions on Knowledge and Data Engineering
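A toy weighted random walk that grows a free-form window over a graph; the WDNN graph construction and the scan statistic itself are omitted, and the graph below is an invented example:

```python
# Illustrative free-form window growth: a weighted random walk visits
# nodes, and the set of visited nodes forms a candidate scan window of
# no predefined shape.
import random

def random_walk_window(graph, start, steps, seed=0):
    """Return the set of nodes visited by a weighted random walk."""
    rng = random.Random(seed)
    node, window = start, {start}
    for _ in range(steps):
        nbrs = list(graph[node])
        weights = [graph[node][n] for n in nbrs]
        node = rng.choices(nbrs, weights=weights)[0]
        window.add(node)
    return window

# Hypothetical neighbor graph with edge weights (e.g., similarity).
graph = {
    "a": {"b": 1.0, "c": 0.2},
    "b": {"a": 1.0, "c": 0.5},
    "c": {"a": 0.2, "b": 0.5},
}
window = random_walk_window(graph, "a", steps=10)
```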
  • Article: SemDiff
    Li Qin · Vijayalakshmi Atluri

    No preview · Article · Oct 2008 · International journal on Semantic Web and information systems
  • Source
    Heechang Shin · Vijayalakshmi Atluri · Jaideep Vaidya
    ABSTRACT: Location based services (LBS) aim at delivering point-of-need information. Personalization and customization of such services, based on the profiles of mobile users, would significantly increase their value. Since profiles may include sensitive information about mobile users, and moreover can help identify a person, customization is allowed only when the security and privacy policies dictated by the users are respected. While LBS are often presumed to be untrusted entities, the location services that capture and maintain mobile users' locations to enable communication are considered trusted, and therefore can capture and manage the profile information. In this paper, we address the problem of privacy preservation via anonymization. Prior research in this area attempts to ensure k-anonymity by generalizing the location. However, a person may still be identified based on his/her profile if the profiles of all k people are not the same. We extend the notion of k-anonymity by proposing a profile-based k-anonymization model that guarantees anonymity even when profiles of mobile users are known to untrusted entities. Specifically, our proposed approaches generalize both location and profiles to the extent specified by the user. We support three types of queries: mobile users requesting stationary resources, stationary users requesting mobile resources, and mobile users requesting mobile resources. We propose a novel unified index structure, called the P<sup>TPR</sup>-tree, which organizes both the locations of mobile users and their profiles using a single index, and as a result offers significant performance gains during anonymization as well as query processing.
    Preview · Conference Paper · May 2008
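Profile generalization for k-anonymity can be illustrated on a single attribute; the widening age buckets below are a hypothetical generalization hierarchy, not the paper's P<sup>TPR</sup>-tree mechanism:

```python
# Illustrative profile generalization: coarsen an age attribute until
# every generalized value occurs at least k times, so no user's profile
# is rarer than k within the group.
from collections import Counter

def generalize_ages(ages, k, widths=(1, 5, 10, 50)):
    """Bucket ages into progressively wider ranges until k-anonymous."""
    for w in widths:
        buckets = [(a // w) * w for a in ages]
        if min(Counter(buckets).values()) >= k:
            return [(b, b + w - 1) for b in buckets]
    return None  # not achievable with the given hierarchy

groups = generalize_ages([23, 24, 27, 31, 34, 38], k=3)
```

With k = 3, five-year buckets are still too fine, but ten-year buckets make every generalized value shared by at least three users.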
  • Source
    Haibing Lu · Jaideep Vaidya · Vijayalakshmi Atluri
    ABSTRACT: A decomposition of a binary matrix into two matrices gives a set of basis vectors and their appropriate combination to form the original matrix. Such decomposition solutions are useful in a number of application domains including text mining, role engineering and knowledge discovery. While a binary matrix can be decomposed in several ways, certain decompositions better characterize the semantics associated with the original matrix in a succinct but comprehensive way. Indeed, one can find different decompositions optimizing different criteria matching various semantics. In this paper, we first present a number of variants of the optimal Boolean matrix decomposition problem that have pragmatic implications. We then present a unified framework for modeling the optimal binary matrix decomposition and its variants using binary integer programming. Such modeling allows us to directly adopt the huge body of heuristic solutions and tools developed for binary integer programming. Although the proposed solutions are applicable to any domain of interest, to provide more meaningful discussion and results we present the binary matrix decomposition problem in a role engineering context, whose goal is to discover an optimal and correct set of roles from existing permissions, referred to as the role mining problem (RMP). This problem has gained significant interest in recent years as role-based access control has become a popular means of enforcing security in databases. We consider several variants of the above basic RMP, including the min-noise RMP, delta-approximate RMP and edge-RMP. Solutions to each of them aid security administrators in specific scenarios. We then model these variants as Boolean matrix decomposition and present efficient heuristics to solve them.
    Preview · Conference Paper · May 2008
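The core notion of an exact Boolean matrix decomposition in the role-mining setting can be checked directly: the Boolean product of a candidate user-to-role matrix and role-to-permission matrix must reproduce the original user-permission matrix. The matrix names and small example below are illustrative, not taken from the paper:

```python
# Boolean matrix product: entry (i, j) is 1 iff user i has some role k
# that grants permission j. An exact decomposition reproduces UPA.
def boolean_product(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[int(any(A[i][k] and B[k][j] for k in range(inner)))
             for j in range(cols)] for i in range(rows)]

UPA = [[1, 1, 0],        # user-permission matrix (3 users, 3 permissions)
       [1, 1, 1],
       [0, 0, 1]]
UA = [[1, 0],            # user-to-role assignment (2 candidate roles)
      [1, 1],
      [0, 1]]
PA = [[1, 1, 0],         # role-to-permission assignment
      [0, 0, 1]]
```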
  • Source
    Vijayalakshmi Atluri · Soon Ae Chun
    ABSTRACT: Geospatial databases include any data with reference to geo-coordinate information. Geospatial data can be either digital raster images that represent the data on the earth in the form of pixels, or digital vector data, much of it derived from satellites. Because many of the high-resolution satellites are commercial in nature, uncontrolled dissemination of high-resolution imagery may pose severe threats to national security as well as personal privacy. The severity of these threats is even greater when this information is combined with vector maps or other publicly available vector data. In this paper, we present a GeoSpatial Authorisation System (GSAS), based on a GeoSpatial Authorisation Model (GSAM), for specifying and enforcing access control policies that make reference to spatial regions and locational credentials. The specification of authorisations is based on the spatial and temporal attributes associated with the image data, the resolution of the images, the geospatial credentials associated with users, and privilege modes including view, zoom-in, overlay, view-thumbnail, view-annotation, identify, animate and fly-by that are relevant for geospatial image data. We present the GSAS system and its functionalities.
    Preview · Article · Jan 2007 · International Journal of Information and Computer Security
  • Source
    Prasenjit Mitra · Chi-Chun Pan · Peng Liu · Vijayalakshmi Atluri
    ABSTRACT: Today, many applications require users from one organization to access data belonging to other organizations. While traditional solutions offered for federated and mediated databases facilitate this by sharing metadata, this may not be acceptable to certain organizations due to privacy concerns. In this paper, we propose a novel solution - the Privacy-preserving Access Control Toolkit (PACT) - that enables privacy-preserving secure semantic access control and allows sharing of data among heterogeneous databases without having to share metadata. PACT uses encrypted ontologies, encrypted ontology-mapping tables and conversion functions, encrypted role hierarchies and encrypted queries. The encrypted results of queries are sent directly from the responding system to the requesting system, bypassing the mediator to further improve the security of the system. PACT provides semantic access control using ontologies and semantically expanded authorization tables at the mediator. One of the distinguishing features of PACT is that it requires very few changes to the underlying databases. Despite using encrypted queries and encrypted mediation, we demonstrate that PACT provides acceptable performance.
    Preview · Conference Paper · Jan 2006
  • Li Qin · Vijayalakshmi Atluri

    No preview · Article · Jan 2006
  • Li Qin · Vijayalakshmi Atluri
    ABSTRACT: To achieve improved availability and performance, local copies of remote data from autonomous sources are often maintained. Web search engines are the primary examples of such services. Increasingly, these services are utilizing the Semantic Web, as it is often envisioned as a machine-interpretable web. In order to keep the local repositories current, it is essential to synchronize their content with that of their original sources, and change detection is the first step in doing so. Efficient change detection mechanisms are essential because the local repositories are often very large. In this paper, we present an approach that exploits the semantic relationships among concepts to guide the change detection process. Given changes to some seed instances, a reasoning engine fires a set of pre-defined rules to characterize the profile of the changed target instances. Beyond change detection, our semantics-based approach of exploiting semantic associations can be applied in other settings, such as guiding information discovery for agents and maintaining consistency among distributed information sources.
    No preview · Article · Jan 2006 · Lecture Notes in Computer Science
  • V.P. Janeja · V. Atluri
    ABSTRACT: Often, it is required to identify anomalous windows over a spatial region that reflect an unusual rate of occurrence of a specific event of interest. A spatial scan statistic essentially considers a scan window and identifies anomalous windows by moving the scan window over the region. While the spatial scan statistic has been successful, earlier proposals suffer from two limitations: (i) They restrict the scan window to a regular shape (e.g., circle, rectangle, cylinder). However, the region of anomaly, in general, is not necessarily of a regular shape. (ii) They take into account autocorrelation among spatial data, but not spatial heterogeneity. As a result, they often produce inaccurate anomalous windows. To address these limitations, we propose a random walk based free-form spatial scan statistic (FS<sup>3</sup>). Application of FS<sup>3</sup> to real datasets has shown that it can identify more refined anomalous windows, with a better likelihood ratio of being an anomaly, than those identified by earlier spatial scan statistic approaches.
    No preview · Conference Paper · Dec 2005
  • Source
    Vijayalakshmi Atluri · Qi Guo
    ABSTRACT: Often, enforcing security incurs overhead, and as a result may degrade the performance of a system. In this paper, we attempt to address this problem in the context of enforcing access control policies in a mobile data object environment. There are a number of applications that call for fine-grained specification of security policies in guaranteeing the confidentiality of data or privacy of individuals in a mobile environment. In particular, the security policies state the rules for providing controlled access to the mobile user profiles, to their current location and movement trajectories, to mobile resources, and to stationary resources based on the mobile user location. Either a subject or an object in an authorization specification can be a moving object. The access requests in such an environment can typically be based on past, present and future status of the moving objects. To effectively serve such access requests, one must efficiently organize the mobile objects as well as authorizations. Although implementation of authorizations as an access control list, capability list or access matrix is suitable for traditional data, it is not suitable for searching mobile object authorizations, as they are based on spatial and temporal attributes of subjects and objects rather than subject and object identifiers. When a subject issues an access request, the system must first retrieve the relevant objects from the moving object database, and then verify whether there exists an authorization that allows the subject to access these objects. Since both the moving objects and authorizations are spatiotemporal in nature, for efficient processing of access requests it is essential that they both be organized using index structures. As a result, processing an access request requires searching two indexes: the moving object index and the authorization index. To improve the response time of access requests, in this paper we propose a unified index structure, called the S<sup>TPR</sup>-tree, to index both moving objects and the authorizations that govern access to them. As a result of the unified index, access requests can be processed in one pass, thereby improving the response time. Note that current access control systems do not use any index for authorizations; our work is a step in this direction. We show how the S<sup>TPR</sup>-tree can be constructed and maintained, and provide algorithms to process access requests.
    Preview · Conference Paper · Sep 2005
  • Dong-Ho Kim · Il Im · V. Atluri
    ABSTRACT: In recent years, clickstream-based collaborative filtering (CCF) recommendation models have received much attention, mainly due to their scalability. The common CCF recommendation models are Markov models, sequential association rules, association rules, and clustering. These models exhibit a trade-off between precision and recall in performance. To address this trade-off, some studies have combined two or more different models or applied multi-order models, but the resulting increase in recommendation effectiveness is at best marginal. To increase recall while minimizing the loss of precision, and therefore to increase overall performance as measured by the F value, we build a sequentially applied model (SAM) by applying the individual models in tandem, in an order determined through a learning process. We evaluated SAM against the individual models on Web usage data, and the results are promising.
    No preview · Conference Paper · Aug 2005
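A minimal sketch of applying component models in tandem: try each recommender in order and return the first non-empty recommendation list. The component models and their order below are invented placeholders (the paper determines the order through a learning process):

```python
# Illustrative sequentially applied model: fall through an ordered list
# of recommenders until one produces recommendations.
def sequential_recommend(session, models):
    for model in models:
        recs = model(session)
        if recs:
            return recs
    return []

# Hypothetical component models keyed on the pages in the session.
markov = lambda s: {"home": ["products"]}.get(s[-1], [])
assoc_rules = lambda s: ["faq"] if "support" in s else []

recs = sequential_recommend(["home"], [markov, assoc_rules])
```

Here the first-order Markov stand-in answers for sessions it has seen; otherwise the association-rule stand-in gets a chance, which raises recall without touching the first model's precision.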
  • Conference Paper: FS
    Vandana Pursnani Janeja · Vijayalakshmi Atluri

    No preview · Conference Paper · Jan 2005
  • Source
    ABSTRACT: As part of technology transfer efforts for New Jersey State government interagency business integration and data sharing, we address Streamlined Revenue Management initiatives for the Division of Revenue: integrating the budget planning and reporting process, and implementing a scalable database and retrieval system for large-scale data from State and Federal income tax e-filings. The business service integration model is also compared with those of EU counterparts, in terms of available business services, technology and strategies.
    Preview · Conference Paper · Jan 2005

Publication Stats

672 Citations
11.96 Total Impact Points


  • 2011
    • National Institute of Standards and Technology
      Maryland, United States
  • 1996-2011
    • Rutgers, The State University of New Jersey
      • Department of Management Science and Information Systems
      New Brunswick, New Jersey, United States
  • 1993
    • George Mason University
      • Center for Secure Information Systems
      Fairfax, Virginia, United States