Article

Access control systems. Security, identity management and trust models

Authors:
Messaoud Benantar

Abstract

Access Control Systems: Security, Identity Management and Trust Models provides a thorough introduction to the foundations of programming systems security, delving into identity management, trust models, and the theory behind access control models. The book details access control mechanisms that are emerging with the latest Internet programming technologies, and explores all models employed and how they work. The latest role-based access control (RBAC) standard is also highlighted. This unique technical reference is designed for security software developers and other security professionals as a resource for setting scopes of implementations with respect to the formal models of access control systems. The book is also suitable for advanced-level students in security programming and system design. © 2006 Springer Science+Business Media, Inc., All rights reserved.


... This process is called access control. The set of rules that determine which activities are allowed in a system, and over which resources, is in the context of access control called security policies [3]. Access control policies can be divided into three groups [3]: Discretionary Access Control (DAC), Mandatory Access Control (MAC) and Role-Based Access Control (RBAC). ...
... The set of rules that determine which activities are allowed in a system, and over which resources, is in the context of access control called security policies [3]. Access control policies can be divided into three groups [3]: Discretionary Access Control (DAC), Mandatory Access Control (MAC) and Role-Based Access Control (RBAC). One or more security policies can be described using access control models, which define formalisms for the specification and implementation of security policies and enable their analysis [3]. ...
... Access control policies can be divided into three groups [3]: Discretionary Access Control (DAC), Mandatory Access Control (MAC) and Role-Based Access Control (RBAC). One or more security policies can be described using access control models, which define formalisms for the specification and implementation of security policies and enable their analysis [3]. Security policies are implemented using methods or tools called security mechanisms [3]. ...
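The RBAC group mentioned in these excerpts can be contrasted with the others by a small sketch: permissions are attached to roles, and users obtain permissions only through role membership. This is a hedged illustration; all role, user and permission names below are invented for the example, not taken from the book.

```python
# Minimal RBAC sketch: permissions attach to roles; users acquire
# permissions only via role membership. Names are illustrative.

ROLE_PERMS = {
    "doctor": {("patient_record", "read"), ("patient_record", "write")},
    "auditor": {("patient_record", "read")},
}
USER_ROLES = {"alice": {"doctor"}, "bob": {"auditor"}}

def rbac_allows(user: str, obj: str, action: str) -> bool:
    """Grant access iff any of the user's roles carries the permission."""
    return any((obj, action) in ROLE_PERMS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

Changing what a whole class of users may do then amounts to editing one role entry rather than every user's individual permissions.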
Article
Edge computing emerges as a paradigm intended to overcome the shortcomings of the centralized organization of computing resources in traditional cloud computing. When developing edge computing services, the problem of access control must also be solved. This paper describes an access control policy and mechanism that extend an existing platform offering edge computing services. The access control policy is optimized for working with a hierarchical organization of resources. Based on the defined policy, a centralized authorization system was implemented.
... Different terms are used as alternatives to authorization strategy. Eckert (2014) refers to it as an access control strategy, Samarati and De Capitani di Vimercati (2001) as well as Benantar (2005) call it a class of access control policies, while others regard it as an access control model (Bertino and Sandhu, 2005). ...
... The subject, who is allowed to access a resource, is either the object creator (i.e. the default owner) or a principal with delegated ownership rights. The resource can be only destroyed by the owner and its ownership may optionally be shared with other subjects as well (Benantar, 2005). ...
... The owner and subject users can neither control the defined access nor override the policy. This strategy often is based on the security label concept where the subjects are associated to security clearance and objects to sensitivity classifications (Hu et al., 2017b;Benantar, 2005). ...
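The security label concept mentioned above can be sketched as a dominance check between a subject's clearance and an object's classification. This is a minimal, hedged illustration in the style of Bell-LaPadula's "no read up" rule, assuming a simple linearly ordered set of levels; the level names are invented for the example.

```python
# MAC sketch with security labels: a subject may read an object only if
# its clearance dominates the object's classification ("no read up").
# The linear ordering of levels below is illustrative.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def mac_can_read(subject_clearance: str, object_classification: str) -> bool:
    return LEVELS[subject_clearance] >= LEVELS[object_classification]
```

Unlike DAC, neither the owner nor the subject can change this outcome: the decision follows entirely from the labels assigned by the system.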
Article
Purpose Authorization and access control have been topics of research for several decades. However, existing definitions are inconsistent and even contradict each other. Furthermore, there are numerous access control models, and even more have recently evolved to meet the challenging requirements of resource protection. That makes it hard to classify the models and to choose an appropriate one that satisfies security needs. This study therefore aims to guide readers through the abundance of access control models in the current state of the art and through the opaque accumulation of terms, what they mean and how they are related. Design/methodology/approach This study follows the systematic literature review approach to investigate current research regarding access control models and illustrates the findings of the conducted review. To provide a detailed understanding of the topic, this study identified the need for an additional study of the terms related to the domain of authorization and access control. Findings The authors' research results in this paper are the distinction between authorization and access control with respect to definitions, strategies and models, in addition to the classification schema. This study provides a comprehensive overview of existing models and an analysis according to the proposed five classes of access control models. Originality/value Based on the authors' definitions of authorization and access control along with their related terms, i.e. authorization strategy, model and policy as well as access control model and mechanism, this study gives an overview of authorization strategies and proposes a classification of access control models, providing examples for each category. In contrast to other comparative studies, this study discusses more access control models, including both the conventional state-of-the-art models and novel ones.
This study also summarizes each of the relevant literature works, selected for focusing on the database system domain or for providing a survey, a classification or evaluation criteria of access control models. Additionally, the introduced categories of models are analyzed with respect to various criteria that are partly selected from the standard access control system evaluation metrics of the National Institute of Standards and Technology.
... This section presents a short history of user identity management (IdM) models in the Internet era, focusing on three phases of centralized digital identities: fully centralized identity management [1], federated identity management [6], and user-centric identity management [14]. Through the development of these models, the limitations of centralized management are exposed, and the demand to move beyond centralized models becomes clear. ...
... Each service that features a personalized experience typically creates a profile for the user with a login credential, e.g., a combination of username and password. This common model marks the first stage of identity development in the digital age: fully centralized identity management [1]. ...
... [29] In the example, the attribute "@context" specifies the type and version of the standards this document implements, and "id" represents the unique ID of the owner or subject this document refers to. The "authentication" field lists a set of verification methods the DID holder has authorized; in this case, the holder (indicated by the "controller" attribute) has only one method of authentication, which is a private key that pairs with the given public key (denoted by "publicKeyBase58") of type "Ed25519VerificationKey2018." ...
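The DID document fields described in this excerpt can be assembled into a minimal example document. The identifier and key value below are placeholders in the style of the W3C DID specification examples, not real data, and the structure shown is a sketch rather than the exact document from the cited work.

```python
# Minimal sketch of the DID document shape described above.
# The DID and the Base58 key value are placeholders, not real data.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi",
    "authentication": [{
        "id": "did:example:123456789abcdefghi#keys-1",
        "type": "Ed25519VerificationKey2018",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyBase58": "H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV",
    }],
}

# As in the excerpt: the holder has exactly one authorized method.
assert len(did_document["authentication"]) == 1
```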
Chapter
The wide adoption of wireless communication and mobile devices has facilitated the development of numerous applications that provide citizens with convenient access to health-related tracking and management services. Most of those services require the storage of some personal data and therefore resort to common user authentication practice (e.g., using a username and password combination) to ensure data is delivered to the appropriate party. As a result, users often find themselves having to maintain or memorize many combinations of accounts and their associated login credentials during their interaction with different services throughout their lifespan. Given the advancement of blockchain and distributed ledger technologies, a wealth of services in various domains, including health care, has explored the feasibility of migrating existing centralized services to such decentralized infrastructures. Because of this exploration, traditionally centralized authentication approaches managed by one party can no longer support the need to onboard users and to manage and monitor user activities and transactions in a decentralized manner. A community of researchers has hence formed to study blockchain-based identity solutions, such as decentralized identities and self-sovereign identities, that would allow users a more common way to identify themselves when accessing a plethora of services. The main goal of these identity methods is to eliminate the need for users to maintain multiple identifiers or online credentials, as each individual has only one identity that truly represents them. These identities would be established and secured by cryptographic principles such that they still preserve at least the same security and privacy levels as their centralized counterparts.
In this chapter, we first present a systematic overview of the underlying motivations and principles of blockchain-based identities to provide the audience with a basic understanding of how such identities operate and the pressing need to incorporate them. We will also introduce two of the popular blockchain-based identity frameworks currently adopted in decentralized applications. We then discuss the potential applications of these identities and their feasibility using the health care domain as a case study to hopefully inspire our readers with ideas that can be further investigated as research solutions in the health care or other domains. Lastly, we will conclude the chapter with additional discussions on the practicality of blockchain-based identities and the potential caveats or limitations associated. This chapter will serve as a cornerstone for healthcare executives, informaticians, and security/privacy experts to further investigate and make infrastructural decisions.
... Identity management (Bertino & Takahashi, 2010) and Identity and Access Management (IAM) systems (Benantar, 2005; Bertino & Takahashi, 2010; Cameron & Williamson, 2020) are at the heart of digital infrastructures. Digital identity is the foundation for building security mechanisms, such as authentication and authorization (Zhu & Badr, 2018). ...
... In digital infrastructures, things are identified and authorized to perform certain actions. The creation of the identity may require various credentials as proof of identity possession (Benantar, 2005). The authorization mechanism identifies them as institutional entities, which qualifies them to perform actions associated with those entities. ...
... After its creation, the identity must be validated to ensure that the institutional entity referred to exists, which is often performed by direct access to a register. Validation may also include authentication, credentials that ensure trust in the identity (Benantar, 2005). For example, authentication with username and password may give a human agent the right to use a digital service. ...
Article
Full-text available
Conceptual models capture knowledge about domains of reality. Therefore, conceptual models and their modelling constructs should be based on theories about the world—that is, they should be grounded in ontology. Identity is fundamental to ontology and conceptual modelling because it addresses the very existence of objects and conceptual systems in general. Classification involves grouping objects that share similarities and delineating them from objects that fall under other concepts (qualitative identity). However, among objects that fall under the same concept, we must also distinguish between individual objects (individual identity). In this paper, we analyze the ontological question of identity, focusing specifically on institutional identity, which is the identity of socially constructed institutional objects. An institutional entity is a language construct that is ‘spoken into existence’. We elaborate on how institutional identity changes how we understand conceptual modelling and the models produced. We show that different models result if we base modelling on a property‐based conception of identity compared to an institutional one. We use the Bunge‐Wand‐Weber principles, which embrace a property‐based view of identity, as an anchor to the existing literature to point out how this type of ontology sidesteps identity in general and institutional identity in particular. We contribute theoretically by providing the first in‐depth ontological analysis of what the notion of institutional identity can bring to conceptual modelling. We also contribute a solid ontological grounding of identity management and the identity of things in digital infrastructures.
... Digital certificates, in the web ecosystem, are used for binding real-world entities, ranging from individuals to computer systems to servers, with their public keys. This is known as key binding [15][16][17][18]. These certificates are then used for identification over the web. ...
... Hence, there is a need to make sure that two entities on the web should not have identical public keys. Furthermore, having similar public keys can also result in false key binding problems, impersonation as a valid authority, etc. [17,20]. ...
Article
Full-text available
SSL certificates hold immense importance when it comes to the security of the WebPKI. The trust in these certificates is driven by the strength of their cryptographic attributes and the presence of revocation features. In this paper, we perform a historical measurement study of cryptographic strength and the adoption of revocation mechanisms in X.509 SSL certificates. In particular, it provides a real-world picture of the adoption of new certificate features and the pushing of new changes to the WebPKI ecosystem. We analyze features like Online Certificate Status Protocol (OCSP) Stapling, RSA public key collisions, and the strength of certificate serial numbers. We observe the improvement in the adoption and reliability of these features for 2011–2020. Our analysis helps in identifying weaknesses and negligence in the certificate issuance practices of Certificate Authorities, such as lack of revocation, weak serial numbers, and issuance of the same public key across different certificates for different entities on the web, known as the public key collision problem. Our results show an overall increase of up to 97% in the adoption of OCSP Stapling and OCSP extensions. Along with this, there are also significant improvements in certificate serial number length, with the top 6 CAs in our dataset issuing the majority of certificates with a serial byte count greater than 30. We also discovered 803 public key collision sets in our dataset. To distinguish public key collisions, we provide a working criterion to separate permissible, safe collisions from unsafe, risky ones. Analysis of these features holds immense importance, as a weakness in any of them could allow an adversary to forge certificates and conduct several attacks, examples of which include the Flame malware and the breaches of the DigiNotar and Comodo certificate authorities.
... setAudience(audiences).build(); IdToken idToken = IdToken.parse(new GsonFactory(), tokenString); return verifier. ...
... The Improper ID token verification detector looks for improper verification of ID tokens. setRedirectUri(redirectUri); ...
Thesis
Full-text available
OpenID Connect has become a de facto standard for managing authentication and authorization in Web applications. It is however challenging for developers to understand the protocol and securely implement a client application. Even using an SDK that helps them along the way, developers are responsible for doing data validation in a precise manner. The correctness of this validation can be ensured using security analysis and vulnerability detection tools. Previous solutions on security analysis and tools for vulnerability detection of OpenID are mostly based on complex, formal models and comprehensive penetration testing frameworks that cover the whole protocol. These often require much work to understand, develop and use. The objective of this thesis is to introduce a more developer-oriented way to ensure fewer vulnerabilities in such client applications. This thesis proposes (1) a pragmatic model of the authorization code flow, as a straightforward checklist targeted specifically at the concerns of the developer, and (2) a demonstration that relatively simple static analysis techniques, based on this model, can be used to find vulnerabilities related to the needed security checks. The effectiveness of the analysis techniques is demonstrated experimentally on six open-source clients, of which four were found to have vulnerabilities. 20 vulnerabilities regarding incomplete or missing token validation were detected. The analyzer for token validation had a precision of 61%, recall of 100% and a true negative rate of 90%. Its precision may be improved further with a few weeks of engineering effort. More reliable metrics of its performance can be found by doing a large-scale empirical study.
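The kind of token-validation checks such a developer checklist targets can be sketched with plain claim comparisons. The following is a hedged illustration of issuer, audience and expiry checks on an already signature-verified ID token payload; the claim names (iss, aud, exp) follow OpenID Connect Core, but the function name and error strings are invented for this example, and this is not the SDK API shown in the code excerpts above.

```python
import time

# Sketch of the ID token claim checks an OpenID Connect client must
# perform after signature verification: issuer, audience, expiry.
# Signature verification itself is deliberately out of scope here.

def validate_id_token_claims(claims, issuer, client_id, now=None):
    """Return a list of validation errors; an empty list means the claims pass."""
    now = time.time() if now is None else now
    errors = []
    if claims.get("iss") != issuer:
        errors.append("unexpected issuer")
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if client_id not in audiences:
        errors.append("client_id not in audience")
    if not isinstance(claims.get("exp"), (int, float)) or claims["exp"] <= now:
        errors.append("token expired or exp missing")
    return errors
```

Returning all errors at once, rather than failing on the first, matches the checklist style: a static analyzer can then report exactly which checks a client implementation omits.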
... Access control is a combination of three security concepts [8], [9]: (1) authentication, the process by which a system verifies the identity of an entity (e.g., user, application, IoT device, etc.); (2) authorization, which determines whether an accessing entity has sufficient privileges to access system resources and which operations are allowed or prohibited for this specific entity on the resources of interest; (3) accountability, which ensures that the actions of an entity can be traced back solely to that entity. It guarantees that all operations carried out by individuals, systems or processes can be identified and that the trace to the entity and the operation is maintained. ...
... The main objective of the access control is to enforce the system security and privacy requirements on protected services and resources [8]. The level of authorization an entity can be assigned is determined by evaluating its associated properties (e.g., identity, roles, proximity, access history) against a set of predefined access control policies. ...
Conference Paper
Full-text available
Abstract—In the era of the Internet of Things (IoT), it has become possible for a set of smart devices to collaborate autonomously and communicate seamlessly to achieve complex tasks that require a high degree of intelligence. Unlike traditional internet devices, a compromised IoT device can cause real-world damages. The severity of these damages increases dangerously in sensitive contexts especially when these devices are controlled by system insiders. Detecting abnormal access behaviors in such environments is quite challenging, due to frequent changes in the access contexts under which the IoT device can be accessed. In this paper, we propose an adaptive access control policy framework that dynamically refines the system access policies in response to changes in the device-to-device access behavior. We apply supervised machine learning to model and classify the device access behavior based on a real-life data set. We provide a use case scenario of a door locking system to validate our work. Results show that our framework provides improved security, dynamic adaptability and sufficient scalability to the target application domain.
... In addition, and as evident in case studies of historic insider attacks [16], the use of human administrators alone is inefficient in mitigating abuse in a timely manner. Improving on access control methodologies is one solution, yet such approaches [10,33,41,57] are unable to actively mitigate abuse, since they are constrained to a static definition of the criteria for access control at run-time. ...
... Authorisation embodies two concepts: identities and permissions. An identity is a digital representation of a subject (a user), where a subject could be a human being, a system, or even a process [10]. An identity contains information about the subject, particularly relevant for authentication [58], where a subject must identify themselves, for example, entering in a username and password, or use of biometrics [61]. ...
Preprint
As organisations expand and interconnect, authorisation infrastructures become increasingly difficult to manage. Several solutions have been proposed, including self-adaptive authorisation, where the access control policies are dynamically adapted at run-time to respond to misuse and malicious behaviour. The ultimate goal of self-adaptive authorisation is to reduce human intervention, make authorisation infrastructures more responsive to malicious behaviour, and manage access control in a more cost effective way. In this paper, we scope and define the emerging area of self-adaptive authorisation by describing some of its developments, trends and challenges. For that, we start by identifying key concepts related to access control and authorisation infrastructures, and provide a brief introduction to self-adaptive software systems, which provides the foundation for investigating how self-adaptation can enable the enforcement of authorisation policies. The outcome of this study is the identification of several technical challenges related to self-adaptive authorisation, which are classified according to the different stages of a feedback control loop.
... An access control system is represented by tuples of the form <u, o, ±a>, where u is the requesting user, o is the object or service, a is the action performed by user u on object o, +a denotes a positive authorization and -a denotes a negative authorization [3,4]. In addition, the system also contains a reference monitor, which acts as a guard: it checks every user request and passes it on only in accordance with the defined rules. ...
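The <u, o, ±a> tuples and the reference monitor described in this excerpt can be sketched as follows. The conflict-resolution choice made here (an explicit denial overrides any grant, and the default is deny) is one common option, not necessarily the one used in the cited work, and all names are illustrative.

```python
# Sketch of <u, o, +/-a> authorization tuples and a reference monitor.
# Each rule is (user, object, action, sign): "+" grants, "-" denies.
# Denials override grants; absent any rule, access is denied.

RULES = [
    ("alice", "report", "read", "+"),
    ("alice", "report", "write", "-"),
    ("bob",   "report", "read", "+"),
]

def reference_monitor(user, obj, action):
    signs = {s for (u, o, a, s) in RULES if (u, o, a) == (user, obj, action)}
    if "-" in signs:
        return False          # negative authorization wins
    return "+" in signs       # grant only on an explicit "+"
```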
... Definition 3: The objective trust degree T_st(obj) of the healthcare data and services HS_j in session st for a healthcare user RU_i is calculated using Equation 3. ...
... An important aspect of the implementation of privacy in the cloud is Identity Management (IdM), which allows Identity Providers (IdPs) to centralize user's identification data and send it to SPs in order to enable the processes of authentication and access control [6]. IdM systems, such as OpenId Connect [7] and Shibboleth [8], allow the creation of federations, i.e., trust relationships that make possible for users authenticated in one IdP to access services provided by various SPs belonging to different administrative domains. ...
... These systems enable the concept of federated identity, which is the focus of this work and allows users authenticated in various IdPs to access services offered by SPs located in different administrative domains due to a previously established trust relationship [11]. Some important IdM concepts are described next, as defined in [6], [12], [13]: 1) Personally Identifiable Information (PII): information that can be used to identify the person to whom it relates or can be directly or indirectly linked to that person. Thus, depending on the scope, information such as date of birth, GPS location, IP address and personal interests inferred from the tracking of web site use may be considered PII. ...
Article
Full-text available
With the increasing amount of personal data stored and processed in the cloud, economic and social incentives to collect and aggregate such data have emerged. Therefore, secondary use of data, including sharing with third parties, has become a common practice among service providers and may lead to privacy breaches and cause damage to users since it involves using information in a non-consensual and possibly unwanted manner. Despite numerous works regarding privacy in cloud environments, users are still unable to control how their personal information can be used, by whom and for which purposes. This paper presents a mechanism for identity management systems that instructs users about the possible uses of their personal data by service providers, allows them to set their privacy preferences and sends these preferences to the service provider along with their identification data in a standardized, machine-readable structure, called privacy token. This approach is based on a three-dimensional classification of the possible secondary uses of data, four predefined privacy profiles and a customizable one, and a secure token for transmitting the privacy preferences. The applicability and the utility of the proposal were demonstrated through a case study, and the technical viability and the correct operation of the mechanism were verified through a prototype developed in Java in order to be incorporated, in future work, to an implementation of the OpenID Connect protocol. The main contributions of this work are the preference specification model and the privacy token, which invert the current scenario where users are forced to accept the policies defined by service providers by allowing the former to express their privacy preferences and requesting the latter to align their actions.
... [4][5] These characteristics are unique to each person and are therefore ideal for identification, access control and security purposes. [6][7] ...
Chapter
One of the most important approaches to securing digital and physical access to protected systems is biometric identification. This approach can be combined with other identification methods, such as passwords and various 2FA tools, to improve the security level. Whatever the security policy and governance approach, none of these systems can provide a fully secure system unless the assets are fully physically and digitally isolated. The literature and history are replete with examples of security breaches and data leaks. Regardless of the damage from a leak, if the leaked and exposed data can be easily replaced, changed and updated with more secure and unique data, the problems can be more easily overcome. However, if biometric data is breached or leaked, that biometric identification data cannot be used for the rest of the person's life. Since biometric-based authentication approaches are easy to implement, disseminate and use, there is a growing intent to use them in most common daily applications, even in cloud computing. Concerns about security and ownership of data are much greater in cloud computing due to its nature and notoriety. This paper examines the possible use, system performance and protection of biometric data with homomorphic encryption, combined with matching of the biometric data by distance-based machine learning approaches. A real use case, implemented by performing all computations in the encrypted domain, is also explained using an example set of hand recognition sample images.
... Evaluates the AC system with respect to its adaptability for policy changes and its capability of dynamically interposing AC rules according to the system states. Adaptability not only concerns changes in rules, but also in the applied AC mechanism (e.g., RBAC, Discretionary Access Control (DAC) [35,36], Mandatory Access Control (MAC) [37], and Chinese Wall). Furthermore, the capability to dynamically interpose rules with respect to the entire current system state including its history is relevant. ...
Article
Full-text available
The high increase in the use of graph databases also for business- and privacy-critical applications demands a sophisticated, flexible, fine-grained authorization and access control (AC) approach. Attribute-based access control (ABAC) supports a fine-grained definition of authorization rules and policies. Attributes can be associated with the subject, the requested resource and action, but also the environment. Thus, this is a promising starting point. However, specific characteristics of graph-structured data, such as attributes on vertices and edges along a path from a given subject to the resource to be accessed, are not yet considered. The well-established eXtensible Access Control Markup Language (XACML), which defines a declarative language for fine-grained, attribute-based authorization policies, is the basis for our proposed approach—XACML for Graph-structured data (XACML4G). The additional path-specific constraints, described in graph patterns, demand specialized processing of the rules and policies as well as adapted enforcement and decision-making in the access control process. To demonstrate XACML4G and its enforcement process, we present a scenario from the university domain. Due to the project’s environment, the prototype is built with the multi-model database ArangoDB. Finally, compliance of XACML4G with quality standards for access control systems administration and enforcement is assessed. The results are promising and further studies concerning performance and use in practice are planned.
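The idea of combining attribute conditions with path-specific constraints can be illustrated in plain code. This sketch is not XACML4G syntax: the graph, the attributes and the rule are invented for the example, and the rule grants read access only when the subject carries a given attribute and a directed path in the data graph connects the subject to the resource.

```python
# Hedged ABAC-with-path-constraint illustration (plain Python, not
# XACML4G). Access requires both a subject attribute and reachability
# of the resource from the subject in the data graph.

EDGES = {("prof", "course101"), ("course101", "grades101")}

def path_exists(graph, src, dst):
    """Depth-first reachability over a set of directed edges."""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(b for (a, b) in graph if a == node)
    return False

def abac_allows(subject_attrs, subject_id, resource, action):
    return (subject_attrs.get("role") == "lecturer"
            and action == "read"
            and path_exists(EDGES, subject_id, resource))
```

The path condition is what plain attribute matching cannot express: the same lecturer is denied access to grades of a course that is not reachable from them in the graph.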
... Enabling such dynamic interactions would deliver a number of challenges related to establishing connections to both known nodes and unfamiliar devices. Historically, the field of identity and access management (IAM) has aimed at offering solutions for flexible access to systems and services [69,70]. The epoch of new and continually emerging applications supports the requirement for easy-to-use and straightforward IAM solution development. ...
Thesis
Full-text available
There has been an unprecedented increase in the use of smart devices globally, together with novel forms of communication, computing, and control technologies that have paved the way for a new category of devices, known as high-end wearables. While massive deployments of these objects may improve the lives of people, unauthorized access to the said private equipment and its connectivity is potentially dangerous. Hence, communication enablers together with highly-secure human authentication mechanisms have to be designed. In addition, it is important to understand how human beings, as the primary users, interact with wearable devices on a day-to-day basis; usage should be comfortable, seamless, user-friendly, and mindful of urban dynamics. Usually the connectivity between wearables and the cloud is executed through the user’s more power-independent gateway: this will usually be a smartphone, which may have potentially unreliable infrastructure connectivity. In response to these unique challenges, this thesis advocates for the adoption of direct, secure, proximity-based communication enablers enhanced with multi-factor authentication (hereafter referred to as MFA) that can integrate/interact with wearable technology. Their intelligent combination together with the connection establishment automation relying on the device/user social relations would allow to reliably grant or deny access in cases of both stable and intermittent connectivity to the trusted authority running in the cloud. The introduction will list the main communication paradigms, applications, conventional network architectures, and any relevant wearable-specific challenges. Next, the work examines the improved architecture and security enablers for clusterization between wearable gateways with a proximity-based communication as a baseline. 
Relying on this architecture, the author then elaborates on the social ties potentially overlaying the direct connectivity management in cases of both reliable and unreliable connection to the trusted cloud. The author discusses that social-aware cooperation and trust relations between users and/or the devices themselves are beneficial for the architecture under proposal. Next, the author introduces a protocol suite that enables temporary delegation of personal device use dependent on different connectivity conditions to the cloud. After these discussions, the wearable technology is analyzed as a biometric and behavior data provider for enabling MFA. The conventional approaches of the authentication factor combination strategies are compared with the ‘intelligent’ method proposed further. The assessment finds significant advantages to the developed solution over existing ones. On the practical side, the performance evaluation of existing cryptographic primitives, as part of the experimental work, shows the possibility of developing the experimental methods further on modern wearable devices. In summary, the set of enablers developed here for wearable technology connectivity is aimed at enriching people’s everyday lives in a secure and usable way, in cases when communication to the cloud is not consistently available.
... The security level management system consists of a data flow analyzer, data processor, expression analyzer, and security manager with access control list (ACL) [2,9]. Figure 2 shows the structure of the security level management system for the data secure language designed in Section 3.1. ...
... DAC was the first access control model and is adopted by the Linux OS. DAC is an owner-centric security model, meaning that it restricts or grants access to an object according to a policy defined by the object's owner [19]. For example, if a user A creates a file, then A decides what kind of access to the file is granted to other users, i.e., groups or others. ...
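The owner-centric behaviour described in this excerpt can be sketched in a few lines of Python (class and method names are illustrative, not taken from the cited work):

```python
class DacResource:
    """A resource whose owner discretionarily grants access (DAC)."""

    def __init__(self, owner):
        self.owner = owner
        self.acl = {owner: {"read", "write"}}  # the owner starts with full access

    def grant(self, requester, user, permissions):
        # Only the owner may change the ACL -- the core of DAC.
        if requester != self.owner:
            raise PermissionError("only the owner may grant access")
        self.acl.setdefault(user, set()).update(permissions)

    def check(self, user, permission):
        return permission in self.acl.get(user, set())


# User A creates a file and decides what access B gets.
f = DacResource("A")
f.grant("A", "B", {"read"})
print(f.check("B", "read"))   # True
print(f.check("B", "write"))  # False
```

A real DAC implementation, such as POSIX file permissions, adds groups and permission bits, but the owner-decides principle is the same.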
Chapter
Full-text available
In Trusted Computing, the client platform is checked for trustworthiness using remote attestation. Integrity Measurement Architecture (IMA) is a well-known technique of TCG-based attestation. However, due to the static nature of IMA, it cannot be aware of the runtime behavior of applications, which leads to integrity problems. To overcome this problem, several dynamic behavior-based attestation techniques have been proposed that can measure the runtime behavior of applications by capturing all system calls they produce. In this paper, we propose a system-call-based intrusion detection technique for remote attestation in which macros are used for reporting. Macros denote subsequences of system calls of variable length. The basic goal of this paper is to shorten the number of system calls through the concept of macros, which ultimately reduces processing time as well as network overhead.
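The macro idea in this abstract, replacing variable-length system-call subsequences with single symbols to shrink the attestation report, might be sketched as follows (the function name and the greedy longest-match strategy are illustrative assumptions, not the paper's algorithm):

```python
def compress_with_macros(trace, macros):
    """Replace known system-call subsequences with macro symbols.

    `macros` maps a macro name to the subsequence it stands for;
    longer subsequences are tried first (greedy longest match).
    """
    ordered = sorted(macros.items(), key=lambda kv: -len(kv[1]))
    out, i = [], 0
    while i < len(trace):
        for name, seq in ordered:
            if trace[i:i + len(seq)] == seq:
                out.append(name)       # emit the macro symbol
                i += len(seq)
                break
        else:
            out.append(trace[i])       # no macro matches: keep the raw call
            i += 1
    return out


trace = ["open", "read", "read", "close", "open", "write", "close"]
macros = {"M1": ["open", "read", "read", "close"],
          "M2": ["open", "write", "close"]}
print(compress_with_macros(trace, macros))  # ['M1', 'M2']
```

Seven system calls collapse into two symbols, which is exactly the reduction in report size and network overhead the abstract aims for.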
... They said that "The elements of security in computing begin with Identity" [7]. Digital identity and authentication act as an essential foundation for IoT networks, as they make communication, data exchange, and transactions possible. ...
... However, each of these models has certain pros and cons; thus, based on specific needs, organizations opt for the particular model that suits their business environment [4]. Generally, these models follow the principle of minimum privilege, meaning that an authorized user should be able to perform only the minimum operations deemed necessary to complete the assigned job [5]. ...
... Trust assurance is concerned with guarantees that there is a sufficient amount of trustworthiness to proceed with a particular action that requires trust. Trust assurance is often considered part of security practices [5], e.g., in the form of trust assurance for identity management [15] or evidence gathering and processing for the purpose of demonstrating and assuring trust [81]. ...
Book
Full-text available
The existence of large information and communication technology (ICT) structures, such as the Internet and the Web, and their impact on our everyday lives is an unquestionable fact of modern life. Trust and trustworthiness of such systems is often taken for granted, and accepted as a solution to all the ills of our society, duly replicated on the Web. However, there is no agreement on how to develop trustworthy systems. There is even no agreement on what 'trustworthy Web' may actually mean. The Trustworthy and Trusted Web is a thorough investigation of the complex question of trustworthy ICT. It analyses this concept from the dual perspectives of the technical architecture and the sociological angle of the creation of social reality. It addresses conditions for discussing trustworthiness of ICT, asking whether a single notion of trustworthiness can be agreed upon and whether it will generate useful design criteria for trustworthy ICT. Against the background defined by theories of social systems, The Trustworthy and Trusted Web reveals the structure behind conflicts and misunderstandings of our modern perception of the trustworthiness of ICT. It proposes a systemic approach that should bring trustworthy ICT, the trustworthy Web and the trustworthy Semantic Web closer to everyday reality. The Trustworthy and Trusted Web is an excellent book for anyone who is interested in learning about, analysing, designing or implementing trustworthy ICT, and specifically the trustworthy Web. It is comprehensive and informative in analysing the current situation while being prescriptive and visionary in its proposed solutions.
... the usage of such digital resources (Benantar, 2005). Such systems have proliferated in recent years. ...
Preprint
Full-text available
Effective digital identity systems offer great economic and civic potential. However, unlocking this potential requires dealing with social, behavioural, and structural challenges to efficient market formation. We propose that a marketplace for identity data can be more efficiently formed with an infrastructure that provides a more adequate representation of individuals online. This paper therefore introduces the ontological concept of Homo Datumicus: individuals as data subjects transformed by HAT Microservers, with the axiomatic computational capabilities to transact with their own data at scale. Adoption of this paradigm would lower the social risks of identity orientation, enable privacy preserving transactions by default and mitigate the risks of power imbalances in digital identity systems and markets.
... The existing ACMs, such as mandatory [32], discretionary [32], attribute-based [26], identity-based, credential-based, and role-based [7], are insufficient to provide complete security and protection against the existing threats and attacks in the EHS. Most of these models are static in nature. ...
Article
Full-text available
Patients and healthcare professionals use the Electronic Healthcare System (EHS) to access medical records from remote locations via the Internet. The emerging healthcare system has several advantages, such as better management of healthcare data, streamlined collaboration, improvement of medical care, insurance support, medical data backup, etc. Regardless of these advantages, the sensitive and open nature of the healthcare system gives rise to different types of attacks and threats, such as insider attacks, service hijacking, abuse of healthcare data, and impersonation attacks. In the EHS, data sharing without prior information about the requester is another considerable issue. Hence, a dynamic Access Control Model (ACM) is needed to overcome the issues discussed above. In the EHS, adding trust to access control solutions can provide dynamic access to resources. To achieve such a model, in this paper we have added user trust to the Identity Based Access Control (IBAC) model. For the computation of user trust, we have used the beta reputation approach. An access control rule set is proposed based on the trust degree and identity of the user to provide access in a controlled manner. This hybrid ACM and rule set not only protect the data from unauthorized access but also dynamically control the access view of the healthcare data. The experimental results show that the proposed model is more accurate and reliable than other trust models.
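The beta reputation approach mentioned in this abstract computes trust as the expected value of a Beta distribution over counts of positive and negative interactions. A minimal sketch follows; the thresholds and decision bands are illustrative assumptions, not taken from the paper:

```python
def beta_trust(positive: int, negative: int) -> float:
    """Beta-reputation trust score: expected value of Beta(p+1, n+1).

    Starts at 0.5 with no evidence and moves toward 1 (or 0) as
    positive (or negative) interactions accumulate.
    """
    return (positive + 1) / (positive + negative + 2)


def access_decision(user_trust: float, threshold: float = 0.7) -> str:
    # Illustrative rule set: full access above the threshold,
    # restricted access in a middle band, deny below 0.3.
    if user_trust >= threshold:
        return "full"
    if user_trust >= 0.3:
        return "restricted"
    return "deny"


print(beta_trust(0, 0))                   # 0.5 -- neutral prior
print(beta_trust(8, 2))                   # 0.75
print(access_decision(beta_trust(8, 2)))  # full
```

Combining such a score with identity checks is what makes the paper's hybrid IBAC model dynamic: the same user may see a different access view as their interaction history evolves.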
... Weaknesses in controlling access can expose the system to insufficient, over-privileged, or incompetent permission administration (Fang et al. 2014). This increases the risk of various attacks, such as aggregation of unauthorised computing resource access, malicious data theft or modification, malware attacks (Parkinson 2017) and others (Benantar 2006). Another type of threat, and the one considered within this paper, is permission creep (Vidas et al. 2011). ...
Article
Full-text available
Access control mechanisms are widely used in multi-user IT systems where it is necessary to restrict access to computing resources. This is certainly true of file systems, where information needs to be protected against unintended access. User permissions often evolve over time, and changes are often made in an ad hoc manner without following any rigorous process. This is largely because the structure of the implemented permissions is often determined by experts during initial system configuration, and documentation is rarely created. Furthermore, permissions are often not audited due to the volume of information, the requirement of expert knowledge, and the time required to perform manual analysis. This paper presents a novel, unsupervised technique whereby a statistical analysis technique is developed and applied to detect instances of permission creep. The system (herein referred to as Creeper) has initially been developed for Microsoft systems; however, it is easily extensible and can be applied to other access control systems. Experimental analysis has demonstrated good performance and applicability on synthetic file system permissions with an average accuracy of 96%. Empirical analysis is subsequently performed on five real-world systems, where an average accuracy of 98% is established.
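Creeper's statistical method is not detailed in this abstract; a toy stand-in that flags permissions rare within a peer group conveys the idea (the rarity threshold, data layout, and function name are all assumptions):

```python
from collections import Counter

def find_permission_creep(group_perms, rarity=0.25):
    """Flag permissions held by few members of a peer group.

    `group_perms` maps user -> set of permissions. A permission a user
    holds is flagged if fewer than `rarity` of the group hold it -- a
    simple statistical stand-in for unsupervised creep detection.
    """
    counts = Counter(p for perms in group_perms.values() for p in perms)
    n = len(group_perms)
    flagged = {}
    for user, perms in group_perms.items():
        rare = {p for p in perms if counts[p] / n < rarity}
        if rare:
            flagged[user] = rare
    return flagged


group = {
    "alice": {"read", "write"},
    "bob": {"read", "write"},
    "carol": {"read", "write", "admin"},  # 'admin' accrued over time
    "dave": {"read", "write"},
    "erin": {"read", "write"},
}
print(find_permission_creep(group))  # {'carol': {'admin'}}
```

The appeal of an unsupervised approach like this is that it needs no documented baseline: the group's own permission distribution serves as the reference, which matches the abstract's observation that documentation is rarely created.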
... Trust in security policies relies particularly on access control (authentication, authorization and traceability: cf. [25], [26] and [27]). With the opening of access to resources on the Internet and the mobility of employees, which consists in allowing them to access resources from anywhere, companies are confronted with significant security problems. ...
... In general, access means reaching resources or data in any secured system, while control means establishing conditions that permit access to an authorized person. To ensure proper and successful access, and to prevent access policies from being broken or circumvented, the controls managing access should be able to apply the right rules (policies) to authenticated users [10]. To achieve access control, the right person must be authenticated, which is done by checking the validity of the access control conditions. ...
Article
Full-text available
Any secured system requires one or more login policies to make that system safe. Static passwords alone are no longer enough for securing systems; even with strong passwords, illegal intrusions occur, and passwords carry the risk of being forgotten. Authentication using many levels (factors) complicates the steps intruders must take to reach system resources. Any person to be authorized to log in to a secured system must provide some predefined data or present some entities that identify his/her authority. Information predefined between the client and the system helps achieve a more secure level of logging in. In this paper, a user who aims to log in to a secured system must provide a recognized RFID card together with a mobile number that is available in the secured system's database; the secured system then uses a simple algorithm to generate a one-time password that is sent via a GSM Arduino-compatible shield to the user, announcing him/her as an authorized person.
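The abstract only says "a simple algorithm" generates the OTP; one plausible sketch, using Python's standard hmac and secrets modules (the keying scheme is an assumption, not the cited system's):

```python
import hashlib
import hmac
import secrets

def generate_otp(shared_secret: bytes, digits: int = 6) -> str:
    """Generate a random numeric one-time password.

    A fresh random nonce is keyed with the shared secret via
    HMAC-SHA256, and the digest is truncated to a short numeric
    code -- similar in spirit to the OTP the cited system sends
    by SMS after an RFID card is recognized.
    """
    nonce = secrets.token_bytes(16)
    digest = hmac.new(shared_secret, nonce, hashlib.sha256).digest()
    code = int.from_bytes(digest[:4], "big") % (10 ** digits)
    return str(code).zfill(digits)


otp = generate_otp(b"card-1234-secret")
print(otp)  # a fresh 6-digit code on every call
```

In a deployed system the server would store the issued code (or derive it from a synchronized counter, as in HOTP/TOTP) and compare it against the value the user types in, expiring it after one use.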
... A PMI supports discretionary, mandatory, role- and attribute-based access control models (DAC, MAC, RBAC and ABAC). If a discretionary access control model [2] is used, the owner of the system manages access control at his own discretion. Access to a system resource is implicitly given by ownership of the resource or explicitly through access rights granted by the resource owner. ...
Conference Paper
Cybersecurity is becoming ever more important as industry transforms towards an Industrial Internet of Things (IIoT). Essential parts of the whole security concept are securing the communication between clients and servers on different business layers, such as the plant-floor network and the enterprise network, separating the information model from the authorization model, and keeping the management of security policies as easy as possible. A widely used service-oriented architecture for the IIoT is Open Platform Communications Unified Architecture (OPC UA), which supports confidentiality, integrity, application authentication, user authentication and user authorization. We present a novel security-model-based authorization concept for OPC UA in which a Privilege Management Infrastructure (PMI) is used to grant user authorizations. Furthermore, drawbacks of OPC UA revision 1.04 are pointed out, and security models are introduced that extract the security dependencies from the information model to improve maintainability, usability and transparency. Security models are implemented within OPC UA, so no additional technologies are needed and the OPC UA specification remains backward compatible.
... Therefore, the use of adequate design approaches as well as rigorous analysis techniques is gaining more and more importance. Access control systems [6,40], as an example of software-based systems, are among the most important security assets and play a crucial role in computer security, industrial research and finance. In addition, access control systems (ACS) are mostly used as a measure to restrict access to sensitive data, specific areas and personal account information, as well as to protect valuables. ...
Article
Full-text available
Our daily life is becoming increasingly dependent on software, as it is extensively used to control safety- and mission-critical systems. This has led to very stringent verification requirements for ensuring that the software performs as intended. However, testing-based techniques cannot provide rigorous verification due to limited computational and memory constraints, and traditional formal verification techniques, like model checking and theorem proving, are not straightforward to work with in an industrial setting. In this paper, as a first step to overcome these limitations, we describe a hybrid property-based testing and model checking technique for verifying both models and implementations of access control systems. Our approach addresses the model checking of critical properties of access control systems and aims at improving their reliability by using property-based testing to analyze the corresponding software code. For illustration purposes, a simple example of an access control system is used.
... Nowadays, such implementations are very critical to securing organizations. The book in [11] focuses on access control systems from a security perspective. Managing user access and permissions depends entirely on identifying and authenticating the user in the first place. ...
... It maintains confidentiality through the disclosure of information only to authorized persons, and integrity is achieved by allowing only legitimate persons to perform modifications [14]. To achieve this, it necessarily provides a mechanism to evaluate whether an individual or entity is permitted to access a resource based on the access policy [15]. Protecting data and resources from unauthorized disclosure while ensuring the "CIA Triad" is essential under all circumstances. ...
Conference Paper
Full-text available
The thrust of technological advancement in healthcare today lies in improving the quality and timeliness of patient services. Medical information is accessed through various means that require efficient methods for protecting patient privacy and security. Healthcare providers implement a set of information security mechanisms, such as access control, authentication and log analysis, to protect against disclosure of data to unauthorized persons. This research proposes a continuous and transparent access control framework based on a preliminary study. The proposed framework has three core components mapped onto Role Based Access Control: adaptive authentication, risk analysis and data transparency. The adaptive authentication validates the user through behavior profiling, the risk analysis measures the amount of risk in user data access, and the data transparency allows patients and administrators to monitor data consumption and detect deviation from patient consent.
... These systems enable the concept of federated identity, which is the focus of this work and allows users authenticated in various IdPs to access services offered by SPs located in different administrative domains due to a previously established trust relationship [8]. Some important IdM concepts are described next, as defined in [4][9][10]: 1) Personally Identifiable Information (PII): information that can be used to identify the person to whom it relates or can be directly or indirectly linked to that person. Thus, depending on the scope, information such as date of birth, GPS location, IP address and personal interests inferred by the tracking of the use of web sites may be considered as PII. ...
Conference Paper
Full-text available
With the increasing amount of personal data stored and processed in the cloud, economic and social incentives to collect and aggregate such data have emerged. Therefore, secondary use of data, including sharing with third parties, has become a common practice among service providers and may lead to privacy breaches and cause damage to users, since it involves using information in a non-consensual and possibly unwanted manner. Despite numerous works regarding privacy in cloud environments, users are still unable to control how their personal information can be used, by whom and for which purposes. This paper presents a mechanism for identity management systems that instructs users about the possible uses of their personal data by service providers, allows them to set their privacy preferences and sends these preferences to the service provider along with their identification data in a standardized, machine-readable structure, called a privacy token. This approach is based on a three-dimensional classification of the possible secondary uses of data, four predefined privacy profiles plus a customizable one, and a secure token for transmitting the privacy preferences. The correct operation of the mechanism was verified through a prototype, which was developed in Java in order to be incorporated, in future work, into an implementation of the OpenID Connect protocol. The main contribution of this paper is the privacy token, which inverts the current scenario where users are forced to accept the policies defined by service providers, by allowing the former to express their privacy preferences and requesting the latter to align their actions or ask for specific permissions.
Book
Full-text available
The high-quality refereed papers appearing in this book compose the proceedings of the 2022 International Conference on Innovative Solutions in Software Engineering (ICISSE). ICISSE is the forum organized by the Department of Information Technology of the Vasyl Stefanyk Precarpathian National University. ICISSE-2022 is held from November 29 to 30, 2022 in Ivano-Frankivsk, Ukraine. The event is intended to bring together researchers, scientists, and engineers to discuss experimental and theoretical results, mainly in the area of software engineering. The conference also covers topics related to computer science, computer engineering, systems analysis, cybersecurity, information systems and technology, industrial automation, electronics, metrology, micro and nanosystems, telecommunications, radio frequency engineering, IT entrepreneurship, and IT education. The current edition of the event is dedicated to the 50th birth anniversary of famous Ukrainian scientist working in information technology, Founder and first Head of the Department of Information Technology, Vice-Rector for Research of the university, Professor Dr. Pavlo Fedoruk, who passed away in 2013. The conference is included into the list of scientific conferences scheduled for 2022 by the Ministry of Education and Science of Ukraine.
Conference Paper
To meet the demands for the highest level of security in today's world, a sophisticated security management system is essential. An access control system is generally categorized into biometric and non-biometric types based on contact or contactless operation. This research work surveys people's preferences through an online survey in order to understand the role and need of access control systems during the difficult pandemic situation. The survey finds that various access control solutions fail to provide the required security during this worldwide pandemic due to their contact-based operation. Hence, a feasible integrated electronic access control system needs to be adopted to meet users' expectations amid the global pandemic.
Chapter
Access control enforces authorization policies in order to prohibit unauthorized users from performing actions that could trigger a security violation. There exist numerous access control models and even more have recently evolved to conform with the challenging requirements of resource protection. That makes it hard to classify the models and choose an appropriate one satisfying security needs. This paper provides an overview of authorization strategies and proposes a rough classification of access control models providing examples for each category. In comparison with other comparative studies, we discuss more access control models including the conventional state-of-the-art models and novel ones. We also summarize each of the literature works after selecting the relevant ones focusing on database systems domain or providing a survey, a taxonomy/classification, or evaluation criteria of access control models. Additionally, the introduced categories of models are analyzed with respect to various criteria that are partly selected from the standard access control system evaluation metrics by the National Institute of Standards and Technology (NIST). Further studies for extending the list of access control models as well as analysis criteria are planned.
Chapter
It is well understood that today’s society generates enormous volumes of data, with much of that data being used to drive the emerging digital data economy. However, due to dataveillance and related concerns, the magnitude and specificity of this data has given rise to well documented privacy concerns. Amongst these concerns is the unregulated collection of online metadata. Despite being commonly used, metadata is a term that few understand. Yet its richness, prevalence and collection has serious implications for many. In this paper we describe initial work with regards to a means of allowing users to control access to metadata pertaining to them, with a view to addressing these implications. To do so, we leverage the work of the Solid data decentralisation project and build upon the notion of Category-Based Access Control.
Book
This book covers selected high-quality research papers presented at the International Conference on Big Data, Machine Learning, and Applications (BigDML 2019). It focuses on both theory and applications in the broad areas of big data and machine learning. It brings together the academia, researchers, developers and practitioners from scientific organizations and industry to share and disseminate recent research findings.
Chapter
The research in this paper analyzes the origin of the data generated and treated as Big Data and discusses the hypothesis about the conflict generated by the management of personal data privacy. It is based on a model that identifies the set of management practices and resources, resulting from the integration of guidelines, norms, standards and contractual commitments, that are most relevant to keeping personal data safe and generating trust in the environment. The field study shows the importance and urgency of this issue, given that it is necessary to innovate in business management, the provision of public services and the design and implementation of regional development policies. Its conclusions verify that intelligent management is needed to correct course in terms of discovering and detecting patterns and relations and formulating models from these gigantic databases. This work presents a generic model for managing the privacy of huge volumes of data in the cloud. The impact of privacy is analyzed, risks are identified, and solutions provided by standards are explored to develop controls that can be integrated into a model establishing the steps by which any type of public or private organization can verify the organizational impact of its products, processes or services in the fulfillment of its strategic objectives.
Chapter
In modern information systems, effective management of secure information plays an important role in keeping the system secure. To ensure the safe operation of a system, secure information should be kept safely and protected from external intrusion or information leakage. It is therefore necessary to protect secure information against unauthorized access in order to maintain a safe information system. In this paper, we present a data secure language and design an access control method for protecting secure data against unauthorized access in programs. The proposed method is designed to manage data containing secure information, and it can improve the information security of programs. In experiments, we show evaluation results and the accuracy of the proposed method.
Chapter
Workflow management systems (WfMS) are a special class of information systems (IS) that support the automated enactment of business processes. There are now WfMS that allow the execution of tasks using mobile computers, like PDAs, with wireless data transmission. However, the employment of workflow systems as well as mobile technologies comes along with special security challenges. One way to tackle these challenges is to employ location-aware access control to enforce rules that describe from which locations a user is allowed to perform which activities. The data model behind access control is termed an Access Control Model (ACM). There are special ACMs for mobile information systems as well as for WfMS, but none that addresses both mobile and workflow-specific aspects. In this article we therefore discuss the specific constraints such a model should be able to express and introduce an appropriate ACM. A special focus is on location constraints for individual workflow instances.
Technical Report
Full-text available
Supervisor: Associate Professor Jingyue Li. Specialization project as pre-work to master's thesis. Conference paper based on this work, with refined statistical analysis: https://www.researchgate.net/publication/340966751_What_Norwegian_Developers_Want_and_Need_From_Security-Directed_Program_Analysis_Tools_A_Survey
Article
The paper presents a generalized method for improving the security of information systems based on protecting the systems from reconnaissance by adversaries. Attacks exploiting almost all vulnerabilities require particular information about the architecture and operating algorithms of an information system. Obstructing the acquisition of that information therefore also complicates carrying out attacks. Reconnaissance-protection methods can be utilized for establishing such systems (continuous change of the attack surface). Practical implementation of the techniques demonstrated their high efficiency in reducing the risk of information resources being cracked or compromised.
Chapter
An access control policy usually consists of a structured set of rules describing when access to a resource should be permitted or denied, based on the attributes of the different entities involved in the access request. A policy containing a large number of rules and attributes can be hard to navigate, making policy editing and fixing a complex task. In some contexts, visualisation techniques are known to be helpful when dealing with similar amounts of complexity; however, finding a useful visual representation is a long process that requires observation, supposition, testing and refinement. In this paper, we report on the design process for a visualisation tool for access control policies, which led to the tool VisABAC. We first present a comprehensive survey of the existing literature, followed by a description of the participatory design of VisABAC. We then describe VisABAC itself, a tool that implements Logic Circle Packing to reduce the cognitive load of working with access control policies. VisABAC is a web-page component, developed in JavaScript using the D3.js library, and easily usable without any particular setup. Finally, we present a testing methodology that we developed to demonstrate usability by conducting a controlled experiment with 32 volunteers; we asked them to change some attribute values in order to obtain a given decision for a policy and measured the time taken by participants to conduct these tasks (the faster, the better). We obtained a small to medium effect size (d = 0.44), which indicates that VisABAC is a promising tool for authoring and editing access control policies.
Chapter
As organisations expand and interconnect, authorisation infrastructures become increasingly difficult to manage. Several solutions have been proposed, including self-adaptive authorisation, where the access control policies are dynamically adapted at run-time to respond to misuse and malicious behaviour. The ultimate goal of self-adaptive authorisation is to reduce human intervention, make authorisation infrastructures more responsive to malicious behaviour, and manage access control in a more cost-effective way. In this chapter, we scope and define the emerging area of self-adaptive authorisation by describing some of its developments, trends, and challenges. For that, we start by identifying key concepts related to access control and authorisation infrastructures and provide a brief introduction to self-adaptive software systems, which provides the foundation for investigating how self-adaptation can enable the enforcement of authorisation policies. The outcome of this study is the identification of several technical challenges related to self-adaptive authorisation, which are classified according to the different stages of a feedback control loop.
Conference Paper
Identity and access management is an important part of IT security in enterprise environments. A wide variety of systems, services and applications require authorizations for users and systems. Using standardized web technologies for token-based authentication and authorization, together with open implementations of identity management, heterogeneous applications can be secured centrally. Using Keycloak as an example, this paper shows the potential of open standards for enterprise security.
Chapter
With the increasing digitalization of society, the importance of digital identities of natural persons continues to grow, and the security of information systems is decisively shaped by the identities of persons. For every access to resources, the identity must be verified and the access authenticated. Measures and procedures for establishing identity are therefore of central importance for the secure operation of information systems. In the following, primarily digital representations of real identities are relevant. The authentication measures and procedures presented are therefore based on, or closely linked to, identity identifiers and attributes of natural persons. Corresponding measures for the identification and authentication of persons are considered in Chap. 5.
Article
Full-text available
This article discusses potential clashes between different types of security policies that regulate employees' requests to access clinical patient data in hospitals. Attribute-Based Access Control (ABAC) is proposed as a suitable means for such regulation. A proper representation of ABAC policies must handle policy attributes across different policy types. In this article, we propose a semantic policy model with predefined policy-conflict categories. A conformance-verification function detects erroneous, clashing, or mutually susceptible rules early, during the policy-planning phase. The model and the conflict categories are applied in a conceptual application environment and evaluated in a technical experiment during an interoperability test event.
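The kind of clash detection the abstract describes can be illustrated with a minimal sketch. This is not the authors' semantic model: the rule representation, the deny-overrides combining choice, and the pairwise clash test below are simplifying assumptions made for illustration.

```python
# Minimal ABAC sketch: a rule is an effect plus attribute conditions;
# potential_clashes flags pairs of rules with opposite effects whose
# conditions can both match the same request.

from dataclasses import dataclass, field

@dataclass
class Rule:
    effect: str                                     # "permit" or "deny"
    conditions: dict = field(default_factory=dict)  # attribute -> required value

    def matches(self, request):
        return all(request.get(a) == v for a, v in self.conditions.items())

def evaluate(rules, request, default="deny"):
    # Deny-overrides combining: any matching deny rule wins.
    decisions = [r.effect for r in rules if r.matches(request)]
    if "deny" in decisions:
        return "deny"
    if "permit" in decisions:
        return "permit"
    return default

def potential_clashes(rules):
    # Two rules may clash if their effects differ and they do not
    # contradict each other on any attribute they both constrain.
    clashes = []
    for i, a in enumerate(rules):
        for b in rules[i + 1:]:
            if a.effect != b.effect and all(
                a.conditions[k] == b.conditions[k]
                for k in a.conditions.keys() & b.conditions.keys()
            ):
                clashes.append((a, b))
    return clashes

rules = [
    Rule("permit", {"role": "physician", "ward": "cardiology"}),
    Rule("deny",   {"role": "physician"}),
]
print(evaluate(rules, {"role": "physician", "ward": "cardiology"}))  # deny
print(len(potential_clashes(rules)))  # 1: both rules can match one request
```

Running such a check at policy-planning time, before deployment, is the point the abstract makes: the clash surfaces before any request is ever evaluated.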
Conference Paper
Full-text available
The rapid evolution and exploitation of web services introduce new security challenges, especially in managing the digital identity life cycle. A range of Identity Management Systems exists to handle these identities, improving the user experience and securing access. Today we face a large number of heterogeneous identity management approaches. In our study we examined several systems, among them the isolated model, the centralized model, the federated model, and the user-centric model. The federated model has proved well suited to identity management, so we focused on it: it is based on sharing digital identity between different security domains, governed by an agreement between the communicating entities. Federated Identity Management (FIM) faces the problem of interoperability between heterogeneous identity federation systems. This study presents a use case of interoperability between SAML and WS-Federation. We propose an approach that allows heterogeneous federation systems to interoperate and exchange identity data.
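One small part of such interoperability is re-keying identity attributes so a relying party in one federation can consume what an identity provider in the other asserted. The sketch below is a deliberately simplified translation layer; the attribute and claim names are hypothetical placeholders, not normative identifiers from the SAML or WS-Federation specifications.

```python
# Illustrative attribute-to-claim translation between two federation
# formats. All names below are made-up placeholders for the sketch.

ATTRIBUTE_MAP = {
    "saml:mail": "wsfed:emailaddress",
    "saml:displayName": "wsfed:name",
    "saml:affiliation": "wsfed:role",
}

def translate(saml_attributes, mapping=ATTRIBUTE_MAP):
    """Re-key asserted identity attributes into the claim names the
    other federation's relying party expects; unmapped attributes
    are dropped rather than forwarded unrecognised."""
    claims = {}
    for name, value in saml_attributes.items():
        target = mapping.get(name)
        if target is not None:
            claims[target] = value
    return claims

assertion = {"saml:mail": "user@example.org", "saml:displayName": "A. User"}
print(translate(assertion))
# {'wsfed:emailaddress': 'user@example.org', 'wsfed:name': 'A. User'}
```

Real interoperation also has to bridge token formats, signatures, and trust agreements; attribute mapping is only the most visible layer.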
Article
Full-text available
Efforts to place vast information resources at the fingertips of each individual in large user populations must be balanced by commensurate attention to information protection. For distributed systems with less-structured tasks, more-diversified information, and a heterogeneous user set, the computing system must administer enterprise-chosen access control policies. One kind of resource is a digital library that emulates massive collections of paper and other physical media for clerical, engineering, and cultural applications. This article considers the security requirements for such libraries and proposes an access control method that mimics organizational practice by combining a subject tree with ad hoc role granting; the method controls privileges for many operations independently, treats (all but one of) the privileged roles (e.g., auditor, security officer) like every other individual authorization, and binds access control information to objects indirectly for scaling, flexibility, and reflexive protection. We sketch a realization and show that it performs well, generalizes many deployed and proposed access control policies, and permits individual data centers to implement other models economically and without disruption.
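The combination of a subject tree with ad hoc role grants can be sketched as follows. This is an illustration of the general idea, not the paper's design: the tree, the privilege sets, and the names are all assumptions made for the example. A subject inherits privileges from its ancestors in the organisational tree, and individually granted roles add further privileges on top.

```python
# Illustrative subject-tree + ad hoc role-grant resolution.
# All names and privilege sets below are invented for the sketch.

TREE = {                      # child -> parent in the subject tree
    "alice": "engineering",
    "engineering": "company",
    "company": None,
}

PRIVILEGES = {                # tree node or role -> permitted operations
    "company": {"read_catalog"},
    "engineering": {"read_drawings"},
    "auditor": {"read_audit_log"},
}

ROLE_GRANTS = {"alice": {"auditor"}}   # ad hoc, per-individual grants

def privileges_of(subject):
    ops = set()
    node = subject
    while node is not None:            # inherit up the subject tree
        ops |= PRIVILEGES.get(node, set())
        node = TREE.get(node)
    for role in ROLE_GRANTS.get(subject, set()):
        ops |= PRIVILEGES.get(role, set())   # add ad hoc role grants
    return ops

print(sorted(privileges_of("alice")))
# ['read_audit_log', 'read_catalog', 'read_drawings']
```

Treating a privileged role like "auditor" as just another grantable authorization, as the abstract describes, is what keeps the mechanism uniform: there is one resolution rule for ordinary users and privileged roles alike.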
Article
The lattice model of non-discretionary access control in a secure computer system was developed in the early seventies [BIaP]. The model was motivated by the controls used by the Defense Department and other "national-security" agencies to regulate people's access to sensitive information. Since that time, the lattice model has enjoyed reasonable success in several computer systems used to process national security classified information [MME; Multics; SACDIN]. "Reasonable success", in this context, means that human beings accept the systems and are able to use them to accomplish useful work, without the protection provided by the non-discretionary controls unduly interfering with productivity or perceived convenience.
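The heart of the lattice model is a dominance relation on security labels: a label is a (level, category set) pair, and label A dominates label B when A's level is at least B's and A's categories include B's. The simple security rule ("no read up") then permits a read only when the subject's label dominates the object's. A minimal sketch, with illustrative levels and categories:

```python
# Dominance check for lattice-model security labels. Levels and
# categories below are illustrative examples, not a fixed scheme.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def dominates(a, b):
    level_a, cats_a = a
    level_b, cats_b = b
    # A dominates B: level at least as high, categories a superset.
    return LEVELS[level_a] >= LEVELS[level_b] and cats_a >= cats_b

def can_read(subject_label, object_label):
    return dominates(subject_label, object_label)   # "no read up"

subject = ("secret", {"crypto", "nuclear"})
print(can_read(subject, ("confidential", {"crypto"})))  # True
print(can_read(subject, ("top_secret", {"crypto"})))    # False: level too high
print(can_read(subject, ("secret", {"navy"})))          # False: missing category
```

Because dominance is a partial order (two labels with incomparable category sets dominate neither each other), the labels form a lattice rather than a simple hierarchy, which is exactly what the model's name refers to.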
Conference Paper
An examination is made of questions concerning commercial computer security integrity policies. An example is given of a dynamic separation-of-duty policy which cannot be implemented by TCSEC-based mechanisms alone, yet occurs in the real commercial world and can be implemented efficiently in practice. A commercial computer security product in wide use for ensuring the integrity of financial transactions is presented. It is shown that it implements a well-defined and sensible integrity policy that includes separation of duty, yet fails to meet either the TCSEC rules or those of D.D. Clark and D.R. Wilson (1987).
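The dynamic flavour of separation of duty mentioned above can be sketched briefly. This is an illustrative simplification, not the product discussed in the paper: the constraint is enforced per transaction, at run-time, rather than by static role assignment, so a user may in general hold both the initiator and approver roles, but never both for the same transaction.

```python
# Illustrative dynamic separation-of-duty check for financial
# transactions; class and method names are assumptions for the sketch.

class TransactionLog:
    def __init__(self):
        self.initiators = {}           # transaction id -> initiating user

    def initiate(self, txn_id, user):
        self.initiators[txn_id] = user

    def approve(self, txn_id, user):
        # Dynamic separation of duty: the constraint binds at the level
        # of the individual transaction, not the user's role set.
        if self.initiators.get(txn_id) == user:
            raise PermissionError("initiator may not approve own transaction")
        return True

log = TransactionLog()
log.initiate("txn-42", "alice")
print(log.approve("txn-42", "bob"))    # True
try:
    log.approve("txn-42", "alice")
except PermissionError as e:
    print(e)                           # initiator may not approve own transaction
```

A static mechanism would have to forbid any user from ever holding both roles; the per-transaction check above is why the policy cannot be captured by static access control structures alone.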