Article

AAA Authorization Framework

Authors:
  • Air France KLM
To read the full-text of this research, you can request a copy directly from the authors.

... This requires the Client and the RS to mutually authenticate, and must permit the RS to verify Client requests as previously authorized. In order to enable fine-grained and flexible access control in the IoT, the Authentication and Authorization for Constrained Environments (ACE) framework has been proposed [3], building on the authorization framework OAuth 2.0 [4]. ...
... A typical security requirement in the Internet is authorization, i.e. the process of granting approval to a client that wants to access a resource [21]. The Open Authorization 2.0 (OAuth 2.0) authorization framework has established itself as one of the most widely adopted standards for enforcing authorization [4]. OAuth 2.0 relies on an Authorization Server (AS) entity, and addresses the common issues of alternative approaches based on credential sharing by introducing a proper authorization layer and separating the role of the actual resource owner from the role of the client accessing a resource. ...
... OAuth [4] defines the overall authentication paradigm resulting in the protocol flows and actors' interaction. ...
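The token-based pattern the excerpts above describe (the client obtains a token from the AS, then presents it to the RS, which verifies that the request was previously authorized) can be sketched as follows. This is a minimal illustration, not the ACE profile itself: the shared key, client names and scopes are all invented, and real deployments use standardized token formats such as CWT or JWT.

```python
import base64
import hashlib
import hmac
import json

# Shared secret between the Authorization Server (AS) and Resource Server (RS);
# in ACE/OAuth this trust relationship is pre-established. Illustrative only.
AS_RS_KEY = b"demo-shared-key"

def issue_token(client_id: str, scope: str) -> str:
    """AS side: issue a token binding a client to an approved scope."""
    payload = json.dumps({"client": client_id, "scope": scope}).encode()
    sig = hmac.new(AS_RS_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token: str, required_scope: str) -> bool:
    """RS side: check token integrity, then check the authorized scope."""
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(AS_RS_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # token was tampered with or not issued by the AS
    return json.loads(payload)["scope"] == required_scope

token = issue_token("sensor-client-01", "read:temperature")
print(verify_token(token, "read:temperature"))  # True
print(verify_token(token, "write:config"))      # False
```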
Conference Paper
The Authentication and Authorization for Constrained Environments (ACE) framework provides fine-grained access control in the Internet of Things, where devices are resource-constrained and with limited connectivity. The ACE framework defines separate profiles to specify how exactly entities interact and what security and communication protocols to use. This paper presents the novel ACE IPsec profile, which specifies how a client establishes a secure IPsec channel with a resource server, contextually using the ACE framework to enforce authorized access to remote resources. The profile makes it possible to establish IPsec Security Associations, either through their direct provisioning or through the standard IKEv2 protocol. We provide the first Open Source implementation of the ACE IPsec profile for the Contiki OS and test it on the resource-constrained Zolertia Firefly platform. Our experimental performance evaluation confirms that the IPsec profile and its operating modes are affordable and deployable also on constrained IoT platforms.
... Since OpenID Connect is built on OAuth 2.0, the token response contains OAuth 2.0 tokens along with the ID token. This makes OpenID Connect a protocol that supports both authentication and authorization [4], [6]. ...
... A. OAuth 2.0 in brief. OAuth 2.0 is an Internet Engineering Task Force (IETF) standard identified by RFC 6749 [6]. The initial version (its predecessor), OAuth, is identified by RFC 5849 [12]. ...
... According to the OAuth 2.0 specification, the resource server and the authorization server can reside in the same server, or in different domains [6]. It is also common to refer to the authorization server by names such as identity provider or identity server. ...
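The dual-token shape of an OpenID Connect response mentioned above can be shown with a short sketch: the access token is an opaque OAuth 2.0 credential for the resource server, while the ID token is a JWT the client decodes to learn who authenticated. All token values, claims and the issuer URL below are made up, and the ID token is unsigned purely for illustration.

```python
import base64
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).decode().rstrip("=")

# A hypothetical (unsigned, illustration-only) ID token payload.
id_payload = {"iss": "https://idp.example", "sub": "alice", "aud": "client-123"}
id_token = ".".join([b64url(b'{"alg":"none"}'),
                     b64url(json.dumps(id_payload).encode()), ""])

# The OIDC token response carries both tokens side by side.
token_response = {
    "access_token": "SlAV32hkKG",  # OAuth 2.0 token, consumed by the resource server
    "token_type": "Bearer",
    "expires_in": 3600,
    "id_token": id_token,          # JWT consumed by the client for authentication
}

# Client side: decode the ID token payload to learn who logged in.
payload_b64 = token_response["id_token"].split(".")[1]
payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
claims = json.loads(base64.urlsafe_b64decode(payload_b64))
print(claims["sub"])  # alice
```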
Preprint
Full-text available
Authentication and authorization are two key elements of a software application. Today, the OAuth 2.0 framework and the OpenID Connect protocol are widely adopted standards fulfilling these requirements. The protocols are implemented in authorization servers. It is common to call these authorization servers identity servers or identity providers, since they hold user identity information. Applications registered to an identity provider can use OpenID Connect to retrieve an ID token for authentication. The access token obtained along with the ID token allows the application to consume OAuth 2.0 protected resources. In this approach, the client application is bound to a single identity provider. If the application needs to consume a protected resource from a different domain, which only accepts tokens of a defined identity provider, then the client must again follow the OpenID Connect protocol to obtain new tokens. This requires user identity details to be stored in the second identity provider as well. This paper proposes an extension to the OpenID Connect protocol to overcome this issue. It proposes a client-centric mechanism to exchange identity information as token grants against a trusted identity provider. Once the grant is accepted, the resulting token response contains an access token which is sufficient to access protected resources.
... It is important to have a strong foundation while designing such an IAM framework for multi-tenant platforms to ensure security, privacy, and data protection. While we were working out the design to extend to a hybrid model, we had three guiding principles in mind: a) Authentication, b) Authorization and c) Auditing, commonly known as the "AAA" principle [8]. Authentication, also known as AuthN, is the process of verifying the identity of a user or a process. ...
... As discussed in section II, our framework follows the best security practices like Authentication, Authorization, and Auditing (AAA) [8] and the Principle of Least Privilege [9]. ...
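The "AAA" principle cited in the two excerpts above can be sketched as a minimal request gate: every request is first authenticated, then authorized, and every decision is audited, including denials. The users, grants and actions below are invented for illustration.

```python
import datetime

# Toy identity and permission stores; all entries are illustrative.
USERS = {"alice": "s3cret"}
GRANTS = {("alice", "read:bucket")}
AUDIT_LOG = []

def handle(user: str, password: str, action: str) -> bool:
    authn = USERS.get(user) == password             # Authentication (AuthN)
    authz = authn and (user, action) in GRANTS      # Authorization (AuthZ)
    # Auditing: record every decision, allowed or not.
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    AUDIT_LOG.append(f"{stamp} user={user} action={action} allowed={authz}")
    return authz

print(handle("alice", "s3cret", "read:bucket"))    # True
print(handle("alice", "s3cret", "delete:bucket"))  # False, but still audited
```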
Preprint
Full-text available
While more organizations have been trying to move their infrastructure to the cloud in recent years, there have been significant challenges in how identities and access are managed in a hybrid cloud setting. This paper showcases a novel identity and access management framework for shared resources in a multi-tenant hybrid cloud environment. The paper demonstrates a method to implement the "mirror" identities of on-premise identities in the cloud. Following the best security practices, the framework ensures that only rightful users can use their mirror identities in the cloud. Furthermore, the paper also proposes a technique in scaling the framework to accommodate large-scale enterprises. The framework exhibited in the paper provides a comprehensive and scalable solution for enterprises to implement identity and access control in their hybrid cloud infrastructure. Although the paper focuses on implementing the framework in Google Cloud Platform, it can be easily applied to any major public cloud platform.
... These access tokens grant the specific data access rights needed by the jobs, limiting exposure to abuse. These tokens comply with the IETF OAuth standard [15], enabling interoperability with the many public cloud storage and computing services that have adopted this standard. By improving the interoperability and security of scientific workflows, we 1) enable use of distributed computing for scientific domains that require greater data protection and 2) enable use of more widely distributed computing resources by reducing the risk of credential abuse on remote systems. ...
... Over 4,500 scientists regularly use CILogon for authentication, including over 200 LIGO scientists. CILogon includes support for OAuth [15] and the OpenID Connect [25] standards, using open source software originally developed for NSF science gateways [4]. This OAuth software contains lightweight Java OAuth client/server libraries, with support for JSON Web Tokens [16], which we use for our SciTokens implementation. ...
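Since SciTokens builds on JSON Web Tokens, a stdlib-only HS256 sketch shows what signing and verifying such a capability token involves. The key and claims are illustrative; real deployments typically use asymmetric keys and a maintained JWT library rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).decode().rstrip("=")

def sign_jwt(claims: dict, key: bytes) -> str:
    """Produce a compact HS256 JWT (RFC 7519 structure)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, key: bytes) -> dict:
    """Check the signature and return the claims, or raise on tampering."""
    header, payload, sig = token.split(".")
    expected = b64url(hmac.new(key, f"{header}.{payload}".encode(),
                               hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

key = b"workflow-signing-key"  # illustrative secret
token = sign_jwt({"sub": "ligo-job-42", "scope": "read:/frames"}, key)
print(verify_jwt(token, key)["scope"])  # read:/frames
```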
Conference Paper
The management of security credentials (e.g., passwords, secret keys) for computational science workflows is a burden for scientists and information security officers. Problems with credentials (e.g., expiration, privilege mismatch) cause workflows to fail to fetch needed input data or store valuable scientific results, distracting scientists from their research by requiring them to diagnose the problems, re-run their computations, and wait longer for their results. In this paper, we introduce SciTokens, open source software to help scientists manage their security credentials more reliably and securely. We describe the SciTokens system architecture, design, and implementation addressing use cases from the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration and the Large Synoptic Survey Telescope (LSST) projects. We also present our integration with widely-used software that supports distributed scientific computing, including HTCondor, CVMFS, and XrootD. SciTokens uses IETF-standard OAuth tokens for capability-based secure access to remote scientific data. The access tokens convey the specific authorizations needed by the workflows, rather than general-purpose authentication impersonation credentials, to address the risks of scientific workflows running on distributed infrastructure including NSF resources (e.g., LIGO Data Grid, Open Science Grid, XSEDE) and public clouds (e.g., Amazon Web Services, Google Cloud, Microsoft Azure). By improving the interoperability and security of scientific workflows, SciTokens 1) enables use of distributed computing for scientific domains that require greater data protection and 2) enables use of more widely distributed computing resources by reducing the risk of credential abuse on remote systems.
... To achieve this, after studying the main communication protocols and the most important access control schemes which can be used in IoT environments, we have selected a scheme based on the Open Authorization (OAuth) 2.0 profile called User-Managed Access (UMA) [4], designed mainly for Web-based services, which offers a great deal of granularity in access control. The OAuth 2.0 protocol [5] is a de facto standard for this situation, while offering a high level of granularity and ease of use. The main component in OAuth is the authorization server, which performs the access control tasks. ...
... OAuth is a framework designed to provide an access control scheme for Web services and applications. It is probably the most widely used framework in this kind of environment (counting both versions 1.0 and 2.0 [5]), and consequently many efforts have been made to provide OAuth-based solutions for IoT (for instance, works like [34]). In [35] an implementation can be found for CoAP, and there is another one for the MQTT protocol in [36]. ...
Article
Full-text available
Internet growth has generated new types of services where the use of sensors and actuators is especially remarkable. These services compose what is known as the Internet of Things (IoT). One of the biggest current challenges is obtaining a safe and easy access control scheme for the data managed in these services. We propose integrating IoT devices in an access control system designed for Web-based services by modelling certain IoT communication elements as resources. This would allow us to obtain a unified access control scheme between heterogeneous devices (IoT devices, Internet-based services, etc.). To achieve this, we have analysed the most relevant communication protocols for these kinds of environments and then we have proposed a methodology which allows the modelling of communication actions as resources. Then, we can protect these resources using access control mechanisms. The validation of our proposal has been carried out by selecting a communication protocol based on message exchange, specifically Message Queuing Telemetry Transport (MQTT). As an access control scheme, we have selected User-Managed Access (UMA), an existing Open Authorization (OAuth) 2.0 profile originally developed for the protection of Internet services. We have performed tests focused on validating the proposed solution in terms of the correctness of the access control system. Finally, we have evaluated the energy consumption overhead when using our proposal.
... II. RELATED WORK Identity management is commonly addressed by using well-known technologies, such as the Security Assertion Markup Language (SAML) [3], OpenID [4], OAuth [5] or WS-Federation [6]. These technologies are, in turn, used as a baseline by most of the European research projects related to the ARIES EU project. ...
... User consent will be obtained prior to transferring any personal information. Interaction with legacy non-ARIES IdPs can be also achieved by contacting those IdPs via standard protocols such as SAML [3], OAuth2 [5], etc. ...
Conference Paper
As the Internet of Things evolves, security and privacy aspects are becoming the main barriers in the development of innovative and valuable services that will transform our society. One of the biggest challenges in IoT lies in the design of secure and privacy-preserving solutions guaranteeing privacy properties such as anonymity, unlinkability, and minimal disclosure of personally identifiable information, as well as assuring security properties such as content integrity and authenticity. In this regard, this paper provides a data provenance solution that meets those properties, enabling privacy-preserving identity auditing of the IoT sensors' exchanged data, while allowing de-anonymization of the real owner identity of the associated IoT shared data in case law-enforcement inspection is needed (e.g. identity theft or related cyber-crimes). This research is built on the foundations of the ARIES European identity ecosystem for highly secure and privacy-respecting physical and virtual identity management processes.
... For the user to access their GCS data seamlessly, the Demigod services also apply a few permissions on the buckets, abiding by the Authentication, Authorization, and Accounting (AAA) principle [9] and the Principle of Least Privilege [10]. The user's GSuite identity and their shadow service account identity get "owner" permissions on the GCS bucket. ...
Preprint
Full-text available
Implementing big data storage at scale is a complex and arduous task that requires an advanced infrastructure. With the rise of public cloud computing, various big data management services can be readily leveraged. As a critical part of Twitter's "Project Partly Cloudy", the cold storage data and analytics systems are being moved to the public cloud. This paper showcases our approach in designing a scalable big data storage and analytics management framework using BigQuery in Google Cloud Platform while ensuring security, privacy, and data protection. The paper also discusses the limitations on the public cloud resources and how they can be effectively overcome when designing a big data storage and analytics solution at scale. Although the paper discusses the framework implementation in Google Cloud Platform, it can easily be applied to all major cloud providers.
... In [26], a nomenclature for access control systems is defined, comprising the Policy Decision Point (PDP), the Policy Enforcement Point (PEP), and the Policy Administration Point (PAP). All these functionalities are encapsulated within the AUTHZ element, since these are the basic access control components commonly utilized, providing the system model with an abstraction of the access control features. ...
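The PDP/PEP/PAP split referenced above can be sketched as follows: the PAP administers policies, the PDP evaluates requests against them, and the PEP intercepts requests and enforces the decision. The policy format and the example policy are invented for illustration.

```python
# PAP: the administration point, where policies are created and stored.
# Each policy is a flat subject/action/resource triple (illustrative format).
POLICIES = [{"subject": "nurse", "action": "read", "resource": "chart"}]

def pdp_decide(subject: str, action: str, resource: str) -> bool:
    """PDP: evaluate the request against the policy store."""
    request = {"subject": subject, "action": action, "resource": resource}
    return any(policy == request for policy in POLICIES)

def pep_intercept(subject: str, action: str, resource: str) -> str:
    """PEP: intercept the request and enforce the PDP's decision."""
    return "PERMIT" if pdp_decide(subject, action, resource) else "DENY"

print(pep_intercept("nurse", "read", "chart"))   # PERMIT
print(pep_intercept("nurse", "write", "chart"))  # DENY
```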
Article
Full-text available
In this paper, an approach referred to as IoTsecM is proposed. This proposal is a UML/SysML extension for security requirements modeling within the analysis stage in a waterfall development life cycle in a Model-Based Systems Engineering Approach. IoTsecM allows the security requirements representation in two very well-known modeling languages, UML and SysML. With the utilization of this extension, IoT developers can consider the security requirements from the analysis stage in the design process of IoT systems. IoTsecM allows IoT systems to be designed considering possible threats and the corresponding security requirements analysis. The applicability of IoTsecM is demonstrated through applying it to analyze and represent the security requirements in an IoT real-life system in the context of collaborative autonomous vehicles in smart cities. In this use case, IoTsecM was able to represent the security requirements identified within the system architecture elements, in which all countermeasures identified were depicted using the proposed IoTsecM profile.
... For example, the already mentioned NIST report [13] considers a scenario where devices are provisioned with certificates to associate device authentication with MUD files. Furthermore, the proposed architecture uses an Authentication, Authorization and Accounting (AAA) infrastructure [58] based on the Remote Authentication Dial-In User Service (RADIUS) protocol [59], so that the router/switch can communicate the MUD URL to the MUD Manager. A similar approach is also proposed by [60], where MUD Manager's functionality is integrated into fog nodes [61]. ...
Article
Full-text available
With the strong development of the Internet of Things (IoT), the definition of IoT devices’ intended behavior is key for an effective detection of potential cybersecurity attacks and threats in an increasingly connected environment. In 2019, the Manufacturer Usage Description (MUD) was standardized within the IETF as a data model and architecture for defining, obtaining and deploying MUD files, which describe the network behavioral profiles of IoT devices. While it has attracted a strong interest from academia, industry, and Standards Developing Organizations (SDOs), MUD is not yet widely deployed in real-world scenarios. In this work, we analyze the current research landscape around this standard, and describe some of the main challenges to be considered in the coming years to foster its adoption and deployment. Based on the literature analysis and our own experience in this area, we further describe potential research directions exploiting the MUD standard to encourage the development of secure IoT-enabled scenarios.
... Popular authorization models include the Discretionary Access Control (DAC) model, the Mandatory Access Control (MAC) model, Role-Based Access Control (RBAC) and its extensions, the Attribute-Based Access Control (ABAC) model [15], the Organization-Based Access Control (OrBAC) model [16], and Usage Control (UCON) [17], etc. - Architecture: this layer describes the entities, the workflow and the interactions between them (centralized, decentralized, hybrid, cloud-based, ...). Given this set of entities, several authorization sequences can be defined, such as the Push, Pull or Agent sequences [18]. The most popular authorization architecture is published as an ISO standard for the access control framework, ISO/IEC 10181-3, which defines the main features of the reference monitor [19]. ...
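Among the models listed above, ABAC is the easiest to show in code: the decision is a predicate over subject, resource and environment attributes rather than a role lookup. The attributes and threshold values below are invented for illustration.

```python
# Toy ABAC check: permit only if the subject's clearance covers the resource's
# sensitivity, departments match, and the request arrives from the right
# network segment. All attribute names and values are illustrative.
def abac_permit(subject: dict, resource: dict, env: dict) -> bool:
    return (subject["clearance"] >= resource["sensitivity"]
            and subject["dept"] == resource["owner_dept"]
            and env["network"] == "scada-ops")

operator = {"clearance": 3, "dept": "operations"}
pump_ctrl = {"sensitivity": 2, "owner_dept": "operations"}

print(abac_permit(operator, pump_ctrl, {"network": "scada-ops"}))   # True
print(abac_permit(operator, pump_ctrl, {"network": "corporate"}))   # False
```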
Article
Supervisory control and data acquisition (SCADA) systems are used in critical infrastructure to control vital sectors such as smart grids, oil pipelines, water treatment, chemical manufacturing plants, etc. Any malicious or accidental intrusion could cause dramatic human, material and economic damage. Thus, the security of SCADA systems is very important, not only to keep the continuity of services (i.e., availability) against hostile and cyber-terrorist attacks, but also to ensure the resilience and integrity of processes and actions. Dealing with this issue, this paper discusses SCADA vulnerabilities and security threats, with a focus on recent ones. Then, we define a holistic methodology to derive the suitable security mechanisms for this kind of critical system. Our methodology starts by identifying the security needs and objectives, specifying the security policies and models, deriving the adapted architecture and, finally, implementing the security mechanisms that satisfy the needs and cover the risks. We focus on the modelling step by proposing the new CI-OrBAC model, and on securing communication and protecting SCADA against both internal and external threats while satisfying the self-healing, intrusion-tolerance, integrity, scalability and collaboration needs.
... EAP-NOOB is proposed for the registration, authentication and key derivation of IoT devices that have a minimal user interface and no pre-configured authentication credentials, and could also be a candidate for authenticating LPWA devices due to similar device characteristics. On the other hand, IETF recommends AAA framework [111] to tackle some of the security issues of LPWA networks [112], and it has been considered as one of the technologies to secure IoT deployments in [36]. Authentication of the massive number of nodes that number in the hundreds of thousands in some scenarios is a challenge for LPWA technologies. ...
Preprint
Low power wide area (LPWA) technologies are strongly recommended as the underlying networks for Internet of things (IoT) applications. They offer attractive features, including wide-range coverage, long battery life and low data rates. This paper reviews the current trends in this technology, with an emphasis on the services it provides and the challenges it faces. The industrial paradigms for LPWA implementation are presented. Compared with other work in the field, this survey focuses on the need for integration among different LPWA technologies and recommends the appropriate LPWA solutions for a wide range of IoT application and service use-cases. Opportunities created by these technologies in the market are also analyzed. The latest research efforts to investigate and improve the operation of LPWA networks are also compared and classified to enable researchers to quickly get up to speed on the current status of this technology. Finally, challenges facing LPWA are identified and directions for future research are recommended.
... EAP-NOOB is proposed for the registration, authentication and key derivation of IoT devices that have a minimal user interface and no pre-configured authentication credentials, and could also be a candidate for authenticating LPWA devices due to similar device characteristics. On the other hand, IETF recommends AAA framework [111] to tackle some of the security issues of LPWA networks [112], and it has been considered as one of the technologies to secure IoT deployments in [36]. Authentication of the massive number of nodes that number in the hundreds of thousands in some scenarios is a challenge for LPWA technologies. ...
Preprint
Full-text available
Low-power wide area (LPWA) technologies are strongly recommended as the underlying networks for Internet of Things (IoT) applications. They offer attractive features, including wide-range coverage, long battery life, and low data rates. This paper reviews the current trends in this technology, with an emphasis on the services it provides and the challenges it faces. The industrial paradigms for LPWA implementation are presented. Compared with other work in the field, this paper focuses on the need for integration among different LPWA technologies and recommends the appropriate LPWA solutions for a wide range of IoT application and service use cases. Opportunities created by these technologies in the market are also analyzed. The latest research efforts to investigate and improve the operation of LPWA networks are also compared and classified to enable researchers to quickly get up to speed on the current status of this technology. Finally, challenges facing LPWA are identified and directions for future research are recommended.
... A number of generic access control frameworks and architectures exist that could conceivably be used to support HGABAC. The most notable of these are the Security Assertion Markup Language (SAML) [Hughes and Maler 2005], the eXtensible Access Control Markup Language (XACML) [Anderson et al. 2003] and the AAA Authorization Framework [Vollbrecht et al. 2000]. SAML provides an XML-based standard for exchanging authentication and authorization information commonly used for Single Sign-On (SSO). ...
Thesis
Full-text available
Attribute-Based Access Control (ABAC) is a promising alternative to traditional models of access control (i.e. Discretionary Access Control (DAC), Mandatory Access Control (MAC) and Role-Based Access Control (RBAC)) that has drawn attention in both recent academic literature and industry application. However, formalization of a foundational model of ABAC and large-scale adoption is still in its infancy. The relatively recent popularity of ABAC still leaves a number of problems unexplored. Issues like delegation, administration, auditability, scalability, hierarchical representations, etc. have been largely ignored or left to future work. This thesis seeks to aid in the adoption of ABAC by filling in several of these gaps. The core contribution of this work is the Hierarchical Group and Attribute-Based Access Control (HGABAC) model, a novel formal model of ABAC which introduces the concept of hierarchical user and object attribute groups to ABAC. It is shown that HGABAC is capable of representing the traditional models of access control (MAC, DAC and RBAC) using this group hierarchy and that in many cases its use simplifies both attribute and policy administration. HGABAC serves as the basis upon which extensions are built to incorporate delegation into ABAC. Several potential strategies for introducing delegation into ABAC are proposed, categorized into families, and the trade-offs of each are examined. One such strategy is formalized into a new User-to-User Attribute Delegation model, built as an extension to the HGABAC model. Attribute Delegation enables users to delegate a subset of their attributes to other users in an "off-line" manner (not requiring connecting to a third party). Finally, a supporting architecture for HGABAC is detailed, including descriptions of services, high-level communication protocols and a new low-level attribute certificate format for exchanging user and connection attributes between independent services.
Particular emphasis is placed on ensuring support for federated and distributed systems. Critical components of the architecture are implemented and evaluated with promising preliminary results. It is hoped that the contributions in this research will further the acceptance of ABAC in both academia and industry by solving the problem of delegation as well as simplifying administration and policy authoring through the introduction of hierarchical user groups.
... EAP-NOOB is proposed for the registration, authentication and key derivation of IoT devices that have a minimal user interface and no pre-configured authentication credentials, and could also be a candidate for authenticating LPWA devices due to similar device characteristics. On the other hand, IETF recommends AAA framework [111] to tackle some of the security issues of LPWA networks [112], and it has been considered as one of the technologies to secure IoT deployments in [36]. Authentication of the massive number of nodes that number in the hundreds of thousands in some scenarios is a challenge for LPWA technologies. ...
Article
Low power wide area (LPWA) technologies are strongly recommended as the underlying networks for Internet of things (IoT) applications. They offer attractive features, including wide-range coverage, long battery life and low data rates. This paper reviews the current trends in this technology, with an emphasis on the services it provides and the challenges it faces. The industrial paradigms for LPWA implementation are presented. Compared with other work in the field, this survey focuses on the need for integration among different LPWA technologies and recommends the appropriate LPWA solutions for a wide range of IoT application and service use-cases. Opportunities created by these technologies in the market are also analyzed. The latest research efforts to investigate and improve the operation of LPWA networks are also compared and classified to enable researchers to quickly get up to speed on the current status of this technology. Finally, challenges facing LPWA are identified and directions for future research are recommended.
... It is a term used to refer to a family of protocols that mediate network-based access [72], managing the authentication and authorization of users and the accounting of network resource information between a Network Access Server (NAS) and an Authentication Server. ...
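The NAS-to-authentication-server exchange described above can be sketched in a RADIUS-like style, with a shared secret protecting the integrity of the forwarded request. This is a conceptual sketch only: the message format, field names and secret below are invented, not the actual RADIUS packet layout (which also obfuscates the password rather than sending it in the clear).

```python
import hashlib
import hmac

# Shared secret configured on both the NAS and the AAA server (illustrative).
SHARED_SECRET = b"nas-server-secret"

def nas_request(username: str, password: str) -> dict:
    """NAS side: forward credentials plus an integrity check to the server."""
    mac = hmac.new(SHARED_SECRET, f"{username}:{password}".encode(),
                   hashlib.sha256).hexdigest()
    return {"user": username, "password": password, "mac": mac}

def server_authenticate(req: dict, user_db: dict) -> str:
    """Server side: verify integrity, then authenticate against the user DB."""
    expected = hmac.new(SHARED_SECRET,
                        f"{req['user']}:{req['password']}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(req["mac"], expected):
        return "Access-Reject"  # request altered in transit
    ok = user_db.get(req["user"]) == req["password"]
    return "Access-Accept" if ok else "Access-Reject"

req = nas_request("alice", "pw123")
print(server_authenticate(req, {"alice": "pw123"}))  # Access-Accept
```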
Article
Full-text available
Federated Identity Management is a method that facilitates the management of identity processes and policies among collaborating entities without centralized control. Nowadays there are many Federated Identity solutions; however, most of them cover different aspects of the identification problem, in some cases solving specific problems. Thus, none of these initiatives has consolidated as a unique solution, and this will surely remain the case in the near future. To assist users in choosing a possible solution, we analyze different Federated Identity approaches, showing their main features and making a comparative study among them. The former problem is even worse when multiple organizations or countries already have legacy eID systems, as is the case in Europe. In this paper, we also present the European eID solution, a purely Federated Identity system that aims to serve almost 500 million people and that could be extended in the mid-term to company eIDs as well. The system is now being deployed at the EU level; we present the basic architecture and evaluate its performance and scalability, showing that the solution is feasible from the point of view of performance while keeping security constraints in mind. The results show good performance of the solution in local, organizational, and remote environments.
... In the Internet Engineering Task Force (IETF), the working group Authentication and Authorization for Constrained Environments (ACE) is in the process of standardizing an authentication and authorization framework for the IoT [9]. The framework is loosely based on OAuth [6] and addresses scenarios where a client (C) contacts an authorization server (AS) to obtain an access token that it then can use to prove its authorization to a resource server (RS). The overseeing principal for the RS and AS is called resource owner (RO) in this architecture. ...
... Unfortunately, smart contract operations only occur in the blockchain space to ensure deterministic outcomes. Services (such as OAuth [46]) that exist off the blockchain therefore cannot be used. Given this constraint, incorporating other alternatives to provide data access permissioning should be a key component of a blockchain-based design. ...
Preprint
Secure and scalable data sharing is essential for collaborative clinical decision making. Conventional clinical data efforts are often siloed, however, which creates barriers to efficient information exchange and impedes effective treatment decisions for patients. This paper provides four contributions to the study of applying blockchain technology to clinical data sharing in the context of technical requirements defined in the "Shared Nationwide Interoperability Roadmap" from the Office of the National Coordinator for Health Information Technology (ONC). First, we analyze the ONC requirements and their implications for blockchain-based systems. Second, we present FHIRChain, which is a blockchain-based architecture designed to meet ONC requirements by encapsulating the HL7 Fast Healthcare Interoperability Resources (FHIR) standard for shared clinical data. Third, we demonstrate a FHIRChain-based decentralized app using digital health identities to authenticate participants in a case study of collaborative decision making for remote cancer care. Fourth, we highlight key lessons learned from our case study.
... OAuth 2.0 [25] is a web protocol that enables resource owners to grant controlled access to resources hosted at remote servers. Typically, OAuth 2.0 is also used for authenticating the resource owner to third parties by giving them access to the resource owner's identity stored at an identity provider. ...
Preprint
We present WPSE, a browser-side security monitor for web protocols designed to ensure compliance with the intended protocol flow, as well as confidentiality and integrity properties of messages. We formally prove that WPSE is expressive enough to protect web applications from a wide range of protocol implementation bugs and web attacks. We discuss concrete examples of attacks which can be prevented by WPSE on OAuth 2.0 and SAML 2.0, including a novel attack on the Google implementation of SAML 2.0 which we discovered by formalizing the protocol specification in WPSE. Moreover, we use WPSE to carry out an extensive experimental evaluation of OAuth 2.0 in the wild. Out of 90 tested websites, we identify security flaws in 55 websites (61.1%), including new critical vulnerabilities introduced by tracking libraries such as Facebook Pixel, all of which are fixable by WPSE. Finally, we show that WPSE works flawlessly on 83 websites (92.2%), with the 7 compatibility issues being caused by custom implementations deviating from the OAuth 2.0 specification, one of which introduces a critical vulnerability.
... Finally, the founder is requested to log in with their Google account, in order to provide an authenticated email and social profile. We chose to use an OAuth2-based [23] login flow with an existing service provider to simplify the login experience, and to avoid having to store user credentials. Google was the platform of choice on account of it providing verified email address information for users, as well as to unify the authentication experience in the case that the founder also decides to provide API access to their email inbox (for the purpose of synchronizing their conversations with investors). ...
Preprint
The process of matching startup founders with venture capital investors is a necessary first step for many modern technology companies, yet there have been few attempts to study the characteristics of the two parties and their interactions. Surprisingly little has been shown quantitatively about the process, and many of the common assumptions are based on anecdotal evidence. In this thesis, we aim to learn more about the matching component of the startup fundraising process. We begin with a tool (VCWiz), created from the current set of best-practices to help inexperienced founders navigate the founder-investor matching process. The goal of this tool is to increase efficiency and equitability, while collecting data to inform further studies. We use this data, combined with public data on venture investments in the USA, to draw conclusions about the characteristics of venture financing rounds. Finally, we explore the communication data contributed to the tool by founders who are actively fundraising, and use it to learn which social attributes are most beneficial for individuals to possess when soliciting investments.
... OAuth is a protocol that enables a third-party application to access resources on a hypertext transfer protocol (HTTP) service on behalf of a resource owner [42]. The OAuth protocol flow consists of the following three main parts. ...
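The three-role OAuth exchange excerpted above (client, authorization server, resource server) can be sketched in-process. This is a hedged illustration only, not the RFC 6749 wire format: the function names, the client ID, and the HMAC-signed token layout are all invented for the example.

```python
import hmac, hashlib, secrets

AS_KEY = secrets.token_bytes(32)  # shared between authorization server and resource server

def issue_token(client_id: str, scope: str) -> str:
    """Authorization server: issue a signed bearer token (illustrative format)."""
    payload = f"{client_id}|{scope}"
    sig = hmac.new(AS_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def access_resource(token: str, requested_scope: str) -> str:
    """Resource server: validate the token's signature and scope before serving."""
    client_id, scope, sig = token.rsplit("|", 2)
    expected = hmac.new(AS_KEY, f"{client_id}|{scope}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected) or scope != requested_scope:
        return "403 Forbidden"
    return f"200 OK: resource for {client_id}"

# Client: obtain a token from the AS, then present it to the RS.
token = issue_token("client-42", "read")
print(access_resource(token, "read"))   # authorized
print(access_resource(token, "write"))  # scope mismatch, rejected
```

In a real deployment the token request and resource request would be separate HTTPS exchanges; the in-process calls here only show the ordering of the flow.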
Article
The confluence of two emerging paradigms, Internet-of-things and sharing economy, has encouraged people to share their assets, which could include personal devices, with others. A typical example of such altruistic device sharing is ‘tethering’ in cellular networks: an owner who uses a smartphone relays data from/to base stations for others who do not have direct connectivity to cellular networks. However, when people share devices, they would be concerned about costs such as battery or bandwidth. Device owners generally want to reduce their costs when they share their devices with someone who is less socially close to them. This is because it has been reported that our altruistic behavior has a clear correlation with social closeness; the less close someone is to you, the fewer altruistic actions you take towards that person. Therefore, we propose a system that uses online social relationships to meet device owners’ demand for shared-resource management to enable altruistic device sharing. By acquiring and evaluating online social relationships between a device owner and user, the proposed system automatically determines how much resources the user is allowed to use. In this study, we implemented a prototype system to measure its authentication overhead. Using this actual overhead measured on the prototype system, we conducted a simulation with a large-scale dataset of a real social network to verify that i) the proposed system limits the resource usage for guest users who are not as close to the device owners, and ii) the overhead of the authentication process in the proposed system does not interfere with the resource sharing with guest users who are close to the device owners.
... Collected data from heterogeneous sensors are stored on individual sensor providers' servers. The sensor adapter connects servers of sensor producers using a personalised sensor model and is authorised by OAuth (Hardt, 2012) open standard protocol. The system uses representational state transfer (RESTful) APIs to synchronise the data. ...
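The OAuth-authorized RESTful synchronization described above boils down to attaching a bearer token to each API call. A minimal sketch using Python's standard library follows; the URL, endpoint path, and token value are placeholders, not part of any real sensor platform.

```python
from urllib import request

def authorized_request(url: str, access_token: str) -> request.Request:
    """Build a RESTful GET request carrying an OAuth 2.0 bearer token
    (the Authorization header form defined in RFC 6750)."""
    req = request.Request(url, method="GET")
    req.add_header("Authorization", f"Bearer {access_token}")
    req.add_header("Accept", "application/json")
    return req

# Hypothetical sensor-provider endpoint; the request object is built
# here but would be sent with urllib.request.urlopen(req) in practice.
req = authorized_request("https://sensors.example.com/api/v1/readings", "ACCESS_TOKEN")
print(req.get_header("Authorization"))
```

The token itself would be obtained beforehand through one of the OAuth grant flows; the sensor adapter only needs to refresh and attach it on each synchronization call.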
... A number of generic access control frameworks and architectures exist that could conceivably be used to support HGABAC. The most notable of these are the Security Assertion Markup Language (SAML) [9], the eXtensible Access Control Markup Language (XACML) [1] and the AAA Authorization Framework [18]. SAML provides an XML-based standard for exchanging authentication and authorization information commonly used for single sign-on (SSO). ...
Conference Paper
Full-text available
Attribute-Based Access Control (ABAC), a promising alternative to traditional models of access control, has gained significant attention in recent academic literature. This attention has led to the creation of a number of ABAC models including our previous contribution, Hierarchical Group and Attribute-Based Access Control (HGABAC). However, to date few complete solutions exist that provide both an ABAC model and an architecture that could be implemented in real life scenarios. This work aims to advance progress towards a complete ABAC solution by introducing the Hierarchical Group Attribute Architecture (HGAA), an architecture to support HGABAC and close the gap between a model and a real world implementation. In addition to HGAA we also present an attribute certificate specification that enables users to provide proof of attribute ownership in a pseudonymous and off-line manner, as well as an update to the Hierarchical Group Policy Language (HGPL) to support our namespace for uniquely identifying attributes across disparate security domains. Details of our HGAA implementation are given and a preliminary analysis of its performance is discussed as well as directions for future work.
... Modern authorization standards, like OAuth (RFC 6749 [22]), enable users to grant access to their data and process to third parties without disclosing the user's authentication data. User Managed Access (UMA [23]) is a protocol based on OAuth which enables the user to define policies. ...
... To access network resources, NAC requires the authentication of users/devices and further authorization following the AAA (authentication, authorization, and accounting) framework [29]. A user obtains authentication by many means such as passwords, certificates, and tokens, and authorization by three methods: an agent sequence, in which a user/device contacts an AAA server; a pull sequence, in which a user/device contacts a resource that then contacts the AAA server; and a push sequence, in which a user/device contacts an AAA server, receives a ticket, and then presents the ticket to a resource, which validates it [102]. ...
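Of the three sequences above, the pull sequence can be sketched with two toy classes: the user contacts the resource, and the resource pulls an authorization decision from the AAA server. All names, credentials, and the permission store are illustrative; a real deployment would use a protocol such as RADIUS or Diameter rather than in-process calls.

```python
class AAAServer:
    """Toy AAA server holding a credential store and per-user permissions."""
    def __init__(self):
        self.credentials = {"alice": "s3cret"}
        self.permissions = {"alice": {"printer", "vpn"}}

    def authorize(self, user: str, password: str, resource: str) -> bool:
        # Authentication precedes authorization, per the AAA framework.
        if self.credentials.get(user) != password:
            return False
        return resource in self.permissions.get(user, set())

class Resource:
    """Pull sequence: the user contacts the resource, which queries the AAA server."""
    def __init__(self, name: str, aaa: AAAServer):
        self.name, self.aaa = name, aaa

    def handle(self, user: str, password: str) -> str:
        return "granted" if self.aaa.authorize(user, password, self.name) else "denied"

aaa = AAAServer()
vpn = Resource("vpn", aaa)
print(vpn.handle("alice", "s3cret"))  # valid credentials, permitted resource
print(vpn.handle("alice", "wrong"))   # authentication fails
```

The agent and push sequences differ only in who talks to the AAA server first; in the push sequence the decision travels as a ticket presented by the user rather than being fetched by the resource.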
Thesis
To interconnect research facilities across wide geographic areas, network operators deploy science networks, also referred to as Research and Education (R&E) networks. These networks allow experimenters to establish dedicated circuits between research facilities for transferring large amounts of data, by using advanced reservation systems. Intercontinental dedicated circuits typically require coordination between multiple administrative domains, which need to reach an agreement on a suitable advance reservation. To enhance provisioning capabilities of multi-domain advance reservations, we propose an architecture for end-to-end service orchestration in multi-domain science networks that leverages software-defined networking (SDN) and software-defined exchanges (SDX) for providing multi-path, multi-domain advance reservations. Our simulations show our orchestration architecture increases the reservation success rate. We evaluate our solution using GridFTP, one of the most popular tools for data transfers in the scientific community. Additionally, we propose an interface that domain scientists can use to request science network services from our orchestration framework. Furthermore, we propose a federated auditing framework (FAS) that allows an SDX to verify whether the configurations requested by a user are correctly enforced by participating SDN domains, whether the configurations requested are correctly removed after their expiration time, and whether configurations exist that are performing non-requested actions. We also propose an architecture for advance reservation access control using SDN and tokens.
... Such mechanisms ensure that the client is authorized to perform the requested operations. Our current implementation supports a wide range of client authentication methods, ranging from HTTP Basic Authentication [Franks et al., 1999] and API keys [Farrell, 2009], to OAuth protocols [Hammer-Lahav, 2010; Hardt, 2012]. To handle these authentication mechanisms, MEDLEY provides a dedicated user interface through which users can authorize third-party services by providing the corresponding credentials and a textual label to reference them. ...
Thesis
Full-text available
In light of the recent advances in the field of web engineering, along with the decreasing cost of cloud computing, service-oriented architectures rapidly became the leading solution for providing valuable services to clients. Following this trend, the composition of third-party services has become a successful paradigm for the development of robust and rich distributed applications, as well as for automating business processes. With the availability of hundreds of thousands of web services and APIs, such integrations become cumbersome and tedious when performed manually. Furthermore, different clients may have different integration requirements and policies, which further complicates the task. Moreover, providing such a solution that is both robust and scalable is non-trivial. Therefore, it becomes crucial to investigate how to efficiently coordinate the interactions between existing web services. As such, this thesis aims at investigating the underlying challenges in web service composition in the context of modern web development practices. We present an architectural framework to support the specification of web service compositions using a language-based approach, and show how we support their execution in a scalable manner using MEDLEY, a lightweight, event-driven platform.
... Client is interested in obtaining access to a resource hosted at the Resource Server, and both Client and Resource Server rely on the Authorization Server that processes access control policies and reaches authorization decisions. We consider out-of-scope how those decisions are reached as they are mainly dependent on bilateral agreements among different service providers [170]. In the following, we focus on abstract communication exchanges that lead to the enforcement of authorization decisions and discuss their advantages and drawbacks for constrained environments. ...
Thesis
Our research explores the intersection of academic, industrial and standardization spheres to enable a secure and energy-efficient Internet of Things. We study standards-based security solutions bottom-up and first observe that hardware-accelerated cryptography is a necessity for Internet of Things devices, as it leads to reductions in computational time of as much as two orders of magnitude. The overhead of the cryptographic primitives is, however, only one of the factors that influences the overall performance in the networking context. To understand the energy-security tradeoffs, we evaluate the effect of link-layer security features on the performance of Wireless Sensor Networks. We show that for practical applications and implementations, link-layer security features introduce a negligible degradation on the order of a couple of percent, which is often acceptable even for the most energy-constrained systems, such as those based on harvesting. Because link-layer security puts trust in each node on the communication path, which consists of multiple, potentially compromised devices, we protect information flows with end-to-end security mechanisms. We therefore consider the Datagram Transport Layer Security (DTLS) protocol, the IETF standard for end-to-end security in the Internet of Things, and contribute to the debate in both the standardization and research communities on the applicability of DTLS to constrained environments. We provide a thorough performance evaluation of DTLS in different duty-cycled networks through real-world experimentation, emulation and analysis. Our results demonstrate surprisingly poor performance of DTLS in networks where energy efficiency is paramount.
Because a DTLS client and a server exchange many signaling packets, the DTLS handshake takes between a handful of seconds and several tens of seconds, with similar results for different duty cycling protocols. But apart from its performance issues, DTLS was designed for point-to-point communication dominant in the traditional Internet. The novel Constrained Application Protocol (CoAP) was tailored for constrained devices by facilitating asynchronous application traffic, group communication and absolute need for caching. The security architecture based on DTLS is, however, not able to keep up and advanced features of CoAP simply become futile when used in conjunction with DTLS. We propose an architecture that leverages the security concepts both from content-centric and traditional connection-oriented approaches. We rely on secure channels established by means of DTLS for key exchange, but we get rid of the notion of “state” among communicating entities by leveraging the concept of object security. We provide a mechanism to protect from replay attacks by coupling the capability-based access control with network communication and CoAP header. OSCAR, our object-based security architecture, intrinsically supports caching and multicast, and does not affect the radio duty-cycling operation of constrained devices. Ideas from OSCAR have already found their way towards the Internet standards and are heavily discussed as potential solutions for standardization.
... The AAA Authorization Framework [40] standardizes the trust infrastructure of the Internet: It provides digital evidence for accounting, with a focus on the shared use of IT services. The AAA building blocks are Authentication, Authorization, and Accounting. ...
Article
Full-text available
Threats to a society and its social infrastructure are inevitable and endanger human life and welfare. Resilience is a core concept for coping with such threats and strengthening risk management. A resilient system adapts to an incident in a timely manner before it results in a failure. This paper discusses the secondary use of personal data as a key element in such conditions, and the relevant process mining, in order to reduce IT risk to safety. It realizes completeness for a proof of data breach in an acceptable manner, mitigating the usability problem that soundness poses for resilience. Acceptable soundness is still required and is realized in our scheme for a fundamental privacy-enhancing trust infrastructure. Our proposal achieves an IT baseline protection and properly treats personal data on security as Ground Truth for deriving acceptable statements on data breach. Reliable broadcast by means of the blockchain plays an important role. This approach supports personal IT risk management with privacy-enhancing cryptographic mechanisms and Open Data, without placing trust in a single point of failure; instead, it strengthens communities of trust. Published in: Special Section PAPER (Special Section on Cryptography and Information Security) Available @ https://doi.org/10.1587/transfun.E101.A.149 All articles of this special section are available @ https://www.jstage.jst.go.jp/browse/transfun/E101.A/1/_contents/-char/en Copyright ©2018 IEICE
... Where FIM is aimed at providing tight integration between federations, CIC provides a loose coupling. Popular SSO protocols are OpenID [23] and OAuth [15]. Online platforms like Google 2 and Facebook 3 also allow third parties to rely on their user authentication mechanism and provide limited access to their users' account details. ...
Article
Full-text available
Data security, which is concerned with the prevention of unauthorized access to computers, databases, and websites, helps protect digital privacy and ensure data integrity. It is extremely difficult, however, to make security watertight, and security breaches are not uncommon. The consequences of stolen credentials go well beyond the leakage of other types of information because they can further compromise other systems. This paper criticizes the practice of using clear-text identity attributes, such as Social Security or driver's license numbers -- which are in principle not even secret -- as acceptable authentication tokens or assertions of ownership, and proposes a simple protocol that straightforwardly applies public-key cryptography to make identity claims verifiable, even when they are issued remotely via the Internet. This protocol has the potential of elevating the business practices of credit providers, rental agencies, and other service companies that have hitherto exposed consumers to the risk of identity theft, to where identity theft becomes virtually impossible.
Conference Paper
This document presents not only a description of the AAA protocol but also real production problems that I encountered over many years of working with it. Each problem was considered separately: it was analyzed, and the best solution was applied to it. The article contains more than just an analysis of the problems; the solutions described here were developed and tested over time on different networks, ranging from a few hundred devices to tens of thousands. In production, the applied solutions have worked for years. On this basis, I conclude that the problems observed in production were solved by applying these solutions. To understand those problems, one needs to understand how the protocol works and where its bottlenecks lie.
Chapter
Several authentication methods are available that operate between the control, sensor, and application layers; among them, 802.1X is a powerful integrated solution for authentication. It is more demanding than other authentication solutions because it requires the client to enter credentials when prompted by a wired/wireless supplicant. It supports full end-to-end provisioning, automation, development, management, and troubleshooting. IEEE 802.1X works at layer 2 between supplicant client devices and software access points. With the point-to-point protocol, RAR is usually used for dial-up Internet access in several networking environments over the web. An authentication phase at network layer 3 (IPsec packets) is employed for user and password authentication, with access control independent of the RADIUS protocol and an end-to-end connection for credential information. In this paper, SDN depends on the authentication information provided for authorization, and a construction of IEEE 802.1X port-based authentication is proposed using the new Inasu algorithm to extend the EAP implementation of 802.1X, reliably improving throughput and validation using both the application layer and the control layer in IPv6. The operation of the algorithm is shown through measurements derived from SAA, compared against the accuracy of the prevailing EAP and RADIUS protocols. Keywords: SDN, RADIUS, 802.1X, Throughput, Validation
Chapter
Network functions virtualization (NFV) is consolidating as one of the base technologies for the design, deployment, and operation of network services. NFV can be seen as a natural evolution of the trend to cloud technologies in IT, and hence perceived as bringing them to the network provider environments. While this can be true for the simplest cases, focused on the IT services network providers rely on, the nature of network services raises unique requirements on the overall virtualization process. NFV aims to provide at the same time an opportunity to network providers, not only in reducing operational costs but also in bringing the promise of easing the development and activation of new services, thereby reducing their time-to-market and opening new approaches for service provisioning and operation, in general. In this chapter, the authors analyse these requirements and opportunities, reviewing the state of the art in this new way of dealing with network services. Also, the chapter presents some NFV deployments endorsed by some network operators and identifies some remaining challenges.
Thesis
This thesis describes the evolution of control architectures and network intelligence towards next generation telecommunications networks. Network intelligence is a term given to the group of architectures that provide enhanced control services. Network intelligence is provided through the control plane, which is responsible for the establishment, operation and termination of calls and connections. The work focuses on examining the way in which network intelligence has been provided in the traditional telecommunications environment and in a converging environment. In the case of the traditional telecommunications environment, the thesis examines the Intelligent Network (IN) architecture as a case scenario. In the case of the converging telecommunications environment, the work focuses on examining the relation and impact of emerging architectures and protocols and the ways in which these can inter-work with the IN. The discussion is presented using a taxonomy reference model of network intelligence architectures and their relation to the IN. For example, a protocol based on existing IN capabilities is presented that allows end users to engage in electronic commerce without the need for credit cards. The control plane architecture in the Public Switched Telephony Network (PSTN) is heavily based on state machines. The role of state models and the reliance of IP-based protocols on state models are also examined. For this, IP-based architectures are examined and the extent of state utilisation is presented. This enables a classification of IP-based architectures and protocols to be drawn with regard to state utilisation. The role of existing network intelligence within the context of open programmable networks and application servers is also examined. The work identifies the need for a common communications framework between third-party service providers. This is the focus of the API server architecture, which draws from IN concepts and from approaches in the IP domain.
Conference Paper
Modern portable devices such as smartphones are enhanced by advanced functionalities and may therefore soon become both the preferred portable computing device (thereby substituting for laptops) and the personal trusted device. They are also increasingly used to access online cloud services, including particularly sensitive ones that require high security. This paper introduces an original and strong authentication method for mobiles. It involves a two-factor scheme enhanced through the diversity of network channels and devices. Our solution combines an OTP-based approach with an IoT object as a secondary device in addition to the smartphone. The diversity of the network channels rests on the use of one of the LPWAN networks together with LTE or Wi-Fi networks. Authentication factors are therefore transmitted over different channels through different devices, thus greatly reducing the attack surface. The proposal is also enhanced by end-to-end encryption of the transferred sensitive contents. The link with authorization issues is analyzed, and the integration of our approach with OpenID Connect/OAuth 2.0 is investigated. A platform that implements this scheme has been developed, tested and evaluated under different attack scenarios.
Chapter
Increasingly, society is witnessing today's industry adopting new technologies and communication protocols to offer more reliable and better-optimized services to end users, with support for inter-domain communication across diverse critical infrastructures. As a consequence of this technological revolution, interconnection mechanisms are required to offer transparency in the connections and protection in the different application domains, without this implying a significant degradation of the control requirements. Therefore, this book chapter presents a reference architecture for Industry 4.0 where the interconnection core is mainly concentrated in the Policy Decision Points (PDP), which can be deployed in high volume data processing and storage technologies such as cloud and fog servers. Each PDP authorizes actions in the field/plant according to a set of factors (entities, context and risks) computed through the existing access control measures, such as RBAC+ABAC+Risk-BAC (Role/Attribute/Risk-Based Access Control, respectively), to establish coordinated and constrained accesses in extreme situations. Part of these actions also includes proactive risk assessment measures to respond to anomalies or intrusive threats in time.
Chapter
Full-text available
The Connected Mobility Lab (CML) is a mobility solution created in collaboration between Siemens and BMW. The CML provides a multi-tenant cloud infrastructure where entities – mobility providers, financial service providers, users – might know each other or might be complete strangers. The CML encapsulates core services from different stakeholders and exposes an integrated, comprehensive, and innovative mobility service to its users. The different owners may have different security goals and impose their own rules and workflows on entities interacting with their services. Thus, there is a need to negotiate in order to reach mutually acceptable compromises, and inter-operate services within the CML. Especially, when different services collaborate to fulfill a purpose it is important to allow only authorized entities to execute the required tasks. To enforce such tasks to be executed in a particular order we need a workflow specification and enforcement method.
Chapter
The utilization of Service-Oriented Architecture (SOA) offers certain benefits, such as low coupling and interoperability. Given these benefits, SOA is being used for the integration of systems and applications within organizations, and it is an option for the modernization of legacy systems when evaluating and evolving them. Regarding authorization with SOA, the OAuth 2.0 protocol was implemented as part of the Enterprise Service Bus (ESB) solution that is used as an important step in the modernization of legacy systems. This research presents a case study, based on a systematic mapping, of the authentication and authorization mechanisms in SOA applied to legacy systems maintained and in use by students and professionals at the University of Brasília (UnB). Performance tests were carried out on the solution, allowing verification of the latency increase introduced by the protocol and the average throughput supported. Simulations were carried out with the objective of verifying the behavior of the implemented protocol when exposed to a replay attack.
Chapter
The Internet of Things (IoT) has enabled as well as boosted several applications that will radically change everyday life, including smart building, environmental monitoring, smart energy grids, and intelligent transportation. Unlike traditional IT systems, IoT deployments will be directly exposed and reachable over the Internet, and massively composed of constrained devices equipped with limited resources. In such a context, security has been unanimously acknowledged as a fundamental requirement to fulfill, with evident implications on service reliability, privacy, and even safety of facilities and people. At the same time, it is particularly challenging to effectively and efficiently ensure security in the IoT, given the wide range of heterogeneous, dynamic, and possibly large‐scale environments, and the resource‐constrained nature of deployed devices. This chapter overviews the current main security protocols and technologies for the IoT, presents related security issues, and discusses solutions to address them based on recent research and standardization activities.
Conference Paper
Full-text available
The XSEDE project seeks to provide “a single virtual system that scientists can use to interactively share computing resources, data and experience.” The potential compute resources in XSEDE are diverse in many dimensions, node architectures, interconnects, memory, local queue management systems, and authentication policies to name a few. The diversity is particularly rich when one considers the NSF funded service providers and the many campuses that wish to participate via campus bridging activities. Resource diversity presents challenges to both application developers and application platform developers (e.g., developers of gateways, portals, and workflow engines). The XSEDE Execution Management Services (EMS) architecture is an instance of the Open Grid Services Architecture EMS and is used by higher level services such as gateways and workflow engines to provide end users with execution services that meet their needs. The contribution of this paper is to provide a concise explanation and concrete examples of how the EMS works, how it can be used to support scientific gateways and workflow engines, and how the XSEDE EMS and other OGSA EMS architectures can be used by applications developers to securely access heterogeneous distributed computing and data resources.
Article
Full-text available
One of the critical requirements in managing the security of any computing system is access control, which includes protection of, and access management to, the available resources. This requirement becomes stricter in a distributed computing environment that consists of constrained devices, such as Machine-to-Machine (M2M) systems. New access control challenges are identified in a system comprising a group of distributed M2M gateways forming a so-called M2M local cloud platform (Vallati et al. in Wirel Trans Commun 87(3):1071–1091, 2016). Scalability is an obvious necessity that is lacking in some existing access control systems. In addition, flexibility in managing access from users or entities belonging to other authorization domains, as well as delegation of access rights, is not provided as an integrated feature. Recently, capability-based access control has been suggested as a method to manage access for M2M as the key enabler of the Internet of Things. In this paper, a capability-based access control scheme equipped with Elliptic Curve Cryptography-based key management is proposed for the M2M local cloud platform. The feasibility of the proposed capability-based access control and key management is tested by implementing them within the security manager, which is part of the overall platform architecture, and evaluating their performance through a series of experiments.
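A capability token of the kind described above can be sketched as a signed statement binding a subject to a set of rights on a resource. For brevity this illustration substitutes an HMAC for the paper's ECC-based signatures; the function names, key, and token layout are assumptions made for the example.

```python
import base64
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-key"  # the paper's design would use an ECC key pair instead

def issue_capability(subject: str, resource: str, rights: list) -> str:
    """Issue a signed capability binding a subject to rights on a resource."""
    body = json.dumps({"sub": subject, "res": resource, "rights": rights}, sort_keys=True)
    tag = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body.encode()).decode() + "." + tag

def check_capability(token: str, resource: str, action: str) -> bool:
    """Gateway-side check: verify the signature, then the requested action."""
    b64, tag = token.rsplit(".", 1)
    body = base64.urlsafe_b64decode(b64).decode()
    expected = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    cap = json.loads(body)
    return cap["res"] == resource and action in cap["rights"]

cap = issue_capability("device-7", "temperature", ["read"])
print(check_capability(cap, "temperature", "read"))   # right is granted
print(check_capability(cap, "temperature", "write"))  # right not in capability
```

Because the token carries its own authorization, a gateway can enforce it without contacting a central server on every request, which is the scalability argument made for capability-based access control in constrained M2M deployments.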