Article

No full-text available

... Schneier and Kelsey [62] proposed a protocol, using symmetric cryptography, focused on the storage of audit logs. The scheme employs a one-way hash chain [63] to create a dependency among the log entries. ...
... They state that, for an audit logging system to be considered secure, it must assure not only data integrity but also stream integrity, as no reordering of the log entries should be possible. The author also describes the log truncation attack, a type of attack that prior schemes [61,62,69] failed to mitigate. The proposed mitigation is based on forward-secure stream integrity. ...
... Accorsi addresses log privacy [84,85] with a focus on resource-constrained devices. His solution, inspired by Reference [62], starts on the device, which is expected to apply the necessary cryptographic techniques to safeguard the privacy, integrity and uniqueness of its log file. Privacy is achieved by encrypting each log entry with a symmetric encryption algorithm. ...
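The hash-chain dependency described in these excerpts can be illustrated with a minimal sketch. This is not the actual Schneier-Kelsey construction; the function and variable names are invented for illustration:

```python
import hashlib

def chain_entries(entries, seed=b"init"):
    """Link log entries with a one-way hash chain.

    Each chain value hashes the previous value together with the
    current entry, so every value depends on all predecessors.
    """
    chain = [hashlib.sha256(seed).digest()]
    for entry in entries:
        prev = chain[-1]
        chain.append(hashlib.sha256(prev + entry).digest())
    return chain

log = [b"login alice", b"read /etc/passwd", b"logout alice"]
values = chain_entries(log)
# Editing or reordering any entry changes every later chain value,
# which is what makes in-place tampering detectable.
```

Because each value commits to the entire prefix, a verifier who trusts the final chain value can detect any modification or reordering of earlier entries.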
Article
Full-text available
Along with the use of cloud-based services, infrastructure, and storage, the use of application logs in business critical applications is a standard practice. Application logs must be stored in an accessible manner in order to be used whenever needed. The debugging of these applications is a common situation where such access is required. Frequently, part of the information contained in log records is sensitive. In this paper, we evaluate the possibility of storing critical logs in a remote storage while maintaining their confidentiality and server-side search capabilities. To the best of our knowledge, the designed search algorithm is the first to support full Boolean searches combined with field searching and nested queries. We demonstrate its feasibility and timely operation with a prototype implementation that never requires access, by the storage provider, to plain text information. Our solution was able to perform search and decryption operations at a rate of approximately 0.05 ms per line. A comparison with the related work allows us to demonstrate its feasibility and conclude that our solution is also the fastest one in indexing operations, the most frequent operations performed.
... Several works focus on data integrity [2]- [9]. Secure logging as a popular data protection domain has been studied by many researchers [10]- [18]. ...
... Secret Key Management. Schneier and Kelsey [2] propose a hash chain technique based on Message Authentication Codes (MACs) to achieve forward security, which means that each successive data entry has an associated hash key that depends on the previous entry. Bellare and Yee [3] present an approach with forward security based on MACs to ensure that past session keys will not be compromised even if the current key is compromised. ...
... Secure Logging. BBox [10] is based on Schneier and Kelsey's study [2] and builds on hash functions and a Device Authorisation and Key Lookup (DAKL) table to ensure the integrity of the log records. Stathopoulos et al. [20] use a new method that employs digital signatures to protect the integrity of data, meaning that signatures can replace MACs to verify data. ...
... 3. Secure logging: We introduce a system model used for secure logging that allows a comparison between secure logging protocols and a corresponding attacker model in Chapter 4. The model introduces redundancy via a set of distributed logging devices as a key element. Within a formal system model, we describe several well-known existing approaches to secure logging, including Schneier and Kelsey [188], Holt [88], Accorsi [4], Ma and Tsudik [143] and compare them with each other. For the comparison of these secure logging protocols, we consider (local and global) weak and strong attackers. ...
... To ensure forward integrity, the keys used to protect the integrity of log entries must be renewed regularly. Building on this idea, Schneier and Kelsey [188] devised a practical technique based on a hash chain and a forward-secure MAC to protect the integrity of logs, in order to at least detect manipulations. The role of the verifier was played by a trusted server. ...
... The role of the verifier was played by a trusted server. Holt's logcrypt system [88] extends Schneier and Kelsey [188] to use asymmetric cryptography, so that not only the trusted server but any system in possession of an (initial) authentic public key can check the integrity of the logs. Accorsi [2,5] later proposed to leverage trusted hardware (in the form of Trusted Platform Modules (TPMs) [214]) for producing trustworthy integrity protection. ...
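The key-evolution idea behind forward-secure MACs, as referenced in these excerpts, can be sketched as follows. This is an illustrative simplification, not the exact protocol of [188]; all names are invented:

```python
import hashlib
import hmac

def evolve(key):
    # One-way key update: once the key evolves, the previous key
    # cannot be recovered from the new one.
    return hashlib.sha256(b"evolve" + key).digest()

def mac_log(entries, k0):
    """Tag each entry with an HMAC under a key that evolves per entry."""
    tags, k = [], k0
    for e in entries:
        tags.append(hmac.new(k, e, hashlib.sha256).digest())
        k = evolve(k)
    return tags

def verify(entries, tags, k0):
    """A verifier holding the initial key k0 replays the key evolution."""
    k = k0
    for e, t in zip(entries, tags):
        if not hmac.compare_digest(hmac.new(k, e, hashlib.sha256).digest(), t):
            return False
        k = evolve(k)
    return True
```

The point of evolving the key is that an attacker who compromises the device at step i obtains only the current key; the keys used for earlier entries are gone, so those entries cannot be re-MACed undetectably.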
Thesis
With Industrial Control Systems being increasingly networked, the need for sound forensic capabilities for such systems increases, including reliable log file analysis as a vital part of such investigations. However, manipulating log files is one of the steps a knowledgeable attacker can take to prevent visibility on system events and to hide traces of attacker actions in those systems. Therefore, secure logging is advisable for an effective preparation of digital forensics investigations. In addition, implementing digital forensics readiness in nuclear power plants allows efficient digital forensics investigations and proper gathering of digital forensics evidence while minimizing investigation costs. These capabilities are necessary to adequately prevent and quickly detect any security incident and to perform further digital forensics investigations with complete evidence. If an attacker is able to modify log entries or blocks of log entries, critical digital evidence is lost. Within this thesis, we first evaluate the presence of digital forensics readiness in critical infrastructures, including nuclear power plants, and briefly discuss existing digital forensics readiness approaches. As systems in critical infrastructures are sophisticated, such as those in nuclear power plants, adequate preparedness is essential in order to respond to cybersecurity incidents before they happen. Due to the importance of safety in these systems, manual approaches are favored over automated techniques. All required tasks, activities and expected results must also be properly documented. Application Security Controls can be one approach to properly document forensic controls. However, Application Security Controls must be evaluated further to ensure forensic applicability, as considerable alternatives also exist.
In order to demonstrate the value of such forensic Application Security Controls, we analyze a server system of an Operational Instrumentation & Control system in terms of digital evidence. Based on the analysis results, we derive recommendations to improve the overall digital forensics readiness and the security hardening of Linux server systems in the Operational Instrumentation & Control system. Then, we introduce our formal system model and the types of attackers that can access and manipulate logs and logging devices. Here, we also give a brief overview of some existing secure logging approaches and compare them with each other. The goal is to standardize the requirements of secure logging approaches and to analyze which unified security guarantees are realized by these existing approaches under strong attacker models. Later, we extend our secure logging model by using a blockchain as the secure logging protocol, apply the new model to industrial settings, and build a simple prototype as a proof of concept. In an evaluation of the new model and the corresponding prototype, we show the potential, but also the challenges, of this approach. Further, we take a deeper look into existing algorithms for secure logging and integrate them into a single parameterized algorithm. This log authentication and verification algorithm combines several security guarantees, and its parametrization yields the set of previous algorithms. Despite different file formats, log files generally have similar structures and serve a common purpose. To this end, we evaluated three common log file types (syslog, Windows event log and SQLite browser histories). Based on this evaluation, we developed a simple unified representation of log files that allows analysis to be performed independently of their format. As visualization of log files is helpful for finding proper evidence, we have also developed a simple log file visualization tool. This tool helps to identify evidence of system time manipulation.
... This is called forward security or forward integrity [3] and is achieved by evolving forward (using a one-way function) the key that is used for generating integrity information for log entries. In [13], the authors used forward integrity based on MACs and hash chains and proposed a secure audit log for a local untrusted logging device that has infrequent communication with the verifier. LogCrypt [8] made some improvements to [13], such as the ability to use public key cryptography as well as aggregating multiple log entries to reduce latency and computational load. ...
... In [13], the authors used forward integrity based on MACs and hash chains and proposed a secure audit log for a local untrusted logging device that has infrequent communication with the verifier. LogCrypt [8] made some improvements to [13], such as the ability to use public key cryptography as well as aggregating multiple log entries to reduce latency and computational load. Forward integrity, however, does not protect against truncation of the log file: the adversary can remove entries at the end without being detected. ...
... Existing secure log schemes, i.e. [3,13,8,11], consider an ordered log where a new log entry is appended to the end of LStore. These schemes use key evolution but do not use secure hardware or platforms to store the latest secret key that captures the state of the log file. ...
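The truncation weakness noted in these excerpts can be demonstrated with a toy sketch: per-entry forward-secure MACs verify any prefix of the log, so an attacker who deletes the tail leaves a log that still verifies. This is an illustrative simplification, not any of the cited schemes; all names are invented:

```python
import hashlib
import hmac

def evolve(k):
    # One-way key update, so past keys are unrecoverable.
    return hashlib.sha256(b"evolve" + k).digest()

def tag_all(entries, k0):
    tags, k = [], k0
    for e in entries:
        tags.append(hmac.new(k, e, hashlib.sha256).digest())
        k = evolve(k)
    return tags

def verify(entries, tags, k0):
    k = k0
    for e, t in zip(entries, tags):
        if not hmac.compare_digest(hmac.new(k, e, hashlib.sha256).digest(), t):
            return False
        k = evolve(k)
    return True

entries = [b"boot", b"login mallory", b"tamper detected"]
tags = tag_all(entries, b"\x00" * 32)
# Truncation: drop the incriminating tail. The remaining prefix still
# verifies, so plain forward integrity alone misses this attack.
assert verify(entries[:2], tags[:2], b"\x00" * 32)
```

Defenses against truncation therefore commit to the log's length or final state, e.g. by storing the latest key or chain head in protected memory, as the excerpt above notes existing schemes omit.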
Preprint
Logging systems are an essential component of security systems and their security has been widely studied. Recently (2017) it was shown that existing secure logging protocols are vulnerable to a crash attack, in which the adversary modifies the log file and then crashes the system to make the event indistinguishable from a normal system crash. The attacker was assumed to be non-adaptive and unable to see the file content before modifying and crashing it (which happens immediately after modifying the file). The authors also proposed a system called SLiC that protects against this attacker. In this paper, we consider an (insider) adaptive adversary who can see the file content as new log operations are performed. This is a powerful adversary who can attempt to rewind the system to a past state. We formalize security against this adversary and introduce a scheme with provable security. We show that security against this attacker requires some (small) protected memory that can become accessible to the attacker after the system compromise. We show that existing secure logging schemes are insecure in this setting, even if the system provides some protected memory as above. We propose a novel mechanism that, in its basic form, uses a pair of keys that evolve at different rates, and employ this mechanism in an existing logging scheme that has forward integrity to obtain a system with provable security against adaptive (and hence non-adaptive) crash attacks. We implemented our scheme on a desktop computer and a Raspberry Pi, and showed, in addition to higher security, a significant efficiency gain over SLiC.
... Log file analysis is an important tool for understanding the operation of a system, including fault analysis [17], anomaly detection [10,12], forensics and audits [23,30]. A key aspect of system security is maintaining reliable, secure log files. ...
... The additional encryption layer provides the required privacy to support this and prevents legitimate parties from reading the content of each other's logs. This form of protection holds [22,23] even against an attacker that has full control over a few compromised parties. ...
... Several solutions have been proposed to secure log files, such as write once read many (WORM) memory systems [18]. Another method combines the use of an untrusted machine with a trusted one in order to prevent an attacker from being able to read any log files generated prior to their penetration of the untrusted machine [22,23]. Likewise, the system prevents the attacker from being able to modify the log files without detection. ...
Conference Paper
A reliable log system is a prerequisite for security applications. One of the first actions a hacker takes upon penetrating a system is altering the log files. Maintaining redundant copies in a distributed manner in a Byzantine setting has always been a challenging task; however, it has become simpler given recent advances in blockchain technologies. In this work, we present a tamper-resistant log system through the use of a blockchain. We leverage the immutable write action and distributed storage provided by the blockchain and add an additional encryption layer to develop a secure log system. We assess the security and privacy aspects of our solution. Finally, we implement our system over Hyperledger Fabric and demonstrate the system's value for several use cases.
... In case a breach is successful, it is desirable to identify the perpetrator in a forensic investigation and bring the responsible person to court. In practice, however, intruders may attempt to alter or delete log entries documenting the intrusion [2]. Besides being exposed to malicious modification, log records are also often processed during analysis, for example by SIEM systems [3]. ...
... Software-based approaches use cryptographic mechanisms to detect modifications of log files. One of the earliest works on secure logging for forensic investigations was published by Schneier and Kelsey [2]. Their proposed algorithm is used for concatenating the sequence of log events and provides forward security and verifiability. ...
... An intruder could attempt to modify the original log record to obscure traces. As noted by Schneier and Kelsey [2], once an attacker has control of the log source, the integrity of new logs cannot be protected. The goal of secure logging is therefore to protect log data generated prior to intrusion. ...
Article
Information systems in organizations are regularly subject to cyber attacks targeting confidential data or threatening the availability of the infrastructure. In case of a successful attack it is crucial to maintain integrity of the evidence for later use in court. Existing solutions to preserve integrity of log records remain cost-intensive or hard to implement in practice. In this work we present a new infrastructure for log integrity preservation which does not depend upon trusted third parties or specialized hardware. The system uses a blockchain to store non-repudiable proofs of existence for all generated log records. An open-source prototype of the resulting log auditing service is developed and deployed, followed by a security and performance evaluation. The infrastructure represents a novel software-based solution to the secure logging problem, which unlike existing approaches does not rely on specialized hardware, trusted third parties or modifications to the logging source.
... As a starting point, we will not even try to achieve consensus reliably on a single common history, but instead simply allow each node to define and build its own idea of a possible history, independently of all other nodes. For convenience and familiarity, we will represent each node's possible history as a blockchain, or tamper-evident log [49,143] in the form popularized by Bitcoin [121]. ...
... One apparent technical challenge with pipelining is that at the start of round r + 1 (step r + 1), when each node broadcasts its proposal, we might expect this proposal to include a new block in a blockchain. To produce a blockchain's tamper-evident log structure [49,143], however, each block must contain a cryptographic hash of the previous block. But the content of the previous block is not and cannot be known until the prior consensus round r ends at step r + 3, which due to pipelining is two timesteps after step r + 1, when we appear to need it! ...
... While TLC's goal is to create a "lock-step" notion of logical time, to build TLC and secure it against Byzantine nodes, it is useful to leverage the classic notion of vector time [63,66,105,114] and associated techniques such as tamper-evident logging [49,143], timeline entanglement [113], and accountable state machines [78,79]. ...
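The hash linking that makes a blockchain a tamper-evident log, and the reason a block cannot be formed until its predecessor's content is fixed (the pipelining challenge noted above), can be sketched in a few lines. This is a hypothetical minimal sketch; the structure and names are illustrative only:

```python
import hashlib
import json

def make_block(prev_hash, payload):
    """Build a block that commits to its predecessor via a hash.

    Hashing the serialized block (which includes prev_hash) is what
    chains the blocks into a tamper-evident log.
    """
    block = {"prev": prev_hash, "payload": payload}
    digest = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block, digest

genesis, h0 = make_block("0" * 64, "genesis")
b1, h1 = make_block(h0, "tx: alice->bob")
b2, h2 = make_block(h1, "tx: bob->carol")

# Rewriting b1 changes its hash, which no longer matches b2["prev"]:
_, h1_forged = make_block(h0, "tx: alice->mallory")
assert b2["prev"] == h1 and h1_forged != h1
```

Because b2 embeds h1, rewriting any earlier block invalidates every later link, and because h1 is a function of b1's full content, the next block genuinely cannot be produced before b1 is known.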
Preprint
Consensus protocols for asynchronous networks are usually complex and inefficient, leading practical systems to rely on synchronous protocols. This paper attempts to simplify asynchronous consensus by building atop a novel threshold logical clock abstraction, which enables upper layers to operate as if on a synchronous network. This approach yields an asynchronous consensus protocol for fail-stop nodes that may be simpler and more robust than Paxos and its leader-based variants, requiring no common coins and achieving consensus in a constant expected number of rounds. The same approach can be strengthened against Byzantine failures by building on well-established techniques such as tamper-evident logging and gossip, accountable state machines, threshold signatures and witness cosigning, and verifiable secret sharing. This combination of existing abstractions and threshold logical clocks yields a modular, cleanly-layered approach to building practical and efficient Byzantine consensus, distributed key generation, time, timestamping, and randomness beacons, and other critical services.
... While we are not aware of any prior work that explicitly takes this abstract viewpoint of the problem, there are several publications that tackle it through the lens of a specific use-case: secure logging [39] [11] [37], accountable shared storage [25] [49], certificate transparency [20] [21] [23], or data replication [33] [43]. ...
... Some approaches to tamper-evident logging also rely on linked lists [39] [37]. In this context, Crosby and Wallach [11] designed the first non-linking scheme approach to prefix authentication that we know of. ...
Preprint
Full-text available
Secure relative timestamping and secure append-only logs are two historically mostly independent lines of research, which we show to be sides of the same coin -- the authentication of prefix relations. From this more general viewpoint, we derive several complexity criteria not yet considered in previous literature. We define transitive prefix authentication graphs, a graph class that captures all hash-based timestamping and log designs we know of. We survey existing schemes by expressing them as transitive prefix authentication graphs, which yields more compact definitions and more complete evaluations than in the existing literature.
... Nevertheless, the settings described above may consist of heterogeneous devices, and the use of auxiliary hardware [1,11,35] or specific hardware extensions may not be feasible [17,27] for all devices. Previous works on tamper-evident data structures use symmetric cryptography and hash functions [6,33,34]. Yet, these schemes induce large storage overheads and cannot efficiently generate log receipts. Other works leverage forward-secure aggregated signatures [21,39,40] for efficient tamper-detection of logs. ...
... Schneier and Kelsey [33,34] presented a similar approach using per-log MACs and a hash chain. The logging device shares a secret with a trusted entity. ...
Preprint
Our modern world relies on a growing number of interconnected and interacting devices, leading to a plethora of logs establishing audit trails for all kinds of events. Simultaneously, logs become increasingly important for forensic investigations, and thus, an adversary will aim to alter logs to avoid culpability, e.g., by compromising devices that generate and store logs. Thus, it is essential to ensure that no one can tamper with any logs without being detected. However, existing approaches to establish tamper evidence of logs do not scale and cannot protect the increasingly large number of devices found today, as they impose large storage or network overheads. Additionally, most schemes do not provide an efficient mechanism to prove that individual events have been logged to establish accountability when different devices interact. This paper introduces a novel scheme for practical large-scale tamper-evident logging with the help of a trusted third party. To achieve this, we present a new binary hash tree construction designed around timestamps to achieve constant storage overhead with a configured temporal resolution. Additionally, our design enables the efficient construction of shareable proofs, proving that an event was indeed logged. Our evaluation shows that - using practical parameters - our scheme can localize any tampering of logs with a sub-second resolution, with a constant overhead of ~8KB per hour per device.
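A binary hash tree with shareable inclusion proofs, the generic building block behind schemes like the one above, can be illustrated as follows. This toy version is indexed by leaf position rather than by timestamp and is not the paper's exact construction; all names are illustrative:

```python
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary hash tree; assumes len(leaves) is a power of two."""
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf to root: a compact, shareable proof
    that one event is included under the committed root."""
    level = [H(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, is_left)
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_proof(leaf, proof, root):
    h = H(leaf)
    for sibling, sibling_is_left in proof:
        h = H(sibling + h) if sibling_is_left else H(h + sibling)
    return h == root
```

A proof is logarithmic in the number of leaves, which is what makes inclusion proofs cheap to share while the logger stores only the root per interval.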
... Kawaguchi et al. [7] discussed a process called forensic computing to recognize the techniques of intruders and hackers when a system is attacked, and proposed a scheme using signature trees and forward integrity to achieve integrity and access control for logs. Schneier et al. [11] describe a computationally cheap method for securing the log files of an untrusted machine that shares a key with a trusted verification machine, using one-way hash functions. ...
... Limitation: This measure helps only when an intruder tries to modify the contents of the log files; if he tries to clear the log files entirely, this measure is of no use. There is also always a possibility that the hacker can decrypt the log files if the user encrypts them on an already compromised computer [11]. ...
Article
ABSTRACT: With the growing threat of cyberattacks, the demand for recording the audit trails or events in systems and networks is also increasing. A log file is one such mechanism, containing automatically produced and time-stamped documentation of every event that occurs in a system or a network. It is a means to detect and protect a system from unauthorized access. Log file monitoring, which oversees network activities, stores user actions and inspects events that occur in the system, can help in maintaining network security and integrity. But hacking has also evolved over time, and more experienced hackers can intrude into the log files, modify the contents, disable auditing, clear logs and even erase the command history, thereby keeping users unaware of the intrusion that occurred in their system. With this, there arises the need for securing the log files. There are many techniques to secure these log files from unauthorized access. This paper focuses on some of the techniques used to secure log files, ranging from the most simple to the more advanced. It also covers the limitations of these techniques, with which a user can decide which technique to apply to his system based on his requirements. Keywords: Security threats, Securing log files, Techniques for protecting log files, Limitations of techniques in securing log files, network security.
... For simplicity of presentation and reasoning, our formulation of consensus protocols will deliver not just individual messages but entire histories, ordered lists cumulatively representing all messages committed and delivered so far. An easy and efficient standard practice is to represent a history as the typically constant-size head of a tamper-evident log [30,90] or blockchain [72], each log entry containing a hash-link to its predecessor. Thus, the fact that histories conceptually grow without bound is not a significant practical concern. ...
... Implementing QSC naïvely, the histories broadcast in each round would grow linearly with time. We can make QSC efficient, however, by adopting the standard practice of representing histories as tamper-evident logs or blockchains [30, 72,90]. Each broadcast needs to contain only the latest proposal or head of the history, which refers to its predecessor (and transitively to the entire history) via a cryptographic hash. ...
Preprint
It is commonly held that asynchronous consensus is much more complex, difficult, and costly than partially-synchronous algorithms, especially without using common coins. This paper challenges that conventional wisdom with que sera consensus (QSC), an approach to consensus that cleanly decomposes the agreement problem from that of network asynchrony. QSC uses only private coins and reaches consensus in O(1) expected communication rounds. It relies on "lock-step" synchronous broadcast, but can run atop a threshold logical clock (TLC) algorithm to time and pace partially-reliable communication atop an underlying asynchronous network. This combination is arguably simpler than partially-synchronous consensus approaches like (Multi-)Paxos or Raft with leader election, and is more robust to slow leaders or targeted network denial-of-service attacks. The simplest formulations of QSC atop TLC incur expected O(n^2) messages and O(n^4) bits per agreement, or O(n^3) bits with straightforward optimizations. An on-demand implementation, in which clients act as "natural leaders" to execute the protocol atop stateful servers that merely implement passive key-value stores, can achieve O(n^2) expected communication bits per client-driven agreement.
... Although this scheme preserves the logs' integrity, it does not ensure their confidentiality and availability. A similar scheme was proposed by Schneier and Kelsey in [6] to secure the logs with a chain of log MACs. Here, the secret key is pre-shared between the logging devices and updated after each log entry. ...

... Symmetric Authentication-Encryption Algorithms [5], [6]: This type of solution can ensure data confidentiality, data integrity and source authentication of log contents. ...
Article
Full-text available
Digital forensics are vital in the Internet of Things (IoT) domain. This is due to the enormous growth of cyber attacks and their widespread use against IoT devices. While IoT forensics do not prevent IoT attacks, they help in reducing their occurrence by tracing their source, tracking their root causes and designing the corresponding countermeasures. However, modern IoT attacks use anti-forensics techniques to destroy or modify any important digital evidence including log files. Anti-forensics techniques complicate the task for forensic investigators in tracking the attack source. Thus, countermeasures are required to defend against anti-forensics techniques. In this paper, we aim at securing the IoT log files to prevent anti-forensics techniques that target the logs’ availability and integrity such as wiping and injecting attacks. In the proposed solution, and at regular intervals of time, the logs generated by IoT devices are aggregated, compressed and encrypted. Afterwards, the encrypted logs are fragmented, authenticated and distributed over n storage nodes, based on the proposed Modified Information Dispersal Algorithm (MIDA) that can ensure log files availability with a degree of (n−t). For data dispersal, two cases are considered: the case where the fog nodes are interconnected and the case where they are not. For the former case, the n obtained fragments are transmitted to n neighboring IoT devices (aggregation nodes). However, for the latter one, the output is transmitted to the corresponding fog and then, dispersed over the n neighboring fog nodes. A set of security and performance tests were performed showing the effectiveness and robustness of the proposed solution in thwarting well-known security attacks.
... Security and privacy challenges in this area are not new. In 1999, Schneier and Kelsey [8] set out to secure the collection of sensitive logs using encryption, to ensure that forensic records could be maintained in the event of a cyber breach. Some five years later, Waters et al. [9] realised that system logs were becoming a prime attack vector for attackers, who were seeking to cover their trail after successfully breaking into computer systems. ...
Preprint
Finding a robust security mechanism for audit trail logging has long been a poorly satisfied goal. There are many reasons for this. The most significant is that the audit trail is a highly sought-after target of attackers seeking to ensure that they do not get caught, so they have an incredibly strong incentive to prevent companies from succeeding in this worthy aim. Regulation, such as the European Union General Data Protection Regulation, has brought a strong incentive for companies to achieve success in this area due to the punitive level of fines that can now be levied in the event of a successful breach by an attacker. We seek to resolve this issue through the use of an encrypted audit trail process that saves encrypted records to a true immutable database, which can ensure audit trail records are permanently retained in encrypted form, with no possibility of the records being compromised. This ensures that compliance with the General Data Protection Regulation can be achieved.
... The execution of this smart contract returns the access decision for that particular resource. This is quite expensive; if all of the resources are within a single organization, issues of trust do not arise and alternative solutions, such as tamper-proof logs [25], can provide auditability at a lower cost. On the contrary, distributed applications, including BPs and distributed workflows, are composed of multiple resources/services that are subject to the security and access control policies of different autonomous organizational domains. ...
Article
Full-text available
The use of blockchain technology has been proposed to provide auditable access control for individual resources. Unlike the case where all resources are owned by a single organization, this work focuses on distributed applications such as business processes and distributed workflows. These applications are often composed of multiple resources/services that are subject to the security and access control policies of different organizational domains. Here, blockchains provide an attractive decentralized solution to provide auditability. However, the underlying access control policies may have event-driven constraints and can be overlapping in terms of the component conditions/rules as well as events. Existing work cannot handle event-driven constraints and does not sufficiently account for overlaps leading to significant overhead in terms of cost and computation time for evaluating authorizations over the blockchain. In this work, we propose an automata-theoretic approach for generating a cost-efficient composite access control policy. We reduce this composite policy generation problem to the standard weighted set cover problem. We show that the composite policy correctly captures all the local access control policies and reduces the policy evaluation cost over the blockchain. We have implemented the initial prototype of our approach using Ethereum as the underlying blockchain and empirically validated the effectiveness and efficiency of our approach. Ablation studies were conducted to determine the impact of changes in individual service policies on the overall cost.
... In a scenario of a ciphertext-only attack, where the cryptanalyst had limited information, it was crucial to at least know the cryptographic algorithm used for encryption. Although breaking an algorithm was not a simple task, knowledge of the algorithm used could significantly reduce the effort required to obtain the original message through cryptanalysis [29]. ...
... Prefix authentication [15] unifies several previously disjoint strands of research, such as secure logging [20] [6] [19], accountable shared storage [12] [23], certificate transparency [9] [10] [11], or data replication [17] [21]. Our designs fall in the class of transitive prefix authentication schemes, more specifically they are linking schemes. ...
Preprint
Full-text available
We present new schemes for solving prefix authentication and secure relative timestamping. By casting a new light on antimonotone linking schemes, we improve upon the state of the art in prefix authentication, and in timestamping with rounds of bounded length. Our designs can serve as more efficient alternatives to certificate transparency logs.
... Bellare and Yee [24] proposed the first logging scheme using message authentication codes (MACs) with forward security, which protects earlier log data from tampering even if the current key becomes known. Then, using hash chain technology, another MAC-based logging scheme was presented by Schneier and Kelsey [25]. Based on the work of Schneier and Kelsey, Stathopoulos et al. [26] presented an improved scheme to effectively resist internal and collusion attacks launched by log generators and servers in public networks. ...
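The forward-secure MAC and hash-chain ideas attributed above to Bellare and Yee [24] and Schneier and Kelsey [25] can be combined in a minimal sketch: each entry extends a hash chain, is tagged under the current key, and the key is then irreversibly evolved. This shows the general pattern only, not either paper's exact construction; all names here are illustrative.

```python
import hashlib
import hmac

def evolve(key: bytes) -> bytes:
    # One-way key update: once K_i is overwritten with H(K_i), an attacker
    # who later compromises the machine cannot recover K_i and therefore
    # cannot forge or re-MAC entries written before the compromise.
    return hashlib.sha256(b"evolve" + key).digest()

class ChainedLog:
    def __init__(self, initial_key: bytes):
        self.key = initial_key
        self.link = b"\x00" * 32   # hash-chain value Y_0
        self.entries = []          # (message, link, tag) triples

    def append(self, message: bytes):
        # Y_i = H(Y_{i-1} || m_i) links entries so reordering is detectable.
        self.link = hashlib.sha256(self.link + message).digest()
        tag = hmac.new(self.key, self.link, hashlib.sha256).digest()
        self.entries.append((message, self.link, tag))
        self.key = evolve(self.key)  # forward security: the old key is gone

def verify(entries, initial_key: bytes) -> bool:
    # A verifier holding the initial key replays the chain and key schedule.
    key, link = initial_key, b"\x00" * 32
    for message, stored_link, tag in entries:
        link = hashlib.sha256(link + message).digest()
        expected = hmac.new(key, link, hashlib.sha256).digest()
        if link != stored_link or not hmac.compare_digest(tag, expected):
            return False
        key = evolve(key)
    return True
```

Because every tag covers the running chain value rather than a single message, modifying, deleting, or reordering any earlier entry invalidates all subsequent tags.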
Article
Full-text available
Cloud storage security has received widespread attention from industry and academia. When cloud data is damaged and tampered by various types of security attacks, it is one of the most common methods to track accidents through log analysis. In order to support credible forensic analysis of user operation behavior logs in a shared cloud environment allowing multiple users handling data, this paper presents a secure and efficient public auditing scheme for user operation behavior logs. Specifically, a blockchain-based logging strategy was designed to support selective verification of log-data integrity and resist collusion attacks between malicious users and the cloud service provider. Based on the multi-user logging strategy, a public auditing approach was further presented to verify the integrity of log data in the cloud remotely without leaking any log content. We formally prove the security of the presented scheme and evaluate its performance by theoretical analysis and comprehensive experiments. The results indicate that our scheme can achieve secure and effective auditing for logs in the shared cloud storage.
... Secure logging mechanisms have been of interest to cryptographers for a long time [34], [52], [99], [128], [165], [176], [177], [210] and, for the purpose of transparency, have coalesced under the notions of authenticated data structures [137], [195] and transparency overlays [51], which are designed to broadly ensure that the log is verifiably append-only, can be used to lookup information, and is consistent (i.e., shows the same information to everyone and does not fork). In practice, logs are built using Merkle trees or blockchains, although more recent work has also explored the use of append-only dictionaries. ...
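As the snippet above notes, transparency logs are in practice built from Merkle trees. A minimal sketch of a Merkle root with inclusion proofs follows, using RFC 6962-style domain-separation prefixes (0x00 for leaves, 0x01 for interior nodes); the function names and the odd-node promotion rule are illustrative choices, not a specific system's algorithm.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _level_up(level):
    # Pair adjacent nodes; an unpaired last node is promoted as-is.
    nxt = [_h(b"\x01" + level[i] + level[i + 1])
           for i in range(0, len(level) - 1, 2)]
    if len(level) % 2:
        nxt.append(level[-1])
    return nxt

def merkle_root(leaves) -> bytes:
    level = [_h(b"\x00" + leaf) for leaf in leaves]  # domain-separated leaves
    while len(level) > 1:
        level = _level_up(level)
    return level[0]

def inclusion_proof(leaves, index):
    # Collect the sibling hash (with its side) at every level of the tree.
    level = [_h(b"\x00" + leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if index % 2 == 0:
            if index + 1 < len(level):
                proof.append(("R", level[index + 1]))
        else:
            proof.append(("L", level[index - 1]))
        level, index = _level_up(level), index // 2
    return proof

def verify_inclusion(leaf: bytes, proof, root: bytes) -> bool:
    # Recompute the path from the leaf to the root using the proof.
    node = _h(b"\x00" + leaf)
    for side, sibling in proof:
        node = (_h(b"\x01" + sibling + node) if side == "L"
                else _h(b"\x01" + node + sibling))
    return node == root
```

An inclusion proof is logarithmic in the number of leaves, which is what makes lookups in large append-only logs cheap to audit.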
Preprint
Full-text available
This paper systematizes log based Transparency Enhancing Technologies. Based on established work on transparency from multiple disciplines we outline the purpose, usefulness, and pitfalls of transparency. We outline the mechanisms that allow log based transparency enhancing technologies to be implemented, in particular logging mechanisms, sanitisation mechanisms and the trade-offs with privacy, data release and query mechanisms, and how transparency relates to the external mechanisms that can provide the ability to contest a system and hold system operators accountable. We illustrate the role these mechanisms play with two case studies, Certificate Transparency and cryptocurrencies, and show the role that transparency plays in their function as well as the issues these systems face in delivering transparency.
... To remove the need for an online trusted server, Holt [22] proposed using public key cryptography instead of private keys residing in the trusted server and keeping the encrypted keys with the log entries. Thus, it was proposed to use an elliptic curve cryptosystem. ...
Article
Full-text available
Cloud computing is now converging with the edge computing paradigm, yet cloud computing architectures often do not support computer forensics investigations. Analyzing various types of logs and logging mechanisms plays an important role in computer forensics. Owing to the distributed nature and multi-tenant models of the cloud, where many users share the same processing and network resources, collecting, storing, and analyzing logs from a cloud is very hard. User activity logs can be a valuable source of information in cloud forensic investigations. Generally, cloud service providers (CSPs) have access to the activity logs of cloud users, and a CSP can tamper with the logs so that an investigator cannot reach the real culprit. In such an environment, log security is one of the challenges in the cloud. Logging is used to monitor employees' behavior, to keep track of malicious activities, and to protect cloud networks from intrusion. Ensuring the reliability and integrity of logs is crucial. Most existing solutions for secure logging are designed for traditional systems rather than the complexity of a cloud environment. The proposed framework provides a secure logging environment by storing and processing activity logs and encrypting them using an advanced encryption method. It detects DDoS (distributed denial of service) attacks on cloud infrastructure by using the logs published in the cloud and is thus helpful in cloud forensics: the attack is detected by the investigator using the application activity logs available on the cloud server. A searchable encryption algorithm is used to increase the security of the logging mechanism and to maintain the confidentiality and privacy of user data. Proof of past logs (PPL) is created by storing logs in more than one place; the PPL supports the verification of logs changed by the CSP. The implementation of this application on AWS Infrastructure-as-a-Service (IaaS) cloud shows real-time use of this structure.
... The evidence obtained using an IDS is recorded at the time of attack, or when vulnerabilities are exploited to compromise the system. The information collected can concern open network connections, processes running in the system, files, and system calls (Schneier and Kelsey 1999). Thus, the information collected by the IDS can be directly presented as evidence in legal procedures against any attack performed to harm the system and the digital devices (Sommer 1999 ...
Article
Full-text available
With the increase in the usage of the Internet, a large amount of information is exchanged between different communicating devices. The data should be communicated securely between the communicating devices and therefore, network security is one of the dominant research areas for the current network scenario. Intrusion detection systems (IDSs) are therefore widely used along with other security mechanisms such as firewall and access control. Many research ideas have been proposed pertaining to the IDS using machine learning (ML) techniques, deep learning (DL) techniques, and swarm and evolutionary algorithms (SWEVO). These methods have been tested on the datasets such as DARPA, KDD CUP 99, and NSL-KDD using network features to classify attack types. This paper surveys the intrusion detection problem by considering algorithms from areas such as ML, DL, and SWEVO. The survey is a representative research work carried out in the field of IDS from the year 2008 to 2020. The paper focuses on the methods that have incorporated feature selection in their models for performance evaluation. The paper also discusses the different datasets of IDS and a detailed description of recent dataset CIC IDS-2017. The paper presents applications of IDS with challenges and potential future research directions. The study presented, can serve as a pedestal for research communities and novice researchers in the field of network security for understanding and developing efficient IDS models.
... Audit Logs. Towards secure audit log systems, [24,25] proposed a tamper detection scheme that detects audit log compromise. However, a key limitation of their design is that it only detects audit log tampering, while lacking the capability to prevent the attacker from manipulating data during an attack. ...
Article
Full-text available
Blockchain‐based audit systems suffer from low scalability and high message complexity. The root cause of these shortcomings is the use of “Practical Byzantine Fault Tolerance” (PBFT) consensus protocol in those systems. Alternatives to PBFT have not been used in blockchain‐based audit systems due to the limited knowledge about their functional and operational requirements. Currently, no blockchain testbed supports the execution and benchmarking of different consensus protocols in a unified testing environment. This paper demonstrates building a blockchain testbed that supports the execution of five state‐of‐the‐art consensus protocols in a blockchain system; namely PBFT, Proof‐of‐Work (PoW), Proof‐of‐Stake (PoS), Proof‐of‐Elapsed Time (PoET), and Clique. Performance evaluation of those consensus algorithms is carried out using data from a real‐world audit system. These results show that the Clique protocol is best suited for blockchain‐based audit systems, based on scalability features.
... Security and privacy challenges in this area are not new. In 1999, Schneier and Kelsey [8] set out to secure the collection of sensitive logs using encryption, to ensure that forensic records could be maintained in the event of a cyber breach. Some five years later, Waters et al. [9], realised that system logs were becoming a prime attack vector for attackers, who were seeking to cover their trail after successfully breaking in to computer systems. ...
Conference Paper
Full-text available
Finding a robust security mechanism for audit trail logging has long been a poorly satisfied goal. There are many reasons for this. The most significant of these is that the audit trail is a highly sought after goal of attackers to ensure that they do not get caught. Thus they have an incredibly strong incentive to prevent companies from succeeding in this worthy aim. Regulation, such as the European Union General Data Protection Regulation, has brought a strong incentive for companies to achieve success in this area due to the punitive level of fines that can now be levied in the event of a successful breach by an attacker. We seek to resolve this issue through the use of an encrypted audit trail process that saves encrypted records to a true immutable database, which can ensure audit trail records are permanently retained in encrypted form, with no possibility of the records being compromised. This ensures compliance with the General Data Protection Regulation can be achieved.
... There is also a need to verify and audit the security practices and data storage policies of the cloud service provider. All the above-mentioned practices lead to better trust between the user and the cloud service provider (Schneier & Kelsey, 1999). ...
Chapter
Full-text available
As data is stored on a distant server away from the user's direct control, the cloud presents various security risks and threats associated with user authentication and access control mechanisms. It is therefore of utmost importance to ensure the security of confidential business data in cloud storage, while making sure that only properly authenticated and authorized personnel can access the data and applications in the cloud. An important step in this regard is to implement biometric security mechanisms, which increase the level of security and only permit authenticated individuals by verifying different parameters of human biometric characteristics (traits): patterns like fingerprints, retina, iris, voice, face, ear, palm, signature, and DNA recognition. Implementing a biometric authentication mechanism takes the security of data and access control in the cloud to a higher level. This chapter discusses how the proposed biometric system is more advantageous and result-oriented than the recognition systems used so far, because it does not rely on presumptions: it is unique and provides fast, contact-less authentication.
... In [20], authors consider a secure logging architecture in which the audit logs would be stored on an untrusted machine in the storage phase. They proposed a scheme based on hash chains and evolving shared cryptographic keys to limit the attacker's ability to read and alter the audit logs. ...
Preprint
Full-text available
Verification of data generated by wearable sensors is increasingly becoming of concern to health service providers and insurance companies. There is a need for a verification framework that various authorities can request a verification service for the local network data of a target IoT device. In this paper, we leverage blockchain as a distributed platform to realize an on-demand verification scheme. This allows authorities to automatically transact with connected devices for witnessing services. A public request is made for witness statements on the data of a target IoT that is transmitted on its local network, and subsequently, devices (in close vicinity of the target IoT) offer witnessing service. Our contributions are threefold: (1) We develop a system architecture based on blockchain and smart contract that enables authorities to dynamically avail a verification service for data of a subject device from a distributed set of witnesses which are willing to provide (in a privacy-preserving manner) their local wireless measurement in exchange of monetary return; (2) We then develop a method to optimally select witnesses in such a way that the verification error is minimized subject to monetary cost constraints; (3) Lastly, we evaluate the efficacy of our scheme using real Wi-Fi session traces collected from a five-storeyed building with more than thirty access points, representative of a hospital. According to the current pricing schedule of the Ethereum public blockchain, our scheme enables healthcare authorities to verify data transmitted from a typical wearable device with the verification error of the order 0.01% at cost of less than two dollars for one-hour witnessing service.
... Schneier et al. [44] describe a method for making all entries logged before a compromise difficult for the attacker to read, modify, or delete undetectably. When audit log entries are generated, the audit authentication key is hashed. ...
Thesis
Full-text available
The auditability of information systems plays an important role in public administration. Information system accesses to resources are saved in log files so that auditors can later inspect them. However, there are two problems with managing conventional audit logs: i) audit logs are vulnerable to attacks where adversaries can tamper with data without being detected, and ii) there can be distinct stakeholders with different roles, different levels of trust, and different access rights to data. This scenario occurs in the Portuguese judicial system, where stakeholders utilize an information system managed by a third party. This document proposes using blockchain technology to make the storage of access logs more resilient while supporting such a multi-stakeholder scenario, in which different entities have different access rights to data. Towards that, we implemented this proposal in the Portuguese judicial system through JusticeChain. JusticeChain is divided into blockchain components and blockchain client components. The blockchain components, implemented with Hyperledger Fabric, guarantee log integrity and improve resiliency. The blockchain client component, JusticeChain Client, is responsible for saving logs on behalf of an information system and comprises the JusticeChain Log Manager and the JusticeChain Audit Manager. The latter allows audits mediated by the blockchain. The evaluation results show that the system can achieve a throughput of 37 transactions per second with latency lower than 1 minute. The storage required for each peer, for a year, is on the order of terabytes. As an extension of JusticeChain, which achieves even greater trust distribution, we present a blockchain-based access control system, JusticeChain v2. JusticeChain v2 allows distributing the authorization process while providing the same advantages as JusticeChain. Our evaluation shows that the system can handle around 250 access control requests per second, with a latency lower than 12 seconds. The storage required is approximately the same as for JusticeChain.
... to central log storage, represented in the framework as SAN/NAS (Y. Lass provides a log monitoring service using a log monitor/collection appliance, where the log data can be cross-checked by the user using a GUI appliance called log analysis. If the log data is found to be altered, the user can directly reach compliance and get the logs audited (M. Bellare & B. Yee, 1997; B. Schneier & J. Kelsey, 1999). Log retention provides log feeds to the SEM, which processes the data and provides alerts that can be used for information regarding system breakdown or threats to the application, network, or virtual machine. (Thorpe, I The SOC/MSSP receives logs from the log retention appliance and helps in detecting, analyzing, and responding to cyber sec ...
... Their scheme does not support aggregation of tags. Schneier and Kelsey [7] presented an FssAgg MAC scheme using a cryptographic hash function and a MAC function. Their scheme computes a new digest of a new message and a previous digest using the collision-resistant hash function and produces a tag for the new digest using the MAC function. ...
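The chained digest-plus-MAC pattern described in this snippet can be folded into a single constant-size aggregate tag with an evolving key, which is the point of tag aggregation. The sketch below illustrates that idea only, under assumed names; it is not the cited scheme's exact algorithm.

```python
import hashlib
import hmac

def next_key(key: bytes) -> bytes:
    # One-way key update between entries (forward security).
    return hashlib.sha256(b"next" + key).digest()

def aggregate(messages, initial_key: bytes) -> bytes:
    # Fold each per-entry MAC into a single rolling tag:
    #   sigma_i = H(sigma_{i-1} || MAC(k_i, m_i)),  k_{i+1} = H(k_i).
    # A verifier holding the initial key recomputes the same fold.
    key, sigma = initial_key, b"\x00" * 32
    for message in messages:
        entry_mac = hmac.new(key, message, hashlib.sha256).digest()
        sigma = hashlib.sha256(sigma + entry_mac).digest()
        key = next_key(key)
    return sigma
```

Any insertion, deletion, modification, or reordering of entries changes the aggregate. Note that an aggregate alone does not defend against pure truncation to a prefix; the verifier must additionally learn the expected entry count or the final tag, as the truncation-attack discussion in the surrounding works points out.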
... Secure logging. A sequence of works, including [26] considered secure logging mechanisms. The goal is to record on a local machine a log of all activity that both cannot be modified by an attacker and does not reveal anything without access to a decryption key. ...
Conference Paper
Messaging systems are used to spread misinformation and other malicious content, often with dire consequences. End-to-end encryption improves privacy but hinders content-based moderation and, in particular, obfuscates the original source of malicious content. We introduce the idea of message traceback, a new cryptographic approach that enables platforms to simultaneously provide end-to-end encryption while also being able to track down the source of malicious content reported by users. We formalize functionality and security goals for message traceback, and detail two constructions that allow revealing a chain of forwarded messages (path traceback) or the entire forwarding tree (tree traceback). We implement and evaluate prototypes of our traceback schemes to highlight their practicality, and provide a discussion of deployment considerations.
... Audit Logs. Schneier and Kelsey [31,32] proposed a secure audit logging scheme capable of tamper detection even after compromise. However, their system requires the audit log entries to be generated prior to the attack. ...
Preprint
Full-text available
Audit logs serve as a critical component in enterprise business systems and are used for auditing, storing, and tracking changes made to the data. However, audit logs are vulnerable to a series of attacks enabling adversaries to tamper data and corresponding audit logs without getting detected. Among them, two well-known attacks are "the physical access attack," which exploits root privileges, and "the remote vulnerability attack," which compromises known vulnerabilities in database systems. In this paper, we present BlockAudit: a scalable and tamper-proof system that leverages the design properties of audit logs and security guarantees of blockchain to enable secure and trustworthy audit logs. Towards that, we construct the design schema of BlockAudit and outline its functional and operational procedures. We implement our design on a custom-built Practical Byzantine Fault Tolerance (PBFT) blockchain system and evaluate the performance in terms of latency, network size, payload size, and transaction rate. Our results show that conventional audit logs can seamlessly transition into BlockAudit to achieve higher security and defend against the known attacks on audit logs.
Article
Full-text available
Many security issues are involved in log management. The integrity of the log file, and that of the logging process, must be ensured at all times. The main goals of a log manager are high bandwidth and low latency. In many real-world applications, sensitive information must be kept in log files on an untrusted machine. In the event that an attacker captures this machine, we would like to guarantee that he gains little or no information from the log files, and to limit his ability to corrupt them. This work describes a computationally cheap method for making all log entries generated prior to the logging machine's compromise impossible for the attacker to read, and also impossible to undetectably modify or destroy. It identifies the challenges for a secure cloud-based log management service and provides a comprehensive solution for storing and maintaining log records on a server operating in a cloud-based environment. It also addresses security and integrity issues not only during the log generation phase but also during the other stages of log management. It implements secure storage of log files in the cloud, where files can be read, written, deleted, uploaded, and downloaded. Managing log records is highly tedious and confidential in any organization: log files contain privacy details and sensitive information, so they must be protected from third-party attackers, and delegating log management is a cost-saving measure. To defend against attackers, we introduce the Advanced Encryption Standard (AES), providing a secret key to both the client and the data owners. The scheme provides high bandwidth and low latency, and is a cheap method by which an attacker cannot read, modify, or destroy the data. The AES algorithm can be applied to various security issues.
Book
Full-text available
The Argentine Congress of Engineering (CADI) is the most important event organized by the Federal Council of Engineering Deans (CONFEDI) at the national level. Since 2012, every two years, it has gathered leading figures from Argentina and the region to exchange experiences, strengthen the role of engineering professionally and academically (teaching, research, and outreach), and foster cooperation links that enable shared projects. In the same setting, the Argentine Congress on Engineering Education (CAEDI) is held, a forum for the exchange of experiences among all sectors involved in the educational process and for the debate of their ideas. This meeting has been held since 1996 and is the cornerstone that, years later, gave rise to the CADI. Always in pursuit of training new and better professionals, and broadening horizons in search of unification, plurality, and the exchange of knowledge, in 2017 CONFEDI again promoted the 1st Latin American Congress of Engineering (CLADI 2017), which had its 2nd edition in Cartagena de Indias in 2019 and will be held this year in Buenos Aires, with the Corporation of Engineering Faculties (CONDEFI) of Chile as Special Guest. This uninterrupted path converges in this mega event in which, for the first time, the three previous events come together in a single meeting, CADI – CLADI – CAEDI 2021, under the motto: "Latin American engineering celebrates 150 years of Argentine engineering." Through these initiatives, universities, companies, and the public sector work jointly for engineering in the service of a better and more inclusive society.
Conference Paper
Full-text available
Contents:
1. Intrusion Detection in 1993
2. Intrusion Detection in 1994
3. Intrusion Detection in 1995
4. Intrusion Detection in 1996
5. Intrusion Detection in 1997
6. Intrusion Detection in 1998
7. Intrusion Detection in 1999
8. Intrusion Detection in 2000
9. Intrusion Detection in 2001
10. Intrusion Detection in 2002
11. Intrusion Detection in 2003
Book
Full-text available
The Twelfth International Conference on Cloud Computing, GRIDs, and Virtualization (CLOUD COMPUTING 2021), held on April 18 - 22, 2021, continued a series of events targeted to prospect the applications supported by the new paradigm and validate the techniques and the mechanisms. A complementary target was to identify the open issues and the challenges to fix them, especially on security, privacy, and inter- and intra-clouds protocols. Cloud computing is a normal evolution of distributed computing combined with Service-oriented architecture, leveraging most of the GRID features and Virtualization merits. The technology foundations for cloud computing led to a new approach of reusing what was achieved in GRID computing with support from virtualization. The conference had the following tracks: Cloud computing Computing in virtualization-based environments Platforms, infrastructures and applications Challenging features Similar to the previous edition, this event attracted excellent contributions and active participation from all over the world. We were very pleased to receive top quality contributions. We take here the opportunity to warmly thank all the members of the CLOUD COMPUTING 2021 technical program committee, as well as the numerous reviewers. The creation of such a high quality conference program would not have been possible without their involvement. We also kindly thank all the authors that dedicated much of their time and effort to contribute to CLOUD COMPUTING 2021. We truly believe that, thanks to all these efforts, the final conference program consisted of top quality contributions. Also, this event could not have been a reality without the support of many individuals, organizations and sponsors. We also gratefully thank the members of the CLOUD COMPUTING 2021 organizing committee for their help in handling the logistics and for their work that made this professional meeting a success. 
We hope that CLOUD COMPUTING 2021 was a successful international forum for the exchange of ideas and results between academia and industry and to promote further progress in the area of cloud computing, GRIDs and virtualization.

CLOUD COMPUTING 2021 Steering Committee
Carlos Becker Westphall, Federal University of Santa Catarina, Brazil
Yong Woo Lee, University of Seoul, Korea
Bob Duncan, University of Aberdeen, UK
Aspen Olmsted, College of Charleston, USA
Alex Sim, Lawrence Berkeley National Laboratory, USA
Sören Frey, Daimler TSS GmbH, Germany
Andreas Aßmuth, Ostbayerische Technische Hochschule (OTH) Amberg-Weiden, Germany
Uwe Hohenstein, Siemens AG, Germany
Magnus Westerlund, Arcada, Finland

CLOUD COMPUTING 2021 Publicity Chairs
Jose Luis García, Universitat Politecnica de Valencia, Spain
Lorena Parra, Universitat Politecnica de Valencia, Spain

CLOUD COMPUTING 2021 Industry/Research Advisory Committee
Raul Valin Ferreiro, Fujitsu Laboratories of Europe, Spain
Bill Karakostas, VLTN gcv, Antwerp, Belgium
Matthias Olzmann, noventum consulting GmbH, Münster, Germany
Hong Zhu, Oxford Brookes University, UK
Conference Paper
The use of blockchain technology has been proposed to provide auditable access control for individual resources. However, when all resources are owned by a single organization, such expensive solutions may not be needed. In this work we focus on distributed applications such as business processes and distributed workflows. These applications are often composed of multiple resources/services that are subject to the security and access control policies of different organizational domains. Here, blockchains can provide an attractive decentralized solution for auditability. However, the underlying access control policies may overlap in their component conditions/rules, and simply using existing solutions would result in repeated evaluation of a user's authorization separately for each resource, leading to significant overhead in cost and computation time over the blockchain. To address this challenge, we propose an approach that formulates a constraint optimization problem to generate an optimal composite access control policy. This policy is in compliance with all the local access control policies and minimizes the policy evaluation cost over the blockchain. The developed smart contract(s) can then be deployed to the blockchain and used for access control enforcement. We also discuss how access control enforcement can be audited using a game-theoretic approach to minimize cost. We have implemented an initial prototype of our approach using Ethereum as the underlying blockchain and experimentally validated the effectiveness and efficiency of our approach.
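The saving a composite policy buys can be sketched off-chain in Python under a simplified policy model (an assumption for illustration; the paper formulates a full constraint optimization problem and deploys the result as a smart contract). Here each resource's policy is an AND of named conditions, and shared conditions are evaluated once and reused, where a naive per-resource evaluation would recompute them.

```python
def evaluate_composite(policies, conditions, user):
    """Evaluate every resource's policy for one user, computing each
    shared condition at most once.

    policies:   {resource: [condition_name, ...]}  -- conditions AND-ed
    conditions: {condition_name: predicate(user) -> bool}
    Returns ({resource: allowed?}, number_of_predicate_evaluations).
    """
    cache = {}        # condition_name -> cached boolean result
    calls = 0         # how many predicates were actually evaluated
    results = {}
    for resource, needed in policies.items():
        allowed = True
        for name in needed:
            if name not in cache:
                cache[name] = conditions[name](user)
                calls += 1
            allowed = allowed and cache[name]
        results[resource] = allowed
    return results, calls
```

With two resources both requiring `is_employee`, the naive approach evaluates three predicates while the composite evaluation performs only two; on a blockchain, where every operation has a gas cost, that kind of deduplication is the point of the optimization.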
Article
Verification of data generated by wearable sensors is increasingly becoming of concern to health service providers and insurance companies. These devices are typically vulnerable to a wide range of cybersecurity attacks, attempting to manipulate sensing data. Most of these disastrous attacks would remain undetected since neither healthcare servers nor Internet-of-Things (IoT) sensors are aware of the existence of attackers in the middle of communication. Thus, there is a need for a verification framework that various authorities can request a verification service for the local network data of a target IoT device. In this article, we leverage blockchain as a distributed platform to realize an on-demand verification scheme. This allows authorities to automatically transact with connected devices for witnessing services. A public request is made for witness statements on the data of a target IoT that is transmitted on its local network, and subsequently, devices (in close vicinity of the target IoT) offer witnessing service. Our contributions are threefold: 1) we develop a system architecture based on blockchain and smart contract that enables authorities to dynamically avail a verification service for data of a subject device from a distributed set of witnesses which are willing to provide (in a privacy-preserving manner) their local wireless measurement in exchange of monetary return; 2) we then develop a method to optimally select witnesses in such a way that the verification error is minimized subject to monetary cost constraints; and 3) finally, we evaluate the efficacy of our scheme using real Wi-Fi session traces collected from a five-storeyed building with more than thirty access points, representative of a hospital. 
According to the current pricing schedule of the Ethereum public blockchain, our scheme enables healthcare authorities to verify data transmitted from a typical wearable device with a verification error on the order of 0.01%, at a cost of less than $2 for a 1-hour witnessing service.
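The witness-selection step (contribution 2) can be caricatured in Python under the simplifying assumption that witnesses fail to observe an attack independently, so a set's verification error is the product of the individual miss probabilities. The brute-force search and all names below are illustrative, not the paper's actual optimization method.

```python
from itertools import combinations

def select_witnesses(witnesses, budget):
    """Pick the subset of witnesses minimizing verification error
    subject to a monetary budget.

    witnesses: list of (name, miss_probability, cost)
    Assumes independent misses, so error(set) = product of miss
    probabilities. Exhaustive search is fine for small pools.
    """
    best_set, best_err = (), 1.0
    for r in range(1, len(witnesses) + 1):
        for combo in combinations(witnesses, r):
            cost = sum(w[2] for w in combo)
            if cost > budget:
                continue
            err = 1.0
            for _, miss_prob, _ in combo:
                err *= miss_prob
            if err < best_err:
                best_err, best_set = err, tuple(w[0] for w in combo)
    return best_set, best_err
```

The monotone structure (adding a witness never increases the error, but always adds cost) is what makes this a budgeted selection problem rather than a free minimization.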
Article
Full-text available
Pervasiveness of Internet-based applications and computing devices has increased cybersecurity threats for a wide range of users. Studies have shown that application security flaws have their roots in the programming languages used for application development. Some vulnerabilities are due to a programmer's negligence; others are due to vulnerabilities present in the programming languages and their libraries. Developers may not be aware of the existing flaws in the programming languages and do not have time to take the necessary measures as they develop applications. To cope with this challenge, this article proposes a security feature framework for programming languages to understand the various exploitations and possible mitigations in programming languages. This security feature framework can be used to evaluate existing programming languages for potential vulnerabilities, the level of security support, and the language features needed to mitigate these vulnerabilities. Moreover, language designers may use this framework as a guide to ensure that the language being designed has the necessary and sufficient security feature set. The proposed security feature framework is then applied to several popular programming languages to evaluate the level of security feature coverage and the gaps in these languages, along with some recommendations on how to address these gaps.
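The kind of evaluation such a framework enables can be sketched as a feature matrix plus a coverage/gap report. Everything below is illustrative: the feature names and the true/false entries are hypothetical placeholders, not the framework's actual criteria or findings.

```python
# Hypothetical security-feature checklist (placeholder names).
FEATURES = ["memory_safety", "bounds_checking",
            "taint_tracking", "integer_overflow_checks"]

# Illustrative feature matrix; entries are NOT the paper's results.
MATRIX = {
    "C":    {"memory_safety": False, "bounds_checking": False,
             "taint_tracking": False, "integer_overflow_checks": False},
    "Rust": {"memory_safety": True, "bounds_checking": True,
             "taint_tracking": False, "integer_overflow_checks": True},
}

def coverage_report(matrix, features):
    """For each language, report the fraction of framework features it
    covers and the list of missing (gap) features."""
    report = {}
    for lang, feats in matrix.items():
        covered = [f for f in features if feats.get(f)]
        report[lang] = {
            "coverage": len(covered) / len(features),
            "gaps": [f for f in features if f not in covered],
        }
    return report
```

A language designer would read the `gaps` list as the feature set still needed to reach the framework's "necessary and sufficient" bar.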