While safety is one of the most critical contributions of Cooperative Adaptive Cruise Control (CACC), it is impractical to assess such impacts in the real world. Even with simulation, many factors, including vehicle dynamics, sensor errors, automated vehicle control algorithms, and crash severity, need to be properly modeled. In this paper, a simulation platform is proposed which explicitly features: (i) vehicle dynamics; (ii) sensor errors and communication delays; (iii) compatibility with CACC controllers; (iv) a state-of-the-art predecessor-leader-following (PLF) CACC controller; and (v) the ability to quantify crash severity and CACC stability. The proposed simulation platform evaluates CACC performance under normal and cybersecurity-attack scenarios using speed variation, headway ratio, and injury probability. The first two measures of effectiveness (MOEs) represent the stability of the CACC platoon, while the injury probability quantifies the severity of a crash. The proposed platform can evaluate the safety performance of CACC controllers of interest under various sudden or extreme events. It is particularly useful when traditional empirical driver models are not applicable. Such situations include, but are not limited to, cyber-attacks, sensor failures, and heterogeneous traffic conditions. The proposed platform is validated against data collected from real field tests and tested under various cyber-attack scenarios.
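By way of illustration, here is a minimal Python sketch of how the two stability MOEs might be computed from logged platoon trajectories. The function names, the example data, and the headway-ratio definition (actual time headway divided by the desired time gap) are assumptions made for illustration, not the paper's implementation.

```python
import statistics

def speed_variation(speeds):
    """Standard deviation of a vehicle's speed trace (m/s)."""
    return statistics.pstdev(speeds)

def headway_ratio(gap_m, speed_mps, desired_time_gap_s):
    """Actual time headway divided by the desired time gap.

    Values near 1.0 suggest the controller holds its target spacing;
    values well below 1.0 indicate unsafely short headways.
    """
    if speed_mps <= 0:
        return float("inf")
    return (gap_m / speed_mps) / desired_time_gap_s

# Hypothetical follower logged at 1 Hz during a speed perturbation.
speeds = [29.5, 29.1, 28.2, 27.8, 28.4, 29.0]  # m/s
gaps = [18.0, 17.2, 15.9, 15.1, 16.0, 17.5]    # m to predecessor
print(round(speed_variation(speeds), 3))
print([round(headway_ratio(g, v, 0.6), 2) for g, v in zip(gaps, speeds)])
```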
In this chapter, we discuss when, how, and why trust and trustworthiness arise to support cooperation within and across organizations. To do so, we first define trust and trustworthiness, discuss how they can be quantified, and determine key components of trusting and trustworthy behavior. In addition, we identify building blocks of trust and trustworthiness and offer tangible insights about how to establish trusting and cooperative business/interorganizational relationships, based on both academic research and case studies from across industries.
This study summarises the implementation of a software system architecture and relevant modules to enable cooperative adaptive cruise control (CACC) functionalities as an extension of adaptive cruise control (ACC), thereby leveraging the lessons learned from prototype ACC vehicle testing as well as ideas from prior research. These activities were conducted in the United States under a cooperative agreement between the Crash Avoidance Metrics Partners, LLC and the Federal Highway Administration. A key outcome of this project was a structured understanding of how advanced capabilities are implemented in the CACC algorithm. With the introduction of each CACC module, the impacts on the behaviours of vehicles following in a string (or string stability) were quantified to establish potential performance enhancements to automated following systems.
In this chapter, we discuss when, how, and why trust and trustworthiness arise to support credible information sharing and cooperation in a supply chain. Synthesizing our learning, we identify the four building blocks of trust and trustworthiness as personal values and norms, market environment, business infrastructure, and business process design. We elaborate on these building blocks and offer tangible insights into how to establish more trusting and cooperative supply chain relationships.
The Requirements Engineering (RE) community recognizes the importance of trust, proposing several approaches to model and analyze trust requirements. However, such approaches mainly focus on trust as social relations without relating them to the requirements of the system's components. We propose a belief-based trust approach based on an extended version of Secure Tropos, where social relations are modeled and analyzed along with beliefs concerning the capabilities and motivations of the system's components. An example concerning the US stock market crash (the Flash Crash) is used to illustrate our approach.
In Europe, over recent years, the responsibility for ensuring system safety has shifted onto the developers and operators to construct and present well reasoned arguments that their systems achieve acceptable levels of safety. These arguments (together with supporting evidence) are typically referred to as a "safety case". This paper describes the role and purpose of a safety case. Safety arguments within safety cases are often poorly communicated. This paper presents a technique called GSN (Goal Structuring Notation) that is increasingly being used in safety-critical industries to improve the structure, rigor, and clarity of safety arguments. The paper also describes a number of extensions, based upon GSN, which can be used to assist the maintenance, construction, reuse and assessment of safety cases. The aim of this paper is to describe the current industrial use and research into GSN such that its applicability to other types of Assurance Case, in addition to safety cases, can also be considered.
Models of driving have traditionally been couched either in terms of guidance and control or in terms of human factors. There is, however, a need for more powerful models that can match the rapidly growing complexity and sophistication of modern cars. Such models must provide coherent and consistent ways of describing driver performance to help engineers develop and validate technical concepts for semi- and fully automated systems in cars. This paper presents a qualitative model for Driver-in-Control (DiC) based on the principles of cognitive systems engineering. The model describes driving in terms of multiple, simultaneous control loops with the joint driver-vehicle system (JDVS) as a unit. This provides the capability to explain how disturbances may propagate between control levels. The model also enables new functions to be evaluated at the specific level at which they are aimed, rather than by their effects on global driving performance.
Trust is an important tool in human life, as it enables people to cope with the uncertainty caused by the free will of others. Uncertainty and uncontrollability are also issues in computer-assisted collaboration and electronic commerce in particular. A computational model of trust and its implementation can alleviate this problem.
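As a concrete instance of such a computational model, here is a minimal sketch of the widely used beta reputation update, which is one possibility and not necessarily the formulation the authors implement: trust is the expected value of a Beta distribution over observed cooperative and uncooperative outcomes.

```python
class BetaTrust:
    """Trust as E[Beta(alpha, beta)] over interaction outcomes."""

    def __init__(self):
        self.positive = 0  # cooperative outcomes observed
        self.negative = 0  # defecting or failed outcomes observed

    def update(self, cooperated: bool) -> None:
        if cooperated:
            self.positive += 1
        else:
            self.negative += 1

    @property
    def value(self) -> float:
        # Expected value of Beta(positive + 1, negative + 1):
        # 0.5 with no evidence, drifting toward observed behavior.
        return (self.positive + 1) / (self.positive + self.negative + 2)

t = BetaTrust()
for outcome in [True, True, False, True]:
    t.update(outcome)
print(round(t.value, 3))  # 0.667 after 3 cooperative, 1 failed interaction
```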
This survey is directed at an audience wishing to familiarize themselves with the field, for example to locate a research target or implement a trust management system. It concentrates on providing a general overview of the state of the art, combined with examples of things to take into consideration both when modelling trust in general and when building a solution for a certain phase in trust management, be it initializing a trust relationship, updating trust based on experience, or determining what trust should have an effect on.
This paper gives the main definitions relating to dependability, a generic concept including as special cases such attributes as reliability, availability, safety, integrity, maintainability, etc. Security brings in concerns for confidentiality, in addition to availability and integrity. Basic definitions are given first. They are then commented upon, and supplemented by additional definitions, which address the threats to dependability and security (faults, errors, failures), their attributes, and the means for their achievement (fault prevention, fault tolerance, fault removal, fault forecasting). The aim is to explicate a set of general concepts, of relevance across a wide range of situations and, therefore, helping communication and cooperation among a number of scientific and technical communities, including ones that are concentrating on particular types of system, of system failures, or of causes of system failures.
A 2001 IBM manifesto observed that a looming software complexity crisis, caused by applications and environments that number into the tens of millions of lines of code, threatened to halt progress in computing. The manifesto noted the almost impossible difficulty of managing current and planned computing systems, which require integrating several heterogeneous environments into corporate-wide computing systems that extend into the Internet. Autonomic computing, perhaps the most attractive approach to solving this problem, creates systems that can manage themselves when given high-level objectives from administrators. Such systems manage themselves according to an administrator's goals, and new components integrate as effortlessly as a new cell establishes itself in the human body. These ideas are not science fiction, but elements of the grand challenge to create self-managing computing systems.
E-participation is about ICT-supported participation of citizens in democratic processes and procedures (e.g., consultation or co-creation). Research has mostly centered on the development of tools to model and deploy ICT-supported democratic processes. So far, the integration and use of reputation has only rarely been considered, even though reputation systems provide ratings that could be adapted well to the context of e-participation, e.g., evaluating and rating the comments and activities of users. Furthermore, reputation in e-participation can increase trust between users (e.g., new participants) and in their activities, e.g., commenting or rating. In this paper, we aim to address reputation in e-participation with an overview of the state of the art and an experimental reputation model for e-participation. The model measures not only the quality of comments but also the activity of users. Thereby, a certain level of assurance is enabled by the users themselves; they can flag unqualified posts, which are removed once a certain threshold is reached. For future work, we aim to perform user acceptance tests in order to identify potential opportunities and pitfalls and further enhance the proposed solution.
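The following minimal sketch illustrates the kind of model described, under the assumptions that comment quality is an average peer rating in [0, 1], activity is a contribution count, and a post is removed once flags from sufficiently reputable users pass a threshold; the weights and the threshold are invented for illustration.

```python
FLAG_THRESHOLD = 2.0   # illustrative reputation-weighted removal threshold
W_QUALITY, W_ACTIVITY = 0.7, 0.3

def user_reputation(avg_comment_rating, activity_count, max_activity=100):
    """Blend comment quality (0..1) with normalized activity (0..1)."""
    activity = min(activity_count, max_activity) / max_activity
    return W_QUALITY * avg_comment_rating + W_ACTIVITY * activity

def should_remove(post_flags, reputations):
    """Remove a post once the reputation-weighted flags pass the threshold."""
    weight = sum(reputations[u] for u in post_flags)
    return weight >= FLAG_THRESHOLD

reps = {"alice": user_reputation(0.9, 80), "bob": user_reputation(0.4, 10),
        "carol": user_reputation(0.8, 50), "dave": user_reputation(0.95, 90)}
print(should_remove({"alice", "bob"}, reps))            # False: weight ~1.18
print(should_remove({"alice", "carol", "dave"}, reps))  # True: weight ~2.52
```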
Cyber-Physical Systems (CPS) provide enormous potential for innovation, but a precondition for this is that the issue of dependability has been addressed. This paper presents the concept of a Digital Dependability Identity (DDI) of a component or system as a foundation for assuring the dependability of CPS. A DDI is an analyzable and potentially executable model of information about the dependability of a component or system. We argue that DDIs must fulfill a number of properties, including being universally useful across supply chains, enabling off-line certification of systems where possible, and providing capabilities for in-field certification of the safety of CPS. In this paper, we focus on system safety as one integral part of dependability and, as a practical demonstration of the concept, we present an initial implementation of DDIs in the form of Conditional Safety Certificates (also known as ConSerts). We explain ConSerts and their practical operationalization based on an illustrative example.
As traffic participation is inherently a risky activity, traffic psychology has generated a great number of so-called risk models, i.e. models in which the risk concept plays a major role. Three of these models are attracting a great deal of attention these days: Näätänen and Summala's 'Model of driver's decision making and behaviour', Wilde's 'Theory of risk homeostasis', and Fuller's 'Threat-avoidance model of driver behaviour'. All three models emphasize motivational aspects with regard to risk, and they claim to be generally applicable to a large array of traffic situations. In an attempt to use these models for quantitative predictions in a concrete example (an overtaking manoeuvre), we found that many model components had not been defined at all, or had been defined only partially or in a contradictory fashion. We have therefore developed our own model, which allows quantitative calculations in terms of behaviour alternatives, subjective probabilities of events, and utilities of the outcomes of behaviour alternatives. The concept of risk is more sharply defined as well. Further, the model explicitly takes into account that traffic tasks may be conceived as hierarchically ordered into strategic, tactical and operational task levels.
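A worked sketch of the kind of quantitative calculation the model permits, with invented numbers: each behaviour alternative is scored by the subjective probabilities of its outcomes and the utilities of those outcomes.

```python
# Hypothetical overtaking decision; the subjective probabilities and
# utilities below are invented for illustration only.
alternatives = {
    "overtake now": [(0.90, 5.0),      # success: time saved
                     (0.09, -20.0),    # aborted manoeuvre: near miss
                     (0.01, -1000.0)], # collision
    "stay behind": [(1.00, -1.0)],     # small time loss, no risk
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for action, outcomes in alternatives.items():
    print(action, round(expected_utility(outcomes), 2))
# overtake now: 0.9*5 + 0.09*(-20) + 0.01*(-1000) = -7.3
# stay behind: -1.0  -> the safer alternative wins here
```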
Trust is indispensable for any user of an e-service who must make a decision before engaging in a transaction. That is why users and service providers need varied and functional methods to build online trust reputation systems. This paper discusses the use of trust management and reputation systems in electronic transactions, and particularly in e-commerce applications. It presents a survey of some existing trust reputation systems used in e-commerce applications and, building on this survey, proposes a new design for trust reputation systems (TRS) that focuses on the use of semantic feedback in order to calculate users' recommendation weights and to classify users according to these weights. The paper highlights the importance of distinguishing between trustful feedback or ratings and distrustful ones. It also proposes methods to put into practice in order to give the right weight to the right recommendations.
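As a sketch of the distinction the paper emphasizes, the following illustrative rule (not the TRS design itself) weights each rater by how often their past feedback was later confirmed, so distrustful feedback from unreliable raters contributes little:

```python
def recommendation_weight(confirmed, total):
    """Weight a rater by the fraction of their past feedback later
    confirmed by other users (Laplace-smoothed)."""
    return (confirmed + 1) / (total + 2)

def item_score(feedbacks):
    """feedbacks: list of (is_trustful, rater_confirmed, rater_total)."""
    num = den = 0.0
    for is_trustful, confirmed, total in feedbacks:
        w = recommendation_weight(confirmed, total)
        num += w * (1.0 if is_trustful else 0.0)
        den += w
    return num / den if den else 0.5

fb = [(True, 40, 50), (True, 9, 10), (False, 1, 20)]
print(round(item_score(fb), 2))  # ~0.95: the unreliable rater barely counts
```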
In safety-critical applications, it is necessary to justify, prior to deployment, why software behaviour is to be trusted. This is normally referred to as software safety assurance. Within certification standards, developers demonstrate this by appealing to the satisfaction of objectives that the safety assurance standards require for compliance. In some standards the objectives can be very detailed in nature, prescribing specific processes and techniques that must be followed. This approach to certification is often described as prescriptive or process-based certification. Other standards set out much more high-level objectives and are less prescriptive about the particular processes and techniques to be used. These standards instead explicitly require the submission of an assurance argument which communicates how evidence generated during development (for example from testing, analysis and review) satisfies claims concerning the safety of the software. There has been much debate surrounding the relative merits of prescriptive and assurance-argument approaches to certification, and in many ways this debate can lead to confusion; in fact, there is a role for both approaches in a successful software assurance regime. In this paper, we provide a comparative examination of these two approaches and seek to identify the relative merits of each. We first introduce the concepts of assurance cases and prescriptive software assurance. We describe how an assurance case could be generated for the software of an aircraft wheel braking system. We then describe how prescriptive certification guidelines could be used in order to gain assurance in the same system. Finally, we compare the results of the two approaches and explain how they may complement each other. This comparison highlights the crucial role that an assurance argument can play in explaining and justifying how the software evidence supports the safety claims, even when a prescriptive safety standard is being followed.
Trust must be treated carefully in Grid environments because of their open and decentralized nature. This paper examines the behavior of nodes in a computational Grid that apply trust concepts as decision criteria for job delegation, using multi-agent systems. It also presents an analysis of trust issues between computational Grid nodes at the moment of task delegation. A trust model was implemented on a multi-agent system simulating a Grid environment in order to run agent interactions and analyze trust situations. The results of the implementation consider when an agent's personal experience outweighs reputation information and how dishonest opinions influence trust calculation.
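The trade-off examined in the results can be sketched as follows, assuming (hypothetically) that direct experience overrides third-party reputation once it rests on enough interactions; the blending rule is illustrative, not the paper's model.

```python
def delegation_trust(own_successes, own_total, reputation, min_evidence=10):
    """Blend direct experience with reputation; the more direct
    evidence an agent has, the less third-party reputation counts."""
    direct = own_successes / own_total if own_total else 0.5
    confidence = min(own_total / min_evidence, 1.0)
    return confidence * direct + (1 - confidence) * reputation

# Little personal experience: (possibly dishonest) reputation dominates.
print(round(delegation_trust(1, 2, reputation=0.9), 2))   # 0.82
# Ample personal experience: dishonest opinions barely matter.
print(round(delegation_trust(3, 20, reputation=0.9), 2))  # 0.15
```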
Acknowledged as important factors for business environments operating as virtual organizations (VOs), trust and reputation are also receiving attention in Grids devoted to scientific applications, where problems arise in finding suitable models and architectures for the flexible security management of heterogeneous resources. Since these resources are highly heterogeneous (from individual users to whole organizations, or experiment tools and workflows), this paper presents a trust and reputation framework that integrates a number of information sources to produce a comprehensive evaluation of trust and reputation by clustering resources having similar capabilities of successfully executing a specific job. Here, trust and reputation are considered as quality of service (QoS) parameters and are asserted on the operative context of resources, a concept expressing the resources' capability of providing trusted services within collaborative scientific applications. Specifically, the framework exploits distributed brokers that support interaction trust and the creation of VOs from existing scientific organizations. A broker is a distributed software module, launched at some node of the Grid, that makes use of resources and communicates with other brokers to perform specific reputation services. In turn, each broker contributes to maintaining a dynamic and adaptive reputation assessment within the Grid in a collaborative and distributed fashion. The proposed framework is empirically implemented by adopting a SOA approach, and results show its effectiveness and its possible integration in a scientific Grid.
In the wake of current computing trends like Ubiquitous Computing, Ambient Intelligence and Cyber Physical Systems, new application domains like Car2Car emerged. One key characteristic of these new application domains is their openness with respect to dynamic integration of devices and components. It is obvious that traditional safety assurance techniques, both state of the practice and state of the art, are not sufficient in this context. A possible solution approach would be to shift portions of the safety assurance process into run time. This can be reached by the integration of appropriate run time safety models and corresponding dynamic evaluation mechanisms. In this paper we sketch out our recent work on conditional safety certificates, which facilitate such dynamic safety evaluation. We conclude with a brief discussion and state promising research directions for the future.
Due to the dynamic and anonymous nature of open environments, it is critically important for agents to identify trustful cooperators which work consistently as they claim. In the e-services and e-commerce communities, trust and reputation systems are applied broadly as one kind of decision support system, and aim to cope with the consistency problems caused by uncertain trust relationships. However, challenges still exist: on the one hand, we require more flexible trust computation models to satisfy various personal requirements, since agents in these communities are heterogeneous; on the other hand, trust and reputation systems calculate the trustworthiness of agents based on the agents' past behavior. Open environments are dynamic, agents are anonymous, and the records of agents' past behavior are distributed across the environment, so agents lacking valid information have to search the environment for the required records. Thus, efficient, scalable and effective information collection strategies are required to address these issues. In this paper we present a distributed trust and reputation system to cope with these challenges. We propose a novel and flexible trust computation model based on artificial neural networks (ANNs). With the advantages of ANNs, our trust model tunes its parameters automatically to adapt to various personal requirements. We propose a broker-assisted information collection strategy based on a clustering method. With the support of brokers, subcommunities are managed by a reputation mechanism in an efficient and scalable way and help their members collect high-quality information. We show the performance of our trust and reputation system by simulation.
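In the spirit of the model described, though not the authors' actual network, a single sigmoid neuron can map behavior features to a trust score and tune its weights by gradient descent to fit an agent's personal ratings of past partners; the features and labels below are invented for illustration.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, epochs=2000, lr=0.5):
    """samples: list of (feature_vector, personal trust label in [0, 1])."""
    random.seed(0)
    n = len(samples[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = pred - y  # cross-entropy gradient w.r.t. pre-activation
            for i in range(n):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

# Features: (success rate, recency of last interaction, rating consistency)
history = [((0.9, 0.8, 0.9), 1.0), ((0.2, 0.9, 0.3), 0.0),
           ((0.7, 0.2, 0.8), 0.8), ((0.4, 0.5, 0.4), 0.3)]
w, b = train(history)
# A partner with strong behavior features should score high trust.
print(round(sigmoid(sum(wi * xi for wi, xi in zip(w, (0.8, 0.7, 0.85))) + b), 2))
```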
As traffic participation is inherently a risky activity, traffic psychology has generated a great number of so-called risk models, i.e. models in which the risk concept plays a major role. We have developed our own model; its purpose is to provide a structural framework that allows us to describe the perceptual, judgemental and decision processes of traffic participants at all levels of traffic tasks, taking into account the subjective correlates of the probabilities of outcomes and of outcome values, and explicitly distinguishing between risk judgements and other judgements.