
Role of information about opponent's actions and intrusion-detection alerts on cyber-decisions in cybersecurity games

Abstract

Cyberspace is becoming increasingly prone to cyber-attacks. Cyber-attack and cyber-defense decisions may be influenced by the information about the actions of adversaries and defenders that is available to their opponents (interdependence information), as well as by alerts from intrusion detection systems (IDSs), systems that offer recommendations to defenders against cyber-attacks. However, little is currently known about how interdependence information and IDS alerts influence attack-and-defend decisions in cybersecurity. In this paper, we conducted a laboratory experiment in which participants (N = 140) performed as would-be adversaries (attackers) and defenders (analysts). Participants were randomly assigned to four between-subjects conditions in a cybersecurity game, where the conditions varied in interdependence information (full-information or no-information) and IDS alerts (present or absent). Results revealed that the proportion of defend (attack) actions was smaller (the same) when the IDS was present compared to absent. In addition, the proportion of defend (attack) actions was the same (smaller) in conditions with full information about opponents compared to no information about opponents. Furthermore, to understand the experimental results, we calibrated cognitive models based upon Instance-Based Learning Theory (IBLT), a theory of decisions from experience. Based upon the experimental conditions, four variants of a single IBL model were developed. Model results revealed excessive reliance on the recency and frequency of information, and cognitive noise, among both attackers and defenders across the experimental conditions. We highlight the implications of our results for cyber-decisions in the real world.
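
Since this abstract and several of the summaries below turn on the same IBLT mechanisms (recency, frequency, and cognitive noise), a minimal sketch may help fix ideas. It assumes the ACT-R-style activation and blending equations commonly used with IBLT; the outcome values, timestamps, and parameter settings are illustrative, not taken from any of these studies.

```python
import math
import random

def activation(retrieval_times, t_now, d=0.5, sigma=0.25):
    """ACT-R-style activation: recency and frequency enter through the
    power-law sum over past retrievals; sigma scales logistic noise."""
    base = math.log(sum((t_now - t) ** (-d) for t in retrieval_times))
    u = min(max(random.random(), 1e-9), 1 - 1e-9)  # guard against log(0)
    return base + sigma * math.log((1 - u) / u)

def blended_value(instances, t_now, d=0.5, sigma=0.25):
    """Blend the stored outcomes of one action, weighting each instance by
    its Boltzmann retrieval probability (temperature tau = sigma * sqrt(2))."""
    tau = sigma * math.sqrt(2)
    acts = [activation(times, t_now, d, sigma) for _, times in instances]
    ws = [math.exp(a / tau) for a in acts]
    return sum(w * outcome for w, (outcome, _) in zip(ws, instances)) / sum(ws)

# Illustrative: an attacker's blended value for "attack" after observing
# outcomes 10, -5, and 0 on the listed past trials.
instances = [(10, [1, 4]), (-5, [2]), (0, [3])]
print(blended_value(instances, t_now=5))
```

A higher decay d makes older instances fade faster (stronger reliance on recency), and a higher sigma adds more trial-to-trial noise; the "excessive reliance on recency and frequency and cognitive noise" reported above is read off calibrated values of these parameters.
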
Chapter
Cyber-attacks, intentional efforts to capture information or disrupt a network, are growing. Prior research in cybersecurity has experimentally investigated the influence of network size on adversarial decisions in a deception game involving honeypots. However, little is known about the cognitive mechanisms that modulate the influence of network size on adversarial decisions. The primary objective of this research is to investigate how an instance-based learning (IBL) model involving recency, frequency, and cognitive noise predicts adversarial decisions in the presence of networks of different sizes. The experimental study used a deception game (DG) across three between-subjects conditions of network size: small, medium, and large (N = 20 per condition). The results revealed that the proportions of honeypot and regular probes and attacks were greater in the medium-sized and large-sized networks than in the small-sized networks. Similarly, the proportion of no-probe and no-attack actions was greater in the small-sized networks than in the medium-sized and large-sized networks. An IBL model was calibrated to the human decisions collected in this experiment, and an IBL model with ACT-R default parameters was developed as a baseline. Results revealed that the IBL model with calibrated parameters explained the adversary's decisions more accurately than the IBL model with ACT-R default parameters. Participants also showed a greater reliance on the recency and frequency of outcomes and smaller cognitive noise in their decision choices across the three network sizes. We highlight the main implications of our findings for the cognitive modeling community.
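
As a rough illustration of what "calibrated" versus "ACT-R default" parameters means here, the sketch below grid-searches the decay d and noise sigma to minimize squared error against observed attack proportions. The observed values and the stand-in simulate() function are toy placeholders, not the chapter's model or data; ACT-R's conventional defaults are d = 0.5 and sigma = 0.25.

```python
import itertools

# Toy observed attack proportions per network size (placeholders only).
observed = {"small": 0.48, "medium": 0.94, "large": 0.97}

def simulate(d, sigma):
    """Stand-in for running the IBL model; a real calibration would
    simulate the model's choices and return predicted proportions."""
    return {k: max(0.0, min(1.0, v - 0.2 * (d - 0.5) + 0.1 * (sigma - 0.25)))
            for k, v in observed.items()}

grid = itertools.product([0.1 * i for i in range(1, 21)],   # candidate d
                         [0.05 * i for i in range(1, 11)])  # candidate sigma
best = min(grid, key=lambda p: sum((simulate(*p)[k] - observed[k]) ** 2
                                   for k in observed))
print("calibrated (d, sigma):", best, "vs ACT-R defaults (0.5, 0.25)")
```
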
Chapter
Cyber-attacks are increasing rapidly in the real world, and these attacks cause widespread damage to cyber-infrastructure. Information about the actions and consequences of opponents (interdependence information) is likely to influence the decision-making of defenders (analysts) and adversaries (hackers) during cyber-attacks; however, little is currently known about the cognitive processes that would account for the role of interdependence information in the interaction between hackers and analysts. This chapter uses data from a published experiment involving the presence or absence of interdependence information about the actions and payoffs of opponents in a simulated cybersecurity game. We propose a computational cognitive model based upon Instance-Based Learning Theory (IBLT), a theory of decisions from experience, to explain hackers' and analysts' decisions in the experiment. Results reveal that the model could account for participants' decisions as hackers and analysts in both the presence and absence of interdependence information. We discuss the implications of our results for computational modeling of decisions in the cyber world.
Article
Deception via honeypots, computers that pretend to be real, may provide effective ways of countering cyber-attacks on computer networks. Although prior research has investigated the effectiveness of the timing and amount of deception via deception-based games, it is unclear how the size of the network (i.e., the number of computer systems in the network) influences adversarial decisions. In this research, using a deception game, we evaluate the influence of network size on an adversary's cyber-attack decisions. The deception game has two sequential stages, probe and attack, and is defined as DG (n, k, γ), where n is the number of servers, k is the number of honeypots, and γ is the number of probes that the adversary makes before attacking the network. In the probe stage, participants may probe a few web servers or may not probe the network. In the attack stage, participants may attack any one of the web servers or decide not to attack the network. In a laboratory experiment, participants were randomly assigned to a repeated deception game across three between-subjects conditions: small (20 participants), medium (20 participants), and large (20 participants). The small, medium, and large conditions used DG (2, 1, 1), DG (6, 3, 3), and DG (12, 6, 6) games, respectively; thus, the proportion of honeypots was kept constant at 50% in all three conditions. Results revealed that in the small network, the proportions of honeypot and no-attack actions were 0.20 and 0.52, whereas in the medium (large) network, the proportions of honeypot and no-attack actions were 0.50 (0.50) and 0.06 (0.03), respectively. There was also an effect of probing actions on attack actions across all three network sizes. We highlight the implications of our results for networks of different sizes involving deception via honeypots.
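
A sketch of the DG (n, k, γ) structure described above may make the three conditions concrete. The probe and attack policies below are random placeholders, and real honeypots may answer probes deceptively; only the n/k/γ structure is taken from the abstract.

```python
import random

def play_dg(n, k, gamma, probe_policy, attack_policy, rng=random):
    """One round of DG(n, k, gamma): up to gamma probes, then one attack
    (or no attack). Returns the probes, the target, and whether the
    attack hit a honeypot."""
    honeypots = set(rng.sample(range(n), k))       # hidden from the adversary
    probes = []
    for _ in range(gamma):
        server = probe_policy(n, probes)
        if server is None:                         # adversary may skip probing
            break
        probes.append((server, server in honeypots))
    target = attack_policy(n, probes)
    hit = None if target is None else target in honeypots
    return probes, target, hit

# Illustrative random policies, played on the large condition DG(12, 6, 6).
probe = lambda n, probes: random.randrange(n)
attack = lambda n, probes: random.choice([None, random.randrange(n)])
print(play_dg(12, 6, 6, probe, attack))
```
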
Article
Cyber-attacks are deliberate attempts by adversaries to illegally access the online information of other individuals or organizations. There are likely to be severe monetary consequences for organizations and their workers who face cyber-attacks. However, little is currently known about how the monetary consequences of cyber-attacks may influence the decision-making of defenders and adversaries. In this research, using a cybersecurity game, we evaluate the influence of monetary penalties on the decisions made by people performing in the roles of human defenders and adversaries via experimentation and computational modeling. In a laboratory experiment, participants were randomly assigned to the role of “hackers” (adversaries) or “analysts” (defenders) across three between-subjects conditions: equal payoffs (EQP), penalizing defenders for false alarms (PDF), and penalizing defenders for misses (PDM). The PDF and PDM conditions were 10 times costlier for defender participants compared to the EQP condition, which served as a baseline. Results revealed an increase (decrease) and decrease (increase) in attack (defend) actions in the PDF and PDM conditions, respectively. Also, both attack-and-defend decisions deviated from Nash equilibria. To understand the reasons for these results, we calibrated a model based on Instance-Based Learning Theory (IBLT) to the attack-and-defend decisions collected in the experiment. The model's parameters revealed an excessive reliance on recency, frequency, and variability mechanisms by both defenders and adversaries. We discuss the implications of our results for different cyber-attack situations where defenders are penalized for their misses and false alarms.
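
The EQP/PDF/PDM manipulation is essentially a reweighting of the defender's error costs. The sketch below encodes that structure with hypothetical payoff magnitudes; the abstract gives only the 10x relationship, not the actual values.

```python
# Hypothetical defender payoffs, keyed by (defender action, adversary action).
EQP = {("defend", "attack"): 5,        # correct detection
       ("defend", "no-attack"): -1,    # false alarm
       ("no-defend", "attack"): -1,    # miss
       ("no-defend", "no-attack"): 5}  # correct rejection

def penalize(base, error_cell, factor=10):
    """Return a copy of the payoff table with one error made `factor`
    times costlier, as in the PDF and PDM conditions."""
    out = dict(base)
    out[error_cell] = base[error_cell] * factor
    return out

PDF = penalize(EQP, ("defend", "no-attack"))   # false alarms 10x costlier
PDM = penalize(EQP, ("no-defend", "attack"))   # misses 10x costlier
print(PDF[("defend", "no-attack")], PDM[("no-defend", "attack")])  # -10 -10
```
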
Article
Intrusion Detection Systems (IDSs) help create cyber situational awareness for defenders by providing recommendations. Prior research in simulation and game theory has revealed that the presence and accuracy of IDS-like recommendations influence the decisions of defenders and adversaries. In the current paper, we present novel analyses of prior research by analyzing the sequential decisions of defenders and adversaries over repeated trials. Specifically, we developed computational cognitive models based upon Instance-Based Learning Theory (IBLT) to capture the dynamics of the sequential decisions made by defenders and adversaries across numerous conditions that differed in the IDS's availability and accuracy. We found that cognitive mechanisms based upon recency, frequency, and variability accounted for adversary and defender decisions better than the optimal Nash solutions did. We discuss the implications of our results for adversary and defender decisions in the cyber world.
Chapter
Cyber-attacks are increasing in the real world and cause widespread damage to cyber-infrastructure and loss of information. Deception, i.e., actions to promote the belief in things that are not true, could be a way of countering cyber-attacks. In this paper, we propose a deception game, which we use to evaluate the decision-making of a hacker in the presence of deception. In an experiment using the deception game, we analyzed the effect of two between-subjects factors on hackers' decisions to attack a computer network (N = 100 participants): the amount of deception used and the timing of deception. The amount of deception was manipulated at two levels: low and high. The timing of deception was manipulated at two levels: early and late. Results revealed that in the late and high deception conditions, the proportion of not-attack actions by hackers was higher. Our results suggest that deception acts as a deterrence strategy for hackers.
Conference Paper
Currently, little is known about how defenders' reliance on decision-support technology influences their decisions. Here, we designed a cybersecurity game where “hackers” decide whether to attack a computer network and “analysts” decide whether to defend the network based upon recommendations from an IDS. We present results from an experiment with 200 participants, randomly paired and assigned to one of four between-subjects conditions that varied in the IDS's availability (absent/present) and its accuracy (when present, 10%, 50%, or 90% accurate). Results revealed that the proportions of attack and defend actions were similar and close to their Nash proportions when the IDS was absent and when it was 50% accurate; however, these proportions were smaller and different from their Nash proportions when the IDS was inaccurate (10% accurate) or very accurate (90% accurate). Our results suggest that the presence of decision-support technology is likely to make defenders over-rely on this technology.
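
The "Nash proportions" referenced above come from the mixed-strategy equilibrium of the underlying 2x2 game, where each player randomizes so as to leave the opponent indifferent between their pure actions. The sketch below works that out for a hypothetical payoff matrix; the study's actual payoffs are not reproduced in this summary.

```python
from fractions import Fraction as F

# Hypothetical payoffs (defender, attacker). Rows: defender D(efend) or
# N(ot defend); columns: attacker A(ttack) or N(o attack). These numbers
# are illustrative only; the study's actual matrix is not given above.
pay = {("D", "A"): (F(5), F(-5)), ("D", "N"): (F(-1), F(0)),
       ("N", "A"): (F(-5), F(5)), ("N", "N"): (F(0), F(0))}

def mixed_nash(pay):
    """Return (q, r): the Nash attack and defend probabilities, from the
    indifference conditions of a 2x2 game."""
    d = {k: v[0] for k, v in pay.items()}  # defender payoffs
    a = {k: v[1] for k, v in pay.items()}  # attacker payoffs
    # Attacker attacks with probability q that makes the defender
    # indifferent between defending and not defending:
    q = (d["N", "N"] - d["D", "N"]) / (
        (d["D", "A"] - d["D", "N"]) - (d["N", "A"] - d["N", "N"]))
    # Defender defends with probability r that makes the attacker
    # indifferent between attacking and not attacking:
    r = (a["N", "N"] - a["N", "A"]) / (
        (a["D", "A"] - a["N", "A"]) - (a["D", "N"] - a["N", "N"]))
    return q, r

print(mixed_nash(pay))  # -> (Fraction(1, 11), Fraction(1, 2))
```
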
Conference Paper
Cyber-attacks, i.e., the disruption of the normal functioning of computers and the loss of information, are becoming widespread. Cybersecurity may be studied as a non-cooperative game, as described by behavioral game theory. However, current game-theoretic approaches have based their conclusions on Nash equilibria while disregarding the role of information availability among hackers and analysts. In this study, we investigated how information availability affected the behavior of analysts and hackers in 2×2 security games. In an experiment involving security games, the interdependence information available to hackers and analysts was analyzed in two between-subjects conditions: “Info” and “No-Info”. In the “Info” condition, both players had complete information about each other's actions and payoffs, while this information was missing in the “No-Info” condition. Results showed that the presence of information caused analysts and hackers to increase their proportions of defend and attack actions, respectively. We highlight the relevance of our results to cyber-attacks in the real world.
Book
Game theory, the formalized study of strategy, began in the 1940s by asking how emotionless geniuses should play games, but ignored until recently how average people with emotions and limited foresight actually play games. This book marks the first substantial and authoritative effort to close this gap. Colin Camerer, one of the field's leading figures, uses psychological principles and hundreds of experiments to develop mathematical theories of reciprocity, limited strategizing, and learning, which help predict what real people and companies do in strategic situations. Unifying a wealth of information from ongoing studies in strategic behavior, he takes the experimental science of behavioral economics a major step forward. He does so in lucid, friendly prose. Behavioral game theory has three ingredients that come clearly into focus in this book: mathematical theories of how moral obligation and vengeance affect the way people bargain and trust each other; a theory of how limits in the brain constrain the number of steps of "I think he thinks . . ." reasoning people naturally do; and a theory of how people learn from experience to make better strategic decisions. Strategic interactions that can be explained by behavioral game theory include bargaining, games of bluffing as in sports and poker, strikes, how conventions help coordinate a joint activity, price competition and patent races, and building up reputations for trustworthiness or ruthlessness in business or life.
Article
We provide a tutorial on the basic attributes of computational cognitive models: models that are formulated as a set of mathematical equations or as a computer simulation. We first show how models can generate complex behavior and novel insights from very simple underlying assumptions about human cognition. We survey the different classes of models, from description to explanation, and present examples of each class. We then illustrate the reasons why computational models are preferable to purely verbal means of theorizing. For example, we show that computational models help theoreticians overcome the limitations of human cognition, thereby enabling us to create coherent and plausible accounts of how we think or remember and to guard against subtle theoretical errors. Models can also measure latent constructs and link them to individual differences, which would escape detection if only the raw data were considered. We conclude by reviewing some open challenges.
Article
We analyze the dynamics of repeated interaction of two players in the Prisoner's Dilemma (PD) under various levels of interdependency information and propose an instance-based learning cognitive model (IBL-PD) to explain how cooperation emerges over time. Six hypotheses are tested regarding how a player accounts for an opponent's outcomes: the selfish hypothesis suggests ignoring information about the opponent and utilizing only the player's own outcomes; the extreme fairness hypothesis weighs the player's own and the opponent's outcomes equally; the moderate fairness hypothesis weighs the opponent's outcomes less than the player's own outcomes to various extents; the linear increasing hypothesis increasingly weighs the opponent's outcomes at a constant rate with repeated interactions; the hyperbolic discounting hypothesis increasingly and nonlinearly weighs the opponent's outcomes over time; and the dynamic expectations hypothesis dynamically adjusts the weight a player gives to the opponent's outcomes, according to the gap between the expected and the actual outcomes in each interaction. When players lack explicit feedback about their opponent's choices and outcomes, results are consistent with the selfish hypothesis; however, when this information is made explicit, the best predictions result from the dynamic expectations hypothesis.
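
The competing hypotheses above differ only in the weight w placed on the opponent's outcome when a player evaluates an interaction. Below is a sketch of the dynamic expectations variant, with an assumed linear update rule; the paper's exact rule may differ.

```python
def weighted_utility(own, other, w):
    """Utility an IBL-PD player stores for one interaction: own outcome
    plus weight w on the opponent's (w=0: selfish; w=1: extreme fairness)."""
    return own + w * other

def update_weight(w, expected, actual, rate=0.1):
    """Dynamic-expectations hypothesis: move w in proportion to the gap
    between the expected and actual outcome, clipped to [0, 1]. The exact
    update rule in the paper may differ; this form is illustrative."""
    return min(1.0, max(0.0, w + rate * (actual - expected)))

# Illustrative run over three interactions (expected, actual, own, other).
w = 0.0
for expected, actual, own, other in [(1, 3, 3, 3), (3, 0, 0, 5), (1, 3, 3, 3)]:
    w = update_weight(w, expected, actual)
    print(round(w, 2), weighted_utility(own, other, w))
```
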
Article
In social interactions, decision makers are often unaware of their interdependence with others, precluding the realization of shared long-term benefits. In an experiment, pairs of participants played an Iterated Prisoner's Dilemma under various conditions involving differing levels of interdependence information. Each pair was assigned to one of four conditions: “No-Info” players saw their own actions and outcomes, but were not told that they interacted with another person; “Min-Info” players knew they interacted with another person but still without seeing the other's actions or outcomes; “Mid-Info” players discovered the other's actions and outcomes as they were revealed over time; and “Max-Info” players were also shown a complete payoff matrix mapping actions to outcomes from the outset and throughout the game. With higher levels of interdependence information, we found increased individual cooperation and mutual cooperation, driven by increased reciprocating cooperation (in response to a counterpart's cooperation). Furthermore, joint performance and satisfaction were higher for pairs with more information. We discuss how awareness of interdependence may encourage cooperative behavior in real-world interactions.