Article

Developing decision support for cybersecurity threat and incident managers


Abstract

Cybersecurity threat and incident managers in large organizations, especially in the financial sector, are increasingly confronted with a growing volume and complexity of threats and incidents. At the same time, these managers have to deal with many internal processes and criteria, as well as requirements from external parties, such as regulators, that pose an additional challenge to handling threats and incidents. Little research has been carried out to understand to what extent decision support can aid these professionals in managing threats and incidents. The purpose of this research was to develop decision support for cybersecurity threat and incident managers in the financial sector. To this end, we carried out a cognitive task analysis and the first two phases of a cognitive work analysis, based on two rounds of in-depth interviews with ten professionals from three financial institutions. Our results show that decision support should address the problem of balancing the bigger picture with details: being able to keep the broader operational context in mind while adequately investigating, containing, and remediating a cyberattack. In close consultation with the three financial institutions involved, we developed a critical-thinking memory aid that follows typical incident response process steps but adds big-picture elements and critical thinking steps. This should make cybersecurity threat and incident managers more aware of the broader operational implications of threats and incidents while keeping a critical mindset. Although a summative evaluation was beyond the scope of the present research, we conducted iterative formative evaluations of the memory aid that show its potential.


... Classifying how an information system performs in preparation for, during, and after a cyber incident is essential. Despite the prevalence of prevention-focused security controls in the literature, the dynamic cybersecurity landscape has become increasingly unpredictable (van der Kleij et al. 2022). The Verizon (2023) annual data breach report notes that ransomware accounted for 24% of breaches, up from less than 5% in 2020. ...
... Incident response requires a twofold approach. First, when dealing with known threats, organizations prioritize preventative security controls; however, not at the cost of ignoring the gradual implementation of detective and corrective measures (van der Kleij et al. 2022; Salvi et al. 2022). As prevention strategies mature, organizations must shift their attention to cater for more novel threats. ...
... Ramezani and Camarinha-Matos (2020) assert that organizations must therefore regard cyber hostility as the norm instead of the exception, given that highly improbable events can cause as much damage as probable events, if not more. These are also referred to as black swan events: those that cause the victims to completely reevaluate their levels of security (Baskerville et al. 2014; van der Kleij et al. 2022; Munoz et al. 2022). There is no contention that uncertainty is uncomfortable and that efforts should be made to reduce it. ...
Conference Paper
Full-text available
To date, research on incident response has predominantly focused on system resilience in terms of recovery mechanisms, falling short of discussing how systems improve post-disruption. This work offers a novel conceptual investigation into systems that improve because of disruption experienced within the cybersecurity context – antifragile systems. Chaos engineering represents a prominent method for resilience engineering, accomplished by exposing systems to short-term stressors in a controlled environment to establish long-term sustainability. This paper contributes to the cybersecurity incident response literature by reframing how resilient systems are defined. The main contribution is the Resilient Systems Model, defining five system classifications: fragile, reliable, robust, recovery, and antifragile. This is necessary as these systems are oftentimes incorrectly defined and applied in an inconsistent manner. Organizational systems must strive to be as close to the top of the model as possible – fostering anticipatory practices, system improvement, and controlled experiments that stimulate learning.
... To do so, it is fundamental that the auditor is an expert on the used reference process. Examples of compliance requirements an auditor must check are ISO/TC 9001 (2014), van der Kleij et al. (2022): ...
... Manual approaches raise several issues (e.g., auditors' bias) during the evaluation. Indeed, van der Kleij et al. (2022) performed an expert-driven evaluation to determine the decisional tasks of an auditor during the IM process. They found a lack of attention to detail, with only macro-activities being checked. ...
... We report in Table 3 the comparison of the proposed work with the current state-of-the-art of IM compliance process assessment. It shows that most of the literature focuses on qualitative assessments by contributing frameworks (Mouratidis et al., 2023; He et al., 2022), rule-based approaches (Ly et al., 2012; Ghanem et al., 2023), and user studies (Shinde and Kulkarni, 2021; van der Kleij et al., 2022). This hinders the possibility of measuring compliance through suitable quantitative metrics (ND labels in Table 3 stand for Not Defined). ...
... One study promoted the use of real-time analytics to drive automated response decisions against known threats [51]. Another study argued for the automation of playbooks, which catered to routine threats [52]. ...
... The automation of well-defined, low-risk, predictive, and repetitive manual tasks was a generally agreed-upon use case [2,32,52,58]. Ambiguous processes often require increased human oversight [45,46]. Looking at this sub-theme differently, Ref. [22] developed APTEmu to automate well-defined attacks in a simulation environment to assess the sufficiency of mitigation procedures. ...
... In eliciting feedback from analysts on the adequacy of their automated IOC generation tool, one study acknowledged that automation bias exists, in that the authors did not inform participants that the IOCs were automatically generated [28]. Another study suggested that the speed of automation introduces bias whereby analysts value quick response time over seeking confirmatory/contradictory information [52]. Ref. [35] asserted that over-explanation may put analysts in a position where they deem automation superior and fail to consider its correctness, and Ref. [55] comments that bias could originate from misleading explanations. ...
Article
Full-text available
The volume and complexity of alerts that security operation center (SOC) analysts must manage necessitate automation. Increased automation in SOCs amplifies the risk of automation bias and complacency whereby security analysts become over-reliant on automation, failing to seek confirmatory or contradictory information. To identify automation characteristics that assist in the mitigation of automation bias and complacency, we investigated the current and proposed application areas of automation in SOCs and discussed its implications for security analysts. A scoping review of 599 articles from four databases was conducted. The final 48 articles were reviewed by two researchers for quality control and were imported into NVivo14. Thematic analysis was performed, and the use of automation throughout the incident response lifecycle was recognized, predominantly in the detection and response phases. Artificial intelligence and machine learning solutions are increasingly prominent in SOCs, yet support for the human-in-the-loop component is evident. The research culminates by contributing the SOC Automation Implementation Guidelines (SAIG), comprising functional and non-functional requirements for SOC automation tools that, if implemented, permit a mutually beneficial relationship between security analysts and intelligent machines. This is of practical value to human automation researchers and SOCs striving to optimize processes. Theoretically, a continued understanding of automation bias and its components is achieved.
... Traditional automation is largely signature- and anomaly-based, and threats are becoming more complex in their execution. Ref. [35] remarks that complexity (as seen in advanced persistent threats) results in more indicators of compromise that security analysts and their tools must now account for, leading to an additional administrative burden. Attributing ineffective detection to poor rule configuration and tools repeatedly issuing unsolicited alerts was also reported [36]. ...
... Studies suggested that alert reports lacked context and were inconsistently formatted. One study indicated that security analysts face the cognitive task of piecing together the fragments of intrusion information while still considering the broader organizational impact [35]. Further literature reports that a lack of contextual information elongates response initiatives as analysts need to source information not presented to help inform their response decisions [49,50]. ...
... Various automated solutions were present in the incident response phase, demonstrating automation's technical advancements. Examples include decision support systems for known threats (rule-based systems) [35,59] and decision support systems for unknown threats (adaptable intrusion response systems) [60,61]. Concerning decision support systems, ref. [62] advocate for semi-autonomous systems that offer either recommendations or implementations depending on the threat faced. ...
Article
Full-text available
The continuous integration of automated tools into security operation centers (SOCs) increases the volume of alerts for security analysts. This amplifies the risk of automation bias and complacency to the point that security analysts have reported missing, ignoring, and not acting upon critical alerts. Enhancing the SOC environment has predominantly been researched from a technical standpoint, failing to consider the socio-technical elements adequately. However, our research fills this gap and provides practical insights for optimizing processes in SOCs. The synergy between security analysts and automation can potentially augment threat detection and response capabilities, ensuring a more robust defense if effective human-automation collaboration is established. A scoping review of 599 articles from four databases led to a final selection of 49 articles. Thematic analysis resulted in 609 coding references generated across four main themes: SOC automation challenges, automation application areas, implications on analysts, and human factor sentiment. Our findings emphasize the extent to which automation can be implemented across the incident response lifecycle. The SOC Automation Matrix represents our primary contribution to achieving a mutually beneficial relationship between analyst and machine. This matrix describes the properties of four distinct human-automation combinations. This is of practical value to SOCs striving to optimize their processes, as our matrix mentions socio-technical system characteristics for automated tools.
... Although IRM has great significance within most companies (Ahmad et al., 2012; Ruefle et al., 2014), it is not always well developed. Often, IRM is seen as a cost center because it creates resourcing constraints (Ahmad et al., 2021), and management awareness is missing (van der Kleij et al., 2022). However, IRM can be considered crucial for organizations, as incidents can escalate into emergencies and lead to reputational or financial losses besides disrupting business continuity. ...
... As effective IRM is a complex undertaking and requires substantial planning (Cichonski et al., 2012; Grispos et al., 2015), different standards, guidelines, and frameworks exist (e.g., de et al., 2013; European Parliament, European Council, 2016; International Organization for Standardization, 2016; International Organization for Standardization (ISO) 2019; von, 1999; WA and WD, 2009). These regulations often include high-level approaches that fail in application and are not evaluated (van der Kleij et al., 2022). Furthermore, the socio-organizational perspectives of an IRM solution must be taken into account, as existing incident response literature often focuses on a technical view (Ahmad et al., 2012). ...
... Thus, we discussed the model's fidelity with real-world phenomena, completeness, and internal consistency (Sonnenberg and vom Brocke, 2012). We used an academic focus group consisting of 15 researchers in the field of information security to ensure the development of an MM that is easy to use and compact but complete in terms of content (Sonnenberg and vom Brocke, 2012; Wilkinson, 2004), resulting in suggestions regarding the target group and consistency of the model, and discussed free-standing capabilities (Tremblay et al., 2010). ...
Article
Although the ongoing digital transformation offers new opportunities for organizations, more emphasis on information security is needed due to the evolving cyber-threat landscape. Despite all preventive measures, security incidents cannot entirely be mitigated. Organizations must establish incident response management to treat inevitable incidents in a structured manner and under considerable time pressure. If not handled, incidents can result in reputational or financial losses and disrupt business continuity. Especially organizations that have not addressed incident response management extensively need to understand which capabilities are required to develop their incident response management. However, research still lacks a practice-grounded and socio-technical conceptualization of those capabilities and their development. For such challenges, maturity models have proven valuable in practice and research. This paper follows a design science research approach to develop an incident response management maturity model (IRM3) closely aligned with practice requirements under a socio-technical lens. Iteratively applying and evaluating the IRM3 with seven real-world organizations leverages its comprehensive view based on four focus areas and 29 capability dimensions to understand which capabilities organizations need to approach incident response management. Building on existing research, this work provides a comprehensive perspective on incident response management and its associated capabilities. For practitioners, especially in organizations with initial incident response maturity, the IRM3 offers descriptive value when used as a status quo assessment tool and prescriptive value by outlining capabilities for successful incident response management.
... Models [10]- [13] focused on the description of the first performance indicator. They are based on the mathematical apparatus of time series analysis. ...
... From expression (13), it can be seen that the numerator of the function dβ_lim(C)/dC is a quadratic function with respect to the parameter π(C). Therefore, if one of the roots of function (13) belongs to the interval [0, 1], then the extremum of the function β_lim(C) exists. Let us present this statement analytically: ...
... Since condition (11) was not satisfied for our RIS, the dependences β = f(π) were calculated by expression (13). Also relevant is the calculation for the RIS of the dependence β = f(1 − f₁), where f₁ is the probability of correctly identifying the negative impact. ...
Article
Full-text available
The manuscript presents a mathematical apparatus for modeling the process of operation of the information system in the conditions of aggressive cyberspace, for which the corresponding parameter is provided. The highlight is that the simulation is carried out in the parametric space of reliability indicators, functional safety indicators, and economic indicators. The generalizing parameter in the mathematical apparatus is the coefficient of efficiency of operation of the studied system. It considers the accumulated parameter of efficiency of functioning of the studied system, the accompanying risk of its operation, and the number of resources invested in cybersecurity measures at its design stage. The connection of this coefficient with the probability of the information system transition to a non-functional state due to the realization of negative impact despite the resistance to cyber immune reaction is analytically described. The mathematical apparatus is developed to consider the errors of the first and second kind in identifying the negative impact on the information system. The search for the extreme value of the coefficient of the information system's efficiency from the number of resources invested in its cybersecurity measures is described considering the characteristic parameters of cyberspace in which the studied system is operated. The functionality of the created mathematical apparatus is demonstrated in the example of a study of a real information system of the Situational Center of the Department of Information Technologies of the Vinnytsia City Council (Ukraine). The results obtained showed that the amount of funds invested in cybersecurity at the design stage of the studied information system is sufficient for its operation in cyberspace, typical for the region. At the same time, the growth dynamics of the accumulated operational efficiency characteristics outpaces the growth dynamics of the characteristics of the risk of studied information system operation. The simulation results coincide entirely with the empirical experience of the studied system operation, which allows us to recognize the created mathematical apparatus as adequate. The simulation showed that when the value of the probability of incorrect identification of the negative impact level intersects the value of ≈0.007, the studied system operational efficiency coefficient begins to decline rapidly. It indicates that the amount of resources invested in cybersecurity of the studied information system is exhausted.
... Zhong et al., 2017). Analysts engage in a cognitively demanding analytical process that includes gathering pertinent data, identifying patterns of incidents, synthesizing information from various sources, and correlating different pieces of data to achieve cyber defense situation awareness (cyber SA) (Erbacher et al., 2010; Kleij et al., 2022; Zhong et al., 2017). The cyber SA enables analysts to gain a comprehensive understanding and project future attack behaviors based on the evolving situation. ...
... Despite the critical role of analysts in CSIR, minimal research has focused on understanding the cognitive mechanisms underpinning their decision-making (Kleij et al., 2022). The majority of the extant research has been on the system side of CSIR with a focus on incident detection performance (Yayla et al., 2022). ...
Article
Full-text available
Cybersecurity incident response (CSIR) is paramount for organizational resilience. At its core, analysts undertake a cognitively demanding process of data analytics to correlate data points, identify patterns, and synthesize diverse information. Recently, artificial intelligence (AI) based solutions have been utilized to streamline CSIR workflows, notably with an increasing focus on explainable AI (XAI) to ensure transparency. However, XAI also poses challenges, requiring analysts to allocate additional time to process explanations. This study addresses the gap in understanding how AI and its explanations can be seamlessly integrated into CSIR workflows. Employing a multi-method approach, we first interviewed analysts to identify their cognitive challenges, interactions with AI, and expectations from XAI. In a subsequent case study, we investigated the evolution of analysts' needs for AI explanations throughout the investigative process. Our findings yield several key propositions for addressing the cognitive impacts of XAI in CSIR, aiming to enhance cognitive fit to reduce analysts' cognitive load during investigations.
... This is mainly due to the fact that the incident workload can be very high throughout the lifecycle of an information system, especially in typical cases such as the release of a new version of an application or an operating system upgrade. Thus, there is a strong need to take into account the specific efficiency and effectiveness needs of these new incident management support systems (van der Kleij et al., 2021). ...
... But for us, in this work, the most relevant problem faced by organizations is agility in managing and responding to security incidents (Tam et al., 2021). This agility translates into the need to respond to these incidents in the shortest possible time (van der Kleij et al., 2021; He et al., 2022). But this problem is becoming increasingly difficult to address, due to the growing number of incidents and their interconnection. ...
Article
Full-text available
Information Security Management Systems (ISMS) are global, risk-driven processes that allow companies to develop their cybersecurity strategy by defining security policies, valuable assets, controls, and technologies for protecting their systems and information from threats and vulnerabilities. Despite the implementation of such management infrastructures, incidents or security breaches happen. Each incident has an associated severity level and a set of mitigation controls, so in order to restore the ISMS, the appropriate set of controls to mitigate the damage must be selected. The time in which the ISMS is restored is a critical aspect. In this sense, classic solutions are efficient in resolving scenarios with a moderate number of incidents in a reasonable time, but the response time increases exponentially as the number of incidents increases. This makes classical solutions unsuitable for real scenarios in which a large number of incidents are handled, and even less appropriate for scenarios in which security management is offered as a service to several companies. This paper proposes a solution to the incident response problem that acts in a minimal amount of time for real scenarios in which a large number of incidents are handled. It applies quantum computing, a novel approach that is being successfully applied to real problems, which allows solutions to be obtained in constant time regardless of the number of incidents handled. To validate the applicability and efficiency of our proposal, it has been applied to real cases using our framework (MARISMA).
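To make the control-selection problem concrete, the sketch below expresses incident-to-control selection as a penalized binary optimization of the kind that annealing-style solvers (classical or quantum) accept. The coverage matrix, time costs, and penalty weight are purely illustrative assumptions, not the MARISMA formulation, and a brute-force search stands in for the quantum solver on this toy instance.

```python
import itertools
import numpy as np

# Hypothetical data: 3 incidents, 4 candidate mitigation controls.
# covers[i][j] = 1 if control j mitigates incident i (illustrative values).
covers = np.array([[1, 0, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 0, 1]])
time_cost = np.array([4.0, 2.0, 5.0, 1.0])  # hypothetical restoration times
P = 100.0  # penalty weight enforcing that every incident is covered

def energy(x):
    """Objective for a candidate control selection x (binary vector):
    total restoration time plus a large penalty per unmitigated incident."""
    uncovered = np.maximum(1 - covers @ x, 0)
    return time_cost @ x + P * uncovered.sum()

# Classical brute force stands in for the (quantum) solver on this toy instance.
best = min(itertools.product([0, 1], repeat=4), key=lambda x: energy(np.array(x)))
print("selected controls:", best)
```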
... Additionally, security analysts require transparent explanations for why certain transactions are flagged as fraudulent [28,29]. ...
Article
Full-text available
Blockchain networks have become a cornerstone of decentralized finance and digital asset management, yet they remain susceptible to fraudulent activities, money laundering, and illicit financial transactions. Traditional anomaly detection methods, including rule-based systems and supervised machine learning models, often struggle to generalize across evolving blockchain transaction patterns due to their reliance on static heuristics and manually engineered features. Graph-based learning techniques offer a more robust approach by leveraging the inherent structure of blockchain transactions, where wallets and transactions form a dynamic graph. This study proposes a novel Spatial-Temporal Graph Neural Network (STGNN)-based anomaly detection framework for blockchain transactions. By modeling transaction flows as evolving graphs, the proposed system captures both spatial dependencies between wallets and temporal patterns in transaction sequences. The framework employs Graph Convolutional Networks (GCN) or Graph Attention Networks (GAT) to extract spatial representations, while Gated Recurrent Units (GRU) or Temporal Convolutional Networks (TCN) model the time-dependent evolution of transaction behaviors. The fusion of these spatial-temporal features enables the detection of anomalous transactions that deviate from expected network behaviors. Experimental evaluations on real-world blockchain datasets demonstrate that the STGNN-based model achieves higher detection accuracy, lower false positive rates, and better adaptability than traditional fraud detection techniques. The study further explores the system's scalability and generalization across different blockchain networks, revealing its potential for real-time monitoring of illicit financial activities. These findings highlight the effectiveness of graph-based deep learning models in strengthening blockchain security and provide a foundation for future research in decentralized fraud detection, anti-money laundering (AML) compliance, and intelligent financial surveillance.
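As a rough illustration of the spatial-temporal idea described above, the sketch below combines one graph-convolution layer (spatial) with a GRU (temporal) to score wallets in a small transaction graph. The layer sizes, single-layer depth, and sigmoid anomaly head are assumptions made for brevity, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

def normalize_adj(adj):
    # Symmetric normalization: D^-1/2 (A + I) D^-1/2
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
    def forward(self, x, a_norm):
        return torch.relu(a_norm @ self.linear(x))

class SimpleSTGNN(nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.gcn = GCNLayer(in_dim, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # one anomaly score per wallet
    def forward(self, features_seq, a_norm):
        # features_seq: (T, N, in_dim) wallet features at each time step
        spatial = torch.stack([self.gcn(x, a_norm) for x in features_seq])  # (T, N, H)
        spatial = spatial.permute(1, 0, 2)     # (N, T, H): per-wallet temporal sequence
        _, h_last = self.gru(spatial)          # h_last: (1, N, H)
        return torch.sigmoid(self.head(h_last.squeeze(0)))  # (N, 1) anomaly probability

# Toy usage: 6 wallets, 4 time steps, 8 input features per wallet.
adj = (torch.rand(6, 6) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()            # symmetrize the toy transaction graph
scores = SimpleSTGNN(8, 16)(torch.rand(4, 6, 8), normalize_adj(adj))
```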
... The potential of adaptive reinforcement learning to enhance cybersecurity incident response lies in its ability to autonomously refine response strategies, reduce reliance on human analysts, and improve the speed and accuracy of threat mitigation [26]. ...
Article
Full-text available
Cybersecurity threats have evolved dramatically over the past few decades, requiring organizations to continuously improve their security posture. Traditional cybersecurity incident response (CIR) frameworks, which rely on predefined rules and heuristics, have shown significant limitations in addressing sophisticated and rapidly evolving cyberattacks. The increasing complexity of threat landscapes necessitates adaptive security mechanisms capable of learning and evolving in real time. This paper explores the potential of Adaptive Reinforcement Learning (ARL) as a mechanism to enhance cybersecurity incident response strategies. Reinforcement learning (RL), a subset of machine learning, is well-suited for dynamic decision-making scenarios, where optimal strategies emerge through iterative learning. By integrating adaptive RL techniques into CIR, cybersecurity professionals can develop response strategies that continuously refine themselves based on observed threats, attack vectors, and system vulnerabilities. The study first examines conventional CIR approaches, discussing their constraints in modern cybersecurity environments. A comprehensive literature review explores the existing machine learning methodologies applied to cybersecurity and the emerging role of reinforcement learning in security applications. The methodology section presents the design and implementation of an ARL-driven incident response framework, detailing the algorithmic foundation, data sources, and training methodology. The effectiveness of the proposed approach is validated through extensive simulations across different cyberattack scenarios. Results highlight the superior performance of adaptive RL models in minimizing response time, improving threat mitigation rates, and reducing false positives when compared to traditional rule-based and supervised learning approaches. In addition to analyzing the results, the paper discusses practical challenges in deploying RL-based cybersecurity frameworks, including computational overhead, adversarial learning risks, and the need for high-quality training data. Future research directions are explored, emphasizing the importance of integrating federated learning techniques, adversarial resilience mechanisms, and multi-agent reinforcement learning systems to further enhance cybersecurity defenses. This study contributes to the growing field of AI-driven cybersecurity by demonstrating how adaptive reinforcement learning can optimize decision-making processes in real-time incident response, ultimately paving the way for more intelligent and resilient cyber defense strategies.
... Additionally, security analysts require transparent explanations for why certain transactions are flagged as fraudulent [28,29]. ...
Article
Full-text available
Financial fraud risk mitigation is a growing challenge as fraudsters continuously develop new tactics to evade detection. Traditional fraud prevention methods, including rule-based systems and supervised machine learning models, struggle to adapt to evolving fraud patterns, leading to high false positives and an increased risk of undetected fraudulent transactions. Recent advancements in graph neural networks (GNNs) have enabled fraud detection models to capture complex transactional relationships, allowing for the identification of hidden fraud networks. However, static GNN models remain limited in their ability to adapt to new fraud strategies in real-time. This study proposes a deep reinforcement learning (DRL)-based fraud risk mitigation framework, integrating GNNs with adaptive decision-making policies. The GNN component models financial transactions as a heterogeneous graph, capturing multi-hop fraud pathways and high-risk account interactions. The DRL agent continuously optimizes fraud classification thresholds, ensuring that fraud detection strategies remain adaptive to emerging fraud tactics. The model is evaluated on large-scale financial transaction datasets, demonstrating higher fraud detection accuracy, lower false positive rates, and improved real-time adaptability compared to traditional fraud detection models. The results confirm that graph-based learning combined with DRL provides a scalable, intelligent solution for financial fraud risk mitigation.
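A minimal sketch of the threshold-adaptation idea follows, using tabular Q-learning as a simplified stand-in for the paper's deep RL agent. The simulated fraud scores, discretized thresholds, and batch-F1 reward are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
thresholds = np.linspace(0.1, 0.9, 9)          # discretized decision thresholds (states)
actions = [-1, 0, 1]                            # lower, keep, or raise the threshold
q = np.zeros((len(thresholds), len(actions)))   # Q-table
alpha, gamma, eps = 0.1, 0.9, 0.2

def batch_reward(thr):
    """Hypothetical environment: reward = F1 of thresholding simulated fraud scores."""
    scores = rng.random(500)
    labels = (scores + rng.normal(0, 0.2, 500)) > 0.8   # noisy ground truth
    preds = scores > thr
    tp = np.sum(preds & labels); fp = np.sum(preds & ~labels); fn = np.sum(~preds & labels)
    return 2 * tp / (2 * tp + fp + fn + 1e-9)

state = 4
for step in range(2000):
    a = rng.integers(3) if rng.random() < eps else int(np.argmax(q[state]))
    nxt = int(np.clip(state + actions[a], 0, len(thresholds) - 1))
    r = batch_reward(thresholds[nxt])
    q[state, a] += alpha * (r + gamma * q[nxt].max() - q[state, a])  # Q-learning update
    state = nxt

print("preferred threshold:", thresholds[int(np.argmax(q.max(axis=1)))])
```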
... Current research has focused primarily on RL for intrusion detection and policy optimization, but RL-driven incident response remains underexplored [24]. This study seeks to address this gap by developing an RL-based cybersecurity response framework capable of dynamically adapting to real-time attack scenarios, optimizing threat mitigation actions, and reducing false positives [25][26][27]. By integrating DQN and PPO, this research aims to demonstrate that RL can surpass static ML and rule-based security frameworks in terms of efficiency, adaptability, and decision-making accuracy. ...
Article
Cyber threats are evolving in complexity and frequency, rendering traditional cybersecurity response mechanisms insufficient. Conventional rule-based and supervised machine learning (ML) models struggle to adapt to novel attack patterns, leaving security systems vulnerable to emerging threats. Reinforcement learning (RL) offers a promising approach to adaptive cybersecurity by enabling systems to learn optimal defense strategies through continuous interaction with adversarial environments. This study explores an RL-based cybersecurity response framework that dynamically adjusts mitigation strategies based on real-time threat intelligence. The proposed model leverages deep Q-networks (DQN) and proximal policy optimization (PPO) to enhance automated threat detection, response efficiency, and adaptability to evolving attack vectors. The research evaluates the performance of RL-driven security automation through simulated attack scenarios, including distributed denial-of-service (DDoS) attacks, ransomware propagation, and zero-day exploits. The findings demonstrate that the RL model significantly improves incident response time, reduces false positives, and enhances overall threat mitigation success rates compared to traditional security frameworks. Additionally, the study identifies key challenges associated with RL-based cybersecurity, including computational overhead, adversarial vulnerabilities, and model interpretability. The results suggest that RL-driven security frameworks can serve as a viable alternative to static security models, offering organizations a scalable, self-learning defense mechanism against advanced cyber threats.
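To ground the DQN side of such a framework, here is a compact sketch showing how a Q-network could learn to choose mitigation actions from replayed experience. The two-variable incident environment, the three response actions, and the omission of a target network and of the PPO component are simplifying assumptions; this is illustrative only, not the evaluated model.

```python
import random
from collections import deque
import torch
import torch.nn as nn

class ToyIncidentEnv:
    """Hypothetical environment: state = [attack_intensity, hosts_infected]."""
    def reset(self):
        self.state = torch.tensor([random.random(), 0.1])
        return self.state
    def step(self, action):
        intensity, infected = float(self.state[0]), float(self.state[1])
        if action == 1:      # isolate hosts: curbs spread at a small operational cost
            infected = max(0.0, infected - 0.2); reward = 0.4
        elif action == 2:    # block traffic: lowers attack intensity, higher cost
            intensity = max(0.0, intensity - 0.3); reward = 0.3
        else:                # monitor only: infection grows with intensity
            infected = min(1.0, infected + 0.2 * intensity); reward = -infected
        self.state = torch.tensor([intensity, infected])
        done = infected >= 1.0 or intensity <= 0.0
        return self.state, reward, done

qnet = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
buffer, gamma, eps = deque(maxlen=5000), 0.95, 0.1
env = ToyIncidentEnv()

for episode in range(200):
    s, done, steps = env.reset(), False, 0
    while not done and steps < 20:
        a = random.randrange(3) if random.random() < eps else int(qnet(s).argmax())
        s2, r, done = env.step(a)
        buffer.append((s, a, r, s2, done))
        s, steps = s2, steps + 1
        if len(buffer) >= 64:                      # one replayed gradient step per transition
            batch = random.sample(buffer, 64)
            bs = torch.stack([b[0] for b in batch])
            ba = torch.tensor([b[1] for b in batch])
            br = torch.tensor([b[2] for b in batch])
            bs2 = torch.stack([b[3] for b in batch])
            bd = torch.tensor([float(b[4]) for b in batch])
            with torch.no_grad():
                target = br + gamma * qnet(bs2).max(dim=1).values * (1 - bd)
            pred = qnet(bs).gather(1, ba.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(pred, target)
            opt.zero_grad(); loss.backward(); opt.step()
```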
... We identified a proliferation of AI incident trackers (e.g., Hutiri et al., 2024; McGregor, 2021; Rodrigues et al., 2023; Shrishak, 2023). These AI incident trackers are informed by earlier incident reporting strategies to address system failures and risks in aviation (NASA Aviation Safety Reporting System, n.d.; Reynard, 1986), healthcare (Kohn et al., 2000; Macrae, 2016), software development (Booth et al., 2013) and cybersecurity (van der Kleij et al., 2022). However, current AI trackers 'rely heavily on news coverage of AI incidents' (Turri & Dzombak, 2023). ...
Article
Full-text available
Introduction. From the point of view of public policy, artificial intelligence (AI) is an emerging technology with as-yet-unknown risks. AI incident trackers collect harms and risks to inform policymaking. We investigate how labour is represented in two popular AI incident trackers. Our goal is to understand how well the knowledge organization of these incident trackers reveals labour-related risks for AI in the workplace, with a focus on how AI is impacting and expected to impact workers within the United States. Data and Analysis. We search for and analyse labour-related incidents in two AI incident trackers, the Organization for Economic Cooperation and Development's AI incidents monitor (OECD AIM) and the AI incident database (AIID) from the responsible AI collaborative. Results. The OECD AIM database categorised workers as stakeholders for 600 incidents with 6,744 associated news reports. From the AIID, we constructed a set of 57 labour-related incidents. Discussion and Conclusions. The AI incident trackers do not facilitate ready retrieval of labour-related incidents: they used limited existing labour-related terminology. AI incident trackers' reliance on news reports risks overrepresenting some sectors and depends on the news reports' framing of the evidence.
... Cyber threats continue to change frequently, and a single training session is rarely sufficient to keep employees updated on current threats and the measures that need to be taken to counter them (van der Kleij et al.). It has been established that the nature of threats changes over time, thus requiring professional development to provide continuous updates about such threats [78,79]. Continual information on new malware threats, phishing, and other vulnerabilities keeps employees knowledgeable about such related threats. ...
Article
Full-text available
In the modern world of networks, managing multiple threats in large-scale network environments is an essential goal for organizations seeking to protect their systems and essential information. Measures to counter threats and sources of vulnerability, and to reduce the likelihood of cyber threats and cyber incidents, are the primary focus of this paper. Vulnerability management, network segmentation, and access controls are the first pillars of a sound cybersecurity program. Vulnerability management is a repeated process of discovering, analyzing, documenting, addressing, and checking general and individual protection flaws before an attacker can exploit them. Dividing the network is effective in preventing threats from affecting certain highly vulnerable areas of the network, as well as improving the levels of security control. Adopting robust identity and access management solutions, including MFA and RBAC, adds another layer of protection to the network by limiting access to networks and resources to only authorized users. Furthermore, the relevance of repeated vulnerability assessments and penetration testing is underlined, as these activities help update the information on the organization's security and identify insecure positions. However, factors like the increasing pace of vulnerabilities and the necessity to adapt instantly to novel threats show that protection is not easy. Cyber threats are still on the rise, and as a result, the adoption of advanced technologies like AI in security frameworks boosts detection rates and reaction times. The paper concludes by emphasizing the need for constant monitoring and implementing preventive measures to safeguard against emerging and evolving cyber threats.
... Even though applying the CWA can be challenging, it has been applied several times over recent decades to design information systems approaches in a set of diverse use cases, e.g., enterprise social network technologies [58], safety in passenger transportation and vehicle occupancy optimisation [59], railway safety [60], mining operations [61], para-sports [62], sustainable emergency system development [63], cyber security [64], as well as in military and aerospace use cases [65]. ...
Preprint
Full-text available
This paper corresponds to the previous preprint version (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4851104) submitted to SSRN, which APEN allowed us to publish as a public preprint as well. The final published version can be found at: https://authors.elsevier.com/a/1jyUL15eifC7%7EL I'm happy to answer any questions. I hope it is useful, and I would love to get feedback on our work. We are all open to collaboration and further expanding our research.
... Resilience and Incident Response: According to [168] and [169], there is a need for research that focuses on improving organizational resilience [170] to cyber incidents, including strategies for rapid incident response, effective recovery, and minimizing the impact of attacks. This includes studying incident response processes, incident management frameworks, and approaches for managing complex and coordinated attacks [171]-[173]. ...
Article
Full-text available
Organizational information security is a critical concern in today's interconnected and data-driven world. With the increasing frequency and sophistication of cyber threats, organizations face significant risks to the confidentiality, integrity, and availability of their sensitive information. This paper provides an overview of the key aspects and challenges related to organizational information security. It highlights the importance of implementing robust security measures, such as firewalls, intrusion detection systems, encryption technologies, and secure coding practices, to protect against external threats. It also demonstrates the need for continuous monitoring, threat intelligence sharing, and incident response capabilities to detect and respond to security incidents effectively. This survey shows the importance of user awareness, training, and adherence to security policies and procedures. In addition, the significance of establishing a security-centric culture within organizations to mitigate the risk of insider threats and promote a strong security posture is discussed. The evolving threat landscape, including challenges associated with advanced persistent threats, zero-day vulnerabilities, and the security of emerging technologies such as IoT and AI, is highlighted, together with the need for ongoing research and innovation to address these challenges and enhance the effectiveness of preventive measures.
Article
Full-text available
As cyber threats become increasingly sophisticated, human factors remain one of the most exploited vulnerabilities in security breaches, particularly in the context of Advanced Persistent Threats (APTs). Traditional cybersecurity approaches focus on technological defenses, yet they often overlook the cognitive biases, social engineering tactics, and decision-making errors that adversaries exploit. This review explores the integration of behavioral science with cyber threat intelligence (CTI) as a strategic approach to counter APTs and mitigate human-enabled security breaches. By examining cognitive vulnerabilities, psychological manipulation techniques, and behavior-based interventions, this study highlights the need for adaptive security frameworks that incorporate human-centric defenses. Additionally, the role of artificial intelligence and machine learning in enhancing behavior-based threat detection and response is discussed. The review further addresses challenges in integrating behavioral insights with CTI, ethical considerations, and emerging advancements in human-centric cybersecurity models. Ultimately, this paper advocates for a multidisciplinary approach that combines behavioral science and CTI to develop proactive, intelligence-driven security strategies capable of addressing the evolving cyber threat landscape.
Article
Full-text available
Cybercriminals target the healthcare sector because patient data can be traded illegally; attacks include ransomware and medical record theft. This article analyzes the internal factors that make healthcare firms vulnerable to cybercrime and examines the complex tactics attackers use to enter and profit from these organizations. It draws on healthcare cybersecurity case studies and current research, with the goal of helping readers understand the main challenges of fighting different cyberattacks. The methodology includes a thorough literature review and rigorous analysis of data from academic sources, industry publications, and cybersecurity incident databases. The findings show the diverse threat actors targeting the healthcare business, their techniques, and the vulnerabilities of healthcare institutions to cyber attacks. The report concludes that healthcare cybersecurity is crucial and that the guidelines presented are essential for it.
Article
Full-text available
Organizational cybersecurity relies heavily on security operation centers (SOCs) to protect businesses and institutions from emerging cyber threats. In recent years, the complexity and sophistication of cyber threats have increased, pushing SOCs to their limits. As a result, SOCs struggle to address the evolving threat landscape due to their reliance on isolation technologies and reactive strategies. However, advanced technologies, such as artificial intelligence (AI) and machine learning (ML), have the potential to revolutionize SOCs by enhancing threat identification and response capabilities, as well as predicting and preempting risks. To address these challenges and highlight the full potential of SOCs, this study provides a detailed overview through a comprehensive literature review that identifies gaps in existing research and examines the latest technologies used in the SOC environment, helping to address different operational and technical challenges and bring out their capabilities. Various methods, ranging from automated incident response and behavioral analytics to neural networks and deep learning, have been classified and compared. In addition, an in-depth reference architectural model, which serves as a blueprint for integrating AI and ML into SOCs, is introduced. The proposed model provides a structured framework for implementation and offers insights into different SOC components and their interactions. Moreover, this systematic review emphasizes the benefits of these technologies for enhancing security operations. Finally, a case study is presented to describe the function of ML- and AI-powered SOC components to achieve optimum security. This paper concludes by discussing additional challenges and future research directions that may help advance the cybersecurity sector and provide insights into improving SOCs.
Chapter
Human analysts working for threat intelligence leverage tools powered by artificial intelligence to routinely assemble actionable intelligence. Yet, threat intelligence sources and methods often have significant uncertainties and biases. In addition, data sharing might be limited for operational, strategic, or legal reasons. Experts are aware of these limitations but lack formal means to represent and quantify these uncertainties in their daily work. In this chapter, we enunciate the technical, legal, and societal challenges for building explainable AI for threat intelligence. We also discuss ideas for overcoming these challenges.
Article
The review investigates the pressing need for robust cybersecurity measures within the logistics and shipping sector, where the digital supply chain is vulnerable to a myriad of cyber threats. The paper delves into the specific challenges faced by logistics companies, including the interconnectedness of global supply chains, reliance on digital technologies for operations, and the high value of goods in transit. It explores the multifaceted nature of cyber risks, encompassing threats such as ransomware, phishing attacks, data breaches, and supply chain disruptions, which can have far-reaching consequences for business continuity and reputation. Through a detailed analysis, the study elucidates cybersecurity best practices tailored to the logistics and shipping industry, encompassing both technical solutions and organizational policies. These include implementing robust authentication and access controls, encrypting sensitive data in transit and at rest, establishing secure communication channels, and conducting regular vulnerability assessments and penetration testing. Furthermore, the paper emphasizes the importance of fostering a culture of cybersecurity awareness among employees through comprehensive training programs and incident response drills. It also discusses the role of regulatory compliance frameworks such as GDPR, CCPA, and industry-specific standards like ISO 27001 in guiding cybersecurity efforts and ensuring adherence to best practices. By providing actionable recommendations and insights garnered from real-world case studies, the study equips logistics and shipping companies with the knowledge and tools needed to bolster their cybersecurity defenses, safeguard critical assets, and maintain trust in the digital supply chain ecosystem.
Article
Full-text available
As the Internet of Things (IoT) becomes more integral across diverse sectors, including healthcare, energy provision and industrial automation, the exposure to cyber vulnerabilities and potential attacks increases accordingly. Facing these challenges, the essential function of an Information Security Management System (ISMS) in safeguarding vital information assets comes to the fore. Within this framework, risk management is key, tasked with the responsibility of adequately restoring the system in the event of a cybersecurity incident and evaluating potential response options. To achieve this, the ISMS must evaluate which response is best. The time to implement a course of action must be considered, as the period required to restore the ISMS is a crucial factor. However, in an environmentally conscious world, the sustainability dimension should also be considered in order to choose more sustainable responses. This paper marks a notable advancement in the fields of risk management and incident response, integrating security measures with the wider goals of sustainability and corporate responsibility. It introduces a strategy for handling cybersecurity incidents that considers both response time and sustainability. This approach provides the flexibility to prioritize either response time, sustainability, or a balanced mix of both, according to specific preferences, and subsequently identifies the most suitable actions to re-secure the system. Employing a quantum methodology, it guarantees reliable and consistent response times, independent of the incident volume. The practical application of this novel method through our framework, MARISMA, is demonstrated in real-world scenarios, underscoring its efficacy and significance in the contemporary landscape of risk management.
Article
Increasing threats to the confidentiality and integrity of information require careful consideration of the problem of its protection. This is confirmed by the constantly spreading reports of successful hacker attacks. Thus, the problem of securing information that has financial, competitive, military, or political value is extremely relevant. However, while increasing confidentiality, one should not forget its antipode: availability. An effective information security protection subsystem must ensure a rational balance between the values of these dependability attributes. Analytically, this concept of balance can be embodied in the task of optimizing the values of the characteristic parameters of such a subsystem. At the same time, the concept of efficiency should be extended to the mathematical apparatus itself: its complexity should ensure the adequacy of the description of the information protection process but not be so excessive that it cannot be applied. Based on these initial provisions, the article presents a method for operational optimization of the composition of the information security protection subsystem, taking into account the aggressiveness of the cyberspace in which the target information system is operated. The method is formalized in the paradigm of Markov chains, leading to the formulation of a classical optimization task classified as nonlinear and discrete. Considering the lack of a universal method for solving such mathematical programming tasks, the article adopts the method of sequential variants analysis for this purpose. The results of the experiments proved the adequacy and functionality of the proposed method.
Article
Full-text available
The very raison d’être of cyber threat intelligence (CTI) is to provide meaningful knowledge about cyber security threats. The exchange and collaborative generation of CTI by means of sharing platforms has proven to be an important aspect of practical application. It is evident that inaccurate, incomplete, or outdated threat intelligence is a major problem, as only high-quality CTI can be helpful to detect and defend against cyber attacks. Additionally, while the amount of available CTI is increasing, it is not warranted that quality remains unaffected. In conjunction with the increasing amount of available CTI, it is thus in the best interest of every stakeholder to be aware of the quality of a CTI artifact. This allows for informed decisions and permits detailed analyses. Our work makes a twofold contribution to the challenge of assessing threat intelligence quality. We first propose a series of relevant quality dimensions and configure metrics to assess the respective dimensions in the context of CTI. In a second step, we showcase the extension of an existing CTI analysis tool to make the quality assessment transparent to security analysts. Furthermore, analysts' subjective perceptions are, where necessary, included in the quality assessment concept.
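As a simple illustration of how such quality dimensions might be turned into metrics, the sketch below scores a hypothetical CTI artifact on completeness and timeliness and combines them with fixed weights. The field names, half-life, and weights are assumptions for this example, not the dimensions or metrics proposed in the paper.

```python
from datetime import datetime, timezone

# Hypothetical CTI artifact fields; real platforms and formats (e.g., STIX) differ.
REQUIRED_FIELDS = ["indicator", "threat_type", "first_seen", "source", "confidence"]

def completeness(artifact):
    """Fraction of required fields that are present and non-empty."""
    return sum(bool(artifact.get(f)) for f in REQUIRED_FIELDS) / len(REQUIRED_FIELDS)

def timeliness(artifact, half_life_days=30.0):
    """Score decaying with artifact age: 1.0 when fresh, 0.5 after one half-life."""
    age_days = (datetime.now(timezone.utc) - artifact["first_seen"]).days
    return 0.5 ** (max(age_days, 0) / half_life_days)

def quality_score(artifact, weights=(0.6, 0.4)):
    return weights[0] * completeness(artifact) + weights[1] * timeliness(artifact)

example = {"indicator": "198.51.100.7", "threat_type": "c2-server",
           "first_seen": datetime(2024, 1, 10, tzinfo=timezone.utc),
           "source": "partner-feed", "confidence": 80}
print(round(quality_score(example), 3))
```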
Article
Full-text available
The growing sophistication, frequency and severity of cyberattacks targeting financial sector institutions highlight their inevitability and the impossibility of completely protecting the integrity of critical computer systems. In this context, cyber-resilience offers an attractive complementary alternative to the existing cybersecurity paradigm. Cyber-resilience is defined in this article as the capacity to withstand, recover from and adapt to the external shocks caused by cyber risks. Resilience has a long and rich history in a number of scientific disciplines, including in engineering and disaster management. One of its main benefits is that it enables complex organizations to prepare for adverse events and to keep operating under very challenging circumstances. This article seeks to explore the significance of this concept and its applicability to the online security of financial institutions. The first section examines the need for cyber-resilience in the financial sector, highlighting the different types of threats that target financial systems and the various measures of their adverse impact. This section concludes that the “prevent and protect” paradigm that has prevailed so far is inadequate, and that a cyber-resilience orientation should be added to the risk managers’ toolbox. The second section briefly traces the scientific history of the concept and outlines the five core dimensions of organizational resilience, which is dynamic, networked, practiced, adaptive, and contested. Finally, the third section analyses three types of institutional approaches that are used to foster cyber-resilience in the financial sector (and beyond): (i) a thriving cybersecurity industry is promoting cyber-resilience as the future of security; (ii) standards bodies are embedding cyber-resilience into some of their cybersecurity standards; and (iii) regulatory agencies have developed a broad range of compliance tools aimed at enhancing cyber-resilience.
Article
Full-text available
One of the most important requirements for successful learning experiences is regular learning activity. The problem with today's learning systems is that learners often get stuck while using traditional learning systems because these systems cannot motivate them to learn quickly and develop a creative mind. Successful learning requires acquiring knowledge on a regular basis and keeping it memorable for as long as possible. The problem with traditional learning methods is that the learner's mind stays glued in its state, and they do not provide any motivation to gain new knowledge and improve skills. Microlearning provides a new teaching paradigm that allows knowledge and information to be divided into small chunks and delivered to learners. Microlearning can make learning subjects easy to understand and memorable for a longer period. In this work, we tested microlearning teaching methods for the ICT subject in primary school. We chose two groups from a primary school in Sulaimani city, then taught one of them using microlearning methods and the other using traditional methods for six weeks. After testing both groups, the microlearning group showed around 18% better learning than the traditional group. We conclude that using microlearning techniques can improve the effectiveness and efficiency of learning, and that the knowledge can stay memorable for longer periods.
Article
Full-text available
Computer security incident response teams (CSIRTs) respond to a computer security incident when the need arises. Failure of these teams can have far-reaching effects for the economy and national security. CSIRTs often have to work on an ad hoc basis, in close cooperation with other teams, and in time-constrained environments. It could be argued that under these working conditions CSIRTs would be likely to encounter problems. A needs assessment was done to see to what extent this argument holds true. We constructed an incident response needs model to assist in identifying areas that require improvement. We envisioned a model consisting of four assessment categories: Organization, Team, Individual, and Instrumental. Central to this is the idea that both problems and needs can have an organizational, team, individual, or technical origin, or a combination of these levels. To gather data we conducted a literature review. This resulted in a comprehensive list of challenges and needs that could hinder or improve, respectively, the performance of CSIRTs. Then, semi-structured in-depth interviews were held with team coordinators and team members of five public and private sector Dutch CSIRTs to ground these findings in practice and to identify gaps between current and desired incident handling practices. This paper presents the findings of our needs assessment and ends with a discussion of potential solutions to problems with performance in incident response.
Chapter
Full-text available
This chapter discusses lessons learned working with cyber situation awareness and network security domain experts to integrate visualizations into their current workflows. Working closely with network security experts, we discovered a critical set of requirements that a visualization must meet to be considered for use by these domain experts. We next present two separate examples of visualizations that address these requirements: a flexible web-based application that visualizes network traffic and security data through analyst-driven correlated charts and graphs, and a set of ensemble-based extensions to visualize network traffic and security alerts using existing and future ensemble visualization algorithms.
Article
Full-text available
An ever increasing number of critical missions rely today on complex Information Technology infrastructures, making such missions vulnerable to a wide range of potentially devastating cyber-attacks. Attackers can exploit network configurations and vulnerabilities to incrementally penetrate a network and compromise critical systems, thus rendering security monitoring and intrusion detection much more challenging. It is also evident from the ever growing number of high-profile cyber-attacks reported in the news that not only are cyber-attacks growing in sophistication but also in numbers. For these reasons, cyber-security analysts need to continuously monitor large amounts of alerts and data from a multitude of sensors in order to detect attacks in a timely manner and mitigate their impact. However—given the inherent complexity of the problem—manual analysis is labor-intensive and error-prone, and distracts the analyst from getting the “big picture” of the cyber situation.
Article
Full-text available
Current education systems must respond to meet the increasing need for cyber security and information technology (IT) professionals. However, little research has been conducted on understanding the development of expertise in cyber security and IT, the efficacy of current systems designed to accelerate expertise and/or train cyber security and IT professionals, and the perceived efficacy of these systems as rated by the professionals themselves. Moreover, virtually no research exists with respect to the benefit of traditional (classroom-based) formal education compared to informal (self-taught) learning in these complex settings. This paper attempts to address these questions through an online survey of professionals and follow-up interviews.
Article
Full-text available
Improved computer security requires improvements in risk communication to naive end users. Efficacy of risk communication depends not only on the nature of the risk, but also on the alignment between the conceptual model embedded in the risk communication and the recipients' perception of the risk. The difference between these communicated and perceived mental models could lead to ineffective risk communication. The experiment described in this paper shows that for a variety of security risks self-identified security experts and non-experts have different mental models. We illustrate that this outcome is sensitive to the definition of "expertise". We also show that the models implicit in the literature do not correspond to expert or non-expert mental models. We propose that risk communication should be designed based on the non-expert's mental models with regard to each security risk and discuss how this can be done.
Article
Full-text available
In this study, we describe how to use innovative techniques to improve the decision-making process in crisis response organizations. The focus was on building situation awareness of a crisis and overcoming pitfalls such as tunnel vision and information bias through using critical thinking. We started by observing typical difficulties in crisis management in a field study. The essential elements of concern were a deficit in sharing and communicating understanding and a patchy overview of the topics communicated, within as well as between teams. Communication frequently did not entail the reasoning behind a decision that was made. We therefore developed a critical thinking tool that made the reasoning process more explicit and at the same time more robust by tying it to specific hypotheses. We studied a candidate support tool in a controlled setting and found that people made better judgments, particularly in situations where they would be prone to decision biases. We subsequently extended the critical thinking tool to a team setting. We list a number of requirements that are essential for support systems that intend to limit tunnel vision and alleviate communication and coordination problems in crisis response organizations.
Chapter
Full-text available
Book
Full-text available
Cognitive Work Analysis: Coping with Complexity. Daniel P. Jenkins (Sociotechnic Solutions, UK), Neville A. Stanton, Paul M. Salmon and Guy H. Walker (Brunel University, UK). Ashgate, 2008. Series: Human Factors in Defence. 'Complex sociotechnical systems' are systems made up of numerous interacting parts, both human and non-human, operating in dynamic, ambiguous and safety critical domains. Cognitive Work Analysis (CWA) is a structured framework specifically developed for considering the development and analysis of these complex sociotechnical systems. Unlike many human factors approaches, CWA does not focus on how human-system interaction should proceed (normative modelling) or how human-system interaction currently works (descriptive modelling). Instead, through a focus on constraints, it develops a model of how work can be conducted within a given work domain, without explicitly identifying specific sequences of actions (formative modelling). The framework leads the analyst to consider the environment the task takes place within, and the effect of the imposed constraints on the way work can be conducted. It provides guidance through the process of answering the questions of why the system exists, what activities can be conducted within the domain, how these activities can be achieved, and who can perform them. The first part of the book contains a comprehensive description of CWA, introducing it to the uninitiated. It then presents a number of applications in complex military domains to explore and develop the benefits of CWA. Unlike much of the previous literature, particular attention is placed on exploring the CWA framework in its entirety. This holistic approach focuses on the system environment, the activity that takes place within it, the strategies used to conduct this activity, the way in which the constituent parts of the system (both human and non-human) interact, and the behaviour required. Each stage of this analysis identifies the constraints governing the system; it is contended that through this holistic understanding of constraints, recommendations can be made for the design of system interaction, increasing the ability of users to cope with unanticipated, unexpected situations. This book discusses the applicability of the approach in system analysis, development and evaluation, and brings process to what was previously a loosely defined framework.
Conference Paper
Full-text available
In computer security, risk communication refers to informing computer users about the likelihood and magnitude of a threat. Efficacy of risk communication depends not only on the nature of the risk, but also on the alignment between the conceptual model embedded in the risk communication and the user’s mental model of the risk. The gap between the mental models of security experts and non-experts could lead to ineffective risk communication. Our research shows that for a variety of the security risks self-identified security experts and non-experts have different mental models. We propose that the design of the risk communication methods should be based on the non-expert mental models.
Article
Full-text available
This study addresses the human factors challenge of designing and validating decision support to promote less biased intelligence analysis. The confirmation bias can compromise objectivity in ambiguous medical and military decision making through neglect of conflicting evidence and judgments not reflective of the entire evidence spectrum. Previous debiasing approaches have had mixed success and have tended to place additional demands on users' decision making. Two new debiasing interventions that help analysts picture the full spectrum of evidence, the relation of evidence to a hypothesis, and other analysts' evidence assessments were manipulated in a repeated-measures design: (a) an integrated graphical evidence layout, compared with a text baseline; and (b) evidence tagged with other analysts' assessments, compared with participants' own assessments. Twenty-seven naval trainee analysts and reservists assessed, selected, and prioritized evidence in analysis vignettes carefully constructed to have balanced supporting and conflicting evidence sets. Bias was measured for all three evidence analysis steps. A bias to select a skewed distribution of confirming evidence occurred across conditions. However, graphical evidence layout, but not other analysts' assessments, significantly reduced this selection bias, resulting in more balanced evidence selection. Participants systematically prioritized the most supportive evidence as most important. Domain experts exhibited confirmation bias in a realistic intelligence analysis task and apparently conflated evidence supportiveness with importance. Graphical evidence layout promoted more balanced and less biased evidence selection. Results have application to real-world decision making, implications for basic decision theory, and lessons for how shrewd visualization can help reduce bias.
Article
Full-text available
Cognitive task analysis (CTA) is a set of methods for identifying cognitive skills, or mental demands, needed to perform a task proficiently. The product of the task analysis can be used to inform the design of interfaces and training systems. However, CTA is resource intensive and has previously been of limited use to design practitioners. A streamlined method of CTA, Applied Cognitive Task Analysis (ACTA), is presented in this paper. ACTA consists of three interview methods that help the practitioner to extract information about the cognitive demands and skills required for a task. ACTA also allows the practitioner to represent this information in a format that will translate more directly into applied products, such as improved training scenarios or interface recommendations. This paper will describe the three methods, an evaluation study conducted to assess the usability and usefulness of the methods, and some directions for future research for making cognitive task analysis accessible to practitioners. ACTA techniques were found to be easy to use, flexible, and to provide clear output. The information and training materials developed based on ACTA interviews were found to be accurate and important for training purposes.
Article
Full-text available
Troubleshooting is often a time-consuming and difficult activity. The question of how the training of novice technicians can be improved was the starting point of the research described in this article. A cognitive task analysis was carried out consisting of two preliminary observational studies on troubleshooting in naturalistic settings, combined with an interpretation of the data obtained in the context of the existing literature. On the basis of this cognitive task analysis, a new method for the training of troubleshooting was developed (structured troubleshooting), which combines a domain-independent strategy for troubleshooting with a context-dependent, multiple-level, functional decomposition of systems. This method has been systematically evaluated for its use in training. The results show that technicians trained in structured troubleshooting solve twice as many malfunctions, in less time, than those trained in the traditional way. Moreover, structured troubleshooting can be taught in less time than can traditional troubleshooting. Finally, technicians learn to troubleshoot in an explicit and uniform way. These advantages of structured troubleshooting ultimately lead to a reduction in training and troubleshooting costs.
Article
In this paper I develop a model for the application of rationality constraints in cyber incident handling, attribution and threat intelligence. The basic idea of this paper is that handling, analysis and attribution involve 'epistemic states' that are based on a limited understanding of the attacker's motives, opportunities, steps and specific movements. These states are updated dynamically during the incident response process. In a similar manner, epistemic states also play a role in cyber threat intelligence and attribution. Such updates are limited in scope and piecemeal. The paper argues that despite these limitations, such updates are still valuable contributors to a robust explanation of events. I contrast this characterization with current assumptions in the literature and argue for the moral strength of specific rationality constraints in how intelligence from cyber attributions is analyzed, reported and disseminated.
Article
Emerging paradigms of attack challenge enterprise cybersecurity with sophisticated custom-built tools, unpredictable patterns of exploitation, and an increasing ability to adapt to cyber defenses. As a result, organizations continue to experience incidents and suffer losses. The responsibility to respond to cybersecurity incidents lies with the incident response (IR) function. We argue that (1) organizations must develop 'agility' in their IR process to respond swiftly and efficiently to sophisticated and potent cyber threats, and (2) real-time analytics (RTA) gives organizations a unique opportunity to drive their IR process in an agile manner by detecting cybersecurity incidents quickly and responding to them proactively. To better understand how organizations can use RTA to enable IR agility, we analyzed in-depth data from twenty expert interviews using a contingent resource-based view. The results informed a framework explaining how organizations enable agile characteristics (swiftness, flexibility, and innovation) in the IR process using the key features of the RTA capability (complex event processing, decision automation, and on-demand and continuous data analysis) to detect and respond to cybersecurity incidents as they occur, which, in turn, improves their overall enterprise cybersecurity performance.
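For illustration of what complex event processing in an incident response pipeline can look like, the sketch below correlates streaming authentication events and raises an alert that could feed an automated containment step. It is a minimal example, not taken from the cited framework; the event fields (timestamp, source_ip, kind), the 5-failure threshold and the 300-second window are all invented for this example.

```python
from collections import defaultdict, deque
from dataclasses import dataclass

# Hypothetical security event record; the field names are illustrative only.
@dataclass
class Event:
    timestamp: float   # seconds since epoch
    source_ip: str
    kind: str          # e.g. "login_failed", "login_success"

WINDOW_SECONDS = 300   # sliding correlation window (assumed value)
FAILURE_THRESHOLD = 5  # failed logins before a success is considered suspicious

def detect_bruteforce(stream):
    """Yield alerts when many failed logins from one IP precede a success."""
    recent_failures = defaultdict(deque)  # source_ip -> timestamps of failures
    for event in stream:
        failures = recent_failures[event.source_ip]
        # Drop failures that fall outside the sliding window.
        while failures and event.timestamp - failures[0] > WINDOW_SECONDS:
            failures.popleft()
        if event.kind == "login_failed":
            failures.append(event.timestamp)
        elif event.kind == "login_success" and len(failures) >= FAILURE_THRESHOLD:
            yield {
                "alert": "possible credential brute force",
                "source_ip": event.source_ip,
                "failed_attempts": len(failures),
                "action": "open incident and isolate account",  # decision-automation hook
            }

# Usage: feed any iterable of Event objects, e.g. parsed from a log pipeline.
if __name__ == "__main__":
    demo = [Event(t, "203.0.113.7", "login_failed") for t in range(5)]
    demo.append(Event(6, "203.0.113.7", "login_success"))
    for alert in detect_bruteforce(demo):
        print(alert)
```

In practice such rules would run continuously over telemetry feeds and hand detections to the IR workflow rather than print them; the point of the sketch is only to show how event correlation and an automated response hook fit together.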
Article
A critical component to any modern cybersecurity endeavor is effective use of its human resources to secure networks, maintain services and mitigate adversarial events. Despite the importance of the human cyber-analyst and operator to cybersecurity, there has not been a corresponding rise in data-driven analytical approaches for understanding, evaluating, and improving the effectiveness of cybersecurity teams as a whole. Fortunately, cyber defense competitions are well-established and provide a critical window into what makes a cybersecurity team more or less effective. We examined data collected at the national finals and four regional events of the Collegiate Cyber Defense Competition and posited that experience, access to simulation-based training, and functional role composition by the teams would predict team performance on four scoring dimensions relevant to the application of information assurance skills and defensive cyber operations: (a) maintaining services, (b) help-desk customer support, (c) handling scenario injects, and (d) mitigating red team attacks. Bayesian analysis highlighted that experience was a strong predictor of service availability, scenario injects, and red team defense. Simulation training was also associated with good performance along these scoring dimensions. High-performing and experienced teams clustered with one another based on the functional role composition of team skills. These results are discussed within the context of stages of team development, the efficacy of challenge-based learning events, and reinforce previous analytical results from cyber competitions.
Article
The digital landscape is evolving at a rapid speed and it is causing a significant impact on global cyber security trends. In particular, cyber attacks are also changing their shape in terms of aspects such as targets and techniques. Studies have shown that information theft is the fastest-rising and the most expensive type of cybercrime, increasing at an alarming rate over the past few years. Previously, cyber criminals used to target data stored in various organisational information systems – such as financial data and identity data relating to individuals. However, trends show that cyber criminals have recently shifted their focus towards industrial control systems to disrupt industrial processes and destroy related data.
Article
Organized, sophisticated and persistent cyber-threat-actors pose a significant challenge to large, high-value organizations. They are capable of disrupting and destroying cyber infrastructures, denying organizations access to IT services, and stealing sensitive information including intellectual property, trade secrets and customer data. Past research points to Situation Awareness as critical to effective response. However, most research has focused on the technological perspective with comparatively less focus on the practice perspective. We therefore present an in-depth case study of a leading financial organization with a well-resourced and mature incident response capability that has evolved as a result of experiences with past attacks. Our contribution is a process model that explains how organizations can practice situation awareness of the cyber-threat landscape and the broad business context in incident response.
Article
Digital assets of organizations are under constant threat from a wide assortment of nefarious actors. When threats materialize, the consequences can be significant. Most large organizations invest in a dedicated information security management (ISM) function to ensure that digital assets are protected. The ISM function conducts risk assessments, develops strategy, provides policies and training to define roles and guide behavior, and implements technological controls such as firewalls, antivirus, and encryption to restrict unauthorized access. Despite these protective measures, incidents (security breaches) will occur. Alongside the security management function, many organizations also retain an incident response (IR) function to mitigate damage from an attack and promptly restore digital services. However, few organizations integrate and learn from experiences of these functions in an optimal manner that enables them to not only respond to security incidents, but also proactively maneuver the threat environment. In this article we draw on organizational learning theory to develop a conceptual framework that explains how the ISM and IR functions can be better integrated. The strong integration of ISM and IR functions, in turn, creates learning opportunities that lead to organizational security benefits including: increased awareness of security risks, compilation of threat intelligence, removal of flaws in security defenses, evaluation of security defensive logic, and enhanced security response.
Article
Objective: Incident correlation is a vital step in the cybersecurity threat detection process. This article presents research on the effect of group-level information-pooling bias on collaborative incident correlation analysis in a synthetic task environment. Background: Past research has shown that uneven information distribution biases people to share information that is known to most team members and prevents them from sharing any unique information available with them. The effect of such biases on security team collaborations are largely unknown. Method: Thirty 3-person teams performed two threat detection missions involving information sharing and correlating security incidents. Incidents were predistributed to each person in the team based on the hidden profile paradigm. Participant teams, randomly assigned to three experimental groups, used different collaboration aids during Mission 2. Results: Communication analysis revealed that participant teams were 3 times more likely to discuss security incidents commonly known to the majority. Unaided team collaboration was inefficient in finding associations between security incidents uniquely available to each member of the team. Visualizations that augment perceptual processing and recognition memory were found to mitigate the bias. Conclusion: The data suggest that (a) security analyst teams, when conducting collaborative correlation analysis, could be inefficient in pooling unique information from their peers; (b) employing off-the-shelf collaboration tools in cybersecurity defense environments is inadequate; and (c) collaborative security visualization tools developed considering the human cognitive limitations of security analysts is necessary. Application: Potential applications of this research include development of team training procedures and collaboration tool development for security analysts.
Book
Today, when a security incident happens, the top three questions a cyber operation center would ask are: What has happened? Why did it happen? What should I do? Answers to the first two questions form the core of Cyber Situation Awareness (SA). Whether the last question can be satisfactorily addressed is largely dependent upon the cyber situation awareness capability of an enterprise. The goal of this book is to present a summary of recent research advances in the development of highly desirable Cyber Situation Awareness capabilities. The 8 invited full papers presented in this volume are organized around the following topics: computer-aided human centric cyber situation awareness; computer and information science aspects of the recent advances in cyber situation awareness; learning and decision making aspects of the recent advances in cyber situation awareness; cognitive science aspects of the recent advances in cyber situation awareness
Article
Today we find ourselves in possession of stupendous know-how, which we willingly place in the hands of the most highly skilled people. But avoidable failures are common, and the reason is simple: the volume and complexity of our knowledge has exceeded our ability to consistently deliver it - correctly, safely or efficiently. In this groundbreaking book, Atul Gawande makes a compelling argument for the checklist, which he believes to be the most promising method available in surmounting failure. Whether you're following a recipe, investing millions of dollars in a company or building a skyscraper, the checklist is an essential tool in virtually every area of our lives, and Gawande explains how breaking down complex, high pressure tasks into small steps can radically improve everything from airline safety to heart surgery survival rates. Fascinating and enlightening, The Checklist Manifesto shows how the simplest of ideas could transform how we operate in almost any field.
Article
The massive proliferation of information and communications technologies (hardware and software) into the heart of modern critical infrastructures has given birth to a unique technological ecosystem. Despite the many advantages brought about by modern information and communications technologies, the shift from isolated environments to “systems-of-systems” integrated with massive information and communications infrastructures (e.g., the Internet) exposes critical infrastructures to significant cyber threats. Therefore, it is imperative to develop approaches for identifying and ranking assets in complex, large-scale and heterogeneous critical infrastructures. To address these challenges, this paper proposes a novel methodology for assessing the impacts of cyber attacks on critical infrastructures. The methodology is inspired by research in system dynamics and sensitivity analysis. The proposed behavioral analysis methodology computes the covariances of the observed variables before and after the execution of a specific intervention involving the control variables. Metrics are proposed for quantifying the significance of control variables and measuring the impact propagation of cyber attacks.
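As a rough sketch of the covariance-based idea described above (not the authors' exact formulation), the following snippet compares the covariance matrices of observed infrastructure variables before and after an intervention and reduces the change to a single impact score. The Frobenius-norm scoring and the synthetic data are assumptions made purely for this example.

```python
import numpy as np

def impact_of_intervention(observed_before, observed_after):
    """Compare the covariance structure of observed variables before and
    after an intervention (e.g. a simulated attack on a control variable).

    Both inputs are arrays of shape (samples, variables). Returns the two
    covariance matrices and a scalar impact score; the Frobenius norm of
    their difference is a scoring choice assumed for this sketch.
    """
    cov_before = np.cov(observed_before, rowvar=False)
    cov_after = np.cov(observed_after, rowvar=False)
    impact = np.linalg.norm(cov_after - cov_before, ord="fro")
    return cov_before, cov_after, impact

# Usage with synthetic data: three observed variables, where the intervention
# injects extra variance and coupling into the second and third variables.
rng = np.random.default_rng(0)
before = rng.normal(size=(500, 3))
after = before.copy()
after[:, 1] += 0.8 * after[:, 2] + rng.normal(scale=0.5, size=500)
_, _, score = impact_of_intervention(before, after)
print(f"impact score: {score:.2f}")
```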
Article
Cyber Network degradation and exploitation can covertly turn an organization's technological strength into an operational weakness. It has become increasingly imperative, therefore, for an organization's personnel to have an awareness of the state of the Cyber Network that they use to carry out their mission. Recent high-level government initiatives along with hacking and exploitation in the commercial realm highlight this need for general Cyber Situational Awareness (SA). While much of the attention in both the military and commercial cyber security communities is on abrupt and blunt attacks on the network, the most insidious cyber threats to organizations are subtle and persistent attacks leading to compromised databases, processing algorithms, and displays. We recently began an effort developing software tools to support the Cyber SA of users at varying levels of responsibility and expertise (i.e., not just the network administrators). This paper presents our approach and preliminary findings from a CTA we conducted with an operational Subject Matter Expert to uncover the situational awareness requirements of such a tool. Results from our analysis indicate a list of preliminary categories of these requirements, as well as specific questions that will drive the design and development of our SA tool.
Article
Generally, computer security incident response team (CSIRT) managers and team members focus only on individual-level skills. The field of organizational psychology can contribute to an understanding of the full range of CSIRT job requirements, which include working as a team and within a larger multiteam system.
Conference Paper
A Cognitive Task Analysis (CTA) was performed to investigate the workflow, decision processes, and cognitive demands of information assurance (IA) analysts responsible for defending against attacks on critical computer networks. We interviewed and observed 41 IA analysts responsible for various aspects of cyber defense in seven organizations within the US Department of Defense (DOD) and industry. Results are presented as workflows of the analytical process and as attribute tables including analyst goals, decisions, required knowledge, and obstacles to successful performance. We discuss how IA analysts progress through three stages of situational awareness and how visual representations are likely to facilitate cyber defense situational awareness.
Article
The term cognitive task analysis (CTA) has been appearing in the human factors literature with increasing frequency. Others have used the term cognitive work analysis (CWA). Is there a difference? Do either of these methods differ from traditional task analysis (TA)? If so, what advantages can CTA/CWA provide human factors engineers? To address these issues, the history of work analysis methods and the evolution of work are reviewed. Work method analyses of the 19th century were suited to manual labor. As job demands progressed beyond the physical, traditional TA was introduced to provide a broader perspective. CTA has since been introduced to increase the emphasis on cognitive task demands. However, CTA, like TA, is incapable of dealing with unanticipated task demands. CWA has been introduced to deal with complex systems whose demands include unanticipated events. The initial evidence available indicates that CWA can be applied to industry-scale problems, leading to innovative designs.
Article
We developed a cyber security risk model that can find the weak points of cyber security by integrating two cyber analysis models using a Bayesian network. One is the activity-quality model, which signifies how people and/or organizations comply with the cyber security regulatory guide. The other is the architecture model, which represents the probability of a cyber-attack on the RPS architecture. The cyber security risk model can provide evidence that helps determine the key elements of cyber security for the RPS of a research reactor.
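To make the Bayesian-network idea concrete, here is a minimal, self-contained sketch that combines an "activity-quality" factor and an "architecture" factor into a marginal probability of a successful attack. All node names and probability values are invented for illustration and do not come from the cited model.

```python
# Minimal Bayesian-network-style calculation in plain Python.
# Priors and conditional probabilities below are assumed, illustrative values.

# Prior probabilities of the two "parent" factors.
P_poor_compliance = 0.2          # activity-quality model: people/process weakness
P_exposed_architecture = 0.3     # architecture model: exploitable path exists

# Conditional probability table for a successful attack on the system,
# indexed by (poor_compliance, exposed_architecture).
P_attack_given = {
    (True, True): 0.60,
    (True, False): 0.15,
    (False, True): 0.25,
    (False, False): 0.02,
}

def marginal_attack_probability():
    """Marginalize over both parent factors to get P(successful attack)."""
    total = 0.0
    for compliance_bad in (True, False):
        for exposed in (True, False):
            p_parents = (
                (P_poor_compliance if compliance_bad else 1 - P_poor_compliance)
                * (P_exposed_architecture if exposed else 1 - P_exposed_architecture)
            )
            total += p_parents * P_attack_given[(compliance_bad, exposed)]
    return total

print(f"P(successful attack) = {marginal_attack_probability():.3f}")
```

A real model would of course use many more nodes, with probabilities elicited from regulatory-compliance assessments and architecture analysis, but the marginalization step shown here is the core mechanism.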
Article
Cyber situational awareness is attracting much attention. It features prominently in the national cyber strategies of many countries, and there is a considerable body of research dealing with it. However, until now, there has been no systematic and up-to-date review of the scientific literature on cyber situational awareness. This article presents a review of cyber situational awareness, based on systematic queries in four leading scientific databases. 102 articles were read, clustered, and are succinctly described in the paper. The findings are discussed from the perspective of both national cyber strategies and science, and some directions for future research are examined.
Article
This paper integrates a number of strands of a long-term project that is critically analysing the academic field of decision support systems (DSS). The project is based on the content analysis of 1093 DSS articles published in 14 major journals from 1990 to 2004. An examination of the findings of each part of the project yields eight key issues that the DSS field should address for it to continue to play an important part in information systems scholarship. These eight issues are: the relevance of DSS research, DSS research methods and paradigms, the judgement and decision-making theoretical foundations of DSS research, the role of the IT artifact in DSS research, the funding of DSS research, inertia and conservatism of DSS research agendas, DSS exposure in general “A” journals, and discipline coherence. The discussion of each issue is based on the data derived from the article content analysis. A number of suggestions are made for the improvement of DSS research. These relate to case study research, design science, professional relevance, industry funding, theoretical foundations, data warehousing, and business intelligence. The suggestions should help DSS researchers construct high quality research agendas that are relevant and rigorous.
Conference Paper
This paper reports on investigations of how computer network defense (CND) analysts conduct their analysis on a day-to-day basis and discusses the implications of these cognitive requirements for designing effective CND visualizations. The supporting data come from a cognitive task analysis (CTA) conducted to baseline the state of the practice in the U.S. Department of Defense CND community. The CTA collected data from CND analysts about their analytic goals, workflow, tasks, types of decisions made, data sources used to make those decisions, cognitive demands, tools used and the biggest challenges that they face. The effort focused on understanding how CND analysts inspect raw data and build their comprehension into a diagnosis or decision, especially in cases requiring data fusion and correlation across multiple data sources. This paper covers three of the findings from the CND CTA: (1) the hierarchy of data created as the analytical process transforms data into security situation awareness; (2) the definition and description of different CND analysis roles; and (3) the workflow that analysts and analytical organizations engage in to produce analytic conclusions.
Article
There is a long history of research that has investigated the effects of cognitive conflict on group and individual decision making. No study has simultaneously compared the effects of two techniques, devil's advocacy and dialectical inquiry, on the performance of individuals versus groups. In this paper, we report the results of a laboratory experiment that makes this comparison. Artificial groups (groups formed by pooling individuals working independently) obtained an overall lower-quality solution for a case analysis problem than intact groups. However, there were no performance differences between intact groups and the performance of the best member of artificial groups. When artificial and intact groups were examined together, those given the devil's advocacy treatment produced higher-quality solutions than those given the dialectical inquiry treatment and a simpler expert-based approach involving no conflict. Intact groups given the devil's advocacy treatment produced higher-quality solutions than those given the expert treatment. Artificial groups given devil's advocacy produced higher-quality solutions than those given the expert or dialectical inquiry treatment. Overall, the results suggest that the devil's advocacy treatment has a slightly greater advantage over dialectical inquiry with individuals than with groups.
Cognitive work analysis: models of expertise
  • Burns
Burns CM. Cognitive work analysis: models of expertise. In: Ward P, Schraagen JM, Gore J, Roth E, editors. The Oxford Handbook of Expertise. Oxford: Oxford University Press; 2020.
TIBER-EU framework: how to implement the European framework for threat intelligence-based ethical red teaming
  • Ecb
ECB (2018). TIBER-EU framework: how to implement the European framework for threat intelligence-based ethical red teaming. Retrieved September 16, 2021, from https://www.ecb.europa.eu/pub/pdf/other/ecb.tiber_eu_framework.en.pdf
Cybersecurity incident response in organisations: a meta-level framework for scenario-based training
  • A O'neill
  • A Ahmad
  • S Maynard
O'Neill, A., Ahmad, A., & Maynard, S. (2021). Cybersecurity incident response in organisations: a meta-level framework for scenario-based training. arXiv preprint arXiv:2108.04996.
2020 data breach investigations report
  • Verizon
Verizon (2020). 2020 data breach investigations report. Retrieved 06/9/2021 from: https://enterprise.verizon.com/resources/reports/2020-data-breach-investigations-report.pdf
The Difference between Playbooks and Runbooks in Incident Response
  • Dflabs
DFLabs (2019). The Difference between Playbooks and Runbooks in Incident Response. Retrieved 15-10-2020 from https://www.dflabs.com/resources/blog/the-difference-between-playbooks-and-runbooks-in-incident-response/.
Computer Security Incident Handling Guide. US Department of Commerce, Technology Administration
  • T Grance
  • K Kent
  • B Kim
Grance, T., Kent, K., & Kim, B. (2004). Computer Security Incident Handling Guide. US Department of Commerce, Technology Administration, National Institute of Standards and Technology.
Qualitative Research Methods in Mental Health and Psychotherapy: A Guide for Students and Practitioners
  • H Joffe
Joffe, H. (2012). Thematic analysis. In D. Harper and A. Thompson (Eds), Qualitative Research Methods in Mental Health and Psychotherapy: A Guide for Students and Practitioners (pp. 209-223). Chichester: Wiley-Blackwell.
How can organizations develop situation awareness for incident response: a case study of management practice
Ahmad A, Maynard SB, Desouza KC, Kotsias J, Whitty MT, Baskerville RL. How can organizations develop situation awareness for incident response: a case study of management practice. Comput. Secur. 2021;101.
The cyber-resilience of financial institutions: significance and applicability
  • B Dupont
Dupont B. The cyber-resilience of financial institutions: significance and applicability. J. Cybersecur. 2019;5(1):tyz013. ECB (2018). TIBER-EU FRAMEWORK -how to implement the european framework for threat intelligence-based ethical red teaming (europa.eu). Retrieved from the internet on September 16th, 2021 from https://www.ecb.europa.eu/pub/ pdf/other/ecb.tiber _ eu _ framework.en.pdf