A Case Analysis of Information Systems and Security Incident Responses
Atif Ahmad, Sean B Maynard, Graeme Shanks
Department of Computing and Information Systems, The University of Melbourne, Australia
Our case analysis presents and identifies significant and systemic shortcomings of the incident
response practices of an Australian financial organization. Organizational Incident Response Teams
accumulate considerable experience in addressing information security failures and attacks. Their
first-hand experiences provide organizations with a unique opportunity to draw security lessons and
insights towards improving enterprise-wide security management processes. However, previous
research shows a distinct lack of communication and collaboration between the functions of
incident response and security management, suggesting organizations are not learning from their
incident experiences. We subsequently propose a number of lessons learned and a novel security-learning process model.
Keywords: Information Security Management; Security Learning; Incident Response Teams;
1. Introduction
Incident Response Teams (IRTs) respond to information systems security process failures or
violations. IRTs diagnose incidents, contain them from spreading, eradicate their (technical) causes,
and facilitate organizational recovery to normal business operations (Tøndel, Line, & Jaatun, 2014).
Few studies address how the experiences of IRTs can be used to improve security processes. This is
significant because IRTs accumulate considerable experience in addressing security failures and
attacks first-hand. Incident investigations into security failures can expose inaccurate risk
assessments; insufficient, misleading or contradictory advice in policies; ineffective or misaligned
strategies; and inadequate security education, training and awareness (SETA) (Shedden, Ahmad, & Ruighaver, 2011).
Whilst best-practice incident response methodologies (Cichonski, Millar, Grance, & Scarfone,
2012) include a ‘feedback’ or ‘follow-up’ phase where lessons learned are discussed and
documented in a formal report, these methodologies focus narrowly on the ‘response’ aspect of the
process. They do not explicitly mention the need to leverage opportunities for wider learning such
as improving security risk assessment and security policy development. Without a clear intent to
draw broad security lessons to benefit the larger organization, there is little prospect of improving
the security of information systems in general (e.g. see Desouza and Vanapalli (2005) on how
insights from breaches can improve systems).
We describe a case that examines how an organization in the Australian financial sector,
OzFinance, learns from security incident response. We chose the financial sector because of the
increasingly sophisticated attacks on its information infrastructure (e.g. see Smith (2013) for news
coverage of attacks on the Reserve Bank of Australia). We use the 4I Organizational Learning
Framework (Crossan, Lane, & White, 1999; Zietsma, Winn, Branzei, & Vertinsky, 2002) to analyze
OzFinance’s learning processes because the 4I Framework (1) focuses on process improvement, (2)
incorporates double-loop learning principles, and (3) provides a structured approach to learning
across individual, group and organizational levels.
2. Incident Response Practice In Organizations
An incident is a violation (or imminent threat of violation) of computer security policies, acceptable
use policies, or standard security practices (Hansman & Hunt, 2005). Therefore, denial of service,
unauthorized sharing of sensitive information, a malicious attack on a computing system or network
and the inadvertent deletion of an important document all qualify as incidents. The literature
broadly agrees that when dealing with an incident, IRTs generally engage in six sequential stages:
preparation, identification, containment, eradication, recovery and follow-up (Cichonski, et al.,
2012; West-Brown, Stikvoort, Kossakowski, Killcrece, & Ruefle, 2003). The purpose of the follow-
up phase is to reflect on the incident handling experience and identify ‘lessons learned’ that can be
incorporated into standard operating procedures.
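The six-stage sequence described above can be sketched as a simple state machine; the stage names follow the cited literature, while the class and method names below are our own illustration rather than part of any standard.

```python
from enum import Enum

class Stage(Enum):
    """The six sequential incident response stages (Cichonski, et al., 2012)."""
    PREPARATION = 1
    IDENTIFICATION = 2
    CONTAINMENT = 3
    ERADICATION = 4
    RECOVERY = 5
    FOLLOW_UP = 6

class IncidentResponse:
    """Tracks a single incident through the six stages in order."""

    def __init__(self):
        self.stage = Stage.PREPARATION
        self.lessons_learned = []  # captured during the follow-up stage

    def advance(self):
        """Move to the next stage; follow-up is the terminal stage."""
        if self.stage is not Stage.FOLLOW_UP:
            self.stage = Stage(self.stage.value + 1)
        return self.stage

# Walk one incident through the full lifecycle.
ir = IncidentResponse()
while ir.stage is not Stage.FOLLOW_UP:
    ir.advance()
print(ir.stage.name)  # FOLLOW_UP
```

Modeling follow-up as terminal reflects the point made above: lessons identified there must be fed into standard operating procedures outside the response loop itself.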
2.1. How Lessons Are Learned From The Incident Response Process
Professional incident response literature places great importance on post-incident learning
(Killcrece, Kossakowski, Ruefle, & Zajicek, 2003). However, the focus tends to be on improving
corrective actions towards lowering cost and improving efficiency (Tan, Ruighaver, & Ahmad,
2003). Learning typically takes place formally in meetings and management presentations and
through the sharing and reviewing of reports (Cichonski, et al., 2012).
Tøndel, et al. (2014) identified a number of challenges that relate to learning practices including: (1)
a lack of willingness to share incident-related information outside the organization (e.g. with
industry) (Hove & Tårnes, 2013); (2) poor communication and collaboration between the IRT and
teams from other organizational areas (Hove & Tårnes, 2013); (3) lack of motivation driving
learning activities (Hove & Tårnes, 2013); and (4) inadequate sharing of lessons learnt internally
within organizations (Shedden, Ruighaver, & Ahmad, 2010).
Therefore, a key objective of this study is to explain how the three key stakeholders in organizations
(i.e. the IRT, security management team and senior management team) should communicate,
collaborate and share security lessons to improve security management processes.
3. Organizational Learning
Organizational learning, as a research field, examines how organizations develop knowledge and
'routines' to guide their behaviors (Levitt & March, 1988). Learning in organizations takes place at
the individual, team and organizational level (Chan, 2003; Rashman, Withers, & Hartley, 2009).
Understanding the interplay and interaction between these learning levels is a major theme in
organizational learning (Crossan, et al., 1999).
To meet our research objectives we had three requirements for the learning framework. The
framework must (1) adopt a multi-level approach explicitly linking incident responder to key
stakeholders (e.g. security management team and senior management); (2) not be entirely cognitive,
but rather link cognition to action so individual recognition of unusual patterns of security activity
leads to change in security process; and (3) employ double-loop learning principles. Only the 4I
(intuiting and attending, interpreting and experimenting, integrating, institutionalizing) framework
of organizational learning (Crossan, et al., 1999; Zietsma, et al., 2002) met all three requirements
(See Figure 1).
The 4I framework explicitly targets learning at individual, team and organizational levels whilst
incorporating double loop learning principles. The framework encourages organizations to manage
the tension between exploring new ideas and exploiting what has already been learnt. This ‘strategic
renewal’ challenges institutional norms - a particularly useful characteristic as we expect that
lessons learned from security incidents will challenge compliance culture - a key obstacle to the
development of effective security strategy (see Tan, Ruighaver, & Ahmad, 2010).
The Intuiting and Attending processes aim to develop individual capability to discern new patterns
of activity without conscious effort. Interpreting and Experimenting are social activities designed to
allow individual insights to be shared and enacted with a group (i.e. discussions and trying out new
ideas). Integrating allows group collective and coordinated action to ensue. Finally, important
routines are formalized into structures, systems and procedures to retain individual and group
learning through Institutionalization.
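A minimal way to summarize the 4I processes and the level at which each primarily operates, as described above, is a small lookup table; the dictionary and function names are our own illustration, not part of the framework.

```python
# The 4I learning processes mapped to the level at which each primarily
# operates (Crossan, et al., 1999; Zietsma, et al., 2002 add the
# 'attending' and 'experimenting' processes).
FOUR_I = {
    "intuiting":          "individual",
    "attending":          "individual",
    "interpreting":       "group",
    "experimenting":      "group",
    "integrating":        "group",
    "institutionalizing": "organization",
}

def processes_at(level):
    """Return the learning processes that operate at a given level."""
    return [process for process, lvl in FOUR_I.items() if lvl == level]

print(processes_at("individual"))  # ['intuiting', 'attending']
```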
Figure 1: The 4I Model (Zietsma, et al., 2002)
4. OzFinance: A Case Study
The choice of organization for this case was based on three key criteria: (1) their IR practice had
remained relatively stable for three to five years; (2) their IR practice complied with ‘best
practice’ guidelines; and (3) they were willing to make the relevant stakeholders available,
which is rare in studies that focus on security issues (Kotulic & Clark, 2004).
OzFinance’s incident response capability comes from two teams. The Network Incident Response
Team (the ‘Incident Response Team’) is a full-time, four-person team, which resides in the
Information Security Department. Their primary responsibility is to secure OzFinance’s core
network. The High-Impact Incident Response Coordination Team (the ‘Coordination Team’)
reports to the CIO and acts in a management or coordination capacity and is only activated in the
case of incidents deemed to be ‘high-impact’. The Coordination Team starts with a team of four but
can quickly recruit large numbers of front-line personnel as the situation demands. The team will
typically be called in for significant events such as mission-critical server crashes. They will be
involved in writing the mandatory post-incident report for high-impact incidents. The Incident
Response Team and the Coordination Team work independently of each other but may cooperate if
the need arises. In this case the Coordination Team will take over the coordination role and liaise
with the Incident Response Team, which will be more closely involved with the technical aspects of the network response.
Managers from the security teams and IRTs were interviewed (see Table 1). CT_MGR is in charge of the
Coordination Team. CT_MGR’s primary role is to coordinate with business representatives,
technical teams, and others until an incident is resolved. CT_MGR has considerable technical and
business knowledge of the corporation and its staff. IRT_MGR manages the Incident Response
Team. IRT_MGR has extensive experience in networking and security, with educational
qualifications in computing. ISR_MGR also has an extensive security background and
experience. Their role is to champion information security, to manage information security risk and
to handle minor incidents. PPT_MEMBER is responsible for the production and management (not
enforcement) of the high-level organizational information security policies and procedures at
OzFinance. PPT_MEMBER has considerable experience in security coupled with a strong technical
Data was collected via interviews with each participant (1 to 2 hours each) and analyzing incident
response policies and detailed procedures as well as incident reports. Multiple cycles of analysis
were conducted, each cycle focusing on one of the learning processes identified in the 4I
framework. Our subsequent analysis is structured according to the learning processes in the 4I
framework which serves to highlight the overall gap in security learning within the case
organization and also illustrates the utility of the 4I framework for information security learning.
Table 1: Names and Roles of Interview Participants
CT_MGR: Manager of High-Impact Incident Response Coordination Team (based in the Business Continuity Department)
IRT_MGR: Manager of Network Incident Response Team (based in the Information Security Department)
ISR_MGR: Information Security and Risk Manager of the Corporate Business Department
PPT_MEMBER: Senior Member of Information Security Policy and Procedure Team (based in the Information Security Department)
4.1. Case Study Results: Security Intuiting and Attending
The Incident Response Team is focused on restoring service availability; therefore, considerable
pressure exists on personnel to resolve incidents quickly rather than develop security insights.
Given the Coordination Team’s primary role is coordination, one might expect them to be regularly
communicating with Security Management (where views between the two teams should be
exchanged). However, the Coordination Team manager did not agree. There is significant
communication between the Coordination Team and technical problem managers as well as
business managers. But there is no mention of ongoing communication with security management.
Significantly, members of Security Management do not take part in the actual incident handling
process either as an active participant or as a passive observer.
Both the Incident Response Team and the Coordination Team are focused on restoring services.
There was no evidence of systematic identification of security insights (intuiting) or of any
discussion with Security Management (attending) that might assist in the intuiting process.
4.2. Case Study Results: Security Interpreting and Experimenting
The social activity of discussion around security issues is best done with members of the security
management team (as well as members of the IRT). When IRT members do share their security
intuitions, the ensuing conversation with security managers should result in improved capability to
react constructively to security stimuli.
The Coordination Team conducts a formal investigation for every incident deemed ‘high-impact’
which ultimately results in a Post-Incident Report for management consumption. Where an incident
is deemed to be ‘low-impact’, the review is informal resulting in a log entry in the incident tracking
system, which simply states the incident has been resolved.
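OzFinance's triage rule, as described, can be captured in a few lines; the impact labels and return strings are hypothetical stand-ins for the case organization's actual procedures.

```python
def review_action(impact):
    """OzFinance's review policy as described in the case: 'high-impact'
    incidents receive a formal three-meeting review culminating in a
    Post-Incident Report; everything else is closed with a log entry.
    (Impact labels and return strings are illustrative.)"""
    if impact == "high":
        return "formal review -> Post-Incident Report"
    return "log entry: incident resolved"

print(review_action("high"))  # formal review -> Post-Incident Report
print(review_action("low"))   # near misses fall through here as well
```

The simplicity of the rule is the point: anything below the high-impact threshold, including near misses, bypasses formal review entirely.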
The formal review process consists of three meetings. The first review is internal to the Incident
Response Team and deliberately excludes members of the security risk and business areas. The
deliberate exclusion is to prevent external personnel from misunderstanding the technical aspects of
the incident or drawing premature conclusions. The second review meeting examines causal factors
and mitigation. The final review contains a causal analysis and a new risk assessment. These are
implemented to improve the incident response process.
The Incident Response Team also produces a report for senior management, which may be
integrated into the Post-Incident Report. Due to the sensitive nature of the IR process, we were not
permitted to observe or query what was said in the review meetings. Therefore, it is not possible to
determine the extent to which security insights were discussed among the IRTs (attending).
However, given the reviews were driven by the need to improve service availability rather than
security learning, it is unlikely that security insights were a major topic. Further, there was no
evidence that groups of Incident Response Team and Security Management tried out new incident
response practices motivated by the need for improved security learning (experimenting).
4.3. Case Study Results: Security Integrating
The integration phase affects three security stakeholders, namely both IRTs, security management
and senior management. This phase should result in: (1) a shared understanding of security risks
and insights amongst all three parties; which leads to (2) ‘mutual adjustment’ or recognition of the
level of exposure to the real-world security threat environment; followed by (3) constructive and
collective (initially ad hoc) security actions that demonstrate that the organization is beginning to
learn from its security experiences.
However, security management and IRTs will have competing priorities. The latter needs to restore
service availability at the lowest possible cost and the former needs to initiate a series of corrective
actions to strategy, monitoring and so forth (single-loop learning) as well as conduct a root cause
analysis (double-loop learning). Amongst senior managers and security management a key issue of
integration will be the resourcing of changes required to the security program. These may include a
new risk assessment, changes to policy, and security education, training and awareness (SETA) for employees.
There was no evidence of a conversation between the IRTs, Security Management and Senior
Management or coordinated action resulting from security insights gained from incidents
(integrating). In fact, there was evidence to the contrary. In the case of the Incident Response Team,
communication channels with security management personnel in policy and risk were both informal
and ad hoc.
There was no formal process that required information to flow from the Incident Response Team to
the security risk managers of other departments but the Incident Response Team manager was doing
so on his own initiative. Although communication between the Coordination Team and security
management personnel is formal and systematic in the shape of review meetings and a formal Post-
Incident Report, communication is primarily with senior management to keep the business
perspective in the loop.
OzFinance has a siloed organizational structure that results in the routine exclusion of personnel
from other departments who in many cases ‘need to know’. In this case it is the ISR_MGR who is
finding it difficult to access critical incident related information that informs the risk management
process. There was even less evidence of understanding and coordination with the Information
Security Policy team. PPT_MEMBER was not even aware of the existence of Post-Incident Reports
and, after some explanation, admitted that they would be very useful in the development of security
policies and procedures.
4.4. Case Study Results: Security Institutionalizing
The process of institutionalization of security learning from incident response should allow new
security insights to influence all security management functions. For example, new routines must
require previously hypothetical risk assessments to factor in actual rates of incident occurrence and
actual cost of impact from auditing reports. In addition, security policies and procedures should be
continuously updated to reflect lessons learned from incidents about the (in)secure behavior of
employees. Required changes to routines must be accompanied by a program of training and
education to drive a change in employee behavior and decision-making.
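As one concrete illustration of factoring actual incident occurrence and cost into risk assessment, a standard annualized-loss calculation can be driven directly from the incident log; this sketch assumes a simple (category, cost) log format that is not taken from the case.

```python
def annualized_loss_expectancy(incident_log, years):
    """Per-category annual loss estimated from actual incidents.

    incident_log: list of (category, cost) pairs drawn from incident
                  and audit reports (format assumed for illustration)
    years:        length of the observation period in years
    """
    totals = {}
    for category, cost in incident_log:
        totals[category] = totals.get(category, 0.0) + cost
    return {category: total / years for category, total in totals.items()}

# Two years of logged incidents feed the next risk assessment.
log = [("phishing", 12_000), ("phishing", 8_000), ("dos", 30_000)]
print(annualized_loss_expectancy(log, years=2))
# {'phishing': 10000.0, 'dos': 15000.0}
```

A routine like this institutionalizes learning only if the incident log is actually populated and shared, which is precisely what was missing at OzFinance.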
There was no evidence of formal structures, systems or processes to capture security learning
(institutionalizing). Whatever security learning does occur is entirely spontaneous, informal and as
a result of individual initiative rather than organizational imperative. However, technical learning
does occur through the formal incident review process that culminates in a report. In fact, Incident
Response managers were aware that incidents present opportunities to learn about flaws in the
system itself (although the learning was not related directly to security).
The Coordination Team manager understands the need to go further than simple corrective action to
questioning flaws in the system (single-loop to double-loop learning), but there was no evidence
that OzFinance has institutionalized this philosophy. For example, not investigating low-impact
incidents and ‘near misses’ shows learning is not systematic and institutionalized at OzFinance but
remains limited to the ad hoc initiative of individual managers.
4.5. Security Learning through Feed-Forward and Feedback
Feed-forward mechanisms play a key role in organizational learning, allowing new insights
discovered by individuals to influence organizational routines. The real test of whether an
organization has developed a shared understanding of an issue is whether coherent action is being
taken at an organizational level to address it.
There is no systematic and consistent benefit to the security management function from the
activities of the IRTs at OzFinance. This is due to the incentive to resolve service availability
problems at the lowest possible cost and the lack of contact between security management and
incident response personnel during critical stages of the incident response process.
Further, there was no evidence that security insights were being consistently identified and
preserved by members of the IRTs (although the Incident Response Team manager does take the
initiative to convey some risk related information to the risk-assessment group).
OzFinance employs learning techniques where ‘high-impact’ incidents are concerned. The
Coordination Team participates in a comprehensive review, which results in the tabling of a formal
Post-Incident Report. The review consists of three meetings focusing on technical
lessons learned followed by causal factors and mitigation issues and then finally a causal analysis
and new risk assessment.
However, there is no requirement to disseminate the review report to those parties that ‘need to
know’ like the security management function. Any security ‘intelligence’ contained in the report
will not be picked up by security managers if they are not aware of the existence of such reports or
do not have access to them.
The Coordination Team’s activities aim to improve the technical security response rather than
security learning. Security management staff are deliberately excluded from the first two meetings
held by IRTs to avoid drawing premature conclusions. Although this principle makes sense from a
technical perspective, it is a serious institutional barrier to the interpreting process. It is vital for
members of the security team to be present during the early stage of the process. It is in this stage
that security insights are likely to surface and where IRT staff need assistance in identifying and
describing security concerns and changing their cognitive maps so they can more readily identify a
wider range of security issues in the future.
Even if security insights surface and are articulated in preliminary meetings, the challenge to
influence collective interpreting remains. Personnel putting forward these insights are likely to be
influenced by other members of the technical team (and the organization’s technical objectives)
before they can voice their concerns in the final review meeting. This may lead to self-censorship or
‘vetting’ of issues by members of the team motivated by their desire not to raise issues that may
cast the team in a negative light or that may result in the team being involved in an investigation
that only benefits other departments.
OzFinance’s decision to limit comprehensive reviews to incidents that have had a high financial
impact means that other incidents such as ‘near misses’, or those that might be worth investigating
because of the potential for learning, are not subject to the same level of scrutiny. Cooke (2003)
argues that critical incidents are often caused by the ignorance of low-impact and precursor
incidents, and that learning should be linked to causal structures of incidents instead. From a
learning perspective, this policy poses another institutional barrier to effective organizational learning.
5. Discussion: Towards A Novel Security-Learning Process Model
We propose a novel process model based on the 4I Framework and the insights drawn from the case
study. To better appreciate the potential significance of the dynamic security learning (DSL) process model to practice, we
relate the model to the following scenario:
Scenario: PharmaCorp has invested in the development of anti-viral drugs which it cannot recoup
until the first drug is sold - a cycle of 10 years from discovery to market. Recently, key file servers
with sensitive R&D data crashed numerous times disrupting operations. Whilst investigating the
servers, PharmaCorp’s IRT notices server logs have been erased. Six months later, a foreign rival
releases a similar drug, beating PharmaCorp to the market. An internal estimate of losses to
PharmaCorp is $55 million.¹
5.1. The DSL Process Model
Figure 2 presents a preliminary dynamic security learning (DSL) process model that shows how
organizations can create novel structures and practices for gaining new security insights from
incident response. Given the context of this study is security learning from incident response, there
are three obvious stakeholders – security manager(s), IRTs and senior management. The fourth
stakeholder, the IRT member, is included as learning originates in individuals (not teams). These
appear in the top row of Figure 2.
¹ Note: Impact estimation taken from a comparable incident at Eli Lilly in Oct 2013 (BioSpectrum Bureau, 2013)
Figure 2: A Dynamic Security Learning (DSL) Process Model
Drawing from our discussion of the 4I framework, the left column shows the following key characteristics of the model:
- The learning model has six fundamental processes that are traversed in sequence for learning to progress; however, back-tracking occurs when required.
- Exploring occurs by traversing the learning processes such that insights gained from incident response eventually create change in security processes.
- Exploitation occurs by utilizing the new systems and structures to the advantage of the organization (traversing steps 1 to 6 in reverse).
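A minimal sketch of the sequential traversal with back-tracking might look as follows; the step names are assumed from the 4I discussion, since Figure 2's exact labels are not reproduced here.

```python
# The six DSL learning processes in traversal order; the step names are
# assumed from the 4I discussion rather than taken from Figure 2.
DSL_STEPS = [
    "intuiting", "attending", "interpreting",
    "experimenting", "integrating", "institutionalizing",
]

def next_step(current, back_track=False):
    """Exploration traverses steps 1 to 6 forward; back-tracking moves
    one step backwards when an insight needs rework. Exploitation
    corresponds to walking the sequence in reverse."""
    i = DSL_STEPS.index(current)
    if back_track:
        return DSL_STEPS[max(i - 1, 0)]
    return DSL_STEPS[min(i + 1, len(DSL_STEPS) - 1)]

print(next_step("intuiting"))                     # attending
print(next_step("integrating", back_track=True))  # experimenting
```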
We explain how the model applies to the scenario as follows:
Whilst investigating the server crashes, PharmaCorp’s IRT notices server logs have been erased.
PharmaCorp’s IRT members might not recognize (or might ignore due to service restoration
priorities) the security implications of server log erasure. However, the security management team
will recognize wider security implications of this event through a process of intuiting. For example,
that deliberate destruction of logs may have been perpetrated to hide malicious activity.
Security management may decide that the log destruction warrants further investigation. As a result,
a consultation workshop may take place (this is the ‘attending’ process as it considers alternative
viewpoints) where security managers may request IRT members to forensically investigate the log
itself to determine when and how the log was deleted. Security management may ask the IRT to
look for other signs of malicious activity that would not normally form part of standard operating
procedures for IRTs (e.g. evidence of tampering with back-ups).
Presentation of evidence of the deliberate destruction of the server logs and other evidence of
malicious activity constitutes a group-cognitive process aimed at creating a shared vision. The
stakeholders may decide that prevention (i.e. removing the vulnerabilities that allowed the intruder
to penetrate the organization) may not help the organization to learn about the motivations for such
attacks. Instead, it may be better to use a strategy of deception where the intruder’s activities were
logged during their attacks whilst they were under the impression that their intrusion was undetected.
Security management may implement honeypots to trap perpetrator(s) in a controlled environment
and monitor attempts to subvert organizational defences in the hope that the attacker(s) will reveal
their motivations and methods (strategic deception) (this is one example of the group-active process
of ‘experimenting’). They may discuss potential changes to the penal clauses of organizational
policy to deter insiders from engaging in malicious activity (strategic deterrence). The IRT may
discuss operational issues such as balancing the need to restore services immediately with the need
to avoid tampering with the digital environment to conduct further investigations. Changes to
standard operating procedures will be trialled among the IRT.
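The honeypot tactic discussed above can be illustrated with a toy TCP listener that records every connection attempt; this is a minimal sketch for illustration only, not a production honeypot and not anything described as deployed in the scenario.

```python
import socket
import threading
from datetime import datetime, timezone

def run_honeypot(port, max_conns=1, ready=None):
    """Listen on an otherwise-unused local port and log every
    connection attempt as (timestamp, source address)."""
    log = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen()
        if ready is not None:
            ready.set()  # signal that the listener is up
        for _ in range(max_conns):
            conn, addr = srv.accept()
            log.append((datetime.now(timezone.utc).isoformat(), addr[0]))
            conn.close()
    return log

# Simulate a single probe against the honeypot.
ready = threading.Event()
result = {}
t = threading.Thread(target=lambda: result.update(log=run_honeypot(45123, ready=ready)))
t.start()
ready.wait()
socket.create_connection(("127.0.0.1", 45123)).close()
t.join()
print(result["log"][0][1])  # 127.0.0.1
```

The design choice matters more than the code: a legitimate service would restore availability as fast as possible, whereas a honeypot deliberately keeps the attacker engaged so their methods can be observed.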
Senior management, security management, and IRTs may enter into a process of negotiation about
shared priorities for process changes (this group-cognitive process is ‘integrating’ as it develops
‘shared understanding and practices through dialogue and co-ordinated action’).
This is a critical phase in the learning framework as it orchestrates a discussion on competing
priorities and what necessary actions must be mandated given funding constraints and compliance
requirements. Security management may argue for a range of new initiatives such as (1) upgrades to
firewalls; (2) changes to security policy; (3) funding for a risk review; (4) changes to standard
operating procedures (SOPs) for Incident Response to allow for root cause investigations especially
where mission-critical systems are concerned.
Representatives of the IRTs may assert that root-cause investigations are not their concern and
conflict with service restoration priorities. Senior managers may share intelligence about possible
motivations for the suspected attack given the organization’s strategic plan. These may include
attacks by competitors or intelligence from Human Resources on disgruntled employees. They may
also argue for the need for better-quality metrics and compliance information to support strategic decision-making.
Despite the competing priorities, the three stakeholders may agree that some processes must
undergo re-alignment towards security objectives. Changes to security processes in the form of
management and technical controls discussed in the interpretation phase and accepted by senior
management will be implemented in this phase. These will occur through re-development of policy,
new risk assessments, training for security and incident response staff to reflect the new security
culture mandated by the change in routines (i.e. institutionalization of new structures, systems and
procedures to capture lessons learned on a routine basis).
5.2. Strategic Renewal: Exploitation vs Exploration
Once learning routines are institutionalized, e.g. they become part of Standard Operating
Procedures (SOPs), lessons can be more systematically exploited regardless of employee turnover
(i.e. feedback). As long as the organization follows the new SOPs there will be a corresponding
adjustment in its security behavior allowing for gradual adaptation to the new security risk
environment. New members of both the IRTs and Security Management will be required to adhere
to the newly mandated SOPs which preserve the lessons learned from security insights captured
during incident responses.
6. Conclusion
The primary contribution of this study is to explain how to practice ‘security learning’ by
identifying the particular stakeholders, learning processes, and framework of learning activities that
must be implemented at an individual, group and organizational level to allow effective security
learning to occur.
6.1. Contributions to Theory
Previous incident response case studies highlighted that a key security challenge for organizations is the
lack of communication, collaboration and sharing between IR teams and other teams (Tøndel, et al.,
2014). The case study offers a comprehensive explanation for this phenomenon. We now
understand that IRTs tend to deliberately exclude ‘outsiders’ from the critical early phases of
incident response to prevent ‘misunderstanding’ and ‘premature conclusions’ that can lead to
embarrassment. This exclusive behavior can be reinforced by organizational culture.
Secondly, we explain why IRTs deliberately ignore some opportunities to draw security lessons.
Our study suggests that a key reason is the competing priorities between security learning and
restoring service availability at the lowest possible cost (this is the same reason given for avoiding
lengthy investigations such as root cause analyses; see Killcrece, Ruefle, & Zajicek, 2004). As a
result, low-impact incidents such as anomalies that cannot be explained are simply logged rather than investigated.
This phenomenon should concern organizations that face complex and evolving security threats. In
such organizations, timely discovery of new patterns of attack is critical to developing security
situation awareness and effective and adaptive security defenses (new and innovative attacks tend to
leave a trail of anomalies) (see Webb, Ahmad, Maynard, and Shanks (2014) & Ahmad, Maynard,
and Park (2014) for discussions on situation awareness and strategy respectively). Recent
developments in security strategy theory argue that many organizations use probabilistic
distributions of risk to guide their selection of security controls. The underlying belief is that
security risks are known and predictable; therefore security learning is not required. This is ill-
advised for organizations that face external attacks that probe defenses and use innovative
(possibilistic) means of exploiting their specific vulnerabilities. It is imperative for such
organizations to recognize that security risks are unpredictable and transient, requiring considerable
learning to enable a rapid tactical response. Our insight highlights the critical role of security
learning in such scenarios and, by implication, contributes to theory on security strategy.
6.2. Contributions to Practice
The DSL model makes a number of contributions to practice. First, the model extends the limited
guidance in best practice standards on learning from incident experiences in the form of a detailed
step-by-step explanation of how learning should occur. Second, the model identifies the particular
stakeholders that must be involved, when they should be involved and precisely what activities they
should undertake in the interests of organizational learning. Third, the roles of stakeholders
implicit in the DSL model constitute their responsibilities, which can be used to guide the hiring
and selection processes in organizations. Fourth, the DSL model explains how organizations can
better utilize their IRTs for broader organizational aims (e.g. security management, but also audit).
Finally, whereas previous case studies have shown that there is a strong incentive for organizations
not to report incidents (see Tan, et al., 2003), this study points out the benefits of reporting incidents
and drawing security lessons from them.
References
Ahmad, A., Maynard, S. B., & Park, S. (2014). Information Security Strategies: Towards an
Organizational Multi-Strategy perspective. Journal of Intelligent Manufacturing, 25, 357-
Chan, C. C. (2003). Examining the relationships between individual, team and organizational
learning in an Australian hospital. Learning in Health and Social Care, 2, 223-235.
Cichonski, P., Millar, T., Grance, T., & Scarfone, K. (2012). Computer security incident handling
guide. In NIST Special Publication 800-61 Rev 2: NIST.
Cooke, D. L. (2003). Learning from incidents. In 21st System Dynamics Conference, NYC, New York.
Crossan, M. M., Lane, H. W., & White, R. E. (1999). An organizational learning framework: from
intuition to institution. Academy of Management Review, 522-537.
Desouza, K. C., & Vanapalli, G. K. (2005). Securing knowledge in organizations: lessons from the
defense and intelligence sectors. International Journal of Information Management, 25, 85-
Hansman, S., & Hunt, R. (2005). A taxonomy of network and computer attacks. Computers &
Security, 24, 31-43.
Hove, C., & Tårnes, M. (2013). Information security incident management: an empirical study of
current practice. In: Norwegian University of Science and Technology.
Killcrece, G., Kossakowski, K.-P., Ruefle, R., & Zajicek, M. (2003). Organizational models for
computer security incident response teams (CSIRTs). In: DTIC Document.
Killcrece, G., Ruefle, R., & Zajicek, M. (2004). Creating and managing computer security incident
response teams (CSIRTs). In 16th Annual First Conference: Improving Security Together.
Kotulic, A. G., & Clark, J. G. (2004). Why There Aren't More Information Security Research
Studies. Information and Management, 41, 597-607.
Levitt, B., & March, J. G. (1988). Organizational learning. Annual Review of Sociology, 319-340.
Rashman, L., Withers, E., & Hartley, J. (2009). Organizational learning and knowledge in public
service organizations: a systematic review of the literature. International Journal of
Management Reviews, 11, 463-494.
Shedden, P., Ahmad, A., & Ruighaver, A. B. (2010). Organisational learning and incident response:
promoting effective learning through the incident response process. In Proceedings of the
8th Australian Information Security Management Conference (pp. 139-150). Perth,
Australia: Edith Cowan University.
Shedden, P., Ruighaver, A. B., & Ahmad, A. (2010). Risk Management Standards – The Perception
of Ease of Use. Journal of Information Systems Security, 6.
Smith, P. (2013). Attacks ‘highlight need for data breach notification law’. In Financial Review.
Tan, T., Ruighaver, A. B., & Ahmad, A. (2010). Information security governance: when
compliance becomes more important than security. In Security and Privacy–Silver Linings
in the Cloud (pp. 55-67): Springer.
Tan, T., Ruighaver, T., & Ahmad, A. (2003). Incident Handling: Where the need for planning is
often not recognised. In Proceedings of the 1st Australian Computer, Network &
Information Forensics Conference.
Tøndel, I. A., Line, M. B., & Jaatun, M. G. (2014). Information security incident management:
Current practice as reported in the literature. Computers & Security, 45, 42-57.
Webb, J., Ahmad, A., Maynard, S. B., & Shanks, G. (2014). A Situation Awareness Model for
Information Security Risk Management. Computers & Security, 44, 391-404.
West-Brown, M. J., Stikvoort, D., Kossakowski, K.-P., Killcrece, G., & Ruefle, R. (2003).
Handbook for computer security incident response teams (csirts). In: DTIC Document.
Zietsma, C., Winn, M., Branzei, O., & Vertinsky, I. (2002). The war of the woods: Facilitators and
impediments of organizational learning processes. British Journal of Management, 13, S61-