Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC) 2015
Alternative Front End Analysis for Automated Complex Systems
Natalie Drzymala, Tim Buehner
The ASTA Group, LLC
Pensacola, Florida
Natalie.Drzymala@TheASTAGroup.com
Tim.Buehner@TheASTAGroup.com

M. Glenn Cobb
U.S. Army Research Institute
Fort Benning, Georgia
Marshell.G.Cobb.civ@mail.mil

John Nelson
Engility Corporation
Leavenworth, Kansas
John.Nelson@engilitycorp.com

Linda Brent
The ASTA Group, LLC
Pensacola, Florida
Linda.Brent@TheASTAGroup.com
ABSTRACT
A growing body of literature reports that task-based analyses alone are not sufficient for determining training
requirements for highly automated, complex systems that rely upon multilevel command and control integration.
This has spurred concerns among Army leaders that the traditional Systems Approach to Training (SAT) Front End
Analysis (FEA) strategy may not sufficiently identify training requirements for some emerging systems, and
provided impetus for our research effort to develop an alternative FEA strategy better suited for these types of
systems. The first phase of our effort focused on the research and design of potential alternative FEA strategies. The
second phase provided a use case application of an alternative FEA to existing air and missile defense system
training to validate and refine the strategy. During the third phase of our effort, we applied the alternative FEA to an
emerging integrated air and missile defense architecture. The refined alternative FEA strategy supplements
traditional SAT analyses with team-based and expertise-based analyses and was used to successfully identify
requirements beyond those found through traditional SAT methods alone.
ABOUT THE AUTHORS
Ms. Natalie Drzymala is a Senior Associate with The ASTA Group LLC. She serves as project manager,
researcher, analyst, and technical writer. Ms. Drzymala holds a master's degree from Florida State University and
has performed multiple training system analyses and research for the U.S. Army, Navy, Air Force and Special
Operations Forces. She performed research, analytical work and writing for this effort.
Dr. Tim Buehner is a Staff Associate with The ASTA Group LLC. He directs research studies for the organization,
and serves as technical director across programs. With a Ph.D. in Psychology, Dr. Buehner has also served on the
faculty of the University of Miami School of Medicine and Florida State University. He was acting Principal
Investigator for this effort.
Dr. M. Glenn Cobb leads a U.S. Army Research Institute (ARI) field research team focused on developing
effective tailored training strategies and tools to enhance institutional training across the Army. He has authored or
co-authored numerous technical reports and articles on a wide range of topics, including skill acquisition and
retention, drill and platoon sergeant training, trainee socialization, and digital training apps development.
Mr. John Nelson is a project manager, analyst, and management consultant with Engility. A retired Army officer,
he has managed a number of projects for a variety of Federal Government and Department of Defense clients. Mr.
Nelson is a 1987 graduate of the United States Military Academy at West Point and holds Master of Science
degrees from the University of Central Florida in both Training Simulations and Engineering Management.
Dr. Linda Brent is the Chief Executive Officer for The ASTA Group LLC. With an Ed.D., she also supports
various studies with clients and customers across the group’s portfolio. She has served as a Co-Principal Investigator
on projects with the National Science Foundation, the U.S. Air Force, and the Commonwealth of Virginia.
Additionally, Dr. Brent serves on the Executive Committee of the I/ITSEC Conference for Strategic Planning.
INTRODUCTION
There is growing concern among U.S. Army unit leaders that task-based analyses alone are no longer sufficient for
determining the full range of training requirements associated with highly automated systems, particularly when
these systems are incorporated into multilevel, networked Command and Control (C2) chains. The integration of
multilevel C2 information and decision requirements adds complexity to processes, communications, and decision-
making. Traditional front end analyses (FEAs) have often focused on individual tasks and roles with little regard for
critical decision and coordination points, interpersonal interactive skills and requirements, or the full range of
collective activities inherent in complex, networked systems. These complex systems collect information from
multiple sources, process it, and output recommendations for action.
As technological advances deliver an ever-increasing amount of information to Soldiers in the field, they must quickly interpret, assess, decide, and act on this information to successfully perform their mission, all while relying on automated systems to do so. Complicating matters further, the use of highly automated systems may actually impair
decision-making due to an over-reliance on system outputs (see Hawley & Mares, 2006; Hawley, Mares, &
Giammanco, 2005), an effect called automation bias. According to Hawley (2007), automation bias occurs when
operators, overly reliant on system data and automated responses for decision-making, fail to recognize errors in
system identification or faulty data and defer to the automated outcomes for decisions. These considerations, coupled with Army leadership's desire to keep abreast of emerging requirements, provided the impetus for our effort to
develop a viable alternative FEA strategy.
To develop, validate, and refine an alternative FEA strategy, we conducted a three-phase effort, from October 2013 to March 2015, focusing on the requirements of an existing air and missile defense (AMD) system and
an emerging integrated AMD architecture. During Phase I, we explored and developed potential alternative FEA
strategies, based on the team’s past FEA applications, current U.S. Army Training and Doctrine Command
(TRADOC) manuals and directives, and published literature (see Cobb, Brent, Buehner, Drzymala, & Nelson,
2014). This phase concluded with the recommendation of an alternative FEA strategy for use case applications in
the subsequent phases. During Phase II, the alternative strategy was applied to an existing AMD system’s air battle
management training program in order to determine if it could identify requirements previously not identified by
traditional task-based strategies (see Buehner, Drzymala, Brent, Cobb, & Nelson, 2015). Finally, in Phase III, it was
applied to an emerging integrated AMD architecture’s fire control requirements still under development (see
Drzymala, Buehner, Brent, Cobb, & Nelson, 2015). Data for each of these applications were collected through interviews and observations, as well as a review of government training materials that included course
documentation and evaluation data for the existing system and drafted task lists, planned function/role
configurations and designs, and capability documentation for the emerging multisystem architecture.
The Nature of Front End Analyses
U.S. Army training developers typically conduct FEAs using a task- or topic-centered approach called Analysis, Design, Development, Implementation, and Evaluation (ADDIE). The ADDIE process is embedded within the
Army’s larger Instructional Systems Design (ISD) and Systems Approach to Training (SAT) strategies prescribed
by TRADOC (U.S. Department of the Army, 2004; U.S. Department of the Army, 2013). These traditional FEAs
focus on individual tasks and duties, and treat collective tasks as individual tasks that must be performed in concert
with other individually defined tasks. A growing body of literature makes the case that task-based analysis methods
seem increasingly ill-suited for progressively more complex, multisystem environments. This is because the process
typically falls short in regard to analyses of critical decision and coordination points, interpersonal interactive skills
and requirements, and the full range of collective activities inherent in complex, networked systems.
An FEA should define the context framing the training needs and answer a series of questions that inform
subsequent training design, development, and delivery. Figure 1 (Cobb, et al., 2014) illustrates a generic FEA
process and the questions it typically addresses. Regardless of the specific analyses employed, a consistent feature of any FEA is that its results depend entirely on what is included in the analysis and the questions that are asked. As we considered potential alternative FEA strategies, we focused on the types of operational
issues faced by system operators in the field, especially known training and/or operational shortcomings.
FIGURE 1. FEA CORE CONSTRUCTS
Research Questions
This research was an applied effort to design and refine an alternative FEA strategy. While the use case applications
of the alternative FEA produced system specific findings and recommendations, the purpose of this paper is to focus
on an alternative FEA design and summarize findings regarding the utility of the approach. Three questions guided
the overall three-phase research effort.
1. Could an alternative FEA strategy identify requirements beyond those already established for a current system (i.e., one already being trained) using the traditional SAT approach to FEAs?
2. Could an alternative FEA strategy render recommendations for adjusting training progression to enhance expertise development?
3. Could the FEA prove flexible enough for analysis of both established and emerging systems?
AN ALTERNATIVE FRONT END ANALYSIS STRATEGY
The alternative FEA strategy injects additional analyses into the Army's standard SAT and uses the resulting information in conjunction with that produced by the standard SAT analyses; it supplements rather than replaces the SAT. In Figure 2 (Drzymala et al., 2015), the shaded boxes contain the major analyses of the traditional SAT and the bold outlined boxes depict the complementary alternative analyses. We wanted to retain the SAT's strengths and familiarity while offering an additional capability to capture non-task-based training requirements. SAT compatibility was deemed desirable
because of: a) practitioners’ familiarity with the SAT model; b) standalone analytic compatibility for established
systems designed using SAT; and c) complementary capability within the SAT framework for emerging systems.
FIGURE 2. ALTERNATIVE FEA STRATEGY
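Conceptually, the supplementary analyses feed the same pool of training requirements as the traditional SAT analyses. The following sketch is our own illustration of that relationship; the specific SAT analysis names are placeholders rather than a reproduction of the boxes in Figure 2.

```python
from dataclasses import dataclass, field

@dataclass
class Analysis:
    """One analytic activity within the overall FEA strategy."""
    name: str
    source: str                                  # "SAT" (traditional) or "supplement" (alternative)
    requirements: list = field(default_factory=list)

# Placeholder names for the traditional SAT analyses (shaded boxes in Figure 2),
# plus the two complementary analyses described in this paper (bold outlined boxes).
fea_strategy = [
    Analysis("Mission analysis", "SAT"),
    Analysis("Individual task analysis", "SAT"),
    Analysis("Collective task analysis", "SAT"),
    Analysis("Team analysis", "supplement"),
    Analysis("Expertise analysis", "supplement"),
]

def consolidated_requirements(analyses):
    """Pool the requirements produced by all analyses into a single list."""
    return [req for a in analyses for req in a.requirements]
```

The point of the structure is simply that the supplementary analyses contribute to, rather than replace, the requirements produced by the SAT.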
The Team Analysis
One basic FEA question is “Who needs to be trained?” This question is often answered before the analysis is begun,
as most FEAs typically align with personnel designations, which are then used to further define associated missions,
jobs, and duties. A shortcoming of this approach is that it often removes the individual Soldier from the context of
the overall operation. In other words, a Soldier's individual mission, job, duties, and tasks are prioritized above or
without consistent consideration of their role in the unit's mission. Thus, collective tasks are conceptualized in the
context of an individual's task performance. In answering “Who should be trained?”, our alternative FEA postulates
that the team should be considered as the proper unit of analysis in complex systems requiring the interaction of
multiple individuals and agencies, rather than a simple aggregation of individually defined tasks and responsibilities.
Team Analysis Background
Building on work by Cooke, Salas, and Cannon-Bowers (2000), Weaver, Rosen, Salas, Baum, and King (2010) further refined three dimensions of team competencies, defined by team member interactions rather than individual
performance (see Table 1, following). For example, “accurate/shared mental models,” a competency within
cognition, reflects the mental organizational structure or model that an individual has relative to his or her role and
the degree to which it is shared or common across team members. Thus, this competency includes an individual
level requirement (i.e., accuracy) and a collective requirement (i.e., shared and accurate across the team). Other
competencies, such as mutual trust, backup behavior, and conflict management, are relevant only in a collective or
team environment. This distinction illustrates how team-based competencies can inform collective training
requirements that may not be identified from a solely task-based analytic model. While some competencies may be
identified through a task-based analysis (e.g., cue-strategy associations) other competencies (e.g., mutual trust,
shared mental models, and mutual performance monitoring) may go unnoticed.
The presence of distinct team competencies (vice individual competencies) highlights the fact that collective training
within a team context is not a simple matter of joining or adding individual tasks at the proper time and place. Team
training is subject to different processes, attributes, requirements, and goals than individual training. Knowledge
construction, in particular, is tied to team performance with multiple researchers pointing to the importance of
shared cognition (Salas, Cooke, & Rosen, 2008) as a precursor to successful team performance. In their review of
fifty years of team performance research, Salas and his associates concluded that shared cognition is a critical driver
of team performance (Salas & Fiore, 2004, as cited in Salas, et al., 2008). Shared cognition is particularly important
in developing shared mental models, team situation awareness, effective team communication, and team decision-
making, all of which are critical components of mission performance using highly automated systems within
multilevel C2 mission environments.
TABLE 1. TEAM TRAINING COMPETENCIES

Attitude
• Mutual trust: Trust across and between team members
• Team / collective efficacy: How well the team works together effectively
• Team / collective orientation: Common focus of the team
• Psychological safety: Feeling of security within the team and its decisions

Cognition
• Accurate / shared mental models: Common cognitive model for mission activities
• Cue-strategy associations: Triggers that provide cues for associated actions

Behavior
• Closed-loop communication: Communications within the team
• Team leadership: Leadership roles for each crew member
• Mutual performance monitoring: Individual monitoring of team performance
• Backup / supportive behavior: Performance by an individual that benefits the team
• Conflict management: Management of disputes within the team
• Mission analysis: Individual and collective analysis of desired outcomes
• Team adaptation: Ability of the team to adapt to change

Cobb, et al. (2014), adapted from Weaver, Rosen, Salas, Baum, and King (2010)
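To suggest how the Table 1 taxonomy might be carried into data collection, the sketch below (our own illustration, not part of the published instrument) expresses the competencies as a checklist that an analyst could annotate with evidence during interviews or observations.

```python
# Illustrative only: the Table 1 taxonomy expressed as a checklist an analyst
# could annotate with evidence during interviews and observations.
TEAM_COMPETENCIES = {
    "Attitude": [
        "Mutual trust",
        "Team / collective efficacy",
        "Team / collective orientation",
        "Psychological safety",
    ],
    "Cognition": [
        "Accurate / shared mental models",
        "Cue-strategy associations",
    ],
    "Behavior": [
        "Closed-loop communication",
        "Team leadership",
        "Mutual performance monitoring",
        "Backup / supportive behavior",
        "Conflict management",
        "Mission analysis",
        "Team adaptation",
    ],
}

def new_observation_record():
    """Return a blank record with one evidence list per competency."""
    return {comp: [] for comps in TEAM_COMPETENCIES.values() for comp in comps}
```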
Team Analysis Design
The Team Analysis component of our alternative FEA strategy can be conceptualized as a top-down analysis – the
operational environment, crew configuration, and mission requirements are identified early in the process, followed
by a deconstruction of roles, tasks, and processes that constitute mission performance. While this approach is similar
to a Mission Analysis conducted with traditional task analyses, it differs in that the primary unit of analysis is team
performance, rather than individual performance. This top-down approach required that a conceptual framework be
established to organize and bound the information for our analyses. Based largely on information provided by AMD subject matter experts (SMEs), our framework was bounded by a focus on Air Battle Management (ABM) performance.
Figure 3, following, is a detail of the Team Analysis shown previously in Figure 2.
FIGURE 3. TEAM ANALYSIS COMPONENT OF ALTERNATIVE FEA
As illustrated in Figure 3 (Drzymala et al., 2015), our team analysis design is process and function oriented and examines the relationships between individual and team performance. In this research effort, the functions, inputs, and outputs of team requirements were identified, constructed, and compared to each crew role and to the interdependencies within the crew-based system. Interdependencies were examined to identify communication and coordination processes and requirements within the crew, as well as between the crew and other command echelons.
As shown in Figure 3, the first step is to define the team. While this may seem obvious, consider that the team
definition establishes the scope of the analysis, and in turn, frames its results. The team can be as small as a two-
person crew that must work in concert, or as large as a multi-theater group of agencies that are required to perform
collectively and collaboratively. The defined team should directly reflect the purpose and scope of the analysis.
Questions guiding the team analysis were oriented around teamwork and team performance during the ABM mission, including operational context, environment, team composition and processes, and individual and collective task management and coordination requirements. The questions were organized to begin with larger issues, such as describing the mission, and then followed with more specific questions to gather information about specific interactions during performance and the details that defined those interactions.
1. Describe the team’s mission.
2. Identify and describe the crew composition.
3. Describe the team's position and mission relationships with respect to higher and lower echelons.
4. Describe the team’s mission requirements and processes. (Use follow-up questions).
a. Identify and describe key events/moments (e.g., milestones) during each phase.
b. Using the defined mission requirements, identify crew member tasks and responsibilities. (Use
more follow-up questions to gather details with greater specificity).
i. Describe each crew member’s responsibilities during each mission phase.
ii. Describe what each crew member does to accomplish/reach each milestone.
iii. What are the knowledge, skills, and abilities each crew member needs to accomplish their
job?
iv. What are the crew leaders’ responsibilities during each mission phase?
Crew-level processes were distinguished from individual tasks and requirements using team-focused questions, such as:
1. Describe what the crew does to accomplish each milestone.
2. Describe how the crew interacts during each phase.
3. Who coordinates crew interactions?
4. Does another team member step in to assist or question what is happening?
Additionally, more focused questions were asked to gather information pertaining to specific topics or issues (an illustrative sketch of the resulting interview protocol follows these questions), such as:
1. What are the risks during each mission phase?
2. How are risks mitigated?
3. How are errors detected?
4. How are errors corrected?
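The tiered structure of these questions can be captured as a simple nested protocol. The sketch below is purely illustrative; the groupings and wording condense the questions listed above and do not reproduce the instrument actually fielded.

```python
# Illustrative sketch: the team-analysis questions organized from broad
# mission framing down to crew-level process and risk/error probes.
TEAM_ANALYSIS_PROTOCOL = {
    "mission_framing": [
        "Describe the team's mission.",
        "Identify and describe the crew composition.",
        "Describe the team's position and mission relationships with respect to higher and lower echelons.",
    ],
    "mission_requirements": [
        "Identify and describe key events/milestones during each phase.",
        "Describe each crew member's responsibilities during each mission phase.",
        "Describe what each crew member does to accomplish each milestone.",
    ],
    "crew_processes": [
        "Describe what the crew does to accomplish each milestone.",
        "Describe how the crew interacts during each phase.",
        "Who coordinates crew interactions?",
    ],
    "risk_and_error": [
        "What are the risks during each mission phase?",
        "How are risks mitigated?",
        "How are errors detected and corrected?",
    ],
}

def interview_script(protocol):
    """Flatten the protocol into an ordered question list for an interview session."""
    return [q for section in protocol.values() for q in section]
```

Flattening the protocol preserves the intended order, moving from broad mission framing down to crew-level processes and risk and error handling.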
As described earlier, in the use case application of our FEA strategy to an existing AMD system, we focused on
crews predominantly responsible for air battle management. Subsequently, fire control crews were the subject of our
use case application to an emerging multisystem architecture to set boundary conditions that paralleled those in the
previous use case application. Our design employed open-ended questions to allow interviewees to describe
functions and processes from their own mission experiences. This approach allowed the subject matter experts to use
their experience to provide detailed information about the mission, the requirements, and their performance, without
being limited by our team’s preconceived ideas or performance expectations.
The Expertise Analysis
Training to expert performance has a long history in performance-based activities (e.g., sports, dance), and is applied
increasingly to other domains (see Ericsson, 2006, for a comprehensive discussion). Chi (2006) proposed a
proficiency scale (novice, initiate, apprentice, journeyman, expert, and master) directly relevant to military training.
Expertise can be viewed as a learning process characterized by active problem solving in a specific context (Valkeavaara, 1999). Expertise is related not to the amount of experience in the domain, but rather to the amount of deliberate effort and practice applied to improve performance (van Gog, Ericsson, Rikers, & Paas, 2005).
Expertise Analysis Background
Kozlowski (1998) identified different types of expertise and differentiated routine expertise from adaptive expertise.
According to Kozlowski, routine expertise is effective in well-defined predictable situations, and can be achieved
through the learning and rehearsal of routine tasks. Adaptive expertise, on the other hand, is necessary for more
ambiguous, unpredictable situations. Adaptive expertise requires problem solving and adaptation of previously
learned skills, knowledge, and experiences to achieve successful task outcomes. Following this model, expertise, situation predictability, and task requirements all lie along corresponding continua, as illustrated in Figure 4.
FIGURE 4. SITUATION PREDICTABILITY, EXPERTISE, AND TRAINING CONTINUUM
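A minimal sketch of the relationship depicted in Figure 4 follows. The numeric scale and threshold are our own illustration, not values drawn from Kozlowski (1998) or from the figure.

```python
def training_emphasis(situation_predictability: float) -> str:
    """Illustrative mapping along the Figure 4 continuum.

    situation_predictability: 1.0 = fully predictable and well defined;
    0.0 = highly ambiguous and unpredictable. The 0.5 threshold is arbitrary.
    """
    if situation_predictability >= 0.5:
        # Routine expertise: learning and rehearsal of well-defined tasks.
        return "routine task training and rehearsal"
    # Adaptive expertise: problem solving and adapting previously learned skills.
    return "problem-solving and adaptation of previously learned skills"
```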
According to Valkeavaara (1999), a better understanding of expertise can be achieved by taking a closer look at problematic situations encountered by experts in the field, that is, those lying toward the right side of the
continuum illustrated in Figure 4. Considered within the FEA context, these are the situations most dependent on
decision-making and problem solving, where established (and trained) performance requirements may not be
sufficient for mission success. With this in mind, the expertise portion of our alternative FEA focused on the type of
knowledge gained from problematic encounters, the knowledge base needed to place operators in a position to
resolve and learn from those situations, and how that knowledge translates to performance.
Expertise Analysis Design
A lack of attention to expertise requirements in traditional FEAs may be due to the expectation that expertise is
achieved through field experience and on-the-job training, two things that do not often fall within the realm of a
training requirements analysis. Our intent in incorporating expertise analysis within an FEA is not to determine how
to train to expert levels, but rather how training can be used to enable faster or more efficient expertise development.
The expertise analysis component of our alternative FEA strategy identified characteristics of expert performance
and provided an understanding of how expertise was developed (e.g., through training, experience, or other means).
Research questions that drove our expertise analyses included:
1. What differentiates expert performance from qualified performance?
2. What are characteristics of experts?
i. What are the skills that experts demonstrate?
ii. What are the capabilities that an expert possesses?
iii. How would you describe an expert’s understanding of (the particular aspect of the system)?
3. How did you develop your expertise?
i. How would you describe what you did to develop expertise in (the particular aspect of
the system)?
4. How does expertise decline?
i. What aspects decline and at what rate relative to others?
Expert operators were sought out to inform our expertise analysis. Experts, for this research effort, were defined as
operators with experience in deployed and/or mission theaters. In addition to providing firsthand data based on their own experiences and performance, these experts were also able to reflect on the progression of their experience and identify the salient aspects of training or experience that transitioned them from novice to qualified operator to expert
operator.
Somewhat unexpectedly, we found data from non-experts (operators with limited operational experience) to also be
valuable in identifying expertise requirements. While non-experts could not comment on their own expertise, they
provided detailed information about their own performance and learning challenges. Non-experts typically identified
skills and characteristics of experts to emulate for their own development, and indicated that expert performance
demonstrations had highlighted areas that they were struggling to develop. Non-expert data, on its own, would not
have been sufficient for our expertise analysis. Combined with experts’ data, however, it provided an opportunity to
triangulate data from multiple perspectives as illustrated in Figure 5.
FIGURE 5. TRIANGULATING EXPERTISE ANALYSIS DATA
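The triangulation illustrated in Figure 5 can be sketched as a cross-check of themes raised by the two groups. The function and example themes below are our own illustration of the idea; the analysis actually performed was qualitative.

```python
def triangulate(expert_themes: set, non_expert_themes: set) -> dict:
    """Illustrative cross-check of themes from expert and non-expert interviews."""
    return {
        "corroborated": expert_themes & non_expert_themes,     # raised by both groups
        "expert_only": expert_themes - non_expert_themes,      # reported only by experts
        "non_expert_only": non_expert_themes - expert_themes,  # e.g., learning challenges
    }

# Hypothetical example: a theme appearing in both data sources would be
# treated as a higher-confidence expertise requirement.
result = triangulate(
    {"deep system knowledge", "cue recognition"},
    {"deep system knowledge", "procedural speed"},
)
```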
Our expertise analysis focused on data pertaining to complex, difficult, and/or ambiguous situations encountered in
the mission or training environment. This orientation was taken in the expectation that adaptive, rather than routine,
expertise would be most informative in differentiating expert and non-expert performance. The research team assumed there was little difference between qualified and expert performance during routine tasks.
the other hand, it was expected that expert operators would be more skilled at responding to ambiguous situations
than their less accomplished counterparts. Our interview questions, then, were designed to understand the nature of
such events and how individuals subsequently learned from them, as illustrated by the following sample questions.
• Identify and describe an unpredictable event you encountered during operations.
o How was the event identified?
o How was the event handled and resolved?
o What knowledge and/or skill did you require to handle/resolve the event?
• If unpredictable events cannot be identified through the interviews, a similar line of questioning may be followed using events characterized as "difficult" or "unusual" rather than unpredictable.
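As a further illustration, responses to these event-based questions could be captured in a simple record for later analysis. The field names and example values below are hypothetical, not drawn from interview data.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CriticalEvent:
    """Illustrative record of one unpredictable (or difficult/unusual) event."""
    description: str                     # what happened during operations
    how_identified: str                  # how the event was recognized
    how_resolved: str                    # how it was handled and resolved
    knowledge_skills_used: List[str] = field(default_factory=list)
    lessons_learned: List[str] = field(default_factory=list)

# Hypothetical example entry; details are placeholders, not actual interview data.
event = CriticalEvent(
    description="Ambiguous track behavior during an air battle",
    how_identified="Operator noticed system output inconsistent with other cues",
    how_resolved="Crew cross-checked data and escalated to a higher echelon",
    knowledge_skills_used=["system knowledge", "cue recognition"],
    lessons_learned=["verify automated outputs against independent cues"],
)
```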
This questioning yielded information about the types of situations, the types of solutions sought or attempted, and
the types of knowledge and skills gained from these experiences. Understanding the knowledge gained from problematic encounters, and the knowledge base needed to place operators in a position to resolve and learn from such situations, is important for determining the salient aspects of training or experience that facilitate the accumulation of new and advanced knowledge and skills.
RESULTS
The use case applications to an existing, task-based AMD training program and the emerging requirements
of a new multisystem architecture provided a number of insights into the utility of our alternative FEA strategy.
Research Question 1: Could an alternative FEA strategy identify requirements beyond those already established for a current system (i.e., one already being trained) using the traditional SAT approach to FEAs?
Given that the existing training system is the result of defined and refined requirements for the qualification and certification of operators, as determined through the application of traditional SAT methods, any unique requirements emerging through the application of our alternative strategy could arguably be: 1) newly revealed
because they were not captured by previous SAT applications; or 2) newly found because the SAT had been
incorrectly/ineffectively applied previously and missed some requirements. The latter point seems unlikely since the
existing training program is the result of several years of development and refinement through SAT applications.
Interestingly, the requirements we identified were not surprising to the community; rather, they were fairly well known and understood across the SMEs we interviewed. Yet previous analyses provided no avenue to capture and
define them in a manner readily translatable to actual training programs and requirements. Most significant in our
findings was an apparent lack of formal training in key team skills, such as crew resource management and
situational awareness. Teamwork training appeared to be a by-product of repeated exercise activities that required
team members to work together to meet mission (and evaluation) goals. This is not to say that teamwork skills are
not being developed (they are) but suggests they are not being developed as efficaciously as possible. Our
alternative FEA indicated that these skills required focused, tailored training and measures in order to be sufficiently
developed, evaluated, and refined across a wide range of crewmembers and instructors.
Research Question 2: Could an alternative FEA strategy render recommendations for adjusting training
progression to enhance expertise development?
While expertise was only indirectly inferred in previous SAT analyses, with their reliance on well-defined, easily measured task performance standards, our FEAs showed that greater consideration needed to be given to the mechanisms defining expertise earlier in operators' training progression. An in-depth understanding of the system, and the ability to apply system knowledge in a wide range of contexts, were seen by interviewed SMEs as primary hallmarks of an expert. These findings relate to the previously cited work by Hawley (2007) regarding automation bias. Much as the SMEs in our FEA observed, Hawley notes that operators' lack of understanding of how the system works limits their ability to identify and react to faulty data. In other words, a lack of system knowledge contributes to automation bias, which hinders effective decision making, itself one of the hallmarks of recognized expertise.
Currently, operators gain knowledge and develop their expertise primarily through direct experience, peer (or near-peer) coaching, and informal self-directed study and training. Employing expertise analysis methods showed that
current evaluations are performance based without consistent assessment of the depth and scope of crewmembers’
understanding. Consequently, operators can pass evaluations and certifications without truly knowing or
understanding the "whys" behind their actions. Reconstructing training programs to emphasize system knowledge
and understanding of how actions impact system operations provides a sounder foundation and context for
subsequent training and expertise development. Our analyses also indicate that understanding how the system fits
within and impacts the context of the larger mission will increase conceptualization of the system as a tool to
perform the mission, rather than viewing successful system operations as the mission itself.
Research Question 3: Could the FEA prove flexible enough to be useful for analysis of both established and
emerging systems?
One problem inherent in any emerging system is its lack of use in an operational environment. Consequently,
operational procedures and documentation may not be fully tested and personnel experienced in using the system
may not be available. Both situations were true of the selected emerging multisystem architecture, and we found
ourselves lacking sufficient data sources to conduct a complete team or expertise analysis for the alternative FEA.
As a result, research on the emerging system examined during the third and final phase of our effort relied
heavily on system documentation, emerging test outcomes, and findings from the analyses conducted for Phase II of
our effort. Phase II findings provided a baseline for like roles and functions and allowed us to extrapolate to the
emerging systems. The system documentation we relied on included a user manual developed by the system
developer, capability documentation, and evolving operational task lists. All were living documents and subject to
revision during our research. The capability documentation contained proposed mission aspects, and functions and
responsibility areas of the team members included in the analysis. For the most part, this information provided a
framework for understanding the draft tasks provided by the operational task lists. Interviews with personnel
familiar with and directly involved in the emerging system design and development were used to provide context for
the information in the documentation and to address specific questions regarding evolving system implementation
and projected task/role allocations.
To address the lack of experienced operators, we used archival data collected during the previous phase to leverage relevant findings where the existing system's air battle management requirements and issues paralleled the projected system's fire control tasks and requirements. While actual relevance cannot be confirmed until Soldiers
begin applying the emerging system to the air defense mission, during operations and in test simulations, this
adjustment seemed to provide the flexibility needed to inform emerging system training requirements and designs.
FUTURE APPLICATIONS OF THE ALTERNATIVE FEA STRATEGY
The system context should be carefully considered when determining whether the alternative approach explored in
this research effort should be used for a future FEA. It is important to remember that this alternative builds upon a
traditional task-based analysis foundation. In the case of a new or evolving system, the FEA must remain flexible
enough to adapt to changes in defined roles, tasks, and responsibilities. The utility of the team analysis component is self-evident: it is only meaningful when a team, crew, or other defined collective is an appropriate unit of analysis and performance. However, a key consideration is not to underestimate the degree to which actual
operational success depends upon collective interactions and collaborative efforts between multiple actors or
agencies. In the case of the expertise analysis, one clear factor is the availability of subject matter experts to provide
firsthand information and assessments based on their own proven experience or performance. Second, a focused,
detailed expertise analysis is probably most useful in cases where adaptive expertise is required for successful
performance. For routine expertise requirements, the necessary time investment for this type of analysis would
probably not yield sufficient benefits (i.e., new information) to justify its use beyond relying on the outcomes from a
well-designed task analysis.
Overall, our research demonstrated that much is to be gained by applying this alternative FEA strategy to
existing systems and training programs. However, this approach is also recommended for emerging systems, provided that (a sketch of this decision rule follows the list):
a) a similar system can be used to inform the expertise and team analyses; OR
b) in the absence of a similar system, after a traditional task-based analysis has adequately defined critical
positions, roles, functions, and performance standards and an initial pool of training experience has
been developed among system designers and/or simulation and test outcomes; AND
c) the FEA focuses first on team dynamics and processes with the expertise analysis delayed until a
foundational cadre of personnel have developed at least a functional expertise with the systems’
requirements through simulators, operational tests, and functional assessments.
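Read as a decision rule, the conditions above amount to (a OR b) AND c. The following sketch is our own paraphrase of that logic, not a formal criterion from the research.

```python
def alternative_fea_recommended_for_emerging_system(
    similar_system_available: bool,
    task_analysis_defines_roles_and_standards: bool,
    initial_training_experience_pool_exists: bool,
    team_analysis_precedes_expertise_analysis: bool,
) -> bool:
    """Illustrative reading of conditions a), b), and c): (a OR b) AND c."""
    a = similar_system_available
    b = (task_analysis_defines_roles_and_standards
         and initial_training_experience_pool_exists)
    c = team_analysis_precedes_expertise_analysis
    return (a or b) and c
```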
ACKNOWLEDGEMENTS
The researchers would like to thank the U.S. Army Fires Center of Excellence, Fort Sill, OK; the U.S. Army TRADOC Capability Manager – Army Air and Missile Defense Command (TCM-AAMDC) Capability Development Integration Directorate; the Office of the Chief of the Air Defense Artillery (OCADA), Fort Sill, OK; and the numerous personnel who provided their time and expertise, allowing us to collect information, probe their insights, and observe their performances during the course of our research.
REFERENCES
Buehner, T. M., Drzymala, N., Brent, L., Cobb, M. G., & Nelson, J. (2015). Patriot training: Application of an
alternative front end analysis (ARI Research Report 1984). Fort Belvoir, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.
Chi, M. T. (2006). Two approaches to the study of experts' characteristics. In K. A. Ericsson, N. Charness, & P. J. Feltovich (Eds.), The Cambridge handbook of expertise and expert performance (pp. 21-30). Cambridge University Press.
Cobb, M. G., Brent, L., Buehner, T. M., Drzymala, N., & Nelson, J. (2014). An alternative front end analysis for
complex systems (ARI Research Report 1981). Fort Belvoir, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.
Cooke, N. J., Salas, E., Cannon-Bowers, J. A., & Stout, R. J. (2000). Measuring team knowledge. Human Factors:
The Journal of the Human Factors and Ergonomics Society, 42(1), 151-173.
Drzymala, N., Buehner, T. M., Cobb, M. G., Nelson, J., & Brent, L. (2015). Application of an Alternative Front End
Analysis: The Army Integrated Air and Missile Defense Fire Control Element (submitted for publication).
Fort Belvoir, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.
Ericsson, K. A. (Ed.). (2006). The Cambridge handbook of expertise and expert performance. Cambridge University
Press.
Hawley, J. K. (2007). Looking Back at 20 Years of MANPRINT on Patriot: Observations and Lessons. Adelphi,
MD: Army Research Laboratory.
Hawley, J. K. (2011). Not by widgets alone: The human challenge of technology-intensive military systems. Armed
Forces Journal, Feb 2011. Retrieved from http://www.armedforcesjournal.com/2011/02.
Hawley, J. K., & Mares, A. L. (2006). Developing effective human supervisory control of air and missile defense
systems. Adelphi, MD: Army Research Laboratory.
Hawley, J. K., Mares, A. L., & Giammanco, C. A. (2005). The Human Side of Automation: Lessons for Air Defense
Command and Control. Adelphi, MD: Army Research Laboratory.
Salas, E., Cooke, N. J., & Rosen, M. A. (2008). On teams, teamwork, and team performance: Discoveries and
developments. Human Factors, 50, 540–547.
Salas, E., & Fiore, S. M. (Eds.). (2004). Team cognition: Understanding the factors that drive process and
performance. Washington, DC: American Psychological Association.
U.S. Department of the Army. (2004). Systems approach to training analysis. TRADOC Pamphlet 350-70-6. Fort
Monroe, VA: Headquarters, U.S. Army Training and Doctrine Command.
U.S. Department of the Army. (2013). Army Educational Processes. TRADOC Pamphlet 350-70-7. Fort Eustis, VA:
Headquarters, U.S. Army Training and Doctrine Command.
Valkeavaara, T. (1999). Sailing in calm waters doesn't teach: Constructing expertise through problems in work—the case of Finnish human resource developers. Studies in Continuing Education, 21(2), 177-196.
van Gog, T., Ericsson, K. A., Rikers, R. M., & Paas, F. (2005). Instructional design for advanced learners: Establishing connections between the theoretical frameworks of cognitive load and deliberate practice. Educational Technology Research and Development, 53(3), 73-81.
Weaver, S., Rosen, M., Salas, E., Baum, K., & King, H. (2010). Integrating the science of team training: Guidelines for continuing education. Journal of Continuing Education in the Health Professions, 30(4), 208-220.