Architecture for a Soldier-Centered Learning Environment



In January 2011, the US Army Training and Doctrine Command (TRADOC) published the "Army Learning Concept for 2015 (ALC 2015)" [1], describing sweeping changes in the way Soldiers will be trained in the future. Noting that digital-age learners are comfortable with technology, ALC 2015, now known as the Army Learning Model (ALM), describes how current and future technology can be leveraged to "make learning content more operationally relevant, engaging, individually tailored, and accessible." The concept moves away from instructor-led training built on hundreds of presentation slides toward 1) small-group, collaborative problem-solving environments; 2) learning tailored to the individual's experience level; and 3) a blended learning environment using live, virtual, and constructive simulations and games. The Army Research Laboratory's Simulation and Training Technology Center and the Army Research Institute are conducting research into advanced technology-enabled training methodologies. They are developing an integrated learning environment and prototype instructional materials, and assessing their effectiveness as tools to develop, deliver, and track training and education. Consistent with ALC 2015 concepts, the Soldier-Centered Army Learning Environment (SCALE) provides a prototype data-driven architecture to support training and education across multiple hardware platforms (personal computers and mobile devices), using mobile applications, virtual classrooms, and virtual worlds. This paper provides an overview of the initial SCALE prototype and a sample training application based on land navigation. In addition, we discuss preliminary results of applying data-driven concepts in a learning environment, including the use of an ontology engine to organize training content and the methods used to deliver this content and data to client applications.
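The details of SCALE's ontology engine and client API are not given in the abstract, so the following is only an illustrative sketch, with invented names and content: an ontology-like topic hierarchy in which each training topic can carry platform-specific content, and a lookup that falls back to a broader topic when a client platform has no content of its own.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch only: SCALE's real ontology engine and data formats
# are not described in the abstract, so all names here are invented.

@dataclass
class ContentNode:
    topic: str
    parent: Optional[str] = None                  # broader topic in the ontology
    formats: dict = field(default_factory=dict)   # platform -> content URI

class OntologyStore:
    """Organizes training content as a topic hierarchy, serving clients
    on multiple hardware platforms (PC, mobile) from one data store."""

    def __init__(self):
        self.nodes = {}

    def add(self, node):
        self.nodes[node.topic] = node

    def content_for(self, topic, platform):
        """Walk up the hierarchy until content for the platform is found."""
        node = self.nodes.get(topic)
        while node is not None:
            if platform in node.formats:
                return node.formats[platform]
            node = self.nodes.get(node.parent) if node.parent else None
        return None

store = OntologyStore()
store.add(ContentNode("land_navigation", formats={"pc": "landnav_vw.scene"}))
store.add(ContentNode("map_reading", parent="land_navigation",
                      formats={"mobile": "map_reading.apk"}))

store.content_for("map_reading", "mobile")  # direct hit on the mobile content
store.content_for("map_reading", "pc")      # falls back to the parent topic
```

The fallback walk is one way a data-driven store could serve the same curriculum to different client applications without duplicating content per platform.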




This paper investigates the use of ontologies in processes of collaborative learning and knowledge generation. The creation and use of ontologies is analysed from an activity-theoretical perspective in order to understand processes of shared conceptualization as well as the role of ontologies in processes of change and transformation. Scenarios of ontology-based collaborative learning and knowledge creation are presented. This work is based on cultural-historical activity theory, which provides a theoretical framework (1) for understanding the processes of knowledge creation that take place when generating and using ontologies and (2) for investigating the dynamic relationship (coupling) between individual learning and the transformation of a community.
In the traditional systems-modeling approach, the modeler is required to capture a user's view of some domain in a formal conceptual schema. The designer's conceptualization may or may not match the user's conceptualization. One reason for these conflicts is the lack of an initial agreement among users and modelers concerning the concepts belonging to the domain. Such an agreement could be facilitated by means of an ontology. If the ontology is constructed and formalized in advance so that it can be shared by the modeler and the user during development, such conflicts would be less likely to happen. Following up on that, a number of investigators have suggested that those working on information systems should make use of commonly held, formally defined ontologies that would constrain and direct the design, development, and use of information systems, thus avoiding the difficulties mentioned above. Whether ontologies represent a significant advance over the more traditional conceptual schemas has been challenged by some researchers. We review and summarize some major themes of this complex discussion. While recognizing the commonalities and historical continuities between conceptual schemas and ontologies, we think that there is an important emerging distinction that should not be obscured and should guide future developments. In particular, we propose that the notions of conceptual schema and ontology be distinguished so as to play essentially different roles for the developers and users of information systems. We first suggest that ontologies and conceptual schemas belong to two different epistemic levels: they have different objects and are created with different objectives. Our proposal is that ontologies should deal with general assumptions concerning the explanatory invariants of a domain, those that provide a framework enabling the understanding and explanation of data across the domain.
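The proposed distinction can be made concrete with a small sketch. The example below is hypothetical (the cited work proposes no code): a shared ontology fixes the domain's invariant concepts and relations, while each application's conceptual schema selects from them, and a simple check flags a schema that uses a relation the ontology does not sanction.

```python
# Hypothetical sketch: one shared ontology, multiple conceptual schemas.
# The ontology states domain invariants; a schema is one system's view,
# constrained to use only concepts and relations the ontology defines.

ONTOLOGY = {
    "Course":     {"is_a": "LearningActivity", "taught_by": "Instructor"},
    "Instructor": {"is_a": "Person"},
    "Soldier":    {"is_a": "Person", "enrolls_in": "Course"},
}

def schema_is_consistent(schema):
    """Check that every entity and relation a schema uses is defined
    in the shared ontology, i.e. the ontology constrains the schema."""
    for entity, relations in schema.items():
        if entity not in ONTOLOGY:
            return False
        for rel in relations:
            if rel != "is_a" and rel not in ONTOLOGY[entity]:
                return False
    return True

# A registration system's schema: stays within the ontology.
registrar_schema = {"Soldier": ["enrolls_in"], "Course": ["taught_by"]}

# A schema that invents a relation the ontology does not sanction.
rogue_schema = {"Course": ["graded_by"]}
```

Here the designer/user conflicts the abstract describes surface mechanically: `rogue_schema` fails the check because its conceptualization departs from the agreed ontology.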
Several challenges exist related to applying ontologies in real-world environments. The authors present an integrated enterprise-knowledge management architecture, focusing on how to support multiple ontologies and manage ontology evolution.
The World Wide Web has succeeded in large part because its software architecture has been designed to meet the needs of an Internet-scale distributed hypermedia application. The modern Web architecture emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems. In this article we introduce the Representational State Transfer (REST) architectural style, developed as an abstract model of the Web architecture and used to guide our redesign and definition of the Hypertext Transfer Protocol and Uniform Resource Identifiers. We describe the software engineering principles guiding REST and the interaction constraints chosen to retain those principles, contrasting them to the constraints of other architectural styles. We then compare the abstract model to the currently deployed Web architecture in order to elicit mismatches between the existing protocols and the applications they are intended to support.
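Two of the REST constraints described above, a uniform interface and stateless interactions, can be illustrated with a minimal sketch. This is not the article's own material: the resource names and the dictionary-backed store are invented for illustration.

```python
# Minimal illustration of two REST constraints: a uniform interface
# (every resource is manipulated through the same small method set)
# and statelessness (each request carries everything it needs).
# Resource names are invented examples, not from the article.

RESOURCES = {"/courses/landnav": {"title": "Land Navigation", "hours": 40}}

def handle(method, uri, body=None):
    """Dispatch a request against the resource store and return
    (status code, representation) in the style of HTTP semantics."""
    if method == "GET":
        if uri in RESOURCES:
            return 200, dict(RESOURCES[uri])   # transfer a representation
        return 404, None                       # resource not found
    if method == "PUT":
        RESOURCES[uri] = dict(body)            # replace via representation
        return 200, dict(body)
    return 405, None                           # method not allowed

status, rep = handle("GET", "/courses/landnav")
handle("PUT", "/courses/landnav", {"title": "Land Navigation", "hours": 36})
```

Because no session state lives outside the request, any intermediary (cache, proxy) could sit between client and server without changing the outcome, which is the scalability property the article attributes to the Web architecture.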