
Iterative Scenario-Based Testing in an Operational Design Domain for Artificial Intelligence Based Systems in Aviation

Bojan Lukic1*, Jasper Sprockhoff1, Alexander Ahlbrecht1, Siddhartha Gupta1, Umut Durak1
1German Aerospace Center (DLR), Institute of Flight Systems, Lilienthalplatz 7, 38108 Braunschweig, Germany
*bojan.lukic@dlr.de
Abstract. The use and development of Artificial Intelligence (AI) based systems is becoming increasingly prominent in different industries. The aviation industry is also gradually adopting AI-based systems, for instance, with Machine Learning algorithms for flight assistance. There are several reasons why adopting these technologies poses additional obstacles in aviation compared to other industries. One reason is the strong safety requirements, which lead to obligatory and thorough assurance activities such as testing to obtain certification. Therefore, a systematic approach is needed for developing, deploying, and assessing test cases for AI-based systems in aviation. This paper proposes a method for iterative scenario-based testing of AI-based systems. The method contains three major parts: first, a high-level description of test scenarios; second, the generation and execution of these scenarios; and last, the monitoring of parameters during scenario execution. Parameters are refined, and the steps are repeated iteratively. The method forms a basis for developing iterative scenario-based testing solutions. As a domain-specific example, a practical implementation of this method is illustrated. For an object detection application used on an airplane, flight scenarios including multiple airplanes are generated from a descriptive scenario model and executed in a simulation environment. The parameters are monitored using a custom Operational Design Domain monitoring tool and refined in the process of iterative scenario generation and execution. The proposed iterative scenario-based testing method helps in generating precise test cases for AI-based systems while having a high potential for automation.
Introduction
The practical use of Machine Learning (ML) applications for Artificial Intelligence (AI) based systems in aviation is still at an early stage. One reason is the immaturity of guidelines illustrating the proper implementation of such applications. Specifically, the additional and strict requirements and constraints for introducing new systems in the aviation industry pose an obstacle. This makes the implementation and certification of ML algorithms for autonomy challenging. Recently, the European Union Aviation Safety Agency (EASA) [1] and the Society of Automotive Engineers (SAE) [2] each published early versions of fundamental guidelines discussing the implementation of ML applications in aeronautical systems. These documents provide guidance for implementing level 1 ML applications, which can assist humans. Because these guidelines are still maturing, the certifiability of ML applications in aviation, especially for fully AI-based systems, is not yet established. Yet, similar to traditional software, it is certain that specific verification artifacts need to be provided to increase trust. Typical artifacts include the results of conducted tests. As defined in the EASA guidance, implementing AI-based systems requires the exact definition of their Operational Design Domain (ODD).
The ODD defines the conditions under which a system operates correctly. In the domain of AI-based systems, the ODD defines the execution boundaries for which the AI-based system is designed and the parameters that need to be satisfied for the system to operate properly [3]. Defining parameter boundaries for the correct behavior of an AI-based system becomes especially important when working in safety-critical domains such as aviation. For instance, the ODD of an aviation system can help with the definition of design assurance levels [4]. In this context, the definition of the system's ODD enables the generation of precise test cases for high test coverage. One systematic approach for developing test cases for AI-based systems in their operational domain is model-based testing using the Model-Based Systems Engineering (MBSE) methodology. Due to its highly descriptive nature and model-centric approach [5], MBSE is an appropriate methodology for modeling systems at all levels of abstraction, making it useful in the development process of test cases for ML applications [6]. The iterative scenario testing concept presented in this work is exemplified with, among other techniques, methods from MBSE.
This paper discusses the generation of test scenarios for an AI-based system. In the use case, a computer vision algorithm performs object detection and determines the distance to other aircraft in order to predict dangerous situations. The scenarios represent different situations with foreign aircraft used for testing. The detailed use case is explained in [7]. A method for iterative scenario-based testing of AI-based systems is presented in the scope of this work. Three essential parts of the method are defined: a high-level description of the scenarios to be executed, the testing environment in which test scenarios are executed, and a monitoring tool for narrowing down the parameter boundaries for the ODD of the respective system. A domain-specific implementation of this methodology is also presented. For modeling the systems involved and developing test cases, the MBSE tool Cameo1 is used. The simulation is executed in FlightGear, a highly customizable open-source software for flight simulation2. The scenarios are generated in a model-based approach in Cameo and then executed in a FlightGear instance. Parameters are monitored using a custom Python library. The findings show that the presented iterative scenario-based testing method facilitates the definition and refinement of test scenarios for AI-based applications.
The remainder of the paper is structured as follows: In Section 1, related work and the status quo of scenario-based testing with a model-based approach are discussed. Section 2 presents the development of a domain-independent method for iterative scenario-based testing. The implementation of this methodology is presented in Section 3, along with the tools used for defining, executing, and monitoring scenarios.
1Dassault Systemes, 2022. Cameo Systems Modeler, available at https://www.3ds.com/products-services/catia/products/no-magic/cameo-systems-modeler/.
2FlightGear developers & contributors, 2021. FlightGear, available at https://www.flightgear.org/.
1 Related Work
In [8], Jafer and Durak discuss the complexity of simulation scenario development in aviation. They propose ontology-based approaches to develop an aviation scenario definition language (ASDL). According to the authors, ontologies provide invaluable possibilities to tackle the complexity of simulation scenario development. Durak presents a model-driven engineering perspective on scenario development in [9]. The use of metamodels for generating executable scenarios is demonstrated with a sample implementation. Durak's work is closely related to the research presented in the work at hand, specifically the development of conceptual metamodels for generating executable scenarios.
Simulation-based data and scenario generation for AI-based airborne systems is discussed by Gupta in [10]. In that work, the authors aim to answer the question of what needs to be simulated for synthetic data and scenario generation in the simulation engineering process of an AI-based system. The methods used are a simulation-based data generation process adapted from EASA's first usable guidance for Level 1 machine learning applications and a scenario-based approach using System Entity Structures (SES), which is explained more thoroughly in the publications of Durak [11], [12] as well as Karmokar [13]. The work in [10] is continued in [14], which discusses behavioral modeling for scenario-based testing in aviation and introduces an enhanced approach for scenario-based testing called Operational Domain Driven Testing.
Closely related, [15] demonstrates the testing of black box systems, such as AI-based applications for autonomous road vehicles, in their ODD. The framework introduced by the authors is used to learn monitors in a feature space and to prevent the system from using critical components when exiting its ODD. Scenario-based testing of autonomous road vehicles is discussed in [16] and [17]. The authors present an automated scenario-based testing methodology for vehicles using advanced AI-based applications. The work shows that the presented formal simulation approach effectively finds relevant tests for track testing with a real autonomous vehicle.
In [18], Hungar presents scenario-based testing for automated road vehicles. The outcome of the work is the PEGASUS method, which is used to assess highly automated driving functions. According to the author, the most important steps for scenario-based testing are capturing all evolutions, i.e., variants, of functional scenarios; formalizing them; testing systematically; analyzing critical regions; and finally, developing a risk chart.
Closely related to [9], the work presented in this paper discusses model-driven scenario development. In addition to the methodologies discussed in the related work, an iterative scenario parameter adjustment and generation process is introduced, forming the iterative scenario-based testing method. The method is illustrated with an exemplary generation of test scenarios for an AI-based demonstrator. In the next section, the methodology for this domain- and tool-independent iterative scenario-based testing method is presented.
2 Iterative Scenario-Based Testing Concept
The related work shows that there are many ways to realize scenario-based testing for AI-based systems. Especially with domain-specific tools, a variety of testing strategies are possible. A generalization of these testing strategies can help with defining universal testing methods. To achieve that, a fundamental, tool-independent method is needed that describes the basic methodology for iterative scenario-based testing at a high level of abstraction. This method can then be used to build domain-specific testing tools. For such iterative scenario-based testing, three fundamental components have been identified:
Scenario Model
First, a high-level description of the testing scenarios needs to be defined. This high-level model can be achieved by describing the scenarios' fundamental components. Modeling tools or formalized methods can, for instance, be used to formulate the scenarios and derive all required scenario variations from the high-level model. The method shall be capable of generating an arbitrary number of scenarios with high parameter variation from the high-level description to achieve satisfactory test coverage for the application to be verified.
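To make the derivation of scenario variations concrete, the following is a minimal Python sketch that enumerates a high-level parameter space into concrete scenario configurations. The parameter names and value ranges are illustrative assumptions, not taken from the paper's scenario model.

```python
# Minimal sketch: derive concrete scenario variations from a high-level
# description. Parameter names and ranges are illustrative assumptions.
from itertools import product

parameter_space = {
    "relative_altitude_ft": [-100, 0, 100],
    "cumulative_speed_ktas": [200, 350, 500],
    "intruder_type": ["Boeing 737"],
}

def derive_scenarios(space):
    """Enumerate all combinations of parameter values as scenario dicts."""
    names = list(space)
    for values in product(*(space[n] for n in names)):
        yield dict(zip(names, values))

for scenario in derive_scenarios(parameter_space):
    print(scenario)  # each dict is one concrete scenario configuration
```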
Environment for Scenario Execution
Second, an environment for executing the derived scenarios should be selected. The environment can be of different types, such as simulated, a real system, or a mix of both, e.g., real systems extended with elements from augmented reality. These environments have different advantages and disadvantages. A simulated system can be deployed quickly, offers consistent conditions, and is cost-effective. The biggest drawback of simulated environments is their sim-to-real gap. The gap refers to the applicability of simulations to real-life environments, as many simulated environments cannot fully reproduce all relevant conditions of a real system. The biggest advantage of a real system is its closeness to the real-life environment in which the tested application is designed to operate. Real systems, however, are hard to deploy and costly. Especially for automated and accelerated testing, real systems can pose a financial and temporal bottleneck in the testing process.
ODD Monitoring
Last, a monitoring tool is required for verification and for tracking all parameters that are necessary for, and can introduce variance into, the scenarios. By tracking these parameters and verifying the application to be tested, a precise ODD can be defined for the system. With feedback from the monitoring tool, parameters can be adjusted, or new parameters can be chosen for a new iteration of scenario generation. The tools for monitoring in the chain of scenario-based testing can be chosen arbitrarily as long as they are capable of monitoring parameters in real-time for synchronization purposes.
The described method is of an iterative nature. Each
component feeds the next with some information. This
loop is depicted in Figure 1.
Figure 1: Iterative scenario-based testing
The execution of test scenarios can be accomplished in a simulated environment as well as on a real system. Although both approaches are important to consider, the method depicted in Figure 1 is tailored towards testing in simulated environments. For generating application-readable scenario descriptions with the scenario modeling tool, some application, e.g., a script, is needed. Similarly, after scenario execution and monitoring, an application is needed which feeds the result logs to the scenario modeling tool, decides on parameter adjustment, and triggers new scenario generation. The use of such intermediate applications and scripts enables high automation and optimization of the method. In ideal circumstances, the iterative scenario-based testing method forms a closed loop with automated test scenario generation, execution, and real-time monitoring of parameters, as sketched below.
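As an illustration of this closed loop, the following Python sketch wires the three components together. The functions generate_scenarios, execute_and_monitor, and refine are hypothetical stand-ins for the tool-specific steps; their bodies are placeholders, not the paper's implementation.

```python
# Minimal sketch of the closed loop from Figure 1: generate scenarios,
# execute them, monitor parameters, and refine for the next iteration.
# All function bodies are placeholders for tool-specific applications.

def generate_scenarios(parameters):
    """Derive executable scenario descriptions from the high-level model."""
    return [dict(parameters)]  # placeholder: one scenario per iteration

def execute_and_monitor(scenario):
    """Run the scenario in the environment and log monitored parameters."""
    return {"detected": False, **scenario}  # placeholder result log

def refine(parameters, logs):
    """Adjust parameter boundaries based on the monitoring feedback."""
    return parameters  # placeholder: no adjustment

parameters = {"relative_altitude_ft": 100}
for iteration in range(3):  # stop criterion is use-case specific
    logs = [execute_and_monitor(s) for s in generate_scenarios(parameters)]
    parameters = refine(parameters, logs)
```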
3 Exemplary Implementation
This section explains an exemplary implementation to demonstrate the derived method. For the implementation, domain-specific tools were selected that can be exchanged depending on the use case. The exemplary implementation of the discussed method can be divided into three components: first, the MBSE-based scenario description and generation using Cameo; second, the execution of scenarios defined in generated XML files with the flight simulator FlightGear; and last, the monitoring of parameters during scenario execution with a custom ODD monitoring tool. The basic flow of information and steps is illustrated in Figure 2.
The high-level model of the scenarios is described with a profile diagram in Cameo. Profile diagrams are defined in the Systems Modeling Language (SysML). Additionally, extensions are used to increase the modeling capabilities with profile diagrams. One configuration of a specific scenario is generated with a block definition diagram, which can be transformed and exported into the desired XML scenario files with the help of scripts. XML files are generated for the use case at hand, since FlightGear uses an XML format for scenario execution. However, other domain-specific formats can be used as well. The scenarios are executed within an instance of FlightGear, and the parameters are monitored with a custom ODD monitoring tool. A more detailed description of the implementation is given in the following subsections.
Figure 2: Flow of information and steps for iterative scenario-based testing used in this work
3.1 Scenario Format and MBSE-Based Scenario Generation
A high-level description of the necessary files for scenario execution can be observed in Figure 3. Along with scenario files, flight plan files are needed for scenario execution, as will be explained shortly.
The scenario files include various tags which define the inputs, objects, and attributes used when executing them in FlightGear. An important tag is the <entry> tag, which defines objects used in a scenario and can include the following additional tags: <callsign>, the identification of the aircraft; <type> and <model>, the type and model of the aircraft; <flightplan>, the flight plan which the scenario refers to; and <repeat>, a Boolean flag that indicates whether the scenario shall be executed once or repeated indefinitely.
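As an illustration, the following Python sketch emits a scenario file with the tags listed above. The enclosing <PropertyList>/<scenario> structure and the model path are assumptions about FlightGear's AI scenario format, not details given in this paper, and may differ between FlightGear versions.

```python
# Minimal sketch: export a scenario file with the tags described above.
# The enclosing <PropertyList>/<scenario> structure is an assumption
# about FlightGear's AI scenario format; the inner tags follow the text.
import xml.etree.ElementTree as ET

def write_scenario(path, callsign, model, flightplan, repeat=True):
    root = ET.Element("PropertyList")
    scenario = ET.SubElement(root, "scenario")
    entry = ET.SubElement(scenario, "entry")
    ET.SubElement(entry, "callsign").text = callsign
    ET.SubElement(entry, "type").text = "aircraft"
    ET.SubElement(entry, "model").text = model
    ET.SubElement(entry, "flightplan").text = flightplan
    ET.SubElement(entry, "repeat").text = "true" if repeat else "false"
    ET.ElementTree(root).write(path, xml_declaration=True, encoding="UTF-8")

write_scenario("intruder_scenario.xml", callsign="DLR001",
               model="Aircraft/737/Models/boeing737.xml",  # illustrative path
               flightplan="intruder_flightplan.xml")
```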
The flight plans, which the scenario files refer to, are also in XML format. The most important tag in a flight plan is the <wpt> tag, which can include the following additional tags: <name>, the name of the waypoint; <lat>, the latitude of the entry that refers to the flight plan; <lon>, the longitude; <alt>, the altitude; <ktas>, the knots true airspeed; <on-ground>, whether the specified object starts from the ground; <gear-down>, whether the landing gear is retracted or extended; and <flaps-down>, whether the flaps are retracted or extended.
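A companion sketch for the flight plan side: the <wpt> child tags follow the description above, while the enclosing <PropertyList>/<flightplan> structure and the waypoint values are illustrative assumptions about the FlightGear format.

```python
# Minimal sketch: write a flight plan as a sequence of <wpt> waypoints.
# The enclosing <PropertyList>/<flightplan> structure is an assumption;
# waypoint values are illustrative.
import xml.etree.ElementTree as ET

def write_flightplan(path, waypoints):
    root = ET.Element("PropertyList")
    plan = ET.SubElement(root, "flightplan")
    for wp in waypoints:
        wpt = ET.SubElement(plan, "wpt")
        for tag in ("name", "lat", "lon", "alt", "ktas",
                    "on-ground", "gear-down", "flaps-down"):
            if tag in wp:
                ET.SubElement(wpt, tag).text = str(wp[tag])
    ET.ElementTree(root).write(path, xml_declaration=True, encoding="UTF-8")

write_flightplan("intruder_flightplan.xml", [
    {"name": "WP1", "lat": 63.970, "lon": -22.65, "alt": 3100,
     "ktas": 200, "on-ground": "false"},
    {"name": "WP2", "lat": 63.998, "lon": -22.65, "alt": 3100, "ktas": 200},
])
```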
FlightGear offers many more configuration files which can be adjusted to change environmental parameters as well as parameters of entities and other components of interest for scenario-based testing. For simplicity, only the scenario and flight plan files along with their parameters are discussed here. A high-level description, i.e., a metamodel, of the scenario and flight plan files is needed to generate arbitrary test scenarios.

Figure 3: High-level description of the configuration files for FlightGear
Figure 4 depicts one instance of the high-level description of the scenario and flight plan files.
3.2 Scenario Execution
The scenarios are executed within FlightGear. The respective XML files can be executed manually in a FlightGear instance or passed as parameters for automatic execution at FlightGear startup. For automation purposes, we chose the latter. As explained in the previous subsection, one or more entries, e.g., planes, can be defined in a scenario file, with each flying according to a predefined route. In this instance, one passenger airplane is defined, which narrowly passes the user's plane. Figure 5 shows a screenshot of the scenario during execution in FlightGear.
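The following minimal sketch shows what such automated startup could look like. The --ai-scenario and --telnet options are standard FlightGear command-line switches, but their exact spelling and behavior should be verified against the installed version; the scenario name is illustrative.

```python
# Minimal sketch of automated scenario execution: start FlightGear with a
# generated scenario at startup. --ai-scenario loads an AI scenario by
# name from FlightGear's data directories; --telnet exposes the property
# tree for the monitoring step described in the next subsection.
import subprocess

def run_scenario(scenario_name, telnet_port=5401):
    """Launch a FlightGear instance executing the given AI scenario."""
    return subprocess.Popen([
        "fgfs",
        f"--ai-scenario={scenario_name}",
        f"--telnet={telnet_port}",
    ])

process = run_scenario("intruder_scenario")
```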
3.3 ODD Definition and Monitoring
The ODD defines the conditions under which a system operates properly. Several parameters can introduce variance into the scenarios executed in FlightGear, some of which were defined above. Additional parameters, such as weather conditions, need to be considered. A high-level description of the domain model for the ODD of the AI-based system used on an airplane is depicted in Figure 6.

Figure 4: Block definition diagram of one scenario and flight plan configuration

Figure 5: Passenger airplane narrowly passing the user's Cessna
Figure 6: Domain model for the ODD of the scenario-based testing method
The parameter boundaries for the use case of object detection during scenario execution can be determined in an iterative process. Due to the high number of parameters to be considered, a manual exhaustive search for parameter boundaries is highly time-consuming. Therefore, a tool is needed which can track the necessary parameters during scenario execution and give feedback on the result of the tested application.
For monitoring these parameters in FlightGear, a public Python library3 for fetching parameters from FlightGear's property tree is used. The AI-based system tested in this example is an object detection application. The result of the object detection and the desired parameters can be logged using the monitoring tool. The feedback generated from the tool can then be used to adjust the values of selected parameters in Cameo, generate new scenarios, and narrow down the parameters to fit the ODD of the application. This iterative process can be used to narrow down the ODD boundaries of each parameter with every iteration.
3Munyakabera Jean Claude, 2022. flightgear_interface, available at https://github.com/ironmann250/flightgear_interface.
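Since the cited library's API is not reproduced here, the following sketch instead reads values directly from FlightGear's telnet property server (started, e.g., with --telnet=5401 as above). The property path and the response handling are illustrative assumptions and depend on the FlightGear setup.

```python
# Minimal monitoring sketch, assuming FlightGear was started with a
# telnet property server. The property path is illustrative; the reply
# format of the telnet interface may require additional parsing.
import socket

def get_property(path, host="localhost", port=5401):
    """Fetch one value from FlightGear's property tree via telnet."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(f"get {path}\r\n".encode())
        reply = sock.recv(4096).decode(errors="replace")
    return reply.strip()

altitude_ft = get_property("/position/altitude-ft")
print("own altitude:", altitude_ft)
```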
A simplified ODD for the system with the two parameters altitude and speed can be defined as follows: "The application performs correct object detection of intruding airplanes of type Boeing 737 within the following parameter boundaries:
Altitude of intruding airplane relative to own airplane in feet, alt: -100 to 100.
Cumulative speed of intruding airplane as well as own airplane in knots true airspeed, Σktas: 0 to 500."
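This simplified ODD translates directly into a boundary check. A minimal sketch, using the two boundaries stated above:

```python
# Minimal sketch: check a monitored sample against the simplified ODD,
# i.e., relative altitude in [-100, 100] ft and cumulative speed in
# [0, 500] knots true airspeed.
def inside_odd(alt_relative_ft, cumulative_ktas):
    """Return True if the sample lies within the simplified ODD."""
    return -100 <= alt_relative_ft <= 100 and 0 <= cumulative_ktas <= 500

assert inside_odd(alt_relative_ft=100, cumulative_ktas=400)      # within ODD
assert not inside_odd(alt_relative_ft=200, cumulative_ktas=400)  # outside
```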
Table 1 shows an exemplary log recorded during execution of a scenario in FlightGear. For completeness and to reflect other relevant parameters currently covered in the scenario model, the latitude and longitude of the intruding airplane are logged along with the aforementioned altitude and speed.
Log # lat lon alt Σktas detect
1 63.970 -22.65 100 400 no
2 63.974 -22.65 100 399 no
3 63.978 -22.65 100 400 no
4 63.982 -22.65 100 399 yes
5 63.986 -22.65 100 400 yes
6 63.990 -22.65 100 403 yes
7 63.994 -22.65 100 399 no
8 63.998 -22.65 100 400 no
Table 1: Exemplary log of parameters monitored during scenario execution in FlightGear (first iteration).
The table shows that object detection succeeds for logs four to six. Therefore, the predefined ODD holds for this combination of parameters. Now, single parameters can be adjusted for a potential re-evaluation of the predefined ODD. In this case, the altitude of the intruding airplane relative to the own airplane is increased by 100 feet. First, a scenario with a new configuration of attributes needs to be generated, similar to the one depicted in Figure 4. In this case, the altitude is adjusted to reflect the definition of the new test case. Lastly, the necessary XML files are generated from the configuration model. Now, scenario execution in FlightGear and parameter monitoring can be performed. Table 2 shows the log for the second iteration of parameter monitoring.
Log # lat lon alt Σktas detect
1 63.970 -22.65 200 400 no
2 63.974 -22.65 200 399 no
3 63.978 -22.65 200 401 yes
4 63.982 -22.65 200 400 yes
5 63.986 -22.65 200 400 yes
6 63.990 -22.65 200 400 no
7 63.994 -22.65 200 400 no
8 63.998 -22.65 200 400 no
Table 2: Exemplary log of parameters monitored during scenario execution in FlightGear (second iteration).

As shown in the second table, the object detection is successful for logs three to five. The predefined ODD still holds for this combination of parameters. However, the ODD can now be adjusted and phrased more precisely in line with the altered parameter. The ODD for the application can therefore be rephrased as follows:
"[...] within the following parameter boundaries:
Altitude of intruding airplane relative to own airplane in feet, alt: -100 to 200.
[...]"
The loop of parameter adjustment, scenario generation, execution, and monitoring can be repeated until the changes in detection results fall below some predefined value and an ODD with the desired precision has been determined. The example for ODD monitoring and parameter adjustment presented above is simplistic and, for instance, does not consider constraints. Many more parameters can and need to be considered when defining a precise ODD for the underlying application. Also, the granularity for testing parameter boundaries of the ODD needs to be determined accurately. For instance, a higher logging frequency of parameters can be chosen, which makes the tests more precise but also increases the testing effort. Also, instead of a Boolean for the result of the object detection, the more granular confidence of the object detector from the machine learning application can be used as a metric. The framework in itself requires fine-tuning and more testing to provide the right conditions for successful iterative scenario-based testing of various systems.
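To make the stopping logic concrete, the following sketch grows a single boundary, the relative altitude, in fixed steps until detection no longer succeeds. run_iteration is a hypothetical stand-in for one full generate-execute-monitor cycle; its outcomes mirror Tables 1 and 2 for illustration.

```python
# Minimal sketch of the refinement loop described above: extend one ODD
# boundary step by step while detection still succeeds, then keep the
# last altitude that yielded detections.

def run_iteration(alt_relative_ft):
    """Placeholder: True if any log in this run shows a detection."""
    return alt_relative_ft <= 200  # illustrative, per Tables 1 and 2

def refine_upper_altitude_bound(start_ft=100, step_ft=100, limit_ft=1000):
    bound = start_ft
    while bound + step_ft <= limit_ft and run_iteration(bound + step_ft):
        bound += step_ft  # detection still succeeds: extend the boundary
    return bound

print("upper relative-altitude bound:", refine_upper_altitude_bound())  # 200
```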
The exemplary implementation of model-based scenario generation and ODD monitoring in this section follows the method presented in Figure 1. Domain-specific tools such as Cameo, XML files, and a Python application were used to build a framework for iterative scenario-based testing. The implementation can be seen as a minimal working example demonstrating the iterative scenario-based testing method explained in Section 2. The implementation can be developed further to allow for closed-loop scenario-based testing with automated scenario generation, execution, and monitoring.
4 Conclusion and Discussion
The use of ML applications in AI-based systems, for instance on airplanes, is steadily increasing. The thorough testing of these systems is a fundamental part of their development process. Certain industries, such as aviation, impose strict requirements and constraints on the use of AI-based applications, increasing the testing effort required to certify and use these applications. Additionally, ML applications are often considered a black box. Therefore, black box testing methods need to be put in place that are as rigorous as current testing methods for common software systems.
This work depicts a method for iteratively testing an AI-based system which performs object detection on an airplane. For this purpose, a scenario-based testing loop was developed, including the three steps of generating application-readable scenario descriptions from models, executing these scenarios, and monitoring parameters with model parameter adjustments. In addition to generating arbitrary test cases, the presented method illustrates the approximation of boundaries for the ODD of the ML application with iterative parameter adjustments.
This method can be further optimized by connecting its components, i.e., the high-level scenario description, scenario execution, and ODD monitoring, and creating a closed loop with automated scenario generation, execution, and parameter adjustment. Additionally, test oracles that determine the success or failure of individual tests should be investigated. The granularity of test cases, i.e., success/failure evaluation only versus more finely grained evaluations, is important. These topics will be investigated in future research.
Acknowledgement
The presented research is financed by and part of the project Model-Based Systems Engineering for Artificial Intelligence (MBSE4AI). We thank everyone who provided support, insight, and expertise that greatly assisted the research.
References
[1] EASA. EASA Concept Paper: First usable guidance for Level 1 machine learning applications. Tech. rep. Apr. 2021. URL: https://www.easa.europa.eu/en/downloads/126648/en.
[2] EUROCAE WG-114 / SAE G-34 Artificial Intelligence Working Group. Artificial Intelligence in Aeronautical Systems: Statement of Concerns. SAE International, Apr. 2021. DOI: 10.4271/AIR6988.
[3] On-Road Automated Driving (ORAD) Committee. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. Standard. SAE International, Apr. 2021. DOI: 10.4271/J3016_202104.
[4] DO-178C - Software Considerations in Airborne Systems and Equipment Certification. Standard. RTCA, Dec. 2011. URL: https://www.do178.org/.
[5] A. Wayne Wymore. Model-Based Systems Engineering. 1993. ISBN: 9780203746936. DOI: 10.1201/9780203746936.
[6] Azad M. Madni. "MBSE Testbed for Rapid, Cost-Effective Prototyping and Evaluation of System Modeling Approaches". In: Applied Sciences 11.5 (2021). ISSN: 2076-3417. DOI: 10.3390/app11052321.
[7] Jasper Sprockhoff et al. "Model-Based Systems Engineering for AI-Based Systems". In: AIAA SCITECH 2023 Forum. Jan. 2023. DOI: 10.2514/6.2023-2587.
[8] Shafagh Jafer and Umut Durak. "Tackling the Complexity of Simulation Scenario Development in Aviation". In: Proceedings of the Symposium on Modeling and Simulation of Complexity in Intelligent, Adaptive and Autonomous Systems. Society for Computer Simulation International, 2017. ISBN: 9781510840300.
[9] Umut Durak et al. "Scenario Development: A Model-Driven Engineering Perspective". In: SIMULTECH 2014 - 4th International Conference on Simulation and Modeling Methodologies, Technologies and Applications. SCITEPRESS Science and Technology Publications, 2014, pp. 117-124. URL: https://elib.dlr.de/94626/.
[10] Siddhartha Gupta et al. "From Operational Scenarios to Synthetic Data: Simulation-Based Data Generation for AI-Based Airborne Systems". In: AIAA SCITECH 2022 Forum. Jan. 2022. DOI: 10.2514/6.2022-2103.
[11] Umut Durak et al. "Using System Entity Structures to Model the Elements of a Scenario in a Research Flight Simulator". In: AIAA Modeling and Simulation Technologies Conference. 2017. URL: https://elib.dlr.de/112664/.
[12] Umut Durak et al. "Computational Representation for a Simulation Scenario Definition Language". In: 2018 AIAA Modeling and Simulation Technologies Conference. DOI: 10.2514/6.2018-1398.
[13] Bikash Chandra Karmokar et al. "Tools for Scenario Development Using System Entity Structures". In: Jan. 2019. DOI: 10.2514/6.2019-1712.
[14] Siddhartha Gupta and Umut Durak. "Behavioural Modeling for Scenario-based Testing in Aviation". In: AIAA SCITECH 2023 Forum (not yet published). Jan. 2023.
[15] Hazem Torfah et al. "Learning Monitorable Operational Design Domains for Assured Autonomy". In: Proceedings of the International Symposium on Automated Technology for Verification and Analysis (ATVA). Oct. 2022.
[16] Daniel Fremont et al. "Formal Scenario-Based Testing of Autonomous Vehicles: From Simulation to the Real World". In: Sept. 2020, pp. 1-8. DOI: 10.1109/ITSC45102.2020.9294368.
[17] Francis Indaheng et al. "A Scenario-Based Platform for Testing Autonomous Vehicle Behavior Prediction Models in Simulation". In: ArXiv abs/2110.14870 (2021).
[18] Hardi Hungar. "Scenario space exploration for establishing the safety of automated vehicles". In: 3rd China Autonomous Driving Testing Technology Innovation Conference, 2020. Dec. 2020. URL: https://elib.dlr.de/139626/.