Open Issues in Automotive Software Testing
Antonio Tierno
AIT Division
ALTRAN
Modena, Italy
antonio.tierno@altran.com
Max M. Santos and Benedito A. Arruda
Department of Electronics
UTFPR – Ponta Grossa
Ponta Grossa, Brazil
maxsantos@utfpr.edu.br, beneditoarruda@utfpr.edu.br
Abstract — In recent years, innovations in the automotive industry have been driven mainly by car electronics and its embedded software, making the development process of automotive embedded devices a key concern. At present, there is a trend in the automotive industry towards employing model-based design in the development process, an approach that can also be adopted for software testing. Faced with the growing complexity of the functions hosted in cars, software components are no longer handwritten in C or Assembly code; instead, they are modelled graphically with tools such as MATLAB/Simulink/Stateflow, Statemate, or similar tools. Software models facilitate the formulation of algorithms because a specific terminology and a graphical notation provide more clarity than plain-text specifications or program code. Accordingly, software testing is moving in the same direction: model-based testing is a valid variant of testing that relies on explicit behavior models encoding the intended behavior of a system and possibly the behavior of its environment. A separate discussion is reserved for testing the graphical human machine interface (HMI) of automotive infotainment systems, which has proven to be costly and challenging due to its large functional scope, high complexity and multiple variants. Various model-based testing concepts exist for graphical HMIs of infotainment systems, but OEMs and Tier 1 companies still perform most tests manually. In this paper, we describe the classical method still widely used today for software testing of infotainment ECUs, show its limitations, and propose some advice to improve the speed and accuracy of testing activities for HMI systems.
Keywords — Embedded software, testing, model-based, component, infotainment.
I. INTRODUCTION
The life cycle of software should follow a sequence of stages comprising Requirements Analysis, Software Development, Testing for Verification and Validation, and Maintenance. The most widely used model today for automotive electronic systems is the V-model (see also [1]). On its left side are the steps for software development; on its right side are the steps for software testing.
Model-Based Design, commonly abbreviated MBD, is a mathematical and visual design methodology for designing complex systems in different domains, such as automotive, aerospace, motion control and industrial equipment applications. The system model is at the center of the development process, from requirements development, through design, to implementation and testing [2]. MBD provides an efficient approach for establishing a common framework for communication throughout the design process while supporting the development cycle (the "V" diagram).
MBD provides a simple and easy-to-follow approach to the software development process, with a logical sequence in which each phase has its own functionality and defined relationships with the other phases. The stages of MBD are commonly known to consist basically of MIL (Model-In-the-Loop), SIL (Software-In-the-Loop), PIL (Processor-In-the-Loop) and HIL (Hardware-In-the-Loop) [ ]. The diagram in Fig. 1 is a typical representation of the V-Model in which the MBD approach can be adopted.
Fig. 1. V-Model life cycle for the automotive software development.
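To make the intent of these stages concrete, the following minimal sketch (an illustration, not part of the original workflow; all function names are hypothetical) applies the same stimulus vector to a behavioral model of a block and to a stand-in for its generated code, comparing the outputs as back-to-back MIL/SIL testing would:

```python
# Minimal back-to-back test sketch: the same stimulus vector is applied to
# the behavioral model (MIL stage) and to the generated/compiled code
# (SIL stage), and the outputs are compared within a tolerance.
# All names here are illustrative placeholders.

def model_saturate(x: float) -> float:
    """Behavioral model of a saturation block (MIL reference)."""
    return max(-1.0, min(1.0, x))

def generated_saturate(x: float) -> float:
    """Stand-in for the auto-generated C code, e.g. called via ctypes (SIL)."""
    return max(-1.0, min(1.0, x))

def back_to_back(stimuli, tol=1e-9):
    for x in stimuli:
        ref, out = model_saturate(x), generated_saturate(x)
        assert abs(ref - out) <= tol, f"MIL/SIL mismatch at input {x}: {ref} vs {out}"

back_to_back([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])
print("MIL/SIL back-to-back test passed")
```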
Indeed, the software development process should still consider the testing activities, which are important at all stages of development to verify that the product meets the preliminary requirements and attains a level of quality that satisfies user needs. For automotive software this point should be adopted bearing in mind the high complexity of the automotive domain, whose development processes involve various stakeholders.
The software testing tasks are derived from knowledge gained in the development tasks; for instance, the high-level design gives rise to the unit and integration tests. A project following this model moves through these tasks one at a time, proceeding when the current task is completed. Thus, the integration tests are written when the high-level design is finished, and the unit tests are written when the detailed specifications are finished, as described above.
The studies made in [2] show that model-based development is being used for series development by the
majority of automotive companies, but the main negative aspect reported is the extremely high process-redesign cost that must be invested to introduce model-based design into the workflow. The experts report that a high level of engineering effort is needed for the process redesign. They indicate that the main costs of the process redesign are not just tool costs (although these are a major cost factor), but also the costs of defining a new development process, training costs for the employees, and the regeneration of hand-coded projects. Another negative aspect is the high dependency on tool vendors.
As an extension of MBD to the testing activities, we can consider the Model-Based Testing approach. Model-based testing, commonly abbreviated MBT, is a systematic method to generate test cases from models of system requirements. It allows requirements to be evaluated independently of algorithm design and development. Furthermore, even though model-based testing is widely used for function tests, testing the HMI (Human Machine Interface) remains a mainly manual, demanding and time-consuming task that often compromises the ability to guarantee time to market.
Even nowadays, OEMs and Tier 1 automotive companies provide software specifications to their suppliers mainly as plain text. Starting from those specifications, test engineers generate test procedures and perform the relevant software tests. It happens very often that different documents express conflicting requirements, causing a waste of time (and money) in their interpretation.
In the following we describe a classic testing process and the operations performed to improve its speed and accuracy and to introduce a first step towards automation, considering that automotive functions are increasing in quantity and complexity and are widely allocated across distributed computing architectures.
This paper is organized as follows. Section II provides a background on model-based testing and some related aspects of the target architecture on which the dashboard is designed. Section III presents the testing procedures, considering at the end automated testing. Section IV describes the aspects of HMI testing. Sections V and VI discuss how to improve the testing process in general and for HMI systems in particular, showing the gaps of traditional tests when applied to HMI systems. The paper is concluded in Section VII.
II. MODEL-BASED TESTING
The life cycle of software should follow a fixed path from Requirements Analysis to Testing and Maintenance. The most widely used model today for automotive electronic systems is the V-model (see also [1]).
Model-Based Design is centered on computational models of the system under design. The design involves the use of multiple models that represent different system views at different granularity levels. Then, model transformations are applied from the top level down to implementation, while preserving the essential properties of the system. Finally, automatic code generation produces the implementation as embedded code [1]. In order to test both the model and the code generated in Model-Based Design, it is necessary to create test cases.
Fig. 2 illustrates the classical software testing workflow employed in the automotive domain.
Fig. 2. Workflow of automotive software testing.
Although systematic testing is important to ensure software quality, manual testing is expensive, error-prone and very time-consuming. Test automation is therefore a main consideration in improving test efficiency. Currently, however, only manually crafted test scripts are automatically executed. Hence, new methodologies and tools that simplify the test phases deserve special attention.
Model-based testing relies on behavior models for the automatic generation of model traces: input and expected output (test cases) for an implementation [2]. MBT allows requirements to be evaluated independently of algorithm design and development. It involves the following stages: creating a model of the system requirements for testing, generating test data from this requirements-model representation, and verifying the design algorithm with the generated test cases.
Fig. 3. Workflow of model-based testing for automotive software.
The model-based approach comprises plug-ins, such as Model Advisor and Simulink Design Verifier, that help in automatic testing of requirements according to the level of criticality [3]. MBT approaches generate tests from requirements with the purpose of ensuring test efficiency and quality.
In model-based testing, requirement models are used to generate test cases to verify the design. This process also helps automate other verification tasks and streamlines the review process by linking test cases and verification objectives to high-level test requirements. Fig. 3 illustrates the classical MBT workflow.
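As a minimal illustration of this generation step (a sketch under simplifying assumptions; the menu model and event names are hypothetical), the code below derives event sequences from a small finite-state behavior model so that every transition is covered at least once:

```python
# Sketch: deriving test sequences from a behavior model.
# The model is a small finite state machine (hypothetical menu fragment);
# tests are generated so that every transition is covered at least once.

from collections import deque

TRANSITIONS = {                       # (state, event) -> next state
    ("Main", "rotate"): "Radio",
    ("Radio", "rotate"): "Media",
    ("Media", "back"): "Main",
    ("Radio", "back"): "Main",
}

def transition_cover(start="Main"):
    """Breadth-first search returning one event sequence per transition."""
    tests = []
    for (src, event), dst in TRANSITIONS.items():
        # find a shortest event path from the start state to `src`
        queue, seen = deque([(start, [])]), {start}
        while queue:
            state, path = queue.popleft()
            if state == src:
                tests.append(path + [event])
                break
            for (s, e), nxt in TRANSITIONS.items():
                if s == state and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [e]))
    return tests

for seq in transition_cover():
    print("test case:", " -> ".join(seq))
```

Each printed sequence is one abstract test case that a test adapter would translate into concrete stimuli for the implementation under test.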
The following section describes aspects of the test procedure in the automotive domain, which can be considered for real automotive functions in a general way. The test procedure can be automated with the purpose of accelerating testing and assuring software quality. We also show later that, even with the MBT approach, some gaps can be found when we consider automotive functions such as the HMI, and that wide fields of research remain open in this area.
III. TEST PROCEDURE
As an overview, the test procedure is a document of detailed instructions for the set-up, execution, and evaluation of results for a given test case. This document contains a set of associated instructions and may have steps specifying a sequence of actions for the execution of a test.
We briefly cover aspects related to the test procedure for automotive software, organized into the layers of testing and the test design process. They are important key points to ensure that the test can achieve a high level of coverage. Indeed, test coverage is described in more detail further on and is a metric that evaluates what percentage of the software has been fully tested.
At the end of this section, a brief description of the procedures for automated testing is given.
A. Layers of Testing
In networked electronic systems like those used today, it is often not possible to assign software functions to a single ECU. As a result, software functions are divided into sub-functions implemented in several ECUs, a practice commonly called functional partitioning. Therefore, electronic systems in the vehicle are partitioned into subsystems and developed based on the principle of division of labor. The subsystems are then tested and validated, and subsequently integrated step by step into an electronic system in a distributed architecture.
With reference to the V-model, software testing is organized into unit, integration, system and acceptance levels.
1) Unit testing: its purpose is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as expected. Each unit is tested separately before the units are integrated into modules. In the automotive field, the continuing quantum leaps in hardware technology and performance facilitate the implementation of many increasingly powerful vehicle functions by means of software; these functions are referred to as software functions.
2) Integration testing: its purpose is to exercise the interfaces between the modules. It takes as input modules that have been unit tested, groups them into larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as output the integrated system ready for system testing.
3) System testing: its purpose is to test the complete system, configured in a controlled environment, by simulating the real-time scenarios that occur in real life.
4) Acceptance testing: it is the final stage in the testing process before the system is accepted for operational use according to the user's intention.
Fig. 4. V-Model life cycle for the automotive software testing
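As a concrete illustration of the unit level (a sketch, not from the paper; the debounce filter stands in for any small testable software unit), the code below isolates a single function and checks its behavior with Python's standard unittest module:

```python
# Minimal unit-test sketch for an isolated software unit, using Python's
# standard unittest module. The unit (a hypothetical debounce counter for
# a button input) is tested in isolation, before integration.

import unittest

def debounce(samples, threshold=3):
    """Report 'pressed' only after `threshold` consecutive high samples."""
    run = 0
    for s in samples:
        run = run + 1 if s else 0
        if run >= threshold:
            return True
    return False

class DebounceTest(unittest.TestCase):
    def test_glitch_is_filtered(self):
        self.assertFalse(debounce([1, 0, 1, 0, 1, 0]))

    def test_stable_press_is_detected(self):
        self.assertTrue(debounce([0, 1, 1, 1]))

if __name__ == "__main__":
    unittest.main()
```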
A few misuses and common misunderstandings follow.
Very often unit testing is performed by the same actor who writes the code, contrary to the rules of good programming. This can lead to interpretation errors and misunderstandings; we regard these as typical failure modes of unit testing.
The basic difference between integration testing and system testing lies in approach and scale. In system testing, code-level knowledge of the various modules is not required. Also, the most popular approach to system testing is black-box testing, i.e., testing without any knowledge of the internal structure or coding of the program. System testing aims to detect defects both within the integrated modules and within the system as a whole.
Acceptance testing may reveal errors and omissions in the
system requirements definition, because the real data exercise
the system in different ways from the test data.
Our aim is to design an effective test procedure that allows us to detect (ideally) all faults and bugs and to ensure that the software requirements have been met.
B. Software Test Design Process
A test procedure is a document of detailed instructions for the set-up, execution, and evaluation of results. This documentation may have steps specifying a sequence of actions for the execution of the tests. Thus, a test is a finite collection of test cases to be performed on a specific test unit.
A test case is a particular choice of input values and conditions under which a tester will determine whether the system under test satisfies its requirements.
The functionality is the set of services delivered by the software product. Usually, the classical software testing activity consists of:
1) Analyzing the carmaker software requirements: testers must first read, analyze and understand the carmaker's software requirements, the same requirements used by the software developers.
2) Designing the test cases: testers proceed to a manual design of the test cases, creating and designing test scenarios as close to real-life situations as possible. The performance of this activity is mainly based on the experience of the testers.
3) Performing the tests, simulating the test cases on the software product and detecting the defects: once the test cases are developed, they are simulated on the software product in order to check that it satisfies the requirements and is (ideally) as free of defects as possible.
4) Keeping track of new requirements: the carmaker may modify a requirement or introduce a new one. This means modifying one or more test cases or creating new ones.
5) Producing evaluation reports: this is the output of the testing process. Creating a test repository with a document management system (DMS) is a very good practice.
6) Discussing with the customer the defects and bugs detected.
C. Automated Testing
Automated software testing is a process in which software tools execute pre-scripted tests on a software application, controlling the execution of tests and the comparison of actual outcomes with predicted outcomes. It can automate repetitive but necessary tasks in a formalized testing process already in place, or add testing that would be difficult to perform manually.
The objective of automated testing is to simplify as much of
the testing effort as possible with a minimum set of scripts. If
unit testing consumes a large percentage of a quality assurance
team's resources, for example, then this process might be a
good candidate for automation. Automated testing tools are
capable of executing tests, reporting outcomes and comparing
results with earlier test runs. Tests carried out with these tools
can be run repeatedly, at any time of day.
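The following sketch shows this idea in miniature (illustrative only; the speed-limiter function and report file name are placeholders): pre-scripted cases are executed, actual outcomes are compared with predicted ones, and the results are persisted so later runs can be compared with earlier ones:

```python
# Sketch of a data-driven automated test runner: pre-scripted cases are
# executed, actual outcomes are compared with predicted outcomes, and the
# results are stored so they can be compared with earlier runs.
# Function and file names are illustrative.

import json, pathlib

def speed_limiter(speed_kmh, limit_kmh):     # system under test (example)
    return min(speed_kmh, limit_kmh)

CASES = [                                     # (inputs, expected output)
    ({"speed_kmh": 90, "limit_kmh": 130}, 90),
    ({"speed_kmh": 150, "limit_kmh": 130}, 130),
]

def run_suite(report_path="results.json"):
    results = []
    for inputs, expected in CASES:
        actual = speed_limiter(**inputs)
        results.append({"inputs": inputs, "expected": expected,
                        "actual": actual, "passed": actual == expected})
    pathlib.Path(report_path).write_text(json.dumps(results, indent=2))
    return all(r["passed"] for r in results)

if __name__ == "__main__":
    print("suite passed:", run_suite())
```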
IV. HMI TESTING
Nowadays, most innovations in vehicles take place in the entertainment field, not to mention that the paradigm of intelligent transport systems is leading to greater vehicle-to-vehicle and vehicle-to-infrastructure connectivity.
In-car entertainment (ICE), or In-Vehicle Infotainment (IVI), is defined as a collection of hardware devices installed in automobiles, or other forms of road transport, to provide audio or video entertainment as well as automotive navigation. IVI systems use audio/video (A/V) interfaces, touchscreens, keypads and other types of devices to provide these services. They are thus heavily based on the HMI.
Automotive infotainment is growing at the pace of consumer products: there is an increasing desire to have the latest and greatest video and audio capabilities in the vehicle. Today, a hot topic is the "Connected Car": a car that is well connected to the inside and outside world. Connectivity within the vehicle includes providing connectivity to devices that are brought into the vehicle. Connectivity outside the vehicle primarily involves the Internet connection via Wi-Fi, LTE, or some other means.
An infotainment system HMI is the front-end of the
infotainment system composed of different ECUs and bus
systems. It reacts to user events triggered via different input
facilities and interacts with the ECUs of the infotainment
system via underlying applications [10].
An infotainment system HMI contains:
Graphical elements: UI elements contained in a screen are usually called widgets. Widgets such as the title and soft keys present static text to users. These widgets usually have a text label, whose contents are usually defined externally, since current HMIs usually support more than one language or require changes to these texts. A widget such as the status bar contains sub-widgets, which are icons. A widget such as a scroll list is a complex widget and contains further complex widgets such as the scroll bar and several rows of options. This list widget has complex behavior: the user can scroll between the rows, and during this operation the focused row must be highlighted. If there is at least one page after the current page, the list must display an arrow at the bottom pointing downward. If there is a previous page, the list must present an arrow at the head pointing upward. The scroll bar must also be able to calculate the position of the current page in order to indicate it correctly.
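The list behavior just described can be summarized in a small widget model (a hypothetical, simplified sketch); the code below captures the focus highlighting, page calculation, and arrow-visibility rules that a test would have to verify:

```python
# Sketch of the scroll-list behavior described above: tracking the focused
# row and deciding when the up/down arrows must be shown.
# A hypothetical, simplified widget model.

class ScrollList:
    def __init__(self, rows, rows_per_page):
        self.rows, self.rows_per_page = rows, rows_per_page
        self.focus = 0                                    # focused row index

    def scroll(self, delta):
        self.focus = max(0, min(len(self.rows) - 1, self.focus + delta))

    @property
    def page(self):                                       # current page index
        return self.focus // self.rows_per_page

    @property
    def show_down_arrow(self):                            # at least one page after
        return (self.page + 1) * self.rows_per_page < len(self.rows)

    @property
    def show_up_arrow(self):                              # a previous page exists
        return self.page > 0

lst = ScrollList(rows=[f"Station {i}" for i in range(12)], rows_per_page=5)
lst.scroll(+6)
print(lst.page, lst.show_up_arrow, lst.show_down_arrow)   # 1 True True
```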
Dynamic behavior: an infotainment system HMI usually
has a very complex menu behavior. A menu change can be
triggered by a user action, for instance entering a destination in
the navigation system, or a message sent by underlying
applications, for instance an incoming call. Automotive HMIs
usually also contain a number of pop-up windows, which are
used to present temporary or spontaneous information to users.
For instance, the pop-up displayed on the instrument cluster
indicating an empty tank is initialized by the underlying
application [10].
There are many challenges associated with the implementation of connectivity: Android apps, Internet streaming of multimedia content, security, and more. Some solutions are already available for the connected car, e.g. MirrorLink, Miracast, DLNA, Ford AppLink, Apple CarPlay.
In addition, HMI systems encompass all the elements a
person will touch, see, hear, or use to perform control
functions and receive feedback on those actions. Where one
function might be served by pushbutton, key lock, and rotary
switches, multiple functions could require several screen
displays to cover operator functions and options. Feedback is
critical to operator effectiveness and efficiency. Feedback can
be visual, auditory, tactile, or any combination of these that is
necessary for the application. Feedback is essential in systems
that have no mechanical travel, such as a touchscreen or a
capacitive device that when triggered has no moving parts. In
some cases feedback provides confirmation of an action, while
in others it adds to the functionality.
To sum up, the complexity of infotainment HMIs is
growing with the functionalities of infotainment systems. A
user interface giving a faultless experience is one of the most
important requirements of today’s infotainment systems.
Difficulties of HMI testing compared to function tests are fully discussed in [3]. It is still a very common and widely used approach to specify an HMI on paper as:
- a description of software implementation details written as plain text;
- a menu flow layout for graphics specified as flow charts;
- a screen design defined in a drawing tool;
- texts in various languages (for internationalization) held in a spreadsheet.
The main problem of this approach is that when
specifications change during development, documents become
inconsistent and this results in a high number of expensive
change requests.
A. Challenges of HMI Testing
Furthermore, testing the automotive HMI faces new
challenges due to its special characteristics:
1. The HMI is an embedded system, which communicates
with the ECUs via underlying applications. The dynamic
menu behavior and the represented contents are dependent on
these underlying applications.
2. An infotainment system HMI is based on the concept
of screens and can include thousands of screens. Screens and
screen changes must be especially considered for testing.
3. The HMI is usually provided in several languages.
Therefore, concrete texts cannot be directly included in the
HMI.
4. An infotainment system HMI has a large set of
variants. Variability must be taken into account when
modeling and testing the HMI.
5. Fully automated HMI testing additionally requires physical test equipment such as actuators and vision sensors (see subsection C below).
B. HMI error types
Because of their nature, HMI errors are much more varied than function errors. We must first find out which HMI errors can occur in practice:
1. Menu behavior errors: these are caused by an erroneous implementation of the menu; typical symptoms of this error type are switching to an unexpected screen or missing menu changes.
2. Screen content errors: wrong content in static text is very common, as well as in dynamic text (sent from the underlying applications at runtime; for example, an incoming call number). Wrong characters or language errors can be displayed. In addition, the order of text rows, icons or tabs can be misleading. Graphic errors can occur. Texts or icons can also be completely missing or appear unexpectedly in a screen.
3. Pop-up errors: usually errors detected in connection
with pop-up windows are caused by underlying applications,
overloaded bus systems, or the board computer. For example,
the underlying application sends a wrong message to the HMI,
so that the HMI reacts with a different pop-up than expected.
4. Design errors: design rules define how screen contents
should be represented in terms of position, distances, colors,
fonts etc. Errors caused by breaking the design rules are
design errors. For instance, shifting of pixels is a frequent
design error.
5. HMI framework errors: in very few cases, errors can
also be caused by an erroneous HMI framework, e.g. bugs in
the event queue.
C. The need for proper test equipment
Another key point is the test equipment. The multiple screens and options of an HMI make manual verification difficult. Furthermore, the high number of languages and variants makes manual testing time-consuming.
In order to achieve (total) test automation for complex graphical and touch interfaces, an integration of vision systems, touch (robotic) actuators and test engines is required.
Vision systems play an important role in the inspection of HMIs: depending on the inspection result, the actuators route the unit under test to the respective area, such as rejection or acceptance. The camera has to be chosen based on vision basics (resolution, focal length, distance to object, lighting of the unit under test, etc.). Modern cameras offer various ready-to-use post-processing vision tools, such as pattern matching, color, edge detection, OCR, etc. Easy-to-use software is also included. It allows different testing jobs to be defined, such as identifying the regions of interest (telltales, text, etc.) and selecting the correct post-processing vision tool for each region of interest (OCR for text on the HMI, a color vision tool for telltales); when the camera is triggered, it returns the results for the defined jobs.
Obviously, the underlying hardware must be able to simulate the inputs and communication signals required by the unit under test. Automated test equipment is capable of covering all the test cases in a real-time environment, reducing errors and saving time.
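A minimal sketch of such inspection jobs follows, assuming OpenCV and pytesseract are available (region coordinates, expected text and thresholds are placeholders for a real fixture):

```python
# Sketch of a camera-based HMI check, assuming OpenCV and pytesseract are
# available. Two illustrative "jobs": OCR on a text region of interest and
# a colour check on a telltale region. Coordinates and thresholds are
# placeholders for a real test fixture.

import cv2
import numpy as np
import pytesseract

frame = cv2.imread("display_capture.png")          # frame grabbed from the camera

# Job 1: OCR on the title region of interest
title_roi = frame[10:60, 100:540]
text = pytesseract.image_to_string(title_roi).strip()
assert text == "Radio", f"unexpected title text: {text!r}"

# Job 2: colour check on a telltale region (is the warning icon lit amber?)
telltale_roi = cv2.cvtColor(frame[200:240, 20:60], cv2.COLOR_BGR2HSV)
amber = cv2.inRange(telltale_roi, np.array([15, 100, 100]), np.array([35, 255, 255]))
assert cv2.countNonZero(amber) > 100, "telltale not lit"

print("HMI inspection jobs passed")
```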
V. IMPROVING THE TESTING PROCESS
Testing activity is expensive and time-consuming, so it is important to choose effective test cases. Effectiveness basically means two things:
1. The test cases should show that, when used as expected,
the component that you are testing does what it is supposed to
do.
2. If there are defects in the component, these should be
revealed by test cases.
Test cases are developed using various test techniques to achieve more effective testing; see [4] for a brief description of the main testing methods and techniques. In this way, the testing conditions with the greatest probability of finding errors are chosen. Test techniques thus enable testers to design testing conditions in a systematic way. Combining different test techniques yields better results than relying on just one technique.
Fig. 5. Manual Testing Process
A. Software Requirement Specification
A first step to improve the testing process is to adopt the Software Requirement Specification (SRS) model to express the various expectations of a software product. In [5] we can find the IEEE recommended practice for the SRS, which describes recommended approaches for the specification of software requirements. Once the SRS document is ready, it becomes the single source of inspiration for testers to design their test cases for the software product.
B. Test Scripting
Another improvement is to translate manual test cases into automated test scripts.
A test script is a set of instructions, written in a scripting/programming language, that can run automatically without human interaction. Test scripts are used in automated testing.
There are many test automation tools that generate the test scripts for you, without the need for actual coding. Many of these tools have their own scripting languages.
For functional testing, test scripts typically follow a pattern, as reflected in Fig. 6, that:
- initializes the SUT;
- loops through a set of test cases, and for each test case:
  - initializes the target;
  - initializes the output to a value other than the expected output (if possible);
  - sets the inputs;
  - executes the SUT;
  - captures the output and stores the results to be verified at some later time, when a test report can be created [8].
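A minimal Python sketch of this pattern follows (not from [8]; the SUT interface and the lamp-driver stand-in are hypothetical):

```python
# Skeleton of a functional test script following the pattern above:
# initialize the SUT, loop through test cases, pre-set the output to a
# value other than the expected one, set the inputs, execute, and capture
# the result for the test report. All names are illustrative.

def run_functional_suite(sut, test_cases):
    sut.initialize()                          # initialize the SUT
    report = []
    for case in test_cases:                   # loop through the test cases
        sut.reset_target()                    # initialize the target
        sut.set_output(not case["expected"])  # force output != expected, if possible
        sut.set_inputs(case["inputs"])        # set the inputs
        sut.execute()                         # execute the SUT
        report.append({                       # capture output for later verification
            "case": case["name"],
            "expected": case["expected"],
            "actual": sut.get_output(),
        })
    return report                             # basis for the test report

class FakeSut:
    """Stand-in SUT: a lamp driver that turns the lamp on above a threshold."""
    def initialize(self): self.lamp = None
    def reset_target(self): self.lamp = None
    def set_output(self, value): self.lamp = value
    def set_inputs(self, inputs): self.voltage = inputs["voltage"]
    def execute(self): self.lamp = self.voltage > 9.0
    def get_output(self): return self.lamp

cases = [
    {"name": "below threshold", "inputs": {"voltage": 5.0}, "expected": False},
    {"name": "above threshold", "inputs": {"voltage": 12.0}, "expected": True},
]
for row in run_functional_suite(FakeSut(), cases):
    print(row)
```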
There are many scripting languages: Python, Ruby, Perl, JavaScript, CAPL, … Which one to use depends upon what we are automating, the project, the expected outcomes and so on.
Fig. 6. Test Script Automation
C. Capture & Replay
An alternative approach is capture & replay: while the tests are executed manually, they are recorded so that they can later be replayed several times.
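A minimal sketch of the idea (a hypothetical event interface; a real tool would hook into the HMI's input layer):

```python
# Minimal capture & replay sketch: user events are recorded to a file
# during a manual session and can be replayed later, preserving their
# relative timing. The event interface is hypothetical.

import json, time

class Recorder:
    def __init__(self):
        self.events = []

    def capture(self, event):                  # called by the input-event hook
        self.events.append({"t": time.time(), "event": event})

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.events, f)

def replay(path, inject):
    """Re-inject recorded events with their original relative timing."""
    with open(path) as f:
        events = json.load(f)
    prev = events[0]["t"] if events else 0
    for e in events:
        time.sleep(max(0.0, e["t"] - prev))
        prev = e["t"]
        inject(e["event"])

# usage: recorder.capture("press_joystick"); later: replay("session.json", hmi.inject)
```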
However, test execution tools do not help in developing the test cases. Test cases have to be developed by humans who, while reading and studying specifications, think hard about what to test and about how to write test scripts that test what they want to test. One of the main bottlenecks for automating the test generation process is the shape and status of the specifications. In the first place, as explained above, many current-day specifications are unclear, incomplete, imprecise and ambiguous, which is not a good starting point for the systematic development of test cases. In the second place, current-day specifications are written in natural language, e.g., English. Natural-language specifications are not easily amenable to tools for the automatic derivation of test cases.
Fig. 7. Capture & Replay
D. Model-based Testing
A further improvement to testing is the use of a model to describe the behavior of a system. The model-based paradigm is very pervasive today. Models can be utilized in many ways throughout the product life cycle, including improved quality of specifications, code generation, reliability analysis, and test generation.
There are several advantages of model-based testing:
1. The test model is usually quite small, easy to
understand, and easy to maintain.
2. The use of test models often allows traceability from
requirements to test cases.
3. MBT can be used for testing after system development as well as for test-first approaches. As is well known, test-first approaches help reduce costs. Furthermore, experience shows that the early creation of formal test models also helps in finding faults and inconsistencies within the requirements.
4. The test model can be used to automatically generate
small or huge test suites that satisfy a corresponding coverage
criterion.
Fig. 8. Model-based Testing
We introduced model-based testing in Section II. We cannot possibly discuss all software models in detail, but we introduce a subset of models that have been useful for testing.
1. Finite state machine / state diagram: a mathematical model of computation applicable to any system that can be accurately described with a finite number (usually quite small) of specific states.
2. UML: the unified modeling language (UML) models
have the same goal as any model but replace the graphical-
style representation of state machines with the power of a
structured language. UML can also include other types of
models within it, so that finite state machines and statecharts
can become components of the larger UML framework.
3. Markov chain: It is a stochastic model, a directed
graph, in which states of use are connected by arcs labeled
with usage events. A usage event is an external stimulus
applied to the system under test, while different states of use
are used to enable proper sequencing and relative likelihood of
inputs [11]. A specific class of Markov chains, the discrete-parameter, finite-state, time-homogeneous, irreducible Markov chain, has been used to model the usage of software. It is structurally similar to a finite state machine and can be thought of as a probabilistic automaton. Its primary worth has been not only in generating tests, but also in gathering and analyzing failure data to estimate measures such as reliability and mean time to failure.
4. Grammars: they have mostly been used to describe the
syntax of programming and other input languages.
Functionally speaking, different classes of grammars are equivalent to different forms of state machines. Sometimes they are a much easier and more compact representation for modeling certain systems, such as parsers. Although they require some training, they are thereafter generally easy to write, review, and maintain. However, they may present some concerns when it comes to generating tests and defining coverage criteria.
5. Decision table/tree: a decision table is a compact way
to model complex rule sets and their corresponding actions.
Decision tables associate conditions with actions to perform.
Decision tables provide a systematic way of stating complex
business rules, which is useful for developers as well as for
testers. Decision tables help testers explore the effects of
combinations of different inputs and other software states that
must correctly implement business rules. A decision tree is a
flowchart-like structure in which each internal node represents
a "test" on an attribute, each branch represents the outcome of
the test and each leaf node represents a class label (decision
taken after computing all attributes). The paths from root to leaf represent classification rules.
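As a small illustration of the decision-table model just described (the seat-belt-reminder rules are invented for the example), each row of the table maps a combination of conditions to the expected action, and each row becomes one test case:

```python
# Sketch of a decision table used to derive test cases: each row maps a
# combination of conditions to the expected action, and each row becomes
# one test case. The rules are illustrative (seat-belt reminder logic).

DECISION_TABLE = [
    # (ignition_on, belt_fastened, vehicle moving) -> chime expected?
    ((True,  False, True),  True),
    ((True,  True,  True),  False),
    ((True,  False, False), False),
    ((False, False, True),  False),
]

def belt_chime(ignition_on, belt_fastened, moving):   # system under test (example)
    return ignition_on and not belt_fastened and moving

for (ignition, belt, moving), expected in DECISION_TABLE:
    actual = belt_chime(ignition, belt, moving)
    assert actual == expected, f"rule {(ignition, belt, moving)} failed"
print(f"{len(DECISION_TABLE)} decision-table test cases passed")
```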
E. Formal methods
Due to their mathematical underpinnings, formal methods make it possible to specify systems with more precision, more consistency and less ambiguity. Moreover, formal methods make it possible to formally simulate, validate and reason about system models, i.e., to prove with mathematical precision the presence or absence of particular properties in a design or specification. This makes it possible to detect deficiencies earlier in the development process. An important aspect is that specifications expressed in a formal language are much more easily processable by tools, hence allowing more automation in the software development trajectory [13].
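The following toy sketch conveys the flavor of such machine-checkable reasoning on a finite, hypothetical menu model (a didactic illustration, not a full formal method): instead of sampling behaviors, every reachable state is enumerated to prove that a property, here the absence of dead ends, holds on the model:

```python
# Toy illustration: when the specification is a machine-readable model,
# properties can be checked exhaustively rather than by sampling. Here
# every reachable state of a small (hypothetical) menu model is enumerated
# to prove that the Main screen can always be reached again (no dead ends).

TRANSITIONS = {
    "Main":  {"rotate": "Radio"},
    "Radio": {"rotate": "Media", "back": "Main"},
    "Media": {"back": "Main"},
}

def reachable(frm):
    """Return the set of states reachable from `frm` (depth-first search)."""
    seen, stack = set(), [frm]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(TRANSITIONS.get(s, {}).values())
    return seen

# Property: from every reachable state, "Main" is reachable again.
for state in reachable("Main"):
    assert "Main" in reachable(state), f"dead end at {state}"
print("property holds on the model")
```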
Currently, there are more than 100 different formal methods (FMs) listed in the repository at the World Wide Web Virtual Library on Formal Methods, http://formalmethods.wikia.com/wiki/VL. This is a de-facto database of anything relating to formal methods, with entries mostly from the USA and Europe. Formal methods are in different stages of development, spanning a wide spectrum from formal languages with no tool support to internationally standardized languages with tool support and industrial users.
The field of formal methods is in great flux and evolving rapidly, leaving research laboratories and making inroads into industrial practice. Companies that have safety, security, compliance with (often international) standards, certification, or product quality in mind are interested in formal methods.
Few projects in the automotive field report concrete usage of formal methods or formal specifications. An explanation for this could be that, if projects are not associated with highly safety-critical features in the vehicle, engineers do not give consideration to formal methods. In addition, there are few proposals for the use of formal methods in software testing within functional safety standards. Researchers working in the field of software engineering should address these points in order to make formal methods more attractive to the automotive industry.
VI. IMPROVING THE HMI TESTING PROCESS
To differentiate themselves from the competition and attract buyers, automotive manufacturers use electronics to introduce advanced multimedia and convenience features such as satellite radio, CD and DVD players, TV, navigation systems, telemetry systems, hands-free phone, and so on. There is an emphasis on developing and deploying such systems to meet consumers' requirements, while keeping them simple enough to use and of high enough quality to avoid the need for costly software bug fixes.
A. Main issues of IVI
To obtain easy-to-use systems, a significant amount of time and effort is devoted to designing the HMIs for these systems, in addition to designing the underlying electronics.
The starting point is usually the functional requirements, which are often written in terms of the customer's interaction with the interface device. For example, a typical requirement for the radio system could be the following:
“The Main screen shows five applications: Radio, Media,
Navi, Phone, Setup. The applications can be chosen with
joystick rotation. To select an application the user needs to
press the joystick button.”
A typical development process takes these requirements and proceeds through the system design and implementation stages to test the system and determine whether it meets the requirements. If it does not, an expensive and time-consuming debugging process follows to determine whether the implementation is faulty or whether one or more requirements are inconsistent with each other.
In either case, because functional and performance testing
does not begin until prototype hardware is available, there is a
significant time lag to turn around design changes to meet
requirements changes or fix design errors. To address these
issues, many product development organizations are shifting
from this hardware-based traditional development cycle,
which relies on designing with a prototype and test iteration,
to Model-Based Design [12].
B. Model Based approach for IVI
By using models in the early design stages, engineers can create what are known as "executable specifications" that enable them to immediately validate and verify the specifications against the requirements. Thus, Model-Based Design allows engineers to detect errors earlier in the development process, when the cost of fixing them is lower. Furthermore, models can be used to communicate between engineering teams with different specializations, allowing them to work together and to communicate between stages in the overall process. Moreover, initial design models can be incrementally extended to include increasing implementation detail. Thus, Model-Based Design allows engineers to explore different design alternatives early in the design process using the models that are part of the executable specification.
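As a minimal illustration, the sample radio requirement quoted in Section VI.A can be turned into an executable specification (a sketch; the class and its interface are invented for the example) that tests can exercise long before hardware exists:

```python
# Sketch of an "executable specification" for the Main-screen requirement
# quoted in Section VI.A: joystick rotation cycles through the five
# applications and pressing the button selects the focused one. The model
# itself can be exercised by tests long before hardware is available.

APPS = ["Radio", "Media", "Navi", "Phone", "Setup"]

class MainScreen:
    def __init__(self):
        self.focus, self.selected = 0, None

    def rotate(self, steps=1):                 # joystick rotation
        self.focus = (self.focus + steps) % len(APPS)

    def press(self):                           # joystick button
        self.selected = APPS[self.focus]

# Validating the requirement directly against the model:
screen = MainScreen()
screen.rotate(3)
screen.press()
assert screen.selected == "Phone"
print("executable specification satisfies the sample scenario")
```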
After the concept phase is completed, the same executable
specification (containing the model) is used by the next team
which may need to include detailed implementation effects
into the model. This process continues with each part of the
design team elaborating the same model used by the previous
team and testing their contribution to the design. In contrast, a
document-centered approach requires each design stage to
generate new artifacts or design documents to communicate
the state of the design as it passes from one stage to the next
[12].
VII. CONCLUSION
This paper presented the general aspects of automotive software testing under the model-based design method. The specific focus was on infotainment functions, which are complex and for which traditional methods are not enough to meet the previously defined requirements.
REFERENCES
[1] R. Awédikian and B. Yannou. "Design of a Validation Test Process of an Automotive Software." International Journal on Interactive Design and Manufacturing (IJIDeM), vol. 4, issue 4, pp. 259-268, November 2010.
[2] M. Broy, S. Kirstan, H. Krcmar and B. Schätz. "What is the Benefit of a Model-Based Design of Embedded Systems in the Car Industry?". Emerging Technologies for the Evolution and Maintenance of Software Models. DOI: 10.4018/978-1-61350-438-3.ch013. pp. 343-369. 2012.
[3] T.-H. Chang, T. Yeh and R. C. Miller. "GUI Testing Using Computer Vision." In Proceedings of the 28th International Conference on Human Factors in Computing Systems (CHI '10). ACM, New York, NY, USA, pp. 1535-1544, 2010.
[4] I. Jovanovic. "Software Testing Methods and Techniques." The IPSI BgD Transactions on Internet Research, vol. 5, no. 1.
[5] IEEE Recommended Practice for Software Requirements Specifications. 1998. DOI: 10.1109/IEEESTD.1998.88286. ISBN 0-7381-0332-2.
[6] Vector Informatik. Programming with CAPL – CANoe and CANalyzer. December 14, 2004.
[7] I. K. El-Far and J. A. Whittaker. Model-based Software Testing. Encyclopedia of Software Engineering (edited by J. J. Marciniak), Wiley, 2001.
[8] M. Blackburn, R. Busser and A. Nauman. Why Model-Based Test Automation is Different and What You Should Know to Get Started. Software Productivity Consortium, NFP. Dec. 2008.
[9] M. Utting, A. Pretschner and B. Legeard. A Taxonomy of Model-Based Testing. Working Paper Series. ISSN 1170-487X. April 2006.
[10] L. Duan. Model-Based Testing of Automotive HMIs with Consideration for Product Variability. Dissertation. University of Munich, Munich, 2012.
[11] S. J. Prowell. Using Markov Chain Usage Models to Test Complex Systems. Proceedings of the 38th Hawaii International Conference on System Sciences. 2005.
[12] C. Fillyaw, J. Friedman and S. M. Prabhu. Creating Human Machine Interface (HMI) Based Tests with Model-Based Design. SAE World Congress 2007. 2007-01-0780.
[13] J. Tretmans and A. Belinfante. Automatic Testing with Formal Methods. University of Twente, The Netherlands.