Safe and Secure Architecture Using Diverse
Formal Methods
Thierry Lecomte1
CLEARSY, 320 avenue Archimède, Aix-en-Provence, France
thierry.lecomte@clearsy.com
Abstract. The distribution of safety functions along the tracks requires the networking of the ECUs^1 that support them, to facilitate their operation and maintenance. Networking enables logs to be sent, and commands to be received and issued that change the state of connected equipment or update the ECU application software. All these activities are natural targets for attacks aimed at reducing the availability of the equipment or disrupting its operational safety to the point of causing accidents. This article presents an innovative approach that partitions security and safety over two different computers. One computer, connected to the network, ensures security and is regularly updated according to known threats. The other computer ensures safety and communicates only through a secure filter. Each computer embeds technological elements that have been specified, implemented, and proven with two different formal methods.
Keywords: formal methods, cybersecurity, safety
Keywords: formal methods, cybersecurity, safety
1 Introduction
Railway signalling is a safety-critical system whose responsibility is to guarantee the safe and efficient operation of railway networks. Decentralised railway signalling systems have the potential to increase the capacity and availability of railway networks and to reduce their maintenance costs. Given the safety-critical nature of railway signalling and the complexity of novel distributed signalling solutions, their safety should be guaranteed. With the forthcoming progressive distribution of the signalling functions (sensing, decision making, controlling) based on network connectivity, it is also mandatory to ensure their security. The two worlds, namely safety and security, are quite orthogonal: one requires resistance to "probabilistic failures", while the other requires resistance to specifically crafted attacks that target existing vulnerabilities at the right moment. Their requirements are sometimes contradictory, as safety-critical systems are usually expected to last decades without modification once certified, while secure systems are supposed to evolve often to take into account newly uncovered vulnerabilities.
^1 Electronic Control Unit: an embedded system that controls one or more electrical systems or subsystems.
The segregation of security from safety, enabling system updates at a different pace and in a decorrelated manner, has been at the centre of our technical thinking and has led to a research project in which safety and security aspects are developed on different computers connected by a single secure link. This article presents the architecture being developed, based on existing, formally proven building blocks, and outlines its future deployment modes for distributed computation.
This paper is structured in seven parts. Section 2 introduces the terminology. Section 3 presents safety computation and computers. The CLEARSY Safety Platform, the safety-related building block, is presented in Section 4. Section 5 presents the security requirements coming from various environments and standards. Section 6 sketches the technical architecture, before Section 7 concludes.
2 Terminology
This section contains specific definitions, concepts, and abbreviations used throughout this paper.
ASIC refers to Application-Specific Integrated Circuit. It is an integrated circuit chip which often includes entire microprocessors, memory blocks, and other large building blocks.
CRC refers to Cyclic Redundancy Check. It is a checksum used for error
detection.
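As a concrete illustration, the following C sketch computes a bitwise CRC-32 with the widely used reflected polynomial 0xEDB88320 (the variant found in Ethernet and zlib); the actual CRC parameters used by the equipment discussed later in this paper are not published here.

    #include <stdint.h>
    #include <stddef.h>

    /* Bitwise (table-less) CRC-32, reflected polynomial 0xEDB88320.
       Illustrative only: the platform's real CRC parameters may differ. */
    uint32_t crc32(const uint8_t *data, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
        return ~crc;
    }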
Formal methods refers to mathematically rigorous techniques for the specification, development, and verification of software and hardware systems. [8] identifies a collection of formal methods and tools that have been applied in railways.
Safety refers to the control of recognised hazards in order to achieve an
acceptable level of risk.
Cybersecurity refers to the protection of digital systems and related networks from information disclosure, theft of or damage to their hardware, software, or electronic data, as well as from the disruption or misdirection of the services they provide.
HSM refers to Hardware Security Module. It is a device that safeguards and manages digital keys, and performs encryption and decryption functions for digital signatures, strong authentication, and other cryptographic functions.
OT refers to Operational Technology. It is the hardware and software that detects or causes a change through the direct monitoring and/or control of industrial equipment, assets, processes, and events. It is associated with industrial control system environments.
PKI refers to Public Key Infrastructure. It is a set of roles, policies, hardware, software, and procedures needed to create, manage, distribute, use, store, and revoke digital certificates and manage public-key encryption. It binds public keys with the respective identities of entities. The binding is established through a process of registration and issuance of certificates at and by a certificate authority.
Softcore refers to a digital circuit that can be wholly implemented using logic synthesis on an FPGA (a programmable component). It allows digital circuits (processors) to be run and assessed without creating a costly ASIC^2, albeit at a slower execution speed.
TEE refers to Trusted Execution Environment. It is a secure area of a main processor. It guarantees that code and data loaded inside it are protected with respect to confidentiality and integrity.
TPM refers to Trusted Platform Module. It is a secure cryptoprocessor: a dedicated microcontroller designed to secure hardware through integrated cryptographic keys, compliant with the TPM international standard.
3 Ensuring safety
Safety computers control systems whose catastrophic failure may lead to people being injured or killed. These computers have to be designed in conformance with domain-related standards. These standards provide guidance and recommendations based on industry feedback and the current state of the art. While they provide recommendations, they do not provide solutions: it is up to the designer to find a way to comply with the constraints. Following the standards is the easiest way to get a system certified. It is always possible not to follow the standards and to provide an original design, but in this case the designer has to demonstrate that the design is safe, based not only on simulation but on reasoned argument [14]. In many cases, certification is carried out by an individual who takes full responsibility in case of error, and who is therefore not inclined to stray from the standard.
For SIL3 and SIL4 functions, one processor is not enough to reach the expected safety/reliability, so a second processor is often used in combination with a voter. In case of diverging execution between the two processors, the system has to fall back to a safe mode [13] (also called restrictive mode); usually the system simply stops its execution [6]. To improve availability, more than two processors may be used, together with a voter, ensuring continuous service even if one processor fails. In [2], one processor performs the computation while a co-processor checks memory and program counter coherency.
Common-mode failure is one of the main aspects to take into account, as the approach relies on it being unlikely that the same failure happens on several processors at the same time under the same conditions (if a common-mode failure is possible, the voter cannot detect divergent execution and safety is not ensured at all). It is not required that every part of a redundant system be developed differently from the others, with components of different technologies, different teams, different programming languages, different tools, and so on. However, the safety case has to demonstrate that common-mode failures are excluded by the development and verification cycle, compilation techniques, etc.
^2 The non-recurring engineering cost of an ASIC runs into millions of euros, while an FPGA board costs hundreds or thousands of euros.
A safety computer has to implement a number of technical features such as:
– a watchdog [12] to check liveness. The watchdog has to be implemented in hardware and must not be re-programmable on the fly by software. Several watchdogs are typically used for low-, medium-, and high-latency actions.
– a memory checker to check coherency. In [9] each variable has two fields: a value (integer) and its corresponding signature (code^3). When an arithmetic operation or an assignment is performed, both the value and the code are modified. If the code and the value do not correspond, a memory corruption has been detected (a simplified sketch of this coded-variable principle is given after this list). Program counter corruption is also detected with so-called compensation tables: each function is initialised with a value that is modified (masked) every time an instruction is executed. At the end of the function, the calculated value depends on the path taken (branches). The compensation table contains all possible values: if the calculated value does not belong to the compensation table, the program counter has been corrupted and the execution of the program has to stop.
– a voter for input correlation. Usually inputs are not safety-related: they are simply captured by a sensor and provided to the computer [11]. Inputs have to be cross-checked against data captured by another sensor, or by the same sensor as seen from another processor. In the case of analog data, different approaches may be used to reconcile input data and avoid the system being switched off [5].
– the ability to communicate with other safety computers [10], which involves several time-consuming verifications. The use of a medium with a protocol aimed at improving availability could jeopardise safety [7].
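To make the coded-variable principle of [9] more tangible, here is a minimal C sketch; the constants and the signature formula are invented for the example, and a real vital coded processor uses richer arithmetic whose signatures also propagate through operations:

    #include <stdint.h>
    #include <stdlib.h>

    /* Invented signature parameters, for illustration only. */
    #define A 65537u
    #define B 15286u
    #define P 4294967291u            /* largest prime below 2^32 */

    typedef struct {
        uint32_t value;
        uint32_t code;               /* signature tracking the value */
    } coded_t;

    static uint32_t sig(uint32_t v)
    {
        return (uint32_t)(((uint64_t)A * v + B) % P);
    }

    /* An assignment updates the value and its code together. */
    coded_t coded_set(uint32_t v)
    {
        coded_t x = { v, sig(v) };
        return x;
    }

    /* A read re-checks the pair; a mismatch means memory corruption. */
    uint32_t coded_get(const coded_t *x)
    {
        if (x->code != sig(x->value))
            abort();                 /* corruption detected: stop safely */
        return x->value;
    }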
In [3], the Vital Coded Processor is used in combination with the B formal method: the former detects errors introduced by the code production chain (compiling, linking, etc.) or resulting from hardware failures, while the latter detects design or coding errors.
4 The CLEARSY Safety Platform
The CLEARSY Safety Platform is a generic PLC able to perform command and control over inputs and outputs. For safety-critical applications, the PLC has to be able to determine whether it is fully functional or not. In case of failure, the PLC should move to a restrictive mode where all the outputs are deactivated. The higher the risk of harming people in case of failure, the higher the required Safety Integrity Level. For SIL3 and SIL4, the computations have to be performed by a minimum of two processors and checked with a voting system.
The CLEARSY Safety Platform is made of two parts: an IDE to develop the software and an electronic board to execute it. From a safety point of view, the current architecture is valid for any kind of mono-core processor.
^3 One value is associated with a single code, but a code may be associated with several values.
Multi-core applications would require a hypervisor, a Memory Management Unit (MMU), and a micro-kernel able to guarantee memory isolation.
The full development and execution process is described in Fig. 1, where CPU1 and CPU2 are PIC32 microcontrollers.
Fig. 1: Full path from function description to safe execution.
It strictly follows the B method, which can be summarised as follows:
– the specification model is written first from the natural language requirements (Function); then comes the implementation model, both using the same language (B).
– models are proved to be coherent and to be correct refinements.
– source code or binary is generated from the implementation model:
• Replica 1 (a HEX file) is compiled directly from the implementation B model. The compiler has been developed in-house to support this technology.
• Replica 2 (also a HEX file) is generated in two steps. First, the implementation models are translated to C, using the Atelier B C code generator. Then the C code is compiled with gcc.
– The two binaries are linked with a top-level sequencer and a safety library, both developed in B by the CLEARSY Safety Platform IDE development team once and for all, to constitute the final software.
– This software is then loaded into the flash memory of the two microcontrollers (bootload mode).
– When the board enters execution mode or is reset, the content of the flash memory is copied into RAM on both microcontrollers, which start executing it.
– On each microcontroller, the top-level sequencer enters a never-ending loop and:
• calls Replica 1 then Replica 2 in sequence, for one iteration each;
• calls the safety library in charge of performing verification;
• if the verification fails, the board enters panic mode, deactivates its outputs, and enters an infinite loop doing nothing.
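The following C fragment sketches the shape of this never-ending loop. It is only an outline, since the real sequencer and safety library are themselves developed in B; all function names below are placeholders:

    #include <stdnoreturn.h>

    void replica1_step(void);     /* binary compiled directly from B  */
    void replica2_step(void);     /* binary obtained via C and gcc    */
    int  safety_check(void);      /* safety library verifications     */
    void deactivate_outputs(void);

    noreturn void panic(void)
    {
        deactivate_outputs();
        for (;;) { }              /* restrictive safe state: do nothing */
    }

    void sequencer(void)
    {
        for (;;) {
            replica1_step();      /* one iteration of Replica 1 */
            replica2_step();      /* one iteration of Replica 2 */
            if (!safety_check())  /* divergence or failed check */
                panic();
        }
    }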
For the safety case, the feared event is the wrong powering of one of the outputs, i.e. an output that has to be OFF (the relay should not be powered) but is currently ON (the relay is powered). The power is provided by both microcontrollers, so if either of the two is reset, the relay is not powered and the board is in a restrictive safe state. The safety principles are distributed over the board and the safety library. The safety case demonstrates that the verifications performed during development and execution are sufficient to ensure the target safety integrity level.
The bootloader, on the electronic board, checks the integrity of the program (CRC, separate memory spaces). Then both microcontrollers start to execute the program. During execution, the following verifications are conducted; if any of them fails, the board enters panic mode:
– internal verification (performed within a single microcontroller):
• every cycle, the Replica 1 and Replica 2 data memory spaces (variables) are compared within each microcontroller;
• regularly, the Replica 1 and Replica 2 program memory spaces are compared. This verification is performed "in the background" over thousands or millions of cycles, to keep a reasonable cycle time;
• regularly, the consistency between memory output states and physical output states is checked, to detect whether the board is unable to command its outputs.
– external verification (performed between the two microcontrollers):
• regularly (at most every 50 ms), the data memory spaces (variables) are compared between CPU1 and CPU2.
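A possible shape for the per-cycle internal comparison is sketched below; the symbol names are hypothetical, and on the real board the memory areas and their sizes would come from the linker script:

    #include <stdint.h>
    #include <stddef.h>

    extern uint32_t replica1_vars[];  /* Replica 1 data memory image */
    extern uint32_t replica2_vars[];  /* Replica 2 data memory image */
    extern size_t   vars_words;

    int data_memories_agree(void)
    {
        for (size_t i = 0; i < vars_words; i++)
            if (replica1_vars[i] != replica2_vars[i])
                return 0;             /* divergence: enter panic mode */
        return 1;
    }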
The safety is built on top of several principles:
– a B formal model of the function to develop, proved to be coherent, to correctly implement its specification, and to be free of programming errors, i.e. no division by zero, no overflow, no out-of-range array access;
– four instances of the same function running on two microcontrollers (two per microcontroller, with different binaries obtained from diverse tool-chains) and the detection of any divergent behaviour among the four instances;
– the deferred cross-verification of the programs on board the two microcontrollers;
– outputs require both CPU1 and CPU2 to be alive and running, as one provides the energy and the other the command;
– physical output states are regularly verified to comply with the software output states, to check the ability of the board to command its outputs;
– input signals are continuous (0 or 5 V) and are made dynamic (by adding a frequency signal) so that a short-circuit current is not taken as high-level (permissive) logic (see the sketch below).
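The last principle can be illustrated with a small C sketch: a permissive input is trusted only while the superimposed frequency signal keeps toggling, so a wire shorted to 5 V (which no longer toggles) is read as restrictive. The window size and edge threshold are invented for the example:

    #include <stdint.h>

    #define WINDOW    100        /* samples examined per decision      */
    #define MIN_EDGES   4        /* transitions expected in the window */

    int input_is_permissive(const uint8_t samples[WINDOW])
    {
        int edges = 0;
        for (int i = 1; i < WINDOW; i++)
            if (samples[i] != samples[i - 1])
                edges++;
        /* A stuck level (0 or 5 V) produces no edges: fail restrictive. */
        return edges >= MIN_EDGES;
    }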
5 Cybersecurity requirements
Railway systems are becoming vulnerable to cyber attacks due to the move away from bespoke stand-alone systems towards open-platform, standardised equipment built using Commercial Off The Shelf (COTS) components, and the increasing use of networked control and automation systems that can be accessed remotely via public and private networks. The connection of a safety computer to any network is not secure, as this computer has been designed to resist "probabilistic failures", not specifically crafted attacks that target existing vulnerabilities at the right moment. This connection eases the operation of the safety computer as it allows:
– log compilation and emission, towards maintenance equipment or a supervision system. This feature helps to gain an understanding of the internals of the device, but not to directly modify its behaviour.
– command reception and emission, when safety computers are used in combination. This feature is security-critical, as an attacker impersonating another device could modify the behaviour of the device.
– safety-critical firmware update. This feature is security-critical, as it allows an attacker to fully reprogram the device and to implement any dangerous behaviour.
None of the safety features implemented by the CLEARSY Safety Platform protects against such attacks. In particular, the main integrity check is based on a CRC, which is not considered a cryptographic primitive^4. Received messages can only be checked to be well-formed, not to have been issued by a valid emitter.
^4 CRCs are not robust to collision attacks, meaning that somebody can take a given CRC and easily find a second input that matches it.
Moreover, embedding cryptographic capabilities (algorithms, data storage) requires resources (computing power, memory) that are not necessarily available on board. Ciphering and deciphering, generating and managing keys, and controlling correct protocol execution imply extra processing time that could prevent hard real-time compliance (in OT, availability is preferred over security).
The Technical Specification CLC/TS 50701 'Railway applications – Cybersecurity' was issued in 2021 to provide requirements and recommendations for handling cybersecurity in a unified way for the railway sector. This specification takes into consideration relevant safety-related aspects (EN 50126) and takes inspiration from different sources (IEC 62443-3-3, CSM-RA), adapting them to the railway context. It covers numerous key topics such as the railway system overview, cybersecurity during a railway application life cycle, risk assessment, security design, cybersecurity assurance and system acceptance, vulnerability management, and security patch management. In this paper, the focus is on security design, without neglecting the other aspects, which are all important.
OT security implies:
– confidentiality (keeping data secret): ensures that sensitive information is accessed only by authorised persons and kept away from those not authorised to possess it;
– integrity (keeping data clean): ensures that information is in a format that is true and correct to its original purposes;
– availability (keeping data accessible): ensures that information and resources are available to those who need them.
Usually, security design implements these three principles with public-key cryptography and specific hardware constituting a Root of Trust, a source that can always be trusted within a cryptographic system and that is critical for a PKI. The hardware could rely on a TPM^5, an HSM^6, or any Secure Enclave module. In addition, an isolated execution environment (a TEE, such as ARM TrustZone) provides security features such as isolated execution, integrity of the applications executing within the TEE, and confidentiality of their assets.
^5 A TPM contains a hardware random number generator, facilities for the secure generation of cryptographic keys for limited uses, a generator of unforgeable hash key summaries of a configuration, and a data encryptor/decryptor.
^6 An HSM is similar to a TPM. HSMs are focused on performance and key storage space, whereas TPMs are only designed to keep a few values and a single key in memory and do not put much effort into performance.
The security standards do not impose any particular architecture, so the detailed design may vary depending on the hardware platform (off-the-shelf component, softcore model running on an FPGA, tailored ASIC) and its associated security features, on the software architecture (bare-metal or OS-based application), and on the selected communication protocols. The demonstration of compliance with security standards also varies: IEC 62443 covers the whole development cycle, while the Common Criteria-based CSPN^7 only requires a Security Target document, a user manual, and third-party penetration testing.
^7 Certification de Sécurité de Premier Niveau - https://www.ssi.gouv.fr/administration/produits-certifies/cspn/
Railway critical infrastructures have to demonstrate strong resilience, as the attack surface (the network) is large (especially with communication-based safety systems) and the (usually nation-state) attackers have a high level of expertise and extensive resources. Equipment subject to cyber attacks has to resist reverse engineering and physical attacks (side-channel, timing). It must also ensure a proper level of protection by taking into account newly discovered vulnerabilities and by regularly updating its software.
6 Resulting architecture
6.1 Introduction
Designing equipment combining safety and security on the same computer is difficult. Cybersecurity mechanisms [4] are difficult to reconcile with the real-time constraints of programmable controllers. Safety and security requirements are also sometimes contradictory and lead to conflicts (a certified safety system is expected not to evolve any more, while updates of a security system protect against new attacks). Moreover, the use of technologies not fully mastered by the designer may leave unwanted access to the resources to be protected. This includes for example:
– the BadUSB exploit: USB flash drives containing a programmable Intel 8051 microcontroller can be reprogrammed, turning a USB flash drive into a malicious device.
– the UEFI vulnerabilities: as of February 2022, 23 vulnerabilities had been identified in the firmware (in one library of a widely used framework). An attacker with privileged user access to the targeted system can exploit these vulnerabilities to install highly persistent malware. The attacker can bypass endpoint security solutions, Secure Boot, and virtualisation-based security. The active exploitation of all the discovered vulnerabilities cannot be detected by firmware integrity monitoring systems due to limitations of the TPM.
6.2 Original Architecture
An original architecture (Fig. 2) has been designed and is being implemented and assessed during the CASES project^8. The CASES project was selected in the first call for projects "Development of critical innovative technologies" launched by BPI France to co-finance R&D on innovative and critical technological bricks in cybersecurity. The project, entirely executed by CLEARSY, aims to build a safe and secure generic sovereign computer, enabling critical infrastructures to be controlled and commanded with the highest level of integrity. The project consists in separating safety and security onto two different computers, with a formally proven communication link in between.
The CASES ECU consists of two ECUs:
– a SIL4 safety computer, the CLEARSY Safety Platform CS0 (see Section 4). This computer is based on two PIC32 microcontrollers, a secure bootloader (integrity), and a safety library to ensure safe operation even in case of malfunction. This ECU runs a monitoring/control application (reading the status of sensors, controlling outputs), which can be a purely computational application if no inputs or outputs are monitored. The safety ECU can receive updates to the application (safety firmware) and send/receive commands from the outside.
– a security computer, which provides the (wired) interface between the safety computer and the outside world. This computer is based on a RISC-V type processor. A secure microkernel, ProvenCore^9, isolates the different services offered: update of the safety firmware, reception and emission of commands, and transmission of information (supervision). Communications are secured through a VPN and a TCP/TLS stack.
These two ECUs each take the form of a daughter board to be plugged onto a motherboard providing power and secure inputs/outputs. The resulting ECU combines best practices in the areas of:
^8 https://www.clearsy.com/en/research-and-development/project-cases/
^9 https://provenrun.com/products/provencore/
Fig. 2: Safe and secure architecture.
– safety: the CLEARSY Safety Platform computer is certified at SIL4 by Bureau Veritas. SIL4 is the highest level defined by the EN 50126 standard (railways);
– security: the ProvenCore microkernel is certified at EAL7 by ANSSI^10. EAL7 is the highest level defined by the Common Criteria certification scheme.
^10 Agence Nationale de la Sécurité des Systèmes d'Information - https://www.ssi.gouv.fr/en/
6.3 Rationale
Both elements are formally developed and proved:
– the safety library and the vital part of the application are developed in B. Among the properties modelled are the correct verification of microcontroller structural elements such as the RAM and ALU, and the management of deadlines with watchdogs;
– the micro-kernel is developed in the Smart language^11, using proprietary tools integrated into Eclipse. The main property of ProvenCore [1] is the isolation property: it ensures that the resources of a process cannot be observed and cannot be tampered with by other processes, unless said process gives explicit authorisation.
^11 Developed by Prove & Run, Smart lets one write both the implementation and the specifications, including the various properties, axioms, auxiliary lemmas, and so on. Smart is a strongly-typed polymorphic functional language with algebraic data types (structures and variants).
The communication filter is the only link (a serial interface) between the two ECUs. It is developed in B and implements a grammar of messages defined once and for all, including a number of integrity/security features such as cryptographic hashes and a message counter, based on shared secrets. Non-complying messages are discarded (a hypothetical sketch of such a check is given below). Once the CLEARSY Safety Platform bootloader is able to handle this communication, the security ECU may evolve without compromising the safety ECU certificate, as long as the communication protocol remains unchanged.
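As an illustration of such filtering, the C sketch below validates an incoming frame with a keyed MAC and a strictly increasing counter. The message layout, counter width, and MAC construction are hypothetical, since the actual CASES grammar is not published; hmac_tag() stands for any keyed MAC computed from the shared secret:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    typedef struct {
        uint32_t counter;        /* strictly increasing: stops replays  */
        uint8_t  payload[32];    /* fixed-size message, per the grammar */
        uint8_t  tag[16];        /* MAC over counter || payload         */
    } frame_t;

    /* Placeholder for a keyed MAC (e.g. an HMAC) over the shared secret. */
    void hmac_tag(const uint8_t key[32],
                  const uint8_t *msg, size_t msglen, uint8_t out[16]);

    int accept_frame(const frame_t *f, const uint8_t key[32],
                     uint32_t *last_ctr)
    {
        uint8_t expect[16];
        uint8_t buf[sizeof f->counter + sizeof f->payload];

        memcpy(buf, &f->counter, sizeof f->counter);
        memcpy(buf + sizeof f->counter, f->payload, sizeof f->payload);
        hmac_tag(key, buf, sizeof buf, expect);

        /* A production filter would use a constant-time comparison here. */
        if (memcmp(expect, f->tag, sizeof expect) != 0)
            return 0;            /* wrong or missing MAC: discard        */
        if (f->counter <= *last_ctr)
            return 0;            /* replayed or stale frame: discard     */
        *last_ctr = f->counter;
        return 1;                /* authentic and fresh: forward it      */
    }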
The choice of a RISC-V based processor makes it possible to go beyond EAL4. Digital circuits with security-oriented features (such as an MMU or MPU) are often proprietary and their internals are not publicly available for analysis, preventing them from reaching the highest Common Criteria security levels. With the Open Hardware movement, RISC-V offers the possibility to access these secure parts and to perform white-box analysis. Communication with the outside (supervision/SCADA, other CASES ECUs, networked devices) relies on an externally managed PKI and on TLS to ensure confidentiality and authenticity.
6.4 Assessment
The identified use case is typical of the problem of decentralised decision-making in critical railway infrastructures. Decisions are to be taken as close as possible to the sensor and actuator, rather than having the information from the sensor/controller travel (tens of) kilometres via cables. The demonstrator (see Figure 3) is a turnout control system:
– Node 1: turnout
– Node 2: traffic light
– Remote PC: supervision, updating
Fig. 3: Use case.
This demonstrator will allow the implementation of several functionalities necessary for the deployment of this technology:
– Communication (control) between two CASES nodes. Node 1 (turnout) makes a decision based on a perceived state and on information received; it tells Node 2 to change its state in relation to the action to be taken on the actuator linked to Node 1.
– Communication (maintenance) between a CASES node and a supervision system. CASES nodes send maintenance information (status, statistics, etc.) synchronously or asynchronously.
– Communication (software update) between a supervisory system and a CASES node. A remote PC updates the application software of the safety computer of the CASES nodes.
The development and assessment of the demonstrator are planned for 2022 and 2023. Results will be published when available.
7 Conclusion and Perspectives
This article does not directly address the distribution of railway signalling functions; it focuses on the technical and regulatory constraints resulting from the operation of decentralised safety-related devices connected to a network. The architecture presented addresses some of the regulatory constraints linked to safety and security design. From a functional point of view, the CASES ECU supports any algorithm and is not limited to the Boolean equations of PLC programming languages. It can be adapted to a wide variety of technical environments. The segregation between the safety and security parts limits the attack surface, while their interactions remain verifiable. Two independent formal methods and their related tooling are used to ensure safety and security (isolation), respectively. A significant use case is expected to demonstrate both usability and resilience through functional and attack scenarios. The security computer aims to be ready for CC EAL5+ certification at the end of the project.
However, a number of issues need to be addressed to better ensure the security of the ECU. Among them, we may cite:
– resistance to reverse engineering, with a microscopic mesh that detects any probe intrusion and triggers the deletion of all data, or the ciphering of all data stored on the ECU (code, data);
– the definition of the Software Bill of Materials and the vulnerability analysis of both the generated source code and the binaries contained in the compilation tool-chains.
Acknowledgements
The work and results described in this article were partly funded by BPI-France (Banque Publique d'Investissement) as part of the CASES project (Calculateur Sûr et Sécuritaire), selected for the call "Stratégie Cyber 2021 - Développement de technologies innovantes critiques".
References
1. ProvenCore: Towards a Verified Isolation Micro-Kernel. Zenodo (Jan 2015), https://doi.org/10.5281/zenodo.47990
2. Baro, S.: A High Availability Vital Computer for Railway Applications: Architecture & Safety Principles. In: Embedded Real Time Software and Systems (ERTS 2008), Toulouse, France (Jan 2008), https://hal.archives-ouvertes.fr/hal-02269811
3. Behm, P., Benoit, P., Faivre, A., Meynadier, J.M.: Météor: A successful application of B in a large project. In: FM'99 — Formal Methods, pp. 369–387. Springer Berlin Heidelberg (1999)
4. Bendovschi, A.: Cyber-attacks – trends, patterns and security countermeasures.
Procedia Economics and Finance 28, 24–31 (12 2015)
5. Cao, Y., Lu, H., Wen, T.: A safety computer system based on multi-sensor data
processing. Sensors 19, 818 (02 2019)
6. Cao, Y., Ma, L.C., Li, W.: Monitoring method of safety computer condition for
railway signal system. Jiaotong Yunshu Gongcheng Xuebao/Journal of Traffic and
Transportation Engineering 13, 107–112 (06 2013)
7. Essame, D., Arlat, J., Powell, D.: Padre: a protocol for asymmetric duplex redundancy. pp. 229–248 (Dec 1999)
8. Ferrari, A., ter Beek, M.H., Mazzanti, F., Basile, D., Fantechi, A., Gnesi, S., Piattino, A., Trentini, D.: Survey on formal methods and tools in railways: The ASTRail approach. In: Collart-Dutilleul, S., Lecomte, T., Romanovsky, A. (eds.) Reliability, Safety, and Security of Railway Systems. Modelling, Analysis, Verification, and Certification. pp. 226–241. Springer International Publishing, Cham (2019)
9. Forin, P.: Vital coded microprocessor principles and application for various transit systems. IFAC Proceedings Volumes 23(2), 79–84 (1990), http://www.sciencedirect.com/science/article/pii/S1474667017526531; IFAC/IFIP/IFORS Symposium on Control, Computers, Communications in Transportation, Paris, France, 19-21 September
10. Gao, Y., Cao, Y., Sun, Y., Ma, L., Hong, C., Zhang, Y.: Analysis and verification
of safety computer time constraints for train-to-train communications. Tongxin
Xuebao/Journal on Communications 39, 82–90 (12 2018)
11. Ingibergsson, J., Kraft, D., Schultz, U.: Safety computer vision rules for improved
sensor certification (04 2017)
12. Kilmer, R., McCain, H., Juberts, M., Legowik, S.: Safety computer design and implementation (Jan 1985)
13. Wang, H.f., Li, W.: Component-based safety computer of railway signal interlocking
system. vol. 1, pp. 538–541 (09 2008)
14. Zheng, S., Cao, Y., Zhang, Y., Jing, H., Hu, H.: Design and verification of general train control system's safety computer. 38, 128–134+145 (Jun 2014)