Article (PDF available)

The Power of 10: Rules for Developing Safety-Critical Code

Authors:
  • Gerard J. Holzmann

Abstract

Existing coding guidelines offer limited benefit, even for critical applications. A verifiable set of well-chosen coding rules could, however, assist in analyzing critical software components for properties that go well beyond compliance with the set of rules itself. To be effective, though, the set of rules must be small, and it must be clear enough that users can easily understand and remember it. In addition, the rules must be specific enough that users can check them thoroughly and mechanically. To put an upper bound on the number of rules, the set is restricted to no more than ten rules that together provide an effective guideline. Although such a small set of rules cannot be all-encompassing, following it can achieve measurable effects on software reliability and verifiability.
... Notably, this philosophy mirrors approaches in safety-critical domains. For instance, Holzmann's Power of Ten rules for NASA (Holzmann, 2006) likewise mandate simple control flows (no goto, limited recursion), fixed loop bounds, limited pointer use, etc., to enable static analysis and prevent hard-to-detect errors. Several of Holzmann's rules are directly analogous to PTS. ...
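Two of the rules mentioned in the excerpt above, fixed loop bounds and restricted control flow, can be illustrated with a minimal C sketch. The function name, the `MAX_ITEMS` bound, and the rejection policy are hypothetical choices for illustration; the point is that a compile-time loop bound lets a static analyzer prove termination:

```c
#include <stddef.h>

#define MAX_ITEMS 64  /* fixed upper bound, known at compile time */

/* Sum at most MAX_ITEMS readings. The loop bound is a compile-time
 * constant, so termination is trivially provable by static analysis,
 * and no goto or recursion is needed. */
int bounded_sum(const int *readings, size_t n)
{
    int sum = 0;
    if (readings == NULL || n > MAX_ITEMS) {
        return 0;  /* reject out-of-bound requests instead of looping */
    }
    for (size_t i = 0; i < n; i++) {
        sum += readings[i];
    }
    return sum;
}
```

A checker only needs the constant `MAX_ITEMS` to bound the iteration count, which is exactly the property unbounded or data-dependent loops deny it.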
... If a team rigorously applies defensive programming and standard checks (as per coding standards), a whole class of bugs is caught either at compile time or immediately at runtime with clear assertions. For example, one of NASA's ten rules (Holzmann, 2006) states that each function must check the validity of its parameters and each calling function must check return values. Code written under this rule will rarely have unhandled error conditions, which translates into less time spent diagnosing mysterious failures that propagate from non-validated inputs. ...
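The parameter-validation and return-value-checking rule described above can be sketched in C as follows. The `status_t` type and the function names are hypothetical, not from the source; the pattern is what the rule mandates:

```c
#include <stddef.h>

/* Hypothetical status type for illustrating the rule: every function
 * validates its parameters, and every caller checks the returned status. */
typedef enum { STATUS_OK = 0, STATUS_BAD_PARAM = 1 } status_t;

static status_t scale_reading(int raw, int gain, int *out)
{
    if (out == NULL || gain <= 0) {   /* callee validates its inputs */
        return STATUS_BAD_PARAM;
    }
    *out = raw * gain;
    return STATUS_OK;
}

static status_t process(int raw, int *result)
{
    if (result == NULL) {             /* validate before use */
        return STATUS_BAD_PARAM;
    }
    status_t rc = scale_reading(raw, 4, result);
    if (rc != STATUS_OK) {            /* caller checks the return value */
        return rc;
    }
    return STATUS_OK;
}
```

An invalid input is rejected at the first call boundary it crosses, so it cannot silently propagate into later computations.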
Article
Full-text available
This article examines programming as a craft rather than merely a branch of science or engineering. Tracing the historical tension from early computing, through the institutional push for software engineering discipline, to the resurgence of craftsmanship ideals, it argues that true software quality stems from the skilled application of consistent practices balanced with creative design. The article outlines how low-level standardisation of coding style reduces errors and cognitive load, while high-level software architecture remains an arena for genuine creative judgment. It critiques over-reliance on rigid processes, showing that craftsmanship-oriented cultures yield better software with less bureaucratic overhead. The article discusses the Philips Terminal Systems (PTS) coding guidelines from Danielson (2023) as a model for disciplined coding standards, demonstrating their potential to approach near-error-free software development. It highlights the importance of individual responsibility within a supportive community of practice and explains how aesthetic considerations (clarity, simplicity, coherence) are vital aspects of high-quality code. Integrating influences from computational thinking, agile principles, and craftsmanship philosophy, the article makes the case that good programming must blend strict discipline at the implementation level with freedom and ingenuity at the design level. This dual approach leverages human skill where it matters most and reduces the need for compensatory processes. Ultimately, the craft perspective elevates programming to a form of rational creation: precise, deliberate, yet deeply human in its blend of technical excellence and artistic vision.
... First, by caching canonicalization and exploiting the problem structure, the compiled solver and the compiled differentiator are faster. Second, the compiled solver and in some applications also the compiled differentiator can be deployed in embedded systems, fulfilling rules for safety-critical code [Hol06]. ...
Preprint
Full-text available
We introduce custom code generation for parametrized convex optimization problems that supports evaluating the derivative of the solution with respect to the parameters, i.e., differentiating through the optimization problem. We extend the open source code generator CVXPYgen, which itself extends CVXPY, a Python-embedded domain-specific language with a natural syntax for specifying convex optimization problems, following their mathematical description. Our extension of CVXPYgen adds a custom C implementation to differentiate the solution of a convex optimization problem with respect to its parameters, together with a Python wrapper for prototyping and desktop (non-embedded) applications. We give three representative application examples: tuning hyper-parameters in machine learning; choosing the parameters in an approximate dynamic programming (ADP) controller; and adjusting the parameters in an optimization-based financial trading engine via back-testing, i.e., simulation on historical data. While differentiating through convex optimization problems is not new, CVXPYgen is the first tool that generates custom C code for the task, and increases the computation speed by about an order of magnitude in most applications, compared to CVXPYlayers, a general-purpose tool for differentiating through convex optimization problems.
... This characteristic is especially beneficial in developing safety-critical software, such as that used onboard spacecraft, where dynamic memory allocation is often avoided to enhance reliability and safety [116]. Indeed, SHMPC can also be implemented using static memory allocation; however, this process is significantly more challenging and requires certain programmatic techniques to manage the variable horizon effectively. ...
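One common programmatic technique of the kind the excerpt alludes to is worst-case static sizing: arrays are dimensioned for the maximum horizon at compile time, and an explicit length variable tracks the active (shrinking) horizon. This is a minimal sketch under assumed names (`N_MAX`, `shmpc_plan_t`, `shmpc_step`), not the implementation from the cited work:

```c
#include <stddef.h>

#define N_MAX 20              /* maximum horizon length, fixed at build time */

typedef struct {
    double u[N_MAX];          /* statically allocated control sequence */
    size_t horizon;           /* active (shrinking) horizon, <= N_MAX */
} shmpc_plan_t;

/* Fill the active part of the plan; unused slots are never touched,
 * so no heap allocation or resizing is ever needed. Returns 0 on
 * success, -1 when the plan is invalid or the horizon is exhausted. */
int shmpc_step(shmpc_plan_t *plan)
{
    if (plan == NULL || plan->horizon == 0 || plan->horizon > N_MAX) {
        return -1;
    }
    for (size_t k = 0; k < plan->horizon; k++) {
        plan->u[k] = 0.0;     /* placeholder for the solver's output */
    }
    plan->horizon--;          /* horizon shrinks after each step */
    return 0;
}
```

The memory footprint is fixed at `N_MAX` regardless of the current horizon, which is what makes the scheme compatible with no-dynamic-allocation coding rules.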
Thesis
Full-text available
Modern satellite missions increasingly rely on formations of smaller, cost-effective satellites working in coordination to achieve objectives traditionally handled by a single, complex spacecraft. This paradigm shift, driven by advancements in technology and the demand for robust and flexible systems, has underscored the need for innovative solutions to guidance, navigation, and control challenges specific to satellite formations. This thesis develops a comprehensive toolbox tailored to the Triton-X micro-satellite platform, enabling autonomous formation flying in low Earth orbit while addressing navigation and control challenges associated with underactuated propulsion systems and limited navigation capabilities. In the navigation domain, this research focuses on the problem of relative navigation for widely separated satellites equipped with single-frequency global navigation satellite system receivers. An extended Kalman filter algorithm is developed, incorporating a bi-linear ionospheric model to mitigate ionospheric errors that can significantly degrade measurement accuracy over large inter-satellite distances. Additionally, a carefully selected set of state variables and observables optimizes the filter's performance for the considered problem. For the guidance layer, the satellite formation reconfiguration problem is formulated as a series of convex optimization problems, including quadratic programming, second-order cone programming, and linear programming formulations. These formulations balance fuel efficiency and computational feasibility while maintaining the same constraints. The minimum-thrust constraint, stemming from hardware limitations, is approximated by an affine relaxation, for which an acceleration pruning algorithm is developed to enhance the problem's feasibility.
Furthermore, centralized guidance strategies are extended to distributed frameworks, addressing scalability issues for large satellite formations by achieving linear computational growth as the number of satellites increases. The control strategies introduced in this thesis integrate the most efficient guidance solutions into two model predictive control frameworks: the shrinking-horizon model predictive control and the fixed-horizon model predictive control. These strategies enable closed-loop real-time operation, ensuring continuous adjustments to control inputs based on sensor feedback. The fixed-horizon strategy offers computational stability and simplicity, while the shrinking-horizon strategy excels in adapting to disturbance-rich environments. Finally, the toolbox is extended to include absolute orbit maintenance capabilities via time-optimal maneuvers based on formation flying techniques. A nonlinear programming-based model predictive control framework addresses the coupled dynamics of attitude and orbital control, enabling efficient maneuvering with thrust applied during attitude slews. This work provides a robust and versatile guidance, navigation, and control solution for underactuated satellite formations, enabling autonomous, fuel-optimal operations for future missions in low Earth orbit.
... By keeping the horizon length fixed, the size of the recurrent guidance problem is also fixed, which simplifies the writing/generation of embedded software without the need for dynamic memory allocation. This characteristic is especially beneficial in developing safety-critical software, such as that used onboard spacecraft, where dynamic memory allocation is often avoided to enhance reliability and safety [36]. Indeed, SHMPC can also be implemented using static memory allocation; however, this process is significantly more challenging and requires certain programmatic techniques to manage the variable horizon effectively. ...
Preprint
Full-text available
This study presents autonomous guidance and control strategies for the purpose of reconfiguring close-range multi-satellite formations. The formation under consideration includes N under-actuated deputy satellites and an uncontrolled virtual or physical chief spacecraft. The guidance problem is formulated as a trajectory optimization problem that incorporates typical dynamical and physical constraints, alongside a minimum acceleration threshold. This latter constraint arises from the physical limitations of the adopted low-thrust technology, which is commonly employed for precise, close-range relative orbital maneuvers. The guidance and control problem is addressed in two frameworks: centralized and distributed. The centralized approach provides a fuel-optimal solution, but it is practical only for formations with a small number of deputies. The distributed approach is more scalable but yields sub-optimal solutions. In the centralized framework, the chief is a physical satellite responsible for all calculations, while in the distributed framework, the chief is treated as a virtual point mass orbiting the Earth, and each deputy performs its own guidance and control calculations onboard. The study emphasizes the spaceborne implementation of the closed-loop control system, aiming for a reliable and automated solution to the optimal control problem. To this end, the risk of infeasibility is mitigated by first identifying the constraints that pose a potential threat of infeasibility, then properly softening them. Two Model Predictive Control architectures are implemented and compared, namely, shrinking-horizon and fixed-horizon schemes. Performances, in terms of fuel expenditure and achieved control accuracy, are analyzed on typical close-range reconfigurations requested by Earth observation missions and are compared against different implementations proposed in the literature.
... In 2006, Gerard Holzmann [21], a leading specialist in the Laboratory for Reliable Software (LaRS) at NASA's Jet Propulsion Laboratory, formulated 10 rules for creating reliable software, distilled from many years of practice in developing mission-critical software. These rules are similar to the classic principles of structured programming devised in the 1960s, and are oriented toward having the programmer produce source code that is then subjected to automatic analysis. ...
Article
Full-text available
Program quality is commonly characterized by the number of errors per 1000 lines of code. This figure is obtained by regression analysis of the number of errors found in successive versions of the code, followed by extrapolation into the distant future. This procedure is very labor-intensive even for large companies, and verifying the reliability of such an estimate is usually quite difficult for ordinary users because the underlying data are unavailable. Various methods exist for estimating the number of errors in a program, for example the models of Shooman, Musa, La Padula, Jelinski-Moranda, and Schick-Wolverton, the transition-probability model, Mills' statistical model, the simple intuitive model, and the models of Lipow, Corcoran, Bernoulli, Nelson, and so on. One often deals with programs that formally contain no errors, yet whose quality is not obvious at first glance. This article proposes a new method for quantitatively assessing the quality of programs written in Perl. The method identifies places in programs where errors may have been made. Unlike existing algorithms, it assesses the quality of a program by analyzing the style in which its code is written. The proposed quality-assessment method is applicable to any open-source program written in a high-level algorithmic language (Python, Perl, PHP, etc.), including for comparing the quality of, and choosing among, programs that solve the same task.
... Many of these attributes of flight software development are qualitative rather than quantitative in nature and refer to design pillars such as robustness, modularity, and autonomy. There are, however, many sources of design rules [9] [10], as well as industry standards for flight software [11] [12] that act as guidelines for the development of flight software. Before launch, professional software is often subjected to strenuous reliability testing performed by external testing facilities to verify that specifications set out by launchers, clients and regulatory bodies are met. ...
Article
Full-text available
The drastic rise of CubeSat missions in recent years has forced satellite developers to deliver satellites at high speeds. The complexity of flight software for missions, however, leaves it susceptible to programmatic failure. To develop the DockSat mission in an acceptable timeframe, this article proposes to utilize digital twins of satellite components to allow software development and complex testing at earlier stages of the mission development cycle where hardware and experimental setups are not available.
Article
As traditional space-grade computing systems struggle to meet the increasing computational demands of modern space missions, RISC-V emerges as a promising alternative due to its open-source and highly customizable nature. However, the extensive hardware customization options in RISC-V introduce complexity in validation, making it challenging to ensure system reliability. This paper introduces a robust methodology for validating RISC-V-based systems under accelerated radiation beams, focusing on test uptime, leveraging Commercial Off-The-Shelf (COTS) FPGA devices, which offer flexibility and cost-effectiveness, to enable concurrent hardware and software development. We demonstrate how our methodology offers a comprehensive approach for testing heterogeneous systems on FPGAs, balancing thorough integration with cost-efficiency and test robustness. During our experiments with accelerated neutrons to assess the resilience of RISC-V cores, our approach guaranteed the correct delivery of 100% of the packages, while minimizing system downtime during radiation testing by reducing the Test Fixture SEFI cross-section.
Preprint
We need ways to improve code quality. Programmers have different levels of tenure and experience. Standards and programming languages change, and we are forced to re-use legacy code with minimal revision. Programmers develop habits and can be slow to adopt new technologies that would simplify the code or improve performance. We rolled out a customized code review and pair programming process to address these issues. The paper discusses the improvement of mandatory code review and pair programming as practiced in commercial software development, and proposes effective approaches to customizing code review and pair programming so as to avoid the pitfalls while keeping the benefits.