This post gives a generic description of the S3-based formal safety verification workflows. It starts with a short description of the typical safety-critical system development processes used in the industry. The next two sections give, respectively, a brief introduction to the S3 Modeling Languages and a description of the development of the Safety Specification against which the system is to be verified. The rest of the post presents the S3 workflows, together with the architecture and techniques used to ensure the trustworthiness of the solution.
This post is the third of a series of four. You may also be interested in the two previous posts:
Even though this document focuses on software-based safety-critical systems, and specifically on the functional code of these systems, S3 solutions may also be applied to other kinds of systems, implemented using electro-mechanical devices such as relays, or IEC-61131 logic to be run on a PLC, or even implemented as a specific FPGA. The development processes of these systems share a number of common features that may be dictated by standards (EN 50128 or DO 178C), good practices, how-to lore, economic concerns, or historical reasons, and more generally a mix of a number of these. This accounts for the numerous alternative processes and technologies used in the industry to develop critical systems. The intent of this section is neither to fully describe these processes, nor to be exhaustive in their diversity, but rather to give a high-level simplified vision, with only the level of detail needed to describe the interactions between the S3 tools and these processes.
The previous figure gives an overly simplified view of three alternative development processes. Every process starts with some kind of informal specification, requirement, or even design document, most commonly in natural language. In the first alternative, the development team creates a formal design of the system in a language whose syntax and semantics are unambiguously defined (i.e. a formal language). Languages such as Simulink© or Scade© can serve this purpose. From this formal design, the code of the system is obtained either by a manual development performed by the development team, or, more commonly, by automatic code generation, or compilation to another formalism. In the second alternative, the code is directly developed from the informal specification without passing through a formalization of the design (the copy of the obtained code is just an artificial way of obtaining a unified interface of the different process alternatives with the S3 workflow presented in the following sections, see in particular section Specific S3 Workflows). The last alternative process is an extension of the second alternative, commonly used for safety, availability, and/or certification purposes, in which two independent implementations are simultaneously developed in a diversified manner (i.e. with different teams possibly using different coding languages) from the informal specification or design. They are then either loaded on a hardware platform which runs both and contains some sort of output comparison or voting device, or merely compared to show that they are identical.
The first alternative will be used in the following sections to illustrate the use of the S3 workflow. The usage for the other alternatives is fairly similar, but some specificities are discussed in section Specific S3 Workflows.
When designing a formal verification solution targeted at industrial users, it is important to offer a simple and clear language to express safety specifications. However, it is also important to have a rich and expressive language simplifying the task of developing specialized translators from system designs or code coming from various domains. The HLL was created to answer both of these needs and keeps evolving to address new challenging problems.
HLL is similar to the Lustre language. It is a data-flow, synchronous, declarative language with formally defined syntax and semantics. It manipulates stream objects, which are typed variables valued over an infinite sequence of instants representing discrete time.
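To give an intuition of the data-flow, synchronous view (this is a Python sketch of the semantics, not actual HLL syntax; the node and variable names are illustrative), consider a toy node that counts the rising edges of a boolean input. Each variable takes one value per cycle, and memories carry last cycle's value into the current one:

```python
# Hedged sketch (not HLL itself): a synchronous data-flow step function.
# A "stream" is realised as one value per cycle; a memory carries the
# previous cycle's value into the current one, as in Lustre-like languages.

def run(cycles, inputs):
    """Simulate a toy node counting rising edges of a boolean input."""
    trace = []
    count = 0          # memory stream, initial value 0
    prev_tick = False  # memory of the previous input value
    for t in range(cycles):
        tick = inputs[t]
        rising = tick and not prev_tick   # combinational expression
        count = count + 1 if rising else count
        trace.append(count)
        prev_tick = tick                  # memory updated for next cycle
    return trace
```

Running `run(5, [False, True, True, False, True])` yields the stream `[0, 1, 1, 1, 2]`: the count increases exactly at the two rising edges.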
HLL is organized in sections:
- types: standard scalar and structured types, functions, predicates, hierarchical enumerations,…
- expressions: a broad set of operators on these types, quantifications
- time: memories, and the basic temporal operators
- genericity: sorts, predicates, quantification
- concatenation
- constraints and POs
- sHLL
In a typical S3 solution, the safety specification is expressed in HLL, bringing simplicity, clarity, and overall readability in the formalization of both the safety properties, expressed as proof obligations (POs), and the environment hypotheses, expressed as Constraints.
A number of good practices shall drive the formalization of the safety specification. It is usually desirable to interpose an insulation layer between the model of the system and the safety specification, renaming inputs, outputs, and internal variables in order to protect the safety specification from potential modifications in the naming convention of the system itself. HLL namespaces are a useful way of supporting such an insulation. Integration of the safety specification and the system is obtained by using these renamed system variables in the safety specification. As far as possible, higher level concepts describing the system or its environment shall be captured in an intermediate layer of helper definitions to maximize both clarity and reuse (e.g. plane is landing, train is approaching a signal, washing machine is in spin cycle,… ).
Different “levels” of safety specifications exist. At the lowest level, loosely related to safety, this specification can be automatically generated based on the code structure and semantics. Such a specification makes it possible to perform analyses ensuring the absence of undefined behaviors in the code (e.g. division by zero, out-of-bounds array access, use of uninitialized variables,… ) or detecting dead or otherwise unreachable code.
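The generation of such low-level POs can be sketched as a walk over the code structure (a minimal Python sketch; the AST shape and PO names are illustrative, not the actual S3 representation):

```python
# Hedged sketch: deriving proof obligations (POs) from code structure.
# For each division and array access found in a toy expression AST, emit a
# PO that must hold in every reachable state.

def collect_pos(expr):
    """expr: nested tuples like ('div', a, b) or ('index', arr, i, size)."""
    pos = []
    if not isinstance(expr, tuple):
        return pos
    op = expr[0]
    if op == 'div':
        pos.append(('nonzero', expr[2]))             # divisor must be non-zero
    elif op == 'index':
        pos.append(('in_bounds', expr[2], expr[3]))  # 0 <= i < size must hold
    for sub in expr[1:]:
        pos.extend(collect_pos(sub))                 # recurse into sub-expressions
    return pos
```

For `a[i] / n`, encoded as `('div', ('index', 'a', 'i', 8), 'n')`, this emits one PO for the divisor and one for the array index; each PO is then discharged by the model checker like any other property.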
At an intermediate level, the safety specification can be derived from some level of system functional specification (i.e. requirements) to show that the code conforms to this functional specification. The closer the functional specification and the code, the simpler it will be to express and prove the properties. The model of the environment needed to perform the proof will also be simplified, or, most often, unnecessary. The same applies to lemmas needed to achieve the proofs. All in all, the development of the safety specification is thus rather simple.
However, such a safety specification can “only” be used to check that the code respects its functional specification. It does not give any insight on the correctness of this functional specification, nor on its ability to ensure the overall safety or mission of the system. Another drawback is that, when the functional specification is very close to the code, the expression of the properties may become very similar to that of the code, and the proof becomes a mere verification of the equivalence of two diversified developments. Pushed to the limit, this may become a complete validation strategy based on proving the equivalence of two diversified implementations (see third alternative of the figure about development processes), one of which could be expressed using HLL.
When based on the specification of a system, the properties usually contain references to some of the code internal variables in their expressions. The system is considered as a white-box in which the developer of the safety specification can look. Here it is important that every internal variable used in some properties to characterize the way another variable or output shall be computed is itself characterized by other properties. This shall be applied iteratively to express an uninterrupted chain of characterizations, linking the inputs of the system to its outputs.
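This "uninterrupted chain of characterizations" requirement lends itself to a simple mechanical check (a Python sketch under illustrative naming assumptions: each property is reduced to the variable it characterizes and the variables it references):

```python
# Hedged sketch: checking the chain of characterizations. Every internal
# variable referenced by a property must itself be characterized by some
# property, down to the system inputs.

def uncharacterized(properties, inputs):
    """properties: {characterized_var: set(referenced_vars)}.
    Returns variables referenced by properties but neither characterized
    nor system inputs, i.e. the breaks in the chain."""
    referenced = set().union(*properties.values()) if properties else set()
    return referenced - set(properties) - set(inputs)
```

With `{'out': {'mid'}, 'mid': {'in1'}}` and input `in1`, the chain is complete; dropping the property on `mid` reports it as a break.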
When applicable, better results are usually achieved by taking a safety-related point of view in the design of the safety specification. Performing a hazard analysis of the system makes it possible to identify the safety hazards leading to an accident. Refining these safety hazards makes it possible to develop a set of safety properties which, when achieved by the system, ensure its safety. Being obtained using a method different from that used to develop the system, such properties are able to find problems in the code, but also potential flaws in the specification preventing it from ensuring the safety of the system. Once the code has been proved to respect such a safety specification, this may be used as a convincing argument in the demonstration of the overall safety of the system.
Depending on the level of refinement of the safety hazards, several levels of safety properties may be expressed. As a general rule, the closer to the safety hazards, the easier the modeling of the safety properties, and the more convincing the argument that they imply the overall safety of the system. However, using high-level properties also complicates the environment model, and makes these safety properties harder to prove.
Contrary to the specification-based approach, here the system is considered as a black-box for the safety specification, and the safety properties are expressed using primarily the inputs and outputs of the system. No chain of characterization is needed. Still, the lemmas may have access to the internal variables of the code. However, this does not break the reasoning, because lemmas are always proved before being used: they are not part of the safety specification, just a helper for the model checker.
This approach may also be applied to address the analysis of the security of a system, and to a lesser extent its ability to achieve its missions in the general sense.
Finally, for systems developed in a normative environment or for which trustworthiness of the safety verification is required (see preliminary post), the assessment of the correctness and completeness of the safety specification is usually achieved by reviews performed by an independent team. In this sense, having a clear, simple, and readable safety specification, which unambiguously implies the safety of the system at the highest possible level, is clearly a plus. A drawback lies in the complexity of the environment model whose constraints have to be thoroughly checked for soundness, correctness, and completeness. In particular, it shall be verified that this modeling does not over-constrain the input scenarios (i.e. the constraints are fulfilled by the real environment). However, this is usually a simpler task than checking completeness of a low-level safety specification.
When designing an S3 based safety verification workflow, the first task is to build the part of the workflow needed to perform the analyses, without considering trustworthiness or certification. This workflow will be used to develop, debug, and mature both the safety specification and the system design. It will also serve as the tool-chain supporting the search for induction strengthening lemmas, and it will be the basis on which the certifiable tool-chain will be built (see section Certifiable S3 Workflow).
Using a specialized translator from the design formal language to HLL, the solution creates a Formal Model of the system design preserving its execution semantics. In an S3 solution, models are commonly bit-precise (exact); however, some user-defined domain-specific abstractions may also be used. This model is concatenated to the safety specification. The connection between the system model and the safety specification is performed through the use of the same variables in the two parts (possibly renamed in an insulation/integration layer). The resulting HLL file contains the safety specification connected to the system model, ready for analysis.
The S3 model checker does not work directly at the HLL level, and a further model transformation is needed: a translation, called expansion, from HLL to LLL, a restricted subset of the HLL language containing only boolean streams and a very limited number of core boolean operators (negation, implication, equivalence, and next, a time-related operator). An expander is used to inline HLL blocks, flatten structured types, expand quantifiers, and bit-blast arithmetic, until the HLL model is transformed into a semantically equivalent LLL model.
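Bit-blasting is the step that turns word-level arithmetic into purely boolean operations. A minimal Python sketch of the idea on integer addition (the width and LSB-first bit order are illustrative choices, not S3's actual encoding):

```python
# Hedged sketch of bit-blasting: expanding an integer addition into pure
# boolean operations, as an expander does when lowering HLL to LLL.

def blast_add(a_bits, b_bits):
    """Ripple-carry addition on lists of booleans (LSB first)."""
    out, carry = [], False
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)                    # sum bit
        carry = (a and b) or (carry and (a ^ b))     # carry bit
    return out  # same width; final carry dropped (modular arithmetic)

def to_bits(n, width):
    return [bool((n >> i) & 1) for i in range(width)]

def from_bits(bits):
    return sum(1 << i for i, b in enumerate(bits) if b)
```

For instance, `from_bits(blast_add(to_bits(5, 8), to_bits(7, 8)))` gives `12`, and the encoding is modular: `200 + 100` on 8 bits gives `44`, matching machine arithmetic.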
This LLL model can then be handed over to S3 for analysis. When S3 reports that a property has been proved (using induction), nothing remains to be done. However, when it reports a falsification, or a non-inductive PO, it also generates a counter-example, or a step-counter-example (see preliminary post). In both cases, the counter-example is given on the LLL model. It thus needs to be translated back to the HLL model for investigation (a translation performed by the cex_simulator tool, not shown in the figure). Counter-examples can be translated to a human-understandable format, or to scenarios to be run inside a simulator of the system (on the design, on the generated code in a desktop simulator, or even in a hardware-in-the-loop simulator). This is often a good way to communicate potential problems to the designers. For both types of counter-examples, another alternative is to use the why tool, which loads an LLL counter-example (or step-counter-example) and guides the user from the violated safety property down to the root of the problem, displaying the values of the traversed HLL expressions and using powerful heuristics to choose which part of an expression shall be followed.
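The distinction between a counter-example and a step-counter-example can be made concrete on a toy explicit-state system (a Python sketch of 1-induction; S3 itself works symbolically with SAT, so this is only the proof scheme, not the technology):

```python
# Hedged sketch of proof by (1-)induction on an explicit finite-state
# system: the property must hold in all initial states (base case) and be
# preserved by every transition from ANY property-satisfying state (step
# case). A failed base case yields a counter-example; a failed step case
# yields a step-counter-example, which may start from an unreachable state.

def check_induction(init, next_states, prop, universe):
    for s in init:                       # base case
        if not prop(s):
            return ('counter-example', s)
    for s in universe:                   # step case
        if prop(s):
            for t in next_states(s):
                if not prop(t):
                    return ('step-counter-example', (s, t))
    return ('proved', None)
```

On a modulo-4 counter with `prop = s < 3`, the base case passes but the step case fails on the transition 2 → 3, producing a step-counter-example; strengthening or weakening lemmas play exactly the role of making such step cases vacuous.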
The uncertified solution described in the previous section is used iteratively to correct the potential bugs in the safety specification and in the system, and to search and express lemmas until all safety properties are proved. At this point, the problem of the confidence that can be granted to the formal safety verification solution shall be addressed. In particular, this is mandated by most certification standards when using such a solution in a verification process.
Starting from the uncertified workflow in the previous section, a hazard analysis is performed to build a comprehensive list of the different weaknesses through which a bug in a tool might lead to the erroneous proof of an invalid safety property. Each of these potential weaknesses is addressed by applying specific and adapted protection techniques.
The first identified weakness is of course that of the model checker incorrectly reporting some property to be proved. This can arise if S3 reports as proved a property that it did not prove, or for which it derived an incorrect proof, or if it reasoned about a system different from the one contained in the given LLL file (e.g. because of a bug in its parser). Ensuring the correctness of a complete model checker using a standard mix of peer-reviews and tests is a very difficult task. A more pragmatic approach is to use an a posteriori technique called proof logging/proof checking. In this technique, S3 is enhanced with the capability, when it finds the proof of a property, of logging this proof in a file which, starting from a reformulation of the analyzed LLL file, derives the proof as a deduction tree in a formally specified resolution-based proof system. After the S3 run, a specialized tool, the proof-checker, checks that the reformulation corresponds to the LLL file given to S3, that the proofs are made of correct deductions, and that all properties have been proved.
In SAT-based model checking, finding a proof is a complex task relying on error-prone algorithms, optimizations, and heuristics. However, checking a proof is fairly simple and can be done with a high degree of confidence.
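The asymmetry between finding and checking a proof can be illustrated with a tiny resolution proof checker (a hedged Python sketch; the clause and log formats are illustrative, not S3's actual proof format):

```python
# Hedged sketch of proof checking: verify that each step of a logged
# resolution proof is a correct resolvent of two earlier clauses, and that
# the proof ends in the empty clause (i.e. the negated goal is refuted).

def resolve(c1, c2):
    """Resolvent of two clauses (sets of integer literals), or None if
    they do not clash on exactly one variable."""
    clashes = [l for l in c1 if -l in c2]
    if len(clashes) != 1:
        return None
    pivot = clashes[0]
    return (c1 - {pivot}) | (c2 - {-pivot})

def check_proof(axioms, steps):
    """steps: list of (i, j) index pairs into the growing clause list."""
    clauses = [frozenset(c) for c in axioms]
    for i, j in steps:
        r = resolve(clauses[i], clauses[j])
        if r is None:
            return False                 # illegal deduction: reject proof
        clauses.append(frozenset(r))
    return clauses[-1] == frozenset()    # proof must derive the empty clause
```

The checker only replays deductions; it needs none of the heuristics or optimizations of the solver that produced the log, which is why it can be made small and highly trustworthy.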
The following figure shows a synoptic view of the proof logging/proof checking mechanism. It is used for every run of the model checker (except when used in the uncertified workflow).
The next identified weakness is in the translation of the system design to HLL. An error in this translation could easily lead to an erroneous proof because the properties are analyzed on an incorrect model. The following figure gives a synoptic view of the solution used to protect against this weakness.
The translation is protected using diversification. The translation is done twice, by diversified tools, developed by independent teams, using different programming languages and paradigms. Here, to increase confidence, the sources are also diversified by considering, on the one hand the translation of the system design, and on the other the translation of the system code. The obtained HLL models are then both expanded to LLL. The expansion is a complex model transformation in which an error could also lead to an incorrect model (i.e. when the semantics is not preserved between the HLL and the LLL models), and thus to an erroneous proof. To protect this transformation, the expansion is also diversified using two independent expanders. For each model transformation (i.e. translations and expansion), a precise Logical Foundation Document, describing the syntax and semantics of the involved languages together with the transformation principles, is used to specify this transformation, prove its correctness and soundness, and develop test suites. This document is extensively reviewed by an independent expert team.
A second technique, called sequential equivalence checking, is used to compare the resulting LLL models. A tool, the equivalence-constructor, takes the pair of LLL models and creates a third one that encodes, as an LLL problem, the fact that the two models, fed with the same input sequences, produce the same outputs. The resulting equivalence LLL file is given to the S3 model checker to prove, using induction, that the equivalence holds. When this is the case, confidence is gained that the various model transformations (i.e. translations and expansions) have been correctly performed. There is indeed very little chance that a bug in one translator or expander is “matched” by another bug in the other tool-chain.
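The construction (often called a miter) amounts to running the two models in lock-step and asserting that their outputs always agree. A hedged Python sketch, using exhaustive simulation on toy step functions instead of induction (the `(state, input) -> (state, output)` shape is an illustrative assumption):

```python
# Hedged sketch of sequential equivalence checking: drive both models with
# the same input sequences and report the first diverging sequence, if any.

from itertools import product

def equivalent(step_a, s0_a, step_b, s0_b, input_vals, depth):
    for seq in product(input_vals, repeat=depth):
        sa, sb = s0_a, s0_b
        for x in seq:
            sa, out_a = step_a(sa, x)    # lock-step execution of model A
            sb, out_b = step_b(sb, x)    # ... and of model B
            if out_a != out_b:
                return False, list(seq)  # counter-example input sequence
    return True, None
```

Two different implementations of input parity (one with XOR, one with a modular counter) are reported equivalent, while a buggy variant is caught with a diverging input sequence, exactly the information a designer needs.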
To make the equivalence system inductive and simplify the model checker's task, an additional tool is needed. The mapper attempts to create as complete a list as possible of mapping points between the design and the code, i.e. variables or expressions that correspond in the two models. After having been proved to be indeed equivalent, these points are used to help prove the equivalence of the overall models. These pairs of mapping points are thus used in a way similar to lemmas for safety properties. As the potential equivalences output by the mapper are going to be proved by the model checker before being used, this tool does not need to be protected. It can take any source to perform its work, in particular the models themselves, and any traceability information produced by the code generator.
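One way such a mapper may propose candidates (a speculative Python sketch, not a description of the actual S3 mapper) is random simulation: signal pairs that agree on every simulated run survive as candidates, and wrong guesses are harmless because each candidate is proved by the model checker before use:

```python
# Hedged sketch: propose mapping-point candidates by random simulation.
# sim_x(inputs) -> {signal_name: value}; a pair survives only if the two
# signals agreed on every trial. All names are illustrative.

import random

def candidate_mappings(sim_a, sim_b, input_gen, trials):
    candidates = None
    for _ in range(trials):
        inputs = input_gen()
        va, vb = sim_a(inputs), sim_b(inputs)
        agree = {(na, nb) for na in va for nb in vb if va[na] == vb[nb]}
        # keep only pairs that agreed on all trials so far
        candidates = agree if candidates is None else candidates & agree
    return candidates or set()
```

With `sim_a = lambda i: {'x': i, 'y': i + 1}` and `sim_b = lambda i: {'u': i}`, only the pair `('x', 'u')` survives, since `y` and `u` never coincide.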
Finally, sometimes an analysis of the produced LLL model of the code is also necessary to ensure some aspects of the correctness of the code modeling and translation. Some languages do not have a fully defined semantics and the modeling adds automatic POs to the resulting model to verify that none of these undefined behaviors (such as accessing an uninitialized variable, reading outside an array,… ) are triggered by the code. The modeling is thus considered correct, with respect to these undefined behaviors, if and only if these POs hold.
Incidentally, the proof that the code is equivalent to the design is an interesting result in itself. First, it shows that the code generator, or compiler, has produced correct code 2, and thus makes it possible to remove the verification activities usually performed to assess this correctness. Secondly, it shows that the safety properties that have been shown to hold on the design also hold on the code.
The last identified weakness is in the expansion of the overall system composed of the safety specification and the system model. The expansion of this element corresponds to the connection of the two parts through named variables, and the model transformation to LLL. Once again, an error in this transformation can lead to an incorrect model and thus an incorrect proof. As previously, this transformation is protected (see following figure) by expanding the system twice with independent expanders, and proving that the obtained pair of LLL models is equivalent. Finally, one of the models is analyzed to prove that the safety specification holds on the system design.
The overall solution synoptic, assembling the different protection mechanisms described previously, is presented in the following figure. It can be completed with a number of CRCs to ensure that the files read and produced by the solution are not degraded during the various transformations and analyses.
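Guarding a file hand-off between two tools with a CRC is straightforward; a minimal Python sketch using the standard `zlib.crc32` (the hand-off protocol itself is an illustrative assumption):

```python
# Hedged sketch: the producing tool logs a CRC of the file it wrote; the
# consuming tool recomputes the CRC before use, so any degradation of the
# file between the two tools is detected.

import zlib

def crc_of(data: bytes) -> int:
    return zlib.crc32(data) & 0xffffffff

def check_handoff(produced: bytes, received: bytes, logged_crc: int) -> bool:
    return logged_crc == crc_of(produced) and crc_of(received) == logged_crc
```

This does not protect against a tool computing wrong content, only against corruption in transit or storage; the semantic protections are the diversification and equivalence proofs described above.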
This workflow aims at automating the verification process and eliminating as much as possible the repetitive, time-consuming, expensive, incomplete, and error-prone human verification activities usually devoted to this task.
The expanders, equivalence-constructor, and the S3 model checker are generic tools proven in use on industrial-size systems. Some generic translators for design and coding languages also exist, for sHLL (see section Modeling Languages), Scade©, C, and Ada. For other domain- or customer-specific languages, custom translators are developed. These translators produce a bit-precise, exact HLL model of the system. The various tools of the S3 toolset have been developed using a process involving a complete set of specification, design, and validation documentation, together with the necessary quality and verification activities. This process, combined with the described protection mechanisms, enables a formal S3 verification solution to be used as a “verification tool”.
When applied to the alternative development processes described in section Typical Development Workflows, the S3 workflows remain fundamentally the same. The design translator can be replaced by a second diversified code translator, which can be from another coding language (e.g. the third alternative generating the two implementations in two different languages), or for the same language with diversified sources (e.g. the third alternative generating the two implementations in the same language), or from a common source but with diversified translators (e.g. the second alternative).
Depending on the coding and design languages, the S3 run to prove the absence of undefined behaviors may be needed or not, on the design and/or on the code.
It is sometimes possible to connect the safety specification (including the lemmas) to both diversified models (e.g. the second alternative, or when the diversified sources are close enough). In this situation, a simpler version of the workflow, mutualizing the diversified translations, expansions, and the equivalence checks, can be used.
All in all, the S3 formal safety verification solution is not to be seen as an off-the-shelf integrated tool, but rather as a collection of generic tools together with protection methodologies enabling Systerel to build custom and efficient formal safety verification solutions targeted at specific missions and system development processes.
The following post will expand on the specific case of instantiated systems.
For commercial information and contact see the S3 product page.