9.6. Promela to C Refinement#
Promela models are more abstract than concrete C code. A rigorous link, known as a refinement, needs to be established from Promela to C. This is composed of two parts: manual annotations in the Promela model to make its behavior easy to identify and parse; and a refinement defined as a YAML file that maps from annotations to corresponding C code. A single refinement YAML file is associated with each Promela model.
9.6.1. Model Annotations#
Promela printf statements are used in the models to output information that is used by spin2test to generate test code. This information is used to look up keys in a YAML file that defines the mapping to C code. It uses a simple format that is easy to parse and process, and is also designed to be easily understood by a human reader. This is important because any V&V process will require this information to be carefully assessed.
9.6.1.1. Annotation Syntax#
Format, where N ≥ 0:
@@@ <pid> <KEYWORD> <parameter1> ... <parameterN>
The leading @@@ is a marker that makes it easy to filter this information out from other SPIN output.
Parameters take the following forms:

<pid> - Promela Process Id of the proctype generating the annotation
<word> - chunk of text containing no white space
<name> - Promela variable/structure/constant identifier
<type> - Promela type identifier
<tid> - OS Task/Thread/Process Id
_ - unused argument (within containers)
Each <KEYWORD> is associated with specific forms of parameters:
LOG <word1> ... <wordN>
NAME <name>
INIT
DEF <name> <value>
DECL <type> <name> [<value>]
DCLARRAY <type> <name> <value>
TASK <name>
SIGNAL <name> <value>
WAIT <name> <value>
STATE <tid> <name>
SCALAR (<name>|_) [<index>] <value>
PTR <name> <value>
STRUCT <name>
SEQ <name>
END <name>
CALL <name> <value1> ... <valueN>
9.6.2. Annotation Lookup#
The way that code is generated depends on the keyword in the annotation. In particular, the keyword determines how, or if, the YAML refinement file is looked up.
Direct Output - no lookup (LOG, DEF)
Keyword Refinement - lookup the <KEYWORD> (NAME, INIT, SIGNAL, WAIT)
Name Refinement - lookup the first parameter, considered as text (TASK, DECL, DCLARRAY, PTR, CALL, SCALAR)
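The three lookup modes can be summarised in code. This is a hedged sketch: the keyword sets and the lookup_key function are illustrative, not the actual spin2test implementation:

```python
# Illustrative classification of keywords by lookup mode.
DIRECT = {"LOG", "DEF"}                                  # no lookup
BY_KEYWORD = {"NAME", "INIT", "SIGNAL", "WAIT"}          # keyword is the key
BY_NAME = {"TASK", "DECL", "DCLARRAY", "PTR", "CALL", "SCALAR"}

def lookup_key(keyword, params):
    if keyword in DIRECT:
        return None            # output is generated directly
    if keyword in BY_KEYWORD:
        return keyword         # the keyword itself is the key
    if keyword in BY_NAME:
        return params[0]       # first parameter, treated as text
    return keyword             # other keywords default to keyword lookup
```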
The same name may appear in different contexts, and the name can be extended with a suffix of the form _XXX to look up less frequent uses:

_DCL - a variable declaration
_PTR - the pointer value itself
_FSCALAR - a scalar that is a struct field
_FPTR - a pointer that is a struct field
Generally, the keyword or name is used to look up mymodel-rfn.yml to get a string result. This string typically contains substitution placeholders, as used by the Python format() method for strings. The other parameters in the annotation are then textually substituted in to produce a segment of test code.
9.6.3. Specifying Refinement#
Using the terminology of The RTEMS Test Framework, each Promela model is converted into a set of Test Cases, one for each complete scenario produced by test generation. A number of template files, tailored for each model, are used to assemble the test code sources, along with code segments for each Promela process, based on the annotations output for any given scenario.
The refinement mapping from annotations to code is defined in a YAML file that describes a Python dictionary that maps a name to some C code, with placeholders that are used to allow for substituting in actual test values.
The YAML file has entries of the following form:
entity: |
C code line1{0}
...
C code lineM{2}
The entity is a reference to an annotation concept, which can refer to key declarations, values, variables, types, API calls or model events. There can be zero or more arguments in the annotation, and any occurrence of braces enclosing a number in the C code means that the corresponding annotation argument string is substituted in (using the Python string format() method).
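Taking the generic entry above, the substitution behaviour can be sketched as follows (the refinement dictionary is shown as a plain Python dict standing in for the loaded YAML file):

```python
# The entity's code with {n} placeholders, as it would appear after
# loading the YAML refinement file.
refinement = {
    "entity": "C code line1{0}\n...\nC code lineM{2}\n",
}

# Annotation arguments are substituted positionally via str.format().
params = ["argA", "argB", "argC"]
code = refinement["entity"].format(*params)
print(code)
```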
The general pattern is to work through all the annotations in order. The code obtained by looking up the YAML file is collated into different code segments, one for each Promela process id (<pid>).
In addition to the explicit annotations generated by the Promela models, there are two implicit annotations produced by the Python refinement code. These occur when the <pid> part of a given explicit annotation differs from the one belonging to the immediately preceding annotation. This indicates a point in a test generation scenario where one task is suspended and another resumes. This generates internal annotations with keywords SUSPEND and WAKEUP, which should have entries in the refinement file providing code that ensures the corresponding RTEMS tasks in the test behave accordingly.
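The detection of these task switches can be sketched as follows (insert_switches is a hypothetical name; the real logic lives inside the Python refinement code):

```python
# Hypothetical sketch: walk (pid, annotation) pairs and inject
# SUSPEND/WAKEUP annotations whenever the pid changes, numbering
# each switch with a running counter.
def insert_switches(annotations):
    result, last_pid, count = [], None, 0
    for pid, text in annotations:
        if last_pid is not None and pid != last_pid:
            count += 1
            # SUSPEND gets (counter, oldid, newid); WAKEUP gets (counter, newid, oldid)
            result.append((last_pid, "SUSPEND {0} {1} {2}".format(count, last_pid, pid)))
            result.append((pid, "WAKEUP {0} {1} {2}".format(count, pid, last_pid)))
        result.append((pid, text))
        last_pid = pid
    return result
```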
The annotations can also be output as comments into the generated test code, making it easy to check that the parameters and the generated code are correct.

If a lookup fails, a C comment line stating the failure is output, and the translation continues regardless.
9.6.3.1. Lookup Example#
Consider the following annotation, from the Events Manager model:
@@@ 1 CALL event_send 1 2 10 sendrc
This uses Name Refinement, so a lookup with event_send as the key returns the following text:
T_log( T_NORMAL, "Calling Send(%d,%d)", mapid( ctx, {1}), {2} );
{3} = ( *ctx->send )( mapid( ctx, {1} ), {2} );
T_log( T_NORMAL, "Returned 0x%x from Send", {3} );
Arguments 1, 2, 10, and sendrc are then substituted in to get the code:
T_log( T_NORMAL, "Calling Send(%d,%d)", mapid( ctx, 2), 10 );
sendrc = ( *ctx->send )( mapid( ctx, 2 ), 10 );
T_log( T_NORMAL, "Returned 0x%x from Send", sendrc );
Given a Promela process id of 1, this code is put into a code segment for the corresponding RTEMS task.
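The whole example can be reproduced in a few lines of Python. The template string below is the looked-up text and the parameters come from the annotation; this is an illustration of the mechanism, not the actual spin2test code:

```python
# Text obtained from the event_send lookup, with format() placeholders.
template = (
    'T_log( T_NORMAL, "Calling Send(%d,%d)", mapid( ctx, {1}), {2} );\n'
    "{3} = ( *ctx->send )( mapid( ctx, {1} ), {2} );\n"
    'T_log( T_NORMAL, "Returned 0x%x from Send", {3} );'
)

# Parameters from: @@@ 1 CALL event_send 1 2 10 sendrc
params = ["1", "2", "10", "sendrc"]
print(template.format(*params))
```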
9.6.4. Annotation Refinement Guide#
This guide describes how each annotation is processed by the test generation software.
9.6.4.1. LOG#
LOG <word1> ... <wordN> (Direct Output)

Generate a call to T_log() with a message formed from the <word..> parameters. This message will appear in the test output for certain verbosity settings.
9.6.4.2. NAME#
NAME <name> (Keyword Refinement)

Looks up NAME (currently ignoring <name>) and returns the resulting text as-is as part of the code. This code should define the name of the test case, if needed.
9.6.4.3. INIT#
INIT (Keyword Refinement)

Lookup INIT and expect to obtain test initialisation code.
9.6.4.4. TASK#
TASK <name> (Name Refinement)

Lookup <name> and return the corresponding C code.
9.6.4.5. SIGNAL#
SIGNAL <value> (Keyword Refinement)

Lookup SIGNAL and return code with <value> substituted in.
9.6.4.6. WAIT#
WAIT <value> (Keyword Refinement)

Lookup WAIT and return code with <value> substituted in.
9.6.4.7. DEF#
DEF <name> <value> (Direct Output)

Output #define <name> <value>.
9.6.4.8. DECL#
DECL <type> <name> [<value>] (Name Refinement)

Lookup <name>_DCL and substitute <name> in. If <value> is present, append = <value>. Add a final semicolon. If the <pid> value is zero, prepend static.
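The DECL rule can be sketched as follows. Both refine_decl and the nptr_DCL entry are assumptions for illustration, chosen to match the traceability example later in this section:

```python
# Hypothetical sketch of the DECL refinement rule.
def refine_decl(refinement, pid, name, value=None):
    code = refinement[name + "_DCL"].format(name)  # lookup <name>_DCL
    if value is not None:
        code += " = " + value                      # optional initialiser
    code += ";"                                    # final semicolon
    if pid == 0:
        code = "static " + code                    # pid 0 gets static scope
    return code

rfn = {"nptr_DCL": "item * {0}"}  # assumed refinement entry
print(refine_decl(rfn, 0, "nptr", "NULL"))  # static item * nptr = NULL;
```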
9.6.4.9. DCLARRAY#
DCLARRAY <type> <name> <value> (Name Refinement)

Lookup <name>_DCL and substitute <name> and <value> in. If the <pid> value is zero, prepend static.
9.6.4.10. CALL#
CALL <name> <value0> .. <valueN> (Name Refinement, N < 6)

Lookup <name> and substitute all <value..> in.
9.6.4.11. STATE#
STATE <tid> <name> (Name Refinement)

Lookup <name> and substitute <tid> in.
9.6.4.12. STRUCT#
STRUCT <name>

Declares the output of the contents of variable <name>, which is itself a structure. The <name> is noted, as is the fact that a structured value is being processed. Should not occur if already processing a structure or a sequence.
9.6.4.13. SEQ#
SEQ <name>

Declares the output of the contents of array variable <name>. The <name> is noted, as is the fact that an array is being processed. Values are accumulated in a string, initialised to empty at this point. Should not occur if already processing a structure or a sequence.
9.6.4.14. PTR#
PTR <name> <value> (Name Refinement)

If not inside a STRUCT, lookup <name>_PTR. Two lines of code should be returned. If the <value> is zero, use the first line, otherwise use the second with <value> substituted in. This is required to handle NULL pointers.

If inside a STRUCT, lookup <name>_FPTR instead, and proceed in the same way.
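The two-line convention can be sketched like this (refine_ptr and the buf_PTR entry are illustrative assumptions, not real refinement content):

```python
# Hypothetical sketch of the PTR rule: the refinement entry holds two
# lines; line one handles a NULL (zero) value, line two a real pointer.
def refine_ptr(refinement, name, value, in_struct=False):
    key = name + ("_FPTR" if in_struct else "_PTR")
    null_line, nonnull_line = refinement[key].splitlines()
    return null_line if value == "0" else nonnull_line.format(value)

rfn = {"buf_PTR": "ptr = NULL;\nptr = &objs[{0}];"}  # assumed entry
print(refine_ptr(rfn, "buf", "0"))   # ptr = NULL;
print(refine_ptr(rfn, "buf", "3"))   # ptr = &objs[3];
```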
9.6.4.15. SCALAR#
There are quite a few variations here.

SCALAR _ <value>

Should only be used inside a SEQ. A space and <value> are appended to the string being accumulated by this SEQ.

SCALAR <name> <value> (Name Refinement)

If not inside a STRUCT, lookup <name> and substitute <value> into the resulting code. If inside a STRUCT, lookup <name>_FSCALAR and substitute the saved STRUCT name and <value> into the resulting code. This should not be used inside a SEQ.

SCALAR <name> <index> <value> (Name Refinement)

If not inside a STRUCT, lookup <name> and substitute <index> and <value> into the resulting code. If inside a STRUCT, lookup <name>_FSCALAR and substitute the saved STRUCT name and <value> into the resulting code. This should not be used inside a SEQ.
9.6.4.16. END#
END <name>

If inside a STRUCT, terminates the processing of a structured variable.

If inside a SEQ, lookup <name>_SEQ. For each line of code obtained, substitute in the saved sequence string. Terminates the processing of a sequence/array variable.

This should not be encountered outside of a STRUCT or SEQ.
9.6.4.17. SUSPEND and WAKEUP#
A change of Promela process id from oldid to newid has been found. Increment a counter that tracks the number of such changes.

SUSPEND (Keyword Refinement)

Lookup SUSPEND and substitute in the counter value, oldid and newid.

WAKEUP (Keyword Refinement)

Lookup WAKEUP and substitute in the counter value, newid and oldid.
9.6.5. Annotation Ordering#
While most annotations occur in an order determined by any given test scenario, some annotations need to be issued first. These are, in order: NAME, DEF, DECL and DCLARRAY, and finally, INIT.
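One way to realise this ordering is a stable sort over the front-matter keywords, sketched below (order_front_matter and the PRIORITY table are hypothetical, not the actual implementation):

```python
# Hypothetical sketch: front-matter annotations are emitted in the
# required order; everything else stays in scenario order after them.
PRIORITY = {"NAME": 0, "DEF": 1, "DECL": 2, "DCLARRAY": 2, "INIT": 3}

def order_front_matter(annotations):
    front = [a for a in annotations if a[0] in PRIORITY]
    rest = [a for a in annotations if a[0] not in PRIORITY]
    # sorted() is stable, so DECL/DCLARRAY keep their relative order
    return sorted(front, key=lambda a: PRIORITY[a[0]]) + rest
```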
9.6.6. Test Code Assembly#
The code snippets produced by refining annotations are not enough on their own. Boilerplate code is also needed to set up, coordinate and tear down the tests, as well as to provide useful C support functions.
For a model called mymodel the following files are required:

mymodel.pml - the Promela model
mymodel-rfn.yml - the model refinement to C test code
tc-mymodel.c - the testcase setup C file
tr-mymodel.h - the test-runner header file
tr-mymodel.c - the test-runner setup C file
The following files are templates used to assemble a single test-runner C file for each scenario generated by the Promela model:

mymodel-pre.h - preamble material at the start
mymodel-run.h - test runner material
mymodel-post.h - postamble material at the end
The process is entirely automated:
tbuild all mymodel
The steps of the process are as follows:
9.6.6.1. Scenario Generation#
When SPIN is invoked to find all scenarios, it produces a number (N) of counterexample files with a .trail extension. These files hold numeric data that refer to SPIN internals:

mymodel.pml1.trail
...
mymodel.pmlN.trail
SPIN is then used to take each trail file and produce a human-readable text file, using the same format as the SPIN simulation output. SPIN numbers files from 1 upwards, whereas the RTEMS convention is to number things, including filenames, from zero. SPIN is therefore used to produce readable output in text files with a .spn extension, with 1 subtracted from the trail file number. This results in the following files:

mymodel-0.spn
...
mymodel-{N-1}.spn
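The renumbering step is simple but easy to get wrong; it can be sketched as follows (spn_name is a hypothetical helper, not part of the toolchain):

```python
# Hypothetical sketch: map a 1-based SPIN trail number to the 0-based
# RTEMS-style .spn file name.
def spn_name(model, trail_number):
    return "{0}-{1}.spn".format(model, trail_number - 1)

print(spn_name("mymodel", 1))  # mymodel-0.spn
```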
9.6.6.2. Test Code Generation#
Each SPIN output file mymodel-I.spn is converted to a C test-runner file tr-mymodel-I.c by concatenating the following components:
mymodel-pre.h

This is a fairly standard first part of an RTEMS test C file. It is used unchanged.

refined test segments

The annotations in mymodel-I.spn are converted, in order, into test code snippets using the refinement file mymodel-rfn.yml. Snippets are gathered into distinct code segments based on the Promela process number reported in each annotation. Each code segment is used to construct a C function called TestSegmentP(), where P is the relevant process number.

mymodel-post.h

This is static code that declares the top-level RTEMS Tasks used in the test. These are where the code segments above get invoked.
mymodel-run.h

This defines the top-level C functions that implement a given test runner. The top-level function has a name like RtemsMyModel_Run. This is not valid C as it stands, because the name needs to be parameterized by the relevant scenario number, so the file contains Python string format substitution placeholders that allow that number to be appended to the function name. The run function actually appears in this file as RtemsMyModel_Run{0}, and I is substituted in for {0} to produce the name RtemsMyModel_RunI. In particular, it invokes TestSegment0(), which contains code generated from Promela process 0, the main model function; this declares test variables and initializes them.
These will generate test-runner test files as follows:
tr-mymodel-0.c
...
tr-mymodel-{N-1}.c
In addition, the test case file tc-mymodel.c needs to have entries of the form below added manually, for I in the range 0 to N-1:
T_TEST_CASE( RtemsMyModelI )
{
RtemsMyModel_RunI(
...
);
}
These define the individual test cases in the model, each corresponding to precisely one SPIN scenario.
9.6.6.3. Test Code Deployment#
All files starting with tc- or tr- are copied to the relevant testsuite directory. At present, this is testsuites/validation at the top level of the rtems repository.
The names of all the above files with a .c extension are added to a YAML file that defines the Promela-generated test sources. At present, this is spec/build/testsuites/validation/model-0.yml at the top level of the rtems repository. They appear in the YAML file under the source key:
source:
- testsuites/validation/tc-mymodel.c
- testsuites/validation/tr-mymodel-0.c
...
- testsuites/validation/tr-mymodel-{N-1}.c
- testsuites/validation/ts-model-0.c
9.6.6.4. Performing Tests#
At this point, build RTEMS as normal (e.g., with waf), and the tests will be built as well. The executable will be found in the designated build directory, e.g.:
rtems/build/sparc/gr740/testsuites/validation/ts-model-0.exe
This can be run using the RTEMS Tester (RTEMS User Manual, Host Tools, RTEMS Tester and Run).
Both building the code and running on the tester is also automated (see Formal Tools Setup).
9.6.7. Traceability#
Traceability between requirements, specifications, designs, code, and tests is a key part of any qualification/certification effort. The test generation methodology developed here supports this in two ways when refining an annotation:
If the annotation is for a declaration of some sort, the annotation itself is added as a comment to the output code, just before the refined declaration.
// @@@ 0 NAME Chain_AutoGen
// @@@ 0 DEF MAX_SIZE 8
#define MAX_SIZE 8
// @@@ 0 DCLARRAY Node memory MAX_SIZE
static item memory[MAX_SIZE];
// @@@ 0 DECL unsigned nptr NULL
static item * nptr = NULL;
// @@@ 0 DECL Control chain
static rtems_chain_control chain;
If the annotation is for a test of some sort, a call to T_log() is generated with the annotation as its text, just before the test code.

T_log(T_NORMAL,"@@@ 0 INIT");
rtems_chain_initialize_empty( &chain );
T_log(T_NORMAL,"@@@ 0 SEQ chain");
T_log(T_NORMAL,"@@@ 0 END chain");
show_chain( &chain, ctx->buffer );
T_eq_str( ctx->buffer, " 0" );
In addition to traceability, these also help when debugging models, refinement files, and the resulting test code.