Model-based software testing
Timing and code coverage metrics are widely used for assessing the quality of safety-critical software. Guidance for aerospace (DO-178C) and automotive (ISO 26262) software development recommends or mandates the use of these metrics.
MATLAB® Simulink® is a popular model-based development tool. When used in the safety-critical arena, code generated by model-based development technologies like Simulink still needs to be tested to the same level as hand-written code. There are different ways to perform code coverage and timing analysis on model-generated code, including using native tools created by MathWorks® and third-party solutions.
Back-to-back testing through MIL, SIL, PIL & HIL
'Back-to-back' model-based testing is the optimal testing method for model-based developers. Using this method, you only need to write your tests once; the same tests can then be used all the way from initial model-based analysis to the final target hardware environment.
Back-to-back testing massively improves efficiency and ensures that requirements are traced in the same way through the entire software development lifecycle. There are four widely recognized phases of model-based testing: MIL, SIL, PIL and HIL.
Model In the Loop (MIL): MIL testing is done on-host within Simulink and applies tests at the model level. No code has been generated at this point, let alone compiled. It's a simulation.

Software In the Loop (SIL): SIL testing is again performed on-host, but this time it is the compiled software generated from the model that is tested. This is often performed in some kind of emulated environment.

Processor In the Loop (PIL): This is the first phase of on-target testing, i.e. not running on the development machine alongside Simulink. It is common for this testing to be performed on a development board.

Hardware In the Loop (HIL): HIL testing uses the final target hardware that the software is designed to run on. This is often a more complex test rig and is designed to exactly mirror the real environment.
On-target testing using Rapita Verification Suite (RVS)
RVS can be used for back-to-back testing of your Simulink project throughout the software development lifecycle.
Timing analysis, including WCET
RapiTime, the timing analysis component of the RVS tool suite, can be used to find the worst-case execution time (WCET) of code generated by Simulink Embedded Coder. This lets you generate reports on WCET at the source level, which can then be manually related back to the Simulink model.
Data that RapiTime can report includes:
- Execution Time Profiles that show the distribution of execution times for functions and their children
- High water mark execution time, which shows the path of the longest observed execution time
- Worst-case execution time, which can also be shown as a path through the code
- Contribution reports, which show what percentage of the worst-case or high water mark path a specific piece of code is responsible for
- Which code has been tested and which is untested
Code coverage, including MC/DC
For coverage analysis, RapiCover can be used to automate the collection of code coverage metrics on Simulink-generated code up to and including MC/DC. Flexible integration strategies ensure efficient verification, regardless of the target hardware.
RapiCover is an advanced code coverage tool designed for both on-host and on-target analysis. DO-178C guidance places significant value on achieving coverage via system tests, which in the case of model-based development can be defined at the model level. RapiCover supports this type of system-level testing from the early stages of development all the way through to the hardware-in-the-loop phase.
RapiCover also measures code that other tools don’t, such as treating Boolean and bitwise operators as decisions, and supports testing decisions with up to 1,000 conditions.
What do MathWorks offer for testing?
MathWorks have addressed the need for coverage (including MC/DC) analysis in their toolchain with Simulink Coverage™. Simulink Coverage enables code and model coverage analysis to measure testing completeness.
Simulink Coverage produces interactive reports showing how much of your model, C/C++ S-functions, MATLAB functions, and code generated by Embedded Coder® has been exercised.
MathWorks offer some timing analysis features via their real-time kernel for desktop, Simulink Desktop Real-Time™. Its features include the ability to analyze model execution performance and produce task-level performance and block-level timing information.
These basic profiling features can be useful, but they do not support the determination of software worst-case execution time (WCET), which is needed for DO-178C and ISO 26262 certification.
Understanding code coverage and timing at the model level
Clearly, for developers adopting a model-based development approach, it is advantageous to understand code coverage and timing data at the model level rather than the source level.
Rapita have developed custom integrations of RapiTime and RapiCover with MATLAB Simulink, which allow timing information and code coverage to be reported at the model level, starting in the early stages of development.
The RapiTime integration allows a range of timing metrics, including WCET, to be reported at the model level. The worst-case contribution time metric is particularly useful when testing models. As shown in the screenshot (right), Simulink blocks that appear on the worst-case path are colored blue, and the contribution of each block to the overall WCET is shown on the model.
The data produced by RapiTime is especially helpful for optimizing the WCET of code generated from models, as RapiTime lets users rapidly identify which blocks make the largest contribution to WCET and focus optimization efforts on them.
A single integration between RapiTime and MATLAB Simulink can also provide code coverage metrics via RapiCover, reported at the model level.
The integration works with both PIL (processor in the loop) and HIL (hardware in the loop) configurations. Coverage can also be measured at the SIL (software in the loop) level, although timing measurements collected on a PC can differ considerably from measurements for a specific embedded target.
In the case of coverage, this means that you can produce evidence that the simulation input vectors adequately test the model early in the development cycle.
RapiTime and RapiCover both have qualification kits available for ISO 26262 or DO-178B/C, and have already been qualified on a number of projects.