DO-178B/C

Automating test generation with AUTOSAC

Find out how automated test generation from SPARK Ada pre- and post-conditions can cut the effort needed to run test projects.

How do I demonstrate the safe use of instrumented source code analysis?

In the second of two related blog posts, I describe one method to present a certification argument for the use of instrumentation in source code coverage analysis.

Presenting a safety case

Our blogs typically concentrate on technical details that help with the detailed verification of your software. This blog sets the scene for future posts describing some of the ways in which our verification tools can be used to help meet safety certification requirements. Verification data provides evidence about the performance and functionality of your code; however, you'll also need to justify why that verification data is relevant, how it fulfils certification requirements, and whether it is sufficiently complete, consistent, and correct.

CAST-10 "Literal" Interpretation of Decision Coverage Increases Rigor of Testing Requirements

The Certification Authorities Software Team (CAST) issues guidance on various questions arising from the interpretation of DO-178B and DO-178C. In the CAST-10 position paper [1], the team discusses what counts as a "decision". CAST-10 presents three possible interpretations of a decision:

Function pointers and their impact on stack analysis

Function pointers present a real problem for static code analysis, including when calculating stack usage. Understanding software stack requirements is an activity required by several standards and guidelines, including DO-178B and DO-178C. Nevertheless, function pointers are supported by, and therefore prevalent in, most system-level languages: C and Ada both have them, and they are used all the time in C++.
