Generating low level tests from system tests

Jonny Woodrow
2020-07-07

In software development, system testing can identify issues with the software, but is often an expensive process. We've been working on a tool that lets you capture data from system tests and automatically generate unit tests that exercise software functions in an equivalent way. This could minimize the engineering effort needed to test software functionality across the software development life cycle.

Software system tests are commonly used to identify issues with the software under test. These tests, which often involve manual processes carried out by test engineers, are effort-intensive to run. Lower-level tests such as unit tests typically require less effort to run, but usually must be written by hand.

We've been working on a tool that automatically generates unit tests that exercise software in an equivalent way to system tests. As these unit tests are generated automatically and are easier to run, you can save a huge amount of time by not having to rerun your system tests every time you change your code.

How does it work?

To capture values from system tests, RVS statically analyzes the source code and instruments it so that it can observe the inputs and outputs of functions, whether they are parameters or global variables. This allows it to follow pointers and unroll any arrays and structures needed to understand the program behavior. It can even detect ‘volatile’ results (such as the creation of a file handle or a malloc pointer).
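As a rough illustration of what this kind of instrumentation does (the capture_value hook, its log format, and the scale function below are invented for this sketch; RVS generates its own instrumentation automatically), consider a function whose inputs include both a parameter and a global variable:

    /* Hypothetical sketch of input/output capture instrumentation.
       capture_value() stands in for whatever recording mechanism the
       tool injects; here it just writes to a log. */
    #include <stdint.h>
    #include <stdio.h>

    static void capture_value(const char *func, const char *name, int32_t v)
    {
        printf("%s %s=%ld\n", func, name, (long)v);  /* record to test log */
    }

    int32_t g_mode;                       /* global read by the function */

    int32_t scale(int32_t x)
    {
        capture_value("scale", "in:x", x);           /* parameter input   */
        capture_value("scale", "in:g_mode", g_mode); /* global input      */

        int32_t result = (g_mode == 1) ? x * 2 : x;

        capture_value("scale", "out:result", result); /* observed output  */
        return result;
    }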

Then, when the system test is run, RVS captures significant events or "highlights" that you can play back as stand-alone unit tests (Figure 1). A system test may observe dozens of significant events for a specific function over the course of minutes or even hours, while the generated unit tests replay those events back-to-back, greatly reducing the time needed to exercise them.

Figure 1. Capturing highlights from system tests to generate unit tests

After capturing data from the system test, you can select functions for which to create unit tests. RVS will then look at the captured system test data including observed input and output parameters for the function, global state changes and the call-stack, and will generate RapiTest unit tests that mimic the behavior observed during the system test.
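Conceptually, each generated test replays one captured highlight: it sets the observed inputs, calls the function, and checks the observed outputs. The plain C harness below is only a sketch of that idea (RapiTest expresses its tests in its own formats); the function, global, and values carry over from the hypothetical instrumentation example above:

    /* Sketch of a unit test generated from one captured highlight. */
    #include <assert.h>
    #include <stdint.h>

    extern int32_t g_mode;
    int32_t scale(int32_t x);

    void test_scale_highlight_1(void)
    {
        g_mode = 1;                 /* captured global state   */
        int32_t result = scale(21); /* captured input: x = 21  */
        assert(result == 42);       /* captured output         */
    }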

When generating tests for a function, you can select which of the observed values to use as criteria for generating new unique tests. For example, you can configure tests to be automatically generated based on differences in coverage, the data being fed to the function, the global state, or the underlying call-stack of the function.
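As a loose sketch of the idea (the scheme below is invented for illustration, not how RVS implements it), selecting unique tests can be thought of as keeping a captured event only when a hash of the chosen criteria has not been seen before:

    /* Hypothetical deduplication of captured events. criteria_hash is
       assumed to be computed over whichever criteria were selected
       (coverage, input data, global state, or call-stack). */
    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_SEEN 1024

    static uint32_t seen[MAX_SEEN];
    static int seen_count;

    /* Returns true the first time a given criteria hash is observed. */
    bool is_new_test(uint32_t criteria_hash)
    {
        for (int i = 0; i < seen_count; i++)
            if (seen[i] == criteria_hash)
                return false;          /* duplicate: discard this event */
        if (seen_count < MAX_SEEN)
            seen[seen_count++] = criteria_hash;
        return true;                   /* new combination: emit a test  */
    }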

The generated RapiTest tests can then be run in an automated environment. Figure 2 shows part of a test spreadsheet generated from system tests, where the criteria for each "unique" test were the data passed to the function under test.

Figure 2. Values of arrays, pointers and structure elements captured by the system-to-unit converter

Selecting acceptance criteria

For a test to be meaningful, it must include acceptance criteria. These depend on the test, the application, and the environment in which the test is run. Some tests pass merely if the function under test exits without error; others check much more complex conditions.

When capturing data for a function, RVS observes the inputs, outputs, global state and call-stack of the function, along with its timing and coverage behavior. When generating unit tests for the function, you can select which of the observed values to use as acceptance criteria for your tests. In testing terms, this could be considered "oracle" testing: if the system behaved correctly when a given input produced the observed output, the generated unit test should check that the same inputs consistently produce the same outputs on every run.
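For instance, a generated test might use both the observed return value and an observed change to global state as its oracle. The function, global, and values below are hypothetical, chosen only to show the shape of such criteria:

    /* Sketch of generated acceptance criteria using two oracles:
       the return value and a global the function is assumed to update.
       Timing or coverage checks could be selected instead. */
    #include <assert.h>
    #include <stdint.h>

    extern int32_t g_last_output;      /* hypothetical global output  */
    int32_t filter_sample(int32_t x);  /* hypothetical function       */

    void test_filter_sample_oracle(void)
    {
        int32_t result = filter_sample(100); /* captured input        */

        assert(result == 75);          /* observed return value       */
        assert(g_last_output == 75);   /* observed global state change */
    }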

Figure 3 gives an example of automatically generated acceptance criteria. This test checks changes in values when the function under test is executed with a specific input pattern.

Figure 3. Generating unit tests from captured values

Running the tests

While the unit tests generated by RapiTest are not traceable to requirements, they could let you take that three-hour system test that you have to run every release, extract the "best bits" of it to generate a few milliseconds' worth of unit tests, and run them as regression tests on a continuous integration server. This would let you, for example, set up regression tests for software functionality you've already completed. If any changes to your system cause your software's functionality to change, your regression tests would automatically flag these for your attention, letting you avoid the risk of nasty surprises towards the end of your testing cycle.
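As a sketch of that workflow (the test names come from the hypothetical examples above), a CI job could simply build and run a generated regression harness whose exit status fails the build on any assertion failure:

    /* Minimal regression runner for a CI server: run the generated
       tests and exit nonzero on failure so the build is flagged.
       assert() aborts the process, which CI reports as a failed step;
       a real harness would report per-test results. */
    #include <stdio.h>
    #include <stdlib.h>

    void test_scale_highlight_1(void);
    void test_filter_sample_oracle(void);

    int main(void)
    {
        test_scale_highlight_1();
        test_filter_sample_oracle();
        puts("all generated regression tests passed");
        return EXIT_SUCCESS;
    }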

This technology offers a flexible approach to testing throughout development and could be used in many ways. It can be combined with other test-generation facilities such as bounds checking, fuzz testing and auto-generation for coverage (up to MC/DC) to provide a robust verification layer before formal testing even begins, greatly reducing the cost of testing.

We plan to release this technology officially in a future version of RVS. In the meantime, if you want more information about it, or have any ideas on how you would use the technology, contact us.
