Generating low level tests from system tests

Jonny Woodrow
2020-07-07

In software development, system testing can identify issues with the software, but is often an expensive process. We've been working on a tool that lets you capture data from system tests to automatically generate unit tests that test software functions in an equivalent way. This could be used to minimize the engineering effort needed to test software functionality across the software development life cycle.

System tests are widely used to identify issues in software, but because they often involve manual steps performed by test engineers, they are effort-intensive to run. Lower-level tests such as unit tests typically require less effort to run, but usually must be written by hand.

We've been working on a tool that automatically generates unit tests that exercise the software in a way equivalent to your system tests. Because these unit tests are generated automatically and are much cheaper to execute, you can save a huge amount of time by not rerunning your system tests every time you change your code.

How does it work?

To capture values from system tests, RVS statically analyzes the source code and instruments it so that it can observe the inputs and outputs of functions, whether they are parameters or global variables. This allows it to follow pointers and unroll any arrays and structures needed to understand the program's behavior. It can even detect ‘volatile’ results (such as the creation of a file handle or a pointer returned by malloc).
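
As a rough illustration of what such instrumentation does (a minimal hand-written sketch, not RVS's actual instrumentation; the function scale_reading, the global g_offset and the logging hooks are all hypothetical):

```c
#include <stdio.h>

int g_offset = 0;  /* global state read by the function under observation */

/* Hypothetical logging hooks standing in for the instrumenter's output. */
static void log_input(const char *name, int value)  { printf("in  %s=%d\n", name, value); }
static void log_output(const char *name, int value) { printf("out %s=%d\n", name, value); }

/* The original function. */
static int scale_reading_impl(int raw, int *scaled)
{
    *scaled = (raw + g_offset) * 2;
    return *scaled > 100;  /* saturation flag */
}

/* Instrumented wrapper: records every input (parameters and globals)
 * and every output (through-pointer results and the return value). */
int scale_reading(int raw, int *scaled)
{
    log_input("raw", raw);
    log_input("g_offset", g_offset);
    int ret = scale_reading_impl(raw, scaled);
    log_output("*scaled", *scaled);
    log_output("return", ret);
    return ret;
}

int main(void)
{
    int scaled;
    g_offset = 5;  /* as might be set by earlier system activity */
    scale_reading(10, &scaled);
    return 0;
}
```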

Then, when the system test is run, RVS captures significant events or "highlights" that you can play back as stand-alone unit tests (Figure 1). A system test may observe dozens of significant events for a specific function over the course of minutes or even hours, while a unit test replays those events back-to-back, dramatically reducing the time needed to test for them.

Figure 1. Capturing highlights from system tests to generate unit tests

After capturing data from the system test, you can select functions for which to create unit tests. RVS then looks at the captured system test data, including the observed input and output parameters for the function, global state changes and the call-stack, and generates RapiTest unit tests that mimic the behavior observed during the system test.
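
In RapiTest these generated tests are expressed in its own test formats (Figure 2 shows the spreadsheet form), but conceptually each one replays a captured call. A hypothetical C rendering of a test generated from the capture in the sketch above might look like this:

```c
#include <assert.h>

extern int g_offset;                       /* global from the sketch above */
int scale_reading(int raw, int *scaled);

/* Replays one captured call: restore the captured global state, call the
 * function with the recorded parameters, check the recorded outputs. */
void test_scale_reading_case_1(void)
{
    g_offset = 5;                          /* captured global state */
    int scaled = 0;
    int ret = scale_reading(10, &scaled);  /* captured parameter value */
    assert(scaled == 30);                  /* captured output: (10 + 5) * 2 */
    assert(ret == 0);                      /* captured return value */
}
```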

When generating tests for a function, you can select which of the observed values to use as criteria for generating new unique tests. For example, you can configure tests to be automatically generated based on differences in coverage, the data being fed to the function, the global state, or the underlying call-stack of the function.
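
One way to picture the "unique test" criteria is as a key built from whichever observations you select: captured calls whose keys match collapse into a single generated test. A simplified sketch, with hypothetical types and field names:

```c
#include <stdbool.h>
#include <string.h>

/* One captured call, reduced to the observations that can act as criteria. */
typedef struct {
    unsigned coverage_mask;   /* branches exercised during the call */
    int      inputs[4];       /* flattened parameter values */
    int      global_state;    /* snapshot of relevant globals */
    unsigned callstack_hash;  /* hash of the call-stack at entry */
} capture_t;

/* Compare two captures on the user-selected criteria only; a new unit
 * test is generated only when no earlier capture matches. */
bool same_test(const capture_t *a, const capture_t *b,
               bool by_coverage, bool by_inputs,
               bool by_globals, bool by_callstack)
{
    if (by_coverage  && a->coverage_mask != b->coverage_mask)  return false;
    if (by_inputs    && memcmp(a->inputs, b->inputs, sizeof a->inputs) != 0) return false;
    if (by_globals   && a->global_state  != b->global_state)   return false;
    if (by_callstack && a->callstack_hash != b->callstack_hash) return false;
    return true;
}
```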

The tests generated by RapiTest can then be run in an automated environment. Figure 2 shows part of a test spreadsheet generated from system tests, where the criterion for each "unique" test was the data passed to the function under test.

Figure 2. Values of arrays, pointers and structure elements captured by the system to unit converter

Selecting acceptance criteria

For a test to be meaningful, it must include acceptance criteria. These depend on the test, the application, and the environment in which the test is run. Some tests pass merely if the function under test exits without error, while others may be much more complex.

When capturing data for a function, RVS observes the inputs, outputs, global state and call-stack of functions, as well as their timing and coverage behavior. When generating unit tests for a function, you can select which of the observed values to use as acceptance criteria for your tests. In testing terms, this could be considered "oracle" testing: if the system behaved correctly when some input produced the observed output, the unit test should check that those inputs consistently produce the same outputs on every run.
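
To make the difference between weak and strong criteria concrete, here are two hypothetical variants of the replayed call from the earlier sketch: a minimal "completed without error" check and a full oracle check against every captured output:

```c
#include <assert.h>

extern int g_offset;
int scale_reading(int raw, int *scaled);

/* Weakest criterion: the call merely completes and reports success. */
void test_scale_reading_smoke(void)
{
    int scaled = 0;
    g_offset = 5;
    assert(scale_reading(10, &scaled) == 0);  /* "exited without error" */
}

/* Oracle criterion: every captured output must be reproduced exactly. */
void test_scale_reading_oracle(void)
{
    int scaled = 0;
    g_offset = 5;
    int ret = scale_reading(10, &scaled);
    assert(ret == 0);
    assert(scaled == 30);  /* captured inputs must yield captured outputs */
}
```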

Figure 3 gives an example of automatically generated acceptance criteria. This test automatically checks how values change when the function under test is executed with a specific input pattern.

Figure 3. Generating unit tests from captured values

Running the tests

While the unit tests generated by RapiTest are not traceable to requirements, they could let you take that three-hour system test that you have to run every release, extract the "best bits" of it to generate a few milliseconds' worth of unit tests, and run them as regression tests on a continuous integration server. This would let you, for example, set up regression tests for software functionality you've already completed. If any changes to your system cause your software's functionality to change, your regression tests would automatically flag these for your attention, letting you avoid the risk of nasty surprises towards the end of your testing cycle.

This technology offers a flexible approach to testing throughout development and could be used in many ways. It can be combined with other test-generation facilities such as bounds checking, fuzz testing and auto-generation for coverage (up to MC/DC) to provide a robust verification layer before formal testing even begins, greatly reducing the cost of testing.

We plan to release this technology officially in a future version of RVS. In the meantime, if you want more information about it, or have any ideas on how you would use the technology, contact us.
