How does RVS fit into my development environment?

The design requirements of all RVS tools include the ability to work alongside your existing development environment. You can configure these tools to generate verification metrics and run tests using custom scripts, and can integrate the tools into your existing version control and continuous build systems. 

How large a code base can RVS tools handle?

RVS tools are designed to handle very large code bases. Because of the efficient algorithms used by RVS tools, there is no fundamental limitation to the number of lines of code that RVS can process, and our RVS tools have been used on projects with millions of lines of code.

Which hardware architectures do RVS tools support?

RVS tools can be integrated to work with almost any embedded target. Our engineers can work with you to determine the optimal strategy for integrating the tool with your target, even for multi-core architectures. For more information on the hardware architectures we have integrated RVS tools with, see our support specification.

How do RVS tools collect data from my target?

All RVS tools support numerous data collection strategies, which are optimized to achieve a minimal instrumentation overhead. You can collect data using debuggers, logic analyzers, directly from the address bus, or using our datalogger, RTBx. By working with us to determine the optimal data collection strategy, you can achieve results with minimal effort.

Can I use RVS on my target, which has limited RAM and/or ROM?

The instrumentation overheads for RVS tools are the lowest on the market, and the tools can support zero instrumentation overhead in some cases. Because of this, RVS can generate verification data from your source code in fewer builds than is possible using other tools. For example, in a recent customer evaluation, RapiCover's instrumentation overhead was 4 to 5 times lower than that of competitor tools, leading to a 92% reduction in the number of builds necessary to test the code base. If necessary, you can configure RVS tools to instrument your code in multiple builds and compile the collected data into a single report.

How is my data presented?

All RVS tools include a user-friendly interface that presents your data in both tabular and graphical formats. You can filter your results to zoom in on target functions, making it easy to find the information you are looking for.

How do you support RVS users?

We provide an extensive set of RVS documentation with each of our products, and offer training courses guiding you through the most effective use of RVS tools. All our users can benefit from privileged access to our website, which includes downloads for new product releases. 

What happens if I encounter an issue while using an RVS tool?

All RVS licenses include access to our dedicated in-house support team, who will work with you to provide a rapid fix to your issue. This is a critical part of our vision. During 2016, we responded to 97% of new support requests within one working day, closed 56% of these within 3 working days and 91% within 20 working days. We also inform our customers of known issues via our website and email. 

Which industries use RVS?

We have been providing quality products and customer service to companies primarily in the aerospace and automotive embedded industries since 2004, as demonstrated by our case studies. As our products are designed to meet the stringent requirements for DO-178B/C certification in the aerospace industry, they are well-suited to any safety/mission-critical application such as those in the nuclear, medical, industrial and rail industries.

My software is part of a product that must be certified against a safety guideline. Can RVS tools be qualified for use in my project?

All our RVS tools are designed to meet the most stringent needs of certification processes, such as the DO-178B/C process used in the aerospace industry and the ISO 26262 process used in the automotive industry. We can provide developer qualification documents, a template integration qualification report and on-site tests to support you in qualifying RVS tools in projects requiring certification.

How are RVS products licensed?

We offer both “Node-locked” and “Floating” licenses, and a license server to support use of our tools in your specific development environment. Floating licenses follow the “Enterprise” model, meaning you can use our tools across geographical boundaries and different projects and users.

Which host operating systems can RVS be used on?

RVS tools support Windows 7 or newer, Windows Server 2008 R2 or newer, and a variety of Linux distributions including Ubuntu and Red Hat.

RVS tools can be used on projects with unsupported operating systems by using a clone integration to split the process and delegate parts of it to the unsupported machine.

How do I learn more about RVS?

You can request a trial version of RVS. You can also arrange a demonstration, where a member of our team will work with you to show the benefits RVS can offer you.

Can I use RapiTime with my build system?

RapiTime can be integrated to work with almost any compiler and target hardware. Our integration service promises to deliver a robust integration of RapiTime into your build system.

Which types of timing data does RapiTime calculate?

When you run your code on-target, or on-host using a simulator, RapiTime collects several timing metrics. These include worst-case execution time, high and low water mark paths, and minimum, average and maximum execution times for each test of your code. 

How does RapiTime calculate the worst-case execution time of my software?

The worst-case execution time reported by RapiTime is pessimistic, meaning the actual worst-case execution time of your code cannot be higher than the value reported by RapiTime. This satisfies the stringent requirements of embedded system certification.

How do I determine the execution time overhead from instrumenting my source code?

RapiTime includes inbuilt functionality to let you determine the execution time overhead of adding instrumentation to your source code. This overhead can then be automatically subtracted from the timing metrics reported by the tool. 

Can I use RapiTime to analyze timing behavior on multi-core architectures?

As with all RVS tools, RapiTime supports collection of timing data on multi-core architectures. RapiTime achieves this by suspending data collection during a task interrupt and reinitializing collection when the task on the current core resumes execution. You can also use RapiTime to identify the interference produced by other cores on your system under test.

Can I determine timing metrics on my target, which has limited RAM and/or ROM?

The instrumentation overheads for RapiTime are very low, and zero instrumentation overhead can be supported in some cases, depending on your target hardware and data collection strategy. This means that you can use RapiTime with virtually any target.

In addition to guaranteeing timing deadlines are met, how else can I use RapiTime?

You can use timing metrics to optimize your code for timing behavior. RapiTime highlights the costliest functions in terms of execution time, guiding you towards the best optimization candidates. By running RapiTime again after you optimize your code, you can determine performance improvements.

Which languages does RapiTime support?

RapiTime supports C, C++ and Ada projects, including mixed-language ones.


How do I learn more about RapiTime?

You can request a trial version of RVS, which includes RapiTime. You can also arrange a demonstration, where a member of our team will work with you to show the benefits RapiTime can offer you.

If you're interested in using RapiTime in academia, you can find it discussed in the academic press.

How does RapiCover work?

RapiCover works by injecting instrumentation code into your source code and executing your native build system, so coverage results are collected during program execution. Data can be collected from almost any target hardware using a variety of approaches.

For more information on how RapiCover works, see our website.

Which coverage criteria can I measure using RapiCover?

You can measure the most common coverage criteria required to support DO-178B/ED-12B, DO-178C/ED-12C and ISO 26262 certification using RapiCover. This includes function, call, statement, branch, decision and condition coverage, and MC/DC.
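As a generic illustration of the strictest of these criteria (this is not RapiCover output), MC/DC requires each condition in a decision to be shown to independently affect the decision's outcome. For a decision like `(a and b) or c`, a minimal set of n + 1 = 4 test vectors achieves this:

```python
# Illustrative only -- this is not RapiCover output. MC/DC requires each
# condition to be shown to independently affect the decision's outcome.

def decision(a, b, c):
    # Example decision with three conditions.
    return (a and b) or c

# A minimal MC/DC test set: n + 1 = 4 vectors for n = 3 conditions.
mcdc_set = [(True, True, False), (False, True, False),
            (True, False, False), (True, False, True)]

def shows_independence(pair, index):
    """True if the two vectors differ only in condition `index`
    and produce different decision outcomes."""
    v1, v2 = pair
    differ_only_there = all((v1[i] == v2[i]) == (i != index) for i in range(3))
    return differ_only_there and decision(*v1) != decision(*v2)

assert shows_independence((mcdc_set[0], mcdc_set[1]), 0)  # condition a
assert shows_independence((mcdc_set[0], mcdc_set[2]), 1)  # condition b
assert shows_independence((mcdc_set[2], mcdc_set[3]), 2)  # condition c
```

Statement, branch and decision coverage would already be satisfied by smaller test sets; MC/DC demands this extra per-condition evidence.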

Can I determine coverage for a decision containing large numbers of conditions?

By default, RapiCover supports 30 conditions per decision. We can work with you to develop custom configurations, allowing the tool to support a larger number of conditions, as necessary.

Can I collect coverage over multiple builds?

Using our configurable instrumentation mechanism, you can instruct RapiCover to determine coverage for specific subprograms as necessary. After running your tests, it is easy to merge coverage data from these runs into a combined report.  
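As an illustration of the merging principle (this is not the RapiCover data format), coverage from separate runs combines by set union, so a point covered in any run counts as covered in the combined report:

```python
# Illustrative only -- not the RapiCover data format. Coverage from
# separate runs merges by set union: a point covered in any run counts
# as covered in the combined report.
run1 = {"f": {1, 2, 5}, "g": {1}}   # covered point IDs per subprogram
run2 = {"f": {2, 3}, "h": {4}}

merged = {}
for run in (run1, run2):
    for subprogram, points in run.items():
        merged.setdefault(subprogram, set()).update(points)

assert merged == {"f": {1, 2, 3, 5}, "g": {1}, "h": {4}}
```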

Can I add manual configurations that flag my code as being exempt/uncoverable?

RapiCover includes a powerful “justification” mechanism that lets you mark code as exempt from coverage requirements. Using this feature, you can provide a rationale for each justification and create templates to justify similar code more easily. When your code changes, justifications are automatically migrated to the new location of the justified code.

For more information on using justifications in RapiCover, see our white paper.

What happens when I change my code?

RapiCover retains information about the revision of your code it used to generate results. The tool will report an error if you try to merge coverage from incompatible revisions. RapiCover includes an Optimal Dataset Calculator feature you can use to calculate the least expensive tests you need to run again when your code changes, saving you valuable testing effort.

Can I collect coverage data across power cycles and reset sequences?

RapiCover can be configured to collect data in real time while your software runs. By writing data to an external device, the data remains in place while your system reboots, and collection can be reinitialized when the system restarts. This means that you can collect coverage data across a shutdown or reset sequence, subject to your target hardware architecture.

Which languages does RapiCover support?

RapiCover supports C, C++ and Ada projects, including mixed-language ones.


How do I learn more about RapiCover?

You can request a trial version of RVS, which includes RapiCover. You can also arrange a demonstration, where a member of our team will work with you to show the benefits RapiCover can offer you.

If you're interested in using RapiCover in academia, you can find it discussed in the academic press.


Can I use RapiTest with my build system?

RapiTest can be integrated to work with almost any compiler and target hardware. Our integration service promises to deliver a robust integration of RapiTest into your build system.

What types of testing does RapiTest support?

You can use RapiTest to generate code for testing both high- and low-level requirements, and it supports test generation at the levels commonly defined in the industry, such as unit, module, integration and system testing.

How do I write tests for use with RapiTest?

RapiTest supports tests written in spreadsheet formats and as scripts, and is customizable to support existing or alternative test formats. You can use our spreadsheet and scripting formats to write your tests, and if necessary, we can develop a converter for RapiTest to interpret tests written in your existing format. 

What kind of stubbing behavior does RapiTest support?

RapiTest's test generation algorithms are powerful and support the generation of all common types of test double, including stubs, mocks, fakes, spies and dummies.
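As a generic illustration of two of these test-double types (this is not RapiTest syntax): a stub simply returns a canned value in place of a real dependency, while a spy also records how it was called so the test can verify the interaction afterwards.

```python
# Generic illustration of two test-double types -- not RapiTest syntax.

def stub_read_sensor():
    """Stub: replaces the real sensor driver with a canned reading."""
    return 42

class SpyLogger:
    """Spy: stands in for a logger and records every call for later checks."""
    def __init__(self):
        self.messages = []

    def log(self, msg):
        self.messages.append(msg)

def check_sensor(read_sensor, logger):
    # Code under test: logs a warning when the reading exceeds a threshold.
    value = read_sensor()
    if value > 40:
        logger.log("over threshold")
    return value

spy = SpyLogger()
assert check_sensor(stub_read_sensor, spy) == 42
assert spy.messages == ["over threshold"]   # the spy verifies the interaction
```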

How many tests can RapiTest run at once?

You can include as many tests as you want to in a single build on your target, subject to the resources available on your target and your data collection strategy. As with all our RVS tools, RapiTest has a very low overhead, meaning you can complete your test cycle in fewer builds than using other tools.

How is my test data displayed?

RapiTest data is displayed in a user-friendly interface. When you run a test, results are displayed for just the subprograms you tested, letting you quickly check them against your requirements.

Which languages does RapiTest support?

RapiTest supports C, C++ and Ada projects, including mixed-language ones.


How do I learn more about RapiTest?

You can request a trial version of RapiTest. You can also arrange a demonstration, where a member of our team will work with you to show the benefits RapiTest can offer you.

If you're interested in using RapiTest in academia, you can find it discussed in the academic press.

We supply standard cables to connect the RTBx to LVDS or TTL I/O ports. If these ports are unavailable, you may need to install adapters on your embedded board. Rapita Systems can advise on the best way to connect the RTBx to your target.

You can connect RTBx to an address bus that runs at up to 250 MHz. To do this, you must reserve a range of addresses for ipoints, with one bit reserved to indicate that the value on the address bus is an ipoint. The ipoint instrumentation writes a value to a specific address in that region to denote a specific ipoint. This approach reduces the maximum trace duration of the RTBx.
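A hypothetical sketch of the encoding described above (the base address and bit layout are illustrative assumptions, not real RVS values): one reserved bit distinguishes ipoint writes from ordinary bus traffic, and the remaining bits carry the ipoint ID.

```python
# Hypothetical encoding, for illustration only -- the base address and bit
# layout are assumptions, not real RVS values. One reserved bit marks a bus
# write as an ipoint; the remaining bits carry the ipoint ID.
IPOINT_BASE = 0x8000_0000   # assumed reserved region (top bit set)

def encode_ipoint(ipoint_id):
    # The value the instrumentation would write to the address bus.
    return IPOINT_BASE | ipoint_id

def decode(bus_value):
    """Return the ipoint ID if the bus value falls in the reserved
    region, or None for an ordinary memory access."""
    if bus_value & IPOINT_BASE:
        return bus_value & ~IPOINT_BASE
    return None

assert decode(encode_ipoint(42)) == 42
assert decode(0x0000_1000) is None   # ordinary access, not an ipoint
```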

The RTBx automatically timestamps data it collects, using either an internal clock or that on embedded hardware. This removes the need to configure a timestamping procedure on the embedded target itself, which would incur code size and execution time overheads.

Compared to other hardware that can be used for timestamping such as debuggers and logic analyzers, the RTBx can collect trace data for far longer, meaning that it doesn't become a bottleneck in your testing.

The quoted tracing rate is the maximum rate that can be sustained over time, calculated from the number of ipoints the RTBx can process per second. The RTBx can support a higher tracing rate for short periods of time, provided that the minimum separation between instrumentation points is met.

The maximum CPU clock rate supported depends on the number of CPU cycles it takes to output successive ipoints, and the rate at which ipoints are written. For example, the RTBx 2220 can collect trace data via an I/O port with a minimum separation of 4 ns (250 MHz). This model can therefore support a 1 GHz CPU that outputs trace data once every 4 cycles.
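The arithmetic in this example can be checked directly (the 4 ns figure matches the RTBx 2220 number quoted above; the CPU parameters are the example's assumptions, not RTBx specifications):

```python
# Back-of-the-envelope check of the example above. The 4 ns figure matches
# the RTBx 2220 number quoted; the CPU parameters are the example's
# assumptions, not RTBx specifications.
rtbx_min_separation_ns = 4            # RTBx 2220: 250 MHz -> 4 ns
cpu_freq_hz = 1_000_000_000           # 1 GHz CPU
cycles_between_ipoints = 4            # code emits an ipoint every 4 cycles

separation_ns = cycles_between_ipoints / cpu_freq_hz * 1e9
assert separation_ns >= rtbx_min_separation_ns   # 4 ns >= 4 ns: supported
```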

When developing safety-critical applications to DO-178C (CAST-32A) guidelines or ISO 26262 standards, there are special requirements for using multicore processors. Evidence must be produced to demonstrate that software operates within its timing deadlines.

The goal of multicore timing analysis is to produce execution time evidence for these complex systems. In multicore processors, multiple cores compete for the same shared resources, resulting in potential interference channels that can affect execution time. Accounting for this interference and producing robust execution time evidence is a challenge addressed by Rapita’s Multicore Timing Solution.

By following a V-model process, our engineers investigate multicore systems and produce evidence about multicore timing behavior. Our approach has been designed to support projects within the DO-178C (CAST-32A) and ISO 26262 context.

You can see an example workflow of Rapita’s Multicore Timing Solution in our Multicore White Paper.

Our multicore timing analysis solution comprises three components: a process, tool automation, and services.

Our multicore timing analysis process is a V-model process that we developed in line with DO-178 and CAST-32A. It follows a requirements-based testing approach that focuses on identifying and quantifying interference channels on multicore platforms.

The tools we have developed let us apply tests to multicore hardware (RapiTest) and collect timing data (RapiTime) and other metrics such as scheduling metrics (RapiTask) from them. We use RapiDaemons (developed by the Barcelona Supercomputing Center) to create a configurable degree of traffic on shared hardware resources during tests, so we can analyze the impact of this on the application’s timing behavior.

Our multicore timing analysis services include tool integration, porting RapiDaemons, performing timing analysis, identifying interference channels, and others depending on customer needs.

Yes. Our multicore timing analysis solution can be used to produce timing evidence needed to satisfy DO-178C objectives (in line with CAST-32A guidance), and ISO 26262 standards.

Rapita Systems are uniquely positioned to offer the combination of expertise and tools required to effectively perform multicore timing analysis.

Whilst the challenge of certifying multicore systems for safety-critical applications is a relatively new one for the industry as a whole, we have been researching this area for over a decade. Rapita are working with key industry stakeholders, including major chip manufacturers like NXP, to support them in refining the evidence required to satisfy certification authorities.

Rapita have extensive experience in providing software verification solutions for some of the best-known aerospace and automotive companies in the world. For example, BAE Systems used RapiTime (one of the tools in our Multicore Timing Solution) to identify worst-case execution time optimizations for the Mission Control Computer on their Hawk Trainer jet.

See more of our Case Studies

In the CAST-32A position paper published by the FAA, an interference channel is defined as "a platform property that may cause interference between independent applications". This definition covers a range of ‘platform properties’, including thermal factors.

Of these interference channels, interference caused by the sharing of certain resources in multicore systems is one of the most significant in terms of execution times. Interference based on shared resources may occur in multicore systems when multiple cores simultaneously compete for use of shared resources such as buses, caches and main memory.

Rapita’s Multicore Timing Solution analyzes the effects of this type of interference channel.

A very simple example of a shared resource interference channel is as follows:

[Figure: Interference channels]

In this simplified example, tasks running independently on the two cores may need to access main memory simultaneously via the memory controller. These accesses can interfere with each other, potentially degrading system performance.

A RapiDaemon is an application designed to create contention in a predictable and controlled manner on a specific shared resource in a multicore system. The effect of this contention can then be measured to identify interference channels and impacts on execution times.

Rapita have developed and optimized a set of standard RapiDaemons that target shared resources common to most multicore architectures, such as main memory. Some projects will require the creation of custom RapiDaemons for a specific architecture.

We can analyze almost all hardware architectures. To work with an architecture that is new to us, we first identify what metrics we can collect from the hardware, then adapt RapiDaemons for the architecture and implement a strategy to collect data from it.

For more on this, see the FAQ: ‘configuring and porting’.

This is part of the standard Multicore Timing Solution Workflow (detailed in our Multicore White Paper).

Rapita have been providing execution time analysis services and tooling since 2004.

RapiTime, part of the Rapita Verification Suite (RVS), is the timing analysis component of our Multicore Timing Solution. Our customers have qualified RapiTime on several DO-178C DAL A projects, where it has been successfully used to generate certification evidence by some of the most well-known aerospace companies in the world. See our Case Studies.

Learn more about our tool qualification support for RapiTime in projects requiring DO-178B/C certification.

As well as providing a mature tool chain, we support the customer in ensuring that their test data is good enough, so that the timing information they generate from the target is reliable.

Our RapiDaemons are configured and tested (see the FAQ: ‘configuring and porting’) to ensure that they behave as expected on each specific customer platform.

We also assess available observability channels as part of a processor analysis. This primarily applies to the use of performance counters, where we assess their accuracy and usefulness for obtaining meaningful insights into the system under observation.

Before timing analysis can begin, it is essential to confirm that existing RapiDaemons operate as intended on a specific board. If they do not, it may be necessary to customize them to make them behave as they are designed to (a similar intention to a target integration, where we get the instrumented application software running on a specific target).

‘Porting’ and ‘configuration’ of RapiDaemons are roughly equivalent terms in this context. For RapiDaemons to work as intended, they need to go through a configuration phase where their internal parameters are tuned as required. They’re then tested to ensure compatibility with the platform in question.

Each type of RapiDaemon may be implemented for a different instruction-set architecture and platform. While the main logic behind their behavior remains the same, they must be ported to execute correctly on each new platform.

RTOS vendors may provide partitioning mechanisms for their multicore processors, but these do not guarantee the complete elimination of interference. Instead, they are designed to provide an upper limit on the interference, sometimes at the expense of average-case performance.

In aerospace, these partitioning mechanisms may be referred to as ‘robust partitioning’. CAST-32A (the FAA’s position paper on multicore processors in avionics) identifies allowances for some of the objectives if you have robust partitioning in place – but it is still necessary to verify that the partitioning is indeed as robust as claimed.

From a certification standpoint, regardless of the methodology behind the RTOS vendor’s approach to eliminating interference, the effectiveness of the technology needs to be verified.

It is possible for companies to perform multicore timing analysis internally, but it is a highly complex undertaking which is very costly in terms of budget and effort. Anecdotally, one of our customers reported that it took them five years and a budget in the millions of dollars to analyze one specific platform.

Our Multicore Timing Solution is typically delivered as a turn-key solution, from initial system analysis and configuration all the way through to providing evidence for certification.

Some customers prefer to outsource only parts of the process to Rapita. For example, it is possible for a customer to purchase RapiDaemons under license and use them to gather and analyze their own data.

We’re completely flexible, and we understand that different customers have different needs. As such, you can purchase any component of our multicore timing analysis solution separately if that’s what you need. This includes, but is not limited to:

  • Tool licenses (RapiTest, RapiTime, RapiTask)
  • Services to integrate automation tools to work with your multicore system
  • RTBx hardware to collect trace data from your multicore system
  • Generic libraries of RapiDaemons
  • Services to port RapiDaemons to your multicore system
  • Services to perform multicore timing analysis

Yes: our approach can be used to get an in-depth understanding of how sensitive software can be to other software. For example:

  • Task 1 executes acceptably in isolation and with most other tasks, but if it executes simultaneously with Task 127, its function X takes 10 times as long to return.
  • This intelligence can feed into system integration activities to ensure that function X can never execute at the same time as Task 127.

The information from this type of analysis can also provide insights into potential improvements to the implementation of the two tasks. Sensitive tasks are not always the guilty party: other tasks can be overly aggressive and cause delays in the rest of the system.

For safety reasons, WCET will always be somewhat pessimistic. However, techniques that work well for single-core systems risk generating a WCET that is unreasonably large when applied to multicore systems, because the effects of contention can become disproportionate. The objective, therefore, is to calculate a value that is plausible and useful, without being optimistic. Optimism in relation to WCET is inherently unsafe.

It is not enough to identify how sensitive an application’s tasks are to different types and levels of interference; it is also necessary to understand what degree of interference a task may suffer in reality. It is possible to lessen the pessimism in WCET analysis by viewing the processor under observation through this paradigm.

The degree to which we can reduce pessimism is dependent on how effectively we can analyze the system. Factors influencing this include:

  • The overhead of the tracing mechanism (which affects depth of instrumentation)
  • The availability and reliability of performance counters
  • The availability of information regarding other tasks executing on the system
  • The quality of tests that exercise the code

Cache partitioning is all about predictability, not performance. Your code might execute faster on average without cache partitioning, but it’s probably not as predictable and can be quite sensitive to whatever executes in parallel.

Cache partitioning aims to remove all the sensitivity to other tasks sharing the caches, thus making your task more predictable – but potentially at the expense of overall performance. In critical systems, predictability is of far greater importance than performance.

Rapita’s Multicore Timing Solution can be used to exercise cache partitioning mechanisms by analyzing any shared – and usually undocumented – structures internal to the caches.

To analyze how a specific task is affected by contention on a specific resource, we need to be able to synchronize the execution of the task with the execution of RapiDaemons (the applications that generate contention on the resource).

Usually it is highly desirable to have RTOS/HV support for enabling user-level access to performance counters. Additionally, context switch information is very valuable when performing timing analysis.

Yes. Our solution makes it easy to specify the core on which you run your tests, and the level of resource contention to apply from each other core in the system.

We can also analyze systems that use non-synchronized clocks such as those often present in AMP platforms, by using the RTBx to timestamp data.

The maximum number of metrics we can collect depends on the performance monitoring unit(s) (or equivalent) on the hardware. An Arm Cortex-A53, for example, lets us collect at least 30 metrics, but only access 6 in a single test. By running tests multiple times, however, we can collect all 30 metrics.
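For illustration, scheduling 30 events of interest across test runs with 6 programmable counters each needs ceil(30 / 6) = 5 runs (the event names here are hypothetical, not real PMU event identifiers):

```python
import math

# Illustrative scheduling of hardware events across test runs, assuming
# (as in the example above) 30 events of interest but only 6 programmable
# counters available per run. The event names are hypothetical.
events = [f"event_{i}" for i in range(30)]
counters_per_run = 6

runs = [events[i:i + counters_per_run]
        for i in range(0, len(events), counters_per_run)]

assert len(runs) == math.ceil(len(events) / counters_per_run)   # 5 runs
assert sum(len(r) for r in runs) == 30    # every event measured exactly once
```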

Developing a one-button tool solution for multicore timing analysis would be impossible. This is because interference, which can have a huge impact on a task’s execution time, must be taken into account when analyzing multicore timing behavior.

Analyzing interference effects is a difficult challenge that cannot be automatically solved through a software-only solution. Using approaches developed for timing analysis of single-core systems would result in a high level of pessimism, as it would assume that the highest level of interference possible is feasible, while this is almost never the case.

It is possible to collect a range of metrics by instrumenting your source code with the Rapita Verification Suite (RVS), including a range of execution time metrics:

  • RapiTime: high-water mark and maximum execution times
  • RapiTask: scheduling metrics such as periodicity, separation, fragmentation and core migration

It is also possible to collect information on events in your hardware using performance counters. The information we can collect depends on the performance monitoring unit(s) (or equivalent) of your system, but typically includes events such as L2 cache accesses, bus accesses, memory accesses and instructions executed. We can also collect information about operating system activity such as task switching and interrupt handling via event tracing or hooks.

Yes. We develop and test RapiDaemons against appropriate requirements, e.g. RapiDaemon M should access resource R when run N times.

Yes, we formally test and assess the accuracy of performance counters to ensure the validity of results we collect for the software under analysis.