All RVS tools are designed to work alongside your existing development environment. You can configure them to generate verification metrics and run tests using custom scripts, and integrate them into your existing version control and continuous build systems.
RVS tools are designed to handle very large code bases. Because of the efficient algorithms used by RVS tools, there is no fundamental limitation to the number of lines of code that RVS can process, and our RVS tools have been used on projects with millions of lines of code.
RVS tools can be integrated to work with almost any embedded target. Our engineers can work with you to determine the optimal strategy for integrating the tool with your target, even for multi-core architectures. For more information on the hardware architectures we have integrated RVS tools with, see our support specification.
All RVS tools support numerous data collection strategies, which are optimized to minimize instrumentation overhead. You can collect data using debuggers, logic analyzers, directly from the address bus, or using our datalogger, the RTBx. We work with you to determine the optimal data collection strategy, ensuring you achieve results with minimal effort.
The instrumentation overheads for RVS tools are the lowest in the market, and the tools can support zero instrumentation overhead in some cases. Because of this, RVS can generate verification data from your source code in fewer builds than is possible with other tools. For example, in a recent customer evaluation, RapiCover instrumentation overhead was 4 to 5 times lower than competitor tools, leading to a 92% reduction in the number of builds necessary to test the code base. If necessary, you can configure RVS tools to instrument your code across multiple builds and combine the collected data into a single report.
All RVS tools include a user-friendly interface that presents your data in both tabular and graphical formats. You can filter your results to zoom in on specific functions, making it easy to find the information you are looking for.
We provide an extensive set of RVS documentation with each of our products, and offer training courses guiding you through the most effective use of RVS tools. All our users can benefit from privileged access to our website, which includes downloads for new product releases.
All RVS licenses include access to our dedicated in-house support team, who will work with you to provide a rapid fix to your issue. This is a critical part of our vision. During 2016, we responded to 97% of new support requests within one working day, closed 56% of these within 3 working days and 91% within 20 working days. We also inform our customers of known issues via our website and email.
We have been providing quality products and customer service to companies primarily in the aerospace and automotive embedded industries since 2004, as demonstrated by our case studies. As our products are designed to meet the stringent requirements for DO-178B/C certification in the aerospace industry, they are well-suited to any safety/mission-critical application such as those in the nuclear, medical, industrial and rail industries.
All our RVS tools are designed to meet the most stringent needs of certification processes, such as the DO-178B/C process used in the aerospace industry and the ISO 26262 process used in the automotive industry. We can provide developer qualification documents, a template integration qualification report and on-site tests to support you in qualifying RVS tools in projects requiring certification.
We offer both “Node-locked” and “Floating” licenses, and a license server to support use of our tools in your specific development environment. Floating licenses follow the “Enterprise” model, meaning you can use our tools across geographical boundaries and between different projects and users.
RVS tools can be used on Windows XP or newer, Windows Server 2003 or newer, and a variety of Linux distributions including Ubuntu and Red Hat.
RapiTime can be integrated to work with almost any compiler and target hardware. Our integration service delivers a robust integration of RapiTime into your build system.
When you run your code on-target, or on-host using a simulator, RapiTime collects several timing metrics. These include worst-case execution time, high and low water mark paths, and minimum, average and maximum execution times for each test of your code.
The worst-case execution time reported by RapiTime is pessimistic, meaning the actual worst-case execution time of your code cannot be higher than the reported value. This satisfies the stringent requirements of embedded system certification.
RapiTime includes built-in functionality to let you determine the execution time overhead of adding instrumentation to your source code. This overhead can then be automatically subtracted from the timing metrics reported by the tool.
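Conceptually, the subtraction works like the sketch below. The function name, data layout and per-ipoint cost model here are illustrative assumptions, not RapiTime's actual API or internal model:

```python
# Illustrative sketch of instrumentation-overhead subtraction.
# The per-ipoint cost model is an assumption for illustration;
# RapiTime's internal overhead model may differ.

def corrected_time(measured_ns, ipoints_executed, overhead_per_ipoint_ns):
    """Subtract the measured cost of instrumentation from a raw timing."""
    return measured_ns - ipoints_executed * overhead_per_ipoint_ns

# Example: a test measured at 10,500 ns that executed 50 ipoints,
# each costing ~10 ns, has a corrected execution time of 10,000 ns.
```

The key point is that the overhead is measured, not estimated, so the correction reflects your actual target and compiler.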
As with all RVS tools, RapiTime supports collection of timing data on multi-core architectures. RapiTime achieves this by suspending data collection during a task interrupt and reinitializing collection when the task on the current core resumes execution. You can also use RapiTime to identify the interference produced by other cores on your system under test.
The instrumentation overheads for RapiTime tools are very low, and zero-instrumentation overhead can even be supported in some cases, depending on your target hardware and data collection strategy. This means that you can use RapiTime with virtually any target.
You can use timing metrics to optimize your code for timing behavior. RapiTime highlights the costliest functions in terms of execution time, guiding you towards the best optimization candidates. By running RapiTime again after you optimize your code, you can determine performance improvements.
RapiTime supports C, C++ and Ada projects, including mixed-language ones.
RapiTime can be used on Windows XP or newer, Windows Server 2003 or newer, and a variety of Linux distributions including Ubuntu and Red Hat.
RapiCover works by injecting instrumentation code into your source code and building it with your native build system, so coverage results are collected during program execution. Data can be collected from almost any target hardware using a variety of approaches.
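The idea behind coverage instrumentation can be illustrated with a small sketch. The probe mechanism and point IDs below are generic illustrations, not RapiCover's implementation (RapiCover instruments C, C++ and Ada source, not Python):

```python
# Illustrative sketch of branch-coverage instrumentation.
# The "cover" probe and the point IDs are assumptions for illustration.

executed = set()

def cover(point_id):
    """Probe injected by the instrumenter; records that a point ran."""
    executed.add(point_id)

def classify(x):
    # Instrumented version of: return "neg" if x < 0 else "non-neg"
    if x < 0:
        cover("classify:then")
        return "neg"
    else:
        cover("classify:else")
        return "non-neg"

classify(-1)
# After this one run, only the "then" branch has been recorded; a
# coverage report would flag "classify:else" as uncovered until a
# test exercises it.
```

Collecting the set of executed points during testing, then comparing it against the full set of instrumented points, is what yields the coverage report.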
For more information on how RapiCover works, see here.
You can measure the most common coverage criteria required to support DO-178B/ED-12B, DO-178C/ED-12C and ISO 26262 certification using RapiCover. This includes function, call statement, branch, decision and condition coverage, and MC/DC.
By default, RapiCover supports 30 conditions per decision. We can work with you to develop custom configurations, allowing the tool to support a larger number of conditions, as necessary.
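As a concrete illustration of MC/DC, a decision with three conditions needs only four test vectors in a minimal MC/DC test set (one more than the number of conditions). The following is a generic textbook example, not RapiCover output:

```python
# Worked MC/DC example for the decision (a and b) or c.
# MC/DC requires, for each condition, a pair of tests in which only
# that condition changes and the decision outcome changes with it.

def decision(a, b, c):
    return (a and b) or c

tests = [
    (True,  True,  False),  # outcome True
    (False, True,  False),  # pairs with test 1: only a changed, outcome flips
    (True,  False, False),  # pairs with test 1: only b changed, outcome flips
    (True,  False, True),   # pairs with test 3: only c changed, outcome flips
]

outcomes = [decision(*t) for t in tests]
# outcomes == [True, False, False, True]
```

Four vectors demonstrate the independent effect of all three conditions, which is why MC/DC achieves rigorous decision testing without the exponential cost of exhaustive condition combinations.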
Using our configurable instrumentation mechanism, you can instruct RapiCover to determine coverage for specific subprograms as necessary. After running your tests, it is easy to merge coverage data from these runs into a combined report.
RapiCover includes a powerful “justification” mechanism that lets you mark uncovered code as covered for reporting purposes. Using this feature, you can record a rationale for each justification and create templates to justify code more easily. When your code changes, justifications are automatically migrated to the new location of your justified code.
For more information on using justifications in RapiCover, see our white paper.
RapiCover retains information about the revision of your code it used to generate results. The tool will report an error if you try to merge coverage from incompatible revisions. RapiCover includes an Optimal Dataset Calculator feature you can use to calculate the least expensive tests you need to run again when your code changes, saving you valuable testing effort.
RapiCover can be configured to collect data in real-time while your software runs. Because data is written to an external device, it remains in place while your system reboots, and collection can be reinitialized when the system restarts. This means you can collect coverage data across a shutdown or reset sequence, subject to your target hardware architecture.
RapiCover supports C, C++ and Ada projects, including mixed-language ones.
RapiTest can be integrated to work with almost any compiler and target hardware. Our integration service delivers a robust integration of RapiTest into your build system.
You can use RapiTest to generate code for testing both high- and low-level requirements. It supports generating tests at the various levels recognized in the industry, such as unit, module, integration and system tests.
RapiTest supports tests written in spreadsheet formats and as scripts, and is customizable to support existing or alternative test formats. You can use our spreadsheet and scripting formats to write your tests, and if necessary, we can develop a converter for RapiTest to interpret tests written in your existing format.
The test generation algorithms RapiTest uses are powerful and support generation of all types of test double used in the industry, including stubs, mocks, fakes, spies and dummies.
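The differences between these kinds of test double can be sketched briefly. The examples below are generic Python illustrations of the standard terminology, not RapiTest output (RapiTest generates doubles for C, C++ and Ada code):

```python
# Generic illustrations of three common kinds of test double.

def stub_read_sensor():
    """Stub: returns a canned value so the unit under test can run."""
    return 42

class SpySendMessage:
    """Spy: records how it was called so the test can inspect it later."""
    def __init__(self):
        self.calls = []
    def __call__(self, msg):
        self.calls.append(msg)

def dummy_logger(_msg):
    """Dummy: fills a parameter slot but is never meaningfully used."""
    pass

# A hypothetical unit under test, wired up with doubles instead of
# its real dependencies:
def process(read_sensor, send_message, log):
    value = read_sensor()
    if value > 40:
        send_message("over-threshold")
    return value

spy = SpySendMessage()
result = process(stub_read_sensor, spy, dummy_logger)
# result == 42 and spy.calls == ["over-threshold"]
```

Mocks and fakes follow the same pattern: a mock additionally verifies expected interactions itself, while a fake provides a simplified working implementation (such as an in-memory store).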
You can include as many tests as you want to in a single build on your target, subject to the resources available on your target and your data collection strategy. As with all our RVS tools, RapiTest has a very low overhead, meaning you can complete your test cycle in fewer builds than using other tools.
RapiTest test data is displayed in a user-friendly interface. When you run a test, results are displayed for just the subprograms you tested, letting you quickly check your results against requirements.
RapiTest can be used on Windows XP or newer, Windows Server 2003 or newer, and a variety of Linux distributions including Ubuntu and Red Hat.
We supply standard cables to connect the RTBx to LVDS or TTL I/O ports. If these ports are unavailable, you may need to install adapters on your embedded board. Rapita Systems provides support on the best way to connect the RTBx to your target.
You can connect the RTBx to an address bus that runs at up to 250 MHz. To do this, you must reserve a range of addresses for ipoints (instrumentation points), with one bit reserved to indicate that the value on the address bus is an ipoint. The ipoint instrumentation writes a value to a specific address in that region to denote a specific ipoint. This approach reduces the maximum trace duration of the RTBx.
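For example, the encoding might look like the following sketch. The base address, flag bit and field layout here are illustrative assumptions; the actual reserved range and bit assignment are chosen per project:

```python
# Illustrative encoding of an ipoint write on the address bus.
# IPOINT_BASE and IPOINT_FLAG are assumptions for illustration only.

IPOINT_BASE = 0x4000_0000   # start of the reserved address range
IPOINT_FLAG = 0x0000_8000   # bit that marks a bus value as an ipoint

def ipoint_address(ipoint_id):
    """Address the instrumentation writes to for a given ipoint ID."""
    return IPOINT_BASE | IPOINT_FLAG | ipoint_id

# The datalogger, watching the bus, recognises the flag bit and
# records the ipoint ID together with a timestamp.
```

A write to `ipoint_address(n)` is then all the instrumentation needs to emit, which is why this strategy keeps the on-target overhead so small.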
The RTBx automatically timestamps data it collects, using either an internal clock or a clock on the embedded hardware. This removes the need to configure a timestamping procedure on the embedded target itself, which would incur code size and execution time overheads.
Compared to other hardware that can be used for trace collection, such as debuggers and logic analyzers, the RTBx can collect trace data for far longer, meaning that it doesn't become a bottleneck in your testing.
This is the maximum tracing rate that can be sustained over time, calculated from the number of ipoints the RTBx can process per second. The RTBx can support a higher tracing rate for short periods of time, provided that the minimum separation between instrumentation points is met.
This depends on the number of CPU cycles between successive ipoints and the rate at which ipoints are written. For example, the RTBx 2220 can collect trace data via an I/O port with a minimum separation of 4 ns (250 MHz). This model can therefore support a 1 GHz CPU that outputs trace data once every 4 cycles.
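The arithmetic in the example above can be checked directly. The 4 ns figure for the RTBx 2220 is taken from the example; other models may differ:

```python
# Minimum ipoint separation check for the RTBx 2220 example.

RTBX_MIN_SEPARATION_NS = 4   # 250 MHz I/O port -> 1 / 250e6 s = 4 ns

def separation_ns(cpu_hz, cycles_between_ipoints):
    """Time in ns between successive ipoints on a CPU that emits one
    ipoint every `cycles_between_ipoints` cycles."""
    return cycles_between_ipoints * 1e9 / cpu_hz

# A 1 GHz CPU emitting an ipoint every 4 cycles produces one every
# 4 ns, which exactly meets the RTBx 2220's minimum separation.
ok = separation_ns(1e9, 4) >= RTBX_MIN_SEPARATION_NS
```

If your code could emit ipoints more closely spaced than this, the separation constraint, rather than the sustained tracing rate, becomes the limiting factor.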