Solving the challenges of multicore certification

We help you:

  •  Produce evidence to support DO-178C (CAST-32A) certification
  •  Characterize and quantify multicore interference
  •  Evaluate & select multicore hardware and RTOS
  •  Reduce analysis effort through automation

Produce evidence to support DO-178C (CAST-32A) certification

Our CAST-32A Compliance packages provide an end-to-end solution for producing certification evidence to satisfy DO-178C and CAST-32A objectives.

We provide a comprehensive range of tools, services and training that lets you address CAST-32A objectives including analyzing your software’s timing behavior in the context of interference caused by contention on shared hardware resources.


Characterize and quantify multicore interference

Using a combination of our expert services, RapiDaemons (small applications designed to generate configurable load on specific shared resources), and RVS tools to automate the collection of timing evidence on-target, our CAST-32A Compliance packages help you identify sources of multicore interference and quantify their effects on your application.

Following our robust process, we identify the potential interference channels on your multicore platform, produce on-target tests that analyze the impact of interference on software timing behavior, and run these on your system through an automated framework. Our expert team then analyzes the results to produce the evidence you need to demonstrate that your code will meet its timing deadlines.

Evaluate and select multicore hardware and RTOS

Different multicore hardware and RTOS environments may have vastly different effects on the timing behavior of applications.

Our CAST-32A Compliance package helps you evaluate different environments to identify the best one for you, so you can mitigate the impact of interference channels in the system and ensure that your platform has what’s needed to support your verification all the way through to certification.


Reduce analysis effort through automation

Our CAST-32A Compliance packages create an environment where multicore timing behavior can be analyzed through an automated tool framework. Within this environment, input tests for multicore timing behavior (which use RapiDaemons to create contention on specific hardware resources) are converted into test harnesses, and these are run on the target to produce results that can be analyzed.

This automated environment significantly reduces the effort needed to run tests. Typically, we use this framework to analyze multicore timing behavior ourselves, but if you want to do the analysis yourself, we can help you by offering in-depth training.

Tool automation

We use our mature software verification toolsuite – the Rapita Verification Suite – to apply tests to multicore hardware (RapiTest) and collect timing metrics (RapiTime) and other metrics such as scheduling metrics (RapiTask) from them. Using these tools, we automate various stages of the multicore timing analysis process.


To analyze the timing behavior of a specific multicore system, the Rapita Systems multicore timing analysis solution uses the following software, hardware and service components:

  • Rapita Verification Suite (RVS), a collection of embedded software verification tools that is widely used in the critical aerospace industry.
  • RapiDaemons, a collection of specialized programs to generate contention on shared hardware resources.
  • RTBx, a high-rate datalogger used to collect and timestamp execution information from embedded hardware.
  • Integration of hardware and software into the multicore development environment under analysis.

Tests & RapiDaemons

We provide a set of carefully designed tests that provide evidence of interference channels in your multicore processor. We have standard libraries of tests for a range of multicore processors.

RapiDaemons are specialized applications designed to generate targeted contention on specific hardware resources such as buses, caches and GPUs. By generating contention on shared resources during multicore tests, RapiDaemons support the analysis of multicore timing behavior.

Each RapiDaemon applies contention to a specific hardware resource on a specific hardware architecture, either matching a desired level of contention or maximizing contention on the resource.
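The principle of generating a configurable level of contention on a shared resource can be illustrated with a minimal sketch in C. This is not Rapita's RapiDaemon implementation; the buffer size, stride, and duty-cycle throttle below are illustrative assumptions. The idea is simply a small program that keeps a shared resource (here, the memory path) busy to a configurable degree while the software under test runs on another core.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>

/* 4 MiB: chosen (as an assumption) to exceed typical L2 capacity so
 * that the walk below misses in cache and generates DRAM traffic. */
#define BUF_WORDS (4 * 1024 * 1024 / sizeof(uint64_t))

/* Walk a large buffer, touching one word per cache line, to generate
 * traffic on the shared memory path. duty_pct throttles the daemon
 * between 0 (idle) and 100 (maximum contention). */
uint64_t contend_memory(uint64_t *buf, unsigned long iterations, unsigned duty_pct)
{
    uint64_t sink = 0;
    const size_t stride = 64 / sizeof(uint64_t); /* one access per 64-byte line */
    for (unsigned long i = 0; i < iterations; i++) {
        if (i % 100 < duty_pct) {                /* crude duty cycle */
            for (size_t j = 0; j < BUF_WORDS; j += stride)
                sink += buf[j];                  /* cache-missing load */
        }
    }
    return sink; /* returned so the loads are not optimized away */
}
```

A real contention generator must be validated per platform (as the page notes, RapiDaemons are tested on each target), since cache geometry, prefetchers and memory controllers all affect how much interference a given access pattern actually produces.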

A complete solution for CAST-32A

Our CAST-32A Compliance Package is an end-to-end solution for meeting DO-178C guidelines (including CAST-32A objectives) for multicore projects. The package is a combination of mature products and expert services (for example, hardware characterization) that can be delivered alone or as part of a complete solution.

Multicore timing analysis using a V-model approach

By following a V-model process, our engineers investigate multicore systems and produce evidence about their timing behavior.

Our industry-leading tooling, including our unique RapiDaemon technology (which generates interference during tests), reduces analysis effort through automation.

Our approach has been specifically designed to support multicore aerospace projects following DO-178C and CAST-32A guidance.


Addressing CAST-32A Objectives

Our CAST-32A Compliance packages are designed to support CAST-32A compliance, addressing the following objectives:

  • MCP_Planning_1
  • MCP_Planning_2
  • MCP_Resource_Usage_1
  • MCP_Resource_Usage_2
  • MCP_Resource_Usage_3

 

Later in your CAST-32A project, we help you produce the evidence needed to implement your plans:

  • MCP_Resource_Usage_4
  • MCP_Software_1
  • MCP_Accomplishment_Summary
  • MCP_Error_Handling_1
  • MCP_Software_2

 


Expertise & Analysis

Timing analysis of single-core systems can be entirely automated by using software tools such as RapiTime, which analyzes the worst-case execution time (WCET) of tasks running on the system.

This isn’t the case for multicore systems, for which we must consider the effects of interference caused by resource contention on software execution times. Interference effects are complex, interlinked, and involve components specific to both the multicore architecture and the scheduling and resource allocation systems in the software.

This means that, to properly perform the analysis, we need to apply the expertise of engineers who know the system in detail. While this expertise can be used to direct the use of software tools (for example specifying levels of contention to apply to specific resources), no automated timing analysis tool will be able to understand a multicore system in enough depth to perform the analysis alone.

Evidence & qualification

Our CAST-32A Compliance package produces evidence to satisfy all of the CAST-32A objectives.

All components of our Multicore Timing Solution are designed for compliance with DO-178C and CAST-32A guidance:

  • Our RVS automation tools are classified as Tool Qualification Level 5 (TQL-5) tools as per DO-178C. Qualification support is available for RapiTest and RapiTime, which have been qualified in multiple DAL A aerospace projects.
  • The performance and behavior of our RapiDaemons are validated through extensive testing, and we provide evidence of this testing on delivery. Timing tests are performed using RapiTime, a qualified tool recognized by the FAA as “an example of a mature tool in this aspect”. As RapiDaemons are not considered to be tools as per DO-178C, they do not need to be qualified.

Incremental assurance

Using the CAST-32A Compliance Solution, assurance evidence can be developed incrementally and independently for the multicore platform and each hosted application, supporting the development of Integrated Modular Avionics.

The solution is designed to meet use cases for each of the avionics roles identified in DO-297/ED-124, whether you're a Certification Applicant, System Integrator, Platform or Application Supplier. The solution supports the needs of Certification Applicants and System Integrators by defining a consistent strategy for generating certification evidence across all platforms and applications.


How we support CAST-32A compliance

| Objective | Description | Customer role | RTOS/HW role | Rapita role |
| --- | --- | --- | --- | --- |
| MCP_Planning_1 | System description | Document in PSAC/PHAC | Early architecture evaluation | Early platform evaluation |
| MCP_Planning_2 | List of MCP shared resources, active HW dynamic features | Document in PSAC/PHAC; how to verify in SVP | RTOS + HW information | HW characterization |
| MCP_Resource_Usage_1 | Configuration settings | Incorporation of recommendations in PSAC; add HLR | Recommendations of mitigation strategies | Analysis and recommendations |
| MCP_Resource_Usage_2 | Mitigations for inadvertently altered CCS | Document in PSAC/PHAC; verify and analyze | N/A | Architecture analysis, review, test |
| MCP_Resource_Usage_3 | List of interference channels and verification methods | Review results; incorporate in PSAC; identify in HLRs; V&V methods in SVP | RTOS + HW information | HW characterization |
| MCP_Resource_Usage_4 | Verification that, in the worst-case scenario, the software's resource demands do not exceed those available | Review results; incorporate in PSAC; identify in HLRs; V&V methods in SVP | RTOS information; HW characterization | Analysis and methods; verify and analyze |
| MCP_Software_1 | WCET analysis of all SW components | Support in running tests; review results | RTOS information | WCET analysis and results: we provide evidence on the execution time behavior of your code that takes multicore interference into account |
| MCP_Software_2 | Data Coupling/Control Coupling analysis by RBT | Customer to define and perform | N/A | Tools & services |
| MCP_Error_Handling_1 | SafetyNet | Customer to define and perform | Customer or RTOS | Review, test |
| MCP_Accomplishment_Summary | Showing compliance | Incorporate results in SAS | Support | We provide multicore timing evidence that you can easily include in your SAS, including traceability information and a summary of test plans, implementation and results |

Videos

Enabling cost-effective modular avionics with FACE
00:07:11 | Explainer
 
CAST-32A Compliance for DO-178C projects
00:02:15 | Overview
 
MASTECS Project
00:02:18 | Research project
 
Ask the expert: Multicore safety
00:11:19 | Explainer
 

Downloads

  Webinar
Incremental Assurance of Multicore Integrated Modular Avionics (IMA)
  Webinar
Certifying multicore systems for DO-178C (CAST-32A) projects
  Webinar
Out of the box Solution for Multicore Analysis - Rapita & DDC-I
  Webinar
Verifying Multicore RTOS Partitioning for DO-178C (CAST-32A) Projects
  Webinar
Multicore Timing for DO-178 Projects Webinar
 
Multicore Timing Analysis for DO-178C



Frequently asked questions

  • General
  • Multicore timing analysis
  • Interference channel analysis
  • Qualification and certification
  • Compatibility
  • Licensing and support
  • What is the CAST-32A Compliance Solution? 
  • What is multicore timing analysis? 
  • What is Rapita's approach to multicore timing analysis?  
  • What components are involved in the CAST-32A Compliance Solution? 
  • Can your solution help me with certification aspects of my multicore project?  
  • Can the CAST-32A Compliance Solution be used to generate assurance evidence incrementally? 
  • Have you completed an FAA or EASA certification using your CAST-32A Compliance packages? 
  • Do you provide detailed process descriptions to be incorporated into traditional DO-178C planning documents such as the PSAC and SVP? 
  • Why should I use Rapita's solution? 
  • What is an interference channel?  
  • Is there a standard list of interference channels that I should test? 
  • How long does it take to comprehensively analyze the interference channels present in multicore hardware?  
  • Which hardware architectures can you analyze? 
  • Have you performed hardware analysis for my multicore platform? 
  • Do you support the analysis of GPU-based architectures for multicore timing behavior? 
  • How do I know when my testing is complete? How much is enough to satisfy CAST-32A? 
  • Why can I trust Rapita's Multicore worst-case execution time statistics?  
  • If my RTOS vendor says they provide robust partitioning, why do I need Rapita?  
  • Why can't I do my own multicore timing analysis and certification?  
  • Does CAST-32A guidance apply to platforms with multiple different processors? 
  • Can I take advantage of individual components of the CAST-32A Compliance Solution?  
  • Can you help me optimize the configuration of my multicore system?  
  • How do you ensure that worst-case execution time metrics are not excessively pessimistic? 
  • Is cache partitioning helpful or harmful to multicore timing performance? 
  • Can you quantify our cache partitioning to maximize our performance?  
  • Are there any constraints on the application scheduling, supervisor, or hypervisor?  
  • Can you analyze systems using asymmetric multiprocessing?  
  • How many performance counters can you collect per test?  
  • Why don't you have a tool that automates multicore timing analysis? 
  • Which metrics can you collect from my multicore platform?  
  • Can statistical modeling approaches be used to provide support for multicore timing measurements? 
  • How does RVS support multicore timing analysis?  
  • How do RapiDaemons support multicore timing analysis? 
  • Do you test the validity of performance counters?  
  • What is multicore timing analysis?

    When developing safety-critical applications to DO-178C (CAST-32A) guidelines or ISO 26262 standards, there are special requirements for using multicore processors. Evidence must be produced to demonstrate that software operates within its timing deadlines.

    The goal of multicore timing analysis is to produce execution time evidence for these complex systems. In multicore processors, multiple cores compete for the same shared resources, resulting in potential interference channels that can affect execution time. Rapita's Multicore Timing Solution and CAST-32A Compliance Solution account for interference to produce robust execution time evidence in multicore systems.

  • What is an interference channel?

    In the CAST-32A position paper published by the FAA, an interference channel is defined as "a platform property that may cause interference between independent applications". This definition covers a range of ‘platform properties’, including thermal factors.

    Of these interference channels, interference caused by the sharing of certain resources in multicore systems is one of the most significant in terms of execution times. Interference based on shared resources may occur in multicore systems when multiple cores simultaneously compete for use of shared resources such as buses, caches and main memory.

    Rapita’s solutions for multicore timing analysis analyze the effects of this type of interference channel.

    A very simple example of a shared resource interference channel is shown below:

    Interference channels

    In this simplified example, tasks running independently on the two cores may need to access main memory simultaneously via the memory controller. These accesses can interfere with each other, potentially degrading system performance.

  • Why can I trust Rapita's Multicore worst-case execution time statistics?

    Rapita has been providing execution time analysis services and tooling since 2004.

    RapiTime, part of the Rapita Verification Suite (RVS), is the timing analysis component of our Multicore Timing Solution. Our customers have qualified RapiTime on several DO-178C DAL A projects, where it has been successfully used to generate certification evidence by some of the most well-known aerospace companies in the world. See our Case Studies.

    Learn more about our tool qualification support for RapiTime in projects requiring DO-178B/C certification.

    As well as providing a mature tool chain, we support the customer in ensuring that their test data is good enough, so that the timing information they generate from the target is reliable.

    Our RapiDaemons are configured and tested (see the FAQ: ‘configuring and porting’) to ensure that they behave as expected on each specific customer platform.

    We also assess available observability channels as part of a processor analysis. This primarily applies to the use of performance counters, where we assess their accuracy and usefulness for obtaining meaningful insights into the system under observation.

  • Why can't I do my own multicore timing analysis and certification?

    It is possible for companies to perform multicore timing analysis internally, but it is a highly complex undertaking which is very costly in terms of budget and effort. Anecdotally, one of our customers reported that it took them five years and a budget in the millions of dollars to analyze one specific platform.

    Our Multicore Timing Solution and CAST-32A Compliance Solution are typically delivered as a turn-key solution, from initial system analysis and configuration all the way through to providing evidence for certification.

    Some customers prefer to outsource only parts of the process to Rapita. For example, it is possible for a customer to purchase RapiDaemons under license and use them to gather and analyze their own data.

  • If my RTOS vendor says they provide robust partitioning, why do I need Rapita?

    RTOS vendors may provide partitioning mechanisms for their multicore processors, but these do not guarantee the complete elimination of multicore interference. Instead, they are designed to provide an upper limit on interference, sometimes at the expense of average-case performance.

    In aerospace, these partitioning mechanisms may be referred to as ‘robust partitioning’. CAST-32A (the FAA’s position paper on multicore processors in avionics) identifies allowances for some of the objectives if you have robust partitioning in place, but it is still necessary to verify that the partitioning is as robust as it is claimed to be.

    From a certification standpoint, regardless of the methodology behind the RTOS vendor’s approach to eliminating interference, the effectiveness of the technology needs to be verified.

  • Can you help me optimize the configuration of my multicore system?

    Yes: our approach can be used to get an in-depth understanding of how sensitive software can be to other software. For example:

    • Task 1 executes acceptably in isolation and with most other tasks, but if it executes simultaneously with Task 127, its function X takes 10 times as long to return.
    • This intelligence can feed into system integration activities to ensure that function X can never execute at the same time as Task 127.

    The information from this type of analysis can also provide insights into potential improvements to the implementation of the two tasks. Sensitive tasks are not always the guilty party: other tasks can be overly aggressive and cause delays in the rest of the system.

  • How do you ensure that worst-case execution time metrics are not excessively pessimistic?

    For safety reasons, WCET will always be somewhat pessimistic. However, techniques that work well for single-core systems risk generating a WCET that is unreasonably large when applied to multicore systems, because the effects of contention can become disproportionate. The objective, therefore, is to calculate a value that is plausible and useful, without being optimistic. Optimism in relation to WCET is inherently unsafe.

    It is not enough to identify how sensitive an application’s tasks are to different types and levels of interference; it is also necessary to understand what degree of interference a task may suffer in reality. It is possible to lessen the pessimism in WCET analysis by viewing the processor under observation through this paradigm.

    The degree to which we can reduce pessimism is dependent on how effectively we can analyze the system. Factors influencing this include:

    • The overhead of the tracing mechanism (which affects depth of instrumentation)
    • The availability and reliability of performance counters
    • The availability of information regarding other tasks executing on the system
    • The quality of tests that exercise the code
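The simplest building block of this kind of measurement-based analysis, tracking the high-water mark of observed execution times across test runs, can be sketched as follows. The structure and names are illustrative, not RapiTime's implementation:

```c
#include <stdint.h>

/* Accumulates execution time observations for one task across many
 * test runs. The high-water mark is the longest time ever observed;
 * measurement-based timing analysis combines such observations with
 * further analysis to argue about worst-case behavior. */
typedef struct {
    uint64_t high_water_ns; /* longest observation so far */
    uint64_t total_ns;      /* running total, for the average */
    uint64_t samples;       /* number of observations */
} timing_stats;

void timing_record(timing_stats *s, uint64_t elapsed_ns)
{
    if (elapsed_ns > s->high_water_ns)
        s->high_water_ns = elapsed_ns;
    s->total_ns += elapsed_ns;
    s->samples++;
}

uint64_t timing_average(const timing_stats *s)
{
    return s->samples ? s->total_ns / s->samples : 0;
}
```

Comparing the high-water mark against the average gives a rough first indication of how variable (and hence how sensitive to interference) a task's timing is.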
  • Can you quantify our cache partitioning to maximize our performance?

    Cache partitioning is all about predictability, not performance. Your code may execute faster on average without cache partitioning, but it probably wouldn't be as predictable and could be quite sensitive to whatever executes in parallel.

    Cache partitioning aims to remove all the sensitivity to other tasks sharing the caches, thus making your task more predictable – but potentially at the expense of overall performance. In critical systems, predictability is of far greater importance than performance.

    Rapita’s solution for multicore timing analysis can be used to exercise cache partitioning mechanisms by analyzing any shared – and usually undocumented – structures internal to the caches.

  • Are there any constraints on the application scheduling, supervisor, or hypervisor?

    To analyze how a specific task is affected by contention on a specific resource, we need to be able to synchronize the execution of the task with the execution of RapiDaemons (the applications that generate contention on the resource).

    Usually, it is highly desirable to have RTOS/HV support for enabling user-level access to performance counters. Context switch information is also very valuable when performing multicore timing analysis.

  • Can you analyze systems using asymmetric multiprocessing?

    Yes. Our solution makes it easy to specify the core on which you run your tests, and the level of resource contention to apply from each other core in the system.

    We can also analyze systems that use non-synchronized clocks such as those often present in AMP platforms by using the RTBx to timestamp data.

  • How many performance counters can you collect per test?

    The maximum number of metrics we can collect depends on the performance monitoring unit(s) (or equivalent) on the hardware. An ARM A53, for example, lets us collect at least 30 metrics, but only access 6 in a single test. By running tests multiple times, however, we could collect all 30 metrics.
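The run-splitting arithmetic above can be sketched with a small hypothetical planner that assigns each requested event to one of several identical test runs, six counters at a time (the function name and interface are invented for illustration):

```c
#include <stddef.h>

/* Hardware counters available per test run (six, matching the
 * ARM A53 example above). */
#define CTRS 6

/* Split n_events performance-counter events across identical test
 * runs of at most CTRS events each. Returns the number of runs
 * required; run_of[i] receives the run in which event i is collected. */
size_t plan_counter_runs(size_t n_events, size_t *run_of)
{
    size_t runs = (n_events + CTRS - 1) / CTRS; /* ceiling division */
    for (size_t i = 0; i < n_events; i++)
        run_of[i] = i / CTRS;
    return runs;
}
```

For 30 events this yields five identical runs of six counters each; the underlying assumption, which must hold for the results to be combined, is that each run exercises the same code with the same inputs.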

  • Why don't you have a tool that automates multicore timing analysis?

    Developing a one-button tool solution for multicore timing analysis would be impossible. This is because interference, which can have a huge impact on a task’s execution time, must be taken into account when analyzing multicore timing behavior.

    Analyzing interference effects is a difficult challenge that cannot be automatically solved through a software-only solution. Using approaches developed for timing analysis of single-core systems would result in a high level of pessimism, as it would assume that the highest level of interference possible is feasible, while this is almost never the case.

  • Which metrics can you collect from my multicore platform?

    It is possible to collect a range of metrics by instrumenting your source code with the Rapita Verification Suite (RVS), including a range of execution time metrics:

    • RapiTime: high-water mark and maximum execution times
    • RapiTask: scheduling metrics such as periodicity, separation, fragmentation and core migration

    It is also possible to collect information on events in your hardware using performance counters. The information we can collect depends on the performance monitoring unit(s) (or equivalent) of your system, but typically includes events such as L2 cache accesses, bus accesses, memory accesses and instructions executed. We can also collect information about operating system activity such as task switching and interrupt handling via event tracing or hooks.
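As a rough illustration of how source instrumentation yields such metrics, consider an instrumentation point that logs (event id, timestamp) records to a trace buffer that is later drained to a datalogger or file. The names and layout below are invented for illustration and are not RVS's actual API:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical trace buffer: each instrumentation point appends one
 * (event id, timestamp) record. A real tool keeps the overhead of
 * this write as low as possible, since it perturbs the timing. */
#define TRACE_CAP 4096

typedef struct { uint32_t id; uint64_t t_ns; } trace_rec;

static trace_rec trace_buf[TRACE_CAP];
static size_t trace_len;

static inline void ipoint(uint32_t id, uint64_t t_ns)
{
    if (trace_len < TRACE_CAP) {
        trace_buf[trace_len].id = id;
        trace_buf[trace_len].t_ns = t_ns;
        trace_len++;
    }
}

/* Derive an elapsed time from the first matching entry/exit pair;
 * timing metrics such as high-water marks are built from such pairs. */
uint64_t elapsed_between(uint32_t entry_id, uint32_t exit_id)
{
    uint64_t t_entry = 0;
    for (size_t i = 0; i < trace_len; i++) {
        if (trace_buf[i].id == entry_id)
            t_entry = trace_buf[i].t_ns;
        else if (trace_buf[i].id == exit_id && t_entry)
            return trace_buf[i].t_ns - t_entry;
    }
    return 0; /* no matching pair found */
}
```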

  • Do you test the validity of performance counters?

    Yes, we formally test and assess the accuracy of performance counters to ensure the validity of results we collect for the software under analysis.

  • What is Rapita's approach to multicore timing analysis?

    By following a V-model process, our engineers investigate multicore systems and produce evidence about multicore timing behavior. Our approach has been designed to support projects within the DO-178C (CAST-32A) and ISO 26262 contexts.

    You can see an example workflow of how Rapita approaches multicore timing analysis in our White Paper.

  • Which hardware architectures can you analyze?

    We can analyze almost all hardware architectures. Our engineers work with you to determine the optimal strategy for integrating our RVS tools with your target, including hardware characterization and design considerations to best fit the hardware you're using.

    To work with an architecture that is new to us, we first identify which metrics we can collect from the hardware, then adapt RapiDaemons for the architecture and implement a strategy to collect data from it.

    We've worked with the following boards, CPUs and RTOSs:

    | Board | CPU | RTOS |
    | --- | --- | --- |
    | UltraScale ZCU102 | ARM A53 | Deos |
    | NXP QorIQ T2080 | PowerPC e6500 | VxWorks 653 3.0 |
    | NXP QorIQ T2080 | PowerPC e6500 | PikeOS 5.x |
    | TI Keystone K2L | ARM A15 | PikeOS |
    | NVIDIA Xavier SoC | Carmel (ARMv8 variant by NVIDIA) | QNX |
    | AURIX TC377TX | TriCore | Bare metal |
    | AURIX TC397 | TriCore | AUTOSAR |

    We also have upcoming projects with the following combinations of boards, CPUs and RTOSs:

    | Board | CPU | RTOS |
    | --- | --- | --- |
    | UltraScale ZCU102 | ARM A53 and R5 | Helix / VxWorks 7 |
    | NXP Layerscape LS1048A | ARM A53 | Deos |
  • What is the CAST-32A Compliance Solution?

    Our CAST-32A Compliance packages provide a complete solution for certifying multicore aerospace projects in accordance with DO-178C and CAST-32A guidance.

    It includes a range of products and services to address all CAST-32A objectives.

  • What components are involved in the CAST-32A Compliance Solution?

    Our CAST-32A Compliance Solution comprises three components: a process, tool automation, and services.

    Our multicore timing analysis process is a V-model process that we developed in line with DO-178 and CAST-32A. It follows a requirements-based testing approach that focuses on identifying and quantifying interference channels on multicore platforms. We help address CAST-32A objectives not related to multicore timing through engineering services.

    The tools we have developed let us apply tests to multicore hardware (RapiTest) and collect timing data (RapiTime) and other metrics such as scheduling metrics (RapiTask) from them. We use RapiDaemons to create a configurable degree of traffic on shared hardware resources during tests, so we can analyze the impact of this on the application’s timing behavior.

    Our CAST-32A Compliance services include tool integration, porting RapiDaemons, performing timing analysis, identifying interference channels, and others depending on customer needs.

  • Have you completed an FAA or EASA certification using your CAST-32A Compliance packages?

    No, but we are currently working on projects that will go through certification with the FAA and EASA.

    Rapita is a recognized leader in multicore timing analysis, with multiple professional publications on the subject:

    • Steven H. VanderLeest and Samuel R. Thompson, “Measuring the Impact of Interference Channels on Multicore Avionics,” to appear in the Proceedings of the 39th Digital Avionics Systems Conference (DASC), San Antonio, TX, Oct 2020.
    • VanderLeest, S.H. and Evripidou, C., “An Approach to Verification of Interference Concerns for Multicore Systems (CAST-32A),” best papers of 2020 SAE Aerotech, to appear in SAE International Journal of Advances and Current Practices in Mobility.
    • VanderLeest, S.H. and Evripidou, C., “An Approach to Verification of Interference Concerns for Multicore Systems (CAST-32A),” SAE Technical Paper 2020-01-0016, 2020, doi:10.4271/2020-01-0016.
    • Steven H. VanderLeest, Jesse Millwood, Christopher Guikema, “A Framework for Analyzing Shared Resource Interference in a Multicore System,” Proceedings of the 37th Digital Avionics Systems Conference (DASC), London, Sep 2018.

    Our multicore team technical leads have also served in the following professional organizations:

    • Vice-chair for the Enterprise Architecture (EA-25) subcommittee on airworthiness for the Future Airborne Capability Environment (FACE) consortium, 2020.
    • Chair for the “Multicore Assurance” session of the 39th Digital Avionics Systems Conference, 2020.
    • Chair for the “Integrated Modular Avionics” track of the 37th Digital Avionics Systems Conference, 2018.
  • Is there a standard list of interference channels that I should test?

    There is no standard list that fits all platforms, though some interference channels are more common than others, such as those related to the memory hierarchy (i.e. caches and main memory). Identifying interference channels (a service we provide) is an important activity, as it determines which channels' impact on the system’s timing behavior must be assessed.

  • How long does it take to comprehensively analyze the interference channels present in multicore hardware?

    This depends on the platform, project needs, and whether we have already performed analysis on a similar hardware platform previously. Our solution includes an initial pilot phase in which we study the system and estimate the amount of time needed for subsequent phases. Typical projects run for between 2 and 12 months, depending on the scope of the analysis and complexity of the system.

  • Have you performed hardware analysis for my multicore platform?

    Some of the multicore systems that we’ve worked with are listed in our FAQ “Which hardware architectures can you analyze?”.

    If we have already worked on a similar multicore platform to yours, it may take less time to perform hardware analysis for your platform.

  • Do you support the analysis of GPU-based architectures for multicore timing behavior?

    Yes. We have already run projects analyzing the Nvidia Xavier AGX (CUDA) and we have ongoing projects analyzing AMD’s Embedded Radeon E9171 GPU featuring the CoreAVI Vulkan SC driver.

  • How do I know when my testing is complete? How much is enough to satisfy CAST-32A?

    As per CAST-32A, each interference channel on the system must be analyzed to ensure that its effects on timing behavior have been mitigated. Some channels may be eliminated by architectural analysis or because they are deactivated for your system. For all channels that have not been eliminated, CAST-32A requires that multicore timing analysis be done to ensure that software operates within its timing deadlines. For each channel, one must provide analysis test reports and a summary to show that the set of tests is complete; Rapita provides this service to show completeness. Note that CAST-32A includes objectives other than testing, such as planning objectives. We provide templates to help you complete these.
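    The completeness argument amounts to simple bookkeeping: every interference channel is either eliminated (by architectural analysis or deactivation) or backed by test evidence. A toy sketch in Python illustrates the check; the channel names and record structure here are hypothetical, not Rapita's actual evidence format:

    ```python
    # Hypothetical bookkeeping: map each interference channel to its status.
    channels = {
        "L2 shared cache":  {"eliminated": False, "tests": ["T1", "T2"]},
        "DDR controller":   {"eliminated": False, "tests": ["T3"]},
        "coherency fabric": {"eliminated": True,  "tests": []},  # deactivated on this config
    }

    def untested_channels(channels):
        """Channels that were neither eliminated nor covered by test evidence."""
        return [name for name, c in channels.items()
                if not c["eliminated"] and not c["tests"]]

    # The completeness claim holds only if this list is empty.
    assert untested_channels(channels) == []
    ```

    A real completeness summary would of course also argue that each channel's tests are themselves sufficient, not merely present.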

  • Is cache partitioning helpful or harmful to multicore timing performance?

    This depends on the performance requirements of the platform and the hosted software.

    The primary benefit of cache partitioning is that it protects one core/partition from having its cache contents evicted by another. There are two broad approaches to achieving this:

    • Hardware: The processor has built-in support for partitioning the cache, allocating each core in the system its own area. This is supported on the T2080, for example (see e6500 TRM section 2.12.4).
    • Software: In set-associative caches, the locations in the cache to which each block of memory may be loaded are known. Using techniques like cache colouring, software is placed at specific physical memory addresses so that no two cores/partitions can use the same sections of the cache.
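    The arithmetic behind cache colouring can be sketched in a few lines. The cache geometry below (64-byte lines, 1024 sets, 4 KB pages) is illustrative rather than that of any specific processor; the point is that pages of different colours can never map to the same cache sets:

    ```python
    # Illustrative cache geometry (not a specific SoC)
    LINE = 64     # bytes per cache line
    SETS = 1024   # sets in the shared cache
    PAGE = 4096   # OS page size in bytes

    def cache_set(addr):
        """Set index that a physical address maps to."""
        return (addr // LINE) % SETS

    def color(addr):
        """Page colour: which group of cache sets the containing page maps to."""
        sets_per_page = PAGE // LINE   # 64 sets spanned by one page
        colors = SETS // sets_per_page # 16 distinct page colours
        return (addr // PAGE) % colors

    # Two pages with different colours occupy disjoint cache sets.
    a, b = 0 * PAGE, 1 * PAGE
    assert color(a) != color(b)
    sets_a = {cache_set(a + off) for off in range(0, PAGE, LINE)}
    sets_b = {cache_set(b + off) for off in range(0, PAGE, LINE)}
    assert sets_a.isdisjoint(sets_b)
    ```

    An OS or hypervisor applying this scheme simply restricts each core/partition to physical pages of an assigned colour, which guarantees the set-level separation shown above.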

    In terms of execution time, the prime benefit of cache partitioning is typically a significant reduction in variability: comparing runs with and without cache partitioning shows that execution times have a much greater spread when cross-core interference is present and partitioning is disabled. This can be a valuable contribution towards a claim of robust partitioning.

    The downside of cache partitioning is that, depending on the nature of the hosted application, it can significantly degrade average-case performance and even worst-case execution time. The reason is that each core/partition now has a smaller section of the cache to work with; if the hosted application no longer fits into its partition, it will see an increased cache miss rate, which directly increases execution time. Whether this is acceptable should be carefully tested and evaluated.

    A common misconception about shared cache partitioning is that it eliminates the effects of interference from shared L2 caches. Depending on the hardware, shared caches can have shared buffers/queues that are not covered by the partitioning. Therefore, even though interference due to one core or partition evicting another can approach zero, the increase in cache misses can cause slowdowns due to contention on these shared internal structures.

    For an IMA platform, we recommend evaluating the effectiveness of cache partitioning empirically. Specifically, run tests with cache partitioning enabled while RapiDaemons targeting the L2 cache generate interference, and compare against equivalent interference scenarios with partitioning disabled. Slowdowns are quite likely in the average case, and potentially also in the worst case. The results of this analysis can then be converted into constraints for partition developers and integrators, for example: "hosted IMA partitions on any core may not exceed X number of accesses outside L1 cache over a time window of Y nanoseconds".
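    A constraint of that form can be checked mechanically against a measured access trace. A minimal sketch of a sliding-window budget check follows; the trace timestamps, budget, and window length are hypothetical values standing in for the X and Y above:

    ```python
    def max_accesses_in_window(events_ns, window_ns):
        """Maximum number of events falling within any window of length window_ns.

        events_ns: sorted timestamps (ns) of accesses that went outside L1 cache.
        """
        best = 0
        lo = 0
        for hi, t in enumerate(events_ns):
            # Slide the window start forward until it covers timestamp t.
            while t - events_ns[lo] >= window_ns:
                lo += 1
            best = max(best, hi - lo + 1)
        return best

    # Hypothetical trace of accesses outside L1, timestamps in nanoseconds.
    trace = [0, 10, 20, 500, 510, 515, 520, 900]
    budget = 4    # X: allowed accesses outside L1 ...
    window = 100  # ... per Y = 100 ns window
    assert max_accesses_in_window(trace, window) <= budget
    ```

    The two-pointer scan keeps the check linear in the trace length, which matters when traces contain millions of events.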

  • Do you provide detailed process descriptions to be incorporated into traditional DO-178C planning documents such as the PSAC and SVP?

    Yes, we provide templates for the additions that should be made to each applicable DO-178C planning document to meet CAST-32A objectives. Alternatively, we can provide a stand-alone document that focuses on CAST-32A across all the planning activities (which can be referenced once from the PSAC).

  • Can your solution help me with certification aspects of my multicore project?

    Yes. Our solution can be used to produce timing evidence needed to satisfy DO-178C objectives in line with CAST-32A guidance.

  • Can I take advantage of individual components of the CAST-32A Compliance Solution?

    We’re completely flexible and we understand that different customers have different needs. As such, you can purchase any component of our CAST-32A Compliance Solution separately if that’s what you need. This includes, but is not limited to:

    • Tool licenses (RapiTest, RapiTime, RapiTask)
    • Services to integrate automation tools to work with your multicore system
    • RTBx hardware to collect trace data from your multicore system
    • Generic libraries of RapiDaemons
    • Services to port RapiDaemons to your multicore system
    • Services to perform multicore timing analysis
    • Services to analyze data coupling and control coupling
    • Services to analyze error handling
  • Why should I use Rapita's solution?

    Rapita Systems are uniquely positioned to offer the combination of expertise and tools required to effectively address CAST-32A objectives.

    Whilst the challenge of certifying multicore systems for safety-critical applications is a relatively new one for the industry as a whole, we have been researching this area for over a decade. Rapita are working with key industry stakeholders, including major chip manufacturers like NXP, to refine the evidence required to satisfy certification authorities.

    Rapita have extensive experience in providing software verification solutions for some of the best-known aerospace and automotive companies in the world. For example, BAE Systems used RapiTime (one of the tools in our CAST-32A Compliance Solution) to identify worst-case execution time optimizations for the Mission Control Computer on their Hawk Trainer jet.

    See more of our Case Studies.

  • Can the CAST-32A Compliance Solution be used to generate assurance evidence incrementally?

    Yes. Using the CAST-32A Compliance Solution, assurance evidence can be developed incrementally and independently for the multicore platform and each hosted application.

    The solution is designed to meet use cases for each of the avionics roles identified in DO-297/ED-124, whether you're a Certification Applicant, System Integrator, Platform or Application Supplier. The solution supports the needs of Certification Applicants and System Integrators by defining a consistent strategy for generating certification evidence across all platforms and applications.

  • How does RVS support multicore timing analysis?

    RVS, Rapita's verification toolsuite, supports multicore timing analysis by letting you capture and analyze a range of performance metrics from multicore platforms, including software timing behavior and other metrics such as cache hits and misses.

    RVS makes it easy to view and analyze multicore timing results by letting you filter your results on the performance metrics and tests you want to see and letting you select a baseline against which to compare your results.

  • How do RapiDaemons support multicore timing analysis?

    RapiDaemons support multicore timing analysis by allowing the timing behavior of a multicore platform to be analyzed under different levels of resource contention. By stressing specific shared resources at known levels, they support precise analysis.

    For more information on RapiDaemons, see the RapiDaemons web page.
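    The idea of stressing a resource at a known, configurable level can be sketched as follows. Real RapiDaemons are native code tuned per platform; everything below is an illustrative assumption, showing only the shape of a stressor that issues a fixed number of widely spaced accesses per time slice:

    ```python
    import time

    def run_stressor(buffer_len, accesses_per_slice, slice_s, slices):
        """Touch `accesses_per_slice` widely spaced buffer entries per time slice.

        Widely spaced accesses defeat spatial locality, so on real hardware each
        touch tends to go past the private caches to the shared resource.
        """
        buf = [0] * buffer_len
        stride = 4096 // 8  # assume 8-byte entries; step roughly one page apart
        touched = 0
        idx = 0
        for _ in range(slices):
            start = time.monotonic()
            for _ in range(accesses_per_slice):
                buf[idx] += 1  # the "traffic" aimed at the shared resource
                idx = (idx + stride) % buffer_len
                touched += 1
            # Sleep out the rest of the slice so intensity stays constant.
            remaining = slice_s - (time.monotonic() - start)
            if remaining > 0:
                time.sleep(remaining)
        return touched

    # Low intensity: 1,000 accesses per 10 ms slice, for 5 slices.
    assert run_stressor(1 << 16, 1000, 0.01, 5) == 5000
    ```

    Varying `accesses_per_slice` is what gives the "known levels" of contention: timing tests can then be repeated at several intensities to map out the application's sensitivity to that resource.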

  • Does CAST-32A guidance apply to platforms with multiple different processors?

    Yes. While CAST-32A does not specifically provide guidance on compliance for heterogeneous multicore platforms (those combining different processor types), multicore interference may be present in these systems, and as such CAST-32A guidance applies to them.

  • Can statistical modeling approaches be used to provide support for multicore timing measurements?

    In general, statistical modeling approaches such as queueing theory are not applicable to timing analysis of multicore software, as software timing behavior does not fit standard statistical assumptions. While the analysis of multicore timing results is based on a representation of the measured distribution of execution times, it would not be correct to estimate averages and standard deviations from the data, because we cannot assume that it follows any standard statistical distribution without experimental evidence to show this.

    In some cases, software timing behavior may fit standard statistical assumptions, but this is the exception rather than the rule and must be proven before relying on results from statistical modeling. 
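    The distribution-free alternative is to work with the empirical data directly: report the high-water mark and empirical percentiles of the measured execution times rather than fitting a mean and standard deviation. A minimal sketch, with hypothetical sample values:

    ```python
    def high_water_mark(samples):
        """Largest observed execution time; no distributional assumption."""
        return max(samples)

    def empirical_percentile(samples, p):
        """Value below which roughly a fraction p of the observations fall."""
        s = sorted(samples)
        idx = min(len(s) - 1, int(p * len(s)))
        return s[idx]

    # Hypothetical execution-time measurements in microseconds; note the
    # interference-induced outlier that a fitted normal would badly understate.
    times_us = [102, 98, 101, 105, 99, 100, 97, 250]
    assert high_water_mark(times_us) == 250
    assert empirical_percentile(times_us, 0.5) == 101
    ```

    Because these statistics are order-based, they remain meaningful for heavy-tailed or multi-modal timing data where a mean and standard deviation would mislead.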

All materials © Rapita Systems Ltd. 2021 - All rights reserved