
Multicore Avionics Certification for High-integrity DO-178C projects
Produce evidence to support DO-178C (A(M)C 20-193) certification
MACH178 provides an end-to-end solution for producing certification evidence that satisfies DO-178C and A(M)C 20-193 objectives, as well as the objectives of defense standards such as AMACC and MIL-HDBK-516C / AA-22-01.
We provide a comprehensive range of products and services that lets you address airworthiness objectives, including analyzing your software's timing behavior in the context of interference caused by contention on shared hardware resources.

Characterize and quantify multicore interference
Using a combination of our expert services, RapiDaemons (small applications designed to generate configurable load on specific shared resources), and RVS tools that automate the collection of timing evidence on-target, MACH178 helps you identify sources of multicore interference and quantify their effects on your application.
Following our robust process, we identify the potential interference channels on your multicore platform, produce on-target tests that analyze the impact of interference on software timing behavior, and run these on your system through an automated framework. Our expert team then analyzes the results to produce the evidence you need to demonstrate that your code will meet its timing deadlines.
Evaluate and select multicore hardware and RTOS
Different multicore hardware and RTOS environments may have vastly different effects on the timing behavior of applications.
MACH178 helps you evaluate different environments to identify the best one for you, so you can mitigate the impact of interference channels in the system and ensure that your platform has what’s needed to support your verification all the way through to certification.

Reduce analysis effort through automation
MACH178 creates an environment where multicore timing behavior can be analyzed through an automated tool framework. Within this environment, input tests for multicore timing behavior (which use RapiDaemons to create contention on specific hardware resources) are converted into test harnesses, and these are run on the target to produce results that can be analyzed.
This automated environment significantly reduces the effort needed to run tests. Typically, we use this framework to analyze multicore timing behavior ourselves; if you prefer to do the analysis in-house, we offer in-depth training to support you.
Bell Invictus 360
Bell has selected Rapita's MACH178 solution to support multicore software certification (DO-178C) for the North Atlantic Industries SIU36-based Data Concentrator Unit (DCU) of the Invictus 360.
Discover how we are supporting Bell with multicore certification for the FARA program.
Product features
- Produce AMC 20-193 and CAST-32A compliance evidence – Produce compliance evidence for DO-178C, AMC 20-193 and CAST-32A.
- Characterize and quantify multicore interference – Characterize the impact of multicore interference on software worst-case execution time.
- Evaluate and select multicore hardware and RTOS – Evaluate multicore hardware and RTOS to mitigate interference effects for easier certification.
- Reduce analysis effort through automation – An automated tool environment reduces testing and retesting effort.
- Reports – Reports describing the results of platform and software analysis and characterization.
- Template compliance documents – Template planning documents for DO-178C, AMC 20-193 and CAST-32A compliance.
- Characterization tests – Test artifacts to analyze the impact of interference on multicore platforms and software.
- Process documents – Supplementary evidence for DO-178C, describing how to use the Rapita workflow to perform multicore timing analysis.
- RVS toolsuite – Automated tool support for generating, running, and analyzing results from multicore tests.
- RapiDaemons – Configurable contention for shared multicore resources.
- Easily analyze multicore performance metrics – Analyze a variety of performance metrics from multicore systems easily.
- Capture values from hardware event monitors – Capture and view values from hardware event monitors on your multicore platform.
- Automatically merge timing results – Automatic data merging supports timing analysis for multicore systems.
- Custom exports – Design and generate custom results exports.
- Platform Analysis and Characterization Service – Support for generating compliance evidence for platform analysis and characterization.
- Software Analysis and Characterization Service – Support for generating compliance evidence for software analysis and characterization.
- Target Integration Service – Support for integrating RVS and RapiDaemons with a multicore platform.
- Training – Training on the MACH178 workflow.
- Consulting – Consulting services to meet specific needs related to AMC 20-193/CAST-32A compliance.
- Tool Qualification Kits – DO-330 tool qualification evidence for automation tools.
- Qualified Target Integration Service – Service to support qualification for RVS tools.
- RapiDaemon Qualification Service – Service to support RapiDaemon qualification.
- Incremental assurance – Develop assurance evidence incrementally, supporting Integrated Modular Avionics.
A complete solution for A(M)C 20-193
MACH178 is an end-to-end solution for meeting DO-178C guidelines (including A(M)C 20-193 objectives) for multicore projects. The solution is a combination of mature products and services.
We can help throughout the entire compliance life cycle, or you can take advantage of our documents, tests, automation tools and certification support while performing activities yourself.
- Planning – processes and strategies for multicore verification are documented in your DO-178C planning documents
- Platform analysis – the interference channels on your multicore platform are identified
- Platform characterization – characterization tests are developed and the potential impact of interference on your multicore platform is characterized
- Software analysis – timing properties and requirements of software are identified and baseline timing tests are developed
- Software characterization – software behavior is characterized on the multicore platform
- Certification submission – certification evidence is supplemented with qualification kits and services so you can demonstrate compliance
Multicore timing analysis using a V-model approach
By following a V-model process, our engineers investigate multicore systems and produce evidence about their timing behavior.
Our industry-leading tooling, including our unique RapiDaemon technology (which generates interference during tests), reduces analysis effort through automation.
Our approach has been specifically designed to support multicore aerospace projects following DO-178C and A(M)C 20-193 guidance.
Addressing A(M)C 20-193 Objectives
MACH178 is designed to support A(M)C 20-193 compliance, addressing the following objectives:
- MCP_Planning_1
- MCP_Planning_2
- MCP_Resource_Usage_1
- MCP_Resource_Usage_2
- MCP_Resource_Usage_3
Tool automation
We use our mature software verification toolsuite – the Rapita Verification Suite – to apply tests to multicore hardware (RapiTest) and collect timing metrics (RapiTime) and other metrics such as scheduling metrics (RapiTask) from them. Using these tools, we automate various stages of the multicore timing analysis process.
To analyze the timing behavior of a specific multicore system, Rapita Systems' MACH178 uses the following software and service components:
- Rapita Verification Suite (RVS), a collection of embedded software verification tools that is widely used in the critical aerospace industry.
- RapiDaemons, a collection of specialized programs to generate contention on shared hardware resources.
- Integration of automation software into the multicore development environment under analysis.
Tests & RapiDaemons
We provide a set of carefully designed tests that produce evidence of interference channels in your multicore processor, with standard test libraries available for a range of multicore processors.
RapiDaemons are specialized applications designed to generate targeted contention on specific hardware resources such as buses, caches and GPUs. By generating contention on shared resources during multicore tests, RapiDaemons support the analysis of multicore timing behavior.
Each RapiDaemon applies contention to a specific hardware resource on a specific hardware architecture, either matching a desired level of contention or maximizing contention on the resource.
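As a language-agnostic illustration of the throttling idea (this is not Rapita's actual RapiDaemon implementation, which is target-specific native code), a contention generator that alternates bursts of resource accesses with idle periods can hit a target contention level with a simple duty-cycle calculation:

```python
def idle_per_burst(burst_accesses, target_fraction):
    """Return how many idle iterations to insert after each burst of
    accesses so the daemon is active for `target_fraction` of the time.
    target_fraction=1.0 maximizes contention (no idling at all).
    Illustrative model only; units are loop iterations."""
    if not 0.0 < target_fraction <= 1.0:
        raise ValueError("target_fraction must be in (0, 1]")
    return int(burst_accesses * (1.0 - target_fraction) / target_fraction)

# A daemon issuing bursts of 1000 accesses at 50% contention idles for
# 1000 iterations between bursts; at 100% it never idles.
print(idle_per_burst(1000, 0.5))   # 1000
print(idle_per_burst(1000, 1.0))   # 0
```

In practice the burst and idle lengths would be calibrated against measured traffic on the target resource rather than assumed.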
Expertise & Analysis
Timing analysis of single core systems can be entirely automated by using software tools such as RapiTime which analyze the worst-case execution time (WCET) of tasks running on the system.
This isn’t the case for multicore systems, for which we must consider the effects of interference caused by resource contention on software execution times. Interference effects are complex, interlinked, and involve components specific to both the multicore architecture and the scheduling and resource allocation systems in the software.
This means that, to properly perform the analysis, we need to apply the expertise of engineers who know the system in detail. While this expertise can be used to direct the use of software tools (for example specifying levels of contention to apply to specific resources), no automated timing analysis tool will be able to understand a multicore system in enough depth to perform the analysis alone.
Evidence & qualification
MACH178 helps you produce evidence needed to satisfy all of the A(M)C 20-193 objectives, and all components of the solution are designed for compliance with DO-178C and A(M)C 20-193 guidance.
Our RVS tools and RapiDaemons are classed as Tool Qualification Level 5 (TQL-5) tools as per DO-330.
- Qualification support is available for RapiTest and RapiTime, which have been qualified in multiple DAL A aerospace projects, through our DO-330 Qualification Kits and Qualified Target Integration Service.
- Qualification support is available for RapiDaemons through our DO-330 Qualification Kits and RapiDaemon Qualification Service.
Incremental assurance
Using MACH178, assurance evidence can be developed incrementally and independently for the multicore platform and each hosted application, supporting the development of Integrated Modular Avionics.
The solution is designed to meet use cases for each of the avionics roles identified in DO-297/ED-124, whether you're a Certification Applicant, System Integrator, Platform or Application Supplier. The solution supports the needs of Certification Applicants and System Integrators by defining a consistent strategy for generating certification evidence across all platforms and applications.

How we support A(M)C 20-193 compliance
Description | Customer role | RTOS/HW role | Rapita role |
---|---|---|---|
System description | Document in PSAC/PHAC | Early architecture evaluation | Early platform evaluation |
List of MCP shared resources, active HW dynamic features | Document in PSAC/PHAC, how to verify in SVP | RTOS + HW information | HW characterization |
Configuration settings | Incorporation of recommendations in PSAC, add HLR | Recommendations of mitigation strategies | Analysis and recommendations |
Mitigations for inadvertently altered CCS | Document in PSAC/PHAC, verify and analyze | N/A | Architecture analysis, review, test |
List of interference channels and verification methods | Review results, incorporate in PSAC, identify in HLRs, V&V methods in SVP | RTOS + HW information | HW characterization |
Verification that, in a worst-case scenario, the software's resource demands do not exceed those available | Review results, incorporate in PSAC, identify in HLRs, V&V methods in SVP | RTOS information | HW characterization; analysis and methods; verify and analyze |
WCET analysis of all SW components | Support in running tests, review results | RTOS information | WCET analysis and results; we provide evidence on the execution time behavior of your code that takes multicore interference into account |
Data Coupling/Control Coupling analysis by RBT | Customer to define and perform | N/A | Tools & services |
SafetyNet | Customer to define and perform | Customer or RTOS | Review, test |
Showing compliance | Incorporate results in SAS | Support | Rapita to support evidence; we provide multicore timing evidence that you can easily include in your SAS, including traceability information and a summary of test plans, implementation and results |
Compatibility
We can analyze almost all multicore hardware architectures. See below for a list of components of multicore systems that we have analyzed.
Architectures
Here are some of the architectures we have worked with:
SoC | Cores |
---|---|
Infineon® AURIX™ | Tricore™ |
NVIDIA® Xavier™ | Carmel Armv8 |
NXP® LS1048A | Arm® Cortex®-A53 |
NXP® LS1088M | Arm® Cortex®-A53 |
NXP® LX2160A | Arm® Cortex®-A72 |
NXP® P2041 | PowerPC® e500mc |
NXP® T1040/2 | PowerPC® e5500 |
NXP® T2080/1 | PowerPC® e6500 |
TI Keystone™ K2L | Arm® Cortex®-A15 |
Xilinx® Ultrascale+® Zynq MPSoC | Arm® Cortex®-A53, Arm® Cortex-R5 |
Xilinx® Ultrascale+® Zynq RFSoC | Arm® Cortex®-A53, Arm® Cortex-R5 |
If your architecture is not on the list above, contact us.
RTOSs
Here are some of the RTOSs we have worked with:
RTOS |
---|
Bare metal |
Blackberry® QNX™ |
DDC-I Deos™ |
Green Hills® INTEGRITY® |
KRONO-SAFE® ASTERIOS® |
Lynx Software Technologies LynxSecure® |
SYSGO PikeOS® |
Vector MICROSAR |
Wind River Helix®/VxWorks® |
Custom RTOSs |
If your RTOS is not on the list above, contact us.
Boards
We have worked with multicore boards manufactured by the following manufacturers:
Board Manufacturer |
---|
Abaco™ |
Curtiss-Wright® |
Mercury Systems® |
North Atlantic Industries™ |
NXP® |
Texas Instruments® |
Xilinx® |
If your board manufacturer is not on the list above, contact us.
Middleware
We have worked with middleware for multicore systems developed by the following manufacturers:
Middleware Supplier |
---|
CoreAVI® |
GateWare Communications™ |
Presagis® |
Richland Technologies™ |
If your middleware manufacturer is not on the list above, contact us.
Frequently asked questions
-
What is multicore timing analysis?
When developing safety-critical applications to DO-178C, AMC 20-193 and CAST-32A guidelines or ISO 26262 standards, there are special requirements for using multicore processors. Evidence must be produced to demonstrate that software operates within its timing deadlines.
The goal of multicore timing analysis is to produce execution time evidence for these complex systems. In multicore processors, multiple cores compete for the same shared resources, resulting in potential interference channels that can affect execution time. MACH178 and Rapita's Multicore Timing Solution account for interference to produce robust execution time evidence in multicore systems.
-
What is an interference channel?
In AMC 20-193, the official guidance document for multicore aspects of certification for ED-12C projects, an interference channel is defined as "a platform property that may cause interference between software applications or tasks". This definition covers a wide range of 'platform properties', including, for example, thermal effects.
Of these interference channels, interference caused by the sharing of certain resources in multicore systems is one of the most significant in terms of execution times. Interference based on shared resources may occur in multicore systems when multiple cores simultaneously compete for use of shared resources such as buses, caches and main memory.
Rapita’s solutions for multicore timing analysis analyze the effects of this type of interference channel.
As a very simple example of a shared resource interference channel, consider two cores that share a memory controller. Tasks running independently on the two cores may need to access main memory simultaneously via the memory controller; these accesses can interfere with each other, potentially degrading system performance.
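This effect can be made concrete with a toy model (ours, purely illustrative, not part of MACH178) of a FIFO memory controller that serves one request at a time:

```python
def finish_time(core_a_issue, core_b_issue, service=1):
    """Toy FIFO model of a shared memory controller: requests carry
    issue times and are served one at a time, in arrival order, taking
    `service` cycles each. Returns the cycle at which core A's last
    request completes. Illustrative only -- real memory controllers
    reorder and pipeline requests."""
    requests = sorted([(t, "A") for t in core_a_issue] +
                      [(t, "B") for t in core_b_issue])
    clock = 0
    last_a = 0
    for t, owner in requests:
        clock = max(clock, t) + service   # wait for arrival, then serve
        if owner == "A":
            last_a = clock
    return last_a

# Alone, core A's three requests finish at t=3; with core B issuing the
# same pattern, queueing at the controller pushes this out to t=5.
print(finish_time([0, 1, 2], []))          # 3
print(finish_time([0, 1, 2], [0, 1, 2]))   # 5
```

Even this crude model shows the key property of an interference channel: core A's timing changes without any change to core A's own software.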
-
Why can I trust Rapita's Multicore worst-case execution time statistics?
Rapita have been providing execution time analysis services and tooling since 2004.
RapiTime, part of the Rapita Verification Suite (RVS), is the timing analysis component of our Multicore Timing Solution. Our customers have qualified RapiTime on several DO-178C DAL A projects, where it has been successfully used to generate certification evidence by some of the most well-known aerospace companies in the world. See our Case Studies.
As well as providing a mature tool chain, we support the customer in ensuring that their test data is good enough, so that the timing information they generate from the target is reliable.
Our RapiDaemons are configured and tested (see the FAQ: ‘configuring and porting’) to ensure that they behave as expected on each specific customer platform.
We also assess available observability channels as part of platform analysis. This primarily applies to the use of hardware event monitors, where we assess their accuracy and usefulness for obtaining meaningful insights into the system under observation.
-
Why can't I do my own multicore timing analysis and certification?
It is possible for companies to perform multicore timing analysis internally, but it is a highly complex undertaking which is very costly in terms of budget and effort. Anecdotally, one of our customers reported that it took them five years and a budget in the millions of dollars to analyze one specific platform.
MACH178 and our Multicore Timing Solution are typically delivered as a turn-key solution, from initial system analysis and configuration all the way through to providing evidence for certification.
-
If my RTOS vendor says they provide robust partitioning, why do I need Rapita?
RTOS vendors may provide partitioning mechanisms for their multicore processors, but these do not guarantee the complete elimination of multicore interference. Instead, they are designed to provide an upper limit on interference, sometimes at the expense of average-case performance.
In aerospace, these partitioning mechanisms may be referred to as ‘robust partitioning’. AMC 20-193 (EASA's official guidance on multicore aspects of certification for ED-12C projects) and CAST-32A (the FAA’s position paper on multicore processors in avionics) identify allowances for some of the objectives if you have robust partitioning in place, but it is still necessary to verify that the partitioning is as robust as it is claimed to be.
From a certification standpoint, regardless of the methodology behind the RTOS vendor’s approach to eliminating interference, the effectiveness of the technology needs to be verified.
-
Can you help me optimize the configuration of my multicore system?
Yes: our approach can be used to get an in-depth understanding of how sensitive software can be to other software. For example:
- Task 1 executes acceptably in isolation and with most other tasks, but if it executes simultaneously with Task 127, its function X takes 10 times as long to return.
- This intelligence can feed into system integration activities to ensure that function X can never execute at the same time as Task 127.
The information from this type of analysis can also provide insights into potential improvements to the implementation of the two tasks. Sensitive tasks are not always the guilty party: other tasks can be overly aggressive and cause delays in the rest of the system.
-
How do you ensure that worst-case execution time metrics are not excessively pessimistic?
For safety reasons, WCET will always be somewhat pessimistic. However, techniques that work well for single-core systems risk generating a WCET that is unreasonably large when applied to multicore systems, because the effects of contention can become disproportionate. The objective, therefore, is to calculate a value that is plausible and useful, without being optimistic. Optimism in relation to WCET is inherently unsafe.
It is not enough to identify how sensitive an application's tasks are to different types and levels of interference; it is also necessary to understand what degree of interference a task may realistically suffer. Viewing the processor under observation from this perspective makes it possible to lessen the pessimism in WCET analysis.
The degree to which we can reduce pessimism is dependent on how effectively we can analyze the system. Factors influencing this include:
- The overhead of the tracing mechanism (which affects depth of instrumentation)
- The availability and reliability of hardware event monitors
- The availability of information regarding other tasks executing on the system
- The quality of tests that exercise the code
-
Can you quantify our cache partitioning to maximize our performance?
Cache partitioning is all about predictability, not performance. Your code may execute faster on average without cache partitioning, but it probably wouldn't be as predictable and could be quite sensitive to other software components executing in parallel.
Cache partitioning aims to remove all the sensitivity to other tasks sharing the caches, thus making your task more predictable – but potentially at the expense of overall performance. In critical systems, predictability is of far greater importance than performance.
Rapita’s solution for multicore timing analysis can be used to exercise cache partitioning mechanisms by analyzing any shared – and usually undocumented – structures internal to the caches.
-
Are there any constraints on the application scheduling, supervisor, or hypervisor?
To analyze how a specific task is affected by contention on a specific resource, we need to be able to synchronize the execution of the task with the execution of RapiDaemons (the applications that generate contention on the resource).
Usually, it is highly desirable to have RTOS/HV support for enabling user-level access to hardware event monitors. Context switch information is also very valuable when performing multicore timing analysis.
-
Can you analyze systems using asymmetric multiprocessing?
Yes. Our solution makes it easy to specify the core on which you run your tests, and the level of resource contention to apply from each other core in the system.
We can also analyze systems that use non-synchronized clocks such as those often present in AMP platforms by using the RTBx to timestamp data.
-
How many hardware event monitors can you collect values from per test?
The maximum number of hardware event monitors we can collect values from depends on the performance monitoring unit(s) (or equivalent) on the hardware. An Arm Cortex-A53, for example, lets us collect at least 30 metrics, but only 6 in a single test. By running tests multiple times, however, we can collect all 30 metrics.
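The bookkeeping behind this multiplexing is straightforward; the sketch below (ours, with made-up event names) splits a list of desired events into per-run groups that fit the available counters:

```python
def plan_event_runs(events, counters_per_run):
    """Split the events to monitor into groups no larger than the number
    of PMU counters usable in one test run. Illustrative bookkeeping
    only; real planning may also handle fixed counters or counter
    conflicts on specific hardware."""
    return [events[i:i + counters_per_run]
            for i in range(0, len(events), counters_per_run)]

events = [f"event_{n}" for n in range(30)]   # hypothetical event names
runs = plan_event_runs(events, 6)
print(len(runs))  # 5 repeated test runs cover all 30 events
```

Because the runs observe different executions, this approach assumes the test scenario is repeatable enough for results from separate runs to be combined meaningfully.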
-
Why don't you have a tool that automates multicore timing analysis?
Developing a one-button tool solution for multicore timing analysis would be impossible. This is because interference, which can have a huge impact on a task’s execution time, must be taken into account when analyzing multicore timing behavior.
Analyzing interference effects is a difficult challenge that cannot be automatically solved through a software-only solution. Using approaches developed for timing analysis of single-core systems would result in a high level of pessimism, as it would assume that the highest level of interference possible is feasible, while this is almost never the case.
-
Which metrics can you collect from my multicore platform?
It is possible to collect a range of metrics by instrumenting your source code with the Rapita Verification Suite (RVS), including a range of execution time metrics:
- RapiTime: high-water mark and maximum execution times
- RapiTask: scheduling metrics such as periodicity, separation, fragmentation and core migration
It is also possible to collect information on events in your hardware using hardware event monitors. The information we can collect depends on the performance monitoring unit(s) (or equivalent) of your system, but typically includes events such as L2 cache accesses, bus accesses, memory accesses and instructions executed. We can also collect information about operating system activity such as task switching and interrupt handling via event tracing or hooks.
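To illustrate what a high-water mark metric means (a sketch of the concept only, not RVS's implementation), it is simply the largest execution time observed so far for each measured function:

```python
def high_water_marks(samples):
    """Largest execution time observed for each function.
    `samples` is an iterable of (function_name, execution_time) pairs,
    e.g. decoded from an instrumentation trace. The function names and
    times below are invented for illustration."""
    hwm = {}
    for name, t in samples:
        if t > hwm.get(name, float("-inf")):
            hwm[name] = t
    return hwm

trace = [("fft", 120), ("ctl", 40), ("fft", 180), ("fft", 150)]
print(high_water_marks(trace))  # {'fft': 180, 'ctl': 40}
```

A high-water mark is only as good as the tests that produced it, which is why test quality appears among the factors that determine how much pessimism can be removed from WCET estimates.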
-
Do you test the validity of hardware event monitors?
Yes, we formally test and assess the accuracy of hardware event monitors to ensure the validity of results we collect for the software under analysis.
-
What is Rapita's approach to multicore timing analysis?
By following a V-model process, our engineers investigate multicore systems and produce evidence about multicore timing behavior. Our approach has been designed to support projects within the DO-178C (AMC 20-193 and CAST-32A) and ISO 26262 contexts.
You can see an example workflow of how Rapita approaches multicore timing analysis in our White Paper.
-
Which hardware architectures can you analyze?
We can analyze almost all hardware architectures. Our engineers work with you to determine the optimal strategy for integrating our RVS tools with your target, including hardware characterization and design considerations to best fit the hardware you're using.
To work with an architecture that is new to us, we first identify which metrics we can collect from the hardware, then adapt RapiDaemons for the architecture and implement a strategy to collect data from it.
For a list of multicore architectures, RTOSs, boards and middleware we've worked with, see our compatibility page.
-
What is MACH178?
MACH178 provides a solution for certifying multicore aerospace projects in accordance with DO-178C, AMC 20-193 and CAST-32A guidance.
It includes a range of products and services to help address all AMC 20-193 and CAST-32A objectives.
-
Which components are involved in MACH178?
MACH178 comprises four high-level components:
- Certification artifacts
- Software tools
- Engineering services
- Qualification support
MACH178 certification artifacts deliver the evidence needed to plan for multicore aspects of certification and demonstrate results from applying the MACH178 process for a specific platform and application. This includes template compliance documents, processes, multicore tests and results from Platform and Software Analysis and Characterization.
Our software tools let us apply tests to multicore hardware (RapiTest) and collect timing data (RapiTime) and other metrics such as scheduling metrics (RapiTask) from them. We use RapiDaemons to create a configurable degree of traffic on shared hardware resources during tests, so we can analyze the impact of this on the application’s timing behavior.
Services that support MACH178 include tool integration and configuration, platform analysis and characterization, software analysis and characterization, and others depending on customer needs.
RapiTest, RapiTime and RapiDaemons, which are used to automate MACH178 processes, are classified as Tool Qualification Level 5 tools as per DO-330. Qualification kits and services provide the evidence needed to qualify their use in MACH178 projects.
-
Have projects supported by MACH178 completed FAA or EASA certification?
Not yet, but we are currently working on projects that will go through certification with the FAA and EASA.
Rapita is a recognized leader in multicore timing analysis, with multiple professional publications on the subject:
- Steven H. VanderLeest and Samuel R. Thompson, “Measuring the Impact of Interference Channels on Multicore Avionics” to appear in the Proceedings of the 39th Digital Avionics Systems Conference (DASC), San Antonio, TX, Oct 2020.
- VanderLeest, S.H. and Evripidou, C., “An Approach to Verification of Interference Concerns for Multicore Systems (CAST-32A),” best papers of 2020 SAE Aerotech, to appear in SAE International Journal of Advances and Current Practices in Mobility.
- VanderLeest, S.H. and Evripidou, C., “An Approach to Verification of Interference Concerns for Multicore Systems (CAST-32A),” SAE Technical Paper 2020-01-0016, 2020, doi:10.4271/2020-01-0016.
- Steven H. VanderLeest, Jesse Millwood, Christopher Guikema, “A Framework for Analyzing Shared Resource Interference in a Multicore System,” Proceedings of the 37th Digital Avionics Systems Conference (DASC), London, Sep 2018.
Our multicore team technical leads have also served in the following professional organizations:
- Vice-chair for the Enterprise Architecture (EA-25) subcommittee on airworthiness for the Future Airborne Capability Environment (FACE) consortium, 2020.
- Chair for the “Multicore Assurance” session of the 39th Digital Avionics Systems Conference, 2020.
- Chair for the “Integrated Modular Avionics” track of the 37th Digital Avionics Systems Conference, 2018.
-
Is there a standard list of interference channels that I should test?
There is no standard list that fits all platforms, although some interference channels, such as those related to the memory hierarchy (i.e. caches and main memory), are more common than others. Interference channel identification (an activity for which we provide a service) determines which interference channels' impact on the system's timing behavior must be assessed.
-
How long does it take to comprehensively analyze the interference channels present in multicore platforms?
The length of time needed to comprehensively analyze interference channels is typically between 2 and 12 months, but this duration depends heavily on the scope of the analysis, the complexity of the system, and whether we have already analyzed a similar platform or not. Our solution includes an initial pilot phase in which we study the system and estimate the amount of time needed for subsequent phases, including analysis of interference channels.
-
Have you performed platform analysis for my multicore platform?
Some of the multicore systems that we’ve worked with are listed in our FAQ “Which hardware architectures can you analyze?”.
If we have already worked on a similar multicore platform to yours, it may take less time to perform platform analysis for your platform.
-
Do you support the analysis of GPU-based architectures for multicore timing behavior?
Yes. We have run projects analyzing the Nvidia Xavier AGX (CUDA) and AMD’s Embedded Radeon E9171 GPU (featuring the CoreAVI Vulkan SC driver).
-
How do I know when my testing is complete? How much is enough to satisfy AMC 20-193/CAST-32A?
As per AMC 20-193 and CAST-32A, each interference channel on the system must be analyzed to ensure that its effects on timing behavior have been mitigated. Some channels may be eliminated by architectural analysis or because they are deactivated for your system. For all channels that have not been eliminated, the guidance requires that multicore timing analysis be done to ensure that software operates within its timing deadlines. For each channel, one must provide analysis test reports and a summary to show that the set of tests is complete – Rapita provides this service to show completeness. Note that AMC 20-193 and CAST-32A include objectives other than testing, such as planning objectives. We provide templates to help you complete these.
-
Is cache partitioning helpful or harmful to multicore timing performance?
This depends on the performance requirements of the platform and the hosted software.
The primary benefit of cache partitioning is that it provides protection from one core/partition evicting another. There are two broad approaches to achieve this:
- Hardware: The processor has built-in support for partitioning the cache, allocating each core in the system its own area to use. This is supported on the T2080, for example (see e6500 TRM section 2.12.4).
- Software: In set-associative caches, the cache location to which each block of memory may be loaded is known. Using techniques such as cache coloring, software is placed at specific memory addresses chosen so that no two cores/partitions can ever use the same section of the cache.
In terms of execution time, the primary benefit of cache partitioning is typically a significant reduction in variability: comparing runs with and without cache partitioning typically shows a much greater spread of execution times when cross-core interference is present and partitioning is disabled. This can be a valuable contribution toward a claim of robust partitioning.
The downside of cache partitioning is that, depending on the nature of the hosted application, it can significantly degrade average-case and even worst-case execution time. Each core/partition now has a smaller section of the cache to work with; if the application's working set no longer fits into its partition, the cache miss rate increases, which directly increases execution time. Whether this is acceptable should be carefully tested and evaluated.
A common misconception about shared cache partitioning is that it eliminates the effects of interference from shared L2 caches. Depending on the hardware, shared caches can have shared buffers/queues that are not covered by the partitioning. So even though interference from one core or partition evicting another's data can approach zero, the increase in cache misses can still cause slowdowns due to contention on these shared internal structures.
For an IMA platform, we recommend evaluating the effectiveness of cache partitioning empirically. Specifically, run tests with cache partitioning enabled while RapiDaemons targeting the L2 cache generate interference, and compare against equivalent interference scenarios with partitioning disabled. Slowdowns in the average case are quite likely, and potentially also in the worst case. The results of this analysis can be converted into constraints for the partition developers and integrators, for example "hosted IMA partitions on any core may not exceed X accesses outside L1 cache over a time window of Y nanoseconds".
-
Do you provide detailed process descriptions to be incorporated into traditional DO-178C planning documents such as the PSAC and SVP?
Yes, we provide templates for the additions that should be made to each applicable DO-178C planning document to meet AMC 20-193 and CAST-32A objectives. Alternatively, we can provide a stand-alone document that focuses on AMC 20-193/CAST-32A across all the planning activities (which can be referenced once from the PSAC).
-
Can MACH178 help me with certification aspects of my multicore project?
Yes. MACH178 can be used to produce timing evidence needed to satisfy DO-178C objectives in line with AMC 20-193 and CAST-32A guidance.
-
Why should I use Rapita's solution?
Rapita Systems is uniquely positioned to offer the combination of expertise and tools required to effectively address AMC 20-193 and CAST-32A objectives.
While the challenge of certifying multicore systems for safety-critical applications is a relatively new one for the industry as a whole, we have been researching this area for over a decade. Rapita works with key industry stakeholders, including major chip manufacturers such as NXP, to support them in refining the evidence required to satisfy certification authorities.
Rapita has extensive experience providing software verification solutions to some of the best-known aerospace and automotive companies in the world. For example, BAE Systems used RapiTime (one of the tools in MACH178) to identify worst-case execution time optimizations for the Mission Control Computer on their Hawk Trainer jet.
See more of our Case Studies.
-
Can MACH178 be used to generate assurance evidence incrementally?
Yes. With MACH178, assurance evidence can be developed incrementally and independently for the multicore platform and each hosted application.
The solution is designed to meet use cases for each of the avionics roles identified in DO-297/ED-124, whether you're a Certification Applicant, System Integrator, Platform or Application Supplier. The solution supports the needs of Certification Applicants and System Integrators by defining a consistent strategy for generating certification evidence across all platforms and applications.
-
How does RVS support multicore timing analysis?
RVS, Rapita's verification toolsuite, supports multicore timing analysis by letting you create and run multicore timing tests, automatically capture verification results as these tests run on your multicore platform, efficiently analyze collected results, and generate compliance evidence when your analysis is complete.
RVS includes a range of features that support multicore timing analysis, including:
- RVS supports the collection of a range of metrics during testing, including execution time results and values from hardware event monitors on your platform, such as the number of cache hits and cache misses.
- RVS makes it easy to view and analyze multicore timing results by letting you filter your results on the performance metrics and tests you want to see and letting you select a baseline against which to compare your results.
- RVS lets you generate custom exports to provide compliance evidence. Custom exports reduce your documentation effort by automatically pulling in results from your reports and automatically reporting pass/fail status based on your success criteria.
- RapiTime lets you automatically merge timing results collected during multicore timing analysis into a single report. This lets you collect and analyze results from multiple test runs, multiple builds and multiple time points, and supports collecting results from multiple test rigs.
-
How do RapiDaemons support multicore timing analysis?
RapiDaemons support multicore timing analysis by allowing the timing behavior of a multicore platform to be analyzed under different levels of resource contention. By stressing specific shared resources at known levels, they support precise analysis.
For more information on RapiDaemons, see the RapiDaemons web page.
-
Does AMC 20-193/CAST-32A guidance apply to platforms with multiple different processors?
Yes. While AMC 20-193 and CAST-32A do not specifically provide guidance on compliance for multicore platforms with different processors, multicore interference may be present in these systems, and as such the guidance applies to them.
-
Can statistical modeling approaches be used to provide support for multicore timing measurements?
In general, statistical modeling approaches such as Queueing Theory are not applicable to timing analysis of multicore software, as software timing behavior does not fit standard statistical assumptions. While the analysis of multicore timing results is based on a representation of the measured real distribution of execution times, it would not be correct to estimate averages and standard deviations from the data, because we cannot assume that it follows any standard statistical distribution without experimental evidence to show this.
In some cases, software timing behavior may fit standard statistical assumptions, but this is the exception rather than the rule and must be proven before relying on results from statistical modeling.