In December, Rapita Systems were pleased to welcome Dan Iorga of the Multicore Programming Group, Imperial College London, to our York offices.
Dan is researching worst-case execution time analysis of multicore systems for his PhD. The need to use multicore processors in the aerospace industry is growing, prompting the FAA to release the CAST-32A position paper. This is an area Rapita Systems has been researching for over 10 years, developing the first commercially viable solution.
During his visit, Dan and our senior multicore engineer, Dr Christos Evripidou, compared their approaches to generating microbenchmarks, which create configurable degrees of contention on shared hardware resources in multicore systems, thereby generating interference between cores. Microbenchmarks are a key component of Rapita's multicore timing analysis solution.
We conducted a short interview with Dan during his visit, the transcript of which can be found below:
What are the biggest challenges of using multicore within the safety-critical domain?
Manufacturers optimize their processors for average performance, but in the safety-critical industry we are actually interested in the worst-case performance. Multicore processors are especially difficult to reason about in terms of worst-case behavior because they are more complex.
Let's say you are designing a car, [and] you are interested in the airbag system of that car. You want that airbag to deploy exactly when you planned it to deploy, not one microsecond later, not one microsecond earlier.
So multicore processors are especially difficult to reason about because they are more complex, and therefore have more shared resources that can cause a lot of interference.
Where do you think your multicore worst-case execution time research can be applied in the industry?
There are a couple of industries that are interested in real time. I can think of the aerospace industry, the automotive industry and even the medical industry.
If you have a plane's landing gear, or the airbag of a car, or the pacemaker of a patient, you want all these devices to work without any kind of a delay.
The biggest challenge you have in multicore systems is that it's difficult to determine all the subtle interactions between the cores of your system.
Also, industry manufacturers might choose not to disclose all of these details in order to gain a competitive advantage.
In my research, I focus on automatically uncovering these subtle interactions in order to expose the worst-case performance, and the flaws, of a processor.
What is your approach to generating microbenchmarks?
My approach for evaluating worst-case execution time on multicore systems relies on automatically generating microbenchmarks. These automatically generated multicore benchmarks might uncover some subtle interactions in multicore processors that a human expert might not think of, or might simply miss.