What should we really think about measuring WCET?
Following on from my last blog post grumbling about the mixing of terminologies from timing and safety domains, this post explains some of the background to WCET analysis and what RapiTime does.
New performance-enhancing features in modern processors can mean it is harder, not easier, to establish the worst-case execution time (WCET) of an application. Why is this happening?
In a recent blog post we observed how the presence of advanced hardware features in modern processors makes it more difficult to establish the worst-case execution time (WCET) of an application. Continuing this theme, let’s examine the use of pipelined processor architectures and the effect that this has on WCET in real-time systems.
Continuing our series on how the presence of advanced hardware features in modern processors makes it more difficult to establish the worst-case execution time (WCET) of an application, this week we examine the issues surrounding the use of instruction caches in CPUs and the effect that this has on WCET in real-time systems.
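Instruction-cache effects are hard to isolate portably, but a toy experiment conveys the underlying problem: the very same code can take different amounts of time depending on the state of the cache. The sketch below (standard C11; the workload and iteration counts are arbitrary choices, not taken from the post) times one "cold" call and a few "warm" calls to the same function. On a desktop OS the cold call also absorbs page faults and other noise, so treat the numbers as illustrative only.

```c
#include <stdio.h>
#include <time.h>

static volatile unsigned long sink;

/* The function under test: the first call must fetch its code into
 * the instruction cache; later calls usually find it already there. */
static void body(void)
{
    unsigned long acc = 0;
    for (unsigned i = 0; i < 100000U; ++i) {
        acc += i;
    }
    sink = acc;  /* volatile sink stops the compiler deleting the loop */
}

/* Time a single call to body() in microseconds. */
static double time_one_call(void)
{
    struct timespec t0, t1;
    timespec_get(&t0, TIME_UTC);
    body();
    timespec_get(&t1, TIME_UTC);
    return (double)(t1.tv_sec - t0.tv_sec) * 1e6
         + (double)(t1.tv_nsec - t0.tv_nsec) / 1e3;
}

int main(void)
{
    /* Cold call: caches empty. Warm calls: code already cached. The
     * identical code path yields different execution times, which is
     * exactly what complicates WCET analysis on cached processors. */
    printf("cold call: %8.1f us\n", time_one_call());
    for (int i = 0; i < 3; ++i) {
        printf("warm call: %8.1f us\n", time_one_call());
    }
    return 0;
}
```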
Over at The Engineer, the debate about "should everyone be a programmer?" rages on. Here are Zoe Stephenson's thoughts.
Welcome back to the series of blog posts on how the presence of advanced hardware features in modern processors makes it more difficult to establish the worst-case execution time (WCET) of an application. This week, we consider the difficulties presented by one recent development in CPU design: multicore processors.
We recently tested a system with two different processors, a PowerPC and a TriCore. For anyone concerned with execution-time variability, the results make for interesting reading.
Users of RapiTime will probably be aware that one of the categories of information shown in a RapiTime report is "Ipoint Coverage". So, given that RapiTime supplies coverage information, why do you need RapiCover?
Because of their complexity, most modern systems rely on scheduling algorithms for efficient multitasking and multiplexing. These algorithms invariably embody compromises driven by specific objectives, such as meeting deadlines. This blog post looks at two tasking models, "co-operative" and "pre-emptive", which make different compromises depending on the objectives set by the system user; a minimal sketch of the co-operative model appears below.
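To give a flavour of the difference before the post itself, here is a minimal sketch of the co-operative model in C. The task names and the fixed three-cycle loop are hypothetical, purely for illustration: the point is that each task runs to completion and must hand control back voluntarily, whereas a pre-emptive kernel would use a timer interrupt to suspend the running task in favour of a higher-priority one.

```c
#include <stdio.h>

/* Hypothetical tasks: under a co-operative model each must return
 * promptly of its own accord, because nothing can interrupt it. */
static void read_sensors(void)       { puts("read sensors"); }
static void update_control_law(void) { puts("update control law"); }
static void drive_actuators(void)    { puts("drive actuators"); }

typedef void (*task_fn)(void);

static task_fn task_table[] = {
    read_sensors,
    update_control_law,
    drive_actuators,
};

#define NUM_TASKS (sizeof task_table / sizeof task_table[0])

int main(void)
{
    /* Co-operative round-robin: tasks run to completion in a fixed
     * order, so timing is easy to reason about, but one overrunning
     * task delays all the others. (Three cycles here for brevity; a
     * real scheduler loops forever.) */
    for (int cycle = 0; cycle < 3; ++cycle) {
        for (unsigned i = 0; i < NUM_TASKS; ++i) {
            task_table[i]();
        }
    }
    return 0;
}
```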
In a conversation with a colleague, I found myself wondering what the impact is of running code under Windows versus on a "bare-metal" x86 box. One of the nice things about working for a tool vendor is that you have the tools to hand to perform this kind of experiment "for fun".
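For anyone who fancies repeating the experiment, a minimal sketch follows. It is not our actual harness; the workload, run count and use of C11's timespec_get are all assumptions made for illustration. The idea is simply to time a fixed workload many times and look at the spread: on Windows the slowest run is inflated by interrupts, other processes and cache pollution, while on bare metal (where you would typically read a cycle counter instead) the spread is far tighter.

```c
#include <stdio.h>
#include <time.h>

static volatile unsigned long sink;

/* A fixed, deterministic workload; the volatile sink stops the
 * compiler from optimising the loop away. */
static void workload(void)
{
    unsigned long acc = 0;
    for (unsigned long i = 0; i < 1000000UL; ++i) {
        acc += i * i;
    }
    sink = acc;
}

int main(void)
{
    double best = 1e9, worst = 0.0;

    /* Run the same workload repeatedly and record the spread between
     * the fastest and slowest runs: a crude proxy for the execution
     * time variability the OS introduces. */
    for (int run = 0; run < 100; ++run) {
        struct timespec t0, t1;
        timespec_get(&t0, TIME_UTC);
        workload();
        timespec_get(&t1, TIME_UTC);

        double ms = (double)(t1.tv_sec - t0.tv_sec) * 1e3
                  + (double)(t1.tv_nsec - t0.tv_nsec) / 1e6;
        if (ms < best)  best = ms;
        if (ms > worst) worst = ms;
    }

    printf("fastest run: %.3f ms, slowest run: %.3f ms\n", best, worst);
    return 0;
}
```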