Testing

False positive and false negative in software testing

‘False positive’ and ‘false negative’ are terms commonly heard in software verification, but the seriousness each conveys can differ depending on whether you are looking at static or dynamic analysis.
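To pin down the terminology, here is a minimal sketch (in Python, purely illustrative): a "positive" is the tool flagging a problem, so a false positive is noise the reviewer must dismiss, while a false negative is a real defect that slips through unreported.

```python
# Illustrative sketch of verification-result terminology.
# A "positive" means the analysis tool flagged a problem.
def classify(tool_flagged, defect_exists):
    """Classify one analysis result against ground truth."""
    if tool_flagged and not defect_exists:
        return "false positive"   # noise: wasted review effort
    if not tool_flagged and defect_exists:
        return "false negative"   # dangerous: a real defect goes unreported
    return "true " + ("positive" if tool_flagged else "negative")

print(classify(True, False))   # false positive
print(classify(False, True))   # false negative
```

Which of the two is more serious depends on the analysis: the article above explores how static and dynamic analysis weight them differently.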

Automating test generation with AUTOSAC

Find out how automated test generation from SPARK Ada pre- and post-conditions can cut the effort needed to run test projects.
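The core idea can be sketched as follows. This is an assumption-laden illustration in Python, not AUTOSAC's actual algorithm (AUTOSAC works on SPARK Ada contracts): search for inputs that satisfy the precondition, run the function under test, and check the postcondition on the result.

```python
# Illustrative sketch of contract-based test generation:
# generate inputs satisfying the precondition, then check the
# postcondition. Names and bounds here are hypothetical.
import math
import random

def generate_tests(func, pre, post, trials=100, lo=-1000, hi=1000):
    """Randomly search for inputs meeting `pre`, then check `post`."""
    failures = []
    for _ in range(trials):
        x = random.randint(lo, hi)
        if not pre(x):
            continue                   # input outside the precondition
        if not post(x, func(x)):
            failures.append(x)         # contract violated
    return failures

# Example contract: for x >= 0, isqrt(x) = r must satisfy
# r*r <= x < (r+1)*(r+1).
failures = generate_tests(
    math.isqrt,
    pre=lambda x: x >= 0,
    post=lambda x, r: r * r <= x < (r + 1) * (r + 1),
)
print(failures)  # [] -- math.isqrt meets its contract
```

The appeal of generating tests from contracts is that the oracle (the postcondition) comes for free, rather than being hand-written per test case.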

Merging coverage data from multiple test runs

One of the questions we are asked most frequently is how to merge coverage data from multiple test runs. This can be an unnecessary bottleneck, and one we have solved for many of our customers.
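Conceptually, merging line coverage is a set union: each run contributes the set of (file, line) pairs it executed, and a location counts as covered if any run reached it. A minimal sketch, with illustrative data (real tools merge structured coverage reports, not bare sets):

```python
# Illustrative sketch of merging line-coverage data: each test run
# is a set of (file, line) pairs that were executed; the merged
# result is the union across all runs.
def merge_coverage(runs):
    """Union the covered (file, line) pairs from several test runs."""
    merged = set()
    for run in runs:
        merged |= run
    return merged

run_a = {("main.c", 10), ("main.c", 11), ("util.c", 5)}
run_b = {("main.c", 11), ("util.c", 6)}

combined = merge_coverage([run_a, run_b])
print(len(combined))  # 4 distinct covered lines across both runs
```

Decision-level criteria such as MC/DC need more than a union of lines, which is one reason merging is harder in practice than this sketch suggests.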

It’s a bad idea to hide problems from customers – so we don’t

What’s the one thing you’ll almost never hear in your meeting with a software sales representative? An admission that the software you may be thinking of buying has bugs in it.

What does less than 100 percent code coverage mean?

Assuming you adopt a requirements-based testing approach, code coverage allows you to demonstrate two things: every requirement is implemented in code, and the code implements only what the requirements describe. So what does it mean if you don’t achieve 100% code coverage? It could mean one of several things.
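The starting point for that investigation can be sketched as follows (illustrative Python; line numbers and percentages are made up): given the executable lines and the lines your requirements-based tests actually executed, the uncovered remainder is exactly what you must justify, whether as an untested requirement, dead code, or defensive code.

```python
# Illustrative sketch of interpreting less-than-100% coverage:
# the uncovered lines are the ones that need an explanation.
def coverage_report(executable_lines, covered_lines):
    """Return coverage percentage and the lines left to justify."""
    uncovered = executable_lines - covered_lines
    hits = len(covered_lines & executable_lines)
    percent = 100.0 * hits / len(executable_lines)
    return percent, sorted(uncovered)

executable = set(range(1, 11))    # 10 executable lines
covered = executable - {7, 8}     # tests never reach lines 7 and 8

percent, uncovered = coverage_report(executable, covered)
print(f"{percent:.0f}% covered, unexplained lines: {uncovered}")
# 80% covered, unexplained lines: [7, 8]
```

A coverage figure by itself says nothing about which of those explanations applies; the article above walks through the possibilities.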