Is your compiler smarter than an undergraduate?

2013-11-13

Recently I chanced upon a discussion among some undergraduates studying Computer Science. They were debating whether programmers should routinely employ efficient bit-manipulation in their source code rather than trusting that the compiler will do a good job on a more straightforward representation of the algorithm. My initial thought was "why is this even a question these days?"

The canonical source for these bit-manipulation tricks is Sean Anderson's Bit Twiddling Hacks page on Stanford's website, and it is a treasure-trove of interesting snippets. Let's take a convenient and simple example, testing whether a word has a single bit set:

f = v && !( v & (v-1) )

With a little bit of thought, it's possible to see what this is doing: subtracting 1 from a number borrows through the low-order zero bits and clears the lowest set bit, leaving any higher bits untouched, so v & (v-1) clears exactly that lowest set bit. The result is zero only when that was the only bit set, i.e. when the number was a power of two. The wrinkle is that zero also passes this plain test, so it's necessary to discount that possibility by explicitly catching the case where the input is zero (the leading v &&).
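To make this concrete, here is a small illustration (the values are arbitrary examples of my own, not from the original article) of how v - 1 and v & (v - 1) behave for a power of two and for a value with more than one bit set:

#include <assert.h>

int main( void )
{
   /* v = 8 (binary 1000): the borrow runs past the only set bit,
      so v & (v-1) is 1000 & 0111 == 0000 and the test succeeds. */
   unsigned int v = 0x8u;
   assert( ( v & (v - 1) ) == 0 );

   /* v = 12 (binary 1100): the borrow stops at the lowest set bit,
      leaving the higher bit intact; 1100 & 1011 == 1000, so the test fails. */
   v = 0xCu;
   assert( ( v & (v - 1) ) != 0 );

   /* v = 0 would also give zero, which is why the one-liner
      guards with the leading "v &&". */
   return 0;
}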

The equivalent explicit form would be something like the following:

int bitset_loop32( int v )
{
   /* Count the set bits in v and report whether exactly one is set. */
   int b = 0;
   int k = 0;
   unsigned int m = 1;
   for( b = 0; b < 32; b++ )
   {
      if( v & m )      /* test bit b */
      {
         k++;
      }
      m <<= 1;         /* move the mask on to the next bit */
   }
   return k == 1;
}

Which one is "best"? The one-liner is a low-level optimization where the behaviour is only apparent after some thought, so there are several factors to think about:

  • The bit-hack version is concise, so you can see everything that the code is doing right there in one line, although it takes some effort to figure out what it's doing.
  • The one-liner is also expressed in very low-level terms, so it will likely trace better to the object code.
  • However, since it's already in low-level terms, there's not much scope for a compiler to optimise it.
  • On the other hand, if you're really looking for source to object code traceability, that's possibly a good thing, as it means your code will largely be where you said you wanted it.
  • The bit-twiddling doesn't use any extra storage (from the perspective of the source code) compared to the bitset_loop approach.
  • Since it doesn't include a loop, the one-liner version is more likely to have predictable execution time.
  • The looping version is of a form that's typically easier to relate to higher-level requirements, especially when it comes to traceability and independent review.
  • Although it's not so relevant for this example, maintaining and debugging an explicit-but-inefficient implementation is generally easier than doing so on the equivalent optimized version.

My advice, coming from a background where reviews and justification are important, would be:

  • Make sure you have the explicit version and the optimized version available side by side in some way. You could keep the explicit version in a comment or perhaps include a way of configuring the code to be able to test one or the other in a test-suite (or even to test them together); a minimal sketch of this appears after this list.
  • Don't fall into the trap of convenient premature optimizations without first seeing if this is an optimization worth doing. Test early and often, and you'll focus your effort on problems that are important to solve.
  • If you care about efficiency enough to use it as justification for optimizations, then you should be measuring your execution times for whatever target environment(s) you're running in. You will need data showing that the one-bit-set algorithm is a barrier to overall system efficiency, data showing that changing the implementation improves overall system efficiency, and, crucially, evidence that the change to the implementation is the cause of the improvement (rather than, for example, the change disturbing something innocuous like memory alignment or register allocation that happens to have a knock-on improvement effect).
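As a sketch of the first point (my own illustration, not from the original article, and using unsigned arguments to sidestep signed-overflow concerns), one way to keep both implementations selectable and to cross-check them in a test suite is a compile-time switch plus an equivalence test over representative inputs:

#include <assert.h>

/* Bit-hack version of the single-bit test. */
static int bitset_hack32( unsigned int v )
{
   return v && !( v & (v - 1) );
}

/* Explicit loop version of the single-bit test. */
static int bitset_loop32( unsigned int v )
{
   int b;
   int k = 0;
   unsigned int m = 1;
   for( b = 0; b < 32; b++ )
   {
      if( v & m )
      {
         k++;
      }
      m <<= 1;
   }
   return k == 1;
}

/* Select the production implementation at build time,
   e.g. with -DUSE_BIT_HACK on the compiler command line. */
#ifdef USE_BIT_HACK
#define bitset32 bitset_hack32
#else
#define bitset32 bitset_loop32
#endif

int main( void )
{
   /* Cross-check the two versions on a handful of representative
      inputs; a real test suite would be more systematic. */
   unsigned int samples[] = { 0u, 1u, 2u, 3u, 0x80u, 0x81u,
                              0x80000000u, 0xFFFFFFFFu };
   unsigned int i;
   for( i = 0; i < sizeof( samples ) / sizeof( samples[0] ); i++ )
   {
      assert( bitset_hack32( samples[i] ) == bitset_loop32( samples[i] ) );
   }
   return bitset32( 0x40u ) ? 0 : 1;   /* exercise the selected version too */
}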

In the end, Sean Anderson's advice sums it up pretty well: "Benchmarking is the best way to determine whether one method is really faster than another, so consider the techniques below as possibilities to test on your target architecture."
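As a very rough illustration of that advice, the sketch below (my own, assuming a host where POSIX clock_gettime is available) times the bit-hack version over a large range of inputs. It only measures the function in isolation; on an embedded target you would more likely read a cycle counter or use an on-target measurement tool such as RapiTime, and the point above about overall system effects still stands.

#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <time.h>

/* Bit-hack version of the single-bit test. */
static int bitset_hack32( unsigned int v )
{
   return v && !( v & (v - 1) );
}

int main( void )
{
   struct timespec start, end;
   volatile int sink = 0;   /* stops the compiler discarding the calls */
   unsigned int v;
   double elapsed;

   clock_gettime( CLOCK_MONOTONIC, &start );
   for( v = 0; v < 100000000u; v++ )
   {
      sink = bitset_hack32( v );
   }
   clock_gettime( CLOCK_MONOTONIC, &end );

   elapsed = (double)( end.tv_sec - start.tv_sec )
           + (double)( end.tv_nsec - start.tv_nsec ) / 1e9;
   printf( "bit-hack version: %.3f s (last result %d)\n", elapsed, sink );
   return 0;
}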
