Hacker News

vanderZwan (yesterday at 8:37 PM)

> The header is the cost. Not the reflection. The reflection algorithm is fast – asymptotically ~0.07 ms per enumerator, essentially the same as the hand-rolled switch in the X-macro version (~0.06 ms). What makes reflection look expensive is <meta>: just including it costs ~155 ms per TU over the baseline.

Speaking of old ways: I'm not a C++ dev, but a while ago I saw someone comment that they still organize their C++ projects using tips from John Lakos' Large-Scale C++ Software Design from 1997, and that their compile times are incredibly fast. So I decided to find a digital copy on the high seas and read it out of historical curiosity. While I didn't finish it, one wild thing stood out to me: he advised wrapping every include in a redundant external include guard, e.g.

     #ifndef INCLUDED_MATH
     #include <math.h>
     #define INCLUDED_MATH
     #endif
The reason for this is that (in 1997) every #include required the preprocessor to open the file just to check for an include guard, and to read it all the way to the end to find the closing #endif, causing potentially quadratic (O(N²)) disk-read overhead as projects grew (if anyone feels like verifying this, it's explained on pages 85 to 87).

Again, that was in 1997. I have no idea what mitigations for this problem exist in compilers by now, but I hope at least a few, right?

This conclusion makes me wonder whether following that advice would still have a positive impact on compile times today after all. Surely not, right? Can anyone more knowledgeable comment on that?


Replies

SuperV1234 (yesterday at 8:46 PM)

This cost is not significant nowadays; the dominant cost is frontend/parsing time.

You can also use `#pragma once`, which works everywhere in practice (despite being non-standard), is nicer, and technically needs less work from the compiler; but compilers have optimized for include guards for a long time now.

Some random measurements I found: https://github.com/Return-To-The-Roots/s25client/issues/1073
