Hacker News

benhoyt · last Wednesday at 8:07 PM · 2 replies

That's a good caution. However, traversing a flat AST (iterating a "struct of arrays" rather than chasing pointers through a tree) is also going to be faster. So the later phases of the compiler, say type checking and code emission, will also be faster. How much faster, or whether it's worth it even then, I'm not sure.
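As a rough sketch of what "struct of arrays" means here (all names are illustrative, not from any real compiler): nodes live in parallel arrays and refer to each other by index. Because the parser appends nodes bottom-up, every child index is smaller than its parent's, so a later pass can process the whole tree in one forward scan with no pointer chasing.

```python
# Flat "struct of arrays" AST for arithmetic expressions (illustrative sketch).
KIND_NUM, KIND_ADD, KIND_MUL = 0, 1, 2

class FlatAst:
    def __init__(self):
        self.kind = []  # node kind, one entry per node
        self.lhs = []   # left child index (or literal value for KIND_NUM)
        self.rhs = []   # right child index (unused for KIND_NUM)

    def push(self, kind, lhs, rhs=0):
        self.kind.append(kind)
        self.lhs.append(lhs)
        self.rhs.append(rhs)
        return len(self.kind) - 1  # index of the new node

    def evaluate(self):
        # One forward pass over the arrays; children always precede
        # parents, so their values are ready when the parent needs them.
        vals = [0] * len(self.kind)
        for i, k in enumerate(self.kind):
            if k == KIND_NUM:
                vals[i] = self.lhs[i]
            elif k == KIND_ADD:
                vals[i] = vals[self.lhs[i]] + vals[self.rhs[i]]
            else:  # KIND_MUL
                vals[i] = vals[self.lhs[i]] * vals[self.rhs[i]]
        return vals[-1]  # root is the last node pushed

# Build (2 + 3) * 4 bottom-up, children before parents.
ast = FlatAst()
two = ast.push(KIND_NUM, 2)
three = ast.push(KIND_NUM, 3)
add = ast.push(KIND_ADD, two, three)
four = ast.push(KIND_NUM, 4)
root = ast.push(KIND_MUL, add, four)
```

The evaluator here stands in for any post-parse pass (type checking, code emission): each one becomes a linear sweep over dense arrays rather than a recursive walk over scattered heap objects.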


Replies

Joker_vD · yesterday at 12:16 AM

Another small trick is to use a "reversed" bump allocator that hands out memory from the end of the arena, starting at the larger addresses. Since AST structures are almost always created bottom-up (children before their parents, with the root node created last, on return from the parseProgram/parseModule/etc. function), you end up with an AST whose pointers mostly point forward. This means that during AST walks you'll be going from lower addresses to higher ones, which I think is actually somewhat faster than walking in the reverse order.

munificent · last Wednesday at 11:05 PM

True, but that also depends on where you store semantic information. Zipping through a nicely packed AST won't buy you much if, for every node, you have to look up its type or other semantic information somewhere else in memory through some slow process.
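One way to avoid that, continuing the flat-AST style above (again, all names are illustrative): store the type checker's results in a parallel array indexed by node, so semantic information stays as densely packed as the nodes themselves, instead of living in a hash map keyed by node identity.

```python
# Semantic info in a parallel array: types[i] is the type of node i,
# filled in by one forward pass (children precede parents, as before).
KIND_INT, KIND_FLOAT, KIND_ADD = 0, 1, 2  # node kinds
TY_INT, TY_FLOAT = 0, 1                   # type tags

def typecheck(kind, lhs, rhs):
    types = [0] * len(kind)
    for i, k in enumerate(kind):
        if k == KIND_INT:
            types[i] = TY_INT
        elif k == KIND_FLOAT:
            types[i] = TY_FLOAT
        else:  # KIND_ADD: result is float if either operand is float
            types[i] = max(types[lhs[i]], types[rhs[i]])
    return types

# Type-check "1 + 2.0": int literal, float literal, then the addition.
kinds = [KIND_INT, KIND_FLOAT, KIND_ADD]
types = typecheck(kinds, [0, 0, 0], [0, 0, 1])
```

Later passes then read a node's type with the same linear scan they use for the node itself, rather than through a pointer- or dictionary-based side table.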