Hacker News

tialaramex, today at 7:57 AM

It's like this because the 1970s C programmer, typically a single individual, was expected to maintain absolute knowledge of the full context of everything at all times. So these functions (the old non-reentrant C functions) just assume that you, that solo programmer, will definitely know which string you're currently tokenizing and would never have, say, a subroutine which also needs to tokenize strings.

All of this was designed before C11, which means that, hilariously, it was actually always Undefined Behaviour to write multi-threaded code in C. There were no memory ordering rules in the language yet, and if you write a data race (and how could you not in multi-threaded code?) then the Sequential Consistency for Data-Race-Free programs guarantee (SC-DRF) does not apply, and in C all bets are off once you lose Sequential Consistency.† So in that world this was enough: absolute mastery and a single individual keeping track of everything. Did it work? Not very well, but hey, it was cheap.

† This is common, and you should assume you're fucked in any concurrent language which doesn't say otherwise. In safe Rust you can't write a data race, so you always get SC; in Java losing SC is actually guaranteed safe (you probably no longer understand your program, but it still has a defined meaning and you could reason about it); but in many languages which say nothing it's game over, because it was game over in C and they can't do better.