Hacker News

Some C habits I employ for the modern day

70 points | by signa11 | last Monday at 8:03 AM | 18 comments

Comments

WalterBright | today at 1:47 AM

> I’ve long been employing the length+data string struct. If there was one thing I could go back in time to change about the C language, it would be the removal of the null-terminated string.

It's not necessary to go back in time. I proposed a way to do it in modern C - no existing code would break:

https://www.digitalmars.com/articles/C-biggest-mistake.html

It's simple, and easy to implement.

matheusmoreira | today at 12:22 AM

> In the absence of proper language support, “sum types” are just structs with discipline.

With enough compiler support they could be more than that. For example, I submitted a tagged union analysis feature request to gcc and clang, and someone generalized it into a guard builtin.

https://github.com/llvm/llvm-project/issues/74205

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112840

GCC proved to be too complex for me to hack this in though. To this day I'm hoping someone better than me will implement it.

keyle | today at 2:30 AM

That made me smile

> If I find myself needing a bunch of dynamic memory allocations and lifetime management, I will simply start using another language–usually rust or C#.

Now that is some C habit for the modern day... But huh, not C.

tom_ | today at 2:06 AM

If you really insist on not having a distinction between "u8"/"i8" and "unsigned char"/"signed char", and you've gone to the trouble of refusing to accept CHAR_BIT!=8, I'm pretty sure it'd be safer to typedef unsigned char u8 and typedef signed char i8. uint8_t/int8_t are not necessarily character types (see 6.2.5.20 and 7.22.1.1) and there are ramifications (see, e.g., 6.2.6.1, 6.3.2.3, 6.5.1).

canpan | today at 12:43 AM

Regarding memory, I recently changed my approach: I try not to use dynamic memory, or if I need it, I allocate once at startup. Often static memory reserved at startup is sufficient.

Instead I use the stack much more and fix a limit at startup on how much data the program can handle. It does add the need to think about what happens when your system runs out of memory.

Like OP said, it's not a solution for all types of programs. But it makes for very stable software with known and easily tested error states. Also adds a bit of fun in figuring out how to do it.

JamesTRexx | today at 12:54 AM

Two things I thought of while reading the post. First: why not typedef _BitInt types for stricter sizes and control over accidental promotion, since you're typedeffing for easier names anyway? Second: I came across a post mentioning the use of regular arrays instead of strings, to avoid the null terminator and its off-by-one pitfalls.

I still have a lot of conversion to do before I can try this in my hobby project, but these are interesting ideas.

skywalqer | yesterday at 11:41 PM

Nice post, but the flashy thing on the side is pretty distracting. I liked the tuples and maybes.

BigJono | today at 2:06 AM

I really dislike "parse, don't validate" as general advice. IMO this is the true differentiator of type systems that most people should be familiar with, instead of "dynamic vs static" or "strong vs weak".

Adding complexity to your type system and to the representation of types within your code has a cost in terms of mental overhead. It's become trendy to have this mental model where the cost of "type safety" is paid in keystrokes but pays for itself in reducing mental overhead for the developers. But in reality you're trading one kind of mental overhead for another, the cost you pay to implement it is extra.

It's like "what are all the ways I could use this wrong" vs "what are all the possibilities that exist". There's no difference in mental overhead between having one tool you can use in 500 ways and 500 tools you can each use in 1 way; either way you need to know 500 things, so the difference lies elsewhere. The effort and keystrokes that you spend adding type safety can only ever increase the complexity of your project.

If you're going to pay for it, that complexity has to be worth it. Every single project should be making a conscious decision about this on day one. For the cost to be worth it, the rate of iteration has to be low enough and the cost of runtime bugs has to be high enough. Paying the cost is a no brainer on a banking system, spacecraft or low level library depended on by a million developers.

Where I think we've lost the plot is that NOT paying the cost should be a no brainer for stuff like front end web development and video games where there's basically zero cost in small bugs. Typescript is a huge fuck up on the front end, and C++ is a 30 year fuck up in the games industry. Javascript and C have problems and aren't the right languages for those respective jobs, but we completely missed the point of why they got popular and didn't learn anything from it, and we haven't created the right languages yet for either of those two fields.

Same concept and cost/benefit analysis applies to all forms of testing, and formal verification too.

jcalvinowens | today at 1:38 AM

  #if CHAR_BIT != 8
  #error "CHAR_BIT != 8"
  #endif
In modern C you can use static_assert to make this a bit nicer.

  static_assert(CHAR_BIT == 8, "CHAR_BIT is not 8");
...although it would be a bit of a shame IMHO to add that reflexively in code that doesn't necessarily require it.

https://en.cppreference.com/w/c/language/_Static_assert.html

sys_64738 | today at 12:46 AM

#define BEGIN {

#define END }

/* scream! */