Hacker News

raw_anon_1111 · today at 4:20 PM · 2 replies

> My stance has been pretty rigid for some time: LLMs hallucinate, so they aren’t reliable building blocks. If you can’t rely on the translation step, you can’t treat it as a serious abstraction layer because it provides no stable guarantees about the underlying system.

This is technically true, but unimportant. When I write code in a higher-level language and it gets compiled to machine code, I am ultimately testing statically generated code for correctness. I don’t care what weird tricks the compiler pulled for optimization.

How is that any different from testing LLM-generated C code? I’m still testing C code that isn’t going to be magically changed by the LLM without my intervention, any more than my C code is going to change without my recompiling it.

On the latest project I was on, the Python code generated by Codex was “correct” on the happy path, but there were subtle bugs in the distributed locking mechanics and some of the other concurrency controls I specified. Ironically, both were caught by throwing the code into ChatGPT in thinking mode.
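
To make that concrete, here is a minimal, hypothetical sketch of the kind of bug I mean (not the actual project code; a plain in-process dict stands in for the real distributed lock store): the happy path acquires and releases cleanly, but the release step never checks that the caller still owns the lock.

    import time
    import uuid

    LOCKS = {}  # key -> (owner_token, lease_expiry)

    def acquire(key, ttl=5.0):
        now = time.time()
        holder = LOCKS.get(key)
        if holder is None or holder[1] < now:   # free, or previous lease expired
            token = uuid.uuid4().hex
            LOCKS[key] = (token, now + ttl)
            return token
        return None

    def release(key, token):
        # BUG: deletes unconditionally instead of checking that `token` still
        # matches the current owner. A client whose lease has expired can free
        # a lock that a second client has since acquired.
        LOCKS.pop(key, None)

    def release_checked(key, token):
        # The fix: only the current owner may release.
        holder = LOCKS.get(key)
        if holder is not None and holder[0] == token:
            LOCKS.pop(key, None)

A test that only exercises acquire-then-release in a single client passes either way; the bug only shows up when a lease expires mid-critical-section, which is exactly the sort of thing the second review pass surfaced.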

No one is using an LLM to compute whether a number is even or odd at runtime.


Replies

rileymichael · today at 5:06 PM

> I don’t care what type of weird tricks the compiler did for optimizations.

you might not, but plenty of others do. on the jvm, for example, anyone building a performance-sensitive application has to care about what the compiler emits + how the jit behaves. simple things like accidental boxing, megamorphic call sites preventing inlining, etc. have massive effects.

i've spent many hours benchmarking, inspecting in jitwatch, etc.

skydhash · today at 4:30 PM

Because for all high-level languages, errors surface at the level of the language itself. You do not write programs in Go and then verify them in opcodes with a disassembler. Syntax errors and runtime errors reference the Go files and symbols, not CPU registers.

The same thing happens in JavaScript: I debug it with a JavaScript debugger, not with gdb. Even with a Bash script, you don’t debug it by digging into the source code of the programs it calls; you just consult the man pages.
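
To illustrate with a hypothetical Python example (the same point holds for Go or JavaScript): when high-level code fails, the report is in terms of the source itself, not the layer beneath it.

    import traceback

    def divide(a, b):
        return a / b

    try:
        divide(1, 0)
    except ZeroDivisionError:
        traceback.print_exc()
        # The traceback names this file, the function, and the line number.
        # It never mentions CPU registers or the machine code the interpreter
        # actually executed; the abstraction reports errors at its own level.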

When using an LLM, I would expect not to have to go and verify the generated code to check whether it is actually semantically correct.
