
lpcvoid today at 9:23 AM

And have it hallucinate stuff? Nah, this stuff is hard enough without LLMs guessing.


Replies

empiricus today at 9:40 AM

Well, I mean just choosing better names, without touching the actual code. And you can also add a basic human filtering step if you want. You cannot possibly say that "v12" is better than "header.size". I would argue that even hallucinated names are good: you should be able to think "but this position variable is not quite correctly updated, maybe this is not the position", which seems better than "this v12 variable is updated in some complicated way which I will ignore because it has no meaning".
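
A minimal sketch of that rename-only step, assuming the LLM returns a name map and the decompiler output is plain text (the names and snippet below are invented for illustration, not from any real tool):

```python
import re

# Hypothetical rename map, as an LLM might propose for decompiler output.
renames = {"v12": "header_size", "a1": "packet"}

def apply_renames(src: str, renames: dict[str, str]) -> str:
    # Whole-word substitution only, so operators and literals stay untouched.
    # (A real tool should also skip string literals and comments.)
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, renames)) + r")\b")
    return pattern.sub(lambda m: renames[m.group(1)], src)

decompiled = "int v12 = *(int *)(a1 + 8);"
print(apply_renames(decompiled, renames))
# -> int header_size = *(int *)(packet + 8);
```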

sigseg1v today at 12:52 PM

If you ask an LLM to do a statically verifiable task without writing a simple verifier for it, and it hallucinates, that mistake is on you because it's a very quick step to guarantee something like this succeeds.
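
For renaming, such a verifier can be tiny: check that the LLM's output matches the original token-for-token, with identifiers related by one consistent one-to-one mapping. A minimal sketch, assuming C-like decompiler text and a rough lexer (all names below are invented):

```python
import re

# Rough C-ish lexer: identifiers/keywords, numbers, strings, punctuation runs.
TOKEN_RE = re.compile(
    r"""[A-Za-z_]\w*              # identifiers and keywords
      | 0[xX][0-9a-fA-F]+         # hex literals
      | \d+(?:\.\d+)?             # decimal literals
      | "(?:\\.|[^"\\])*"         # string literals
      | '(?:\\.|[^'\\])*'         # char literals
      | [^\sA-Za-z0-9_]+          # runs of operators/punctuation
    """,
    re.VERBOSE,
)

KEYWORDS = {"if", "else", "for", "while", "do", "return", "switch", "case",
            "break", "continue", "goto", "sizeof", "struct", "typedef",
            "static", "const", "void", "int", "char", "long", "unsigned"}

def tokens(src):
    return [m.group(0) for m in TOKEN_RE.finditer(src)]

def renaming_only(original, renamed):
    """True iff the sources match token-for-token, with identifiers
    related by one consistent one-to-one mapping."""
    a, b = tokens(original), tokens(renamed)
    if len(a) != len(b):
        return False
    fwd, rev = {}, {}
    for ta, tb in zip(a, b):
        if (ta[0].isalpha() or ta[0] == "_") and ta not in KEYWORDS:
            if fwd.setdefault(ta, tb) != tb or rev.setdefault(tb, ta) != ta:
                return False   # inconsistent or colliding rename
        elif ta != tb:
            return False       # a keyword, literal, or operator changed
    return True

before = "int v12 = *(int *)(a1 + 8); return v12 * 2;"
after  = "int header_size = *(int *)(packet + 8); return header_size * 2;"
print(renaming_only(before, after))                           # True
print(renaming_only(before, "int v12 = *(int *)(a1 + 16);"))  # False
```

If the check fails, the suggestion is simply discarded, so a hallucinated edit never reaches the reader.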

orbital-decay today at 11:46 AM

It's a labeling task with benign failure modes, much better suited to an LLM than generation.

jitl today at 9:46 AM

I think for Obj-C specifically (can't speak to other langs) I've had a great experience. It does make little mistakes, but an AI-oriented approach makes it faster/easier to find areas of interest to analyze or experiment with.

Obj-C's objc_msgSend use makes it more similar to understanding minified JS than decompiling static C, because it literally calls many methods by string name.
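
A loose analogy in Python (getattr is not objc_msgSend, and the class below is invented, but the string-keyed runtime lookup is the same idea, which is why those method names survive in the binary):

```python
class PacketHeader:
    # Invented example class, standing in for an Obj-C object.
    def size(self) -> int:
        return 42

obj = PacketHeader()
selector = "size"                # in Obj-C, the selector is stored as a string
method = getattr(obj, selector)  # runtime lookup by name, like objc_msgSend
print(method())                  # -> 42
```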

Cthulhu_ today at 1:14 PM

I'd argue that as long as it produces working code, it's better than nothing in this case.
