Hacker News

TeMPOraL · 01/21/2025 · 0 replies · view on HN

Sure, but you can still re-prompt them, telling them to just do better.

In case people missed it, I'm referencing an observation recently made by 'minimaxir, described here:

https://minimaxir.com/2025/01/write-better-code/

As it turns out, you can improve the quality of code generated by some LLMs by repeatedly responding to them with just three words: "write better code".
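For anyone who hasn't seen it, the trick is purely mechanical: append the model's reply and the three-word nudge back into the conversation, then call the model again. A minimal sketch, assuming the OpenAI Python client and o1-preview (the article itself used Claude 3.5 Sonnet; the task placeholder and the number of rounds here are mine, not from the article):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    history = [{"role": "user", "content": "Write Python code for <your coding task>."}]

    # Initial attempt, then a few rounds of the three-word nudge.
    for attempt in range(5):
        reply = client.chat.completions.create(model="o1-preview", messages=history)
        answer = reply.choices[0].message.content
        print(f"--- attempt {attempt} ---\n{answer}\n")
        history.append({"role": "assistant", "content": answer})
        history.append({"role": "user", "content": "write better code"})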

While Max only tested this on Claude 3.5 Sonnet, I see no reason why it wouldn't work with the "thinking" models as well. Even if it doesn't, the results might still be interesting. With that in mind, here's the article's experiment applied to o1-preview:

https://cloud.typingmind.com/share/69e62483-45a4-4378-9915-6...

Eyeballing the output, it seems to align with the article's observation.

(o1-preview is the only "thinking" model I currently have API access to; the official ChatGPT app doesn't let me reply to the o1 family, forcing further interactions to be with "non-thinking" models instead.)