
raincole, today at 3:09 AM

Yeah, reminds me of this: https://news.ycombinator.com/item?id=46929505

> I have a source file of a few hundred lines implementing an algorithm that no LLM I've tried (and I've tried them all) is able to replicate, or even suggest, when prompted with the problem. Even with many follow-up prompts and hints.

People making this kind of claim never post the question and the prompts they tried. Because if they did, everyone would see that they just don't know how to prompt.


Replies

ares623, today at 3:30 AM

At what point will the proper way to prompt just be built in? If the "proper way to prompt" is so well understood, why isn't it built in already?

logicprog, today at 8:38 AM

Eh, I think that one is fair. LLMs aren't great at super novel solutions.