Hacker News

handfuloflight · 11/08/2024 · 0 replies · view on HN

> LLMs tend to follow the prompt much too closely

> produce large amounts of convoluted code that in the end prove not only unnecessary but quite toxic.

What does that say about your prompting?