Hacker News

sweaterkokuro · today at 2:31 PM

In my experience it's in all language models' nature to maximize token generation; they're incentivized to generate more wherever possible. So if you don't pin down your parameters tightly, the model will let loose. I usually put down hard requirements for efficient code (less is more) and it gets close to how I would implement it. But as the previous comments say, it all depends on how deeply you integrate yourself into the loop.
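The "hard requirements" tip above could be sketched as a constraint-heavy system prompt. This is only an illustration: the exact wording, the `SYSTEM_PROMPT` text, and the `build_messages` helper are assumptions, not a tested recipe, though the `messages` structure follows the common chat-completion format:

```python
# Illustrative sketch: a system prompt that pins down verbosity up front.
# The specific wording is a hypothetical example, not a verified prompt.
SYSTEM_PROMPT = (
    "You are a senior engineer. Write the minimal code that solves the task. "
    "Hard requirements: no helper functions unless they are reused, "
    "no comments that restate the code, no alternative solutions, "
    "and at most one sentence of explanation."
)

def build_messages(task: str) -> list[dict]:
    """Assemble a chat-completion-style message list with the constraints first."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": task},
    ]

messages = build_messages("Parse a CSV line into fields, handling quoted commas.")
print(messages[0]["role"], "->", len(messages), "messages")
```

The point is simply that the constraints live in the system message, so every request inherits them rather than relying on the user remembering to restate "less is more" each time.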


Replies

anthonyrstevens · today at 3:52 PM

>> They have been natively incentivized to generate more where possible

Do you have any evidence of this?
