Hacker News

raincole, today at 10:43 AM

When it comes to LLMs, you really cannot draw conclusions from first principles like this. Yes, it sounds reasonable. But things in reality aren't always reasonable.

Benchmark or nothing.


Replies

samus, today at 11:01 AM

There have been papers about introducing thinking tokens in intermediate layers that get stripped from the output.
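As a rough illustration of the idea (all names here are hypothetical, not from any specific paper): the model is allowed to emit special "thinking" tokens that buy it extra computation during generation, and the decoding loop filters them out so they never reach the user-visible output.

```python
# Hypothetical sketch: THINK_TOKEN is an illustrative placeholder,
# not the token used by any particular model or paper.
THINK_TOKEN = "<think>"

def strip_thinking_tokens(tokens):
    """Drop thinking tokens so only the answer appears in the output."""
    return [t for t in tokens if t != THINK_TOKEN]

# A raw decode might interleave thinking tokens with answer tokens:
raw = ["The", "<think>", "<think>", "answer", "<think>", "is", "42"]
print(" ".join(strip_thinking_tokens(raw)))  # -> The answer is 42
```

The point of the benchmark-or-nothing argument above still applies: whether those extra tokens actually improve quality is an empirical question, not something the sketch can settle.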