Hacker News

CharlieDigital · yesterday at 9:19 PM

Because protocols provide structure that increases correctness.

It is not a guarantee (as we see with structured output schemas), but it significantly increases compliance.
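To make the "structure increases compliance" point concrete, here is a minimal sketch of what an MCP-style tool definition looks like and why it helps: the schema constrains what arguments the model may emit, so malformed calls can be caught before execution. The tool name, fields, and the tiny validator are all hypothetical illustrations (a toy subset of JSON Schema, not the MCP SDK or a full validator):

```python
# Hypothetical MCP-style tool definition. The inputSchema is a JSON Schema
# fragment that constrains the arguments the model is allowed to emit.
TOOL = {
    "name": "get_weather",
    "inputSchema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "units": {"type": "string", "enum": ["metric", "imperial"]},
        },
        "required": ["city"],
    },
}

def validate_arguments(args: dict, schema: dict) -> list[str]:
    """Toy subset of JSON Schema validation: checks required keys,
    basic types, and enum membership. Returns a list of errors."""
    errors = []
    type_map = {"string": str, "number": (int, float), "object": dict}
    for key in schema.get("required", []):
        if key not in args:
            errors.append(f"missing required property: {key}")
    for key, sub in schema.get("properties", {}).items():
        if key not in args:
            continue
        expected = type_map.get(sub.get("type"))
        if expected and not isinstance(args[key], expected):
            errors.append(f"{key}: expected {sub['type']}")
        if "enum" in sub and args[key] not in sub["enum"]:
            errors.append(f"{key}: must be one of {sub['enum']}")
    return errors

# A schema-compliant call produces no errors; a malformed one is rejected.
ok_errors = validate_arguments({"city": "Oslo", "units": "metric"}, TOOL["inputSchema"])
bad_errors = validate_arguments({"units": "kelvin"}, TOOL["inputSchema"])
```

The schema does two jobs here: it is shown to the model (so a schema-trained model tends to emit conforming calls), and it gates execution (so nonconforming calls fail fast instead of silently misfiring). That is compliance pressure, not a correctness guarantee.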


Replies

ambicapter · yesterday at 9:30 PM

You're interacting with an LLM, so correctness is already out the window. Model-makers then train LLMs to work better with MCP, which means the only reason MCP increases correctness is that LLMs are specifically trained against it.

So why MCP? Are there other protocols that would provide more correctness once trained against? Have we tried? Maybe a protocol that compresses commands more would take up less context overall, and thereby offer better correctness.

MCP seems arbitrary as a protocol, because it kinda is. It doesn't *cause* the increase in correctness in and of itself; the fact that it *is* a protocol is the reason it may increase correctness. Thus, any other protocol would do the same thing.
