Hacker News

remram · yesterday at 4:38 PM

I tried using MCP to run some custom functions from Ollama and Open WebUI. The experience was not great.

Doing anything with LLMs feels more like arguing than debugging, but this was really surreal: I could see the LLM calling the function with the parameters I requested, but then, instead of giving me the returned value, it would always pretend it didn't know the function and try to guess what the result should be based on its name.
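
For context, here is roughly what the server side of such a custom function looks like with the official MCP Python SDK's FastMCP helper (a minimal sketch; the "add" tool name and logic are made up for illustration):

    # Minimal MCP tool server sketch, using the official Python SDK's
    # FastMCP helper. The "add" tool is a hypothetical example.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-tools")

    @mcp.tool()
    def add(a: int, b: int) -> int:
        """Add two numbers."""
        return a + b

    if __name__ == "__main__":
        # Serves over stdio by default. The client (here, Open WebUI) is
        # responsible for feeding the returned value back to the model as a
        # tool-result message; if that step is dropped, the model has nothing
        # to go on and guesses from the tool's name -- the behavior above.
        mcp.run()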

The protocol itself is really weird: almost based on existing standards, but not quite. It was made by one vendor to fix one vendor's problem. It has the benefit of existing, but I don't know if it deserves much praise.
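
To illustrate the "almost based on standards" part: the wire format is plain JSON-RPC 2.0, but with MCP-specific methods and lifecycle layered on top (tools/list, tools/call, and so on). A sketch of a tool-call exchange, with illustrative values:

    # Sketch of an MCP tools/call exchange. The envelope is standard
    # JSON-RPC 2.0; the method name and result shape are MCP-specific.
    # All values here are illustrative, not from a real session.
    import json

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "add", "arguments": {"a": 2, "b": 3}},
    }
    print(json.dumps(request))

    # Expected response shape per the spec. The client is supposed to hand
    # result["content"] back to the model as a tool-result message:
    # {"jsonrpc": "2.0", "id": 1,
    #  "result": {"content": [{"type": "text", "text": "5"}], "isError": false}}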


Replies

svachalek · yesterday at 4:59 PM

I don't know what model you're using through Ollama, but a lot of people pick up a 4B model and expect it to be ChatGPT when it's like 0.2% of the size. 4B models are mostly toys imo. The latest generation of 8B models is sometimes useful, but often still laughably stupid. 14B starts to have potential; 30B is pretty good.

But remember, the hosted frontier models are still gigantic compared to these, and still make stupid mistakes all the time.

csomar · yesterday at 5:03 PM

Unless you are running DeepSeek/OpenAI/Anthropic models, I suspect your LLM will struggle with the complexity. That said, except for Puppeteer and usebrowser, every MCP server I have tried was complete sh+t. As in: it doesn't really work and will confuse the hell out of your LLM.

never_inline · yesterday at 4:57 PM

I mean, the LLMs you can run on Ollama are usually pretty bad ones.