Hacker News

andai · today at 1:20 PM

See also: https://gfrm.in/posts/agentic-ai/

> I’ve spent the last few months building agents that maintain actual beliefs and update them from evidence — first a Bayesian learner that teaches itself which foods are safe, then an evolutionary system that discovers its own cognitive architecture. Looking at what the industry calls “agents” has been clarifying.

> What would it take for an AI system to genuinely deserve the word “agent”?

> At minimum, an agent has beliefs — not hunches, not vibes, but quantifiable representations of what it thinks is true and how certain it is. An agent has goals — not a prompt that says “be helpful,” but an objective function it’s trying to maximise. And an agent decides — not by asking a language model what to do next, but by evaluating its options against its goals in light of its beliefs.

> By this standard, the systems we’re calling “AI agents” are none of these things.
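The quoted post's criteria (quantified beliefs, an objective function, and decisions made by evaluating options against goals) can be sketched concretely. The following is a minimal illustration of the kind of Bayesian food-safety learner the author describes, not their actual implementation: each food gets a Beta-distributed belief about its safety, the goal is a fixed expected-utility objective, and the agent decides by maximising that objective under its current beliefs. The class name, the utility numbers, and the food names are all hypothetical.

```python
class BayesianFoodAgent:
    """Illustrative sketch: Beta-Bernoulli beliefs about food safety,
    a fixed utility function as the goal, and expected-utility choice."""

    def __init__(self, foods):
        # Belief state: Beta(alpha, beta) over P(food is safe),
        # initialised to the uniform prior Beta(1, 1).
        self.beliefs = {f: [1.0, 1.0] for f in foods}

    def p_safe(self, food):
        # Posterior mean of the Beta distribution.
        a, b = self.beliefs[food]
        return a / (a + b)

    def update(self, food, was_safe):
        # Bayesian update: a safe observation increments alpha,
        # an unsafe one increments beta.
        self.beliefs[food][0 if was_safe else 1] += 1

    def choose(self, nutrition):
        # Decision: maximise expected utility, where eating a food
        # yields its nutrition value if safe and a (hypothetical)
        # penalty of -10 if unsafe.
        def expected_utility(food):
            p = self.p_safe(food)
            return p * nutrition[food] + (1 - p) * (-10.0)
        return max(self.beliefs, key=expected_utility)


agent = BayesianFoodAgent(["berry", "root"])
for _ in range(20):
    agent.update("berry", was_safe=True)
    agent.update("root", was_safe=False)

print(round(agent.p_safe("berry"), 2))          # ≈ 0.95 after 20 safe trials
print(agent.choose({"berry": 5.0, "root": 8.0}))  # picks "berry"
```

The point of the sketch is the separation the post insists on: belief (the Beta parameters), goal (the utility function), and decision (argmax of expected utility) are distinct, inspectable components, rather than a single opaque call to a language model.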