Hacker News

morpheuskafka · today at 6:52 AM

Surely at least part of the issue here is that an LLM generates output at tens of tokens per second (not to mention the extra tokens burned in "thinking/reasoning" mode), while a real autopilot probably has response times in the tens of milliseconds. Plus there's network latency on top of that, unless the LLM runs locally.
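
To put rough numbers on it, here's a back-of-envelope sketch. Every figure is an illustrative assumption (decode speed, token counts, network round-trip, control-loop rate), not a measurement:

```python
# Back-of-envelope latency comparison: hosted LLM vs. conventional autopilot.
# All constants below are assumptions for illustration, not measurements.

TOKENS_PER_SECOND = 50   # assumed LLM decode speed ("two-digit" tokens/s)
REASONING_TOKENS = 200   # assumed tokens spent in thinking/reasoning mode
ANSWER_TOKENS = 20       # assumed tokens in the actual decision output
NETWORK_RTT_S = 0.1      # assumed round-trip to a hosted model, in seconds

llm_latency_s = (REASONING_TOKENS + ANSWER_TOKENS) / TOKENS_PER_SECOND + NETWORK_RTT_S

AUTOPILOT_HZ = 50        # assumed control-loop rate for a conventional autopilot
autopilot_latency_s = 1 / AUTOPILOT_HZ

print(f"LLM decision latency:   {llm_latency_s:.2f} s")   # 4.50 s
print(f"Autopilot loop latency: {autopilot_latency_s:.3f} s")  # 0.020 s
print(f"Ratio: ~{llm_latency_s / autopilot_latency_s:.0f}x slower")  # ~225x
```

Under these assumptions the LLM is two orders of magnitude behind, and that gap only grows with longer reasoning traces.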