Hacker News

valleyer · yesterday at 3:09 PM

The LLM inference itself doesn't "run code" per se (it's just doing tensor math), and besides, it runs on OpenAI's servers, not your machine.


Replies

chongli · yesterday at 3:50 PM

There still needs to be a harness running on your local machine to spawn the processes in their sandboxes. I consider that "part of the LLM" even if it isn't doing any inference.
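To make the distinction concrete, here is a minimal sketch of what such a local harness might look like: the remote model only proposes a command (as text), and a small program on your machine is what actually spawns the process, confined to a scratch directory with a timeout. The function name and the specific confinement choices are illustrative assumptions, not any vendor's actual implementation, and real harnesses layer on much stronger isolation (containers, seccomp, network policy).

```python
import subprocess
import tempfile

def run_sandboxed(argv, timeout=10):
    """Hypothetical harness step: execute a model-proposed command locally.

    The LLM never runs anything itself; it only emits `argv` as text.
    This local code is what creates the process, in a throwaway
    working directory and under a hard timeout. (Real harnesses add
    far stronger isolation than this sketch.)
    """
    with tempfile.TemporaryDirectory() as sandbox_dir:
        result = subprocess.run(
            argv,
            cwd=sandbox_dir,      # confine file writes to a scratch dir
            capture_output=True,  # collect output to send back to the model
            text=True,
            timeout=timeout,      # kill runaway processes
            shell=False,          # no shell interpretation of the command
        )
    return result.returncode, result.stdout, result.stderr

# Pretend the model proposed this command; the harness executes it.
code, out, err = run_sandboxed(["echo", "hello from the sandbox"])
```

In this framing, the "part of the LLM" that touches your machine is exactly this harness loop: it is ordinary local code, even though no inference happens in it.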
