That's why LLMs will eventually be used only for the initial interaction with the user in their own language, preparing the data for a specialized model.
Imagine face recognition working like a text chat, where the PC gets the frame from the camera and writes in the chat: "Who's that? Here's the RGB888 image in hex: ...".
Wouldn't this be faster with an agent skill that has code?
/skill-creator [or /create-skill] Write an agent skill with code script(s) that use an existing user-space IP library that works with your agent runtime, to [...]
ComposioHQ/awesome-claude-skills: https://github.com/ComposioHQ/awesome-claude-skills
anthropics/skills/skill-creator/SKILL.md: https://github.com/anthropics/skills/blob/main/skills/skill-...
/.agents/skills/skill-name/SKILL.md, scripts/{script_name.py,__init__.py}
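The point of that layout is that the scripts the skill ships are plain deterministic code. As a hedged sketch (file name and function are hypothetical, not from any real skill), the face-recognition joke above reduces to a few lines that decode the hex RGB888 frame back into pixels with no model in the loop:

```python
# scripts/decode_frame.py -- hypothetical script a skill like this might ship.
# Decodes a hex-encoded RGB888 frame into row-major (r, g, b) pixel tuples,
# deterministically, in microseconds -- no chat round-trip required.

def decode_rgb888_hex(hex_frame: str, width: int, height: int):
    """Turn a hex string like 'ff0000...' into rows of (r, g, b) tuples."""
    raw = bytes.fromhex(hex_frame)
    if len(raw) != width * height * 3:
        raise ValueError("frame size does not match width * height * 3")
    # Slicing bytes and calling tuple() yields plain ints per channel.
    pixels = [tuple(raw[i:i + 3]) for i in range(0, len(raw), 3)]
    return [pixels[r * width:(r + 1) * width] for r in range(height)]

if __name__ == "__main__":
    # A 2x1 frame: one red pixel, one green pixel.
    print(decode_rgb888_hex("ff000000ff00", width=2, height=1))
    # [[(255, 0, 0), (0, 255, 0)]]
```

Hand that same frame to an LLM as hex text and you pay for thousands of tokens to get a worse answer.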
Next up: Claude replacement to handle simdjson processing.
Perhaps one day, all network services will be provided by LLMs natively. Truly, that would be a day in the future.
Think about how much faster it would've been with a small local model!