Hacker News

bsenftner · today at 11:42 AM

Well, for one, eliminating external tool calling gives the model a measure of security: the tools an LLM calls can be compromised, and without tool calling, compromised tools are simply never invoked.