
rlpblast · Friday at 12:52 PM

> That doesn't take an LLM to accomplish, I don't think. After all, a car has a limited number of functions. It should be mostly a matter of broadening the voice recognition dictionary and expanding the fixed logic to deal with that breadth.

I think the most effective way to make this accurate is to give an LLM the user’s voice prompt plus the current context and ask it to convert the request into an API call. The user wouldn’t be chatting with the LLM directly.

The point is that it doesn’t require a static dictionary that already contains your exact phrasing; it would just work with plain English.
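Concretely, something like the sketch below: an OpenAI-style tool-calling setup where the car's controls are exposed as tool definitions and the model is forced to emit exactly one call. The model name, the car functions (set_cabin_temperature, open_window) and their parameters are all made-up placeholders, not a real car API.

```python
import json
from openai import OpenAI  # assumes the official openai Python SDK

client = OpenAI()

# Hypothetical car controls exposed to the LLM as tools.
# Names and parameters here are illustrative only.
CAR_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "set_cabin_temperature",
            "description": "Set the target cabin temperature.",
            "parameters": {
                "type": "object",
                "properties": {"celsius": {"type": "number"}},
                "required": ["celsius"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "open_window",
            "description": "Open a window partially or fully.",
            "parameters": {
                "type": "object",
                "properties": {
                    "position": {"type": "string",
                                 "enum": ["driver", "passenger", "rear_left", "rear_right"]},
                    "percent_open": {"type": "integer", "minimum": 0, "maximum": 100},
                },
                "required": ["position", "percent_open"],
            },
        },
    },
]

def voice_request_to_api_call(transcript: str, context: dict) -> tuple[str, dict]:
    """Map a free-form voice transcript plus current vehicle state to one API call."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any tool-calling-capable model would do
        messages=[
            {"role": "system",
             "content": "Translate the driver's request into exactly one car API call. "
                        f"Current vehicle state: {json.dumps(context)}"},
            {"role": "user", "content": transcript},
        ],
        tools=CAR_TOOLS,
        tool_choice="required",  # force a tool call rather than a chat reply
    )
    call = response.choices[0].message.tool_calls[0]
    return call.function.name, json.loads(call.function.arguments)

# "Crack my window a little" -> e.g. ("open_window", {"position": "driver", "percent_open": 30})
name, args = voice_request_to_api_call(
    "It's a bit stuffy in here, crack my window a little",
    {"speed_kph": 90, "cabin_temp_c": 26, "windows_closed": True},
)
print(name, args)
```

The LLM never talks back to the driver; it only picks a function and fills in the arguments, and the car's existing fixed logic executes (or refuses) the call.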