> All an LLM does is introduce the possibility that a command that worked fine yesterday will randomly not work
Aren't hallucinations inherent to GenAI? I'd assume "AI" voice recognition doesn't have that baked in, but I'm not working in either of those spaces, so maybe I'm missing the details. So many things are being lumped under the "AI" umbrella that would just have been called machine learning or pattern recognition a decade ago (e.g. "facial recognition" vs. "AI", at a time when "AI" also means chatbots like ChatGPT).
The point is that Amazon is adding an “Alexa+” mode that uses LLMs. The plain voice recognition plus keyword matching, or however the old version works, is more reliable (I assume; I didn’t use the new mode much because it immediately failed at what I wanted).