This has more to do with Voice Activity Detection (VAD) than the latency described in the article
That seems to be the issue: VAD is insufficient here.
Knowing when to respond requires semantic understanding, which probably only the model itself is capable of.
Maybe it’s hard for them to train it to only respond once it seems appropriate to do so?
Exactly. It's a tangent, but clearly a pain point for enough users.
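To make the VAD-vs-semantics distinction concrete, here's a toy sketch (not any product's actual pipeline; the threshold and the filler-word list are made-up assumptions): pure VAD ends the turn on silence alone, while a semantic endpointer also asks whether the utterance looks complete. A real system would use the model itself, or a trained endpointer, rather than this last-word heuristic.

```python
# Toy contrast between silence-only VAD endpointing and a crude
# semantic completeness check. All values here are assumptions.

SILENCE_ENDPOINT_MS = 700  # hypothetical silence threshold

# Trailing words that usually mean the speaker is mid-thought.
INCOMPLETE_TRAILERS = {"and", "but", "so", "because", "um", "uh", "the", "to"}

def vad_end_of_turn(trailing_silence_ms: int) -> bool:
    """Pure VAD: end the turn after enough silence, ignoring content."""
    return trailing_silence_ms >= SILENCE_ENDPOINT_MS

def semantic_end_of_turn(transcript: str, trailing_silence_ms: int) -> bool:
    """VAD gated by a crude semantic check on the last word spoken."""
    if trailing_silence_ms < SILENCE_ENDPOINT_MS:
        return False
    words = transcript.lower().rstrip(".?!,").split()
    return bool(words) and words[-1] not in INCOMPLETE_TRAILERS

# A pause mid-sentence: VAD alone would barge in, the semantic check waits.
print(vad_end_of_turn(900))                                    # True
print(semantic_end_of_turn("I want to book a flight to", 900)) # False
print(semantic_end_of_turn("Book me a flight to Paris", 900))  # True
```

The point of the contrast: both functions see identical audio timing (900 ms of silence), but only the semantic variant notices that "…a flight to" is an unfinished sentence, which is exactly the judgment that seems to need the model's own understanding.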