Glad you liked the website, it was such a fun project. We're getting the hug of death from HN, so that might be why you're getting a worse experience. Please try again :)
Tried again today. Latency seemed a little better, but he still interrupts himself a lot to change thoughts.
I'm still most impressed by the image recognition: it could clearly read even tiny or partially obscured print on products I held up and name them accordingly. Curious how you're achieving that level of fidelity without sacrificing throughput.
Just tried this. Most amazing thing I've ever seen. Utterly incredible that this is where we're at.
It was disabled yesterday due to the high traffic, but I was able to connect today. After I said hello and asked a question, the chat immediately kicked me off. So unfortunately I haven't been able to test it for more than a few seconds beyond the "Hello, how can I help you today?"
One thing I've noticed with a lot of these AI video agents, including Meta's teaser for their virtual agents and some other companies' demos, is that they seem to love to move their head constantly. It makes them all a bit uncanny, like a video game NPC that reacts with a head movement on every utterance. It's less apparent in short 5-10s video clips, but the longer the clip, the more the constant head movements give it away.
I'm assuming this is, of course, a well-known and tough problem that's being worked on, since swinging too far in the other direction toward stiff, minimal head movement would make it even more uncanny. I'd love to hear what has been done to tackle the problem, or whether at this point it's an accepted "tell" that lets one know when they're speaking with a virtual agent.