I'm surprised no one has tried to skip cameras altogether and use ultrasound. Mics would need two or maybe even three orders of magnitude less data for an audio -> object inference stack vs. visual. Of course you can't detect colors or do A LOT of things, but hey, you could make glasses really look like regular glasses, especially if you got rid of the screens too.
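A back-of-the-envelope sketch of where that "orders of magnitude" intuition comes from: raw input bandwidth. Every number below (sample rate, mic count, sensor resolution, bit depths) is my own assumption for illustration, not a measurement of any real device, and with these particular values the gap is roughly two orders of magnitude on raw input alone, before any inference:

```python
# Back-of-the-envelope raw data-rate comparison: ultrasound mic array vs. a
# typical camera. All numbers here are assumed values, not measurements.

SAMPLE_RATE_HZ = 192_000      # high enough to capture ultrasound up to ~96 kHz
BITS_PER_SAMPLE = 16
NUM_MICS = 2                  # small array so you can get direction information

audio_bps = SAMPLE_RATE_HZ * BITS_PER_SAMPLE * NUM_MICS       # ~6.1 Mbit/s

WIDTH, HEIGHT, FPS = 1920, 1080, 30
BITS_PER_PIXEL = 12           # raw Bayer output from a typical mobile sensor

video_bps = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL             # ~746 Mbit/s

print(f"audio: {audio_bps / 1e6:.1f} Mbit/s")
print(f"video: {video_bps / 1e6:.1f} Mbit/s")
print(f"video/audio: ~{video_bps / audio_bps:.0f}x")          # ~120x raw input
```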
> I'm surprised no one has tried to skip cameras altogether and use ultrasound.
The ratios of image resolution and viewing distance to physical sensor size are veeeeeeery bad with sound compared to cameras, though: angular resolution scales with wavelength, and ultrasound wavelengths are millimeters where visible light is hundreds of nanometers. Cameras are also completely passive sensors that don't require an attached emitter in most circumstances.
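To put rough numbers on that, here's a minimal sketch using the diffraction limit (angular resolution ≈ wavelength / aperture). The specific transducer frequency, light wavelength, and glasses-sized apertures are assumed values I picked for illustration:

```python
import math

# Diffraction-limited angular resolution: theta ~= wavelength / aperture
# (small-angle approximation, ignoring the 1.22 Rayleigh factor).

def angular_resolution_deg(wavelength_m: float, aperture_m: float) -> float:
    """Smallest resolvable angular feature, in degrees."""
    return math.degrees(wavelength_m / aperture_m)

SPEED_OF_SOUND = 343.0                             # m/s in air
ultrasound_wavelength = SPEED_OF_SOUND / 40_000    # 40 kHz transducer -> ~8.6 mm
light_wavelength = 550e-9                          # green light

# Both "sensors" sized to fit on a glasses frame.
sonar = angular_resolution_deg(ultrasound_wavelength, aperture_m=0.02)   # 2 cm array
camera = angular_resolution_deg(light_wavelength, aperture_m=0.002)      # 2 mm lens

print(f"sonar:  ~{sonar:.1f} deg per resolvable feature")    # ~24.6 deg
print(f"camera: ~{camera:.4f} deg")                          # ~0.0158 deg
print(f"camera resolves ~{sonar / camera:.0f}x finer detail")  # ~1500x
```

So even a generously sized sonar array resolves features thousands of times coarser than a tiny camera lens, which is the "veeeeeeery bad" ratio in concrete terms.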
>you could make glasses really look like regular glasses
The cameras are not what makes the glasses bulky, and people find a lot of utility in taking and sharing pictures and videos from their glasses. So you'll probably always want at least one camera on the product for that use case.
They tried it back with the Power Glove.
Not sure why you think we have off-the-shelf miniaturized sonar hardware at scale, plus shape-detection tech that could beat out mobile cameras and computer vision software.
Glasses as a computer form factor is not really proven out yet, but cameras on the glasses are one of the things people are actually using the Meta Raybans for. One of the primary things people do with them is capture POV video. Take away the cameras and you're left with what? ChatGPT on command and headphones, and that's it? The Humane Pin would like a word. People buy smart glasses specifically for a rich feature set, the more the better (because it's a nerd/early adopter product as of now).
And in the real world, people just do not care about cameras on glasses as much as HN commenters trotting out the glasshole articles from a decade ago would suggest. Both smart glasses and phones that are actively recording are everywhere already.