
PebblesHD · last Tuesday at 10:25 AM · 5 replies

Rather than improving testing for fallible accessibility assists, why not leverage AI to eliminate the need for them? An agent on your device can interpret the same page a sighted or otherwise unimpaired person would, giving you, as a disabled user, the same experience they would have. Why would that not be preferable? It also puts you in control of how you want that agent to interpret pages.


Replies

simonw · last Tuesday at 10:50 AM

I'm optimistic that modern AI will lead to future improvements in accessibility tech, but for the moment I want to meet existing screenreader users where they are and ensure the products I build are as widely accessible as possible.

K0nserv · last Tuesday at 11:39 AM

It adds loads of latency, for one. If you watch a competent screen reader user, you'll notice they have the speech rate set very high; to you it would be hard to understand anything. Adding an LLM in the middle of this will add at least hundreds of milliseconds of latency to every interaction.

eru · last Tuesday at 10:33 AM

What you are describing is something the end user can do.

What simonw was describing is something the author can do, and the end user can benefit whether they use AI or not.

8organicbits · last Tuesday at 12:22 PM

The golden rule of LLMs is that they can make mistakes and you need to check their work. You're describing a situation where the intended user cannot check the LLM output for mistakes. That violates a safety constraint and is not a good use case for LLMs.

devinprater · last Tuesday at 2:12 PM

I, myself, as a singular blind person, would absolutely love this. But we ain't there yet. On-device AI isn't fine-tuned for this, and neither Apple nor Google has shown indications of working on it in release software, so I'm sure we're a good 3 years away from the first version of this.