Whispir is a much better TTS than almost anything else. However, when it gets it wrong, oh boy does it get it wrong.
For everything else? Not really. JS thrashing the DOM is as much of a pain as ever. Using icon files instead of either emoji or... text... makes UIs painful and inconsistent.
Everyone using Electron and its broken [0] accessibility, including core Windows features...
These aren't things that can be reasoned away with an LLM. An interface is not just text - it's a reasoned nodegraph. And when I'm blind (it comes and goes), I need the nodegraph, not an image of the screen reinterpreted.
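Roughly what I mean by the nodegraph, sketched in TypeScript. The type and function names are made up for illustration; no real screen reader or platform exposes this exact shape, but the idea is the same:

```typescript
// Illustrative only - not any real screen reader or platform API.
// The point: an interface is structured data, not pixels.
interface AccessibilityNode {
  role: string;                  // e.g. "button", "heading", "textbox"
  name: string;                  // the accessible name that gets announced
  value?: string;                // current value for inputs, sliders, etc.
  states: string[];              // e.g. ["focused", "expanded", "disabled"]
  children: AccessibilityNode[];
}

// A screen reader walks this tree and lets me jump by role
// ("next heading", "next button") instead of scanning a picture of the screen.
function headings(node: AccessibilityNode): string[] {
  const own = node.role === "heading" ? [node.name] : [];
  return own.concat(...node.children.map(headings));
}
```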
I imagine this is actually where LLMs could really help. LLMs are natively browsing the web now, so I suspect LLM-generated descriptions of sites, or even having them re-render a site in a more usable way, are becoming much more feasible.
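A rough sketch of the re-render idea, assuming some generic LLM completion endpoint. The URL, request body, and response shape below are placeholders, not a real API:

```typescript
// Hypothetical sketch: fetch a page and ask an LLM to re-emit it as a
// plain, linear, screen-reader-friendly outline.
// LLM_ENDPOINT and the request/response shapes are placeholders.
const LLM_ENDPOINT = "https://example.com/v1/complete";

async function simplifyPage(url: string): Promise<string> {
  const html = await (await fetch(url)).text();

  const res = await fetch(LLM_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      prompt:
        "Rewrite this page as a short linear outline: headings, links " +
        "with descriptive text, and form fields with their labels.\n\n" + html,
    }),
  });
  return (await res.json()).text; // placeholder response shape
}
```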
I find it very hard to know how to follow best practice. For example, the biggest UK charity for blind people makes social media posts about the importance of text descriptions and alt tags that break what I thought was good practice (they duplicate the post text in the alt tag), and they seem to encourage this.
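To make the confusion concrete, here's the difference as I understood it, sketched with the DOM API (the image and alt text are made up):

```typescript
// Made-up example to show the two approaches.
const img = document.createElement("img");
img.src = "campaign-graphic.png";

// What I thought was good practice: the alt text describes the image itself
// and doesn't repeat the visible post text.
img.alt = "Illustration of a person using a screen reader on a laptop";

// What their posts appear to do: the same sentence from the post body is
// copied into the alt tag, so a screen reader announces it twice.
```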