My biggest issue with LLM‑assisted webpages (Claude Code is especially egregious) is the lack of respect for basic web content accessibility guidelines.
The number of dark‑mode sites I’ve seen where the text (and subtext) are various shades of dark brown or beige is just awful. For reference, WCAG AA requires a contrast ratio between text and background of at least 4.5:1 for normal text (3:1 for large text); 7:1 gets you AAA, which is the safe side.
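If you want to check this yourself, WCAG 2.x defines the contrast ratio as (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker color. A minimal Python sketch (plain #rrggbb hex only, no alpha):

```python
def _channel(c8: int) -> float:
    """Linearize one 8-bit sRGB channel per the WCAG 2.x relative-luminance formula."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    """Relative luminance of a #rrggbb color."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast(fg: str, bg: str) -> float:
    """WCAG contrast ratio: 1:1 (identical colors) up to 21:1 (black on white)."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum:
print(round(contrast("#000000", "#ffffff"), 1))  # 21.0
# Grey on white: #767676 squeaks past AA (4.5:1), #777777 does not:
print(round(contrast("#767676", "#ffffff"), 2))  # 4.54
print(round(contrast("#777777", "#ffffff"), 2))  # 4.48
```

That #767676 vs #777777 pair is why eyeballing it fails; a one-step-lighter grey silently drops below AA.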
This isn't even that hard to fix - hell, you can add the Web Content Accessibility Guidelines to a skill.
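For what it's worth, a Claude Code skill is just a SKILL.md file with YAML frontmatter; something like this (the name and wording here are my own guess, not an official skill) would do it:

```markdown
---
name: wcag-contrast
description: Apply WCAG 2.x contrast rules whenever generating or editing CSS colors.
---

When choosing text/background colors:
- Normal text needs a contrast ratio of at least 4.5:1 against its background (WCAG AA).
- Large text (18pt+, or 14pt bold) needs at least 3:1.
- Aim for 7:1 (AAA) when the palette allows it.
- Never put low-contrast brown/beige text on a dark background.
```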
I've genuinely had solid results from telling Claude "... and make sure it has good accessibility".
If you have some good sources, let me know - I'll turn it into a guide that Claude can read.
I think this is a second-order thing when you are building a side project.
I think it's fine, so long as the intent is to refine the thing after you've validated the product idea and direction. There are a million things to optimize in web pages, and AI can't simply one-shot good decisions yet.
Honestly, accessibility on my apps/websites is much better now with AI, because you can just tell the AI to do it (and run automated tests to validate it worked), versus not doing it at all for a small side project with 2 users.
Just chiming in to say I don't care at all about accessibility and I find it bewildering that every thread sharing some project has a comment like this.
I think accessibility is a really admirable thing and helpful to society (like ramps or accessible parking). But stop shoving your wants on others when you can fix it on your own. Just write a Chrome extension using AI that adjusts the CSS to a contrast ratio of your choice. You could even use a local LLM to figure out replacement colors.
Accessibility fixes that can be applied on the client side shouldn't have to be a concern on the server side.
Were/Are human-generated side projects better in this respect?