This doesn't pass the sniff test. We have plenty of ways to verify good software, or you wouldn't be making this post. You know what bad software is and what it looks like. We want something fast that doesn't throw an error every three page navigations.
You can ask an LLM to write code in whatever language you want, and it can be pretty good at writing efficient code, too. Nothing about npm bloat is keeping you from making a lean website. And AI could theoretically be great at testing every part of a website: benchmarking load speeds, trying different viewports, etc.
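To be concrete about the "benchmarking speeds" half of that: the mechanical part is easy to script today. Here's a minimal sketch in Python, using only the standard library and a throwaway local server as a stand-in for a real site (in practice an agent would drive an actual browser across viewports, e.g. with something like Playwright, which is an assumption, not something the tooling gives you for free):

```python
import http.server
import threading
import time
import urllib.request

# Stand-in page; a real run would hit the deployed site instead.
PAGE = b"<html><body>ok</body></html>"

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, *args):  # silence per-request logging
        pass

def benchmark(url, runs=5):
    """Fetch `url` several times; return (last status, mean seconds)."""
    times = []
    status = None
    for _ in range(runs):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
            status = resp.status
        times.append(time.perf_counter() - start)
    return status, sum(times) / len(times)

# Spin up the local server on a free port, benchmark it, shut it down.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
status, mean = benchmark(f"http://127.0.0.1:{server.server_port}/")
server.shutdown()
print(status, round(mean, 4))
```

The point is that an LLM can generate and run this kind of harness all day. What it can't do is look at the rendered result and feel that something is off, which is the next paragraph's problem.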
But unfortunately we are still on the LLM train. It just doesn't have anything built in to do what we do, which is use an app and intuitively understand "oh, this is shit." And even if you let your LLM click through the site, it would be shit at matching visual problems to the actual code behind them. You can forget about LLMs for real frontend work for a few years.
And they just get worse as context grows, so any non-trivial application is going to produce a lot of strange broken artifacts, because text prediction isn't great when your application has numerous hidden rules.
So as much as I like a good laugh at failing software, I don't think you can blame the people shipping it for this one. LLMs are not struggling in software development because they are averaging a lot of crap code; it's because we have not gotten them past unit tests and verifying output in the terminal yet.