Then you're doing it wrong?
I'd need to see a few examples, but this is easily solved by giving the LLM more context, really any context at all. Give it the version number, give it a URL to the docs. Better yet, git clone the repo and tell it to reference the source.
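For what it's worth, the difference shows up even with a plain API call. Here's a rough sketch, assuming the Anthropic Python SDK; the library name, version, docs URL, repo path, and model string are all made up for illustration:

```python
from pathlib import Path
import anthropic  # assumes the Anthropic Python SDK; any chat API works the same way

# Pull in the actual source the model should reason about, e.g. from a vendored clone.
# The repo path and file selection here are illustrative.
repo = Path("vendor/somelib")
source_excerpts = "\n\n".join(
    f"# {p}\n{p.read_text()}" for p in sorted(repo.glob("src/**/*.py"))[:5]
)

prompt = f"""We are using somelib 3.2.1 (docs: https://example.com/somelib/3.2.1).
Relevant source from our vendored copy is below; use it rather than guessing the API.

{source_excerpts}

Task: explain why calling foo() with a keyword argument raises TypeError in this version."""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
reply = client.messages.create(
    model="claude-opus-4-5",  # model name is illustrative
    max_tokens=2000,
    messages=[{"role": "user", "content": prompt}],
)
print(reply.content[0].text)
```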
Apologies for using you as an example, but this is a common theme among people who slam LLMs: they ask it a specific or complex question with little context and then complain when the answer is wrong.
I’ve specified many of these things and still had it fall on its face. And at some point, I’m providing so much detail that I may as well do it myself, which is ultimately what ends up happening.
Also, it seems like assuming the latest version would make much more sense than assuming a random version from 10 years ago. If I were handing work off to another person, I would expect to only need to specify the version if it was down-level, not when I'm using the latest stable release.
This is exactly the issue that most people run into, and it's literally the GIGO principle that we should all be familiar with by now. If your design spec amounts to "fix it", then don't be surprised at the results. One of the major improvements I've noticed in Claude Code using Opus 4.5 is that it will often read the source of the library we're using so that it fully understands the API as well as the implementation.
You have to treat LLMs like any other developer you'd delegate work to: provide them with a well-thought-out specification of the feature they're building, or enough detail about how to reproduce a bug for them to diagnose and fix it. If you want their code to conform to the style you prefer, then you have to give them a style guide and examples, or provide a linter and code formatter and let them know how to run it.
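Concretely, the brief and the verification loop might look something like this minimal sketch. It assumes a Python project linted with ruff; the tool choice, style rules, and function names are just examples, not anyone's actual setup:

```python
import subprocess
import tempfile
from pathlib import Path

# Example style rules you'd hand to the model; contents are illustrative.
STYLE_GUIDE = """\
- 4-space indents, max line length 100
- Type hints on all public functions
- Google-style docstrings
"""

def brief(feature_spec: str, repro_steps: str) -> str:
    """Build a delegation prompt: spec, repro steps, style guide, and how we lint."""
    return (
        f"Feature spec:\n{feature_spec}\n\n"
        f"How to reproduce the current bug:\n{repro_steps}\n\n"
        f"Style guide:\n{STYLE_GUIDE}\n"
        "Your code must pass `ruff check` and `ruff format --check` before review.\n"
    )

def passes_lint(generated_code: str) -> bool:
    """Run the same linter we told the model about on whatever it sends back."""
    with tempfile.TemporaryDirectory() as d:
        candidate = Path(d) / "candidate.py"
        candidate.write_text(generated_code)
        check = subprocess.run(["ruff", "check", str(candidate)], capture_output=True)
        fmt = subprocess.run(["ruff", "format", "--check", str(candidate)], capture_output=True)
        return check.returncode == 0 and fmt.returncode == 0
```

The point is the same as with a human contractor: the style guide goes in the brief, and the exact same check runs on what comes back.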
They're getting better at making up for these human deficits as more and more of these common failure cases are recorded, but you can get much better output right now by simply putting some thought into how you use them.