That's because there has been rapid improvement in LLMs.
Their tendency to bullshit is still an issue, but with healthy skepticism and a bit of logic it can be managed. The problematic cases are where they're used without any real supervision.
Enabling human learning is a natural strength of LLMs and works well, since learning tends to be multifaceted and the information received is usually put to the test as part of the process.