I have kept track of a few instances where AI was applied to real, genuine problems.
Not trivial ones: problems with competing candidate solutions, known errors, and unresolved history.
AI did not "solve" any of these issues on its own, but what stood out to me was the speed at which ideas could be rewritten, restructured, and stress-tested.
A mental model that has been useful to me: AI is not particularly good at producing the first answer, but it is very good at producing the second, third, and tenth versions of the answer, especially once a human has already identified the first answer as weak.
In these instances, the progress seemed to stem from the AI being able to:
- quickly reword and restate a given argument,
- convert implicit assumptions into explicit ones, and
- identify small gaps in logic before they became large.
What I have been grappling with is how to tell when AI is merely clarifying versus when it is silently hallucinating structure. Is its output being treated as a draft, a reviewer, a rubber duck, or some combination? At what point does the speed of the output compromise the rigor of the thinking? I am interested in how others are using AI for hard thinking, not just for writing cleanup.