> LLMs really build that bridge to precisely the answers I want.
It is interesting that you describe this as "the answers you want" and not "the correct answer to the question I have"
Not criticising you in particular, but this approach sounds to me like it has a good chance of simply reinforcing existing biases.
In fact, the approach sounds very similar to "find a Wikipedia article and then dig through the sources to find the original place where the answers I want were published".
Agreeable LLMs and embedded bias are surely a risk, but I don't think this is a helpful frame. Most questions don't have correct answers, so it follows that you'd want practical answers for those, and correct answers for the remainder.
> It is interesting that you describe this as "the answers you want" and not "the correct answer to the question I have"
Though I think you're reading more into my phrasing than I meant, the overall skepticism is fair.
One thing I do have to be mindful of is asking the AI to check for alternatives, for dissenting or hypothetical answers; sometimes I just ask it to rephrase to check for consistency.
I can also say "Verify that", and ChatGPT will do a real-time search so I can read the web pages it finds. Occasionally it will "correct itself" once it does that search.
But doing all of that still takes way less time than searching for needles buried by SEO-optimized garbage and well-meaning but repetitious summaries.