No, it's prone to assuming or falsifying details even when it has tools at hand that could verify them — even when explicitly instructed to perform a specific tool call that would load the correct information into its context. Sometimes the pull of the training data is too strong: it simply won't make the call and outputs garbage, all the while claiming otherwise.