> When a question touches restricted data — student PII, sensitive HR information — the agent doesn’t just refuse. It explains what it can’t access and proposes a safe reformulation. "I can’t show individual student names, but here’s the same analysis using anonymized IDs."
This part is scary. It implies that if I'm in a department that shouldn't have access to this data, the AI will still run the query for me and then do some post-processing to "anonymize" the results. That isn't how security is supposed to work: filtering output after the fact, instead of restricting access up front, is the same class of mistake as sanitizing SQL by hand. Did we learn nothing from SQL injection?
In the strongest interpretation, it would offer only data the user is allowed to access. Why assume that a team which implemented a feature to prevent PII exposure would then turn around and return data the user isn't supposed to access?
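The distinction the two comments are arguing over can be sketched in a few lines: the unsafe pattern runs the query with the agent's broad credentials and scrubs PII afterward, while the safe pattern checks the user's entitlements before the query executes and refuses outright. A minimal illustration, with entirely hypothetical role names and columns:

```python
# Hypothetical entitlements: which columns each role may read.
# An "analyst" role gets anonymized IDs but no name column.
ENTITLEMENTS = {
    "registrar": {"student_id", "student_name", "gpa"},
    "analyst": {"student_id", "gpa"},
}

# Toy dataset standing in for a real table.
ROWS = [
    {"student_id": "a1", "student_name": "Alice", "gpa": 3.9},
    {"student_id": "b2", "student_name": "Bob", "gpa": 3.1},
]

def query(role: str, columns: list[str]) -> list[dict]:
    """Enforce access control up front: refuse before touching data
    if the role lacks any requested column, rather than fetching
    everything and masking the output afterward."""
    allowed = ENTITLEMENTS.get(role, set())
    denied = [c for c in columns if c not in allowed]
    if denied:
        raise PermissionError(f"{role} may not read: {denied}")
    return [{c: row[c] for c in columns} for row in ROWS]
```

Under this reading, the agent's "safe reformulation" isn't post-processing at all; it's proposing a query that the user's own entitlements already permit.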