It would never occur to me that they couldn’t. From a legal POV, that sounds a lot like using your search history against you.
This seems plainly obvious -- chat bots are not attorneys. Why would they be privileged as such? You don't get attorney-client privilege when you put your legal questions into Google, or when you send them to anyone or anything other than an attorney...
An aspect of AI that's really underdiscussed is just the basic switch from doing all your searches logged out to now being forced to be logged in somewhere. That much alone is disqualifying for me.
This seems so obvious to me. Why would you ever put information regarding a legal case you’re party to into an AI chat
Of all the words to use in the title, they chose "prompts" when talking about AI. Had to read it twice because, if you assume the AI "prompts" equivalent, the whole title becomes gibberish.
This is why you should have local models. The local models are good enough for private chats, they might not be as good as the cloud models for precise technical work, but for general sensitive chat you definitely should stick to local.
The obvious business opportunity here is for some lawyer to start running an AI service to do these kinds of things. Anyone who subscribes is a client of the lawyer, who owns the chatbot infrastructure, which would be protected under attorney client privilege.
People in sibling comments raise the question: would a phone call then fall outside attorney-client privilege, since it goes through a "3rd party"? Maybe not the call itself, but the voicemail, for example. Could it be "extracted" for the same purpose? Another way to make it safer would be sharing the "chat" with the lawyer, so that it becomes a medium of communication.
Increasingly, AI seems to be mostly downside. A legal chatbot without attorney-client privilege also implies a medical chatbot may have no HIPAA protection. That renders the service unsafe and therefore unusable, and maybe more importantly... unsalable.
Could there be something like a VPN for AI models? VPP?
You send a prompt to a neutral third party who then sends it to an AI model and then routes the response back to you?
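A minimal sketch of that relay idea, with all names hypothetical: the neutral third party strips anything that identifies the user before forwarding the prompt, and authenticates upstream with its own pooled credentials, so the model provider only ever sees the relay.

```python
# Hypothetical anonymizing relay for AI prompts (a sketch, not a real service).
# The relay removes user-identifying metadata and substitutes its own identity
# before forwarding the request to the model provider.

IDENTIFYING_HEADERS = {"authorization", "cookie", "x-forwarded-for", "user-agent"}

def build_upstream_request(prompt: str, user_headers: dict) -> dict:
    """Return the request the relay would forward to the model provider.

    Only the prompt text survives; credentials and network metadata are
    dropped, and the relay signs the request as itself.
    """
    clean_headers = {
        k: v for k, v in user_headers.items()
        if k.lower() not in IDENTIFYING_HEADERS
    }
    # The relay authenticates as itself, not as the end user (placeholder key).
    clean_headers["Authorization"] = "Bearer RELAY_POOL_KEY"
    return {"headers": clean_headers, "body": {"prompt": prompt}}

# Example: the user's session cookie and API key never reach the provider.
req = build_upstream_request(
    "Summarize the basics of attorney-client privilege.",
    {"Authorization": "Bearer user-secret",
     "Cookie": "session=abc123",
     "Accept": "application/json"},
)
print(req["headers"])
```

Whether such a relay would change the legal analysis is a separate question, of course; it hides who asked, not the fact that a third party saw the prompt.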
Of course lawyers want you to give up your power; they don't want you looking up information that they charge $500 an hour to give you.
Meanwhile, sensible people perform sensitive defense and prosecution related chats anonymously facilitated via local LLMs or cryptocurrency.
tl;dr: privileged communications (see: https://law.usnews.com/law-firms/advice/articles/what-are-pr...) are protected only when they are communications between privileged parties. Everything else can be used against you in a court of law.
> Prosecutors argued that they had a right to demand material that Heppner created with Claude because his defense lawyers were not directly involved, and because attorney-client privilege does not apply to chatbots.
>
> Voluntarily revealing information from a lawyer to any third party can jeopardize the customary legal protections for those attorney communications.
>
> Manhattan-based U.S. District Judge Jed Rakoff ruled in February that Heppner must hand over 31 documents generated by Anthropic's chatbot Claude related to the case.
>
> No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," Rakoff wrote.
If I hand wrote some notes in a notebook or diary, I wouldn't have to hand them over, as I understand it, even with no lawyer in the mix. Same if I wrote some notes in a text file on my computer.
Leaving AI aside, what in particular makes this different from using any other cloud-based software? Does writing a Google Doc to gather my thoughts, or a draft email in Gmail, constitute "revealing information from a lawyer to a third party"?
What if Google has enabled AI features on these? Feels like this area really needs clarity for users, rather than waiting for courts to rule on it.