It feels like a private conversation. You type a question like you would to a human, get a quick response (sometimes with a compliment, “Terrific question!”) and move on. But when you're using an AI chat tool, especially with legal work, that sense of privacy can be misleading.
For attorneys, protecting privilege, confidentiality, and privacy is more than a binary setting or preference. Confidentiality, privilege and client trust are not abstract principles. Protecting client materials from disclosure is a daily responsibility and a professional ethical obligation. As AI systems become more integrated into legal workflows, an urgent question arises: Who else might see the data you enter into that chat window? Is the answer similar to asking “who might read this email?” and deleting certain text before clicking send, just in case the email gets forwarded?
The answer might not be as straightforward as you think. Let’s take a closer look at how AI platforms handle your information, why that matters for legal professionals and what steps you can take to protect sensitive data.
For this post, “input” means the documents/media you upload and/or your prompts, and “output” means the content generated by an AI tool in response to your input.
Not all AI tools treat your information the same way. To most users, the free and paid (business) versions of an AI tool look identical, but the difference goes beyond features: the two come with different licenses. Those licenses determine how data is handled, where it goes and who might have access to it in the future. Here are the notable differences:
For outside counsel in particular, this distinction matters.
Using an AI tool under a free license may expose client data in ways that breach confidentiality or violate professional ethics rules. If a problem occurs in these scenarios, the liability falls squarely on the person or firm that (inadvertently) shared the data; the AI provider disclaims liability for pretty much everything. Even if no immediate confidentiality breach occurs, the risk to client trust and professional reputation is significant.
Attorneys must treat AI platforms like any other vendor handling privileged and/or confidential material. That means reviewing terms of service, asking the right questions and choosing tools that match the privacy obligations of legal practice.
Privacy is a primary concern when using AI, but these tools also come with input and output risks, which Zero Day Law discusses in this recent blog post.
Though most users focus on the front-end output, behind the scenes there are several areas of concern if legal professionals use artificial intelligence to aid in their legal work:
For attorneys, these factors pose clear risks. A platform that does not offer strong, enforceable privacy terms and controls should never be used for any sensitive work.
Learn more about the balance between privacy regulations and AI use in this recent blog post.
Before sharing any confidential information with an AI platform, take time to review the terms that control how your data is handled. These details are often buried in user agreements and privacy policies. Many users overlook them, but they carry real legal consequences. Businesses may assume they will negotiate a custom license for a business edition, but often the generic online business license applies.
The following are current links to the terms of service or user agreements for popular platforms:
These documents define what the provider can do with your inputs and how long they can keep them. Some include clauses that allow internal staff to view user content. Others grant broad rights to reuse data for research or system improvements.
As you review these documents, focus on several key points:
But beware: terms change often. A platform that was safe to use last quarter may have adopted new policies without notice. Make it a regular habit to check for updates, especially if your firm relies on these tools for any part of its workflow.
Explore more about the connection between AI and cybersecurity privacy.
Law firms can’t afford to approach AI use casually. A formal protocol protects your clients and your team and limits your liability exposure. This advice is not entirely different from how we’d advise lawyers to protect data in SharePoint, Google Workspace or email clients. But people tend to treat AI differently and abandon long-standing paradigms. Zero Day Law recommends a framework that includes:
Without a strong structure in place, even careful use can expose your firm to unnecessary risk. Clear, easy-to-understand guidelines will help protect your team and your clients.
Even with secure platforms and sound policies, attorneys should take extra steps to avoid exposing sensitive material:
Enterprise versions often include access controls, the ability to disable logging and model training on your data, and other technical safeguards. For firms handling highly regulated information, on-premise LLM deployments and AI tools keep all data within a contained network.
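Alongside contractual safeguards, firms can add a simple technical backstop: strip obvious identifiers from text before it ever reaches an AI tool. The sketch below is illustrative only; the regex patterns are assumptions that catch common formats (emails, U.S. SSNs, phone numbers) and are no substitute for a vetted redaction product or human review.

```python
import re

# Illustrative patterns only -- not a complete PII detector.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens before prompting."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Hypothetical prompt a staff member might otherwise paste into a chat window.
prompt = "Client John Doe (jdoe@example.com, 555-867-5309) asked about the filing."
print(redact(prompt))
```

A firm might run a filter like this at a gateway that sits between staff and any approved AI tool, so the policy is enforced automatically rather than left to individual judgment.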
There are times when no amount of caution or technical protection is enough. Some matters should never be fed into an AI system, including:
AI can speed up research, streamline document review and generate new ideas. But the legal profession has duties that go beyond delivering efficiency for clients. Confidentiality, client trust and ethical compliance must come first.
AI does not need to be avoided entirely in legal settings. A clear use plan will properly protect your firm and your clients. Assess the platforms you rely on. Classify what information is safe to use. Train your team, document your practices and revise your approach regularly.
Protecting client confidentiality in the age of AI starts with knowing how your tools really work. Not just how their websites say they work! If your firm is using AI or considering it, a clear, enforceable plan will help protect sensitive information. The risks are real, but with the right protocols, privacy and performance can coexist.
Learn more about Zero Day Law’s AI Privacy and Security Solutions here.
Our team can help your firm evaluate current tools, draft policy language and build a confidentiality-first AI workflow. Contact us today to start the conversation or request a consultation. Zero Day Law is here to help you keep security, compliance and client trust at the center of your AI-use strategy, whether that's helping with policy, terms or just demystifying legal gray areas that come with emerging tools and technologies.