
Choosing an AI vendor comes with real risk. A signed contract may look solid, but the contract alone will not protect your firm if the vendor mishandles sensitive information or fails to meet legal standards.
Responsible use of AI begins with reviewing the terms, but it does not end there. Your responsibility toward protecting clients’ and employees’ information includes knowing how an AI system collects, stores and shares data.
In this post, we break down what legal and compliance teams should be asking before signing on with any AI provider. The goal is to help your team answer two questions: What does the vendor say it does? And, after a reasonable amount of inquiry or investigation, does the vendor actually do what it says? A thorough vetting process is a safeguard against reputational and regulatory fallout.
Privacy and Security Compliance: Legal Must-Haves
Privacy laws aren’t static, and neither are the risks tied to AI vendors. It’s not enough to assume compliance based on a feature list or a well-designed interface. You need proof that the vendor understands the legal landscape and can meet your obligations under laws like GDPR, CCPA and other state-specific regulations.
What to Discuss With Your Vendor
To assess risk, start with these foundational questions:
- Will you be processing our personal data, and if so, what types (e.g., IP addresses, employee benefits information, client information about a data breach)? A provider should be able to clearly demonstrate how they identify and handle the types of personal data they collect and process. Vague or evasive answers can be red flags.
- Under GDPR, is it clear that you are the data processor and we are the data controller (or vice versa) with respect to our data? The distinction determines a provider’s responsibilities under the law; processors act on your behalf at your direction and must support your compliance efforts, while controllers have independent data protection obligations that could affect your risk exposure as well.
- Are you compliant with CCPA? Vendors should understand and meet the correct compliance requirements for their business, your work, and your use case, especially since CCPA applies differently to consumer and business contexts.
- Are you exempt from state privacy laws due to nonprofit or small business exemptions? Federal tax-exempt status may reduce a provider’s and your legal obligations, but it does not remove the need to uphold baseline data security and ethical standards.
- Are there any diagnostic, troubleshooting, or performance tools in use where client data may be exposed? System monitoring platforms can capture sensitive information such as screenshots, session activity or logs, depending on configuration; one practical mitigation is to redact sensitive patterns before they ever reach those tools, as sketched below.
Each answer will clarify how data flows through their systems, what legal responsibilities you may share and how your data will be handled.
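On the monitoring point above, redaction can often happen in your own stack before data reaches a vendor’s diagnostic or monitoring tooling. Here is a minimal sketch in Python, assuming an application that logs through the standard logging module; the regex patterns and placeholder labels are illustrative only and nowhere near exhaustive.

```python
import logging
import re

# Illustrative patterns only -- a production redaction list would be
# far broader (names, addresses, account numbers, and so on).
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),        # US SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),      # card-like digit runs
]

class RedactingFilter(logging.Filter):
    """Mask sensitive patterns before records reach any handler, and
    therefore before they reach downstream monitoring platforms."""

    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()          # formats msg % args up front
        for pattern, placeholder in REDACTIONS:
            message = pattern.sub(placeholder, message)
        record.msg, record.args = message, None
        return True                            # keep the (now scrubbed) record

logger = logging.getLogger("app")
logging.basicConfig(level=logging.INFO)
logger.addFilter(RedactingFilter())

logger.info("Ticket from jane.doe@example.com, SSN 123-45-6789")
# Logged as: Ticket from [EMAIL], SSN [SSN]
```

The same principle applies to screenshots and session replay: configure masking at the point of capture, not after the data has already left your environment.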
Checklist of Legal Requirements
An AI vendor should be able to demonstrate compliance with key privacy and security requirements, which is far more than just checking boxes. These elements show whether the provider is prepared to handle regulated data responsibly and respond appropriately when something goes wrong.
Look for the following legal requirements:
- A formal Data Processing Agreement (DPA) where required. The vendor should provide or negotiate a DPA that reflects your relationship under applicable laws. It should clearly define roles, responsibilities and the technical and organizational measures in place to protect personal data.
- Breach notification processes that meet legal timelines. Vendors must have a documented process for identifying, containing and promptly reporting data breaches to you. Under GDPR, for example, a processor must notify the controller without undue delay, and the controller in turn has only 72 hours to notify the supervisory authority. Ask how the vendor monitors incidents and what protocols are in place for client communication.
- Clear terms about data usage for AI training. The agreement should state whether your data will be used to train or improve the vendor’s AI models. If training is allowed by default, there must be a clear opt-out mechanism. This is critical for firms handling confidential, privileged or sensitive information.
- Knowledge of how long and where data is stored and processed. Vendors should be able to identify the physical and legal jurisdictions where your data resides as well as the retention period. This determines whether cross-border safeguards are required and whether the vendor can comply with data localization or transfer restrictions.
These requirements almost always exist for organizations handling sensitive, regulated or client-owned information. A vendor’s ability to answer these questions confidently shows it takes compliance seriously.
"Any AI vendor handling regulated data should be able to back up their claims with clear, documented practices. If they can't show you how they protect your information, they aren't ready to work with it."
Risk Management and the Initial Evaluation
A thorough risk assessment looks past policy claims and marketing language.
A vendor’s privacy policy may be publicly available, giving you the opportunity to confirm it aligns with your contract terms. The privacy policy should not contradict your agreement unless one document specifies which controls in the event of a conflict. Any mismatch can expose your organization to unanticipated risks, including broader data sharing rights than intended.
Do some high-level searching online for any publicly known privacy investigations and breaches. For example, Anthropic disclosed a data handling incident in early 2024 involving the accidental sharing of customer information by a third-party contractor. Ask directly about any past issues: when they occurred, how they were resolved and what the vendor is doing to prevent similar incidents.
While many established AI tools follow strong security practices, the rapid pace of innovation in the AI space introduces additional risk. In the push to release new features and capture market share, security can sometimes lag behind development. At the same time, the prominence of AI technologies makes them appealing targets for attackers looking to exploit vulnerabilities or gain visibility by disrupting high-profile AI tools.
Finally, consider any flexibility the vendor offers in how data is retained and deleted. Your vendor should be able to support internal audits, respond to access requests (or help you respond to them) and implement data minimization practices that match your legal and operational needs.
Learn more: Read our 6 Tips to Minimize Risk to enhance your AI risk management strategies.
Reading Between the Lines: Hidden Contract Risks
AI vendor contracts often include vague or overly broad language that sounds reassuring but lacks real protection. You might see terms like “industry-standard security” or “we value your privacy,” but vendors should be able to back those claims with some additional specificity about how they handle your data.
One common red flag is language about improving services. If a contract says your data may be used to “enhance performance” or “optimize user experience,” that could mean your inputs are being stored, reviewed or even used to train future models, possibly without your knowledge.
The absence of key information can also be problematic. If you can’t get clear answers about how long data is retained, who has access to it or whether it is shared with subprocessors, assume the worst. Some contracts also include subtle clauses that permit the use of anonymized or aggregated data for marketing or development. Ask questions if you don’t know how that applies in your case.
“Don’t rely on general language. Push for precise terms that describe data usage, retention, subprocessors and training practices in plain, enforceable language.”
Practical Guidance: What to Do When the Contract Isn’t Perfect
Even with careful vetting, many AI vendors are still catching up to current privacy standards. When the contract falls short or includes ambiguous terms, you need practical strategies to limit risk. These should include:
- Remove names, case numbers or identifying details before submitting content to an AI platform.
- Never input personally identifiable information unless you’ve reviewed the vendor’s data handling policies and have client consent.
- Use anonymized or synthetic data when testing prompts, evaluating outputs or building workflows. (Find a good anonymizer or scrubber; don’t just change the first letter of every word! A minimal example follows this list.)
- Apply the same caution to internal testing environments as you would with live tools. The risk is the same.
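To make the scrubbing point concrete, here is a minimal sketch in Python. Everything in it, from the term list to the helper names, is our own hypothetical illustration rather than any specific tool’s API; real matters call for a dedicated anonymizer or a named-entity recognition pass rather than a hand-maintained list.

```python
import re

# Hypothetical terms for illustration -- in practice, build this list from
# your matter management system or an NER pass, not by hand.
SENSITIVE_TERMS = ["Jane Doe", "Acme Corp", "Case No. 24-CV-1138"]

def scrub(text: str, terms: list[str]) -> tuple[str, dict[str, str]]:
    """Replace each sensitive term with a stable placeholder and return
    the scrubbed text plus a mapping for restoring the AI output."""
    mapping = {}
    for i, term in enumerate(terms):
        placeholder = f"[REDACTED-{i}]"
        mapping[placeholder] = term
        text = re.sub(re.escape(term), placeholder, text, flags=re.IGNORECASE)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Re-insert the original terms into AI output, for internal use only."""
    for placeholder, term in mapping.items():
        text = text.replace(placeholder, term)
    return text

prompt = "Summarize the deposition of Jane Doe in Case No. 24-CV-1138."
scrubbed, mapping = scrub(prompt, SENSITIVE_TERMS)
print(scrubbed)
# -> Summarize the deposition of [REDACTED-0] in [REDACTED-2].
```

Because each term maps to a stable placeholder, the AI’s output can be restored for internal use with restore() without the vendor ever seeing the underlying names.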
Learn more: Our related post, Is Your AI Chat Really Private, discusses how vendors may have access to chats that users think are private.
Sample Contract Clauses to Request or Review
When negotiating or reviewing agreements, push for language that strengthens your rights and limits vendor access to your data.
For this post, “input” means the documents/media you upload and/or your prompts, and “output” means the content generated by an AI tool in response to your input.
Here are four contract clauses to consider:
- Vendor will not use any client data for marketing, product training, or improvement without written consent.
- Uploaded input will be used solely to provide the contracted service and for no other purpose.
- Vendor will notify client of any subprocessor changes in advance and provide a right to object.
- Client retains all ownership and intellectual property rights in submitted input and AI-generated outputs.
These clauses help set clear boundaries and may provide leverage if the vendor falls short. Without this language, you may be giving up more than you realize, regardless of how secure the tool appears on the surface.
Take a Step Back and Survey the Landscape
Before engaging an AI vendor, take a minute to consider the full set of information you’ve gathered. You can have the most airtight, favorable contract clauses, but if the vendor cannot tell you the first thing about how it handles inputs, those clauses are not much help. On the flip side, you can be confident about a vendor’s security practices, but if your contract assigns no obligations to the vendor, nothing stops the vendor from abandoning those practices. Finally, consider incentives. What does the vendor have to lose if it fails to secure its customers’ data? What does the vendor have to lose if it offers no promises about protecting data?
Large global providers need to protect their brands and want to be known as secure, so they tend to prioritize and invest in security. Since you can never fully verify how safe a vendor’s tools are, betting on a global company can be a good way to lower risk. On the other hand, those large companies may not need you as a customer, so they are unlikely to negotiate favorable terms with you as you onboard.
AI vendor selection and management require legal expertise and operational awareness to understand how data will be handled, where obligations begin and where they may fall apart.
If you have any questions, we’re here to help!
If your organization is considering or already using AI tools, now is the time to take a closer look at your vendor relationships. ZeroDay Law helps legal teams, compliance officers and business leaders evaluate AI platforms with privacy, security and risk in mind. Contact us today to learn more.