Choosing an AI vendor comes with real risk. A signed contract may look solid, but the contract alone will not protect your firm if the vendor mishandles sensitive information or fails to meet legal standards.
Responsible use of AI begins with reviewing the terms, but it does not end there. Your responsibility to protect clients’ and employees’ information includes knowing how an AI system collects, stores and shares data.
In this post, we break down what legal and compliance teams should be asking before signing on with any AI provider. The goal is to help your team answer two questions: What does the vendor say it does? And, after a reasonable amount of inquiry or investigation, does the vendor actually do what it says? A thorough vetting process is a safeguard against reputational and regulatory fallout.
Privacy laws aren’t static, and neither are the risks tied to AI vendors. It’s not enough to assume compliance based on a feature list or a well-designed interface. You need proof that the vendor understands the legal landscape and can meet your obligations under laws like GDPR, CCPA and other state-specific regulations.
To assess risk, start with these foundational questions:
Each answer will clarify how data flows through their systems, what legal responsibilities you may share and how your data will be handled.
An AI vendor should be able to demonstrate compliance with key privacy and security requirements, which is far more than just checking boxes. These elements show whether the provider is prepared to handle regulated data responsibly and respond appropriately when something goes wrong.
Look for the following legal requirements:
These requirements almost always exist for organizations handling sensitive, regulated or client-owned information. A vendor’s ability to address them confidently shows it takes compliance seriously.
A thorough risk assessment looks beyond policy claims and marketing language.
A vendor’s privacy policy may be publicly available, giving you the opportunity to confirm it aligns with your contract terms. The privacy policy should not contradict your agreement unless one of the documents specifies which controls in the event of a conflict. Any mismatch can expose your organization to unanticipated risks, including broader data-sharing rights than intended.
Do some high-level searching online to find out about any publicly known privacy investigations and breaches. For example, Anthropic disclosed a data handling incident in early 2024 involving the accidental sharing of customer information by a third-party contractor. Ask directly about any past issues: how long ago they happened, how they were resolved and what the vendor is doing to prevent similar incidents in the future.
While many established AI tools follow strong security practices, the rapid pace of innovation in the AI space introduces additional risk. In the push to release new features and capture market share, security can sometimes lag behind development. At the same time, the prominence of AI technologies makes them appealing targets for attackers looking to exploit vulnerabilities or gain visibility by disrupting high-profile AI tools.
Finally, consider the flexibility the vendor offers in how data is retained and deleted. Your vendor should be able to support internal audits, respond to access requests (or help you respond to them) and implement data minimization practices that match your legal and operational needs.
Learn more: Read our 6 Tips to Minimize Risk to enhance your AI risk management strategies.
AI vendor contracts often include vague or overly broad language that sounds reassuring but lacks real protection. You might see terms like “industry-standard security” or “we value your privacy,” but vendors should be able to back those claims with specifics about how they handle your data.
One common red flag is language about improving services. If a contract says your data may be used to “enhance performance” or “optimize user experience,” that could mean your inputs are being stored, reviewed or even used to train future models, possibly without your knowledge.
The absence of key information can also be problematic. If you can’t get clear answers about how long data is retained, who has access to it or whether it is shared with subprocessors, assume the worst. Some contracts also include subtle clauses that permit the use of anonymized or aggregated data for marketing or development. Ask questions if you don’t know how that applies in your case.
“Don’t rely on general language. Push for precise terms that describe data usage, retention, subprocessors and training practices in plain, enforceable language.”
Even with careful vetting, many AI vendors are still catching up to current privacy standards. When the contract falls short or includes ambiguous terms, you need practical strategies to limit risk. These should include:
Learn more: Our related post, Is Your AI Chat Really Private, discusses how vendors may have access to chats that users think are private.
When negotiating or reviewing agreements, push for language that strengthens your rights and limits vendor access to your data.
For this post, “input” means the documents/media you upload and/or your prompts, and “output” means the content generated by an AI tool in response to your input.
Here are four contract clauses to consider:
These clauses help set clear boundaries and may provide leverage if the vendor falls short. Without this language, you may be giving up more than you realize, regardless of how secure the tool appears on the surface.
Before engaging an AI vendor, take a moment to consider the full set of information you’ve gathered. You can have the most airtight, favorable contract clauses, but if the vendor cannot tell you the first thing about how it handles inputs, those clauses are not much help. On the flip side, you can be confident in a vendor’s security practices, but if your contract assigns no obligations to the vendor, nothing requires it to keep those practices in place. Finally, consider incentives. What does the vendor have to lose if it fails to secure its customers’ data? What does it have to lose if it offers no promises about protecting data?
Large global providers need to protect their brands and want to be known as secure, so they tend to prioritize and invest in security. Since you can never know with certainty how safe a vendor’s tools are, betting on a global company can be a good way to lower risk. On the other hand, those large companies also may not need you as a customer, so they are unlikely to negotiate favorable terms with you as you onboard.
Selecting and managing an AI vendor requires both legal expertise and operational awareness: you need to understand how data will be handled, where obligations begin and where they may fall apart.
If you have any questions, we’re here to help!
If your organization is considering or already using AI tools, now is the time to take a closer look at your vendor relationships. ZeroDay Law helps legal teams, compliance officers and business leaders evaluate AI platforms with privacy, security and risk in mind. Contact us today to learn more.