Six months ago, I renewed our contract with a marketing analytics platform and noticed something new in the vendor’s terms: a three-paragraph section on artificial intelligence that hadn’t been there before. It covered how the vendor uses AI in their product, what happens to data that gets processed through their AI features, and whether they retain the right to use customer data for model training.
I read it twice. It was vague in the places that mattered and specific in the places that didn’t. And when I flagged it for our legal team, the attorney’s first question was: “Is this the only vendor contract with AI language in it?”
It was not. Not even close.
Over the last year, I’ve seen AI-related clauses appear in vendor contracts that had nothing to do with AI when we originally signed them. Software platforms, data providers, consulting agreements, even a facilities management vendor added language about “automated decision-making tools.” Some of these clauses are thoughtful. Most of them are boilerplate that the vendor’s legal team copied from somewhere and dropped into the agreement without much consideration for what it actually means for the customer.
This is becoming a contract operations problem, and most contract managers I talk to haven’t fully caught up to it yet.
What’s Showing Up in Contracts
The AI clauses I’m seeing fall into a few categories, and once you know what to look for, you start noticing them everywhere.
Data usage and model training rights. This is the big one. Vendors are adding language that gives them the right to use your data to train or improve their AI models. Sometimes it’s explicit (“Customer data may be used to improve [Vendor’s] machine learning models”). Sometimes it’s buried in a broader data usage clause that gives the vendor rights to use data “to improve and develop the Service,” which now includes AI capabilities that didn’t exist when you signed the original agreement.
Research from Stanford Law’s CodeX center, using data from TermScout, found that 92% of AI vendor contracts claim data usage rights beyond what’s necessary for service delivery. The market average for traditional SaaS contracts is 63%. That gap matters. It means AI vendors are, on the whole, asking for significantly broader rights to your data than the software industry norm.
AI disclosure and transparency. Some contracts now include clauses requiring (or promising) disclosure of how AI is used within the product. These range from genuinely informative (“Here is how our AI processes your data, and here are its limitations”) to meaningless (“We may use advanced technologies including artificial intelligence to provide the Service”). The good ones tell you what the AI does, what data it touches, and what it doesn’t do. The bad ones are there to check a compliance box.
IP ownership for AI outputs. If the vendor’s platform generates reports, summaries, or recommendations using AI, who owns that output? This question used to be theoretical. It’s now a contract clause. I’ve seen language that assigns the vendor ownership of any “AI-generated outputs,” which gets tricky when those outputs are based entirely on your data and produced for your benefit.
Indemnification (or the lack of it). The Stanford/TermScout research also found that only 33% of AI vendors provide indemnification for third-party IP claims related to their AI. And only 17% explicitly commit to complying with all applicable laws, compared to 36% in traditional SaaS agreements. That means your vendor’s AI feature might infringe on someone else’s intellectual property, and the contract you signed may not protect you.
Why This Is a Contract Operations Problem
Here’s what frustrates me about the current conversation around AI governance. Most of the discussion is happening at the policy level: executives debating AI strategy, legal teams drafting internal AI use policies, boards asking about “AI risk frameworks.” All of that matters. But while those conversations are happening in conference rooms, the actual governance is happening in the contracts. And the person who’s supposed to catch it? That’s me. The contract manager who’s reading the terms when they come in for renewal.
AI governance isn’t just a legal department initiative or an IT security concern. It’s an operational contract management problem because:
Every vendor relationship is now potentially an AI relationship. Two years ago, I could look at a vendor contract and know whether AI was involved. It was a distinct product category. Now AI features are embedded in platforms that were sold as traditional software. Your CRM has AI. Your expense management tool has AI. Your project management platform has AI. That means every vendor contract renewal is potentially an AI governance review, whether anyone planned for it or not.
The regulatory landscape is moving fast. The EU AI Act is phasing in requirements through 2025 and 2026, with obligations for both developers and deployers of AI systems. Colorado’s AI Act takes effect in February 2026, requiring algorithmic impact assessments and bias testing. California’s AI Transparency Act kicks in around the same time. And in the U.S. federal space, OMB guidance is requiring agencies to revise procurement policies for AI vendors by March 2026. If your vendors are subject to any of these regulations, their compliance obligations flow into your contracts. And if your contract doesn’t address it, you’re the one with the gap.
Renewals are where this gets real. Most of these AI clauses are appearing at renewal, not in new agreements. Vendors update their terms, add AI language, and send the renewal package. If you auto-renew without reviewing the updated terms, you’ve just accepted whatever the vendor’s legal team decided was appropriate. This is why auto-renewal tracking matters even more now. It’s not just about whether you want to keep the service. It’s about whether the terms you originally agreed to still say what you think they say.
What I’m Doing About It
I’m not a lawyer, and I want to be clear about that. I don’t draft AI clauses, and I don’t provide legal advice on what they mean. But I’m the person who has to identify when these clauses show up, flag them for the right people, and make sure the contracts in our repository reflect what’s actually been agreed to. Here’s what I’ve started doing.
Searching for AI language across all existing contracts. I ran a search in ContractSafe for terms like “artificial intelligence,” “machine learning,” “automated decision,” “model training,” and “AI.” I found AI-related language in 23 of our active vendor contracts. Some of it was benign. Some of it wasn’t. A few contracts had data usage clauses that gave the vendor essentially unlimited rights to use our data for AI development, with no opt-out and no deletion requirement. Those went straight to legal.
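If your contract tool doesn’t have good search, the same sweep can be done against a folder of exported contract text. Here’s a minimal sketch (not ContractSafe’s API, just a generic illustration using the same search terms mentioned above; the folder path and file format are assumptions):

```python
import re
from pathlib import Path

# Hypothetical illustration: scan a folder of exported contract text files
# for AI-related language. The terms mirror the ones used in the post.
AI_TERMS = [
    r"artificial intelligence",
    r"machine learning",
    r"automated decision",
    r"model training",
    r"\bAI\b",  # word boundary so "AI" doesn't match inside other words
]
PATTERN = re.compile("|".join(AI_TERMS), re.IGNORECASE)

def find_ai_language(contracts_dir: str) -> dict:
    """Return {filename: [matched terms]} for contracts containing AI language."""
    hits = {}
    for path in Path(contracts_dir).glob("*.txt"):
        text = path.read_text(errors="ignore")
        matched = sorted({m.group(0).lower() for m in PATTERN.finditer(text)})
        if matched:
            hits[path.name] = matched
    return hits
```

The output is a quick triage list: which contracts mention AI at all, and which terms triggered the hit, so a human can read the actual clause in context before anything goes to legal.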
Adding AI review to the renewal checklist. Every time a contract comes up for renewal now, one of the things I check is whether the vendor has updated their terms to include AI language. If the renewal terms are different from the original, I flag the changes before we sign. This takes maybe five extra minutes per contract, but it’s caught three problematic clauses in the last two months alone.
Keeping a running log of AI clauses by vendor. I maintain a simple list: vendor name, whether the contract includes AI language, what the AI clause covers, and whether it’s been reviewed by legal. This isn’t sophisticated. It’s a spreadsheet. But when leadership asks “which of our vendors are using AI with our data?” I can answer in about 30 seconds, which is more than most contract managers can do right now.
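A spreadsheet is all this needs to be, but the same log is easy to keep as a CSV if you want to filter it programmatically. A sketch of one possible layout (the column names are my own, not a standard):

```python
import csv
from dataclasses import dataclass, asdict

# Hypothetical sketch of the "simple spreadsheet" log described above,
# stored as a CSV with one row per vendor.
@dataclass
class AIClauseEntry:
    vendor: str
    has_ai_language: bool
    clause_summary: str
    legal_reviewed: bool

FIELDS = ["vendor", "has_ai_language", "clause_summary", "legal_reviewed"]

def write_log(entries: list, path: str) -> None:
    """Write the running log to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        for e in entries:
            writer.writerow(asdict(e))

def vendors_using_ai(path: str) -> list:
    """Answer 'which of our vendors are using AI with our data?' from the log."""
    with open(path) as f:
        return [row["vendor"] for row in csv.DictReader(f)
                if row["has_ai_language"] == "True"]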
Asking vendors direct questions. When I see vague AI language, I ask the vendor to clarify. “Does your AI feature use our data for model training?” “Can we opt out?” “What happens to our data after the contract terminates?” Vendors don’t always love these questions, but they’ll answer them if you press, and the answers often reveal that the vendor’s own team isn’t sure how their AI clauses work. That’s telling.
What to Watch For
If you manage contracts and you haven’t started paying attention to this yet, here’s what I’d watch for.
Any vendor that has added AI features since your last contract review has likely updated their terms. Read those terms carefully, especially the data usage sections. Look for training rights, retention policies, and opt-out language (or the absence of it).
Clauses that reference “improving the Service” or “developing new features” may now include AI training that didn’t exist when you signed. The language hasn’t changed, but what it covers has.
Indemnification gaps around AI-generated content are real. If your vendor produces any output using AI (reports, analysis, recommendations), check whether the contract protects you if that output infringes someone else’s IP.
Regulatory compliance obligations are going to start flowing through vendor contracts within the next year. If you’re not tracking which vendors use AI and how, you won’t be ready when your auditor or compliance team comes asking.
I wrote yesterday about the practical side of AI in contract management: what’s useful and what’s hype. This post is about the other side: AI as something your contracts need to govern, not just something they’re managed with. Both are real, and both are landing on the contract manager’s desk.
The AI governance conversation will eventually catch up. There will be standard clauses, established frameworks, and clear regulatory requirements. But right now we’re in the messy middle, where the clauses are showing up faster than the guidance, and the person most likely to catch the problems is the one reading the contracts. That’s us. And it’s one more reason why what we do matters more than most people realize.
I’m Dave, and I write about contract management the way it actually works. No jargon, no sales pitch, just what I’ve learned from 15+ years of doing this job. New posts every Tuesday and Thursday.