Six months ago, I started a project I’d been putting off: going through every vendor contract in our repository that involves any kind of AI or machine learning functionality and checking what the agreements actually say about it. Sixteen contracts. SaaS platforms, analytics tools, an HR screening product, a couple of document processing services. All of them use AI in some capacity. All of them are happy to tell you about it on their marketing pages.

What they’re less happy to do, apparently, is put meaningful terms about it in the contract.

I found exactly three (out of sixteen) that had anything I’d call a substantive AI-specific clause. The rest either buried AI in a general “features may include” paragraph or said nothing at all. And this matters more now than it did a year ago, because the EU AI Act is no longer theoretical. It’s operational, and its requirements are starting to show up in the contracts I’m reviewing, the ones I’m negotiating, and the conversations I’m having with vendors who haven’t caught up yet.

A Quick Primer on What’s Actually Happening

The EU AI Act entered into force in August 2024. It’s being phased in over several years. As of February 2025, prohibited AI practices are already enforceable, with penalties that can reach €35 million or 7% of global annual turnover. The obligations for high-risk AI systems (think hiring tools, credit scoring, anything touching critical infrastructure) hit in August 2026, and the full regulation is expected to be completely applicable by 2027.

If you’re thinking “we’re a U.S. company, this doesn’t apply to us,” I’d gently push back. If your AI-powered vendor processes data about EU-based employees, customers, or partners, the Act’s reach is extraterritorial. And even if it genuinely doesn’t apply to your organization today, the contractual expectations it’s creating are spreading well beyond EU borders. I’m already seeing clauses influenced by the AI Act in contracts with U.S.-based vendors. The regulation is setting a floor for what “responsible AI contracting” looks like globally, the same way GDPR did for data privacy.

What I’m Actually Seeing in Contracts

Here’s what’s changed in the last twelve months. The contracts that do have AI-specific language are starting to cluster around five areas, and every one of them traces back to something in the AI Act or the EU’s model contractual clauses for AI procurement that were updated in March 2025. Those model clauses were peer-reviewed by over 40 experts and are becoming a reference point even for private-sector deals.

Data use and training restrictions. This is the one I see most often now. A clause that says, with varying degrees of specificity, that the vendor will not use your data to train its general-purpose models. This used to be buried in privacy policies. Now I’m seeing it as a standalone contractual commitment. One vendor I reviewed had a “no training, no commingling, no retention” provision. That’s exactly the kind of language Holon Law Partners recommends for AI vendor agreements, and it’s encouraging to see it showing up in practice.

Transparency and documentation. A couple of contracts now include obligations for the vendor to provide documentation about the AI system’s intended use, known limitations, and how the model was trained and evaluated. This maps directly to the AI Act’s requirements for high-risk systems. It’s also, frankly, just useful information. When I’m evaluating whether a tool is fit for purpose, knowing its limitations matters more than knowing its features.

Human oversight requirements. One HR-adjacent tool we use for screening has added language about human-in-the-loop decision-making. This is clearly a response to the AI Act’s classification of employment-related AI as high-risk. The clause requires that no consequential decision (hiring, firing, promotion) be made solely on the AI’s output without human review. I would have wanted that clause anyway, but it’s nice to see the vendor volunteering it.

Incident reporting. Two contracts now include provisions requiring the vendor to notify us of material performance issues, security incidents, or compliance problems with the AI system. Before this year, I’d seen this kind of language for data breaches under GDPR. Seeing it extended to AI system failures is new.

Audit rights. The most aggressive clause I’ve seen gives us the right to periodic audits or third-party assessments of the AI system’s performance and compliance. This is still rare. Most vendors resist audit clauses in general, and AI-specific audit rights are a harder sell. But the fact that I’m seeing them at all tells me the market is moving.

What I’m Not Seeing (and Why That Worries Me)

For every contract with thoughtful AI provisions, I have three or four that are completely silent on the topic. And the readiness gap extends beyond my vendor portfolio.

A survey from the European Digital SME Alliance found that more than 60% of small and mid-size tech companies said they were not adequately prepared for compliance with any phase of the AI Act. Nearly half reported they hadn’t even conducted a risk classification of their own AI systems. Meanwhile, compliance analyses suggest that over half of organizations lack systematic inventories of AI systems currently in production. You can’t classify what you can’t find, and you can’t comply with what you haven’t classified.

This is a problem I recognize from the data privacy world. When GDPR arrived, most of the companies I worked with weren’t ready either. They scrambled, they hired consultants, and eventually they got there. The AI Act feels similar: the compliance gap will close, but right now we’re in the uncomfortable middle period where the regulation exists and most organizations haven’t operationalized it.

For contract managers, that gap is our problem to deal with. The vendors aren’t going to fix this on their own.

What I’m Doing About It

I’m not a lawyer, and I want to be clear about that. I don’t draft these clauses. But I’m the person who flags what’s missing, asks the questions, and makes sure the right people are involved before we sign something. Here’s what’s changed in my process.

I added an AI section to my contract review checklist. Every new vendor agreement that involves AI or automated decision-making gets flagged for additional review. The checklist covers: data training restrictions, transparency obligations, human oversight requirements, incident notification, and audit rights. If the contract is silent on all five, I send it back with questions before it moves forward.
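That checklist is simple enough to automate as a first-pass screen. Here’s a minimal sketch in Python — the keyword lists are hypothetical stand-ins, not legal analysis, and it assumes you can get the contract text as a plain string. It flags agreements for human review; it doesn’t replace reading them.

```python
# Hypothetical first-pass screen for the five AI clause areas.
# Keyword matching is a crude proxy: it surfaces silent contracts
# for escalation, it does not evaluate clause quality.

AI_CLAUSE_AREAS = {
    "data_training_restrictions": ["model training", "train its", "commingl"],
    "transparency": ["documentation", "intended use", "limitations"],
    "human_oversight": ["human review", "human-in-the-loop", "human oversight"],
    "incident_reporting": ["notify", "incident", "material performance"],
    "audit_rights": ["audit", "third-party assessment"],
}

def review_contract(text: str) -> dict:
    """Return which of the five AI clause areas the contract appears to cover."""
    lowered = text.lower()
    return {
        area: any(keyword in lowered for keyword in keywords)
        for area, keywords in AI_CLAUSE_AREAS.items()
    }

def needs_escalation(text: str) -> bool:
    """Silent on all five areas -> send it back with questions."""
    return not any(review_contract(text).values())
```

The output is just a coverage map — which areas the agreement mentions at all — so the human step (asking the vendor why the rest is missing) stays where it belongs.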

I started keeping an AI clause library. When I see a well-written clause in one contract, I save it. Not to copy it verbatim into other agreements (again, not a lawyer), but to use as a reference point when talking to vendors about what we expect. It’s a lot easier to negotiate when you can say “here’s what your competitor put in their contract” than when you’re arguing from abstract principles.
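A clause library doesn’t need special tooling; even a flat list of structured records does the job. A sketch under assumed field names — nothing here reflects a real system, and the point is the structure (clause text plus provenance plus why it’s a good example), not the code:

```python
from dataclasses import dataclass

@dataclass
class ClauseRecord:
    """One saved clause: a reference point, not boilerplate to reuse verbatim."""
    area: str             # e.g. "data_training_restrictions" (hypothetical label)
    source_contract: str  # where the language was found
    text: str             # the clause itself
    notes: str = ""       # why it's a good example

library: list = []

def save_clause(record: ClauseRecord) -> None:
    library.append(record)

def find_examples(area: str) -> list:
    """Pull every saved example for one clause area before a vendor call."""
    return [r for r in library if r.area == area]
```

Tracking the source contract is what makes the “here’s what your competitor put in their contract” conversation possible.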

I’m flagging existing contracts for renewal review. Some of the contracts I reviewed with zero AI provisions are up for renewal in the next 12 months. Those renewals are my opportunity to add the language that should have been there in the first place. I’ve added notes in ContractSafe so the renewal alerts remind me to flag these for legal review, not just auto-renew.

I’m pushing back on “it’s in our privacy policy” as an answer. Several vendors, when I asked about AI-specific contract terms, pointed me to their privacy policy or acceptable use policy. Those aren’t contracts. They’re unilateral documents that the vendor can change at any time. If a commitment matters, it belongs in the agreement.

This Isn’t Going Away

The EU AI Act is the first comprehensive AI regulation, but it won’t be the last. Colorado’s AI Act is already on the books. New York City requires bias audits for automated hiring tools. State-level legislation is proliferating. The trend is clear: AI governance is becoming a contract management problem, not just a technology problem or a legal problem.

For people who do what I do, that means the contract is becoming the primary vehicle for AI risk management. Not the vendor’s marketing claims. Not their trust center page. Not a blog post about their “commitment to responsible AI.” The contract. The thing with signatures on it. The thing that actually creates obligations.

If your vendor contracts don’t address how AI is used, how your data is handled by it, and what happens when something goes wrong, you have a gap. And that gap is about to get a lot more expensive to ignore.

Start with your highest-risk vendors. Read what the contract actually says about AI. If the answer is “nothing,” you know what to do next.


I’m Dave, and I write about contract management the way it actually works. No jargon, no sales pitch, just what I’ve learned from 15+ years of doing this job. New posts every Tuesday and Thursday.

