Governance & Risk | ~7 min read
Let me ask you something direct.
Do you know which AI tools your team used this week?
Not which ones are on the approved list. Which ones they actually used.
If you’re not sure, you’re not alone, but you’re also not off the hook. Because as the team lead, what happens in your domain is your accountability. Whether a formal policy exists or not.
That’s the AI governance gap. And more often than not, it starts right at the team level.
The Policy PDF Problem
Most organizations have done something about AI by now. There’s a policy document somewhere. Maybe a spreadsheet of approved tools. Maybe a checkbox in an onboarding checklist that says “employees must use AI responsibly.”
That’s a start. But it is not governance.
Here’s the difference, and it matters:
A policy tells people what they’re allowed to do.
Governance is the structure that makes sure it actually happens and that you can demonstrate it did.
The gap between those two things is where real risk lives. A policy that exists in a PDF and never gets enforced, audited, or reviewed is roughly as useful as no policy at all when something goes wrong. And in regulated environments like healthcare, finance, education, and critical infrastructure, “something went wrong” increasingly means regulators asking for evidence, not just explanations.
But here’s what I want you to take away: you don’t need to wait for your organization to build enterprise AI governance before you start doing your part. In fact, if you wait, you are the gap.
What You’re Actually Responsible For
You probably can’t build a company-wide AI governance program. That’s not your job right now.
But you can govern what happens on your team. And that’s more than most leads are doing.
At the team level, AI governance comes down to three questions, and I’ll borrow a frame here that I think is genuinely useful:
See: Do you know where AI is being used on your team, what it’s being used for, and what data it’s touching?
Control: Do you have any actual guardrails in place, even informal ones, about what’s okay and what needs approval?
Prove: If your manager, your security team, or an auditor asked you to walk through your team’s AI use tomorrow, could you do it?
Most team leads can’t answer “yes” to all three. If you’re one of them, that’s not a reason to panic; it’s a reason to act.
Start Here: A Practical AI Governance Checklist for Team Leads
You don’t need a formal program to start closing the gap. Here’s where to begin.
1. Do a real AI tool inventory.
Don’t start with the approved tools list. Start by asking your team directly: what AI tools are you actually using, in any part of your work? Personal accounts, free tiers, browser extensions, coding assistants: all of it counts. You may be surprised what comes up.
The goal isn’t to police anyone. The goal is to see clearly. You can’t govern what you can’t see.
2. Map your data sensitivity.
Once you know what tools are in use, ask the next question: what data is flowing into those tools?
Not every workflow carries the same risk. A team member using AI to draft internal meeting notes is a very different situation from someone using it to summarize customer contracts or process patient data. You need to know which of your team’s workflows touch anything regulated, confidential, or customer-facing because those are the ones that need tighter controls.
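One way to make this mapping concrete is a simple lookup table with a default-deny rule. A minimal sketch in Python; the workflow names, data descriptions, and tier labels here are my own illustrative assumptions, not a standard taxonomy:

```python
# Illustrative data-sensitivity map. Workflow names and tiers are assumptions:
# "open" = fine for general use, "internal" = internal data only,
# "restricted" = approval required before any AI tool touches it.
SENSITIVITY_MAP = {
    "meeting-notes-drafting":      {"data": "internal notes",     "tier": "open"},
    "customer-contract-summaries": {"data": "customer contracts", "tier": "restricted"},
    "support-ticket-triage":       {"data": "customer PII",       "tier": "restricted"},
    "code-review-assist":          {"data": "proprietary source", "tier": "internal"},
}

def needs_approval(workflow: str) -> bool:
    """Restricted workflows need sign-off; unknown workflows fail closed."""
    # Anything not yet inventoried defaults to restricted, not open.
    return SENSITIVITY_MAP.get(workflow, {"tier": "restricted"})["tier"] == "restricted"

if __name__ == "__main__":
    for wf in SENSITIVITY_MAP:
        print(f"{wf}: approval required = {needs_approval(wf)}")
```

The design choice that matters is the default: a workflow you haven’t classified yet should require approval, not slip through.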
3. Set team-level expectations in writing.
Even a short internal note establishes clarity. Something like: “For anything involving customer data or regulated information, get approval before using an AI tool. For general productivity use, here’s what’s currently okay.”
It doesn’t have to be formal policy language. It has to be clear and documented. A Confluence page, a pinned Teams post, an email you sent and saved: any of these beats silence. And silence is the default at most organizations right now.
4. Know your organization’s AI policy — and know its limits.
Read whatever your organization has published. Understand what it covers. Then ask yourself honestly: does this policy actually address what my team does day-to-day? Are there gaps between what it says and what reality looks like?
If you see risk your organization hasn’t addressed, your job is to document it and escalate. That’s not overstepping. That’s exactly what a team lead is supposed to do.
5. Build a short paper trail.
This is the one most people skip, and it’s the one that matters most when things go sideways. Keep a simple log, even a basic spreadsheet, of what AI tools your team uses, what they’re used for, and any decisions you’ve made about approvals or restrictions. Date your entries.
If anyone ever asks you to walk through your team’s AI use, you want to be able to do it. That log is your evidence. Evidence is not a nice-to-have in governance. It is the point.
A Note for More Experienced Leaders
If you’re further along in your career and reading this as someone responsible for a broader program, or trying to build one, the See, Control, Prove framework is a useful organizing structure for thinking about AI governance at scale.
The insight that tends to get lost at the enterprise level: the evidence is the actual product, not the policy document. Regulators and auditors are increasingly moving from “show me your policy” to “show me your technical evidence.” A beautifully written policy that can’t be backed by timestamped, auditable records of real controls is going to make for a very uncomfortable conversation.
Build for proof from the beginning, not as an afterthought.
The Bigger Picture
The organizations that get caught flat-footed on AI governance aren’t usually the ones that never thought about it. They’re the ones that assumed a policy document covered it. They checked the box and moved on.
AI is embedded in day-to-day work now, not as an experiment but as a real part of how people get things done. That means the risk is real too. And the accountability doesn’t sit only with the CISO or the compliance team. It sits with every leader who has a team using these tools.
You’re one of those leaders.
You don’t have to build a perfect program overnight. But you do have to start seeing clearly, setting expectations, and building a record of how your team operates.
That’s governance. And it starts with you.
Next up: Why Security and Compliance Are Not the Same Thing — and Why That Distinction Matters for IT Leads.
