Your team is already using AI. The question is whether you know about it
Shadow AI is the compliance risk most organizations haven't addressed yet. It's already happening — and a blanket ban won't fix it.
Last week a client told me something that has stuck with me. They had just finished their company AI policy: weeks of workshops, a legal review, an approved tools list. Two days after the rollout, a junior accountant asked whether it was okay to paste client invoices into ChatGPT. He had been doing it for six months. Nobody had told him not to.
That is shadow AI. Not hackers. Just people doing their jobs faster, with tools that carry real compliance risk.
The gap is bigger than you think
A KPMG survey found that over half of office workers use AI tools daily, while fewer than a third of companies have a formal policy covering that use. In most organizations, there is a real gap between what employees are doing and what anyone officially knows about.
Shadow IT is not new. When companies blocked Dropbox because the internal file server was slow, employees switched to USB sticks or personal Google Drive accounts. The same pattern is repeating with AI tools, with higher data protection stakes.
The cases look similar everywhere. An admin pastes customer emails into ChatGPT to draft faster replies. A sales rep uses it to polish a proposal. Someone in marketing uploads a market research report to get a summary. Each looks harmless on its own. Together, you have sent customer data, internal figures, and possibly trade secrets to external servers, without a Data Processing Agreement and without GDPR documentation.
What the law actually says
Under the GDPR, submitting personal data to a tool without a valid Data Processing Agreement is itself a reportable breach. The trigger is not whether harm occurs later; the submission is the event.
That covers more than people tend to assume: customer names, email addresses, purchase histories, employee data, salary details, performance reviews. Even a proposal can contain trade secrets that have no business going through a third-party system.
ChatGPT's free and Plus tiers do not include a DPA. OpenAI offers one for Team and Enterprise customers, but it has to be actively signed; it does not come with the subscription automatically. Some providers also use submitted data for model training unless you opt out in the settings. Most users have no idea that setting exists.
Why banning does not work
The standard IT response: block ChatGPT at the firewall.
The employee saving 20 minutes a day summarizing emails switches to their phone, or finds an alternative tool. The behavior goes underground; it does not stop. You have not removed the compliance risk. You have removed your visibility into it.
This is not speculation. Shadow IT has existed for decades. The pattern repeats because people use tools that help them do their jobs. Blocking access without giving people an alternative does not address the productivity gap. It just moves the problem somewhere less visible.
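To make the bypass concrete: a network-level block usually amounts to DNS filtering, something like the sinkhole entries sketched below. The exact mechanism varies by firewall vendor, so treat this as illustrative.

```
# DNS sinkhole entries, hosts-file style: the typical first reflex
0.0.0.0  chatgpt.com
0.0.0.0  chat.openai.com
# A personal phone on mobile data never queries this resolver,
# so the 20-minutes-a-day use case simply moves off your network.
```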
What a working policy looks like
A useful AI policy answers three questions clearly enough that any employee can recall them without looking anything up: Which tools are approved for general tasks without sensitive data? Which have a signed DPA and can handle internal data? What is never acceptable?
Three sentences on an internal wiki page are enough. Something like: the enterprise tool with a signed DPA is approved for internal data; free public tools are approved only for information that is already public; customer and employee personal data never go into any external AI tool. If answering those questions requires finding a 20-page document and making judgment calls, the policy will not work.
The goal is not airtight control that nobody follows. It is a clear enough framework that well-intentioned work stays on the right side of compliance.
Where to start
Ask your team directly whether they use AI tools at work. The honest count will be higher than the official one. That gap is your actual risk profile.
From there, you choose: enterprise licenses with DPAs (Microsoft Copilot, ChatGPT Enterprise, or Google Workspace AI), or self-hosted models running on your own infrastructure, where data never leaves your control. Both approaches work, depending on your budget and risk tolerance. Waiting and hoping the gap never surfaces as a regulatory problem is not a strategy.
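For a sense of how small the self-hosted route can be, here is a minimal Python sketch, assuming a local Ollama instance on its default port; the model name and prompt are placeholders, not a recommendation.

```python
import requests

# Minimal sketch of querying a self-hosted model via Ollama's local API.
# Assumes Ollama is running on its default port; "llama3" is a placeholder
# for whichever model you actually deploy.
OLLAMA_URL = "http://localhost:11434/api/generate"

def summarize(text: str) -> str:
    response = requests.post(
        OLLAMA_URL,
        json={
            "model": "llama3",
            "prompt": f"Summarize the following text in three sentences:\n\n{text}",
            "stream": False,  # one complete response instead of a stream
        },
        timeout=120,
    )
    response.raise_for_status()
    # Prompt and completion both stay inside your own network:
    # no DPA needed, nothing submitted to an external provider.
    return response.json()["response"]

if __name__ == "__main__":
    print(summarize("Q3 revenue rose 4 percent while support tickets fell by a fifth."))
```

The point is not this particular stack; it is that the compliance question largely disappears when the data never leaves your infrastructure.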
Shadow AI is rarely a people problem. It is usually a tooling gap: employees doing what they were hired to do, without a safe framework for doing it.
If you want to figure out which AI tools make sense for which tasks in your organization, and which ones are GDPR-safe to deploy, our free Automation Check covers that in 30 minutes.