Why the EU AI Act applies to UK businesses and how to comply

You sit down at your desk, open Shopify, and see a new order from Lyon. It feels like a win. You recently added a sleek AI chatbot to your storefront to handle shipping queries in French and German.
It runs on a basic ChatGPT integration. It costs you £40 a month. It saves your ops manager three hours a day.
But that £40 subscription just pulled your entire UK business into the jurisdiction of the European Union.
Most UK founders assume Brexit shielded them from Brussels. It didn't. If your software touches an EU citizen, you're on the hook. The newly passed EU AI Act doesn't care where your company is registered. It cares where the outputs of your systems land. If you sell abroad, your current customer-facing tools are likely illegal under the new framework. Ignoring this isn't a strategy. It's a massive unpriced risk.
The extraterritorial AI dragnet
The extraterritorial AI dragnet is the legal mechanism in the EU AI Act that forces UK businesses to comply with European regulations the moment their automated systems interact with people located in the EU. You don't need an office in Paris or a warehouse in Berlin. If your website serves a customer in Spain, you're caught in the net.
The European Parliament adopted this landmark law to regulate artificial intelligence based on risk (source: https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law). It explicitly applies to providers and deployers outside the EU if the output of their system is used within the bloc.
This is where UK SMEs get blindsided. You might think you're just a small retailer in Leeds. But the moment you plug a generative AI tool into your customer service workflow, the EU considers you a deployer of an AI system.
The UK is taking a lighter, pro-innovation approach. The UK's Artificial Intelligence (Regulation) Bill focuses on principles rather than strict bans (source: https://lordslibrary.parliament.uk/research-briefings/lln-2024-0014/). That creates a false sense of security at home.
Founders read the domestic headlines and assume they're safe. They build automated email responders. They launch dynamic pricing bots. They deploy automated CV screeners for remote hires.
Then a European customer interacts with that system. The dragnet snaps shut. The penalties for non-compliance are severe. Breaching the Act's transparency and deployer obligations can cost up to €15 million or 3% of global turnover; even supplying incorrect information to regulators carries fines of up to €7.5 million or 1%, and prohibited practices run to €35 million or 7%. A fine of that size doesn't just hurt a £5M revenue business. It kills it.
The problem persists because the tools are so easy to buy. You can add an AI support widget to your site in three clicks. The SaaS vendor makes the sale. You take the legal liability.
It's a mess. The burden falls on the little guy because the Act regulates deployers, not just the companies that build the models. The regulators designed the framework to catch massive tech monopolies, but the net is woven tight enough to drag in a 30-person logistics firm. You can't ignore it and hope they won't notice.
Why outsourcing compliance to your software vendors fails
Relying on your SaaS vendors to shield you from the EU AI Act is a structural failure because their terms of service explicitly push deployer liability back onto you.
I often see lawyers and consultants telling founders to just use enterprise-grade tools like Microsoft 365 Copilot or HubSpot's AI features to stay safe. They're wrong.
Take a standard Zapier flow connected to OpenAI. You set up a Zap to read incoming Zendesk tickets, pass the text to ChatGPT to draft a reply, and email it back to the customer.
OpenAI is the provider under the act. They have to ensure their foundational model meets certain technical standards. But you're the deployer.
Article 50 of the EU AI Act imposes transparency obligations on AI systems that interact directly with humans. Unless it's obvious from the context, you must explicitly inform the user they're talking to a machine.
When your Zapier flow silently emails a customer in Dublin using a human-sounding persona, you're breaking the law. OpenAI won't stop you. Zapier won't stop you. The API just processes the JSON payload you send it.
The vendor provides the engine. You decide where to drive the car. If you drive it into a regulated zone without the required disclosures, the fine lands on your desk. The API doesn't care about cross-border trade rules. It just returns a string of text.
The pattern I keep seeing is founders embedding a chatbot using a third-party wrapper. The wrapper promises AI compliance on its landing page. But look at the webhook. The webhook parses the JSON from the user's input, sends it to the LLM, and returns the answer.
It doesn't check the user's IP address. It doesn't verify their geolocation.
If a German customer asks a question, the system treats them exactly like a customer in Manchester. The lack of routing logic means you're treating unregulated and highly regulated users exactly the same. That's a massive operational blind spot.
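To make the blind spot concrete, here's a minimal Python sketch of the wrapper pattern described above. The function names are hypothetical and the model call is stubbed out; the point is what the handler doesn't do:

```python
import json

def call_llm(prompt: str) -> str:
    # Stand-in for the real model call (OpenAI, Anthropic, etc.).
    return f"Drafted reply to: {prompt}"

def handle_webhook(request_body: str) -> str:
    # Parse the user's input, send it to the model, return the answer.
    # Note what's absent: no IP check, no geolocation lookup, no
    # jurisdiction branch, no AI disclosure. A customer in Berlin is
    # handled identically to one in Manchester.
    payload = json.loads(request_body)
    reply = call_llm(payload["message"])
    return json.dumps({"reply": reply})
```

Every line here is plumbing. Nothing in the flow knows where the user is, so nothing can apply a rule that depends on where the user is.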
The API endpoints you rely on are entirely agnostic to the law. They take an input, and they generate an output. The moment you assume HubSpot or Pipedrive is managing your cross-border legal exposure, you've lost control of your own systems.
Building a compliant routing layer

A technical architecture using Make.com to separate UK and EU traffic, ensuring mandatory AI disclosures are injected only when legally required.
Building a compliant routing layer means intercepting every customer interaction, checking their jurisdiction, and applying the correct AI transparency rules before the system generates a response.
Let's walk through a real setup for a £12M e-commerce brand selling across the UK and the EU. They receive 500 support emails a day.
They want to use AI to draft replies for returns and shipping queries. They don't plug Zendesk directly into an LLM; a middleware layer sits in between.
Here's the operational flow. An email lands in Zendesk. A webhook triggers a scenario in Make. Make pulls the customer profile from Shopify to check their shipping address.
If the address is in the UK, Make sends the email body to the Claude API with a standard prompt. Claude drafts a warm, human-like reply. Make pushes it back to Zendesk as a draft for an agent to approve, or sends it directly.
If the address is in the EU, the logic branches. Make still calls the Claude API. But it appends a strict instruction to the prompt: the output must begin with a clear, mandatory disclosure that the message is AI-generated.
Make takes Claude's response and injects a hardcoded footer: "This message was generated by an artificial intelligence system." Only then does it update Zendesk.
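The Make scenario itself is no-code, but the branching logic it implements reduces to a few lines. A sketch under those assumptions, with `call_claude` as a hypothetical stand-in for the Anthropic API call and an abbreviated country list:

```python
# EU membership check, abbreviated for clarity; a real build would
# carry all 27 member-state codes.
EU_COUNTRIES = {"AT", "BE", "DE", "ES", "FR", "IE", "IT", "NL"}

DISCLOSURE = "This message was generated by an artificial intelligence system."

def call_claude(email_body: str) -> str:
    # Stand-in for the Anthropic API call the Make scenario performs.
    return f"Draft reply regarding: {email_body}"

def draft_reply(email_body: str, country_code: str) -> str:
    answer = call_claude(email_body)
    if country_code in EU_COUNTRIES:
        # EU branch: the disclosure is hardcoded at the routing layer
        # rather than trusting the model to remember to include it.
        return f"{DISCLOSURE}\n\n{answer}"
    # UK and rest-of-world branch: standard reply, no footer.
    return answer
```

The design choice that matters is that the disclosure lives outside the model call. A prompt instruction can be ignored; a string concatenation can't.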
In most cases, I recommend using Make for orchestration, Shopify for geolocation data, Claude for generation, and Zendesk for delivery.
You aren't guessing. You aren't relying on the LLM to remember to add the disclaimer. You enforce the transparency requirement at the routing level. The API call has a strict JSON schema that separates the AI's answer from the mandatory compliance text.
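That schema might look something like the following sketch; the field names are illustrative assumptions, not any vendor's API:

```python
# Illustrative response schema: the model's answer and the compliance
# text live in separate fields, so the disclosure can never be silently
# swallowed into the generated prose.
def build_response(answer: str, jurisdiction: str) -> dict:
    required = jurisdiction == "EU"
    return {
        "answer": answer,
        "compliance": {
            "jurisdiction": jurisdiction,
            "disclosure_required": required,
            "disclosure": (
                "This message was generated by an artificial intelligence system."
                if required else None
            ),
        },
    }

def render(resp: dict) -> str:
    # The routing layer, not the model, decides what the customer sees.
    c = resp["compliance"]
    if c["disclosure_required"]:
        return c["disclosure"] + "\n\n" + resp["answer"]
    return resp["answer"]
```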
A build like this takes roughly two to three weeks. It costs between £6,000 and £9,000 depending on how messy your existing Zendesk tags are.
The known failure mode is data mismatch. If a customer emails from a new address not linked to their Shopify account, Make can't find their location. The system defaults to a null value.
You catch this by failing safely. If the jurisdiction is unknown, the system must default to the strictest regulatory standard. It assumes they're in the EU and applies the disclosure.
And yes, that's a trade-off. It's mildly irritating for a UK customer to see an AI disclaimer. But it's catastrophic for your business to miss an EU customer and breach the act.
The limits of transparency routing
Transparency routing breaks down completely when you try to apply it to systems the EU classifies as high-risk, where simple disclaimers are legally insufficient.
The routing layer works for customer service, marketing, and basic ops. These are low-risk AI use cases. The act just demands transparency.
But if you use AI to screen job applicants, you hit a wall. The EU classifies AI used in employment and worker management as high-risk.
If you use an automated tool to parse CVs from European applicants, adding a disclaimer does nothing. You need a full conformity assessment. You must prove the system is free from bias. You need human oversight protocols. You must register the system in an EU database.
If your invoices come in as scanned PDFs from legacy European suppliers, and you use an AI vision model to extract the data, that's fine. It's internal admin. But the second that AI model makes a decision that affects an EU citizen's livelihood or legal status, the compliance burden multiplies by ten.
Pay attention to this part. You can't solve a high-risk classification with a clever Make scenario.
Before you build anything, audit your use cases. I tell founders this constantly: if you're scoring human behaviour, predicting creditworthiness, or filtering job candidates, stop. The technical debt isn't worth the legal risk. Stick to using AI for operational efficiency and text processing. Leave the high-risk decisions to humans.
Three questions to sit with
The extraterritorial AI dragnet is already active. The grace periods are ticking down. The rules of cross-border trade have fundamentally changed. The technology moved fast, and the regulation is finally catching up. You need to map your automated systems today, before a routine customer interaction turns into a structural crisis.
- When a customer interacts with your automated systems, what exact technical mechanism verifies their geographic location before generating a response? A simple IP check often fails when users have a VPN active.
- If a regulator audited your customer service platform tomorrow, could you produce a log showing that every European user was explicitly informed they were speaking to a machine? You need hard evidence, not just a verbal assurance from your ops manager.
- Which of your current SaaS subscriptions use generative models in the background, and do their terms of service leave you holding the bag for deployer compliance? Go read the fine print on your chatbot provider's website.
Get our UK AI insights.
Practical reads on AI for UK businesses — teardowns, how-to guides, regulatory news. Unsubscribe anytime.