Something significant happened in January 2026 that most UK business owners completely missed.
The Information Commissioner’s Office, the body that enforces data protection law in the UK, published its first detailed guidance on agentic AI systems. If you’ve heard the term “AI agents” thrown around and wondered what it means for your business, this is the document that starts to answer that question. And the implications are more practical than you might expect.
I’m going to break down what the ICO actually said, explain why it matters even if you’re a 20-person company nowhere near building your own AI, and cover three other changes happening right now, from new data legislation to Google’s latest algorithm update, that will affect how UK businesses use automation in 2026.
What Is “Agentic AI” and Why Should You Care?
An AI agent isn’t a chatbot. It’s a system that acts on its own: triggered by events, it retains context between tasks and interacts with other software without a human pressing buttons each time.
If you’ve ever set up a workflow that automatically sends a follow-up email when a customer enquiry sits unanswered for 24 hours, that’s the beginning of agentic behaviour. The AI decides when to act, what to say, and which system to update. No human in the loop.
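To make that concrete, here’s a minimal sketch of that kind of follow-up agent in Python. It’s illustrative only: `crm`, `mailer`, and `llm` stand in for whatever CRM, email, and AI tooling you actually run, and methods like `fetch_unanswered_enquiries` and `draft_reply` are hypothetical names, not a real library’s API.

```python
from datetime import datetime, timedelta, timezone

FOLLOW_UP_AFTER = timedelta(hours=24)  # act once an enquiry has sat unanswered this long

def run_follow_up_agent(crm, mailer, llm):
    """One pass of the agent: decide when to act, what to say, which system to update."""
    now = datetime.now(timezone.utc)
    for enquiry in crm.fetch_unanswered_enquiries():
        if now - enquiry.received_at < FOLLOW_UP_AFTER:
            continue  # not stale yet; the agent decides not to act
        # The agent drafts the message itself, using the enquiry's stored context
        body = llm.draft_reply(
            customer_name=enquiry.customer_name,
            original_message=enquiry.message,
        )
        mailer.send_email(
            to=enquiry.customer_email,
            subject="Following up on your enquiry",
            body=body,
        )
        crm.mark_followed_up(enquiry.id, at=now)  # update the system of record
```

Run that on a schedule or off an event trigger and you have exactly the behaviour described above: the system decides, acts, and records, with no human pressing buttons each time.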
The ICO’s concern is straightforward: when AI systems make decisions that have real consequences for real people, UK data protection law has something to say about it. Specifically, when those decisions have legal or similarly significant effects on individuals, enhanced obligations kick in.
For a small business, this could look like an AI system automatically rejecting a loan application, filtering job applicants, or scoring customer creditworthiness. These aren’t hypothetical scenarios. They’re things businesses are already automating.
What the ICO Actually Said
The ICO’s January 2026 report, published as part of its Tech Futures series, raised three points that matter for UK businesses of any size.
First, automated decision-making rules apply to AI agents. If your AI system makes decisions about people that have significant effects, you need proper safeguards in place. The UK GDPR has always required this, but the ICO is now explicitly connecting those rules to the newer class of AI tools that businesses are adopting.
Second, hallucinations are a data protection problem. AI systems built on large language models can generate incorrect information and present it as fact. When that incorrect information is about a person, it becomes inaccurate personal data. Under UK law, you’re responsible for the accuracy of personal data you hold and process. If your AI system invents a customer’s address or generates a false credit assessment, that’s your problem to fix.
Third, AI systems can create new personal data at scale. This is subtle but important. An AI agent that analyses your customer database might infer things about people that weren’t in the original data: spending patterns, likely income brackets, churn risk scores. These inferences count as personal data. You need a lawful basis to process them, and you need to tell people about it.
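To see why this catches people out, here’s a toy sketch of inference creating data that was never collected. The `model` and its `score_churn_risk` and `estimate_income` methods are hypothetical; the point is that everything landing in `inferred` is new personal data about an identifiable person.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerRecord:
    customer_id: str
    name: str
    purchase_history: list                        # data the customer actually gave you
    inferred: dict = field(default_factory=dict)  # data your AI created about them

def enrich(record: CustomerRecord, model) -> CustomerRecord:
    # Neither of these values was collected from the customer, but both are
    # about an identifiable person, so they count as personal data: you need
    # a lawful basis to process them and your privacy notice should cover them.
    record.inferred["churn_risk"] = model.score_churn_risk(record.purchase_history)
    record.inferred["likely_income_band"] = model.estimate_income(record.purchase_history)
    return record
```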
I know what you’re thinking. More regulation, more paperwork, more things to worry about. But honestly, I think this guidance is genuinely useful. It’s not written in dense legalese. It’s the ICO saying: here’s what’s coming, here’s what we expect, and here’s how to get ahead of it before it becomes a problem. For a small business, that kind of early warning is valuable. The companies that read this now and adjust won’t be scrambling when the formal code of practice lands later this year.
The Data (Use and Access) Act: What Changed
The ICO guidance isn’t happening in isolation. The Data (Use and Access) Act 2025 received Royal Assent in June 2025 and is being phased in through 2026. It’s the single biggest change to UK data law since we left the EU, and its core amendments to the UK GDPR are expected to take effect in the coming months.
The headline change: the Act relaxes the restrictions on automated decision-making. Previously, a solely automated decision with legal or similarly significant effects could only be made in narrow circumstances: where it was necessary for a contract, authorised by law, or based on the person’s explicit consent. Organisations can now rely on any lawful basis for such decisions, provided suitable safeguards are in place, with stricter rules remaining where special category data is involved.
What does this mean practically? It makes it easier for businesses to use automation in decision-making processes. But “easier” doesn’t mean “unregulated.” You still need safeguards. You still need to tell people when automated decisions are being made about them. And you still need to give them the right to challenge those decisions.
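What might those safeguards look like in practice? A minimal sketch, assuming a hypothetical scoring model and notification service; the three comments map directly onto the obligations above.

```python
import logging

logger = logging.getLogger("automated_decisions")

def decide_application(application, model, notifier):
    result = model.assess(application)  # hypothetical automated assessment

    # Safeguard 1: record the outcome and the reasons, so the decision can
    # be explained and audited later
    logger.info(
        "decision=%s applicant=%s reasons=%s",
        result.outcome, application.applicant_id, result.reasons,
    )

    # Safeguard 2: tell the person an automated decision was made about them,
    # and that they can contest it
    notifier.send(
        to=application.contact_email,
        message=(
            f"Your application was {result.outcome} by an automated system. "
            "You have the right to request a human review of this decision."
        ),
    )
    return result

def handle_challenge(application, original_result, review_queue):
    # Safeguard 3: a challenge always reaches a human, never another model
    review_queue.escalate_to_human(application, original_result)
```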
The good news for UK businesses trading with European partners: the EU renewed its adequacy decision for the UK until at least December 2031. Data can continue to flow between the UK and EU without additional safeguards. That’s one less thing to worry about.
Google Is Also Paying Attention
This isn’t just about data protection law. Google rolled out a core algorithm update in February 2026 that’s directly relevant to how businesses use AI.
The update targets thin, low-value AI-generated content. If you’ve been using ChatGPT to churn out blog posts without adding genuine expertise or insight, your search rankings are likely to suffer. Google is rewarding sites that demonstrate real topical authority and penalising content that reads like it was generated by a machine and published without human oversight.
There’s an irony here. The businesses most likely to benefit from AI automation are also the businesses that need to be most careful about how they use AI for content. Using AI to streamline your operations is smart. Using AI to mass-produce your marketing content without adding your own knowledge and perspective is a rankings risk.
For small businesses, the lesson is clear: use AI where it saves you genuine time on repetitive operational tasks. Don’t use it as a shortcut for the things that require your expertise, like explaining your industry, sharing what you’ve learned, or helping your customers understand complex problems.
What This Actually Means for a 20-Person Business
If you’re running a company with 10 to 50 employees, you’re probably wondering whether any of this applies to you. The answer is: more than you think, but less than the headlines suggest.
You’re probably already using automated decision-making. If your CRM scores leads automatically, if your email system segments customers based on behaviour, if your invoicing software flags overdue accounts for follow-up, these are all forms of automated processing. Most of the time, they don’t trigger the enhanced obligations because the decisions aren’t “significant” in the legal sense. But it’s worth knowing where the line is.
Your data sits somewhere. This is the bit that catches people out. I explain it to business owners like this: most automation companies build the robot in their house. Your data, your workflows, your customer information, all sitting on their servers, under their control. If they go bust, change their pricing, or get acquired, you’re stuck. We build the robot in your house. You own the infrastructure, you control the data, and nobody can pull the plug on you.
If you’re using cloud-based automation tools right now, your business data is sitting on servers you don’t control, in jurisdictions you probably haven’t checked. The combination of the ICO’s AI guidance and the DUAA changes means that where your data lives and who processes it matters more than ever. Self-hosted automation puts you in control of that answer.
You need to know what your AI tools are doing. Not at a technical level. At a “what decisions is this thing making about people” level. If you’re using AI-powered customer service tools, automated hiring screeners, or predictive analytics on customer data, you need to be able to explain what happens and why. The ICO’s code of practice will likely make this a formal requirement.
Three Things to Do This Quarter
You don’t need a compliance officer or a six-figure legal budget. You need to do three practical things.
Map your automated decisions. Write down every place in your business where software makes a decision about a person without a human reviewing it first. Email automations, lead scoring, customer segmentation, payment risk assessments. Just get the list on paper. Most businesses are surprised by how long it is.
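If it helps to make the list concrete, here’s one illustrative shape for that register, written as plain data. The fields are my suggestion, not an ICO-mandated format; a spreadsheet with the same columns works just as well.

```python
# Illustrative register of automated decisions; adapt the fields to your business.
AUTOMATED_DECISIONS = [
    {
        "where": "CRM lead scoring",
        "decision": "ranks inbound leads for sales follow-up",
        "about_a_person": True,
        "significant_effect": False,  # no legal or similarly significant effect
        "human_review": "sales team sees every lead regardless of score",
    },
    {
        "where": "payment risk checks",
        "decision": "flags orders for manual fraud review",
        "about_a_person": True,
        "significant_effect": True,   # can block a purchase, so safeguards apply
        "human_review": "flagged orders are always reviewed by a person",
    },
]
```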
Check where your data lives. For every tool you use that holds customer or employee data, find out: where are the servers? Who else can access the data? What happens to the data if you stop using the tool? If you can’t answer these questions, you have a data sovereignty gap. Our approach to client-owned infrastructure exists precisely because these questions matter.
Read the ICO’s guidance yourself. It’s written in plain English. The Tech Futures report on agentic AI is available on the ICO website and takes about 15 minutes to read. If you’re using AI in your business, it’s 15 minutes well spent. The ICO’s AI and data protection guidance is a good starting point.
Let’s Talk About the “AI Will Replace My Job” Thing
I hear this constantly. It’s the single biggest misconception around automation and AI right now, and this new regulation actually helps put it to rest.
The ICO’s guidance isn’t about banning AI. It’s about making sure humans stay in the picture. Safeguards, oversight, the right to challenge automated decisions. That’s not anti-AI. That’s pro-people.
Automation doesn’t replace jobs. It replaces the parts of jobs that nobody should be doing manually in the first place. The filing, the chasing, the copying data from one system to another, the sending of the same email for the fifteenth time this week. Those tasks don’t need a person. They need a process. And when you automate that process, the person gets their time back to do the work that actually requires a brain, a relationship, or a judgment call.
The businesses getting this right aren’t cutting headcount. They’re freeing up the people they already have to do higher-value work. That’s what good automation looks like. And it’s exactly what the ICO’s framework supports: AI handling the repetitive tasks, humans handling the decisions that matter.
The Bigger Picture
UK AI regulation is moving in a deliberate direction. There’s no standalone AI law yet, and the government has said it prefers a principles-based, sector-specific approach rather than the EU’s more prescriptive AI Act. But that doesn’t mean there are no rules. The UK GDPR, the DUAA, and the ICO’s guidance all apply right now.
The businesses that will navigate this best are the ones that understand where AI is making decisions in their operations, keep their data under their own control, and build automation that they can explain and audit. That’s not just good compliance. It’s good business practice.
If you want to understand what automation could look like for your business, built on infrastructure you own with full visibility over where your data goes, drop me an email. No obligation, no jargon. Just a clear conversation about what’s possible.