
GDPR and AI Automation: What You Can and Cannot Do

October 15, 2025 · 12 min read

The most frequently asked question in every automation project: "Is this allowed under GDPR?" The short answer: yes, if you set it up properly. The longer answer is more nuanced than most articles make it seem, because the GDPR isn't a simple checklist. It's a framework that forces you to think about what you do with other people's data. That's not a bad starting point.

What it comes down to

The GDPR doesn't prohibit using AI. The law requires that you have a lawful basis for processing personal data, that you're transparent about what you do, and that you take appropriate security measures. That applies whether you work manually or with AI.

The difference with AI: the scale is larger and the processing is less visible. An employee reading invoices doesn't memorize the names. An AI model processing invoices potentially sends that data to an external server. That's where the attention needs to be. Not because AI is inherently less safe, but because the data takes an extra step that you need to secure and document.

The six legal bases and which ones apply

The GDPR has six legal bases for data processing. In AI automation, we practically use three:

- Consent: the data subject has explicitly agreed, for example via a contact form or AI scan.
- Performance of a contract: the processing is necessary to fulfill a contract, for example invoice processing for a client.
- Legitimate interest: the processing is necessary for a legitimate business interest, provided the data subject's interests don't override it.

That last one, legitimate interest, is the most commonly used basis for internal process automation. If you have your own invoices automatically processed, that's a legitimate business interest. You don't need explicit consent from your suppliers for that. But you do need to document it in a processing register.

Three concrete choices we make

One: we process data within the EU. All n8n instances run on servers in Amsterdam or Frankfurt. The choice for European hosting is non-negotiable. It saves discussions with the data protection officer and prevents legal complexity around international transfers.

Two: we choose Azure OpenAI over the regular OpenAI API. Microsoft guarantees with Azure OpenAI that your data isn't used for model training, that the data is processed in the EU (West Europe region), and that a data processing agreement is available that complies with GDPR. The regular OpenAI API doesn't offer the same EU data residency guarantees by default.
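The regional binding is visible in the endpoint itself: an Azure OpenAI resource is created in a specific region, and every call goes to that resource's own hostname. A minimal sketch of how such a request is addressed, assuming placeholder resource and deployment names and an illustrative `api-version` value; substitute your own Azure values:

```python
# Sketch: addressing a request to an Azure OpenAI chat completions endpoint.
# The resource name, deployment name, and api-version are placeholders;
# take the real values from your own Azure subscription.

def build_azure_openai_request(resource: str, deployment: str, api_key: str,
                               api_version: str = "2024-02-01"):
    """Return the endpoint URL and headers for an Azure OpenAI call.

    The URL is bound to the resource's region: a resource created in
    West Europe keeps the processing inside the EU.
    """
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/chat/completions?api-version={api_version}")
    headers = {
        "api-key": api_key,  # Azure uses an api-key header, not a Bearer token
        "Content-Type": "application/json",
    }
    return url, headers

url, headers = build_azure_openai_request("my-eu-resource", "invoice-gpt", "<key>")
print(url)
```

Because the hostname is fixed per resource, the region choice is enforced at the network level rather than relying on a request parameter you could forget.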

Three: we minimize the data sent to the AI model. If we only need an invoice amount and supplier name, we don't send the entire document including contact person and phone number. Data minimization is a core GDPR principle and costs little extra effort in practice if you build it in from the start.

DPIA: when mandatory, how we handle it

For large-scale or systematic processing of personal data with AI, a Data Protection Impact Assessment (DPIA) is mandatory. In practice: if you structurally run customer data through an AI model, you must document what you do, why, what risks exist, and what measures you take.

We deliver a DPIA template with every project that's specifically tailored to the solution we build. Not a generic 50-page document, but a practical 3-4 page overview that your Data Protection Officer can review. It describes which data is processed, through which systems, with what security, and what risks exist. Concrete and specific.

Data processing agreements: the forgotten step

If you use an AI service that processes data on behalf of your organization, you're required to have a data processing agreement in place. This applies to Azure OpenAI, your hosting provider, your email service, and every other party that has access to personal data.

In practice, these agreements are often already available as standard documents from the provider. Microsoft, Google, and AWS offer them as part of their enterprise contracts. But you need to actively accept and archive them. During an audit, the regulator wants to see proof that you have processing agreements with all parties in the chain.

The pitfall: shadow AI

The biggest GDPR risk in most organizations isn't in professional AI implementations. It's in employees using ChatGPT, Gemini, or Claude themselves to process customer data. Without a data processing agreement. Without management knowing. Without any control over where that data goes.

A concrete example: an employee pastes a customer complaint email into ChatGPT to generate a response. That email contains name, address, customer number, and details about the problem. That data goes to OpenAI's servers in the US, can be used for model training, and there's no data processing agreement.

A clear AI policy for your organization, with guidelines on which tools employees may use and which data they may input, is more important than the technical GDPR compliance of any individual automation project. We help clients draft such a policy as part of every project.