Lyron

EU AI Act in practice: What SMBs need to do about their AI automations in 2026

11 min read

The EU AI Act is no longer a distant threat. Prohibitions have been in force since February 2025, obligations for general-purpose AI models like ChatGPT and Claude since August 2025, and the full requirements for high-risk systems come into play from August 2026. Most SMBs underestimate how much they are already affected – whether through a chatbot on their website, an AI-based lead scoring tool or simple prompt templates in internal systems.

This article is not a legal opinion – it is a pragmatic guide. What does the EU AI regulation actually demand of a mid-market company running AI automations? We outline the five steps to make your existing workflows AI-Act-ready before the first warning letter arrives.

At a glance

  • The EU AI Act applies to all companies, not just large tech.
  • SMBs benefit from relief measures, but not from exemptions.
  • The AI literacy obligation has been active since February 2025.
  • Chatbots, voice agents and AI scoring are directly affected.
  • If your automations are well documented, you are already half-compliant.

What does the EU AI Act actually regulate?

The EU AI regulation follows a risk-based approach: not every AI system is regulated equally, but according to its potential for harm. The regulation defines four risk classes – from unacceptable to minimal. The higher the risk, the stricter the obligations.

The four classes, with typical examples and the obligations they trigger:

  • Unacceptable risk: social scoring, manipulative systems, biometric categorisation – banned outright.
  • High-risk: CV screening, credit scoring, medical devices, critical infrastructure – conformity assessment, risk management, documentation, audit.
  • Limited risk: chatbots, voice agents, deepfakes, emotion recognition – transparency (labelling).
  • Minimal risk: spam filters, recommendation engines, AI in video games – voluntary codes of conduct.

For most SMB automations, the decisive category is limited risk: chatbots, AI-assisted email replies, voice agents and generated content. But anyone using AI for HR decisions, credit checks or safety-critical processes quickly enters the high-risk zone.

The timeline: what applies and when?

The EU AI Act's deadlines are staggered – anyone assuming in 2026 that they have until 2027 is mistaken. The key milestones:

  • February 2025: Prohibitions on unacceptable practices + AI literacy obligation (Art. 4)
  • August 2025: Obligations for general-purpose AI (GPAI) providers – OpenAI, Anthropic, Google. Also: sanctions, governance, authority designations.
  • August 2026: Full obligations for high-risk AI under Annex III – e.g. HR software, credit scoring
  • August 2027: Transitional periods for legacy systems and certain product-embedded AI (Annex I)

For a typical mid-market company running AI automations, this means: the AI literacy obligation and chatbot transparency are binding today. Anyone operating high-risk applications must be able to produce a formal conformity assessment by August 2026.

Who does the AI Act apply to – my SMB too?

Short answer: yes, almost certainly. The AI Act recognises two main roles: provider (anyone developing or marketing an AI system) and deployer (anyone using an AI system in their operations). Every SMB that offers a chatbot, uses ChatGPT for customer communications or runs AI-based lead qualification falls at least into the second category.

The good news: SMBs receive relief measures. For example, reduced fees for conformity assessments, simplified documentation templates and national regulatory sandboxes (in Germany coordinated by the Federal Network Agency). But there is no general exemption: obligations such as AI literacy and transparency apply regardless of company size.

Which obligations apply to typical SMB automations?

Chatbots and voice agents

Every chatbot on your website and every voice agent in your service operation carries a transparency obligation (Art. 50). Users must be able to clearly recognise that they are interacting with an AI – at the latest on first contact. A short note ("This assistant is AI-powered") is enough, but it must be clearly visible.

AI scoring for applications or credit

This is where things get serious: anyone using AI for pre-screening applicants, credit checks or access decisions enters the high-risk zone. That means, among other things: a written risk management strategy, complete training data documentation, human oversight for every decision, and a conformity assessment before deployment.

Email classification and ticket routing

Most AI automations in back-office operations – automatic email sorting, ticket assignment, invoice extraction – fall under limited or even minimal risk. Usually, solid documentation of what the automation does is enough. Full transparency obligations only trigger when end users interact directly with the output.

AI-generated content (text, image, video)

Anyone publishing AI-generated content must label it whenever it depicts people, events or statements that are meant to appear real – especially deepfakes. Marketing visuals and synthetic social media posts fall under the labelling rules of Art. 50(4).

5 concrete steps every SMB should take now

For the vast majority of mid-market companies, the following five steps cover most of the AI Act's obligations. We recommend working through them in order – each step builds on the previous one.


1. Build an AI inventory

Before you regulate, you need to know where AI is being used. Most SMBs drastically underestimate this: ChatGPT for sales emails, Copilot in Excel, AI features in the CRM, smart filters in the email system, transcription in Teams – all of it counts. List every system with provider, use case and data processed. This catalogue is the foundation for everything that follows, and supervisory authorities will request it if needed.
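What such an inventory can look like as structured records – a minimal sketch; the systems, providers and field names below are purely illustrative:

```python
# Minimal AI inventory sketch: one record per system (illustrative entries).
inventory = [
    {"system": "Website chatbot", "provider": "OpenAI (GPT-4o)",
     "use_case": "Customer support FAQ", "data": ["chat messages"]},
    {"system": "CRM lead scoring", "provider": "CRM vendor (built-in)",
     "use_case": "Sales prioritisation", "data": ["contact data", "interaction history"]},
    {"system": "Teams transcription", "provider": "Microsoft",
     "use_case": "Meeting notes", "data": ["audio", "speaker names"]},
]

def systems_processing(data_kind):
    """Return the systems whose listed data includes the given kind."""
    return [r["system"] for r in inventory if data_kind in r["data"]]

print(systems_processing("audio"))  # which systems touch audio recordings
```

A spreadsheet does the same job; the point is that every system, its provider, its use case and its data are captured in one queryable place.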


2. Risk classification per system

For each system in the inventory, determine the risk class. Most standard automations (email routing, invoice extraction, content generation) land in minimal or limited. High-risk is concretely defined (Annex III of the regulation) – anyone unsure can consult the European Commission's sector-specific guidelines or the national regulator. This classification determines which obligations follow.
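At its core, the classification is a lookup against defined categories – the mapping below is a deliberately simplified illustration, not a substitute for checking Annex III or official guidance:

```python
# Simplified, illustrative risk classifier. The authoritative assessment
# must follow Annex III of the regulation and the Commission's guidelines.
HIGH_RISK = {"cv_screening", "credit_scoring", "critical_infrastructure"}
LIMITED_RISK = {"chatbot", "voice_agent", "content_generation"}

def classify(use_case: str) -> str:
    if use_case in HIGH_RISK:
        return "high-risk"
    if use_case in LIMITED_RISK:
        return "limited"
    return "minimal"

for uc in ["chatbot", "cv_screening", "email_routing"]:
    print(uc, "->", classify(uc))
```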


3. Add transparency notices

Wherever an AI interacts directly with humans or publishes generated content, clear notices must be in place. This applies to: website chatbots, voice agents in service, automated email replies to external recipients, AI-generated images in marketing materials. The wording can be brief ("Automated reply – AI-assisted"), but must be recognisable before the interaction starts.
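For automated email replies, the technical side is trivial – a sketch of prepending the notice to every outgoing message, with hypothetical wording:

```python
# Illustrative: make the transparency notice the first thing the recipient
# sees, before the generated content itself.
NOTICE = "Automated reply – AI-assisted"

def with_ai_notice(reply_body: str) -> str:
    """Prepend the transparency notice so it is visible before the content."""
    return NOTICE + "\n\n" + reply_body

print(with_ai_notice("Thank you for your enquiry, your ticket number is ..."))
```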


4. Documentation and audit trail

For every productive AI automation, document: purpose, input data, model, provider, thresholds, escalation paths. High-risk systems additionally require a full risk management system under Art. 9 and a permanent audit trail. Anyone orchestrating workflows in n8n, Make or Power Automate gets the audit trail for free: every execution is logged, with inputs and outputs traceable.
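One audit trail entry could look like the following append-only JSON-lines record – the field names are an assumption for illustration, not a mandated schema:

```python
import datetime
import json

def log_execution(purpose, model, prompt, model_input, output,
                  path="audit_log.jsonl"):
    """Append one execution record as a JSON line (append-only audit trail)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "purpose": purpose,
        "model": model,
        "prompt": prompt,
        "input": model_input,
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record
```

Workflow tools log most of this automatically; the snippet only shows which fields should be recoverable per execution.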


5. AI literacy training for staff

Art. 4 of the AI Act obliges employers to ensure that everyone working with AI systems has the AI competence appropriate for their role. Concretely: regular training on fundamentals, limits, data protection and common failure modes of AI. A 60-minute baseline plus role-specific deep dives is sufficient for most SMBs. Document attendance and content.

What SMBs most often miss

Hidden AI in existing SaaS tools

Your CRM suddenly has a "lead score AI"? Your HR tool pre-filters applications automatically? Your email client suggests replies? All these features are AI in the sense of the regulation. Review every SaaS contract: which new AI features were rolled out in the past 12 months – and which risk class do they belong to?

The training obligation also covers "ChatGPT users"

Many executives assume the AI literacy obligation only concerns IT departments developing AI themselves. It actually applies to every employee using AI tools – including the sales colleague drafting quotes with ChatGPT, or the marketing manager using Midjourney. Without documented training: breach.

Prompts are documentation too

A frequently overlooked point: the system prompts, RAG configurations and custom GPTs you use are part of the AI system. They belong in the technical documentation. Especially for custom AI agents in n8n or Make, the logic must be traceable – not just for compliance, but for your own future sanity.
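One way to keep prompts traceable is a small versioned registry stored alongside the workflow – the schema, model name and entries below are assumptions for illustration only:

```python
# Illustrative prompt registry: version system prompts like code, so the
# documented logic matches what actually runs in production.
prompt_registry = {
    "support_classifier": {
        "version": "2026-01-15",
        "model": "gpt-4o-mini",  # assumed model name
        "system_prompt": "Classify the incoming support email into ...",
        "changelog": ["2026-01-15: added 'legal' category"],
    },
}

def current_prompt(name: str) -> str:
    """Return a stable identifier (name@version) for the audit trail."""
    entry = prompt_registry[name]
    return f"{name}@{entry['version']}"

print(current_prompt("support_classifier"))
```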

Practical example: how a documented n8n automation meets the AI Act

Let's take a concrete case: an SMB uses an AI agent in n8n to classify incoming support emails and reply automatically. What does AI-Act-compliant implementation look like?

What the workflow covers

  • Transparency: The automated reply begins with a note ("This reply was generated with AI assistance").
  • Audit trail: n8n logs every execution – input, prompt, model response, output. Traceable for years.
  • Human-in-the-loop: On uncertainty or critical categories (complaints, legal queries), the ticket routes to a human automatically.
  • Data protection: PII is masked before the LLM call; sensitive attachments never reach external models.
  • Documentation: The workflow itself is the documentation – versioning, comments, test runs included.
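The core routing logic above can be sketched in a few lines – an illustrative stand-in for the actual n8n nodes, with made-up thresholds, categories and masking rules:

```python
import re

# Categories that always escalate to a human (assumption for illustration).
CRITICAL = {"complaint", "legal"}

def mask_pii(text: str) -> str:
    """Very rough PII masking before text leaves for an external model."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    return re.sub(r"\+?\d[\d \-/]{7,}\d", "[PHONE]", text)

def route(category: str, confidence: float) -> str:
    """Auto-reply only when the classifier is confident and the topic is safe."""
    if category in CRITICAL or confidence < 0.8:
        return "human"
    return "auto_reply"

print(mask_pii("Please contact jane.doe@example.com"))
print(route("billing", 0.95), route("complaint", 0.99))
```

Production PII masking needs a proper detection library rather than two regexes; the sketch only shows where the masking and escalation sit in the flow.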

The result: the system lands cleanly in limited risk with full transparency compliance. The conformity check is reduced to internal documentation. No external audit, no CE marking, no notified body required. That is the point: if you build your automations documented from the start, the main work is already done.

Three persistent myths about the EU AI Act

Myth 1: "Only big tech is affected"

Wrong. The AI Act does not regulate company size, but how AI systems are used. A 15-person craft business running an automated chatbot on its website has the same transparency obligations as a FTSE 100 corporation.

Myth 2: "We only use ChatGPT, so it's OpenAI's problem"

Half right. OpenAI bears its provider obligations (GPAI rules, active since August 2025). But as soon as you use ChatGPT in your company, you are a deployer – and have your own obligations: AI literacy, purpose documentation, and potentially transparency towards customers.

Myth 3: "I'll wait until it becomes concrete"

Risky. Sanctions are harsh: up to EUR 35 million or 7% of global annual turnover for unacceptable practices. Even "simple" breaches (e.g. missing AI literacy) can cost up to EUR 15 million. And supervisory authorities are building up right now – by 2027, enforcement will be in full swing.

Conclusion: documentation is half the AI Act

At first glance, the EU AI Act looks like a hurdle. In practice – at least for typical SMB automations – it is a documentation project. If you build your workflows from the start with clear descriptions, audit trails and transparency notices, three-quarters of compliance is handled.

The real problems arise where AI is used "hidden": in existing SaaS products, in ungoverned employee prompts, or in unstructured automations where no one really knows what's happening. That is exactly where process automation with transparency stops being nice-to-have and becomes mandatory.

Our advice: start with the AI inventory. Once that's done, you know where you stand – and the remaining steps almost follow by themselves.

AI Act check for your AI automations

In a free initial consultation, we go through your existing automations together and clarify which obligations concretely apply – including an inventory template and a pragmatic implementation plan.

