
EU AI Act in 2026: What Solopreneurs and Small Businesses in DACH Actually Need to Do

August 2, 2026 was supposed to be the AI Act enforcement cliff. A failed trilogue on April 28 left the deadline uncertain. Meanwhile Article 4 has been binding since February 2025 and applies to every solo business that uses ChatGPT. An honest, source-heavy guide.

13 min read · 2026-04-30 · By Roland Hentschel
Tags: ai-act, eu, regulation, dach, compliance, solopreneur, sme

Why this post exists#

If you run a one-person agency in Hamburg, a freelance dev shop in Vienna, or a coaching practice in Zurich and you use ChatGPT, Claude, Copilot, or any other AI tool in your business — you are inside the scope of the EU AI Act. That is true regardless of headcount, regardless of revenue, and regardless of whether you "build" AI or just use it. The question is not whether the regulation applies to you. The question is which parts already do, which parts might be delayed, and what compliance actually looks like for a single-person business in 2026.

Most existing AI-Act guides are written for enterprises with legal departments. This post is the opposite — written for the solopreneur who needs to know what to do this week, what to ignore, and where the genuine deadlines are.

Not legal advice. The Act is being actively re-negotiated. For your specific situation, talk to a Datenschutz lawyer.

The current state: April 2026#

The AI Act entered into force on 1 August 2024. Its provisions apply in phases, not all at once. As of late April 2026, three things are simultaneously true:

  1. Article 5 (prohibited practices) and Article 4 (AI literacy) have been binding since 2 February 2025. No grace period left.
  2. The big "Article 6 / high-risk systems" deadline of 2 August 2026 is in active legal limbo. The European Commission's Digital Omnibus on AI (published 19 November 2025) proposes pushing that deadline to 2 December 2027 for standalone systems and 2 August 2028 for embedded ones. Source: DLA Piper analysis of the Digital Omnibus.
  3. The political trilogue on 28 April 2026 ended without agreement. The next attempt is scheduled for 13 May 2026. If no deal is reached and formally adopted before 2 August 2026, the original deadline applies as written.

That last point is the source of all the uncertainty you are reading about. Lawyers are split: some firms tell clients to plan for August 2026, others say the delay is "a near-certainty." Both are guessing. The honest answer is: prepare as if the original deadline holds, because (a) it might, and (b) most of the prep work would be useful regardless.

Source for the timeline mechanics: the official EU AI Act implementation timeline and Holland & Knight's April 2026 analysis.

What is already binding (and most solopreneurs don't know)#

Article 5: Prohibited practices (since 2 February 2025)#

Eight categories of AI use are flat-out illegal in the EU. Most solopreneurs will read the list and think "obviously I don't do that." Read it anyway:

  1. Subliminal or manipulative AI techniques causing harm.
  2. Exploiting vulnerabilities of specific groups (age, disability, social/economic situation).
  3. Social scoring by public or private actors leading to detrimental treatment in unrelated contexts.
  4. Predicting criminal behaviour purely from profiling personality traits.
  5. Untargeted facial-image scraping from the internet or CCTV.
  6. Emotion recognition in workplaces and educational institutions (with narrow exceptions).
  7. Biometric categorisation inferring race, political opinion, religion, sexual orientation, etc.
  8. Real-time remote biometric identification by law enforcement in public spaces.

Source: Article 5, EU AI Act consolidated text and the Commission's February 2025 guidelines on prohibited practices.

The two that catch real businesses by surprise:

  • Emotion recognition in the workplace. If you run a small agency and adopt an AI tool that "analyses sentiment of meeting recordings to flag burnout risk," that is now illegal in the EU outside narrow medical/safety contexts. Doesn't matter that you have 3 employees.
  • Biometric categorisation by protected characteristic. Some marketing analytics tools advertise "audience segmentation by inferred demographics from photos." If those inferred demographics include race, religion, or sexual orientation — illegal.

Article 4: AI literacy (since 2 February 2025)#

This is the obligation almost no solopreneur knows about, and it applies to every single one of you who uses ChatGPT in your work.

The text: providers and deployers of AI systems must ensure a sufficient level of AI literacy of their staff and other persons dealing with AI systems on their behalf, taking into account technical knowledge, experience, education, and the context the AI is used in.

Per the European Commission's official Q&A: "If your company uses any AI tool, you are a deployer. The obligation makes no distinction by sector, company size, or type of AI used." Source: AI Literacy — Questions & Answers, European Commission.

For a solopreneur, "ensuring staff AI literacy" sounds absurd — you are the staff. But the obligation still applies. What it concretely means:

  • You must be able to articulate the limitations of the AI tools you use.
  • You must understand the basic risks of the AI you deploy in client work (hallucination, data leakage, bias).
  • You must be able to explain to clients what the AI is doing and why.
  • If you have any contractor, freelancer, or virtual assistant using AI on your behalf — they fall under the same literacy obligation.

The penalty for Article 4 non-compliance falls under the general infringement tier of the Act: up to EUR 7.5 million or 1.5% of global annual turnover, whichever is higher. Source: Latham & Watkins client alert on Article 4. For a one-person business those caps are theoretical, but the 1.5% turnover formula gives national authorities a calibrated lever.

Enforcement of Article 4 only fully kicks in on 2 August 2026, when national market surveillance authorities are required to be in place. In Germany that role is shared between the Bundesnetzagentur (lead) and the BfDI; in Austria the data protection authority and the Telekom-Control-Kommission share it. Source: Bundesnetzagentur — AI Act prohibited practices and market surveillance role.

Translation for solopreneurs: between now and 2 August 2026, your "AI literacy programme" should at minimum exist on paper. Even if it's a one-pager describing which AI tools you use, what you trained yourself on, and what you tell clients about AI involvement — that beats nothing.
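If it helps to keep that one-pager under version control alongside your other business docs, it can be as simple as structured data rendered to text. A minimal sketch; the tool names, fields, and wording are illustrative, not anything the Act prescribes:

```python
from dataclasses import dataclass


@dataclass
class AIToolEntry:
    """One row of a hypothetical Article 4 'AI literacy one-pager'."""
    tool: str
    used_for: str
    known_limitations: list
    client_disclosure: str


def render_one_pager(entries) -> str:
    """Render the register as plain text for printing or a repo README."""
    lines = ["AI tool register (Article 4 evidence), reviewed quarterly", ""]
    for e in entries:
        lines.append(f"* {e.tool}: used for {e.used_for}")
        lines.append(f"  limitations: {', '.join(e.known_limitations)}")
        lines.append(f"  client disclosure: {e.client_disclosure}")
    return "\n".join(lines)


register = [
    AIToolEntry(
        tool="ChatGPT",
        used_for="first drafts of marketing copy",
        known_limitations=["hallucination", "training-data cutoff"],
        client_disclosure="drafts are AI-assisted and human-reviewed before delivery",
    ),
]

print(render_one_pager(register))
```

Updating the register becomes a one-line diff each time you adopt or drop a tool, which is exactly the quarterly-review habit the obligation rewards.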

What was supposed to start on 2 August 2026#

Assuming the Digital Omnibus delay does not pass:

Transparency obligations (Article 50)#

Apply directly to deployers and providers. The relevant ones for a solo business:

  • Chatbots: if you put an AI chatbot on your website that interacts with humans, you must disclose that visitors are talking to an AI. Even when this is "obvious from context," explicit disclosure is recommended.
  • AI-generated content: any AI-generated image, audio, or video that constitutes a "deep fake" must be marked as such. The Act allows artistic, satirical, and clearly-creative exceptions.
  • AI-generated text published in the public interest (think: news, opinion pieces) must be labelled, unless the content has gone through human editorial review and the publisher takes responsibility.

Source: Article 50, AI Act consolidated text.

For most service-business solopreneurs the practical obligation is: a footer line saying "this site uses AI-generated content where labelled" plus inline labels on AI imagery. Many German-language sites already do this voluntarily.
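Once a site has more than a handful of client-facing assets, the three obligations above reduce to a mechanical check. A hypothetical sketch; the asset fields and names are invented for illustration and are not terms from the Act:

```python
# Each dict describes one client-facing asset; field names are invented for this sketch.
ASSETS = [
    {"name": "support-chatbot", "type": "chatbot", "disclosed": False},
    {"name": "hero.png", "type": "image", "ai_generated": True, "labelled": True},
    {"name": "opinion-post.md", "type": "text", "ai_generated": True, "human_reviewed": True},
]


def article50_findings(assets):
    """Flag assets that still need a disclosure or label per the bullets above."""
    findings = []
    for a in assets:
        if a["type"] == "chatbot" and not a.get("disclosed"):
            findings.append(f'{a["name"]}: add a "you are talking to an AI" notice')
        elif a["type"] == "image" and a.get("ai_generated") and not a.get("labelled"):
            findings.append(f'{a["name"]}: add an AI-generated label')
        elif a["type"] == "text" and a.get("ai_generated") and not a.get("human_reviewed"):
            findings.append(f'{a["name"]}: label as AI-generated or add editorial review')
    return findings


print(article50_findings(ASSETS))  # only the undisclosed chatbot is flagged
```

Running something like this once a quarter, alongside the Article 4 one-pager review, keeps the disclosure audit from ever becoming a project.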

High-risk systems (Article 6, Annex III)#

This is the part the Digital Omnibus is trying to delay. High-risk includes AI in: HR/recruitment screening, credit scoring, education grading, medical devices, critical infrastructure, law enforcement, migration, and democratic processes.

For a typical DACH solopreneur this matters in exactly two scenarios:

  1. You are a recruiter or use AI-driven CV screening. Once enforcement begins, the tool you use must be a CE-marked high-risk system classified under Article 6 and Annex III. You as deployer must run an Article 27 fundamental-rights impact assessment.
  2. You build any kind of AI grading or scoring product — language testing, exam scoring, edtech assessment. You become a provider of a high-risk system, with documentation, data governance, post-market monitoring, and conformity assessment obligations.

Outside those scenarios, Annex III probably does not apply to you. But check carefully if you operate in: legal-tech, fintech (creditworthiness scoring), medtech (any decision-support), or anything touching public administration.

GPAI provider obligations (Article 53+)#

These apply to companies that make General-Purpose AI models — OpenAI, Anthropic, Google, Mistral, etc. They do not apply to you as a user.

Indirect impact: starting 2 August 2026, the Commission can demand documentation, conduct evaluations, and impose fines on GPAI providers. Expect those providers to push more compliance information, watermarking metadata, and transparency notices into their APIs and EULAs. For users of those APIs, this is mostly a cost-passthrough question over the next 12 months. Source: Article 53 enforcement timeline.

The SME and start-up provisions you should actually use#

The Act does not exempt small businesses, but it does carve out concrete SME-friendly mechanisms. Most are underused because almost nobody writing about the AI Act cares about the bottom of the market.

Free regulatory sandbox access. Article 57 requires each member state to operate at least one AI regulatory sandbox by 2 August 2026, with priority access for SMEs and start-ups, free of charge. In Germany, the Bundesnetzagentur is building this; in Austria, the RTR; in Switzerland (outside both the EU and the EEA, so the Act does not apply directly) the EFD has signalled interest. Useful if you are building any AI product near the edge of high-risk classification.

Lighter documentation for SMEs. The Act provides that SMEs and start-ups can submit "equivalent documentation" instead of the full technical file, subject to national authority approval. Source: Holisticai — SME support measures under the AI Act.

No mandatory consultation in fundamental-rights impact assessments. SMEs are explicitly exempt from the consultation requirement when running an FRIA, though they are encouraged to consult where possible.

Proportional fines. Penalties must take into account the size and market share of the company. A 1.5% global turnover cap on EUR 30,000 of revenue is EUR 450, not EUR 7.5 million. National authorities still set actual thresholds.

Caveat: none of these reduce the substantive obligations. They make compliance cheaper and lighter — they do not exempt you.

Are you a "deployer" or a "provider"?#

This distinction decides which obligations attach to you. Confusion here is the single most common mistake among solopreneurs reading the Act.

Deployer: uses an AI system under their own authority. Almost every solopreneur is a deployer of multiple GPAI systems (ChatGPT, Claude, Copilot, etc.).

Provider: develops an AI system or has one developed and places it on the market under their own name or trademark.

The trap: if you take a GPAI model, substantially modify it by fine-tuning it for a specific high-risk purpose, and then offer that fine-tuned system to clients under your own brand, you become a provider of a high-risk system. The threshold for "substantial modification" is set out in Article 25 and is actively debated in legal commentary. A loose rule of thumb: light fine-tuning plus the same intended purpose means you are still a deployer; heavy fine-tuning plus a new intended purpose, especially in Annex III categories, makes you a provider.

Source: Article 25 on responsibilities along the AI value chain.

For a solopreneur consultant who wraps ChatGPT in a custom GPT and sells access — almost certainly still a deployer, because the underlying purpose is general-purpose chat. For a solopreneur who fine-tunes Llama on candidate CVs and sells "AI-powered recruiting screening" — likely a provider of a high-risk Annex III system, with all the documentation that implies. The difference is a 100x compliance burden.
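The rule of thumb above can be written down as a crude triage function. This is a sketch of this post's heuristic, not the Article 25 legal test; the parameter names are invented:

```python
def likely_role(heavy_fine_tune: bool, new_intended_purpose: bool, annex_iii: bool) -> str:
    """Crude deployer-vs-provider triage per the rule of thumb above. Not legal advice."""
    if heavy_fine_tune and new_intended_purpose:
        return "likely provider of a high-risk system" if annex_iii else "likely provider"
    return "likely still a deployer"


# Custom GPT wrapper sold to clients: light adaptation, same general-purpose chat.
print(likely_role(False, False, False))  # likely still a deployer

# Llama fine-tuned on candidate CVs, sold as recruiting screening (Annex III area).
print(likely_role(True, True, True))  # likely provider of a high-risk system
```

Anything the function marks "provider" is your cue to get real legal review before shipping, since that branch carries the 100x compliance burden.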

Concrete five-item checklist for the next four weeks#

  1. Write your AI literacy one-pager. List every AI tool you use (ChatGPT, Claude, Copilot, etc.), what you use it for, the limitations you understand, and what you tell clients about AI involvement. This is your Article 4 compliance evidence. Update quarterly.

  2. Audit your client-facing AI for Article 50 disclosure. Any chatbot needs a clear "you are talking to an AI" notice. Any AI-generated image on social or in marketing needs a label or note. Most solopreneurs are non-compliant here today and don't realise it.

  3. Map your tools against Annex III. If anything you do touches recruitment, scoring, education assessment, credit, or critical infrastructure — even tangentially — get specific legal review before 2 August 2026.

  4. If you have contractors using AI on your behalf — sub-contract clause update. Article 4 makes you responsible for "other persons dealing with AI systems on your behalf." Update freelancer agreements with a one-paragraph AI literacy and confidentiality clause. Especially relevant for VAs, content writers, and overseas dev contractors.

  5. Watch the 13 May 2026 trilogue. If the Digital Omnibus passes, the high-risk timeline shifts to December 2027 / August 2028 and your urgency drops. If it fails again, prepare for 2 August 2026 enforcement as written. Sources for the trilogue calendar: the official EU AI Act news feed and GDPR Register's tracking of the omnibus.

What to ignore (for now)#

If you are a one-person service business that uses standard SaaS AI products and does not develop, fine-tune, or resell AI models:

  • The conformity-assessment process under Article 43 does not apply to you. That is for providers of high-risk systems.
  • The post-market monitoring plan under Article 72 does not apply.
  • The CE marking requirement does not apply.
  • The full technical documentation under Annex IV does not apply.

If you read AI-Act guides written for enterprises and feel overwhelmed — most of what they cover is genuinely not your problem. Article 4 + Article 5 + Article 50 disclosure cover 90% of your real exposure.

The honest summary#

For a DACH solopreneur in April 2026, the AI Act is mostly a paperwork hygiene exercise, not an existential threat. The two binding pieces — prohibited practices (which you are very likely already complying with by accident) and AI literacy (which you probably are not) — are cheap to address with a one-pager and a contractor-clause update. The big-ticket high-risk obligations are either narrowly applicable or actively being delayed.

The practical risk is not the EUR 7.5 million fine cap. It is sleeping through Article 4, getting a complaint from a disgruntled client or contractor, and ending up as the test case the Bundesnetzagentur or RTR decides to make an example of in late 2026. That outcome is avoidable with one afternoon of paperwork.

Do the afternoon of paperwork.


Roland Hentschel

AI & Web Technology Expert

Web developer and AI enthusiast helping businesses navigate the rapidly evolving landscape of AI tools. Testing and comparing tools so you don't have to.