AI Solutions · April 17, 2026 · 4 min read

EU AI Act 2025: What Every Software Company Needs to Know in 2026

Aunimeda

The EU AI Act entered into force in August 2024. The first prohibitions applied from February 2025, and the GPAI (General Purpose AI) obligations for model providers kicked in August 2025. By April 2026, most remaining compliance deadlines have passed or are imminent.

If you build software that uses AI—or if your clients are in the EU—this regulation affects you. Here's a practical breakdown without the legal jargon.


The Core Framework: Risk Tiers

The AI Act classifies AI systems into four risk tiers. Your obligations depend entirely on which tier your product falls into.

Unacceptable Risk (Prohibited)

These systems have been banned outright since February 2025:

  • Social scoring by governments
  • Real-time biometric surveillance in public spaces (with narrow exceptions for law enforcement)
  • AI that exploits vulnerabilities related to age, disability, or social situation
  • Subliminal manipulation systems

For most software companies: Not relevant unless you're in public sector surveillance.

High Risk

This is where most practical compliance burden sits. High-risk AI systems require conformity assessments, documentation, human oversight, and registration in an EU database before deployment.

High-risk categories include:

  • AI in recruitment (CV screening, interview ranking)
  • AI in credit scoring and financial decisions
  • AI in medical devices and healthcare diagnostics
  • AI in critical infrastructure (energy, transport, water)
  • AI used in education (admission decisions, scoring)
  • Biometric categorization systems

For software companies: If your product makes or strongly influences decisions about individuals in these domains, you're in high-risk territory. This affects many HR tech, fintech, and healthtech products.

Limited Risk

Chatbots, AI-generated content, emotion recognition systems. Primary obligation: transparency. Users must know they're interacting with AI. AI-generated content must be labeled.

For software companies: This catches most customer-facing AI features. A chatbot on your website, an AI writing assistant, AI-generated images—all require disclosure.
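As a minimal illustration of the transparency obligation, a chatbot backend could attach the disclosure to every reply so the UI can surface it and downstream systems can label the content. This is a hypothetical sketch; the names (`ChatReply`, `make_reply`) are illustrative, not from any framework:

```python
# Sketch: attaching an AI-interaction disclosure to chatbot replies.
# All names here are illustrative, not part of any specific library.

from dataclasses import dataclass

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

@dataclass
class ChatReply:
    text: str
    ai_generated: bool = True        # machine-readable flag for content labeling
    disclosure: str = AI_DISCLOSURE  # user-visible transparency notice

def make_reply(model_output: str) -> ChatReply:
    """Wrap raw model output with the metadata the limited-risk tier
    calls for: a disclosure string plus an AI-generated label."""
    return ChatReply(text=model_output)

reply = make_reply("Your order ships tomorrow.")
print(reply.disclosure)  # shown in the UI at the start of the conversation
```

The point of the machine-readable flag is that the same metadata can drive both the on-screen notice and any "AI-generated" labeling of exported content.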

Minimal Risk

Spam filters, AI in video games, inventory forecasting. No specific obligations beyond general product safety law.


GPAI: If You Build or Fine-Tune Foundation Models

The General Purpose AI provisions apply to companies that train or distribute foundation models (like an LLM or a diffusion model).

If you're using OpenAI, Anthropic, or Google APIs—you're a downstream deployer, not a GPAI provider. The compliance burden for the model itself sits with OpenAI/Anthropic/Google.

If you're fine-tuning an open-source model (like Llama 4) and distributing it, you may have GPAI obligations depending on the compute used in fine-tuning.


What Compliance Actually Looks Like in Practice

For a B2B SaaS company with an AI feature:

  • If the AI makes decisions about people in high-risk domains: full conformity assessment, register the system, implement human oversight
  • If it's a chatbot or content generation tool: write a disclosure policy, label AI-generated content, implement a way to opt out

For an e-commerce platform with AI recommendations:

  • Product recommendations: minimal risk, no specific obligations
  • Dynamic pricing that discriminates based on protected characteristics: potentially high risk

For an HR startup with AI screening:

  • This is explicitly high risk. You need technical documentation, a conformity assessment, registration in the EU AI database, human review of decisions, and bias testing.

What About Companies Outside the EU?

The AI Act applies if the output of your AI system is used in the EU—not just if you're incorporated there. If your US, UK, or Central Asian company's AI product is accessed by EU users or EU companies, you're in scope.

This is similar to GDPR's extraterritorial effect. The EU market is large enough that most serious software products will eventually need to address this.


Practical Steps for April 2026

  1. Audit your AI features — categorize each one by risk tier
  2. Check disclosure compliance — any user-facing AI needs clear labeling
  3. For high-risk systems — get legal counsel; conformity assessments are not DIY
  4. Document your AI pipeline — what model, what training data, what decisions it influences
  5. Review vendor contracts — does your AI provider indemnify you for compliance failures?
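Step 1, the feature audit, can start as a plain inventory. A hypothetical sketch—the tier names mirror the Act's four categories, everything else (feature names, models, fields) is illustrative:

```python
# Sketch: an AI-feature inventory for the audit step.
# Tier names follow the Act's four risk categories; the rest is made up.

from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AIFeature:
    name: str
    model: str            # step 4: document which model you use
    decision_domain: str  # what the output influences
    tier: RiskTier

inventory = [
    AIFeature("CV screening", "gpt-4o", "recruitment decisions", RiskTier.HIGH),
    AIFeature("Support chatbot", "claude", "customer Q&A", RiskTier.LIMITED),
    AIFeature("Spam filter", "in-house classifier", "email routing", RiskTier.MINIMAL),
]

# High-risk features are the ones that need a conformity assessment
# and registration in the EU AI database before deployment.
needs_assessment = [f.name for f in inventory if f.tier is RiskTier.HIGH]
print(needs_assessment)  # ['CV screening']
```

Even a spreadsheet with these four columns is enough to hand to legal counsel for step 3.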

The Act is enforced by national market surveillance authorities with fines up to €35M or 7% of global turnover for violations in prohibited categories.


The Silver Lining

Compliance is real work. But the EU AI Act also creates opportunity. Companies that demonstrate AI compliance will have a competitive advantage in selling to regulated industries—finance, healthcare, government—where buyers are increasingly requiring it in RFPs.

"AI Act compliant" is becoming a procurement requirement, not just a legal obligation.


Aunimeda builds AI systems with documentation and transparency built in from day one. If you're building AI-powered products and need to think through compliance, contact us to discuss your architecture.

