EU AI Act: What Your Organization Needs to Do Now
by Tachyon Labs, Research
The EU AI Act entered into force in August 2024, with obligations phasing in through 2027. If your organization builds, deploys, or procures AI systems that affect people in the EU, this regulation applies to you, regardless of where you are headquartered.
This is not a distant compliance exercise. The first enforcement deadlines have already passed. Here is what matters and what to do about it.
The risk-based framework
The Act classifies AI systems into four risk tiers. Your obligations depend on where your systems fall:
Unacceptable risk — Banned outright. Social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and AI that exploits the vulnerabilities of specific groups.
High risk — Heavy obligations. This covers AI used in employment decisions, credit scoring, critical infrastructure management, law enforcement, and education. Most enterprise AI deployments that affect people's lives or livelihoods land here.
Limited risk — Transparency obligations. Chatbots must disclose that users are interacting with an AI, deepfakes must be labeled as artificially generated, and people exposed to emotion recognition systems must be informed.
Minimal risk — No specific obligations; voluntary codes of conduct are encouraged.
The critical question for most enterprises: are your AI systems high-risk? If they influence hiring, lending, insurance, healthcare, or operational safety — the answer is likely yes.
What high-risk classification means in practice
Organizations deploying high-risk AI systems must implement:
Risk management systems — Not a one-time assessment. A continuous, documented process for identifying, analyzing, and mitigating risks throughout the AI system's lifecycle.
Data governance — Training, validation, and test data must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. You need to document data provenance and demonstrate that your datasets do not encode prohibited biases.
Technical documentation — Detailed records of the system's purpose, capabilities, limitations, performance metrics, and the decisions it was designed to support.
Record-keeping and logging — Automatic logging of system operations sufficient to enable post-hoc auditing. If your AI makes a consequential decision, you need to be able to reconstruct why (see the logging sketch after this list).
Transparency to users — Clear information provided to deployers about what the system does, how it should be used, and what its known limitations are.
Human oversight — The system must be designed so that humans can effectively oversee its operation and intervene when necessary. "Human in the loop" cannot be a rubber stamp.
Accuracy, robustness, and cybersecurity — The system must perform as intended, withstand adversarial inputs, and protect against unauthorized access or manipulation.
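To make the logging obligation concrete, here is a minimal sketch of what decision-level audit logging might look like. The Act prescribes outcomes (traceability), not a schema, so every field name below is an illustrative assumption:

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative schema only: the Act requires traceability, not these fields.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("decisions.jsonl"))

def log_decision(system_id: str, model_version: str, inputs: dict,
                 output: str, confidence: float, reviewer: str) -> None:
    """Append one decision record with enough context to reconstruct it later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,          # which AI system made the call
        "model_version": model_version,  # exact version, for reproducibility
        "inputs": inputs,                # the features the decision rested on
        "output": output,
        "confidence": confidence,
        "human_reviewer": reviewer,      # who could have intervened
    }
    audit_log.info(json.dumps(record))

log_decision("credit-scorer-eu", "v2.3.1",
             {"income_band": "C", "tenure_months": 18},
             "declined", 0.87, reviewer="analyst_442")
```

The point is reconstructability: model version, inputs, and the responsible human are captured at decision time, not backfilled when an auditor asks.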
The timeline that matters
The obligations are phasing in on a staggered schedule:
- February 2025 — Prohibited AI practices banned
- August 2025 — Rules for general-purpose AI models apply
- August 2026 — High-risk AI system obligations fully enforceable
- August 2027 — Obligations for high-risk AI in regulated products
If you are building compliance programs now, you are on schedule. If you have not started, you are behind.
What to do today
Step 1: Inventory your AI systems. You cannot govern what you cannot see. Build a complete registry of every AI system in your organization — purchased, built in-house, or embedded in vendor products.
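A registry can start small. Here is a sketch of what one entry might capture; the fields are our assumptions, not a mandated schema:

```python
from dataclasses import dataclass, field

# Hypothetical registry entry; field names are illustrative, not mandated.
@dataclass
class AISystemRecord:
    name: str
    owner: str                       # accountable team or individual
    source: str                      # "in-house", "purchased", or "vendor-embedded"
    purpose: str                     # what decisions the system supports
    affects_eu_persons: bool
    risk_tier: str = "unclassified"  # filled in during Step 2
    dependencies: list[str] = field(default_factory=list)

registry = [
    AISystemRecord("resume-screener", "HR Tech", "purchased",
                   "shortlist job applicants", affects_eu_persons=True),
    AISystemRecord("support-chatbot", "CX Platform", "in-house",
                   "answer customer questions", affects_eu_persons=True),
]
```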
Step 2: Classify by risk. Map each system to the Act's risk tiers. Be honest about edge cases: regulators will not give you credit for a conveniently lenient self-assessment.
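A triage heuristic like the sketch below can give you a first pass, but it is not a legal determination: Annex III of the Act defines the actual high-risk categories, and borderline systems need counsel's review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Crude first-pass heuristic, NOT a legal determination. The domain list
# paraphrases Annex III use cases; it is not exhaustive.
HIGH_RISK_DOMAINS = {"employment", "credit", "insurance", "education",
                     "critical-infrastructure", "law-enforcement", "healthcare"}

def triage(domain: str, interacts_with_humans: bool) -> RiskTier:
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED  # at minimum, transparency duties
    return RiskTier.MINIMAL

print(triage("employment", True))  # RiskTier.HIGH: full high-risk obligations
```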
Step 3: Gap analysis. For each high-risk system, compare your current practices against the Act's requirements. Where are you already compliant? Where are the gaps?
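Even a crude set-difference surfaces where the work is. The requirement labels below are our shorthand for the obligations summarized earlier, not official terminology:

```python
# Shorthand labels for the high-risk obligations summarized above.
REQUIRED = {"risk-management", "data-governance", "technical-docs",
            "logging", "transparency", "human-oversight", "robustness"}

# What one hypothetical system actually has in place today.
implemented = {"logging", "technical-docs", "data-governance"}

gaps = sorted(REQUIRED - implemented)
print(f"{len(gaps)} gaps: {', '.join(gaps)}")
# 4 gaps: human-oversight, risk-management, robustness, transparency
```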
Step 4: Build the governance framework. Assign accountability, establish review processes, and create documentation standards. This is not a one-person job — it requires coordination across legal, engineering, data science, and business leadership.
Step 5: Implement technical controls. Logging, monitoring, bias testing, adversarial robustness testing, and explainability tooling. Many of these require platform-level capabilities, not manual processes.
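Bias testing is one control that lends itself to automation. Below is a minimal fairness smoke test using demographic parity, one of several possible metrics; the 0.3 tolerance is an illustrative assumption, and the right metric depends on the use case and on legal guidance.

```python
# Minimal fairness smoke test: demographic parity gap between groups.
# The 0.3 tolerance is an illustrative assumption, not a regulatory number.
def demographic_parity_gap(decisions: list[int], groups: list[str]) -> float:
    """Spread between the highest and lowest positive-outcome rates by group."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

decisions = [1, 0, 1, 1, 0, 1, 1, 0]  # 1 = favorable outcome
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(decisions, groups)
assert gap <= 0.3, f"parity gap {gap:.2f} exceeds tolerance"
print(f"parity gap: {gap:.2f}")  # 0.25
```

Run in CI against every model release, a check like this turns bias testing from an annual exercise into a continuous control.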
The governance gap
Most organizations have data governance frameworks. Few have AI governance frameworks. The gap is significant because AI systems introduce risks that traditional data governance does not address: emergent behavior, adversarial manipulation, feedback loops that amplify bias, and opacity in decision-making.
The EU AI Act is the first major regulatory framework that forces organizations to close this gap. It will not be the last. The NIST AI Risk Management Framework, ISO/IEC 42001, and emerging regulations in the UK, Canada, and other jurisdictions are all converging on similar requirements.
Organizations that invest in AI governance infrastructure now will be prepared for the full regulatory wave. Those that treat compliance as a checkbox exercise will find themselves rebuilding from scratch with each new regulation.
Building for compliance, not just passing audits
The difference between organizations that struggle with AI compliance and those that handle it well is infrastructure. Manual compliance processes — spreadsheets, quarterly reviews, ad-hoc documentation — do not scale.
What scales: platform-level governance with automated model inventories, continuous monitoring, built-in audit trails, and reporting that satisfies both technical reviewers and board-level stakeholders.
This is what we build at Tachyon Labs. Not compliance theater — governance infrastructure that makes regulatory readiness a byproduct of how you operate, not a separate workstream.