// Research / Trust, PII & Safety

EU AI Act compliance your auditor and your DPO both want.

Risk classification, technical documentation, post-market monitoring, GDPR DPIA - run by engineers who can also read the regulation. We're not a certification body and we don't audit. We translate the AI Act into artifacts your engineering team produces from CI/CD - so the documentation stays current after we leave.

// What we see

Deadlines are real. The artifacts take longer than you think.

01

Classification gets done after the system is built

Most teams discover their system is high-risk halfway through engineering, then retrofit governance onto a stack that wasn't built with Article 9 obligations in mind. Risk management, data governance, transparency, human oversight - all bolted on at week 20, all visible to a competent auditor.

02

Technical documentation is written from a deck

Annex IV documentation gets compiled by a consultant from architecture slides and second-hand information. The notified body or the supervisory authority opens it, asks where the eval results came from, and the answer doesn't connect to the model registry. The remediation is starting over from the actual system.

03

GDPR and AI Act run as parallel programs

DPIA, ROPA, transfer impact assessment, AI Act technical documentation - all overlap by design. Most teams run them through different vendors against the same systems, pay twice, and end up with two incompatible records of the same processing activity.

// Case Study

We trained EasyDocs' invoice extraction model

EasyDocs is the platform provider - they ship document management software to their own customers. We built and fine-tuned the NLP model that runs inside it, auto-extracting VAT numbers, totals, and addresses from invoices and learning from every user correction. Deployed on their servers, no external dependencies.

  • 98%

    field-level extraction accuracy

  • <300ms

    inference time per invoice

  • On-prem

    deployment with no external dependencies

Read the case study

// What we do

Three things that decide whether the documentation holds up.

Most AI Act work is consulting that produces a binder. The compliance program that survives a supervisory authority is the one where the documentation comes out of the engineering pipeline that's already running.

Risk classification before you build

We classify each AI system against AI Act categories (prohibited, high-risk per Annex III, limited-risk, minimal-risk, GPAI) and the GDPR processing-purpose lens before any engineering decisions calcify. The output is a defensible position you can hand to legal and procurement, with clear obligations attached to each system.

Technical documentation from CI/CD

Annex IV is what supervisory authorities ask for. We build it from your model cards, eval pipelines, and registry - so the architecture, data sources, performance metrics, and known limitations stay current as the system evolves. Auto-generated where possible, version-controlled where not.
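As a minimal sketch of what "documentation from CI/CD" can mean in practice: a CI step that renders a technical-file section from a model-card artifact, so the Markdown in the repo always reflects the registry. The field names and values below are illustrative, not a standard Annex IV schema.

```python
# Hypothetical model-card export from a registry; in CI this would be
# loaded from a JSON artifact rather than hard-coded.
model_card = {
    "model": "invoice-extractor",
    "version": "2.3.1",
    "training_data": ["invoices-2023-q4", "corrections-feed"],
    "metrics": {"field_accuracy": 0.98, "latency_ms_p95": 280},
    "known_limitations": ["degrades on handwritten invoices"],
}

def render_section(card: dict) -> str:
    """Render one technical-file section (Markdown) from a model card."""
    lines = [
        f"## {card['model']} v{card['version']}",
        "",
        "### Data sources",
        *[f"- {d}" for d in card["training_data"]],
        "",
        "### Performance",
        *[f"- {k}: {v}" for k, v in card["metrics"].items()],
        "",
        "### Known limitations",
        *[f"- {l}" for l in card["known_limitations"]],
    ]
    return "\n".join(lines)

print(render_section(model_card))
```

Because the section is regenerated on every pipeline run, a stale technical file shows up as a diff in version control rather than going unnoticed in a shared drive.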

Post-market monitoring wired to production

Article 72 requires ongoing monitoring with reporting obligations. We build drift detection, performance monitoring, and incident classification into the production system, with reporting templates for national competent authorities and trend analysis fed back into the risk management cycle. Not a quarterly PDF.
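One concrete shape this drift detection can take, as a hedged sketch: comparing production score distributions against a training-time reference with the Population Stability Index. The thresholds and data here are illustrative, not the engagement's actual monitoring stack.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a live sample.
    Common rule of thumb: PSI > 0.2 signals meaningful drift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # capture out-of-range values
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)          # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)   # training-time confidence scores
live = rng.normal(0.5, 1.0, 10_000)        # shifted production scores

if psi(reference, live) > 0.2:
    print("drift alert: open incident for risk-management review")
```

The point is the wiring, not the metric: the alert lands in the same incident-classification flow that feeds the Article 72 reporting templates, instead of a dashboard nobody reads.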

// Method fit

AI Act compliance work isn't the right engagement for every AI system.

skip it if

  • Your system isn't high-risk and you don't sell to the EU

    Most marketing-grade chatbots and internal productivity tools aren't high-risk under Annex III. If you don't place AI systems on the EU market and your customers aren't EU-regulated, the AI Act is largely advisory for you - light-touch documentation is enough.

  • You need an ISO-certifiable management system

    If procurement is asking for a certifiable AI management system rather than AI Act conformity, ISO 42001 is the right framework. The two overlap heavily but the deliverable shape is different - controls and evidence vs. risk classification and technical file.

    ISO 42001 AI Management System
  • Your dominant concern is data privacy at the LLM boundary

    If 'customer PII goes to a third-party LLM' is the actual risk, an egress-side redactor is a faster, cheaper control than a full AI Act program. Different problem, different stack - the compliance program comes after the data flow is fixed.

    PII Redaction & LLM Data Privacy

use it if

AI Act compliance fits when you're placing high-risk or GPAI systems on the EU market, your buyers or regulators have started asking, you have an in-flight engineering program that needs the documentation layer wired in, and you want a compliance posture that holds up beyond the launch demo.

// How we work

Classify first. Document from the pipeline. Hand off the monitoring.

Every AI Act engagement starts with classification - because the obligations differ by an order of magnitude across categories. The technical work follows the classification, not the other way around.

01

Risk classification and gap assessment (week one)

We classify each in-scope system against Annex III, the GPAI rules, and the GDPR processing-purpose lens. We cross-reference your existing engineering artifacts (model cards, eval results, deploy logs) and produce a gap matrix tied to specific obligations - so the work plan reflects your actual obligations, not a generic checklist.

02

Build the documentation layer in your stack

Annex IV technical file generated from your CI/CD outputs. Risk management process integrated with the model registry. DPIA and ROPA entries aligned with engineering reality. Your team watches it being built in their own tools - no parallel SharePoint tree to maintain.

03

Hand off the post-market monitoring

We hand off the technical file template, the post-market monitoring dashboards, the incident reporting playbooks for national competent authorities, and the management review cadence. Slack for 30 days after delivery for the questions that come up after we leave.

// Expert insight

The AI Act isn't a parallel universe to your engineering practice - it's the documentation a competent engineering team should already produce. The teams that struggle aren't bad at compliance, they're missing the link between Annex IV and what their CI/CD already outputs. Our job is to wire those two together.

Michał Pogoda-Rosikoń

Co-founder @ bards.ai

See our open-source work

// Why bards.ai

Why us, instead of a Big-Four AI compliance practice.

Most AI compliance work is run by lawyers without engineering depth. Most AI engineering teams skip the regulation. We do both - and we operate under the AI Act ourselves, in the EU.

EU-based, EU-regulated

We're a Polish company shipping AI for EU and global customers. The AI Act and GDPR aren't homework for us - they're the law we operate under. Our PII research is published, our compliance posture is the one we ship.

Crosswalk to ISO 42001 built in

ISO 42001 covers most of the AI Act management system and technical documentation requirements. We map AI Act technical documentation to ISO 42001 Annex A controls in the same engagement so you don't pay twice.

Senior engineers only, no juniors

Every person on your engagement has shipped AI to production. The technical documentation we produce reads like an engineering document because it's produced by engineers. No ramp-up tax, no learning the regulation on your dollar.

// FAQ

Common questions about EU AI Act compliance

Do you certify or audit AI Act compliance?

No. We're not a notified body and we don't audit. For most high-risk Annex III use cases the AI Act allows internal control conformity assessment - the company self-assesses against the requirements. We help you build the documentation, controls, and evidence so that internal assessment (or, where required, a notified body's review) holds up. Audit and certification belong to accredited bodies.

What are the AI Act deadlines?

Prohibited practices: banned since February 2025. GPAI obligations: from August 2025. High-risk systems under Annex III: full compliance from August 2026. High-risk systems integrated as safety components of regulated products (Annex I): August 2027. Most companies have 12-18 months of real work ahead, and the documentation requirements are non-trivial.

How do I know if my system is high-risk?

Two paths. Annex I covers AI as a safety component of products already regulated under EU harmonization legislation (medical devices, machinery, automotive). Annex III lists eight standalone use cases including biometric ID, critical infrastructure, education, employment, essential services (incl. credit scoring and insurance pricing), law enforcement, migration, and administration of justice. We classify against both in the gap assessment.

How does the AI Act interact with GDPR?

Heavily. AI Act Article 10 references GDPR for data governance. A DPIA under Article 35 GDPR is effectively required for any high-risk AI system. ROPA entries need to capture the AI processing purpose. Lawful basis for training data is its own analysis - legitimate interest is increasingly contested for web-scraped training data after recent CJEU developments. We run AI Act and GDPR in one engagement.

What does an engagement cost?

Engagements start at $40K. Most AI Act compliance projects land between $40K and $150K depending on the number of in-scope systems, GPAI vs. high-risk obligations, and whether GDPR work is in scope. Fixed-fee proposal after the first scoping call - no time-and-materials surprise.

// Let's ship it

Send us your AI portfolio. We'll send back a classification.

Tell us about the AI systems, the buyer geography, and the timeline. We'll come back with a classification, a gap assessment, and a roadmap to AI Act and GDPR compliance - usually within a business day. Engagements from $40K, typically 4-8 weeks of structured engineering work.

Michał Pogoda-Rosikoń

Co-founder @ bards.ai