AI & ML
EU AI Act: what product teams are adding to release gates
Documentation, human oversight, and logging requirements are landing in CI templates.
Engineering leads report folding risk classification and model cards into the same pipelines as security scans. The goal is provable traceability without blocking every experiment.
The EU AI Act’s tiered obligations have moved from legal slide decks into engineering backlogs. Product teams shipping systems that qualify as high-risk—or that sit adjacent to those categories—are embedding compliance checks next to security and accessibility gates rather than treating them as a pre-launch paperwork step.
Model cards, data lineage summaries, and human oversight workflows are increasingly versioned alongside application code. The intent is defensible evidence: who approved deployment, what data distributions were used for training or fine-tuning, and how overrides and appeals work when automated decisions affect users.
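Versioning a model card alongside code makes it checkable in CI like any other artifact. A minimal sketch of such a check, assuming a JSON model card and a hypothetical required-field list (real schemas vary by team and risk tier):

```python
import json

# Hypothetical required fields for a model card; actual schemas
# differ by organization and by the system's risk classification.
REQUIRED_FIELDS = {
    "model_name",
    "version",
    "training_data_summary",
    "intended_use",
    "human_oversight_contact",
    "deployment_approver",
}

def missing_model_card_fields(card_text: str) -> set[str]:
    """Return required fields absent from a JSON model card."""
    card = json.loads(card_text)
    return REQUIRED_FIELDS - card.keys()

# Example card that omits the two oversight fields on purpose.
card = json.dumps({
    "model_name": "credit-scorer",
    "version": "2.4.1",
    "training_data_summary": "2019-2023 loan applications, EU only",
    "intended_use": "pre-screening, human review required",
})

print(sorted(missing_model_card_fields(card)))
# → ['deployment_approver', 'human_oversight_contact']
```

A check like this runs as a pre-merge gate: the build fails fast with a named list of gaps instead of surfacing missing evidence during an audit.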
CI/CD templates are picking up new hooks: static checks for required documentation fields, automated logging of inference metadata in tamper-evident stores, and feature flags that enforce “human in the loop” paths for certain jurisdictions or customer segments. None of this replaces legal review, but it reduces the gap between stated policy and shipped behavior.
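One way to make inference logs tamper-evident is a hash chain, where each record commits to the hash of its predecessor so any retroactive edit invalidates everything after it. A sketch of that idea, for illustration only; production systems would use an append-only store with external anchoring rather than an in-memory list:

```python
import hashlib
import json

class HashChainLog:
    """Append-only log in which each record includes a hash of the
    previous record, so silently editing history breaks the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self.records = []
        self._prev_hash = self.GENESIS

    def append(self, payload: dict) -> str:
        body = {"payload": payload, "prev_hash": self._prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        body["hash"] = digest
        self.records.append(body)
        self._prev_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute every hash; False if any record was altered."""
        prev = self.GENESIS
        for record in self.records:
            body = {"payload": record["payload"],
                    "prev_hash": record["prev_hash"]}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if record["prev_hash"] != prev or record["hash"] != digest:
                return False
            prev = record["hash"]
        return True

log = HashChainLog()
log.append({"model": "credit-scorer", "decision": "deny", "reviewer": None})
log.append({"model": "credit-scorer", "decision": "approve", "reviewer": "ops-42"})
print(log.verify())  # → True

log.records[0]["payload"]["decision"] = "approve"  # tamper with history
print(log.verify())  # → False
```

The same shape works for the documentation checks in the pipeline: the point is that compliance evidence fails loudly at build time, not quietly at audit time.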
Teams that succeed treat compliance as product quality—measurable, testable, and owned—rather than as an external audit event. That mindset is what keeps release velocity acceptable while regulatory surface area grows.