In software engineering, we solved the "did anyone break something?" problem years ago. We call it CI/CD: every commit triggers automated checks, a failing check stops the pipeline, and every result is stored forever. AI governance has the exact same problem. But instead of automated test suites, it runs on annual audits, manually completed spreadsheets, and policy documents that were accurate only on the day they were written.
What Is Rules-as-Code?
Rules-as-Code (RaC) is the practice of converting regulatory obligations — typically expressed in natural language — into machine-executable logic. Instead of a lawyer interpreting Article 9 of the EU AI Act and writing a policy document, an engineer encodes the obligation as a check that can run automatically against real system data.
A simple example: the EU AI Act requires high-risk AI systems to maintain logs with "sufficient granularity to enable post-hoc reconstruction." In a RaC approach, a check runs on every deployment asking: does the logging schema capture the required fields? Are logs being archived to a compliant location? Is the retention policy active? If any answer is no, the deployment is flagged or blocked.
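A check like this is a few lines of code. The sketch below is illustrative, not a real implementation: the `deployment` dict, its field names, and the set of compliant regions are all assumptions standing in for whatever your platform actually exposes.

```python
# Hypothetical deployment descriptor; field names and region list are illustrative.
REQUIRED_LOG_FIELDS = {"timestamp", "input_hash", "model_version", "decision"}
COMPLIANT_REGIONS = {"eu-west-1", "eu-central-1"}

def check_logging_compliance(deployment: dict) -> tuple[bool, str]:
    """Return (passed, reason) for the logging obligation."""
    schema = set(deployment.get("log_schema", []))
    missing = REQUIRED_LOG_FIELDS - schema
    if missing:
        return False, f"log schema missing fields: {sorted(missing)}"
    if deployment.get("archive_location") not in COMPLIANT_REGIONS:
        return False, "logs not archived to a compliant location"
    if not deployment.get("retention_policy_active", False):
        return False, "no active retention policy"
    return True, "logging obligations satisfied"
```

Each "no" answer becomes a failing result with a human-readable reason, which is what lets the pipeline flag or block the deployment automatically.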
The Four Components of a Rules-as-Code Governance System
1. Regulatory Parsing & Mapping
Translate regulatory text into structured, machine-readable obligations. Each obligation is assigned to a control domain — risk management, logging, documentation, human oversight — and tagged with its source regulation and article reference. This is the Control Library: the authoritative source of truth for what compliance requires.
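One way to represent a Control Library entry is a small typed record; this sketch uses a Python dataclass, and the field set is an assumption for illustration, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Control:
    control_id: str          # stable identifier within the library
    domain: str              # e.g. "logging", "risk-management", "human-oversight"
    source_regulation: str   # e.g. "EU AI Act"
    article: str             # article reference in the source text
    obligation: str          # the obligation, restated as a verifiable claim
    tags: tuple = field(default_factory=tuple)

# Illustrative entry; the obligation text is paraphrased, not quoted.
LOGGING_CONTROL = Control(
    control_id="LOG-001",
    domain="logging",
    source_regulation="EU AI Act",
    article="Art. 12",
    obligation="High-risk systems keep logs sufficient for post-hoc reconstruction.",
    tags=("high-risk", "record-keeping"),
)
```

Making entries immutable (`frozen=True`) matters here: a control that checks can silently mutate is no longer an authoritative source of truth.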
2. Executable Checks
Each control is implemented as a check — a function that runs against real data and returns pass, warn, or fail with a reason. Checks query your Git history, ticket system, ML platform, log infrastructure, and vendor contracts — wherever the evidence actually lives.
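The contract of a check can be sketched as a callable returning a structured result. Everything here is illustrative: the `Status`/`CheckResult` shapes, the `DOC-003` identifier, and the evidence field names are assumptions.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Status(Enum):
    PASS = "pass"
    WARN = "warn"
    FAIL = "fail"

@dataclass
class CheckResult:
    control_id: str
    status: Status
    reason: str

# A check is any callable from an evidence snapshot to a CheckResult.
Check = Callable[[dict], CheckResult]

def model_card_is_current(evidence: dict) -> CheckResult:
    """Warn when the model card lags behind the deployed model version."""
    card = evidence.get("model_card_version")
    model = evidence.get("model_version")
    if card is None:
        return CheckResult("DOC-003", Status.FAIL, "no model card found")
    if card != model:
        return CheckResult("DOC-003", Status.WARN,
                           f"model card at {card}, model at {model}")
    return CheckResult("DOC-003", Status.PASS, "model card matches deployed model")
```

The three-valued result is the useful part: `warn` lets you surface drift without blocking a release, while `fail` is reserved for obligations that are flatly unmet.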
3. CI/CD Integration
Checks run on every release trigger, just like your test suite. If a high-risk AI system is deployed without satisfying its compliance checks, the pipeline can block the deployment, require manual sign-off, or create a tracked exception with an audit trail. Compliance becomes a quality gate, not a quarterly exercise.
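The gate itself can be a few lines of policy over check results. This sketch assumes a three-valued check status (an assumption carried through this article's examples) and maps a run's results to one of the three pipeline actions just described.

```python
from enum import Enum

class Status(Enum):
    PASS = "pass"
    WARN = "warn"
    FAIL = "fail"

def gate_decision(results: list[Status]) -> str:
    """Map one run's check statuses to a pipeline action."""
    if Status.FAIL in results:
        return "block"            # hard stop: an obligation is unmet
    if Status.WARN in results:
        return "require-signoff"  # proceed only via a tracked exception
    return "deploy"
```

Whether a `fail` blocks outright or only requires sign-off is a policy choice per system risk tier; the point is that the decision is explicit code, not a judgment call made under release pressure.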
4. Evidence Automation
Every check run generates a timestamped evidence artifact: what was checked, what the result was, what data supported the determination. These artifacts are stored in an evidence repository and can be assembled into an audit pack — a complete, traceable compliance record — at the push of a button.
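A minimal sketch of artifact generation, assuming a JSON-shaped record (the field names and the content-hash identifier scheme are illustrative choices, not a standard):

```python
import hashlib
import json
from datetime import datetime, timezone

def make_evidence_artifact(control_id: str, status: str,
                           reason: str, data: dict) -> dict:
    """Build a timestamped, content-addressed record of one check run."""
    payload = {
        "control_id": control_id,
        "status": status,
        "reason": reason,
        "supporting_data": data,        # the raw facts behind the determination
        "checked_at": datetime.now(timezone.utc).isoformat(),
    }
    # Hash the canonical JSON so the artifact is tamper-evident.
    body = json.dumps(payload, sort_keys=True)
    payload["artifact_id"] = hashlib.sha256(body.encode()).hexdigest()[:16]
    return payload
```

Content-addressing the artifact means any later edit changes its identifier, which is what makes the evidence repository traceable rather than merely searchable.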
Why This Changes Everything
The traditional compliance cycle creates a predictable failure mode: the company deploys AI, compliance is treated as a one-time exercise, the system evolves rapidly, and by the time an audit arrives the compliance story is six versions out of date and takes weeks to reconstruct. Rules-as-Code eliminates the reconstruction step entirely: the audit pack is always current because evidence collection is always running.
Getting Started: The Minimum Viable Governance Stack
- Identify your high-risk obligations — which regulations apply based on your AI system's use case and jurisdiction
- Map obligations to controls — the specific, verifiable checks that demonstrate each obligation is met
- Automate highest-risk checks first — logging retention, model documentation currency, human oversight records
- Wire checks into your release process — even a GitHub Action that posts a compliance summary on each merge beats nothing
- Store your evidence — every check result, with full context, in a searchable archive
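Tied together, the minimum viable stack is genuinely small. This sketch (all names and check logic illustrative) runs a list of checks against an evidence snapshot and builds the kind of summary a CI job could post on each merge:

```python
def run_checks(checks, evidence):
    """Run each (control_id, fn) pair and collect (id, status, reason) results."""
    results = []
    for control_id, fn in checks:
        ok, reason = fn(evidence)
        results.append((control_id, "pass" if ok else "fail", reason))
    return results

def summary(results) -> str:
    """Render results as a short text report."""
    lines = [f"{cid}: {status} ({reason})" for cid, status, reason in results]
    failed = sum(1 for _, s, _ in results if s == "fail")
    lines.append(f"{len(results)} checks, {failed} failing")
    return "\n".join(lines)

# Illustrative checks against a fake evidence snapshot.
evidence = {"retention_days": 365, "model_card_version": "v2"}
checks = [
    ("LOG-001", lambda e: (e.get("retention_days", 0) >= 180,
                           "retention >= 180 days")),
    ("DOC-003", lambda e: (e.get("model_card_version") is not None,
                           "model card present")),
]
print(summary(run_checks(checks, evidence)))
```

Piped into a merge comment or a status check, even this much gives you a running, timestamped compliance record where before there was none.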
This is exactly what Vigilens is building — a platform that makes the entire Rules-as-Code stack available to AI teams without requiring them to build it from scratch.