Guardrails that let you move fast without breaking trust.
Bias audits, compliance frameworks, and responsible AI policies so you can ship fast without regulatory or reputational risk. Mapped to ISO/IEC 42001 and NIST AI RMF.
The Problem
Ungoverned AI is already a liability.
Invisible bias is indefensible bias: Most AI systems aren't tested for fairness across demographic groups before they reach users. When a hiring algorithm downranks candidates from certain postcodes, or a credit model produces different approval rates by gender, the liability is real. Regulators don't care whether the bias was intentional.
Compliance frameworks keep multiplying and they don't agree: ISO/IEC 42001, NIST AI RMF, the EU AI Act, RBI guidance on AI in financial services. Each has different scope, vocabulary, and documentation requirements. Without a unified governance architecture, you get overlapping audits, redundant docs, and gaps in every framework.
No audit trail when something goes wrong: When an AI decision is challenged by a regulator, customer, or board member, the question is always: 'Show us how this decision was made.' If you can't produce model versioning records, data provenance, and a documented review process, that gap becomes evidence of negligence.
Retrofitting governance costs ten times more: Adding compliance controls to a live AI system means auditing every model in production, tracing data lineage backward, and often retraining on filtered datasets. Companies that do this upfront spend a fraction of what incident-driven remediation costs.
AI policies that nobody follows: Plenty of companies have written some version of an 'AI policy.' Few have one with teeth: defined roles, mandatory review gates, escalation paths, and a way to flag problems from the field. A document in Confluence is not a governance programme.
Our Approach
Built to operate — every day, by your team. A structured programme that maps your AI risk exposure, closes the gaps that matter, and gives you governance your team can run.
Phase 1 — AI risk and exposure mapping (Week 1): We inventory every AI system across your organisation, including models embedded in third-party tools, and assess each against a risk taxonomy: bias exposure, data privacy, explainability, and regulatory scope. We also review existing policies, documentation, and prior audit findings. The output is a full picture of where your highest risk sits before we do anything else. Deliverable: AI system inventory and risk exposure map with criticality ratings by system and risk category
Phase 2 — Bias audit and compliance gap analysis (Weeks 2-3): For your highest-risk models, we run bias audits across demographic and operational groups relevant to your use case. In parallel, we run a gap analysis against the compliance frameworks that apply to your industry and jurisdiction (typically ISO/IEC 42001, NIST AI RMF, and sector-specific regulation). Every gap is documented with a severity rating and remediation path so findings translate directly into prioritised action. Deliverable: Bias audit report for priority models, compliance gap analysis mapped to applicable frameworks, remediation priority matrix
Phase 3 — Governance framework design (Weeks 3-5): We design the governance your organisation needs: an AI risk policy built for your operating model, role-specific accountability structures, mandatory review gates in the model development lifecycle, incident response procedures, and documentation standards for audit readiness. Where applicable, we map controls to ISO/IEC 42001 to support certification. The framework is built to be operationally realistic, not aspirational controls that disappear six months later. Deliverable: AI governance policy suite, accountability framework, model lifecycle review procedures, documentation templates
Phase 4 — Implementation, training, and handover (Weeks 5-6): We work with your technical and compliance teams to put controls in place, close prioritised remediation gaps, and add governance checkpoints to your development workflow. We run targeted training for the roles accountable for ongoing compliance: role-specific guidance for model owners, data teams, and senior leadership — tailored to each group's responsibilities. The engagement closes with a documented handover and a 90-day review checkpoint. Deliverable: Implemented governance controls, training sessions for accountable roles, handover documentation, 90-day review schedule
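To make the Phase 2 bias audit concrete: one widely used screen for cohort-level disparity is the disparate impact ratio (the "four-fifths rule"), where the lowest group's favourable-outcome rate is compared to the highest group's. The sketch below is illustrative only — it is not our audit tooling, and the data, field names, and 0.8 threshold are assumptions for the example:

```python
from collections import defaultdict

def disparate_impact(decisions, group_key="group", outcome_key="approved"):
    """Compute per-group approval rates and the disparate impact ratio:
    the lowest group rate divided by the highest. A ratio below 0.8 is
    the common 'four-fifths rule' flag for potential adverse impact."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for d in decisions:
        g = d[group_key]
        counts[g][1] += 1
        counts[g][0] += int(bool(d[outcome_key]))
    rates = {g: approvals / total for g, (approvals, total) in counts.items()}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical decision log: group A approved 3/4, group B approved 1/4
decisions = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": True}, {"group": "A", "approved": False},
    {"group": "B", "approved": True}, {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
rates, ratio = disparate_impact(decisions)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(ratio)  # 0.333... -> well below 0.8, flag for review
```

A real audit segments by every cohort relevant to the use case (income band, geography, inferred demographics) and pairs the ratio with statistical significance and a severity rating, but the per-cohort rate comparison above is the core of the measurement.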
Deliverables
Assessment Phase (Weeks 1-2)
- AI system inventory covering all models in use, including embedded third-party AI
- Risk exposure map with criticality ratings across bias, privacy, explainability, and regulatory scope
- Compliance gap analysis mapped to ISO/IEC 42001, NIST AI RMF, and applicable sector frameworks
- Bias audit report for priority models with cohort-level findings and severity ratings
Framework Design Phase (Weeks 3-5)
- AI governance policy suite: risk policy, acceptable use policy, and data governance addendum
- Accountability framework with defined roles, decision rights, and escalation paths
- Model lifecycle review procedures with mandatory gates from development through deprecation
- Audit-ready documentation templates mapped to applicable frameworks
Implementation & Handover (Weeks 5-6)
- Implemented governance controls with evidence of closure for prioritised remediation gaps
- Role-specific training for model owners, data teams, and compliance leads
- Handover documentation and 90-day review checkpoint
Who This Is For
Right for you if:
- You have AI in production (or deploying soon) and need governance before a regulator, auditor, or enterprise customer asks for it.
- You operate in a regulated industry (financial services, healthcare, insurance, HR tech) where AI decisions carry compliance or fiduciary exposure.
- You want ISO/IEC 42001 certification or formal conformance with NIST AI RMF, and you need a structured programme — a checklist won't cut it.
- You've had an incident involving an AI system and need to show remediation to stakeholders.
Not right if:
- You're still evaluating whether to use AI at all. Start with our AI strategy and diagnostic to identify use cases before building governance around them.
- You want a one-hour policy template. Governance that holds up under scrutiny requires a real assessment of your systems and risk profile.
Use Cases
Financial Services: A regulated fintech lender had deployed an AI-assisted credit underwriting model and was approaching a regulatory audit. The model had no formal documentation of training data sources, feature selection rationale, or fairness testing. The compliance team had three months to produce an audit-ready governance package. — Ran a retrospective bias audit across approval outputs segmented by income band, geography, and gender-inferred features. Documented the model development process from available artefacts, filled gaps with technical interviews, and built a compliance framework consistent with RBI guidance and ISO/IEC 42001. Produced the full audit documentation package and briefed the compliance lead ahead of the regulatory review. Outcome: The lender passed the audit without remediation requirements. The compliance team now runs quarterly model reviews using our documented procedures.
HR Technology: A SaaS platform used by 40+ enterprise clients for candidate screening had complaints from two clients about disparate shortlisting rates across demographic groups. The company needed to investigate quickly, respond credibly, and prevent recurrence without overhauling the product. — Ran a bias audit on the screening model's outputs across the client datasets in question. Found that one feature (a proxy for candidate location) was driving disparate outcomes in roles where geography was irrelevant. Recommended targeted feature adjustment rather than full retraining. Built a fairness testing protocol into the model release process and drafted client-facing disclosure language. Outcome: Both enterprise clients were retained. The feature adjustment reduced the demographic outcome gap by 78% on affected cohorts. The fairness testing protocol became a release requirement for all subsequent model updates.
Healthcare: A hospital network piloting AI-assisted clinical decision support tools needed to show responsible AI governance to its board and the Ministry of Health as part of a grant compliance review. No formal governance existed, and the AI systems spanned three vendors and two internally built tools. — Inventoried all five AI systems and assessed each against a clinical AI risk framework covering patient safety, explainability, and data privacy. Produced a governance policy suite built on NIST AI RMF. Trained clinical leads and IT on their accountability roles. Prepared the board-level governance summary required for the grant compliance submission. Outcome: Grant compliance review completed without conditions. The board adopted the governance policy at its next meeting. The hospital network now runs quarterly AI system reviews using the framework we delivered.
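The fairness testing protocol from the HR technology case above becomes enforceable when it runs as a release gate: every candidate model is scored on a held-out evaluation set, and the release is blocked if the outcome gap across groups exceeds an agreed threshold. A minimal sketch — the function names, record shape, and 10-point threshold are assumptions for illustration, not the protocol from the engagement:

```python
MAX_RATE_GAP = 0.10  # maximum allowed absolute gap in shortlist rates (assumed)

def shortlist_rates(records):
    """records: iterable of (group, shortlisted_bool); returns rate per group."""
    totals, hits = {}, {}
    for group, shortlisted in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(shortlisted)
    return {g: hits[g] / totals[g] for g in totals}

def fairness_gate(records, max_gap=MAX_RATE_GAP):
    """Return (passed, gap). Run on every candidate release against a
    held-out evaluation set; a failure blocks the release and escalates
    per the governance policy."""
    rates = shortlist_rates(records)
    gap = max(rates.values()) - min(rates.values())
    return gap <= max_gap, gap

# Hypothetical evaluation set: group A shortlisted 6/10, group B 5/10
eval_set = [("A", True)] * 6 + [("A", False)] * 4 + \
           [("B", True)] * 5 + [("B", False)] * 5
passed, gap = fairness_gate(eval_set)
print(passed, round(gap, 2))  # True 0.1
```

Wiring this into CI is what turns a written policy into a mandatory review gate: a failing check is a blocked deployment, not a recommendation.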
Results
Governance that holds up when it matters
Fintech - mid-market credit platform: Regulatory audit passed without remediation requirements. A regulated fintech lender with $45M in annual disbursements came to us three months before a scheduled regulatory audit of their AI underwriting model. The model had been in production for 18 months with no governance documentation. We ran a retrospective bias audit, reconstructed the compliance documentation, and built a governance framework consistent with RBI guidance and ISO/IEC 42001. The audit passed without conditions. The compliance team now runs the framework on their own: quarterly model reviews, documented exception handling, and a defined escalation path when model performance drifts.
Frequently Asked Questions
What's the difference between AI governance and AI compliance?
Compliance is the floor: meeting the specific requirements of a regulation or standard. Governance keeps you above the floor over time. You can pass a point-in-time compliance check without governance, but you'll fail the next audit cycle when your models have drifted and your docs haven't kept up. We build governance that makes ongoing compliance a byproduct of how your team already works — so it never becomes a periodic scramble.
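The "models have drifted" failure mode above is usually caught with a scheduled distribution check between a training-time baseline and current inputs or scores. One common heuristic is the population stability index (PSI); this sketch is a simplified illustration, and the usual 0.1/0.25 thresholds are rules of thumb, not regulatory requirements:

```python
import math

def psi(expected, actual, bins=10):
    """Population stability index between a baseline sample (e.g. scores
    at training time) and a current sample of the same model output.
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]  # baseline bins

    def shares(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1  # bin index for v
        n = len(values)
        # small floor avoids log(0) when a bin is empty
        return [max(c / n, 1e-6) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                   # training-time scores
drifted = [min(1.0, i / 100 + 0.3) for i in range(100)]    # shifted live scores
print(round(psi(baseline, baseline), 4))  # 0.0: stable against itself
print(psi(baseline, drifted) > 0.25)      # True: investigate
```

In a governance framework this check runs on a schedule, the threshold breach raises a documented exception, and the escalation path decides whether the model is reviewed, retrained, or pulled.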
Which frameworks do you work with?
Primarily ISO/IEC 42001, NIST AI RMF, and the EU AI Act risk classification. India: RBI, SEBI, and CDSCO guidance. US: OCC/CFPB on AI in lending, FinCEN for AML, HIPAA/HITECH for healthcare, FDA guidance on AI/ML-based SaMD. UK: FCA and PRA frameworks. If your auditors require a framework not listed here, we assess it during the scoping call.
How long does a governance engagement take?
A full engagement (risk mapping, bias audit, framework design, implementation, handover) runs about six weeks. Organisations with a smaller AI footprint or narrower scope can finish in three to four weeks. We scope precisely after the initial assessment so you're not paying for phases that don't apply.
Can you audit a model built by a third-party vendor?
Yes, within limits. We can audit model outputs (what decisions the model makes and how outcomes distribute across groups) without access to model weights or training data. We can also assess the vendor's documentation against applicable standards. We can't validate internal model architecture or retrain a vendor's model. If the audit turns up a serious concern, we'll help you structure the conversation with that vendor.
What if our AI systems are still being developed?
Earlier is better. Adding governance checkpoints during development costs far less than retrofitting after deployment. If your systems are still being built, we can design a governance framework and put review gates into your build process from the start. This is where ISO/IEC 42001 applies most naturally.
Do you offer ongoing governance support after the engagement?
Yes. The handover is set up so your team can run the framework on their own, and most do. For organisations that prefer ongoing external support (quarterly bias audits, annual framework reviews, or on-call guidance when a new AI system comes in), we offer a retainer scoped to your needs. We'll outline options at handover.





