
AI Governance for Financial Services

  • Feb 16
  • 7 min read
Most financial institutions are already using AI. Transaction monitoring, fraud detection, customer onboarding, credit decisioning, risk scoring — the adoption has happened quickly and, in many cases, ahead of the governance structures needed to support it.


That gap is now a regulatory priority. Across the EU, the UK, the US, and Singapore, regulators are converging on a common expectation: if you deploy AI in a regulated financial services business, you need to be able to explain how it works, demonstrate that it is fair, and prove that a human being is accountable for it.


This article sets out what regulated firms - particularly payment institutions, electronic money institutions, neo-banks, lenders, and fintechs - need to understand about AI governance, the frameworks that matter, and the practical steps involved in getting it right.


Why AI Governance Matters Now

The regulatory pressure on AI in financial services is not theoretical. It is happening across multiple jurisdictions simultaneously.


The EU AI Act, which entered into force in August 2024, classifies AI systems used for credit scoring, creditworthiness assessments, insurance risk and pricing, and fraud detection as high-risk. Firms deploying these systems face mandatory requirements around risk management, data governance, transparency, human oversight, documentation, and post-market monitoring. The high-risk obligations take effect from August 2026, with penalties of up to 7% of global annual turnover or €35 million for non-compliance.


In the UK, the FCA has taken a different but equally consequential approach. Rather than introducing AI-specific rules, the FCA is applying its existing regulatory framework - Consumer Duty, the Senior Managers and Certification Regime, and operational resilience requirements - to the use of AI. The message is clear: the outcomes AI produces are subject to the same standards as any other business process. If your AI model produces unfair outcomes, creates bias, or makes decisions that cannot be explained to customers, the FCA will treat it as a conduct failure. The FCA has also confirmed it expects firms to establish AI risk committees, embed bias audits, and document human oversight for high-impact decisions.


In the US, the regulatory landscape is more fragmented but no less demanding. The NIST AI Risk Management Framework provides the most widely referenced governance standard, offering a structured approach to identifying, assessing, and managing AI risks. Federal regulators including the OCC, the Federal Reserve, and the FDIC continue to apply existing model risk management expectations to AI and machine learning. The SEC's 2026 examination priorities have elevated AI governance alongside cybersecurity as a primary focus area, displacing cryptocurrency concerns that dominated previous years. For fintechs operating in the US, AI is now treated as part of core risk and compliance infrastructure.


Singapore's MAS has published the FEAT principles - Fairness, Ethics, Accountability, and Transparency - as non-binding guidance that is increasingly referenced in supervisory assessments. The OECD AI Principles, adopted by over 40 countries, provide a baseline that underpins many national frameworks.


For firms operating across borders, the challenge is not understanding any single framework in isolation. It is building governance that satisfies multiple regulators simultaneously without creating duplicative, unscalable structures.


What Regulators Actually Expect

Strip away the regulatory jargon and the expectations converge on six areas.


AI Inventory and Risk Classification. Regulators expect firms to know what AI systems they are using, where they are deployed, and how they are classified by risk. Under the EU AI Act, this means mapping every AI use case against the Act's risk tiers and determining which systems are high-risk. Under NIST, it means maintaining a documented inventory with clear ownership. Under the FCA's principles-based approach, it means being able to demonstrate to your supervisory team that you understand the risks your AI systems create for your customers.
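In practice, an inventory can start as a simple structured register. A minimal sketch in Python, assuming a simplified version of the EU AI Act's tiers - the system names, use cases, and owners here are hypothetical examples, not real systems:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """EU AI Act risk tiers, simplified for illustration."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    """One inventory entry: what the system is, who owns it, how it is classified."""
    name: str
    use_case: str
    owner: str          # the named accountable individual
    risk_tier: RiskTier

def high_risk_systems(inventory: list[AISystem]) -> list[AISystem]:
    """Return the entries that attract the Act's high-risk obligations."""
    return [s for s in inventory if s.risk_tier is RiskTier.HIGH]

inventory = [
    AISystem("txn-monitor-v3", "AML transaction monitoring", "Head of FinCrime", RiskTier.HIGH),
    AISystem("ticket-router", "Internal support triage", "IT Ops Lead", RiskTier.MINIMAL),
]
print([s.name for s in high_risk_systems(inventory)])  # → ['txn-monitor-v3']
```

Even this much - a name, a use case, a named owner, and a risk tier per system - answers the first questions a supervisor will ask.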


Data Governance. Every major framework requires firms to demonstrate that the data used to train and operate AI systems is accurate, relevant, unbiased, and handled in compliance with applicable data protection rules. GDPR, the UK Data Protection Act, and the CCPA all apply. Under the EU AI Act, flawed data governance in a high-risk model is not a technical failing - it is a compliance violation.


Transparency and Explainability. If your AI system makes a decision that affects a customer - whether declining a payment, flagging a transaction, adjusting a credit score, or pricing an insurance product - you need to be able to explain how that decision was reached. The EU AI Act requires disclosure obligations for high-risk systems. The FCA expects explainability under Consumer Duty. US regulators expect it under fair lending and consumer protection rules. The common thread: if you cannot explain it, you should not deploy it.


Bias Testing and Fairness. AI systems can perpetuate or amplify biases in training data across protected characteristics. Regulators increasingly expect firms to conduct regular bias audits, test model outputs for discriminatory patterns, and document remediation steps where issues are identified. This is not optional under the EU AI Act for high-risk systems, and the FCA has signalled it will scrutinise AI-driven consumer outcomes for evidence of unfair treatment.
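One widely used starting point for a bias audit is the adverse-impact ratio (the "four-fifths rule" from US employment-selection practice). A minimal sketch, assuming binary approve/decline outcomes and two pre-defined demographic groups - the decision data below is invented for illustration:

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive outcomes (e.g. approvals) in a group."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one.
    A value below 0.8 is a common trigger for further review -
    a flag for potential disparate impact, not proof of discrimination."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical model decisions (1 = approved) for two demographic groups:
approved_group_a = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]   # selection rate 0.8
approved_group_b = [1, 0, 1, 0, 1, 0, 0, 1, 0, 1]   # selection rate 0.5
ratio = adverse_impact_ratio(approved_group_a, approved_group_b)
print(f"{ratio:.2f}")  # 0.62 — below the 0.8 threshold, so escalate for review
```

A real audit would use more robust metrics and statistical tests, but the documented, repeatable check - run on live model outputs, not just training data - is the point.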


Human Oversight. No major regulatory framework accepts fully autonomous AI decision-making in high-risk financial services contexts. The EU AI Act mandates human oversight for high-risk systems. The FCA expects firms to maintain human-in-the-loop controls for significant decisions. NIST embeds human oversight throughout its risk management lifecycle. Boards and senior management must be able to demonstrate they understand what their AI systems do and have the authority to intervene.


Documentation and Audit Trails. Regulators expect comprehensive records covering model design, training data, validation, performance metrics, changes, and decisions. This is not just for internal risk management - it is for supervisory examination. The EU AI Act requires detailed technical documentation for high-risk systems. The FCA has indicated that audit trail and explainability guidance is expected by the end of 2026.


What This Means for Payment Institutions and EMIs

If you are an authorised payment institution or electronic money institution, AI governance intersects with obligations you are already managing.


Transaction monitoring is the most obvious example. Most firms now use AI or machine learning models in their AML and fraud detection systems. Under the EU AI Act, these are likely to fall within the high-risk classification. Under FCA expectations, the outputs of these models - whether they flag, block, or allow transactions - must be explainable, fair, and subject to human review.


Customer onboarding and KYC processes increasingly rely on AI for identity verification, document authentication, and risk scoring. If these systems produce biased outcomes or make errors that affect customer access, firms face both regulatory risk and Consumer Duty exposure.


Credit decisioning, dynamic pricing, and risk scoring - common in lending-adjacent payment services - all carry explainability and fairness requirements across every major jurisdiction.


The practical implication is that AI governance cannot be treated as a standalone compliance project. It needs to be embedded within your existing compliance, risk management, and operational resilience frameworks. A separate AI policy document that sits unread in a compliance folder does not satisfy any regulator.


Building a Governance Framework That Works

The firms that will manage this well are not the ones with the longest policy documents. They are the ones that build governance into how they actually operate.


Start with what you have. Most regulated firms already have risk management frameworks, model validation processes, and compliance monitoring in place. AI governance should extend these structures rather than replace them. The goal is not to create an entirely new compliance function - it is to ensure your existing frameworks account for the specific risks AI introduces.


Appoint clear ownership. Regulators expect a named individual - whether an AI Officer, a senior manager under SMCR, or a member of the executive team - to be accountable for AI governance. This person needs sufficient authority, resource, and access to the board to be effective. Without clear ownership, governance becomes a shared responsibility that belongs to nobody.


Classify your AI use cases by risk. Not every AI system requires the same level of governance. A model that auto-categorises internal support tickets does not carry the same risk as one that makes credit decisions. A proportionate, risk-based approach - aligned with the EU AI Act's tiering and NIST's risk management lifecycle - ensures you invest governance effort where it matters most.


Build explainability into the design process, not as an afterthought. The most common governance failure we see is firms deploying AI models and then trying to explain them retrospectively. If you cannot articulate how a model reaches its decisions before it goes live, the problem only gets harder at scale.


Test for bias regularly, not once. Bias testing at deployment is necessary but insufficient. Models drift. Training data becomes stale. Customer demographics change. Ongoing monitoring and periodic re-testing are what regulators actually expect, and they are what protect your customers.


Document everything. If it is not documented, it did not happen. Regulators will not accept verbal assurances about model governance. Technical documentation, validation records, change logs, incident reports, and board minutes demonstrating oversight - these are the artefacts that matter in a supervisory examination.


The Cost of Getting It Wrong

The EU AI Act's headline penalty - up to €35 million or 7% of global turnover - gets attention, but the real cost of poor AI governance is broader.


Regulatory intervention, including restrictions on permissions, mandatory remediation programmes, and skilled person reviews, creates operational disruption that far outweighs any fine. Reputational damage from biased or unexplainable AI outcomes is difficult to reverse, particularly for consumer-facing firms. And as the cyber insurance market increasingly conditions coverage on AI-specific controls, firms without documented governance may face higher premiums or reduced coverage.


For payment institutions and EMIs, where safeguarding obligations, Consumer Duty, and operational resilience already create a demanding compliance environment, adding unmanaged AI risk to the picture is not a sustainable position.


Where to Start

If you are a regulated financial institution that uses AI in any customer-facing or risk-related capacity, the starting point is straightforward: understand what AI you are using, classify it by risk, and assess whether your current governance arrangements are adequate.


For firms operating in the EU or serving EU customers, the August 2026 deadline for high-risk systems creates a hard timeline. Conformity assessments and governance implementation typically take six to twelve months. Firms that have not started preparation should begin now.


For UK-authorised firms, the absence of an AI-specific rulebook does not mean the absence of regulatory expectations. The FCA is actively supervising AI use through Consumer Duty, SMCR, and operational resilience, and has confirmed that further guidance on audit trails and explainability is expected by the end of 2026.


For firms operating across multiple jurisdictions, a unified governance framework aligned to the highest common standard - typically the EU AI Act's requirements supplemented by NIST's risk management approach - provides the most efficient path to multi-jurisdiction compliance.


Buckingham Capital Consulting provides AI governance and assurance services for regulated financial institutions worldwide. If you would like to discuss your AI governance requirements, speak to our team.
