Credit union supervision has always been about safety, soundness, and member protection. AI hasn't changed those goals. It has expanded the surface area where each one can fail and raised the expectations that examiners bring when they go looking.
What follows is the documentation set I would build today for any credit union over $500 million in assets, scaled down for smaller institutions and scaled up for those above $5 billion. It reflects the questions NCUA examiners are asking in 2026, the way they connect AI use to existing supervisory letters, and the structure of a governance document that the board will actually read.
What NCUA Examiners Are Actually Asking About AI
The starting question is always the same: show me the AI inventory.
Examiners want a single list, dated within the last 90 days, that names every AI or machine learning system in use at the credit union. That means features built into core platforms such as Symitar from Jack Henry, Corelation Keystone, or Fiserv DNA; digital banking AI features from providers such as Alkami or Q2; marketing automation tools; chatbots; and any internal experimentation with foundation models from OpenAI, Anthropic, or Google.
The follow-up questions are predictable. Who owns each system? What member-impacting decisions does it influence? What is the vendor's risk classification? When was it last reviewed? Where is the documentation?
Most credit unions I have spoken with cannot produce that list on demand in the first quarter of 2026. The ones that can — usually because the chief risk officer built a simple spreadsheet last year and kept it current — shorten the AI portion of the exam meaningfully and reduce the chance of a documentation finding.
Required Documentation: The Four Artifacts Every Exam Wants
The four artifacts that every NCUA exam will request, regardless of asset size, are the AI inventory, the vendor due diligence file with AI-specific addenda, the model risk write-up scaled to the use case, and the board record showing AI strategic review.
The AI inventory is one row per system, with eight to twelve columns covering name, vendor, business owner, member impact category, risk classification, last review date, key contractual terms, and link to the vendor file.
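One way to see how lightweight this artifact can be: the row described above fits in a small schema. This is an illustrative sketch, not an NCUA-prescribed format; the field names, example values, and the 90-day freshness check are assumptions drawn from the expectations described in this article.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical inventory row. Field names mirror the columns described
# above; none of this is a regulator-mandated schema.
@dataclass
class AIInventoryRow:
    system_name: str
    vendor: str
    business_owner: str
    member_impact: str       # e.g. "BSA/AML alerting", "back-office automation"
    risk_tier: str           # minimal | limited | significant | high
    last_review: date
    key_contract_terms: str  # e.g. model-change notification clause
    vendor_file_link: str

    def review_is_current(self, as_of: date, max_age_days: int = 90) -> bool:
        """Examiners expect the inventory dated within the last 90 days."""
        return (as_of - self.last_review).days <= max_age_days

# Illustrative entry — every value below is a placeholder.
row = AIInventoryRow(
    system_name="Transaction anomaly detection",
    vendor="ExampleVendor",
    business_owner="BSA Officer",
    member_impact="BSA/AML alerting",
    risk_tier="significant",
    last_review=date(2026, 1, 15),
    key_contract_terms="30-day model change notice",
    vendor_file_link="https://intranet.example/vendor-files/anomaly",
)
print(row.review_is_current(as_of=date(2026, 3, 1)))  # True: 45 days old
```

A spreadsheet serves the same purpose; the point is that each row carries enough structure to answer the examiner's follow-up questions on demand.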
The vendor file for an AI-touching system has to go beyond standard due diligence. It needs the vendor's own AI documentation, a statement of how the AI affects member outcomes, the data flows in and out, the model change notification provisions in the contract, and any independent testing the credit union has performed.
The model risk write-up scales with risk. For low-risk uses — back-office automation, internal search — a one-page summary is fine. For higher-risk uses — credit decisioning, BSA/AML alerting, member-facing chatbots making transactional recommendations — the write-up should cover purpose, inputs, performance, fairness considerations, monitoring, and incident response.
The board record is where most credit unions fall short. Examiners look for board minutes that show the AI strategy was discussed, the AI policy was approved, the inventory was reviewed, and any high-risk deployments were authorized. A board packet with a one-line entry "AI update — informational" is not enough. The minutes need to reflect what was actually discussed and what decisions were made.
The Third-Party Risk Management Overlay
Most AI in credit unions enters through vendors, not internal builds. That makes third-party risk management the dominant surface for AI governance, and NCUA's existing third-party risk guidance becomes the de facto AI guidance for vendor-supplied capabilities.
The TPRM overlay for AI requires four additions to the standard vendor file. First, an AI feature inventory at the vendor level — every AI capability the vendor provides, with the credit union's posture on each. Second, model change governance — contractual notification when the vendor materially changes a model, and a credit union process to assess the change. Third, data use and training rights — explicit provisions on whether member data can be used to train vendor models, and on whose behalf. Fourth, performance reporting — a recurring report from the vendor on accuracy, false positive and false negative rates, and any incidents.
The credit unions that handle this best treat their largest core, digital banking, and lending vendors as the priority. Cover the three that touch the most member data first. Smaller vendors fall under a lighter-weight version of the same template.
BSA/AML and the AI Anomaly Detection Question
BSA/AML compliance is where AI documentation matters most operationally and where NCUA and FinCEN expectations converge most clearly. Transaction monitoring systems and anomaly detection systems have used machine learning for years. The question now is whether the credit union understands what its system is actually doing.
The BSA officer's file should include a written description of the anomaly detection methodology, the data sources feeding it, how scenarios are tuned, how alerts are generated and dispositioned, and how false positive and false negative rates are tracked. Where the system uses machine learning that adapts over time, the documentation has to cover the retraining cadence and the validation that retraining did not introduce gaps.
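The false positive tracking mentioned above can be as simple as a recurring tally over alert dispositions. A minimal sketch, assuming the disposition categories below (which are illustrative, not a standard taxonomy); a true false negative rate requires back-testing against known typologies and cannot be read off dispositions alone.

```python
from collections import Counter

# Hypothetical alert dispositions from one monitoring cycle.
dispositions = [
    "SAR_filed", "closed_no_action", "closed_no_action",
    "escalated", "closed_no_action", "SAR_filed",
]

counts = Counter(dispositions)
total = sum(counts.values())

# Common proxy metric: alerts closed with no action as a share of all
# alerts. This approximates the false positive rate; it does not measure
# false negatives, which need independent back-testing.
false_positive_rate = counts["closed_no_action"] / total
print(f"{false_positive_rate:.0%}")  # 3 of 6 alerts -> 50%
```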
FinCEN's position is consistent. The technology does not exempt the institution from a reasonably designed program. If the credit union cannot explain how its monitoring system works, the program is not reasonably designed regardless of vendor performance claims.
The pattern I see in well-run BSA programs is an annual independent validation of the monitoring system, separate from the annual BSA audit, that tests the model against known typologies and reviews the alert disposition workflow. That validation goes in the file. Examiners ask for it.
Fair Lending and Member Impact Analysis
Fair lending applies to credit unions the same way it applies to banks. ECOA and Regulation B do not exempt cooperatives. When an AI system influences a credit decision — even indirectly, through an underwriting recommendation that a human approves — the credit union owns the fair lending implications.
The minimum fair lending posture for AI in lending covers three components. A disparate impact analysis on the model output, performed at intervals tied to model retraining or at least annually. Adverse action notice generation that produces principal-reason codes the member can understand. Documentation of the methodology used to test for proxy discrimination — features that correlate with protected classes even if not directly demographic.
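One common screening test behind a disparate impact analysis is the adverse impact ratio, with the four-fifths rule of thumb as a red-flag threshold. This sketch uses made-up counts and is one test among several; a real analysis would also include significance testing and the proxy discrimination review described above.

```python
def adverse_impact_ratio(approved_protected: int, total_protected: int,
                         approved_control: int, total_control: int) -> float:
    """Ratio of the protected class's approval rate to the control
    group's. Values below 0.8 are a common red flag under the
    four-fifths rule of thumb (a screen, not a legal conclusion)."""
    rate_protected = approved_protected / total_protected
    rate_control = approved_control / total_control
    return rate_protected / rate_control

# Illustrative counts only: 60% vs. 75% approval rates.
air = adverse_impact_ratio(120, 200, 450, 600)
print(round(air, 2))  # 0.60 / 0.75 = 0.8, right at the threshold
```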
Where the credit union uses a third-party model (a fintech partner's underwriting, an indirect auto lending platform, fintech-as-a-service), the contract needs to require the partner to provide the disparate impact analysis. Without it, the credit union is exposed and cannot answer the examiner's question.
Why a One-Page AI Policy Is Not Enough and a Fifty-Page Policy Is Not Read
I have read AI policies that are one page long and AI policies that run to forty-eight pages. Neither extreme works. The one-pager fails because it cannot cover the operational reality of governance — risk classification, vendor process, board reporting, incident response. The forty-eight-pager fails because the board never reads past page eight, and the operational owners cannot tell what is binding versus aspirational.
The governance document I recommend is structured in eight sections, runs 12 to 25 pages of policy with separate appendices, and is reviewed annually by the board with operational ownership in management.
The Eight-Section Governance Document That Works
Section one, purpose and scope. Why the credit union has an AI policy, what systems it covers, who is bound by it.
Section two, definitions and risk classification. What counts as AI under the policy, the four-tier risk classification — minimal, limited, significant, high — and the criteria for each tier.
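The tiering criteria in section two can be reduced to a small decision rule. The logic below is purely illustrative; the inputs and thresholds that actually matter will be credit-union specific and belong in the policy itself, not in code.

```python
def classify_risk_tier(member_facing: bool,
                       influences_member_decisions: bool,
                       uses_member_data: bool) -> str:
    """Illustrative four-tier classification. Assumption: member-facing
    systems that also influence member decisions (credit, BSA/AML
    dispositions) sit in the highest tier; the real criteria are set
    by the policy, not this sketch."""
    if member_facing and influences_member_decisions:
        return "high"
    if influences_member_decisions:
        return "significant"
    if member_facing or uses_member_data:
        return "limited"
    return "minimal"

# An underwriting recommendation engine that a human approves:
print(classify_risk_tier(member_facing=False,
                         influences_member_decisions=True,
                         uses_member_data=True))  # significant
```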
Section three, roles and responsibilities. Board oversight, executive accountability, the AI committee or designated owner, business unit ownership, and the role of risk and compliance.
Section four, lifecycle controls. The required steps from idea to deployment to retirement — risk assessment, vendor due diligence, testing, deployment approval, monitoring, decommissioning. Each step ties to a documentation requirement.
Section five, vendor and third-party requirements. The TPRM overlay, the contractual provisions required for AI vendors, and the process for assessing vendor model changes.
Section six, member protection and fair treatment. Fair lending requirements, member communication standards, complaint handling for AI-related issues, and the disparate impact testing program.
Section seven, monitoring, incident response, and reporting. The metrics tracked, the thresholds for escalation, the incident response process, the regulatory reporting triggers, and the cadence of board reporting.
Section eight, review and revision. Annual review by the board. Triggered review on material change. Documentation of review history.
Appendices contain the AI inventory, the vendor list with classifications, the standard model documentation template, and the incident response runbook. Those live as separate documents because they update more often than the policy itself.
What This Means for Credit Union CEOs and CROs
The pattern I have watched at credit unions ranging from $400 million to $4 billion in assets is consistent. The institutions that do well in 2026 exams have spent six to twelve months building the four artifacts and walking the board through the policy at least twice. The institutions that struggle started the conversation in the month before the exam.
The teams that win are doing three things. They are maintaining a current AI inventory as a living spreadsheet, owned by the CRO or designated AI lead. They are running their largest three vendors through the AI overlay before adding new ones. They are getting AI on the board agenda at least quarterly with a one-page report that the board actually discusses.
The defining competitive gap among credit unions over the next two years will not be technology adoption. The Filene Research Institute and the league system have made the technology widely available. The gap will be governance discipline — which credit unions can integrate AI into lending, member experience, and BSA/AML without losing examiner confidence. The ones building the documentation now will deploy AI faster because each new system fits into a structure that already exists. The ones deferring the documentation will move more cautiously, ship slower, and answer more findings.