The boards I talk to in 2026 have moved past the curiosity phase. The vocabulary has shifted from "what is generative AI" to "what is our model inventory and who owns it." That shift is recent. It is also the right one.
What boards are missing is a shared framework that lets them ask the right questions without having to become AI practitioners. The framework below is the one I see working — six pillars, each with diagnostic questions a board can put to management, each with an operating answer management can defend to an NCUA examiner.
This is the structure. It is also the order of operations.
Pillar One — Use-Case Prioritization: The Question Boards Should Ask First
Every credit union has a list of AI use cases someone has pitched. Member service chatbots. Loan decisioning. Fraud. Marketing personalization. Internal knowledge management. Collections.
The pillar-one question is not which use cases are technically possible. It is which use cases produce a measurable member or financial outcome and survive a fair lending review. A $2B credit union I have seen do this well started with three filters — material expected impact, examinable defensibility, and time to value under 12 months. They went from a list of 22 use cases to a funded portfolio of 4.
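The three-filter triage can be sketched in a few lines of code. This is an illustrative assumption, not the credit union's actual process: the impact floor, the use-case names, and the dollar figures are invented for the example.

```python
# Hypothetical sketch of the three-filter triage: material expected
# impact, examinable defensibility, time to value under 12 months.
# Thresholds and use-case data are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    expected_annual_impact: float  # dollars, management's estimate
    examinable: bool               # would it survive a fair lending review?
    months_to_value: int           # time from funding to measurable outcome

def fundable(uc: UseCase, impact_floor: float = 250_000) -> bool:
    """Apply all three filters; a use case must pass every one."""
    return (uc.expected_annual_impact >= impact_floor
            and uc.examinable
            and uc.months_to_value <= 12)

pitched = [
    UseCase("Member service chatbot", 400_000, True, 9),
    UseCase("Marketing personalization", 150_000, True, 6),
    UseCase("Loan decisioning", 900_000, False, 18),
    UseCase("Fraud detection", 600_000, True, 10),
]

portfolio = [uc.name for uc in pitched if fundable(uc)]
print(portfolio)  # only the use cases that clear all three filters
```

The point of the sketch is the shape of the discipline, not the numbers: each filter is a hard gate, so a high-impact use case that cannot survive a fair lending review is out, full stop.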
Board diagnostic questions for this pillar.
- What is the expected member or financial impact of each funded AI initiative?
- Which initiatives have been deprioritized, and why?
- How do we know we are working on the right use cases, not just the loudest ones?
Pillar Two — Data Readiness: The Pillar Most Often Skipped
AI on bad data produces bad outcomes faster. The credit unions struggling in 2026 are the ones that bought the AI capability before they fixed the data layer. Member 360 fragmentation across the core, the digital banking platform, the loan origination system, and the marketing stack is still the most common blocker I see.
The pillar-two question is not "do we have a data warehouse." It is "can we trace a member outcome back to the data inputs that drove it, and would that trace satisfy a fair lending review." The answer at most credit unions today is no.
Board diagnostic questions for this pillar.
- Where does our member data live, and how complete is the picture for the member segments we are targeting with AI?
- Who owns data quality, and what is the budget commitment behind that ownership?
- If an examiner asked us to explain an AI-driven decision to a specific member, could we trace it?
Pillar Three — Governance: From Policy Document to Operating Discipline
Most credit unions now have an AI policy. Most of those policies are too generic to drive behavior. What separates ready from not ready is whether the governance is operating, not whether it is documented.
A working AI governance program includes:
- a model inventory updated quarterly,
- a documented risk tiering for each model,
- an approval workflow before any new model goes into production,
- a fairness review for any model touching credit, marketing, or collections, and
- an incident response process for when a model behaves unexpectedly.
The NIST AI Risk Management Framework and SR 11-7 model risk principles are the reference points examiners are tracking, even though SR 11-7 was written for banks.
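A minimal sketch of what one model inventory record might look like, assuming a simple numeric risk-tier scheme. The field names and the example values are assumptions for illustration, not a standard or any vendor's actual schema.

```python
# Illustrative shape of a model inventory record. Field names and the
# three-tier risk scheme are assumptions, not a regulatory standard.
from __future__ import annotations
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    model_id: str
    owner: str                          # accountable executive or team
    vendor: str | None                  # None for in-house models
    touches: list[str]                  # e.g. ["credit", "marketing"]
    risk_tier: int = 3                  # 1 = highest risk
    approved_for_production: bool = False
    last_fairness_review: date | None = None

    def needs_fairness_review(self) -> bool:
        # Per the program above: any model touching credit, marketing,
        # or collections requires a documented fairness review.
        return bool({"credit", "marketing", "collections"} & set(self.touches))

m = ModelRecord("frd-001", owner="COO", vendor="Acme Fraud AI", touches=["credit"])
print(m.needs_fairness_review())  # True: the model touches credit
```

The value of even a toy structure like this is that the board's diagnostic questions map directly to fields: "how many models" is the count of records, "who approves" is an audit trail on `approved_for_production`.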
Board diagnostic questions for this pillar.
- How many AI models are in production at our credit union and at our material vendors?
- Who has authority to approve a new model going into production?
- What was the last model risk decision we documented, and what was the outcome?
Pillar Four — Vendor Management: The Real Risk Surface
The uncomfortable truth about AI in credit unions is that most of the AI in use is not built by the credit union. It is embedded in the core platform, the digital banking provider, the loan origination system, the fraud platform, the marketing automation tool, and the member service chatbot.
Symitar, Jack Henry, Fiserv DNA, Corelation Keystone, Q2, Alkami, MeridianLink, Zest AI, Upstart, Plaid — every one of them has AI inside the product. Vendor management is therefore where AI risk actually lives.
The pillar-four question is whether the vendor due diligence process has been updated to ask AI-specific questions. Model inventory at the vendor. Training data lineage. Model fairness testing. Vendor's incident response when their model is wrong. SOC 2 mapping to AI controls. Right to audit AI-related controls.
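One way to operationalize that AI addendum is as a simple checklist that surfaces gaps for follow-up. The item wording mirrors the questions above; the pass/fail mechanics are an illustrative assumption, not any questionnaire vendor's format.

```python
# Hedged sketch: the AI addendum to a vendor due diligence
# questionnaire as a checklist. Item wording mirrors the article;
# the review logic is an illustrative assumption.
AI_DUE_DILIGENCE = [
    "Vendor maintains a model inventory and shares it on request",
    "Training data lineage is documented",
    "Model fairness testing results are available",
    "Vendor has an incident response process for model failures",
    "SOC 2 report maps controls to the vendor's AI features",
    "Contract grants right to audit AI-related controls",
]

def review(answers: dict[str, bool]) -> list[str]:
    """Return the unanswered or failed items that need follow-up."""
    return [item for item in AI_DUE_DILIGENCE if not answers.get(item)]

# A vendor that has answered only two items satisfactorily:
gaps = review({AI_DUE_DILIGENCE[0]: True, AI_DUE_DILIGENCE[4]: True})
print(len(gaps))  # four items remain open for follow-up
```

The design choice worth noting: anything not affirmatively answered counts as a gap, which is the right default posture for vendor AI risk.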
Board diagnostic questions for this pillar.
- Have we identified which vendors have AI in production and what they do with our member data?
- Has our standard vendor due diligence questionnaire been updated for AI?
- What is our exposure if a material vendor's AI model fails?
Pillar Five — Talent and Training: Why "Hire an AI Person" Is the Wrong Answer
Boards keep asking the same question: "Should we hire a Chief AI Officer?" For most credit unions under $5B, the answer is no.
AI readiness is a coordination problem across risk, IT, lending, marketing, and member experience. A new title with no authority across those functions becomes a fifth wheel inside 12 months. What works at most institutions is assigning AI accountability to an existing C-level executive — usually the COO or the CIO — and building capability below that level through targeted training, selective hires, and CUSO partnerships.
The training side matters more than the hiring side. Every executive in the institution needs working AI literacy. Every risk and compliance officer needs the ability to read a vendor model card. Every lender, marketer, and member-experience leader needs to know what their vendors' AI is doing on their behalf.
Board diagnostic questions for this pillar.
- Who owns AI accountability at the executive level, and what is their authority across functions?
- What is the AI literacy plan for the leadership team and the board itself?
- Where are we hiring versus partnering versus training?
Pillar Six — Measurement: The Pillar That Forces the Other Five
Without measurement, every other pillar drifts. The credit unions making real progress are the ones that have committed to a small set of AI program metrics reported to the board on a regular cadence. Number of models in production. Member or financial outcomes per funded use case. Vendor AI risk exposure. Training completion rates. Incidents and near-misses.
These are not vanity metrics. They are the operational evidence that the framework is alive.
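The board report that makes those metrics useful is the four-quarter trend, not the point-in-time number. A minimal sketch, with metric names taken from the list above and the quarterly values invented for illustration:

```python
# Minimal sketch of a board AI metrics report. Metric names come from
# the article's list; the quarterly numbers are invented examples.
metrics = {
    "models_in_production":    [6, 7, 9, 9],     # last four quarters
    "training_completion_pct": [40, 55, 70, 85],
    "ai_incidents":            [0, 2, 1, 0],
}

def trend(series: list[int]) -> str:
    """Summarize direction from the first to the most recent quarter."""
    if series[-1] > series[0]:
        return "up"
    if series[-1] < series[0]:
        return "down"
    return "flat"

for name, series in metrics.items():
    print(f"{name}: {series[-1]} ({trend(series)} over four quarters)")
```

Even this crude first-versus-latest comparison forces the right conversation: a metric that is flat for four quarters is either healthy and stable or quietly ignored, and the board should know which.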
Board diagnostic questions for this pillar.
- What AI metrics are we reporting to the board, and how often?
- What is the trend on each of those metrics over the last four quarters?
- Are we measuring the things that matter to members, or just the things that are easy to count?
The CUSO Question: Shared Infrastructure vs Going It Alone
Every readiness conversation in 2026 hits the same fork. Build it, buy it, or build it together with other credit unions through a CUSO. The right answer differs by use case.
Use cases that touch member experience and brand differentiation — marketing personalization, channel design, member service tone — are usually best owned directly. The competitive advantage is the institution's voice, not the underlying model.
Use cases that benefit from a data network effect — fraud detection, BSA/AML, lending decisioning at the underwriting level, vendor due diligence — get materially better when more credit unions contribute data to the same model. CUSOs like PSCU, Co-op Solutions, Constellation Digital Partners, and the various regional CUSOs are increasingly the right path for those workloads.
The boards getting this right are explicit. Member-facing differentiation, build or buy. Shared infrastructure, partner. Don't confuse the two.
Board-Level vs Management-Level Decisions
One of the most common failure modes I see is role confusion at the board-management interface. The board ends up reviewing tactical tooling decisions while management asks for board approval on operational details.
The split that works.
The board owns risk appetite, the AI policy framework, and material vendor or CUSO investment decisions over a defined dollar threshold. The board does not own model selection, training-data choices, or which marketing campaign tests an AI tool first.
Management owns use-case selection inside the approved framework, vendor evaluation, model deployment, operating metrics, and the day-to-day risk decisions. Management does not unilaterally raise the institution's AI risk appetite or commit to a CUSO joint investment without board approval.
If your last AI board meeting featured a deck that looked like a vendor demo, the split is broken. If it featured an updated model inventory, an AI metrics dashboard, and one strategic decision the board needs to make, the split is working.
The 90-Day Starting Sprint
For credit unions that are honest about being earlier in this work than they want to be, a 90-day sprint is enough to move from talk to structure.
- Days 1 to 30. Inventory current AI use across the institution and your material vendors. Include shadow AI — the ChatGPT and Claude usage your team is doing without IT's involvement. Map what you find against the six pillars.
- Days 31 to 60. Write or update the AI policy. Name the executive owner. Set up the model inventory and the vendor AI review process. Identify the two highest-priority production use cases.
- Days 61 to 90. Define success metrics for each pilot. Build the board reporting template. Present the framework, the inventory, and the pilot plan to the board for approval against the institution's AI risk appetite.
Ninety days is not enough to be done. It is enough to stop being unstructured.
What This Means for Credit Union CEOs and Board Chairs
The credit unions that are going to compound on AI are not the ones with the biggest budget. They are the ones with the clearest framework, the right ownership, and the discipline to measure what they said they would measure. The boards that are going to add value are the ones that ask sharper questions, not longer ones.
The next 18 months will separate institutions visibly. Members will feel the difference in service quality, lending speed, and fraud protection. Examiners will notice the difference in governance maturity. Talent will go to the institutions that look like they know what they are doing.
The framework is not the work. The framework is what makes the work legible to a board, to an examiner, and to the team doing it. Credit unions that get the framework right in 2026 will spend 2027 compounding. Credit unions that don't will spend 2027 explaining why they are behind.