Attorney-client privilege has always been about who gets to hear the thing the client said. AI did not change the doctrine. It changed the surface area.
When a partner pastes a draft motion into a consumer AI tool to "tighten the language," the question is not whether AI is useful. It is whether the partner just disclosed privileged content to a third party with undefined retention, training rights, and access controls. Most of the time, the answer is yes — and most of the time, the partner did not know that was the question.
The 2026 standard of care is not optional. ABA Formal Opinion 512 settled the high-level framework. State bars are filling in the operational detail. Insurance carriers are starting to ask AI-specific questions on professional liability renewals. Clients — especially financial services clients with their own regulatory obligations — are sending AI-specific addenda to outside counsel.
The firms that build governance now will have a meaningful advantage. The firms that wait will spend 2027 catching up under pressure.
ABA Formal Opinion 512 in plain English
ABA Formal Opinion 512, issued July 29, 2024, applies six existing Model Rules to generative AI. It does not create new ethics rules. It applies the rules every lawyer already knows to a tool most lawyers had not used a year before.
Rule 1.1 (Competence). Lawyers must understand the benefits and risks of AI tools they use. That includes how the tool generates output, what it does with input data, and where its known failure modes lie. The duty of technological competence is no longer a comment buried in Rule 1.1 — it is the heart of the AI governance question.
Rule 1.6 (Confidentiality). Lawyers must take reasonable measures to prevent unauthorized disclosure of client information. Pasting client information into a public AI tool with undefined retention is hard to defend as reasonable. Using an enterprise-licensed tool with documented no-training and no-retention terms is much easier to defend.
Rule 3.3 (Candor). Lawyers cannot file fabricated case citations. The 2023 sanctions in Mata v. Avianca are now standard CLE material, and the rule is unchanged in 2026 — the lawyer is responsible for every citation, every quote, every factual representation, regardless of which tool produced it.
Rules 5.1 and 5.3 (Supervision). Partners must supervise associates and non-lawyer staff using AI. That means written policy, training, and a review process. The supervising lawyer cannot offload responsibility to the tool or to the user.
Rule 1.5 (Fees). If AI dramatically reduces the time required for a task, the firm should consider how that affects billing. The opinion does not require fee reductions, but it does require a defensible position on how fees are set when AI is in the loop.
Informed Consent. Where AI use is material to the representation, the client should be informed. The opinion does not mandate disclosure for every AI use, but it points toward disclosure when AI affects confidentiality, fees, or work product.
State bar opinions worth knowing
ABA Opinion 512 sets the floor. State opinions add operational detail that varies by jurisdiction.
Florida Ethics Opinion 24-1, adopted January 2024, was the first comprehensive state opinion. It addresses confidentiality, competence, supervision, fees, and advertising — and it explicitly requires that lawyers obtain informed consent before using a generative AI tool that retains client information.
California's State Bar Practical Guidance for the Use of Generative AI, issued November 2023, is not a formal opinion but is treated as the operating guidance for California-licensed lawyers. It is more granular than ABA 512 on specific use cases.
New York City Bar Formal Opinion 2024-5 addressed prompts, hallucinations, and supervisory duties in detail. It is persuasive in New York and frequently cited in firm policy documents.
Other jurisdictions have issued or are drafting opinions. The District of Columbia, New Jersey, Texas, and Virginia all have committee guidance in various stages. A multi-state firm needs to track every jurisdiction where it practices, not just where it is headquartered.
Privilege and confidentiality with AI vendors
Privilege turns on three questions. Was the disclosure necessary to the representation? Was the recipient bound to confidentiality? Did the lawyer take reasonable steps to protect the information?
For an enterprise AI vendor with strong contractual terms, all three can be answered yes. For a consumer-tier AI tool used on a personal account with default settings, none of the three is reliably yes. Vendor selection is not an IT decision. It is a privilege decision.
Specific contract terms that matter:
No training on client data. The vendor must contractually agree not to use prompts, outputs, or metadata to train models. "Opt-out" is weaker than "no training by default." Enterprise tiers from Harvey, Spellbook, and Thomson Reuters CoCounsel typically include this. Free consumer tiers typically do not.
Defined retention. Prompts and outputs should be retained for a specified, short window — 30 days is a common standard, with audit logs retained longer under separate access controls. "Indefinite" is not acceptable.
Encryption. In transit and at rest. AES-256 at rest, TLS 1.2 or higher in transit, with documented key management.
Data residency. US-only or a specified region, depending on client base. Some financial services clients now require US-only contractual residency.
Sub-processor disclosure. A current list of all sub-processors and a notification process for adding new ones.
Breach notification. A defined window within which the vendor must notify the firm of an incident. 72 hours is a common standard.
SOC 2 Type II. A current report, annually renewed, available under NDA.
If the vendor cannot put these terms in writing, the firm cannot defensibly use the tool with client data. The analysis is that direct.
Client disclosure norms emerging in 2026
Through 2024 and most of 2025, the prevailing approach to client disclosure was case-by-case. By early 2026, the norm has shifted. The defensible position is now affirmative disclosure in the engagement letter, with an updated firm AI usage policy posted publicly.
The disclosure does not need to be long. A representative paragraph reads roughly like this:
"The Firm uses generative AI tools to support legal work, including document review, research, drafting, and summarization. The tools the Firm uses are enterprise-licensed under contractual terms that prohibit the use of Client information for model training, restrict data retention to short windows, and require encryption and access controls consistent with our duty of confidentiality under Rule 1.6. Client information is not shared with public or consumer AI services. The Firm reviews all AI-assisted output before any work product is delivered. By signing this engagement, the Client acknowledges and consents to this use. Clients may request additional detail or alternative arrangements at any time."
That paragraph plus a publicly posted AI usage policy puts the firm in a strong position on Rule 1.6 and Rule 1.4 disclosure questions.
Internal AI usage policy elements
Every firm with more than 20 lawyers should have a written AI usage policy by mid-2026. It should cover, at minimum:
Approved tools list. Specific tools, by name, with the version or tier approved. Harvey on the enterprise tier, yes. ChatGPT on a personal account, no. The list is updated quarterly.
Approved use cases. Drafting, research, summarization, contract review. And specific prohibitions — for example, no AI use on matters where the client has explicitly opted out, and no AI use on highly sensitive litigation strategy without partner sign-off.
Data handling rules. What data can be input. What data cannot. Redaction standards. Account-level rules ensuring client data touches only firm-managed, enterprise-configured accounts consistent with the Rule 1.6 analysis above.
Verification standard. Every citation verified against the source. Every factual claim spot-checked. Every output reviewed by a lawyer before any external use.
Logging and audit. What is logged. How long. Who reviews. The firm needs to be able to reconstruct AI use on any matter if a client or court asks.
Training requirements. Initial training for all lawyers and staff. Refreshers tied to policy updates and new tool approvals. The Practising Law Institute, the ABA Center for Innovation, and several state bars publish current training material — most firms now build their own internal version layered on top.
Models worth reviewing include the New York State Bar Association's Task Force on Artificial Intelligence report, the ABA's Center for Innovation materials, and the policy templates from the Sedona Conference. None of these is a copy-paste solution, but each is a useful starting frame.
Incident response when something goes wrong
Eventually, something will. A junior associate will paste a confidential settlement term into a consumer AI tool. A partner will discover an AI vendor logged prompts longer than the contract specified. A litigation matter will have a hallucinated citation that gets caught at oral argument.
The firm needs an AI incident response runbook before the incident, not after. It should name the partner who runs incident response, the outside ethics counsel who handles privilege analysis, and the data forensics vendor, and it should cover the breach notification analysis, the client notification protocol under Rule 1.4, and the tribunal disclosure analysis under Rule 3.3.
A mid-market law firm in the southeast that ran a tabletop exercise on AI incident response in late 2025 caught three policy gaps that would have cost them weeks under real pressure. The exercise took four hours. The discovery was worth a year of policy work.
What this means for managing partners
The managing partner question is no longer "should we use AI." It is "what is in writing." Three documents, this quarter:
An updated engagement letter with affirmative AI disclosure language. An AI usage policy posted internally and a public-facing version on the firm site. An AI incident response runbook that sits alongside the cybersecurity incident response plan.
Add a vendor diligence standard with the contract terms above. Add quarterly training. Add a named partner who owns the program. Six artifacts, two quarters, and the firm is well ahead of where most peers are operating.
The 2027 standard of care is going to assume all of this is in place. The firms that wait until then will be defending claims, not making positioning arguments.
Privilege is the asset to protect
The technology is going to keep changing. The vendors will consolidate. New tools will displace incumbents. The competence duty will keep expanding to cover whatever comes next.
What does not change is what privilege protects, why clients pay for it, and what a firm owes when something goes wrong. The governance work in front of every law firm partner this year is not really about AI. It is about preserving the asset the firm has always sold.
The defining competitive gap in legal services over the next 36 months will be the gap between firms that built AI governance as part of their professional identity and firms that bolted it on after a public incident. The firms that win are the ones treating governance as the product, and they are writing it down this quarter.