Why “AI training for lawyers” is misnamed #
The phrase “AI training for lawyers” describes the wrong thing. The actual obligation under Model Rule 1.1 and Comment 8 is not training in “AI” as a category. It is training in the tool the lawyer is using, sufficient to make competent decisions about when to rely on it. ABA Formal Opinion 512 says this in plain language:
“Knowing what GAI is in the abstract does not satisfy Rule 1.1 when an attorney is using a specific tool with specific training data, specific output behaviour, and specific failure modes.”
The implication: a CLE titled “Introduction to AI for Lawyers” that surveys what generative AI is, what it can do, and what its risks are in general terms does not, by itself, satisfy the competence obligation. What satisfies the obligation is training in the specific tool the lawyer is going to use, the failure modes of that tool, and the verification practices that protect against those failure modes.
This is the same standard that has applied to legal research for decades. Reading a textbook about Westlaw is not training in Westlaw; using Westlaw under supervision until the lawyer can reliably retrieve a case is.
The curriculum below reflects that standard. It is organised by what the lawyer needs to be able to do after training, not by what the lawyer needs to have heard about.
The five categories of AI tooling and what each requires #
The competence obligation differs by category. An AI-competent lawyer in 2026 needs operational fluency in each.
Category 1: Legal-research AI. CoCounsel (Westlaw-backed), Lexis+ AI, Vincent (vLex-backed), Harvey, Spellbook’s research module. The competence question is whether the lawyer can assess hallucination risk on a query-by-query basis, knows the verification standard from Mata v. Avianca, and is using a tool with a reliable underlying corpus. Training requirement: hands-on use of the specific research tool, with a written protocol for cite-verification.
Category 2: Document-drafting AI. ChatGPT Enterprise, Claude for Work, Gemini for Workspace, Spellbook (drafting), and the drafting features inside CoCounsel and Lexis+ AI. The competence question is whether the lawyer can recognise where drafting AI is appropriate and where it is not. Drafting AI is appropriate for first drafts of documents the lawyer will edit before filing. It is not appropriate for documents whose voice or factual claims will be exposed to the court or to opposing counsel without further review. Training requirement: a written firm style-guide for AI use plus document-by-document examples.
Category 3: Document-review and analysis AI. Casepoint, Everlaw, Relativity AI, and the document-review modules of CoCounsel and Lexis+ AI. The competence question is whether the lawyer understands the underlying review methodology (predictive coding, technology-assisted review, generative summarisation) and can defend it against opposing-party challenges. Training requirement: hands-on use plus understanding of the validation protocols the firm uses to confirm review accuracy.
Category 4: Vendor confidentiality. Not a tool category but a competence category. The lawyer must understand what the vendor sees and retains; whether the data-protection addendum has been executed; and whether the consumer tier or the enterprise tier of the tool is in use. Training requirement: the firm’s vendor-diligence file (our framework), updated quarterly.
Category 5: Litigation discoverability. The privilege and work-product analysis from Heppner, Warner, Tremblay. The lawyer must understand which AI uses are protected by privilege, which by work-product, and which by neither. Training requirement: case briefings on the three 2026 cases and a firm protocol for when to use which tool.
The minimum-viable training programme #
For a firm building an AI competence programme from scratch, the minimum-viable training that satisfies Rule 1.1 has these components.
Component A — Tool-specific orientation (90-120 minutes per tool). Hands-on session with each AI tool the firm has approved. Walk through the tool’s interface, its intended use cases, its known failure modes, and its verification protocol. Documented attendance. Repeat when the tool is materially updated.
Component B — Doctrinal core (60-90 minutes). The six cases and opinions every AI-competent lawyer needs to know: Mata v. Avianca, Park v. Kim, ABA Op. 512, the lawyer’s state-bar opinion (NC FEO 2024-1, California Op. 2023-200, etc.), United States v. Heppner, and Warner v. Gilbarco. What each holds; what each does not reach. Documented in a one-page note.
Component C — Firm policy (45 minutes). The firm’s AI policy: which tools are approved, which are not, what the supervision framework is under Rule 5.3, what the disclosure protocol is under Rule 1.4, what the fee analysis is under Rule 1.5. Signed acknowledgement.
Component D — Quarterly update (30 minutes). Each quarter, a short update covering: what changed in the firm’s tools, what new caselaw or bar opinions have been issued, what new vendor terms have appeared. Documented attendance.
This is roughly 4-6 hours of training per attorney in the first year, 1-2 hours per quarter thereafter. Compare against the typical CLE-style “AI for lawyers” course, which is usually 2-3 hours of survey content with no hands-on work, no firm-policy component, and no quarterly cadence. The course may be useful as a supplement; it is not, by itself, training that satisfies the competence obligation.
The doctrinal core in detail #
The six items every AI-competent lawyer should know, to a level that supports a courtroom answer to “Counsel, do you know the holding of Mata v. Avianca?”
1. Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023). The first widely-publicised AI-fabricated-citations sanctions case. Holding: a lawyer who files a brief containing AI-fabricated case citations has violated Federal Rule of Civil Procedure 11. The verification duty applies regardless of the technology that produced the citation.
2. Park v. Kim, 91 F.4th 610 (2d Cir. 2024). Second Circuit affirmance of the principle Mata announced. Holding: an attorney’s Rule 11 duty includes verification of cited authority; filing a brief with fabricated citations is sanctionable conduct. Closes the doctrinal question for federal practice within the Circuit.
3. ABA Formal Opinion 512 (July 2024). The ABA’s Model Rules analysis of lawyer use of generative AI. Maps competence (1.1), confidentiality (1.6), supervision (5.3), communication (1.4), candor (3.3), and fees (1.5) to AI use. Read the implementation playbook.
4. State-bar opinion in your jurisdiction. California Op. 2023-200, Florida Op. 24-1, North Carolina FEO 2024-1, New York City Bar Formal Op. 2024-5, D.C. Ethics Op. 388, Mississippi Op. 261. Each tracks the ABA framework with state-specific variations. The comparative tracker covers the differences.
5. United States v. Heppner, No. 1:25-cr-00503 (S.D.N.Y. Feb. 17, 2026) (Rakoff, J.). AI-privilege case. Holding: a defendant’s exchanges with a consumer-tier AI tool are not protected by attorney-client privilege when the AI is not an attorney and the consumer-tier privacy policy permits disclosure to third parties. The work-product doctrine also fails when the AI use is not at counsel’s direction.
6. Warner v. Gilbarco, Inc., 2026 WL 373043, No. 2:24-cv-12333 (E.D. Mich. Feb. 10, 2026) (Patti, U.S.M.J.). Companion case to Heppner, decided the same week. Holding: a pro se civil litigant’s ChatGPT prompts and outputs are protected as work product because submitting prompts to ChatGPT is not disclosure to an adversary. The court treated AI as “tools, not persons”. The decision rests on Sixth Circuit work-product law and is in tension with Heppner.
The doctrinal pattern across the six: courts treat AI use as the lawyer’s own work for sanctions purposes (Mata, Park v. Kim) but as something less than the lawyer’s work for privilege purposes (Heppner; Warner pushes the other way). The ABA and state-bar opinions sit on top of this caselaw and operationalise it.
Firm-level governance: the Rule 5.3 dimension #
Individual lawyer competence is necessary but not sufficient. Rule 5.3 makes the supervising lawyer responsible for nonlawyer assistance, which ABA Op. 512 reads to include not only paralegals and secretaries who use AI but the AI tools themselves, treated as nonlawyer assistants.
The Rule 5.3 framework requires the firm to have:
- An approved-tools list. Which AI tools the firm permits, which it does not. This is the firm’s vendor-diligence output. Updated quarterly.
- A use protocol per tool. What kinds of work are permitted on each tool, what kinds are not, what verification is required. This is the firm’s style-guide.
- A training-attendance record. Each lawyer and each non-lawyer staff member has a documented training record per tool. Not a checkbox — an actual record of what was covered and when.
- A disclosure protocol. When AI use is material to the client’s representation, what disclosure is made, on what document, signed by whom.
- A fee-analysis protocol. How AI savings are reflected in client billing under Rule 1.5. The lawyer cannot bill time the lawyer did not work.
An audit-defensible firm-level AI governance file should be eight to fifteen pages: the policy, the approved-tools list, the per-tool use protocol, the training records, the disclosure protocol, the fee-analysis protocol. This is the document the firm produces to a state bar inquiry, to an opposing party in discovery, or to the firm’s malpractice carrier. Firms that do not have this file when asked are starting in a hole.
The training that does not work #
For completeness, the training formats that do not, by themselves, satisfy the Rule 1.1 competence obligation:
- The single-session AI overview CLE. Useful as introduction but does not produce tool-specific competence.
- Vendor demos. The vendor’s incentive is to show the tool at its best. A vendor demo is marketing, not training.
- Self-study with a textbook or course PDF. Useful as supplement; does not produce hands-on competence.
- Generic AI literacy training. Useful for understanding what AI is in the abstract; does not address the specific-tool competence obligation.
- Conferences and panels. Useful for situational awareness; not training.
The training that does work is some combination of: tool-specific hands-on use, written protocols, doctrinal review, and a quarterly update cadence. It is duller than the conference panel, and it produces more competent lawyers.
Why this matters more in 2026 than it did in 2024 #
The competence-obligation analysis is the same in 2026 as in 2024. What has changed is that the operational landscape has matured to the point where the analysis bites.
In 2024, “AI training for lawyers” was largely about persuading lawyers that AI was a thing they needed to think about. The tools were less mature, the caselaw was thin, and the bar opinions were just emerging. A general-overview CLE was a reasonable first response.
In 2026, the tools are mature enough that lawyers are using them in real cases, the caselaw on sanctions and privilege is thick enough to organise around, and the bar has issued opinions clear enough to write firm policy from. The general-overview CLE is no longer the right response; specific-tool training, doctrinal mastery, and firm-level governance are.
The pattern that produces sanctions in 2026 is: a lawyer trained at the general-overview level uses a specific tool whose failure modes the lawyer does not understand, files something containing AI-fabricated content, and is sanctioned. The training framework above is what avoids this pattern. The CLE-style course alone is what does not.
Frequently asked questions #
Does my state bar require AI training?
Most state bars in 2026 do not have an explicit AI-training CLE requirement. They do, however, require ongoing competence under Rule 1.1, and the competence analysis applies to AI tools the lawyer uses. The practical effect: if you use AI in your practice, you need training in the tools you use, whether or not the state bar mandates it.
How long should AI training be?
For a lawyer using one or two AI tools, the first-year training is roughly 4-6 hours: tool-specific orientation, doctrinal core, firm policy, plus quarterly updates of about 30 minutes each. For lawyers using more tools, training scales linearly — each additional tool adds 90-120 minutes of orientation. Static one-time training does not satisfy the competence obligation as tools change.
Does ABA Op. 512 require disclosure of AI use to clients?
Not as a per-se rule. Op. 512 contemplates disclosure under Rule 1.4 where AI use is material to the representation. The materiality threshold is a judgement call. A safe operational standard: disclose AI use where the client would reasonably want to know — substantive drafting, document review affecting strategy, AI-driven research informing case theory.
Are general-purpose AI tools (ChatGPT, Claude, Gemini) appropriate for legal work?
The enterprise tiers are; the consumer tiers are not. The distinction matters under both Rule 1.6 (vendor confidentiality) and Heppner (privilege protection). The enterprise tiers ship with data-protection terms that align with Rule 1.6 requirements; the consumer tiers do not. Verify the data-protection addendum has been executed before using any general-purpose AI for client work.
Should I take a paid AI-for-lawyers course?
If the course is hands-on with a specific tool the firm uses, yes. If it is a general-overview CLE, it is supplemental at best — useful for situational awareness but not satisfying the specific-tool competence obligation by itself. The most useful courses combine doctrinal review (the six cases above) with hands-on use of named tools.
Citations and further reading #
- ABA Model Rule 1.1 (competence) and Comment 8 (technology competence).
- ABA Model Rule 5.3 (supervision of nonlawyer assistance).
- ABA Formal Opinion 512 (July 2024).
- Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023).
- Park v. Kim, 91 F.4th 610 (2d Cir. 2024).
- United States v. Heppner, No. 1:25-cr-00503 (S.D.N.Y. Feb. 17, 2026).
- Warner v. Gilbarco, Inc., 2026 WL 373043, No. 2:24-cv-12333 (E.D. Mich. Feb. 10, 2026).
- IXSOR: State bar AI opinions, a comparative tracker.
- IXSOR: AI vendor diligence catalogue.
