
ABA Op. 512.

An operational reading of ABA Formal Opinion 512 (July 29, 2024), the American Bar Association Standing Committee on Ethics and Professional Responsibility's first formal opinion on lawyer use of generative artificial intelligence. Each Model Rule the opinion interprets, mapped to a concrete firm-level practice.

AUTHOR Dan Hughes
FILED May 2026
OPINION ABA 512
ISSUED Jul 29, 2024
JURIS. ABA Model Rules
READING ~16 minutes
· 01 ·

The opinion, in one paragraph.

ABA Formal Opinion 512 holds that the existing Model Rules of Professional Conduct govern lawyer use of generative AI without amendment, and identifies six duties an attorney must satisfy when using GAI tools: (1) competence regarding the tool's capabilities and limits, (2) confidentiality with respect to inputs and outputs, (3) communication with clients about material AI use, (4) candor toward tribunals and supervision of nonlawyer assistance, (5) reasonable fees that account for AI efficiencies, and (6) bias and unauthorized-practice considerations specific to particular tools and use cases. The opinion does not prescribe technical controls. It applies the Model Rules to a fact pattern that did not exist when most of those Rules were adopted.

What follows is a Rule-by-Rule reading, with the operational practice each Rule's application to GAI implies. The opinion supplies the duty; the firm chooses the implementation.

· 02 ·

Rule 1.1 (Competence): the tool matters.

Op. 512 holds that an attorney using a GAI tool has an affirmative duty to understand the tool's benefits and risks, including its limits and failure modes, sufficient to make competent decisions about when to rely on its output. The duty is grounded in Model Rule 1.1 and Comment 8, which has imposed a technology-competence obligation since 2012.

The opinion is explicit that the duty is tool-specific, not category-general. Knowing what GAI is in the abstract does not satisfy Rule 1.1 when an attorney is using a specific tool with specific training data, specific output behaviour, and specific failure modes. The competence inquiry is whether the attorney understands the tool she is actually using.

Operational practice. A firm should be able to demonstrate, on request, what training each attorney has received with respect to each AI tool the firm has approved for use. The training need not be elaborate. It must be specific to the tool, dated, and updated when the tool changes materially.
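
A minimal sketch of what "specific to the tool, dated, and updated when the tool changes materially" could look like as a record-keeping structure. All names and the version-equality rule are illustrative assumptions, not anything Op. 512 prescribes:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    attorney: str
    tool: str
    tool_version: str   # training is tool-specific, so record the version trained on
    completed: date     # training must be dated

def training_current(record: TrainingRecord, deployed_version: str) -> bool:
    """Training lapses when the tool changes materially (modelled
    here, crudely, as any version change)."""
    return record.tool_version == deployed_version

rec = TrainingRecord("A. Associate", "research-tool", "2.1", date(2026, 1, 15))
print(training_current(rec, "2.1"))  # True
print(training_current(rec, "3.0"))  # False: retraining required
```

A spreadsheet with the same four columns serves the same purpose; the point is that the record exists per attorney, per tool, and per version.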

The opinion also imports the supervision aspect of competence. An attorney who delegates AI-assisted work to a junior must verify the junior is similarly competent in the tool, or supervise sufficiently to catch the failures the junior's incompetence would produce.

· 03 ·

Rule 1.6 (Confidentiality): the inputs matter most.

Op. 512 treats inputs to a GAI tool as a confidentiality event under Model Rule 1.6. The Rule's reasonable-efforts standard, set out in MR 1.6(c), applies to AI exactly as it applies to any other technology that creates a disclosure risk.

The opinion identifies a sub-question that is distinctive to GAI: what happens to the inputs after submission. Some GAI vendors retain inputs to train models. Others do not. The contract terms control. The opinion implies, without naming vendors, that an attorney inputting privileged client material into a tool whose terms reserve training rights to the vendor will struggle to satisfy the reasonable-efforts standard absent informed client consent.

Op. 512 is also explicit that contract terms can differ across plans from the same vendor. The same model may reserve training rights under a consumer plan but not under an enterprise plan. The choice of plan is part of the Rule 1.6 analysis.

Operational practice. A firm should maintain a written data-classification policy that maps data classes (privileged, confidential, public) to approved tools and approved configurations. The policy should be auditable: a partner asked which tool may receive a deposition transcript should have a single-sentence answer.
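
One way to picture the "single-sentence answer" is a policy table mapping data classes to approved tools, with a lookup function any partner could consult. The tool names and class-to-tool assignments below are hypothetical placeholders, not recommendations:

```python
# Hypothetical data-classification policy: data classes mapped to the
# tools approved to receive them. Names are illustrative only.
POLICY = {
    "privileged":   {"enterprise-research-tool"},
    "confidential": {"enterprise-research-tool", "enterprise-drafting-tool"},
    "public":       {"enterprise-research-tool", "enterprise-drafting-tool",
                     "consumer-chat-tool"},
}

def may_receive(data_class: str, tool: str) -> bool:
    """The single-sentence answer: may this tool receive this data class?"""
    return tool in POLICY.get(data_class, set())

# A deposition transcript is privileged material.
print(may_receive("privileged", "consumer-chat-tool"))       # False
print(may_receive("privileged", "enterprise-research-tool")) # True
```

An unrecognized data class resolves to "no approved tool", which is the safe default for a policy of this kind.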

· 04 ·

Rule 1.4 (Communication): when disclosure is required.

Op. 512 treats AI use as triggering communication duties under Model Rule 1.4 in two patterns:

  • Where AI use is material to the representation, e.g., a tool produces a substantial portion of the client's deliverable, the client is entitled to be informed.
  • Where the cost of using AI tools is to be passed through to the client, the engagement letter or fee agreement should disclose the practice.

The opinion is silent on whether routine, non-material AI use requires disclosure. State opinions vary. The conservative reading: include a paragraph in the engagement letter that describes the firm's general approach to AI use, the safeguards applied, and the client's right to ask about specific tools used in the matter.

Operational practice. Update engagement-letter templates with two paragraphs: one describing the firm's AI use posture (vendor-vetted tools, data-classification policy, attorney supervision), one describing the client's right to know more. The fee section should describe AI cost recovery if the firm intends any.

· 05 ·

Rules 3.3 and 5.3 (Candor and Supervision): the briefer's problem.

The post-Mata case law, anchored by Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023), and a growing list of successors, has established a consistent fact pattern: an attorney files AI-generated work product without verifying the cited authority; the citations are fabricated; the attorney is sanctioned. Op. 512 addresses the underlying duties under Model Rule 3.3 (candor toward the tribunal) and Model Rule 5.3 (responsibilities regarding nonlawyer assistance).

The opinion's reading of MR 5.3 is that an AI tool is, for supervisory purposes, in the same category as a nonlawyer assistant. Output requires attorney review proportionate to the importance of the matter and the failure modes of the tool.

Op. 512 also affirms that fabricated citations are sanctionable conduct regardless of the source of the fabrication. A brief filed under an attorney's signature is the attorney's representation to the court. AI cannot launder the duty.

Operational practice. A firm should have a written supervision standard for AI-assisted work product. The standard should require attorney verification of every cited authority before filing. For high-volume practices, the standard should specify a workflow: who runs the citation check, what tool is used, what the documentation looks like.
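
A pre-filing gate of the kind the supervision standard describes can be sketched as a simple verification ledger: every cited authority carries the name of the attorney who checked it, and the brief does not go out while any entry is blank. The case names and structure are illustrative assumptions:

```python
# Hypothetical pre-filing citation gate: each cited authority maps to
# the attorney who verified it, or None if unverified.
def unverified_citations(citations: dict) -> list:
    """Return the citations still lacking attorney verification.
    An empty list means the citation check is complete."""
    return [cite for cite, verifier in citations.items() if verifier is None]

brief = {
    "Mata v. Avianca, Inc., 678 F. Supp. 3d 443": "D. Partner",
    "Example v. Authority (illustrative)": None,   # not yet verified
}
print(unverified_citations(brief))  # ['Example v. Authority (illustrative)']
```

The documentation the standard calls for falls out of the same record: who ran the check, and when, per citation.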

· 06 ·

Rule 1.5 (Fees): the efficiency question.

The least-settled aspect of Op. 512. Model Rule 1.5 requires that fees be reasonable. AI compresses the time required to perform many tasks. The opinion stops short of holding that efficiencies must be passed to the client; it holds that fees billed cannot be unreasonable in light of the work actually performed.

Three patterns emerge across Op. 512 and the state opinions that have followed it:

  • Hourly billing. An attorney may not bill for time not actually spent. If a task that took six hours last year takes one hour with AI, the bill reflects one hour.
  • Pass-through costs. AI subscription costs may be billed to clients only as expressly disclosed in the engagement and reasonable in amount; absent a provision to the contrary, they should be passed through at the firm's actual cost, not marked up as a flat-rate vendor charge.
  • Fixed-fee. The reasonable-fee analysis is largely orthogonal to AI; the value of the deliverable to the client is what is billed. AI affects the firm's cost structure, not the fee analysis.

The state opinions have begun to address the more granular questions Op. 512 leaves open: who keeps the value of efficiency gains; whether AI use changes the floor on a contingent-fee analysis; whether reduced billable hours in associate work product changes the supervisory-staffing analysis. Each is jurisdiction-specific.

· 07 ·

Rule 8.4 (Misconduct): bias and UPL.

Op. 512 addresses two MR 8.4 surfaces. First, bias in AI output: the opinion notes that GAI tools can produce outputs reflecting biases in training data, and that an attorney relying on biased output without counterweight may produce work product implicating the prohibition on conduct prejudicial to the administration of justice. Second, unauthorized practice of law: the opinion confirms that an attorney's use of AI does not, in itself, constitute UPL by the AI vendor, while warning that the line is real and worth knowing.

The bias surface deserves attention because it is the most subtle. AI in legal-research tools may surface or omit case law in ways correlated with training-data composition. Discovery tools may prioritise documents in patterns reflecting the training set. Drafting tools may produce arguments stronger for some claimants than others on grounds the attorney does not see. The duty under MR 1.1 (competence) and MR 8.4 is to be aware enough to compensate.

Operational practice. A firm should periodically audit AI-assisted work product across categories of matters likely to surface bias (criminal defence, civil rights, employment, immigration). The audit need not be a formal disparate-impact study; it must be enough to identify pattern drift and respond to it.

· 08 ·

What Op. 512 does not address.

Three significant carve-outs frame the opinion's reach.

Privilege analysis. Op. 512 limits itself to the Model Rules of Professional Conduct. Whether disclosure to a third-party AI vendor waives the attorney-client privilege or work-product immunity is a substantive evidentiary question the opinion does not resolve. The federal common-law approach to privilege waiver via third-party disclosure, exemplified by United States v. Ackert, 169 F.3d 136 (2d Cir. 1999), and the state-by-state variations in waiver doctrine, govern that analysis. State-bar guidance on this question is uneven and lags the technology.

Tool selection. The opinion expressly declines to opine on which AI tools are appropriate for which uses. The professional-judgment determination remains with the attorney. The opinion supplies the framework for evaluation; it does not perform the evaluation.

Court rules and disclosure obligations. A growing number of federal and state courts have promulgated standing orders or local rules requiring disclosure of AI use in filings. These vary widely. Op. 512 imports the duty of compliance via MR 3.4 and the candor obligation; the specific disclosure language is per-court, and the attorney must check the local rules of the forum.

· 09 ·

Sister-state opinions.

The state opinions interpreting their jurisdictions' equivalents of the Model Rules are functionally similar to Op. 512, with state-specific framing. A multi-jurisdictional practice should read each applicable opinion.

The ABA also maintains the Task Force on Law and Artificial Intelligence, which has issued reports beyond Op. 512 and is the standing reference for the federal-Model-Rules conversation.

· 10 ·

Implementation checklist.

The Op. 512 framework reduces to seven operational artefacts a firm should have available, on request, for a malpractice carrier, an opposing party in a sanctions inquiry, or a state bar disciplinary panel.

  • An AI-tools approval list. Which tools the firm has approved for which data classes, current as of the most recent vendor-diligence review.
  • A data-classification policy. Privileged / confidential / public, mapped to approved tools and configurations.
  • Vendor-diligence files. One per approved tool. Contract, security posture, training-data terms, breach-notification.
  • Training records. Per attorney, per tool, dated.
  • A supervision standard. Written. Specifies attorney review obligations on AI-assisted work product, with audit cadence.
  • Engagement-letter language. Addresses AI use posture, client right to inquire, fee approach.
  • A periodic audit log. Sixty-day initial, quarterly thereafter. Documents drift, updates, and corrective action.
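
The audit cadence in the last item reduces to a small scheduling rule. The sixty-day and quarterly intervals come from the checklist above; the ninety-one-day approximation of a quarter is an assumption for the sketch:

```python
from datetime import date, timedelta

def next_audit(last_audit: date, audits_completed: int) -> date:
    """Sixty-day initial review, quarterly (~91 days) thereafter."""
    interval = timedelta(days=60) if audits_completed == 0 else timedelta(days=91)
    return last_audit + interval

print(next_audit(date(2026, 1, 1), 0))  # 2026-03-02
```

Calendar-quarter boundaries would work equally well; what matters is that the cadence is written down and the due dates are computed, not remembered.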

None of these is technically demanding. Most firms with more than five attorneys already have analogues for non-AI technology. The work is in adapting the existing artefacts to the AI-specific facts.

· 11 ·

Citations and further reading.

Primary:

  • ABA Formal Opinion 512, Generative Artificial Intelligence Tools (July 29, 2024). The opinion under discussion.

Cases:

  • Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023). The early-cycle sanctions case for AI-fabricated authority.
  • United States v. Ackert, 169 F.3d 136 (2d Cir. 1999). Functional-equivalent doctrine in privilege analysis.


This article is general analysis of a published ethics opinion and surrounding authority. It is not legal advice. It does not establish an attorney-client relationship. Engage qualified counsel for advice on your firm's specific situation in your jurisdiction.

· AUTH ·

About the author.

Dan Hughes is the founder of IXSOR. Ex-BBC. Ex-Apple. Lifelong technologist. And most importantly: not an attorney. He writes about legal AI from the operational and infrastructure side, where the rules meet the machines. Reach: [email protected].