Mata, three years on.
A survey of Mata v. Avianca, Inc. and the sanctions caselaw that has followed it. What three years of post-Mata enforcement tells practising attorneys about AI-assisted brief writing, the verification duty, and the operational standard that prevents the sanction.
The case, in one paragraph.
In Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023), counsel for the plaintiff filed an opposition brief that cited and quoted from federal cases that did not exist. The fabricated authorities had been generated by ChatGPT, which counsel had used to research the brief and had not verified. After opposing counsel and the court could not locate the cited cases, Judge P. Kevin Castel sanctioned the attorneys and the firm under Federal Rule of Civil Procedure 11 and the court's inherent power. The sanction order, dated June 22, 2023, became the canonical text on AI-fabricated authority and lawyer responsibility for it.
Three years later, the case is still cited weekly in sanctions orders across federal and state courts. The pattern it identified, an attorney trusting AI output without verifying it, has not stopped recurring. The legal duty it articulated has hardened into common knowledge across the bar, but the operational practice required to satisfy that duty remains uneven.
The discipline imposed.
Judge Castel's sanctions order imposed a $5,000 monetary sanction on the two attorneys and their firm, jointly and severally; required notification of the parties whose names had been used in fabricated citations; and ordered the attorneys to send copies of the order to a list of judges referenced in the fabricated opinions. The sanction was modest in dollar terms; the reputational consequences ran much further. The story was reported in The New York Times, the legal trade press, and law school casebooks within weeks. The attorneys' names became, and remain, search-engine shorthand for the failure mode.
The court's rationale was not that the attorneys had used AI. The court explicitly stated there was nothing inherently improper about using AI-assisted research. The rationale was that the attorneys had filed a brief representing as authority cases they had not read and that did not exist. The use of AI did not cause the sanction. The failure to verify did.
Rule 11 and the duty before AI.
The substantive obligation is older than the technology. FRCP 11(b) requires that, by signing a paper presented to the court, an attorney certifies "that to the best of the person's knowledge, information, and belief, formed after an inquiry reasonable under the circumstances," the legal contentions are warranted by existing law. The "reasonable inquiry" element is the operative phrase. It has always required reading the cases one cites.
State analogues track the federal Rule. Most state rules of civil procedure, professional conduct rules (variously Rule 3.1, Rule 3.3, Rule 11), and inherent court powers reach the same outcome through different doctrinal paths. The North Carolina parallel is N.C.G.S. § 1A-1, Rule 11, which is functionally identical for these purposes.
What Mata made operationally explicit was that the introduction of an AI tool between the attorney and the cited authority does not transfer the duty. The attorney still signs the brief. The signature still carries the certification.
The Second Circuit, affirming.
The principle Mata announced was tested and affirmed at the appellate level in Park v. Kim, 91 F.4th 610 (2d Cir. 2024). The Second Circuit confronted a brief that contained nonexistent case citations generated by ChatGPT. The court held that an attorney's duty under Rule 11 includes verification of cited authority and that filing a brief with fabricated citations is sanctionable conduct, regardless of the technology that produced the fabrication.
The opinion is short and direct. It treats Mata as having stated the obvious rather than as having broken new ground. That framing matters: by 2024, the verification duty in the AI context was not a doctrinal innovation but a settled application of an existing Rule. The Second Circuit's rejection of any argument to the contrary closed the doctrinal question for federal practice within the Circuit and offered persuasive authority elsewhere.
The pattern that followed.
Between mid-2023 and mid-2026, federal and state trial courts have issued a growing number of sanctions orders against attorneys who filed AI-generated work product containing fabricated authority. The reporting databases, including CourtListener, the PACER docket system, and trade-press tracking projects, document dozens of such orders.
The pattern across these orders is consistent:
- An attorney uses a generative AI tool, often a consumer-grade one, to research or draft legal authority.
- The tool produces citations that look correct: case names, reporters, page numbers, even quoted language.
- The attorney files the brief without verifying the citations against a primary database (Westlaw, Lexis, the court's own docket, the Federal Reporter).
- The opposing party or the court attempts to locate the cited authority and cannot.
- An order to show cause issues; the attorney admits the use of AI and the failure to verify; the court imposes sanctions.
The sanctions vary. Monetary penalties range from $1,000 to $10,000 in the typical case, with outliers in either direction. Some orders include disclosure to the state bar; some include public publication of the order; some include disqualification from the matter. The reputational sanction, in every case, is substantial.
The standing orders that emerged.
Within months of Mata, individual federal judges began issuing standing orders requiring disclosure of AI use in filings before their courts. The earliest examples included orders from judges in the Northern District of Texas and the U.S. Court of International Trade. By 2025, similar orders had appeared in the Eastern District of Pennsylvania, the District of Massachusetts, the District of Colorado, and a number of state trial courts.
The orders vary in scope. Some require an attorney to certify that no AI was used in preparing a filing. Others require disclosure of AI use and certification that the attorney has independently verified all citations. The strictest require attaching the AI prompt and output to the filing. The most lenient simply restate Rule 11.
An attorney practising in any federal court in 2026 is responsible for checking the standing orders of the assigned judge and the local rules of the district. The compliance burden is real; the burden is also bounded. Most orders require nothing beyond what Rule 11 already required, restated as an explicit certification.
State enforcement and the parallel duty.
State courts have been less central in the post-Mata caselaw, but the duty exists in state practice through three independent doctrinal paths.
State rules of civil procedure. Most states have adopted a Rule 11 analogue. North Carolina's is at N.C.G.S. § 1A-1, Rule 11. The duty of reasonable inquiry is identical.
Rules of Professional Conduct. Rule 3.1 (meritorious claims) and Rule 3.3 (candor toward the tribunal) reach the same conduct from a disciplinary, rather than procedural, angle. The bar opinions interpreting these rules in the AI context (see ABA Op. 512 and NC FEO 2024-1) treat fabricated-citation conduct as sanctionable regardless of the AI involvement.
Inherent court power. Trial courts possess the inherent power to sanction abusive conduct in litigation; this power has been invoked in a number of post-Mata orders alongside or in lieu of Rule 11.
The doctrinal multiplicity matters operationally because an attorney facing potential sanctions can be liable under any of the three paths, with sanctions running in parallel.
The verification standard.
What the post-Mata caselaw has established, by accumulation rather than by any single opinion, is an operational verification standard. The standard reduces to four points.
One. Verify against a primary database. Westlaw, Lexis, Bloomberg Law, the Federal Reporter, the court's own docket. Not a second AI tool. The verification source must be one whose authority is itself trusted; one whose function is, ultimately, to retrieve cases that actually exist rather than to generate text.
Two. Read the cited authority. Pulling up the case header is necessary but not sufficient. The duty under Rule 11(b) is a duty as to the legal contention the citation supports. An attorney reading only the syllabus has not satisfied the inquiry.
Three. Verify quotations. AI tools fabricate quoted language with high confidence. The cited authority may exist but say something different from what the AI represented. Read the page; check the language.
Four. Document the verification. A simple practice: a partner-signed memorandum to the file confirming the citations have been independently verified, listing the verification source for each. The memorandum is not strictly required by any rule, but it is strong evidence of diligence if a sanctions issue arises.
Operational practice for small firms.
The post-Mata verification standard is not technically demanding. It is procedurally demanding, in the sense that it requires a practice that does not exist by default at most small firms. Three operational questions every firm should be able to answer:
- Who verifies? The signing attorney is responsible. Whether verification is delegated to an associate, a paralegal, or a contract attorney is a staffing decision; the responsibility is not delegable.
- What does verification look like at this firm? Defined in writing: the database used, the read-and-confirm step, the documentation step.
- Where is the audit trail? A verification memorandum, a checklist, or a system note in the practice-management platform. Something that would survive a sanctions inquiry.
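The audit-trail question lends itself to a concrete sketch. The following is a hypothetical illustration only, not any court's or bar's requirement: the CitationCheck record and verification_log helper are invented names, and a real firm would adapt the fields to its own practice-management platform. The point is that each citation carries its verification source, its reader, and a date.

```python
from dataclasses import dataclass

@dataclass
class CitationCheck:
    citation: str          # e.g. "Park v. Kim, 91 F.4th 610 (2d Cir. 2024)"
    source: str            # primary database used: Westlaw, Lexis, court docket
    read_in_full: bool     # the opinion was read, not just the case header
    quotes_verified: bool  # quoted language confirmed against the reporter page
    checked_by: str        # the signing attorney remains responsible
    checked_on: str        # date of verification

def verification_log(matter: str, checks: list[CitationCheck]) -> str:
    """Render a plain-text audit trail suitable for the matter file."""
    lines = [f"Citation verification log, matter: {matter}"]
    for c in checks:
        status = "OK" if (c.read_in_full and c.quotes_verified) else "INCOMPLETE"
        lines.append(f"[{status}] {c.citation} | {c.source} | {c.checked_by} {c.checked_on}")
    return "\n".join(lines)
```

An "INCOMPLETE" entry is the useful part: it surfaces, before filing, any citation whose opinion was never read in full or whose quotations were never checked against the page.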
For fixed-fee practices, the verification work is built into the matter's scope. For hourly practices, the time spent is properly billable to the client as part of the brief preparation. In neither case is verification optional.
What Mata teaches about competence.
The deeper lesson of Mata and its successors is about the duty of competence under Model Rule 1.1 and Comment 8. The competence inquiry around an AI tool is not whether the attorney can operate the user interface. It is whether the attorney understands the tool's failure modes well enough to know what to verify.
Generative AI tools fabricate confidently. The output reads like a competent legal memorandum even when the legal memorandum is fiction. An attorney who does not understand this confident-fabrication failure mode will trust outputs she should distrust. Comment 8's direction to keep abreast of changes in the law and its practice, "including the benefits and risks associated with relevant technology," is, in the AI context, a direction to learn what the tool gets wrong.
The reasonable inquiry under Rule 11 is shaped by what the attorney knew or should have known about the tool's reliability. After three years of Mata-pattern cases, the "should have known" standard does not leave much room for surprise.
Citations and further reading.
Primary cases:
- Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023). Sanctions opinion of Judge P. Kevin Castel; the canonical AI-fabricated-citation opinion.
- Park v. Kim, 91 F.4th 610 (2d Cir. 2024). Second Circuit affirmance; closes the doctrinal question for federal practice within the Circuit.
Procedural rules:
- Federal Rule of Civil Procedure 11 via Cornell Legal Information Institute.
- N.C.G.S. § 1A-1, Rule 11 (North Carolina parallel).
Rules of Professional Conduct (selected):
- Model Rule 3.1 (meritorious claims), Model Rule 3.3 (candor toward the tribunal), and Model Rule 1.1 Comment 8 (competence), discussed above.
Bar opinions on AI verification:
- ABA Formal Opinion 512: An Implementation Playbook (IXSOR reading).
- NC State Bar 2024 FEO 1: What It Actually Requires (IXSOR reading).
Tracking resources:
- CourtListener (Free Law Project), for retrieval of post-Mata sanctions orders.
- PACER, federal docket system.
This article is general analysis of published case law. It is not legal advice. It does not establish an attorney-client relationship. Engage qualified counsel for advice on your firm's specific situation in your jurisdiction.
About the author.
Dan Hughes is the founder of IXSOR. Ex-BBC. Ex-Apple. Lifelong technologist. And most importantly: not an attorney. He writes about legal AI from the operational and infrastructure side, where the rules meet the machines. Reach: [email protected].