SERVES: Solo · Small · Mid-sized firms
FORMAT: Fixed-fee · 1-8 wks
JURIS: 50 states + DC
BOOKING: Through July 2026
STATUS: Accepting

AI in Arbitration: Practice Playbook.

Eleven concrete things every firm with an arbitration practice should have in place by Q3 2026. Not maximalist. The floor. Each item is operational within 1 to 2 weeks; the whole program within four working weeks at under $30,000 of attorney time. Companion to AI in US Arbitration; operational rather than survey.

PUBLISHED
UPDATED
FORMAT: Operational playbook
READING: ~17 minutes
FOR: Firm GCs · ADR practice leaders · arbitration practitioners
POSTURE: Not legal advice
· 00 ·

Why this is not optional.

Three structural facts make AI risk in arbitration higher than in court:

  1. The procedural-order infrastructure isn't built. AAA Rule 61 and JAMS Rule 29 give arbitrators sanction power, but neither rule names AI. The hallucinated-citation problem becomes sanctionable only if someone proposes (and the arbitrator adopts) a procedural order at the preliminary conference. If you don't propose it, it doesn't exist.
  2. The confidentiality stakes are higher. Arbitration and mediation submissions are confidential by default. That's a feature in the room and a bug if your AI vendor records the prompt. A privilege-waiver event is harder to detect, harder to remediate, and more likely to surface only in collateral litigation.
  3. The vacatur consequence is total. LaPaglia v. Valve established the §10(a)(4) "exceeded powers" theory as a live route to vacate an award allegedly produced with AI assistance. Even if it's hard to win, the asymmetric stakes mean your client's award is now exposed if opposing counsel can credibly allege your AI use.

Each of these compounds. A firm without the infrastructure below isn't accepting an AI risk that's slightly elevated above litigation. It's accepting a category of risk it doesn't have a way to detect.

· 01 ·

Written AI policy — litigation and ADR are not the same.

Most firms by mid-2025 had drafted some version of an AI use policy, almost always written for litigation. The litigation framing — Rule 11 verification, Mata-defense, court-disclosure orders — is necessary but not sufficient.

The policy should separately address:

  • Mediation submissions. Closed-loop AI only. No public-version tools (free ChatGPT, Claude.ai free, Gemini free) under any circumstances. Mediation-confidentiality privilege is broader than work-product or attorney-client privilege in most states, and the privacy policies of consumer-grade AI tools permit disclosures that waive it on their face.
  • Pre-arbitration drafting. Closed-loop AI is acceptable for drafting, summarization, organization. Human verification of every citation, statistic, and direct quote before any filing.
  • Award drafting (for arbitrator practitioners). Personal AI use by an arbitrator is different from counsel AI use. Per CCA guidance: drafting, summarization, organization — fine. Evidence evaluation, witness credibility, application of law, exercise of judgment — not delegable.
  • AI vendor diligence. What model is the tool calling? On what infrastructure? What's the data retention policy? Does the vendor train on customer prompts?

· 02 ·

Vetted AI tooling stack — closed-loop only.

Every firm should publish an internal approved-tool list with annotated privacy terms. Three categories:

  • Tier 1 — approved for confidential matter use. Tools whose terms (a) prohibit training on customer prompts, (b) provide enterprise-grade data retention controls (configurable retention down to zero), (c) commit to no third-party disclosure absent customer authorization or legal process, (d) offer a data-processing addendum. Examples: enterprise tiers of major foundation-model vendors with appropriate contracts. The list of qualifying tools changes; verify terms quarterly.
  • Tier 2 — approved for non-confidential matter use. Public AI tools with clear privacy policies, used for tasks that don't involve client data. Example: drafting a marketing email, summarizing a public reported decision.
  • Tier 3 — prohibited. Public-version consumer AI tools used with any client data, ever. Personal AI accounts used for any work-related task, ever. Tools without a published privacy policy. Tools with privacy policies that reserve rights to use prompts for training without customer consent.

The list must be enforceable. That means: IT-level controls preventing access to Tier 3 tools from firm devices; mandatory training (~30 minutes) on the why, not just the what; a reporting channel for accidental Tier 3 use that does not punish the reporter; quarterly audit of vendor terms against the firm's own requirements.
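
Treating the three tiers as explicit criteria makes the quarterly audit mechanical rather than judgmental. A minimal sketch of that idea; the class, field, and tool names here are hypothetical labels invented for illustration and do not map to any specific vendor's contract:

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    CONFIDENTIAL_OK = 1    # Tier 1: approved for confidential matter use
    PUBLIC_TASKS_ONLY = 2  # Tier 2: non-confidential tasks only
    PROHIBITED = 3         # Tier 3: never touches client data

@dataclass
class VendorTerms:
    """Answers pulled from the vendor's current terms; re-verify quarterly."""
    name: str
    no_training_on_prompts: bool    # (a) prohibits training on customer prompts
    zero_retention_available: bool  # (b) retention configurable down to zero
    no_third_party_disclosure: bool # (c) no disclosure absent authorization or legal process
    dpa_offered: bool               # (d) data-processing addendum available
    published_privacy_policy: bool

def classify(t: VendorTerms) -> Tier:
    # Tier 1 requires all four contractual commitments (a)-(d).
    if (t.no_training_on_prompts and t.zero_retention_available
            and t.no_third_party_disclosure and t.dpa_offered):
        return Tier.CONFIDENTIAL_OK
    # No published policy, or reserved training rights: prohibited for client data.
    if not t.published_privacy_policy or not t.no_training_on_prompts:
        return Tier.PROHIBITED
    return Tier.PUBLIC_TASKS_ONLY
```

The quarterly audit then reduces to re-running the classification over the current terms and diffing against last quarter's tiers; any tool that moves tiers triggers a policy update.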

· 03 ·

The AI procedural order — the single highest-leverage move.

At every preliminary conference, propose an AI procedural order. Five clauses, paraphrased here for orientation; adapt for each matter:

  1. Disclosure. Each party shall disclose, at the time of any submission to the arbitrator, whether generative AI was used in the preparation of the submission. Disclosure shall include identification of the AI tool used and the nature of the use (drafting, summarization, citation research).
  2. Verification. Counsel certifies that all factual assertions, citations to authority, and direct quotations in any submission to the arbitrator have been independently verified by an attorney reading the original source. AI-generated citations have been confirmed to exist and to support the proposition for which they are cited.
  3. Prohibited use. No party shall submit any confidential material — including pleadings, memoranda, exhibits, witness statements, settlement positions, or arbitrator-only submissions — to a generative AI system that does not contractually prohibit the AI provider from training on, retaining, or disclosing the submission to third parties.
  4. Arbitrator AI use. The arbitrator may use generative AI for organization, summarization, scheduling, and similar administrative purposes. The arbitrator shall not delegate to AI the evaluation of evidence, the assessment of witness credibility, the application of law to facts, or the exercise of adjudicative judgment. The arbitrator shall disclose the nature and extent of AI use prior to the closing of the record.
  5. Sanctions. Violation of any provision of this order is grounds for sanctions under the applicable rule (AAA Rule 61 / JAMS Rule 29), up to and including assessment of fees, exclusion of evidence, drawing adverse inferences, and dismissal.

The order is enforceable because it's an order. Without it, the arbitrator has weaker tools than a federal judge for managing AI misuse. With it, AAA Rule 61 / JAMS Rule 29 sanctions become real and immediate.

Adoption is not automatic. The arbitrator may modify, narrow, or reject the proposal. But the proposal itself shifts the conversation onto your terrain, signals seriousness, and creates a record of what each party agreed to (or refused to agree to). If opposing counsel rejects the verification provision in paragraph 2, you have made a record of that refusal — useful later.

· 04 ·

Verification-of-cited-authority workflow.

For every submission — every brief, every memorandum, every set of exhibits, every reply — a documented workflow that runs before the submission goes out:

  1. Generate a citation list automatically from the document.
  2. Pull the original source for each citation (Westlaw, Lexis, court docket, public source).
  3. An attorney reads the original source and confirms (a) the case exists, (b) the case stands for the proposition cited, (c) the quote (if any) is verbatim, (d) the case has not been overruled or distinguished in a material way.
  4. The attorney signs a verification log — a one-line entry per citation.
  5. The verification log is retained with the matter file.
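
The five steps above reduce to an append-only log. A minimal sketch of step 4's one-line-per-citation entry as a dated CSV file; the function and field names are illustrative assumptions, not a prescribed format:

```python
import csv
import datetime
import pathlib

# One row per citation (step 4); the file is retained with the matter (step 5).
LOG_FIELDS = ["citation", "source_pulled", "case_exists",
              "supports_proposition", "quote_verbatim",
              "still_good_law", "verifying_attorney", "date"]

def log_verification(log_path, citation, attorney, *, source_pulled,
                     case_exists, supports_proposition,
                     quote_verbatim, still_good_law):
    """Append one verification entry, writing a header row on first use."""
    path = pathlib.Path(log_path)
    write_header = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "citation": citation,
            "source_pulled": source_pulled,
            "case_exists": case_exists,
            "supports_proposition": supports_proposition,
            "quote_verbatim": quote_verbatim,
            "still_good_law": still_good_law,
            "verifying_attorney": attorney,
            "date": datetime.date.today().isoformat(),
        })
```

The point is contemporaneity: each row is dated at signing, which is what later makes the log usable as evidence rather than reconstruction.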

This is the post-Mata baseline. The cost is real (probably 30 minutes of associate time per brief). The cost of not doing it is in the public record across multiple federal districts, in five-figure sanctions, in mandatory CLE on the dangers of AI, in bar referrals.

For arbitration specifically: the verification log lives with the matter. If a §10(a)(4) challenge to the award later argues that opposing counsel submitted hallucinated authority on which the arbitrator relied, your contemporaneous verification log is the evidentiary anchor for the bad-faith showing.

· 05 ·

Privilege and confidentiality — mediation has its own protocol.

Treated separately from arbitration because the privilege analysis is materially different.

  • Mediation submissions have stronger confidentiality protection than arbitration submissions in most US states (California Evidence Code §§1115 to 1129; Florida §44.102; Texas CPRC §§154.052 to .073; New York CPLR §§4547 / 7508).
  • That stronger protection is fragile in the AI era because public-version AI tools accept prompts under privacy policies that allow further disclosure.
  • A party uploading a mediation brief to a public AI tool likely waives mediation privilege as to the subject matter of the upload.

The protocol:

  • Hard prohibition on public-AI tools touching mediation material at any stage.
  • Closed-loop AI approved for mediation drafting only with documented enterprise terms.
  • Client-side education: explain to the client, in writing, that they must not upload settlement-discussion material into ChatGPT either. Most clients do not know.
  • A mediation-confidentiality stipulation drafted with explicit reference to AI: "neither party shall submit any mediation communication, brief, position statement, or related material to any artificial intelligence system that does not contractually prohibit the system's provider from retaining, training on, or disclosing the material to third parties."

Get the stipulation signed before the mediation starts.

· 06 ·

Discovery template for AI-related disputes.

For AI-substantive disputes — the things JAMS AI Rules were written for — a discovery practice template covering the categories specific to AI:

  • Training data manifests and provenance documentation
  • Training-data licensing chain
  • Evaluation set composition and results
  • Model weights and architecture (where relevant)
  • System prompts and tool-use scaffolding
  • Customer feedback / RLHF logs
  • Incident logs (model malfunctions, hallucinations, misuse)
  • Red-team and pre-deployment testing results
  • Internal communications regarding known model limitations

The JAMS AI Rules let parties bring in a "technically savvy" arbitrator or discovery referee. AAA has no analogous express provision, but the same outcome is achievable through party agreement and arbitrator discretion. Budget for the technically savvy referee from the outset.

· 07 ·

Vacatur readiness — FAA §10(a)(4).

Two-sided protocol — both for resisting an adversary's vacatur attempt and for evaluating whether to file your own.

When you receive an unfavorable arbitration award, the §10(a)(4) "exceeded powers" theory now includes the LaPaglia possibility: the arbitrator outsourced adjudicative judgment to AI. To evaluate whether the theory has factual purchase:

  • Was the turnaround from final briefing to award unusually fast? (15 days for a 29-page award, LaPaglia.)
  • Are there factual errors in the award of the kind characteristic of AI hallucination — invented citations, fabricated quotes, factual recitations that don't match the record?
  • Has the arbitrator publicly discussed AI use in adjudication — in articles, on panels, on social media?
  • Is there any direct evidence of AI use — admissions, metadata in the award document, reuse of phrases consistent with model output?

If three or more of these signals are present, the theory warrants investigation. Investigation steps include: subpoena practice (carefully — arbitrator privilege is real but narrow), review of the arbitrator's public statements, and comparison of the award text against the record submissions for indicators of synthesis rather than lookup.
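
The three-of-four threshold is simple enough to state as a checklist tally. A minimal sketch; the shorthand signal names are labels invented here for the four bullets above:

```python
# The four vacatur-readiness signals from §07, keyed by shorthand labels.
SIGNALS = {
    "fast_turnaround": "unusually fast final-briefing-to-award turnaround",
    "hallucination_pattern": "invented citations, fabricated quotes, off-record facts",
    "public_ai_statements": "arbitrator has publicly discussed AI in adjudication",
    "direct_evidence": "admissions, metadata, or model-characteristic phrasing",
}

def worth_investigating(present):
    """True when three or more of the four signals are present."""
    found = set(present)
    unknown = found - SIGNALS.keys()
    if unknown:
        raise ValueError(f"unknown signals: {sorted(unknown)}")
    return len(found) >= 3
```

A tally like this is a triage gate, not an answer: it decides whether the investigation spend is justified, not whether the §10(a)(4) motion should be filed.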

When opposing counsel alleges AI use by your arbitrator: the defense is the procedural order in §03. If the order was in place and the arbitrator complied, the §10(a)(4) ground is much harder to sustain. If you didn't propose the order, its absence becomes the foundation of the opposing argument.

· 08 ·

Attorney-supervision protocol — staff and contractor AI use.

Staff (paralegals, secretaries, interns) and contractors (e-discovery vendors, transcription services, court-reporting firms) all use AI. The firm's policy must reach them.

  • Contracts with vendors must explicitly prohibit Tier 3 AI tool use with firm matter material.
  • Training requirement extends to staff and is logged.
  • The reporting channel for accidental Tier 3 use is open to staff without retaliation.
  • Vendor selection criteria include the vendor's own AI policy (downstream alignment).

The governing supervision authority is now ABA Formal Opinion 512 (July 2024), and it has bite. Failure to supervise an AI-using paralegal who slips a hallucinated citation into a draft is the partner's problem, not the paralegal's.

· 09 ·

Conflict screening that includes AI vendor relationships.

When the firm's AI vendor is a party (or the corporate parent of a party, or a major investor in the party) to the arbitration, that's a conflict that did not exist five years ago and is now structurally common. OpenAI, Anthropic, Google, Microsoft, Meta — all are commonly counterparties or upstream supply-chain participants in matters across IP, employment, antitrust, consumer, securities, and product-liability practices.

Conflict-screening checklist additions:

  • Is the firm's primary AI vendor a party (or affiliate) to the matter?
  • Has any material work product on the matter been produced using a tool from a vendor that is a party (or affiliate)?
  • Could the matter create a "we sued our own AI vendor" conflict?
  • Could discovery in the matter reach the firm's own AI use logs, creating a privilege risk?

A documented screen at intake is the answer.

· 10 ·

Arbitrator vetting — what to ask.

When selecting (or striking) arbitrators on a slate, the existing diligence — bar history, disciplinary record, prior decisions, conflicts — should now include AI-use due diligence:

  • Has the arbitrator publicly discussed using AI to draft awards, opinions, articles?
  • Has the arbitrator served on panels or written articles about AI in adjudication? (Useful, not disqualifying, but worth knowing the position.)
  • Has the arbitrator issued any procedural orders in past matters addressing AI use?
  • Is there any public record of an arbitrator's award being challenged on §10(a)(4) AI grounds?

Strike or accept based on the matter. Some matters benefit from an arbitrator who is openly comfortable with AI tooling (efficiency wins). Others require an arbitrator who will draft personally and who has no AI-related public footprint.

· 11 ·

Client-disclosure template — tell them once, properly.

Increasingly, sophisticated clients are asking. Less sophisticated clients should be told without being asked.

A short (one-page) client-facing memo at engagement covering:

  • The firm uses AI tools in matter work, primarily for [drafting / summarization / research / document review].
  • The firm uses only closed-loop tools where vendor terms prohibit training on or disclosing client material.
  • All AI-assisted work is reviewed and verified by an attorney before submission to any tribunal or counterparty.
  • The client retains the right to opt out of AI assistance on the matter (some clients will, especially for high-sensitivity or highly regulated matters).
  • The firm does not use public-version AI tools on client material.

This document protects the firm in two ways: (a) creates a record of disclosure consistent with state-bar guidance trending toward affirmative disclosure of "material" AI use; (b) creates a record of client consent to a defined scope of AI use, making post-hoc client objections weaker.

· PLAN ·

Implementation — four working weeks.

A firm can move from zero to compliance in four working weeks:

  • Week 1. Vendor terms audit (§02). Approved-tool list published. Tier 3 controls deployed in IT.
  • Week 2. Written AI policy (§01) drafted, reviewed, ratified by managing partner. Client-disclosure template (§11) finalized.
  • Week 3. Procedural-order template (§03) finalized. Verification workflow (§04) operational. Mediation protocol (§05) operational.
  • Week 4. Discovery template (§06) for AI matters. §10(a)(4) checklist (§07). Arbitrator-vetting addendum (§10). Conflict-screening update (§09). Staff training delivered (§08).

Total firm-level cost: under $30,000 in attorney time, plus IT enforcement costs that depend on existing infrastructure. The compounding cost of not doing this is, at the current rate of incidents, at least one Mata-style sanction per practice area per year.

· NOTE ·

What is not on this list.

Two items intentionally absent:

  • A written ban on attorney AI use. Bans don't work, are unenforceable, and create incentive to hide rather than disclose. The right architecture is a vetted-tools approach plus mandatory verification, not prohibition.
  • A public marketing-facing AI policy. Useful for sales but not for compliance. The internal policy and the client-disclosure template do the actual work.

· CITE ·

Sources and further reading.


This article is an operational playbook. It is not legal advice, does not establish an attorney-client relationship, and does not predict how any specific court or arbitrator will rule on facts not before it.

Frequently asked questions.

What is the single highest-leverage thing a firm can do for AI risk in arbitration?

Ask for an AI-specific procedural order at the preliminary conference. Without it, an arbitrator's tools for managing AI misuse are materially weaker than a federal judge's. With it, AAA Rule 61 and JAMS Rule 29 sanctions become real and immediate.

How long does it take to implement these policies?

A firm can move from zero to compliance in four working weeks. Week 1: vendor terms audit, approved-tool list, IT controls. Week 2: written AI policy, client-disclosure template. Week 3: procedural-order template, verification workflow, mediation protocol. Week 4: discovery template, vacatur checklist, arbitrator vetting, conflict screen, staff training.

Should the firm ban attorney AI use?

No. Bans don't work, are unenforceable, and create incentive to hide rather than disclose. The right architecture is a vetted-tools approach plus mandatory verification, not prohibition. Tier 1 closed-loop tools approved for confidential matter use; Tier 2 public AI for non-confidential tasks; Tier 3 (consumer-grade public AI used with client data) prohibited.

Why is mediation treated differently from arbitration?

Mediation has stronger confidentiality protections than arbitration in most states (California Evidence Code §§1115 to 1129; Florida §44.102; Texas CPRC §§154.052 to .073; New York CPLR §§4547 / 7508). That stronger protection is fragile in the AI era because public AI tool privacy policies allow further disclosure. A party uploading mediation material to ChatGPT likely waives privilege on the same logic as Heppner. Mediation requires its own protocol distinct from arbitration.

Does the supervision rule apply to staff and vendor AI use?

Yes. ABA Formal Opinion 512 (July 2024) extends the supervision duty to AI tool use by paralegals, secretaries, interns, and outside vendors (e-discovery, transcription, court-reporting). Failure to supervise an AI-using paralegal who slips a hallucinated citation into a draft is the partner's problem, not the paralegal's.

· AUTH ·

About the author.