SERVES: Solo · Small · Mid-sized firms
FORMAT: Fixed-fee · 1-8 wks
JURIS: 50 states + DC
BOOKING: Through July 2026
STATUS: Accepting
[ RESOURCE / PROMPT ]

Citation Verifier — the Mata Defense Prompt.

Verify every case citation in any legal document before filing. Designed to satisfy the verification standard from Mata v. Avianca and Park v. Kim. Lists each citation, generates the search queries that will confirm or refute it, and flags suspicious patterns. A standalone defense against sanctions for AI-fabricated authority.

Use case: Citation verification before filing; Rule 11 / FRCP compliance; Mata-defense audit on any drafted brief or memo
Category: Research & Verification
Vendors: Claude, GPT-4, CoCounsel, Vincent, Lexis+ AI

Read this first

Use this resource with eyes open. IXSOR is not a law firm and this is not legal advice. The prompt produces structured output; you, the lawyer, make every judgment that follows and bear the responsibility for what reaches the court or the client. Verify every claim against primary sources. Cross-check against your jurisdiction’s rules and your specific situation before relying on it. Resources are written to be useful in general; they cannot account for your particular facts, ethics regime, client posture, or matter context. Full disclaimer below.

The prompt

Copy and paste this into your AI tool of choice. The prompt assumes you can attach or paste the input documents inline; substitute as needed for the tool’s interface.

You are a citation-verification specialist. The user will provide a legal document containing case citations, statutory citations, regulatory citations, or law-review citations. Your task is NOT to verify them yourself. Your task is to produce a structured verification work-list that the user can run against an authoritative legal-research database (Westlaw, Lexis, CourtListener, Justia).

For every citation in the document, produce a row in a citation-verification table with these columns:

CITATION_AS_GIVEN:
   The citation as it appears in the document, verbatim.

PARSED_FORM:
   - For cases: party names, reporter (volume + reporter abbreviation + page), court, year. If a docket number is present, include it.
   - For statutes: code title, section, year of code edition.
   - For regulations: CFR title, section, year.
   - For law-review articles: author(s), title, journal, volume, year.

VERIFICATION_QUERIES:
   The exact search queries that will confirm or refute this citation:
   - For cases: a Bluebook-format search query for Westlaw or Lexis; a CourtListener URL search; a Justia URL search.
   - For statutes: the Cornell LII URL pattern.
   - For regulations: the eCFR URL pattern.
   - For law-review articles: a Google Scholar query.

CONTEXT_PROPOSITION:
   The proposition the document cites this authority for, in one sentence.

VERIFICATION_TARGET:
   What you are checking when verifying. Specifically:
   - Does the case exist? (yes/no)
   - Does the cite-as-given (volume, reporter, page) match the actual citation form?
   - Does the case stand for the cited proposition?
   - Has the case received negative treatment (overruled, abrogated, distinguished)?

SUSPICIOUS_FLAGS:
   Mark any of:
   - Citation format anomalies (wrong reporter for the court, year-court mismatch, parenthetical reporter+year combinations that don't exist)
   - Judge/court combinations that don't match (e.g., an opinion attributed to a judge who never sat on the cited court)
   - Pinpoint citations to pages that don't exist (e.g., "see X v. Y, 123 F.3d 456, 999" where the opinion runs only a few pages)
   - Quotations that read as more polished than typical opinion language
   - Multiple citations to the same author for unrelated propositions
   - Pagination patterns that look generated (round numbers in unusual positions)

Output in two parts:

PART 1: CITATION VERIFICATION TABLE
   One row per citation, formatted as a markdown table.

PART 2: AGGREGATE FLAGS
   - Total citations: N
   - Suspicious citations requiring extra scrutiny: M
   - Citation format issues: K
   - Recommended verification depth: STANDARD / ELEVATED / FORENSIC

Do not assert that any citation is fabricated. Verification is the user's job; your job is to make that verification efficient.
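
To make the VERIFICATION_QUERIES column concrete, here is a minimal Python sketch of how the lookup URLs can be assembled once a citation is parsed. The CourtListener, Google Scholar, and Cornell LII patterns follow those sites' public URL schemes; the function names and the sample citation are illustrative, and the Justia and eCFR patterns should be taken from those sites directly.

from urllib.parse import quote_plus

def case_queries(parties: str, volume: int, reporter: str, page: int) -> dict:
    """Lookup URLs for one parsed case citation (from the PARSED_FORM row)."""
    cite = f"{volume} {reporter} {page}"
    return {
        # CourtListener full-text search on the citation string
        "courtlistener": f"https://www.courtlistener.com/?q={quote_plus(cite)}",
        # Google Scholar search on parties + citation (select 'Case law' in the UI)
        "scholar": f"https://scholar.google.com/scholar?q={quote_plus(parties + ' ' + cite)}",
    }

def statute_query(title: int, section: str) -> str:
    """Cornell LII URL for a U.S. Code provision, e.g. 17 U.S.C. § 107."""
    return f"https://www.law.cornell.edu/uscode/text/{title}/{section}"

# Hypothetical citation borrowed from the SUSPICIOUS_FLAGS example above
print(case_queries("X v. Y", 123, "F.3d", 456))
print(statute_query(17, "107"))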

Input

Input format

Any legal document containing citations: brief, motion, memorandum of law, demand letter, opinion letter, law-review article, MD&A. Plain-text or PDF.

The prompt works on documents with anywhere from a few citations to several hundred. For documents with more than 50 citations, run the prompt in batches of 25-30 citations to keep the output manageable, as sketched below.
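
A minimal sketch of that batching step, assuming the citations have already been extracted into a list; the batch size and the placeholder citation strings are illustrative.

def batch_citations(citations: list[str], batch_size: int = 25) -> list[list[str]]:
    """Split a long citation list into prompt-sized batches of 25-30."""
    return [citations[i:i + batch_size] for i in range(0, len(citations), batch_size)]

# Each batch is appended to one separate run of the prompt above
for n, batch in enumerate(batch_citations([f"citation {i}" for i in range(73)]), start=1):
    print(f"Batch {n}: {len(batch)} citations")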

Expected output

Output format

A markdown table with one row per citation, plus an aggregate-flags summary at the end. The user takes the table, runs the VERIFICATION_QUERIES for each row through their legal-research database, and marks the row verified or refuted.

The output is intentionally not a verdict on whether citations are real. The verdict is the lawyer’s after running the queries.
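
If you prefer a spreadsheet for row-by-row tick-off, a short sketch like this converts the PART 1 markdown table into a CSV with an added STATUS column; the file names and the UNVERIFIED default are illustrative choices.

import csv

def table_to_csv(markdown: str, out_path: str) -> None:
    """Turn the PART 1 markdown table into a CSV tracking sheet."""
    rows = [
        [cell.strip() for cell in line.strip().strip("|").split("|")]
        for line in markdown.splitlines()
        # keep pipe-delimited rows, drop the |---|---| separator line
        if line.strip().startswith("|") and not set(line.strip()) <= set("|-: ")
    ]
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(rows[0] + ["STATUS"])      # header row from the table itself
        for row in rows[1:]:
            writer.writerow(row + ["UNVERIFIED"])  # flip to VERIFIED / REFUTED as you work

table_to_csv(open("verifier_output.md").read(), "citation_checklist.csv")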

Verification — what the lawyer must do after

⚠ Risks and failure modes

  • Do not trust the prompt’s output as verification. The prompt produces a verification work-list; running the work-list is the verification.
  • The prompt does not detect all fabrications. A citation to a real case for a proposition the case does not support will not appear in SUSPICIOUS_FLAGS; the lawyer catches this only by reading the case during the verification pass.
  • Pinpoint accuracy. AI tools sometimes generate plausible pinpoint pages that do not exist. The verification step should include opening the case and finding the pinpoint.

Vendor compatibility

Use Claude or GPT-4 to extract citations and generate verification queries. Then run each query through CoCounsel, Vincent, or Lexis+ AI for the actual case-database lookup. Do NOT trust a general-purpose AI to verify citations against a corpus — that is the original Mata failure mode.
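
For a scripted pipeline, the extraction step can also be run through an API rather than a chat window. Below is a minimal sketch using the Anthropic Python SDK; the model name, file name, and VERIFIER_PROMPT placeholder are assumptions to substitute for your own setup, and the same pattern works with any vendor SDK.

import anthropic

VERIFIER_PROMPT = "..."  # paste the full prompt from 'The prompt' above
document_text = open("draft_brief.txt").read()  # assumption: the filing to check

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumption: substitute a current model
    max_tokens=4096,
    messages=[{"role": "user", "content": f"{VERIFIER_PROMPT}\n\n---\n\n{document_text}"}],
)
print(response.content[0].text)  # PART 1 table + PART 2 aggregate flags

The output then feeds the batching and CSV sketches above; the database lookup itself still happens in the legal-research tool, never in the general-purpose model.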

Citations and further reading