The prompt
Copy and paste this into your AI tool of choice. The prompt assumes you can attach or paste the input documents inline; substitute as needed for the tool’s interface.
You are a citation-verification specialist. The user will provide a legal document containing case citations, statutory citations, regulatory citations, or law-review citations. Your task is NOT to verify them yourself. Your task is to produce a structured verification work-list that the user can run against an authoritative legal-research database (Westlaw, Lexis, CourtListener, Justia).
For every citation in the document, produce a row in a citation-verification table with these columns:
CITATION_AS_GIVEN:
The citation as it appears in the document, verbatim.
PARSED_FORM:
- For cases: party names, reporter (volume + reporter abbreviation + page), court, year. If a docket number is present, include it.
- For statutes: code title, section, year of code edition.
- For regulations: CFR title, section, year.
- For law-review articles: author(s), title, journal, volume, year.
VERIFICATION_QUERIES:
The exact search queries that will confirm or refute this citation:
- For cases: a Bluebook-format search query for Westlaw or Lexis; a CourtListener URL search; a Justia URL search.
- For statutes: the Cornell LII URL pattern.
- For regulations: the eCFR URL pattern.
- For law-review articles: a Google Scholar query.
CONTEXT_PROPOSITION:
The proposition the document cites this authority for, in one sentence.
VERIFICATION_TARGET:
What you are checking when verifying. Specifically:
- Does the case exist? (yes/no)
- Does the cite-as-given (volume, reporter, page) match the actual citation form?
- Does the case stand for the cited proposition?
- Has the case received negative treatment (overruled, abrogated, distinguished)?
SUSPICIOUS_FLAGS:
Mark any of:
- Citation format anomalies (wrong reporter for the court, year-court mismatch, parenthetical reporter+year combinations that don't exist)
- Author/court combinations that don't match (e.g., party names attributed to a court that would not have heard that kind of dispute)
- Pinpoint citations to pages that don't exist (e.g., "see X v. Y, 123 F.3d 456, 999" where the opinion is too short to reach the pin-cited page)
- Quotations that read more polished than the typical opinion language
- Multiple citations to the same author for unrelated propositions
- Pagination patterns that look generated (round numbers in unusual positions)
Output in two parts:
PART 1: CITATION VERIFICATION TABLE
One row per citation, formatted as a markdown table.
PART 2: AGGREGATE FLAGS
- Total citations: N
- Suspicious citations requiring extra scrutiny: M
- Citation format issues: K
- Recommended verification depth: STANDARD / ELEVATED / FORENSIC
Do not assert that any citation is fabricated. Verification is the user's job; your job is to make that verification efficient.
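The VERIFICATION_QUERIES column above asks for concrete query and URL patterns. As a minimal sketch, here is what those patterns can look like in Python. Every URL template below is an assumption based on each site's public search interface, not a documented API, so spot-check one of each against the live site before relying on it.

```python
# Illustrative only: the URL templates are assumptions about each site's
# public search interface; verify them against the live sites before use.
from urllib.parse import quote_plus

def case_queries(party_names: str, volume: str, reporter: str, page: str) -> dict:
    """Build the search queries the prompt asks for, for one case citation."""
    bluebook = f"{party_names}, {volume} {reporter} {page}"
    return {
        "westlaw_lexis": bluebook,  # paste into the database's citation search box
        "courtlistener": f"https://www.courtlistener.com/?q={quote_plus(bluebook)}",
        "justia": f"https://law.justia.com/search?q={quote_plus(bluebook)}",  # assumed pattern
    }

def statute_query(title: str, section: str) -> str:
    # Cornell LII pattern for the U.S. Code (assumed; spot-check one section first)
    return f"https://www.law.cornell.edu/uscode/text/{title}/{section}"

def regulation_query(title: str, part: str, section: str) -> str:
    # eCFR pattern (assumed; the site's own search box also resolves citations)
    return f"https://www.ecfr.gov/current/title-{title}/part-{part}/section-{section}"

def article_query(author: str, article_title: str) -> str:
    # Google Scholar full-text search
    return f"https://scholar.google.com/scholar?q={quote_plus(author + ' ' + article_title)}"
```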
Input
Input format
Any legal document containing citations: brief, motion, memorandum of law, demand letter, opinion letter, law-review article, MD&A. Plain-text or PDF.
The prompt works on documents with anywhere from a few citations to several hundred. For documents above 50 citations, run the prompt in batches of 25-30 citations to keep the output manageable.
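A minimal sketch of that batching, assuming the citations have already been extracted into a list of strings:

```python
def batches(citations: list[str], size: int = 25) -> list[list[str]]:
    """Split an extracted citation list into prompt-sized batches."""
    return [citations[i:i + size] for i in range(0, len(citations), size)]

# A 120-citation brief becomes five runs of the prompt:
# for batch in batches(all_citations, size=25):
#     attach only this batch's citations (or the pages containing them) to the prompt
```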
Expected output
Output format
A markdown table with one row per citation, plus an aggregate-flags summary at the end. The user takes the table and runs each VERIFICATION_QUERY through their legal-research database, marking each row verified or refuted.
The output is intentionally not a verdict on whether citations are real. The verdict is the lawyer’s after running the queries.
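If you want to track the verified/refuted marks programmatically, here is a sketch that reads the PART 1 table into one dict per citation. It assumes a well-formed pipe table with no escaped `|` characters inside cells:

```python
def parse_worklist(markdown: str) -> list[dict]:
    """Parse the PART 1 pipe table into one dict per citation row."""
    lines = [l for l in markdown.splitlines() if l.strip().startswith("|")]
    header = [c.strip() for c in lines[0].strip().strip("|").split("|")]
    rows = []
    for line in lines[2:]:  # lines[1] is the |---|---| separator row
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        rows.append(dict(zip(header, cells)))
    return rows

# After each database lookup, record the verdict by hand, e.g.:
# row["STATUS"] = "verified" / "refuted" / "proposition mismatch"
```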
Verification — what the lawyer must do after
- Run every verification query. The whole point of the prompt is to avoid the Mata failure mode of trusting AI to confirm authority. The lawyer must run the queries.
- Use a primary-corpus tool. CoCounsel (Westlaw-backed), Vincent (vLex-backed), Lexis+ AI, or direct Westlaw / Lexis search. Do not use a general-purpose AI for the verification step.
- Check the proposition match. A real case may not stand for the cited proposition. Reading the case’s headnotes and the pin-cited pages catches this.
- Check negative treatment. Use the citator (Westlaw KeyCite, Lexis Shepard’s) on every case relied upon. Negative treatment that the AI did not flag is a red flag in itself (a bookkeeping sketch for the whole pass follows this list).
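The bookkeeping sketch mentioned above. The field names and the escalation rule are illustrative, not part of the prompt, and the thresholds are one reasonable choice among many:

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    citation: str
    exists: bool                   # found in the primary corpus at all
    cite_form_matches: bool        # volume / reporter / page as given
    proposition_supported: bool    # the pin-cited pages actually say it
    negative_treatment: bool       # KeyCite / Shepard's shows negative history

def recommended_depth(results: list[VerificationResult]) -> str:
    """Illustrative escalation rule echoing the prompt's PART 2 depth levels."""
    refuted = sum(1 for r in results if not r.exists or not r.cite_form_matches)
    if refuted:
        return "FORENSIC"  # one nonexistent cite taints every other cite in the document
    flagged = sum(1 for r in results
                  if not r.proposition_supported or r.negative_treatment)
    return "ELEVATED" if flagged else "STANDARD"
```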
⚠ Risks and failure modes
- Do not trust the prompt’s output as verification. The prompt produces a verification work-list; running the work-list is the verification.
- The prompt does not detect all fabrications. A citation to a real case for a proposition the case does not support will not be flagged in the PART 1 table; the lawyer catches it only by reading the case during the verification pass.
- Pinpoint accuracy. AI tools sometimes generate plausible pinpoint pages that do not exist. The verification step should include opening the case and finding the pinpoint.
Vendor compatibility
Use Claude or GPT-4 to extract citations and generate verification queries. Then run each query through CoCounsel, Vincent, or Lexis+ AI for the actual case-database lookup. Do NOT trust a general-purpose AI to verify citations against a corpus — that is the original Mata failure mode.
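As a sketch of the extraction stage only, using the Anthropic Python SDK (the model name and file names are placeholders; the corpus lookup in stage two deliberately stays with the Westlaw/Lexis-backed tools):

```python
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

PROMPT = open("citation_verification_prompt.txt").read()  # the prompt above, saved to a file
DOCUMENT = open("brief.txt").read()                       # the legal document to check

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; substitute a current model name
    max_tokens=8000,
    messages=[{"role": "user", "content": f"{PROMPT}\n\n---\n\n{DOCUMENT}"}],
)
worklist = response.content[0].text  # PART 1 table + PART 2 aggregate flags

# Stage two is deliberately NOT automated here: run each VERIFICATION_QUERY
# through CoCounsel, Vincent, Lexis+ AI, or direct Westlaw / Lexis search.
print(worklist)
```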
Citations and further reading
- Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023). The original sanctions opinion.
- Park v. Kim, 91 F.4th 610 (2d Cir. 2024). Second Circuit affirmance of the verification duty.
- Federal Rule of Civil Procedure 11.
- ABA Formal Opinion 512 — the operational framework for AI use, including the verification duty.
- IXSOR: AI training for lawyers — the actual curriculum.