The five vendors #
CoCounsel (Thomson Reuters). Originally Casetext, acquired by Thomson Reuters in 2023. Sits on the Westlaw corpus. Strong on case-law research, brief drafting, and document review. Tight integration with Westlaw and Practical Law. Premium pricing ($300-$500/seat). The default choice for litigators already on Westlaw.
Vincent (vLex). Built on vLex's multi-jurisdictional corpus (US case law, statutes, regs, plus international). Strong on cite-checking and verifying authority across jurisdictions. Mid-tier pricing ($80-$200/seat). Competitive with CoCounsel on most US-only work and superior for multi-jurisdictional.
Lexis+ AI (LexisNexis). Sits on the Lexis corpus. Strong integration with Shepard's citation analysis. Good for litigators already on Lexis and for transactional work where Lexis's secondary sources (Matthew Bender treatises, ALR) carry weight. Premium pricing ($300-$600/seat depending on add-ons).
Westlaw Edge AI (Thomson Reuters). Westlaw's native AI feature set, distinct from CoCounsel even though both draw on the same corpus. Westlaw Edge AI is more conservative in its outputs (citations always, no synthesis without citation). Premium pricing bundled into Westlaw subscriptions.
Harvey. Different model: not corpus-anchored, instead trained on legal-domain content with retrieval against multiple sources. Strong on transactional work, M&A diligence, and large-firm workflows. AmLaw 100 customer base. Premium pricing ($400-$800/seat enterprise).
The five do not occupy the same position in the market. CoCounsel and Lexis+ AI compete head-to-head for the litigator buyer. Vincent occupies a price-performance sweet spot. Westlaw Edge AI is a feature of Westlaw rather than a standalone purchase. Harvey is the AmLaw 100 default that is moving down-market.
Verification: the post-Mata standard #
Mata v. Avianca and Park v. Kim made citation verification the operative standard for AI use in briefs. Three years on, the AI legal research vendors have differentiated on how they handle verification.
Citation-anchored output (CoCounsel, Vincent, Lexis+ AI, Westlaw Edge AI). Every claim in the AI's output is anchored to a primary-source citation with a verifiable URL or document reference. The lawyer can click through to the source. Hallucination rates are low because the model retrieves rather than generates.
Synthesised output with confidence scoring (Harvey, some CoCounsel modes). The AI synthesises across sources and provides a confidence score for each claim. Higher-confidence claims are more likely to be verifiable; lower-confidence claims need extra scrutiny. More flexible than pure citation-anchoring but requires more lawyer-time to verify.
Free-form generation with citation hints (general-purpose enterprise AI). Claude, ChatGPT, and Gemini will produce legal-research-style output, but the citation hints are not retrieved from a corpus — they are generated. This is the original Mata failure mode. General-purpose AI should not be used for citation work after Mata.
For a firm whose lawyers will be filing AI-assisted briefs, the verification posture is the most important evaluation criterion. Citation-anchored vendors (CoCounsel, Vincent, Lexis+ AI, Westlaw Edge AI) are the floor.
Strengths and weaknesses by vendor #
CoCounsel — strengths: Westlaw corpus integration, brief-drafting workflows, document-review module, large-firm references. Weaknesses: priced at a premium even for solo / small-firm work; tied to the Westlaw ecosystem (which may or may not match the firm's preferences).
Vincent — strengths: vLex's multi-jurisdictional corpus, strong cite-checking, mid-tier pricing competitive with general-purpose AI plus a research add-on, multi-language and international work. Weaknesses: smaller US-domestic corpus depth than Westlaw or Lexis; less polished UI than larger vendors; smaller customer base means less peer-validation.
Lexis+ AI — strengths: Lexis corpus integration, Shepard's citation analysis built-in, secondary-source library (Matthew Bender, ALR, Practising Law Institute treatises). Weaknesses: AI feature parity with CoCounsel comes and goes; pricing tier structure complex; some features behind separate add-ons.
Westlaw Edge AI — strengths: integrated with existing Westlaw subscription (no separate procurement for many firms); conservative output mode reduces verification work; KeyCite integration. Weaknesses: less aggressive AI features than CoCounsel; bundled pricing makes value-isolation hard; some workflows still require switching between Westlaw and CoCounsel.
Harvey — strengths: AmLaw 100 reference customer base; strong transactional and M&A workflows; multi-corpus retrieval; sophisticated firm-specific customisation. Weaknesses: enterprise-only pricing is out of reach for most solos and small firms; less polished on litigation workflows than CoCounsel; premium positioning means many firms over-pay relative to actual usage.
Use-case fit #
For litigation firms already on Westlaw: CoCounsel is the path of least resistance. Westlaw Edge AI plus a separate CoCounsel subscription gives the most depth. A pure Westlaw Edge AI subscription is acceptable for firms that don't need brief-drafting AI.
For litigation firms already on Lexis: Lexis+ AI. The Shepard's integration is meaningful for cite-checking; the secondary sources matter for substantive analysis.
For multi-jurisdictional or international work: Vincent. The vLex corpus is the strongest of the available legal-research AI corpora outside US-domestic-only work.
For solo and small-firm budget-constrained buyers: Vincent (mid-tier pricing) plus a general-purpose enterprise AI for surrounding analysis. Or, for firms already paying for CoCounsel or Lexis+ AI through inherited Westlaw / Lexis seats: use what's already there.
For AmLaw 100 firms: Harvey is the reference choice; CoCounsel and Lexis+ AI are also defensible. The decision usually turns on existing Westlaw / Lexis relationships and the firm's transactional-vs-litigation mix.
For transactional and M&A practices: Harvey or CoCounsel (with Practical Law access) over Lexis+ AI for most firms. Practical Law's deal-document templates are unmatched.
For in-house legal departments: depends on the existing research subscription. The procurement question is usually adding AI on top of an existing Westlaw or Lexis seat, not picking from scratch.
Pricing in 2026 #
- CoCounsel: $300-$500/user/month at typical volumes. Bundles available with Westlaw subscriptions.
- Vincent: $80-$200/user/month. The mid-tier is genuinely mid-tier; the premium adds enterprise features.
- Lexis+ AI: $300-$600/user/month depending on add-ons. The Shepard's-integrated tier is at the higher end.
- Westlaw Edge AI: Bundled into Westlaw subscriptions ($150-$350/user/month for Westlaw with AI features included).
- Harvey: $400-$800/user/month enterprise. AmLaw 100 firms pay seven-figure annual contracts; mid-market firms negotiate from there.
The category is bimodal: mid-tier ($80-$200) anchored by Vincent and several general-purpose AI subscriptions used for legal work; premium tier ($300-$600+) anchored by CoCounsel, Lexis+ AI, and Harvey. Few products sit in between because the cost of building a high-quality legal corpus is large, and once a vendor has built one, it prices accordingly.
The two-tool stack pattern #
The most common 2026 procurement pattern in firms above 5 attorneys is a two-tool stack: one premium primary-corpus AI for verification-critical work, plus one mid-tier or general-purpose AI for surrounding analysis.
The reasoning: the premium tool is necessary for citation-grade research (Mata-defence) and for the most-used database. The mid-tier or general-purpose tool covers everything else — document drafting, summarisation, brainstorming, comparison work — at substantially lower cost.
Concrete examples:
- CoCounsel for citation-grade research + Claude for Work for surrounding drafting and analysis. ~$450/month per attorney.
- Lexis+ AI for citation-grade research + ChatGPT Enterprise for surrounding work. ~$420/month per attorney.
- Vincent for citation-grade research + Claude for Work for surrounding work. ~$200/month per attorney. The budget option that doesn't sacrifice verification quality.
Firms that buy only the premium tool usually under-use it — the premium AI is overkill for many tasks the firm hands it. Firms that buy only the mid-tier or general-purpose AI run into the verification problem on citation work. The two-tool stack solves both.
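The arithmetic behind the three stack examples above can be laid out explicitly. The premium-seat figures are midpoints of the ranges quoted in this guide; the general-purpose seat prices are assumptions for illustration, not vendor quotes.

```python
# Per-attorney monthly cost of each two-tool stack described above.
# Premium figures are midpoints from this guide's pricing section;
# the general-purpose AI seat prices ($50-$60) are assumed, not quoted.
stacks = {
    "CoCounsel + Claude for Work":    400 + 50,   # ~$450/month
    "Lexis+ AI + ChatGPT Enterprise": 360 + 60,   # ~$420/month
    "Vincent + Claude for Work":      150 + 50,   # ~$200/month
}

for name, monthly in stacks.items():
    print(f"{name}: ${monthly}/attorney/month (${monthly * 12:,}/year)")
```

Annualised, the gap compounds: the Vincent stack runs roughly $2,400 per attorney per year, against roughly $5,400 for the CoCounsel stack — the difference funds the pilot itself many times over.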
How to evaluate (the 5-question pilot) #
Vendor demos optimise for impression. The evaluation that actually matters is running the same legal-research questions through 2-3 vendors and comparing the output. Pick five questions that span the firm's typical work, run them through each vendor, and have the lawyer compare the results on six axes:
- Citation accuracy. Are the cited cases real? Are the pin-cites accurate? (Verify each via Westlaw / Lexis directly.)
- Comprehensiveness. Does the output cite the cases the lawyer would expect? Or does it miss leading authority?
- Currency. Does the output reflect recent (last 6 months) cases?
- Negative-treatment awareness. If a cited case has been overruled, does the output flag it?
- Output usability. Is the output structured for the lawyer's actual work product, or does it require heavy reformatting?
- Speed. How long does the tool take to produce a usable answer?
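One way to make the parallel comparison tractable is a simple weighted scorecard over the six axes. The sketch below is a hypothetical rubric, not a recommended methodology: vendor names and scores are placeholders, and the choice to weight citation accuracy three times the other axes is an assumption reflecting its role as the Mata-defence criterion.

```python
# Hypothetical pilot scorecard: rate each vendor 1-5 on the six axes.
# Citation accuracy is weighted 3x (an assumption — it is the
# verification-critical axis); all other axes carry weight 1.
AXES = ["citation_accuracy", "comprehensiveness", "currency",
        "negative_treatment", "usability", "speed"]
WEIGHTS = {"citation_accuracy": 3.0}  # unlisted axes default to 1.0

def weighted_score(scores: dict[str, int]) -> float:
    return sum(scores[a] * WEIGHTS.get(a, 1.0) for a in AXES)

# Placeholder scores, not pilot results.
pilot = {
    "Vendor A": {"citation_accuracy": 5, "comprehensiveness": 4,
                 "currency": 4, "negative_treatment": 5,
                 "usability": 3, "speed": 4},
    "Vendor B": {"citation_accuracy": 3, "comprehensiveness": 5,
                 "currency": 4, "negative_treatment": 3,
                 "usability": 5, "speed": 5},
}

for vendor, scores in sorted(pilot.items(),
                             key=lambda kv: weighted_score(kv[1]),
                             reverse=True):
    print(f"{vendor}: {weighted_score(scores)}")
```

With these placeholder numbers, Vendor A's stronger citation accuracy outweighs Vendor B's edge on usability and speed — which is the point of weighting: a tool that is pleasant to use but weak on citations should not win the pilot.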
Vendor pilots in 2026 are typically 30-60 days at no cost. Most vendors will accept a pilot framed around the firm's actual work; a few will not, and that's a procurement signal in itself.
Frequently asked #
What's the best AI for legal research in 2026?
Depends on which corpus and which budget. CoCounsel for Westlaw firms; Lexis+ AI for Lexis firms; Vincent for budget-constrained or multi-jurisdictional; Harvey for AmLaw 100. The two-tool stack pattern (one premium primary-corpus AI plus one general-purpose for surrounding work) fits most firms better than a single-vendor stack.
Can I use ChatGPT for legal research?
Not for citation-bearing legal research. The Mata v. Avianca standard requires verification of every cited authority, and general-purpose AI generates citations rather than retrieving them — the original Mata failure mode. Use a primary-corpus tool (CoCounsel, Vincent, Lexis+ AI, Westlaw Edge AI) for citation work; general-purpose AI is fine for surrounding analysis and drafting.
Is Harvey the best?
Harvey is the AmLaw 100 reference choice and has strong transactional workflows. For most solo and mid-market firms, CoCounsel, Lexis+ AI, or Vincent will be more cost-effective and equally accurate on the firm's typical work. Harvey's strength is integration depth and customisation, which is most valuable at large-firm scale.
Is Vincent really competitive with CoCounsel?
For US-domestic case-law research at typical solo and small-firm volumes: yes. The vLex corpus is strong on US case law and unmatched for multi-jurisdictional or international work. CoCounsel pulls ahead on Westlaw integration depth, brief-drafting workflows, and large-firm features. The mid-tier vs premium pricing is real, and Vincent is the strongest mid-tier option in 2026.
How long should the pilot be?
30-60 days. Most vendors offer 30 days standard, 60 days on negotiation. Run 5 representative research questions through 2-3 vendors in parallel; compare on the six evaluation axes above. The parallel run is essential because vendor demos always show the tool at its best.
Citations and further reading #
- Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023).
- Park v. Kim, 91 F.4th 610 (2d Cir. 2024).
- ABA Formal Opinion 512.
- IXSOR Resources: Citation Verifier — the Mata Defense Prompt.
- IXSOR: AI vendor diligence catalogue.
- IXSOR: Best AI for Lawyers — evaluation framework.
- IXSOR: AI Contract Review Buyer’s Guide.
