The four categories of AI contract tooling #
AI contract review is not one product category. It is four, and the four solve different problems for different buyers. The first procurement decision is choosing which category fits the firm's work.
Category 1: Native drafting AI. Tools that produce contract language from a structured input or natural-language description. Representative vendors: Spellbook, BlackBoiler, BriefCatch (drafting modules), the drafting features inside CoCounsel and Lexis+ AI. The buyer here is producing new contracts. The AI generates first drafts, edits to firm style, and inserts standard clauses. Pricing typically $80-$200/seat/month. Best for transactional practices producing high-volume agreements (vendor contracts, NDAs, employment agreements, service agreements).
Category 2: Contract review and analysis AI. Tools that ingest existing contracts and produce structured analyses: clause extraction, deviation reports, risk scoring, redline suggestions. Representative vendors: Kira (Litera), Luminance, ContractPodAI, ThoughtRiver, Robin AI, Della. The buyer here is reviewing inbound contracts — vendor agreements, M&A diligence, lease portfolios, employment-agreement audits. Pricing $100-$400/seat/month with significant per-document or per-page variants. Best for firms doing high-volume inbound review.
Category 3: Full contract-lifecycle management with embedded AI. Tools that handle drafting, review, negotiation, signature, storage, renewal, and reporting in a single platform. Representative vendors: Ironclad, Agiloft, LinkSquares, ContractPodAI (also full-lifecycle), LexisNexis CounselLink, Onit. AI is one feature among many. Pricing $200-$1,500+/user/month or per-contract enterprise pricing. Best for in-house legal departments and large firms with heavy contract operations.
Category 4: General-purpose AI with contract-specific prompts. Claude, ChatGPT, Gemini, CoCounsel, Lexis+ AI, Harvey, used with contract-review prompts (such as IXSOR's vendor-policy analyzer or a similar internal prompt). The buyer here gets the lowest marginal cost (already paying for general-purpose AI) and the highest flexibility, at the cost of less polished workflows than Categories 1-3. Pricing typically $25-$60/seat/month for the underlying AI, plus the firm's prompt-engineering time.
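To make the Category 4 mode concrete, here is a minimal sketch of a playbook-driven review prompt. The playbook entries, field names, and `build_review_prompt` helper are illustrative assumptions, not IXSOR's actual analyzer:

```python
# Hypothetical sketch: a structured contract-review prompt for a
# general-purpose enterprise AI. The playbook format and clause
# standards below are illustrative, not any vendor's actual schema.

PLAYBOOK = {
    "limitation_of_liability": "Cap at 12 months of fees; no uncapped indemnities.",
    "data_retention": "Deletion within 60 days of termination, confirmed in writing.",
    "auto_renewal": "Renewal requires affirmative notice; no evergreen terms.",
}

def build_review_prompt(contract_text: str) -> str:
    """Assemble a deviation-report prompt from the firm playbook."""
    rules = "\n".join(f"- {clause}: {standard}"
                      for clause, standard in PLAYBOOK.items())
    return (
        "You are reviewing an inbound vendor contract against the firm playbook.\n"
        f"Playbook standards:\n{rules}\n\n"
        "For each standard, quote the relevant contract language, state whether "
        "it conforms or deviates, and rate the deviation (minor / material / walk-away).\n\n"
        f"Contract:\n{contract_text}"
    )

# The assembled prompt can then go to any enterprise-tier model the firm
# holds under a Rule 1.6-aligned DPA.
```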
The error most firms make in their first AI-contract procurement is buying from the wrong category. A drafting practice that buys Category 2 (review tooling) ends up using a fraction of the platform. A review-heavy practice that buys Category 1 (drafting) finds the analysis features thin. The category decision is the first decision, and it is upstream of the vendor-shortlist decision.
What the firm is actually buying #
The marketing pages emphasise feature lists. The procurement decision is downstream of three more important questions.
Where does the contract data go? The contract being reviewed is, in nearly every case, a client confidential document. Where it sits during processing, who has access, how long it is retained, and what happens to it after the review are the operative questions for ABA Model Rule 1.6 analysis. Vendor marketing rarely answers these directly; the data-protection addendum (DPA) does.
What does the AI actually do? “Reviews contracts” means very different things across vendors. Some AI tools extract clause types into a structured table (Kira's traditional model). Some run a deviation report against a firm playbook (Luminance, ThoughtRiver). Some generate redline suggestions inline (Spellbook, Robin AI). Some produce a narrative risk summary (Harvey, CoCounsel). The buyer should know which mode the vendor uses before evaluating against firm workflows.
What integrates and what doesn't? A contract-review tool that does not integrate with the firm's document management (NetDocuments, iManage, SharePoint) or matter management (Clio, MyCase) creates manual work that erodes the time savings. The integration list is more important than most feature differences. Vendors with weak DMS integrations may cost less up front, but they cost the firm more time per matter.
An effective procurement evaluates each vendor on these three questions before any feature comparison. Most procurement processes do the reverse and end up choosing the vendor with the most-marketed feature set rather than the best operational fit.
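One way to enforce that ordering is to capture the three questions as a pre-feature screen. A sketch, with a hypothetical `VendorScreen` record and invented example values:

```python
# Illustrative pre-feature screen: the three questions, captured as a
# record per vendor. Field names and values are hypothetical examples.

from dataclasses import dataclass, field

@dataclass
class VendorScreen:
    name: str
    data_flow_documented: bool   # Does the DPA answer where contract data goes?
    ai_mode: str                 # "extraction", "deviation", "redline", or "narrative"
    integrations: list = field(default_factory=list)

    def passes_screen(self, required_integrations: set) -> bool:
        """A vendor advances to feature comparison only if the DPA is
        answerable and the firm's DMS/matter stack is covered."""
        return self.data_flow_documented and required_integrations <= set(self.integrations)

# Hypothetical example: a firm on NetDocuments and Clio.
candidate = VendorScreen("ExampleVendor", True, "deviation", ["NetDocuments", "Clio"])
print(candidate.passes_screen({"NetDocuments", "Clio"}))  # True
```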
Rule 1.6 confidentiality and the data-flow analysis #
The Rule 1.6 analysis for AI contract review tracks the same framework as for any AI use, with practice-specific intensity. (See the vendor diligence catalogue for the general framework.) The contract-review specifics:
Document retention. Contracts under review include client deal terms, pricing, counterparty identities, financial terms, and (in M&A) sometimes pre-public material non-public information. The retention question is sharper than for, say, a brief: after the review is complete, how long does the vendor retain the contract, the analysis output, and the metadata? Acceptable: deletion within 30-60 days post-review or on-firm-request. Walk-away: indefinite retention for “product improvement.”
Training-data use. Many AI contract-review vendors trained their initial models on contract corpora. The relevant question is whether the vendor continues to train on customer contracts. Acceptable: no training on customer contracts, or training only on contracts the customer has affirmatively opted into the corpus pool. Walk-away: default training on customer contracts with opt-out only.
Counterparty data. A contract under review names the counterparty. Some vendors aggregate counterparty data (anonymised or not) for benchmark analytics across their customer base. The Rule 1.6 question is whether counterparty identity is treated as part of the client confidence. The conservative answer: yes, and aggregation that reveals counterparty identity is a Rule 1.6 issue.
Discovery exposure. Contracts under AI review may be discoverable in subsequent litigation between the parties. The vendor's retention policy and access controls determine whether the AI processing layer creates additional discovery surface. Vendors that retain only the analysis output (and not the contract itself) reduce exposure; vendors that retain the contract increase it.
Privilege analysis. An AI contract review performed at counsel's direction, through an enterprise-tier vendor with appropriate confidentiality terms, is consistent with the work-product framework set out in Warner v. Gilbarco. AI contract review performed in consumer-tier AI, or by a non-lawyer at the firm without supervision, runs into the limitations identified in United States v. Heppner.
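The retention and training-data rules above reduce to a simple pass/walk-away screen. A sketch, assuming illustrative field names rather than any vendor's API:

```python
# The retention and training rules above, reduced to a screen. The
# thresholds mirror the ranges in this section; the function and its
# parameters are illustrative.

def dpa_screen(retention_days, deletes_on_request, trains_on_customer_data, training_is_opt_in):
    """Return 'acceptable' or 'walk-away' under the Rule 1.6 heuristics above."""
    # Walk-away: indefinite retention for "product improvement".
    if retention_days is None and not deletes_on_request:
        return "walk-away"
    # Walk-away: default training on customer contracts with opt-out only.
    if trains_on_customer_data and not training_is_opt_in:
        return "walk-away"
    # Acceptable: deletion within 30-60 days post-review or on firm request.
    if deletes_on_request or (retention_days is not None and retention_days <= 60):
        return "acceptable"
    return "needs-negotiation"

print(dpa_screen(45, True, False, False))    # acceptable
print(dpa_screen(None, False, True, False))  # walk-away
```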
Feature parity vs feature value #
Most vendors in 2026 list a similar feature set on their marketing pages: clause extraction, redline suggestions, deviation reports, integration with major DMS platforms, AI-assisted negotiation, and term-by-term risk scoring. The features are largely table stakes; the differentiation has moved to:
- Corpus quality and update cadence. Tools trained on a curated legal-contract corpus (Kira, Luminance) tend to outperform general-purpose AI on specialist clauses. The corpus update cadence matters more than the corpus size.
- Playbook integration. The firm's negotiation playbook (which clauses are mandatory, which are negotiable, which are walk-aways) varies by firm and by client. Tools that allow firm-specific playbook customisation (ThoughtRiver, Della, Robin AI) save the most time over general-purpose tools.
- Multi-document and portfolio analysis. Reviewing one contract is interesting; reviewing a thousand contracts under a deal-team timeline is the actual M&A diligence problem. Tools designed for portfolio analysis (Kira's diligence module, Luminance, ContractPodAI's data-room features) win this use case.
- Negotiation tracking. Tools that track redlines through multiple counterparty rounds (Ironclad, ContractPodAI, LinkSquares) save time on the negotiation cycle. Tools without this feature force the firm to manage negotiation history manually.
- Reporting and analytics. Tools that produce contract analytics across the firm's portfolio (LinkSquares, Ironclad) deliver value beyond the per-contract review. Tools without portfolio analytics review one contract at a time, with no insight that compounds across the portfolio.
The features that matter for a particular firm depend on its work mix. A firm reviewing high volumes of inbound vendor contracts will value playbook integration above portfolio analytics. A firm running M&A diligence will value the reverse. The feature evaluation should follow the work mix, not the marketing emphasis.
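A worked example of that principle: the same capability scores, weighted for two different work mixes, point to different vendors. All numbers here are invented for illustration:

```python
# Worked example of "feature evaluation follows the work mix": identical
# vendor feature scores, weighted two ways. All numbers are hypothetical.

FEATURES = ["playbook_integration", "portfolio_analysis", "negotiation_tracking"]

# Hypothetical 1-5 capability scores for two candidate vendors.
vendor_scores = {
    "VendorA": {"playbook_integration": 5, "portfolio_analysis": 2, "negotiation_tracking": 3},
    "VendorB": {"playbook_integration": 2, "portfolio_analysis": 5, "negotiation_tracking": 4},
}

# Weights sum to 1.0 and encode the firm's work mix.
inbound_vendor_review = {"playbook_integration": 0.6, "portfolio_analysis": 0.1, "negotiation_tracking": 0.3}
ma_diligence          = {"playbook_integration": 0.1, "portfolio_analysis": 0.7, "negotiation_tracking": 0.2}

def weighted_score(vendor, weights):
    return sum(vendor_scores[vendor][f] * weights[f] for f in FEATURES)

for weights, label in [(inbound_vendor_review, "inbound review"), (ma_diligence, "M&A diligence")]:
    best = max(vendor_scores, key=lambda v: weighted_score(v, weights))
    print(f"{label}: {best}")
# inbound review favours VendorA; M&A diligence favours VendorB.
```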
Pricing in 2026 #
The contract-review AI category has settled into a per-seat-per-month pricing pattern with some volume-based variants. Indicative ranges:
- Drafting AI (Category 1): $80-$200 per attorney per month. Spellbook, BlackBoiler, and the drafting modules in CoCounsel and Lexis+ AI sit in this range. Volume discounts at 10+ seats and 50+ seats.
- Review AI (Category 2): $100-$400 per seat. Kira, Luminance, ContractPodAI cluster at the higher end of this range; Robin AI, Della, ThoughtRiver at the lower end. Per-document pricing (typically $5-$50/document) often available for low-volume firms.
- Lifecycle platforms (Category 3): $200-$1,500 per user per month, or enterprise pricing. Ironclad, Agiloft, and LinkSquares anchor the enterprise end; LexisNexis CounselLink and Onit dominate the in-house legal-department segment.
- General-purpose AI (Category 4): $25-$60 per seat for ChatGPT Enterprise / Claude for Work / Gemini for Workspace, plus the firm's prompt-engineering time. The cheapest entry but the most internal lift.
Most firms underestimate the implementation cost. The marketed price is the per-seat sticker; the actual cost includes integration ($5K-$50K one-time depending on scope), playbook development ($10K-$80K initially, plus ongoing maintenance), training ($2K-$20K per cohort), and the inevitable workflow tweaks during the first three months. A realistic budget for a 10-attorney firm adopting Category 2 review AI is $60K-$150K in year one, dropping to $30K-$80K in year two.
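The year-one arithmetic for that 10-attorney example, using midpoints of the ranges above (component figures are illustrative):

```python
# Year-one total-cost-of-ownership arithmetic for the 10-attorney
# Category 2 example above. Component figures are the midpoints of the
# ranges in this section; the breakdown is illustrative.

seats = 10
seat_price_per_month = 250          # mid-range Category 2 per-seat price
subscription = seats * seat_price_per_month * 12   # $30,000

integration = 25_000                # one-time, mid-range of $5K-$50K
playbook_development = 45_000       # initial, mid-range of $10K-$80K
training = 10_000                   # one cohort, mid-range of $2K-$20K

year_one = subscription + integration + playbook_development + training
print(f"Year one: ${year_one:,}")   # Year one: $110,000 (inside the $60K-$150K range)
```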
Pricing negotiation is realistic above 5 seats. Most vendors list a price; most will discount 10-30% on multi-year commitments. Pilot programs (free 30-90 day evaluations) are standard at the enterprise end and increasingly at the mid-market end.
Six contract clauses to redline in any vendor agreement #
The vendor's own contract is the contract that matters most. Six clauses recur in 2026 AI-vendor agreements and demand redlining:
1. Training data rights. Default contracts often grant the vendor rights to train models on customer prompts and outputs. The redline: explicit, affirmative carve-out preventing training on customer data. This should be a hard requirement at the enterprise tier; vendors not offering it should be walk-aways for client work.
2. Retention windows. Default contracts often allow the vendor to retain customer data indefinitely for “product improvement.” The redline: explicit retention limit (60-180 days post-deletion-request) plus customer right to demand deletion at any time with documented confirmation.
3. Sub-processor chain. Vendors typically use third-party cloud infrastructure, model providers, and support tooling. The redline: a stable URL listing all sub-processors, customer notice obligation before sub-processor changes, and customer right to object to a new sub-processor.
4. Governmental disclosure. Default contracts often allow disclosure to government agencies under broadly drawn legal-process clauses. The redline: customer-notice obligation before disclosure (where legally permissible), and a commitment to challenge overly broad legal process. Vendors that decline that commitment are taking a more permissive disclosure posture than most firms can accept for client work.
5. Anonymisation claims. Vendors often claim that “anonymised” data is outside customer protections. The redline: a defined anonymisation method (k-anonymity, differential privacy, or a specified irreversible-stripping process), and a contractual commitment that “anonymised” data not be re-identified or aggregated in a way that reveals customer identity.
6. Tier differentiation. Many vendors offer a consumer / free tier with weaker protections than the enterprise tier. The redline: explicit confirmation that the customer is on the enterprise tier, with the DPA terms applicable to all the customer's users, all the customer's data, all the time. No silent tier-flipping.
These six are the floor. Specific firm contexts may require additional redlines (HIPAA BAA execution for medical-records work, GDPR data-controller terms for EU clients, attorney-client privilege carve-outs for criminal-defense work). The framework is portable; the specific contract terms vary.
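As a sketch of what a prompt like the vendor-policy analyzer operationalises, the six redlines can be captured as a machine-checkable checklist. Keys and the example DPA record below are hypothetical:

```python
# The six redlines as a machine-checkable checklist. A sketch: keys and
# the example DPA record are illustrative, not a real vendor's terms.

REDLINES = {
    "training_carve_out":   "No training on customer data, affirmatively stated",
    "retention_limit":      "Defined retention window plus deletion on request",
    "subprocessor_list":    "Published sub-processor list with notice and objection rights",
    "disclosure_notice":    "Notice before governmental disclosure where permissible",
    "anonymisation_method": "Defined anonymisation method; no re-identification",
    "enterprise_tier":      "Enterprise DPA applies to all users and data; no tier-flipping",
}

# Hypothetical parse of a vendor DPA: True where the clause survives redline.
example_dpa = {
    "training_carve_out": True, "retention_limit": True, "subprocessor_list": True,
    "disclosure_notice": False, "anonymisation_method": True, "enterprise_tier": True,
}

open_items = [REDLINES[k] for k, ok in example_dpa.items() if not ok]
print(open_items)  # ['Notice before governmental disclosure where permissible']
```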
Implementation playbook (twelve weeks) #
For a firm adopting AI contract review for the first time, twelve weeks is the realistic timeline.
Weeks one through three: workflow audit and category decision. Identify the contract work the firm actually does — volume, type, sponsor (deal partner), turnaround targets. Decide which of the four tool categories matches. The category decision determines the vendor shortlist.
Weeks four and five: vendor shortlist and pilot scoping. Select 2-3 vendors from the chosen category. Reach pilot agreements with each (most vendors offer 30-60 day pilots in 2026). Define the pilot scope: which contracts, which review steps, which integrations.
Weeks six through nine: parallel pilot. Run the same contracts through the vendor pilots and the firm's existing workflow simultaneously. Track time savings, accuracy, integration friction, and lawyer satisfaction. The parallel-run is essential because vendor demos always show the tool at its best; the parallel-run shows the tool on the firm's actual work.
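A minimal sketch of the parallel-run bookkeeping, with invented per-contract times for illustration:

```python
# Parallel-pilot bookkeeping sketch: the same contracts run through the
# existing workflow and each vendor pilot. All numbers are illustrative.

from statistics import mean

# Minutes per contract: the same five contracts, three tracks.
baseline = [95, 110, 80, 120, 100]
pilots = {
    "VendorA": [60, 70, 55, 85, 65],
    "VendorB": [75, 80, 70, 95, 85],
}

for vendor, minutes in pilots.items():
    saved = mean(baseline) - mean(minutes)
    print(f"{vendor}: {saved:.0f} min/contract saved "
          f"({saved / mean(baseline):.0%})")
# VendorA: 34 min/contract saved (34%); VendorB: 20 min/contract saved (20%)
```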
Week ten: vendor selection. Pick one vendor based on the pilot data. Negotiate the contract: focus on the six redlines above and on multi-year pricing.
Week eleven: integration. Build the integration with the firm's DMS and matter-management platforms. The integration step usually exposes data-flow assumptions that the contract did not address; circle back to the vendor as needed.
Week twelve: training and standardisation. Two-hour training session for the contract team. Document the firm's standard contract-review workflow with the AI tool. Cut over.
Compressed timelines are possible (down to six weeks for solo / small firms with simpler stacks). Longer timelines (16-26 weeks) are common for in-house legal departments or AmLaw 100 firms where the procurement process and security review are themselves multi-week exercises.
The hardest question: which tool for which work? #
The most consequential AI-contract decision is not which vendor to pick. It is which tool to use for which task. The wrong-tool problem appears in three patterns:
Drafting AI used for review. A firm that has Spellbook (Category 1: drafting) for its transactional practice tries to use it to review inbound vendor contracts. The drafting AI does not have the deviation-detection or playbook features the review use case needs. The lawyer ends up reading the contract page-by-page; the AI's value is approximately zero.
Review AI used for drafting. A firm that has Kira (Category 2: review) tries to use it to draft new contracts. Review AI is built around the assumption of an existing contract; it is not a generative-drafting tool. The output is unusable.
General-purpose AI used at scale without prompt engineering. A firm with Claude for Work tries to do M&A contract review by pasting contracts and asking general questions. The output is a narrative summary that misses the deviation patterns the firm needs to catch. The general-purpose AI works well for one-off contract analysis with careful prompt engineering; it does not scale to a thousand-contract diligence without that engineering.
The discipline that resolves the wrong-tool problem: a firm-level matrix of work types × tool types, maintained as part of the firm's AI policy and updated quarterly. The matrix tells the lawyer which tool to use for the work in front of them. Without it, individual lawyers default to whichever tool they have used most recently, which is rarely the right answer.
The IXSOR Firm AI Policy Generator includes a tool-by-task matrix as part of the generated policy. Firms can populate it as their tool-stack grows.
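A minimal sketch of such a matrix, with hypothetical work types and tool assignments; a real matrix follows the firm's own stack and policy:

```python
# A minimal tool-by-task matrix of the kind described above. Work types
# and tool assignments are hypothetical examples.

TOOL_MATRIX = {
    "draft_nda":              "Spellbook (Category 1)",
    "inbound_vendor_review":  "Kira (Category 2)",
    "ma_diligence_portfolio": "Kira diligence module (Category 2)",
    "one_off_risk_summary":   "Claude for Work + firm prompt (Category 4)",
}

def tool_for(task: str) -> str:
    """Look up the approved tool; unknown work types go to a human decision."""
    return TOOL_MATRIX.get(task, "No approved tool; escalate to AI committee")

print(tool_for("inbound_vendor_review"))  # Kira (Category 2)
print(tool_for("lease_abstraction"))      # No approved tool; escalate to AI committee
```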
Frequently asked questions #
What's the best AI for contract review in 2026?
There is no single best. The best tool depends on whether the firm is drafting (Category 1: Spellbook, BlackBoiler), reviewing inbound contracts (Category 2: Kira, Luminance, ContractPodAI), running full lifecycle (Category 3: Ironclad, Agiloft), or using general-purpose AI with prompts (Category 4: CoCounsel, Claude for Work). The category decision precedes the vendor decision.
How much should I budget for AI contract review?
Realistic year-one budget for a 10-attorney firm: $60K-$150K all-in (subscription plus integration plus training plus playbook development). Year-two: $30K-$80K. Per-seat sticker pricing ranges from $30 to $400+; the seat price is a small fraction of total cost-of-ownership.
Can I use ChatGPT to review contracts?
The enterprise tier (ChatGPT Enterprise or Claude for Work) with a Rule 1.6-aligned data-protection addendum can be used for contract review with careful prompt engineering. The consumer tier should not be used for client documents. The trade-off: lower marginal cost than specialist tools, but more lawyer-time per contract because the workflow is less polished.
Do I need a Business Associate Agreement with my AI contract-review vendor?
Only for contracts containing protected health information: the BAA is a HIPAA instrument. GDPR and CCPA data call for processor or service-provider terms in the data-protection addendum, not a BAA. Most commercial contracts need neither; healthcare-related contracts and any contract referencing patient or health-data exhibits do need a BAA. Confirm with the vendor before processing.
What happens to my contracts after the AI review?
The answer is in the vendor's data-protection addendum, not its marketing page. Acceptable practice in 2026: deletion within 30-60 days post-review or on customer request. Walk-away practice: indefinite retention for product improvement. The retention question should be settled in writing before the firm sends the first contract.
Will defense / opposing counsel know I used AI to review the contract?
Generally not unless disclosed. AI contract review is the lawyer's work product; the work-product doctrine protects the analysis. Whether to disclose is a tactical and ethical question rather than a regulatory one in 2026. The trend in mediation and negotiation is to disclose AI use voluntarily where it would otherwise look like the firm did unusually fast or unusually thorough review; outside of mediation, disclosure is optional.
Citations and further reading #
- IXSOR: AI vendor diligence catalogue — the six-observation framework for vendor evaluation.
- IXSOR Resources: AI Vendor Privacy Policy Analyzer — the prompt that operationalises the redline list.
- IXSOR Resources: Firm AI Policy Generator — the policy that operationalises the tool-by-task matrix.
- ABA Formal Opinion 512.
- United States v. Heppner / Warner v. Gilbarco / Tremblay v. OpenAI.
- IXSOR: Legal Practice Management Software 2026 buyer’s guide — companion analysis for adjacent procurement.
- IXSOR: AI in Corporate & Transactional Practice (forthcoming) — practice-specific implementation case study.
