AI for lawyers.
A practical map of legal AI in 2026: what the category actually covers, what small firms are using, the five ABA Model Rules duties that govern it, and how to evaluate the vendor landscape without getting sold.
The state of the question.
AI is now a category of legal-practice tooling, not a thought experiment. Most surveys put generative-AI use among practicing attorneys somewhere between sixty and ninety percent, depending on how the question is framed and whether ChatGPT-on-a-personal-account counts. Both the American Bar Association and a growing number of state bars have issued formal ethics opinions on the subject. The technology has cleared the curiosity threshold and entered the workflow.
What it has not cleared is the operational threshold. The same surveys consistently show that the share of firms with a written AI policy is a small fraction of the share using AI. Firms have a tool footprint without a governance footprint. Solo and small firms, the practitioners who run most active matters in this country, are where that gap is widest, because vendor sales and BigLaw consultancies have spent the last two years optimising for buyers with seven-figure budgets.
This piece is a working map. It does not rank tools. It does not pitch ours. It tells you what the category covers, what the regulatory layer requires, and what an honest implementation looks like, at the size of practice that probably reads it.
What "AI for lawyers" actually covers.
The phrase has flattened into marketing shorthand. Operationally, it splits into six different categories of tooling, with different ethical surface area and different vendor economics.
Research and case-law lookup. Westlaw Precision, Lexis+ AI, Thomson Reuters CoCounsel, Vincent AI, and Harvey (in a different price tier) sit here. They retrieve and summarise authority, draft research memos, and flag opposing cases. Subscription-priced. Outputs are constrained because the inputs are constrained: you are searching a curated corpus, not the open internet.
Drafting. Spellbook, Lexis Drafting, Microsoft Word Copilot, Clio Duo, and a long tail of contract-specific drafters. They generate first drafts, suggest clause replacements, redline against templates. Highest variance in quality across the category.
Document review. Reveal, DISCO AI, Relativity aiR, and Everlaw categorise, prioritise, and summarise large document sets. Discovery-volume work. Long-established as a category; the AI in 2026 is meaningfully better than the predictive coding of 2014, but the workflow is similar.
Intake, triage, and client communication. Smith.ai, Lex Reception, Clio Intake, intake-specific GPT integrations. Triage incoming inquiries, schedule consultations, draft initial response letters. The most consumer-facing slice of legal AI; the hardest to get past privilege analysis.
Practice management with AI features. Clio Duo, MyCase Sora, Filevine ProjectAI, PracticePanther's recent additions. The "AI" here is bolted onto the matter-management platform you already pay for. Lower marginal cost; integrates with your existing data; capabilities lag the standalone tools.
General-purpose models used by lawyers off-platform. ChatGPT, Claude, Gemini, Microsoft Copilot. The fastest-growing slice. Not legal-specific. Free or nearly free. The site of the most contested ethics questions, because attorneys are pasting privileged work product into consumer tools without thinking carefully about the terms of service.
What small firms are actually using.
The vendor map and the practice-floor reality are not the same map. In a typical fifteen-attorney general-civil practice today, the AI footprint usually looks like this:
- ChatGPT or Claude on personal subscriptions, used for drafting, summarising opposing motions, brainstorming arguments, and writing client-facing correspondence.
- Microsoft Word Copilot if the firm is on Microsoft 365, used by the staff who already had Word open.
- Otter, Fathom, or another transcription tool for hearings, client calls, and depositions where the rules permit.
- Maybe one legal-specific tool, usually Spellbook, sometimes CoCounsel, used by one or two power-user attorneys; uneven adoption across the rest of the firm.
- Increasingly, an AI feature inside Clio or MyCase if the firm already runs that platform.
What is conspicuously absent: a written policy describing which of these tools is approved for which kinds of work, what data may be entered, what disclosure is required to clients, and what the partner-level review obligation is. The firm has the tool footprint; the firm has not done the operational work.
The ethics layer, five duties.
The American Bar Association's Formal Opinion 512 (July 2024) is the baseline Model Rules statement; the Model Rules are not federal law, but they are the template most state rules follow. As of mid-2026, more than a dozen state bars have issued their own formal opinions, ranging from the North Carolina State Bar's 2024 Formal Ethics Opinion 1 to opinions in California, Florida, New York, and others. The state opinions interpret the same Model Rules through a state-specific lens; the obligations are functionally similar.
Five duties recur across every opinion:
Competence (Model Rule 1.1). An attorney who uses an AI tool must understand what it does, what it does not do, and where it fails. Comment 8 to MR 1.1 already required attorneys to keep abreast of the benefits and risks of relevant technology; AI is now squarely within that obligation. "I did not know it could hallucinate" is not a defence in 2026.
Confidentiality (Model Rule 1.6). The most operationally consequential duty. Inputs to an AI tool can constitute disclosure. Pasting a client's privileged document into a tool whose terms of service permit training on inputs is, in many readings, a breach. The fix is not to ban AI; it is to choose tools whose contracts disclaim training rights, configure them correctly, and write a policy describing which data classes may be entered into which tools.
Candor and supervision (Model Rules 3.3, 5.1, 5.3). AI work product must be supervised the way a paralegal's work product must be supervised. Some courts now require disclosure of AI use in filings; some do not. Disclosure to clients is increasingly required by state opinions. The supervision duty is constant: the lawyer signing the brief is responsible for every citation in it, regardless of which tool generated the first draft.
Billing (Model Rule 1.5). The least settled. If a research memo that took six hours to write last year takes one hour with AI, who keeps the five hours of efficiency, the firm or the client? Several state opinions have begun to address this directly; the answer trends toward "the client gets the benefit, but the firm is not obliged to bill at zero." The honest answer for a fixed-fee practice is that AI does not change the billing analysis at all.
Reasonable fees and unauthorised practice (Model Rules 1.5, 5.5). A subset of the billing question. Charging clients for AI-generated work as if it were attorney work, when the supervision was inadequate, can become a reasonable-fees problem. Outsourcing legal work to AI without competent supervision can become a UPL problem.
The tractable form of the ethics layer is a one-page policy that maps each duty to a concrete operational practice: what the firm does, who decides, and what gets documented. The unwritten policy is the malpractice exposure.
The vendor landscape.
Three patterns repeat across the legal-AI vendor market in 2026.
Pattern one: the legal-specific premium tools (Harvey, CoCounsel, Vincent, Lexis+ AI, Westlaw Precision). High accuracy, narrow scope, $400–$2,500 per user per month. They sell mostly into mid-sized and large firms, because the cost per seat is incompatible with most solo and small-firm budgets. Worth evaluating when you have a single high-volume workflow that justifies the seat cost.
Pattern two: the practice-management bundles (Clio Duo, MyCase Sora, Filevine, PracticePanther). The AI is included with, or modestly upcharged against, the case-management subscription you already pay. The capabilities are shallower than the premium tools but the integration with the data your firm has already entered is the highest-leverage feature on the market for small-firm practice.
Pattern three: the general-purpose models (ChatGPT Team, Claude Pro, Microsoft Copilot, Google Gemini Workspace). $20–$30 per user per month. Capable enough to do the long tail of drafting, summarising, brainstorming, and editing work that legal-specific tools either don't cover or cover at five times the price. Configured correctly (Team or Workspace plans whose terms disclaim training, deployed under a written policy), they are often the highest-ROI AI investment a small firm can make. Configured wrongly (personal accounts, no policy), they are the highest-risk.
The vendor evaluation question that matters more than any tool comparison is whether the contract gives away your inputs. Read the data-use clause; ignore the marketing site.
What stops adoption.
The obstacles are not the ones the vendors talk about.
Cost is real but not central. The premium tools are out of reach for most small firms; the general-purpose tools are not. Most firms can afford something useful.
Trust, after Mata v. Avianca and the half-dozen successor cases, is real and central. Hallucinated citations have been sanctioned in federal and state courts; the sanctions have been substantial; and the underlying cause is always the same: an attorney trusted AI output without verifying it. The rational response is not to avoid AI; it is to verify.
Integration is the biggest under-discussed obstacle. The premium tools rarely talk to the practice-management platform; the practice-management platform's AI is shallow; the general-purpose tools have no integration at all. So workflows end up glued together with spreadsheets and copy-paste, and that glue decays.
Training is the obstacle most firms eventually trip on. The partner buys the tool; the partner finds it useful; the staff are not given the time, the prompts, or the workflow to use it; six months later the seat licence is revoked because nobody is using it. The lesson is that any tool decision is also a training decision and the training is not optional.
What good implementation looks like.
The pattern that consistently produces good outcomes at solo and small firms, across our own engagements and the work of others in this space, has five steps. None of them are about the tool.
One. Pick one workflow. Not "we are going to do AI." A specific workflow: first-draft demand letters, deposition summaries, intake triage, or opposing-motion summarisation. One. Defined by who does it now, how long it takes, and what bad looks like.
Two. Pick the tool that fits the workflow. Not the tool the partner saw on a podcast. The tool with the lowest friction at the workflow's specific shape. Sometimes that is a $20-per-month general-purpose model with a written prompt template. Sometimes it is a $400-per-month legal-specific tool. The match matters more than the brand.
Three. Write the policy before the pilot. One page, two at most. Mapped to the five Model Rules duties above. Names the approved tool, the approved data classes, the disclosure obligation, the supervision standard, the audit cadence. Written before the first matter touches the tool, not after.
Four. Train the staff who will use it. The partner does not need the most training. The associate or paralegal who will operate the workflow needs the training. Forty-five minutes, hands-on, with the actual tool, with sample matters that resemble the firm's actual matters.
Five. Audit at sixty days, then quarterly. Read the AI's output across a sample of matters. Compare against the policy. Identify drift. Update the policy. Repeat.
This pattern is unglamorous. It is also the difference between the firms that get value from AI in 2026 and the firms that have a tool subscription and a malpractice carrier with new questions on the renewal.
Where IXSOR fits.
This piece is a working map, not a pitch. But the firms for which IXSOR is useful share a profile: solo through about fifty attorneys; some AI use already happening, often with no governance around it; aware of the ethics opinions but uncertain how to operationalise them; uninterested in a six-figure consulting engagement and unwilling to receive vendor-funded recommendations.
Our engagements are fixed-fee, vendor-agnostic, and aligned to the ABA Model Rules and the December 2025 Task Force second report. We do not take vendor commissions, referral fees, or implementation kickbacks. We do not retain the policies we write; they are yours.
If the working map above accurately describes the gap your firm is in, the capabilities page describes how we close it, and the contact page is where a sixty-minute initial call begins.
This is general industry analysis aimed at practising attorneys. It is not legal advice. Engage qualified counsel for advice on your firm's specific situation.
Sources and further reading.
Primary:
- American Bar Association, Formal Opinion 512 (July 2024). The baseline Model Rules statement.
- ABA Task Force on Law and Artificial Intelligence, Second Report (December 2025).
- North Carolina State Bar, 2024 Formal Ethics Opinion 1.
- Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023), and successor cases on AI-hallucination sanctions.
State bar opinions and guidance (selection):
- California State Bar, Practical Guidance for the Use of Generative AI in the Practice of Law (November 2023).
- Florida Bar, Ethics Opinion 24-1 (January 2024).
- New York State Bar Association, Task Force Report (April 2024).
Secondary:
- Lawyers Mutual NC, guidance on safe and ethical AI use in NC practice.
- NC Bar Association, practice-policy commentary (2026).
About the author.
Dan Hughes is the founder of IXSOR. Ex-BBC. Ex-Apple. Lifelong technologist. And most importantly: not an attorney. He writes about legal AI from the operational and infrastructure side, where the rules meet the machines. Reach: [email protected].