[ INSIGHTS, CIVIL RIGHTS, DRAFT ]

What counts as consent?

The 2026 federal AI cases are usually read as bar-rules questions about how lawyers should use generative AI. They are bigger than that. The reasoning each opinion uses rests on a theory of consent that, generalised, would unwind boundaries every other consent doctrine in U.S. law has spent two centuries developing. A civil-rights reading of Heppner, Warner, and Tremblay.

AUTHOR: Dan Hughes
FILED: May 2026
FORMAT: Civil-rights essay, draft
JURIS.: U.S.
READING: ~28 minutes
· 01 ·

The cases announce a theory of consent.

Three federal opinions in the first quarter of 2026 are usually read as cases about lawyers using generative AI. United States v. Heppner, S.D.N.Y., February 17, 2026, held that a defendant's months-long use of Anthropic's Claude defeated both attorney-client privilege and work-product protection over the resulting AI logs. Warner v. Gilbarco, E.D. Mich., February 10, 2026, held the opposite for a pro se litigant's use of ChatGPT, on the basis that "ChatGPT (and other generative AI programs) are tools, not persons." Tremblay v. OpenAI, N.D. Cal., August 2024, held that attorney-crafted prompts to ChatGPT were opinion work product.

That surface reading is too narrow. The reasoning each opinion uses to anchor its result rests on a theory of consent that, taken at its word, would touch every consent doctrine in U.S. law: Fourth Amendment search consent, Miranda waiver, contractual capacity and unconscionability, the third-party doctrine, common-law privilege waiver, and ordinary tort and property consent. The cases are not just bar-rules questions. They are early data points in a civil-rights problem about what "I agree" means in an economy where every interaction with a smart device is governed by a click-through to terms permitting broad use.

This piece reads the 2026 cases as civil-rights cases. The lawyerly applications under Rule 1.6 follow from the broader consent question the cases raise; the applications are not its source. The broader consent question is what is actually moving.

The argument in short: each of the three cases relies on a model of consent under which one click-through agreement at sign-up produces a permanent, blanket, stack-cascading, third-party-attributable waiver covering every future submission, every device in the chain, and every category of data the contract reaches. No other consent doctrine in U.S. law works this way. The 2026 opinions do not articulate why the AI vendor relationship warrants a model of consent that the rest of the doctrine would refuse to recognise. That gap is what this piece presses.

This is the doctrinal companion to IXSOR's two-layer confidentiality piece, which presents the cases at face value. This piece reads them at altitude.

· 02 ·

The concerns the cases are pointing at.

Before the reasoning is pressed, the legitimate concerns deserve direct articulation. The 2026 opinions are not pulling worry out of nothing. There are real confidentiality risks when client-matter content goes into a third-party AI vendor's platform. Five categories matter most.

Vendor-side human review. Even when the AI model itself does not "memorise" individual prompts in the way the courts seem to assume, the vendor's pipeline includes humans. Engineers debug. Trust-and-safety teams review flagged content. Quality reviewers sample interactions. Any of those touchpoints is, in fact, a third-party human reading the lawyer's prompt. The privacy policy authorises this, often broadly. This is a real exposure that does not depend on any model-training analysis.

Retained logs subject to legal process. Vendors retain prompt-and-output logs. Those logs are subpoenable. They have been subpoenaed. The Heppner facts are, in their bones, this: a prosecutor obtained AI logs through a search warrant on the defendant's accounts, and used the contents against him. The retention and subpoena-availability of the logs, not anything about the model's training, is what made the disclosure operative.

Contractual drift. Privacy policies update. Service-improvement clauses are read to authorise new uses. A vendor that did not train on customer data in 2024 may train on it in 2026 by issuing a privacy-policy update users are opted into by default. The lawyer who signed up under one regime is operating under another, often without notice. The downstream uses of the data the lawyer originally submitted are not fixed at sign-up.

Asymmetric bargaining. Click-through SaaS contracts are not negotiable for solo and small firms. The vendor sets the terms, the lawyer accepts or declines, the lawyer's only practical leverage is choice of vendor. The contract that governs the data flow is shaped by the vendor's commercial incentives, which evolve, and over which the lawyer has no continuing influence.

Ambient erosion of the confidentiality baseline. Even if no individual prompt amounts to disclosure in any meaningful sense, the regime of pervasive prompt submission shifts what "confidentiality" means as an operating concept. A profession that routes substantive client-matter content into third-party AI products at the bar's current rate operates with a different working definition of "confidential" than the one the Model Rules contemplated when they were drafted. That shift may be defensible. It is, at minimum, real.

These are the concerns. They are not trivial. The question this piece presses is not whether the concerns exist, but whether the 2026 opinions have articulated those concerns well enough to support the doctrinal weight they place on them.

· 03 ·

The cases' specific reasoning, line by line.

The five concerns above are the substantive ground. The 2026 opinions do not, in fact, rest on those concerns directly. They rest on a more abstract claim: that AI use, as a category, is third-party disclosure. The specific reasoning is worth reading closely.

Heppner proceeds in two moves. First, the court concludes that "Heppner does not, and indeed could not, maintain that Claude is an attorney. Because Claude is not an attorney, that alone disposes of Heppner's claim of privilege." Second, the court reads Anthropic's privacy policy and concludes that the policy's authorisation of use, training, and disclosure to "third parties including governmental regulatory authorities" defeats any "reasonable expectation of confidentiality." Voluntary submission to a vendor whose published terms permit disclosure, the court holds, is the kind of disclosure that destroys both privilege and work product.

The reasoning is internally tidy. It is also doing two things at once. The first move, Claude is not an attorney, addresses the privileged-relationship element of the analysis. The second, the privacy policy permits broad use, addresses the confidentiality element. The opinion treats these as separate findings. They are. But the second finding, which does most of the doctrinal work, rests on an unexamined premise: that submitting a prompt to a vendor whose privacy policy permits broad use is operatively equivalent to disclosing the content to a third party.

That premise is what the rest of this piece tests.

Warner reaches the opposite outcome by treating the same submission differently. "ChatGPT (and other generative AI programs) are tools, not persons, even if they may have administrators somewhere in the background." Submitting prompts is not, on this reasoning, disclosure to "an adversary or in a way likely to get in an adversary's hand." The vendor exists, the administrators exist, the privacy policy exists, but the submission is not the kind of disclosure that triggers waiver.

Heppner and Warner cannot both be right. Or rather, they can both be right only if the operative variable is something other than the bare fact of vendor submission, the privacy-policy contents, or the categorical "AI" label, all of which are functionally similar in both cases. The cases differ on facts (criminal vs civil, government acquisition vs civil discovery, individual user vs counsel-directed use), but the rule each announces would, if generalised, contradict the other.

Tremblay takes a third path. The Tremblay court does not hold that AI use defeats protection or that AI use preserves it. The court holds that the prompts themselves, as crafted by counsel, are opinion work product because they reflect the attorney's mental impressions. The vendor disclosure issue is bracketed. The court protects the prompts based on their authorship and content, not based on their downstream handling.

Tremblay is the most doctrinally cautious of the three. It is also the narrowest. The reasoning extends straightforwardly only to attorney-crafted prompts in counsel-directed AI use. It does not address client-driven AI use (Warner) or post-warrant log review (Heppner), and it does not propose a rule for those.

Three opinions. Three different rules. One shared and unexamined premise: that "AI use" is a coherent category that warrants its own doctrinal analysis distinct from ordinary software use. That premise is doing more work than any of the three opinions actually defends.

· 04 ·

The twenty-year telemetry parallel none of the cases address.

For two decades, U.S. lawyers have used Microsoft Word, Outlook, the rest of Office, macOS, Windows, web browsers, email clients, cloud storage, and practice management software. Each of those products collects telemetry, usage data, click patterns, error reports, and other forms of customer-data-driven product-improvement signal. Each vendor's privacy policy authorises this collection. Each vendor retains the data. Each vendor's policy permits disclosure to third parties under various circumstances, including law enforcement and regulatory authorities. Each vendor has, at various points, used aggregated customer data to develop and improve its products.

The bar has not, historically, treated this as a Rule 1.6 problem. Lawyers draft pleadings in Word, send privileged emails through Outlook, store client files in OneDrive or Google Drive, run cloud-hosted practice management systems with vendor support staff who have system access. The substantive content of countless client matters has flowed through countless third-party vendor pipelines for the entire career of the bar's senior partners.

The implicit reasoning has been: aggregated, de-identified telemetry is a different category from substantive content, and the vendors do not "see" the substantive content in the way that would trigger Rule 1.6 concerns. That reasoning has held up well enough for twenty years that the bar's de-facto position is that ordinary cloud-software use is permissible without per-matter client consent.

The 2026 AI cases announce, in effect, that AI use is different. The opinions do not articulate what makes it different. None of the three opinions distinguishes the privacy policy of Anthropic or OpenAI from the privacy policy of Microsoft 365, Google Workspace, or any major cloud-based practice management product. None of them addresses how the vendor's mechanical handling of an AI prompt differs from the vendor's mechanical handling of an emailed brief draft. Each of them simply treats AI use as a separate category and proceeds to analyse it on its own terms.

This is the gap. The 2026 cases announce a categorical line without articulating what justifies it. If the bar is willing to live with twenty years of cloud-software use that mechanically resembles AI use in most relevant respects, the doctrinal question is what specifically about AI makes it land differently. The cases do not answer.

· 05 ·

What "training" actually does to a single prompt.

One way to test the categorical line is to look at what AI training actually does, mechanically, to the data the lawyer submits. The 2026 opinions speak in generalities: the data is "used to train AI models," the user has "no reasonable expectation of confidentiality" in submission, the model "learns from" the prompt. The actual mechanics deserve more attention than the opinions give them.

A frontier-class generative model, of the kind Anthropic's Claude or OpenAI's ChatGPT runs on, is trained on a corpus on the order of ten to fifteen trillion tokens. A single client-matter prompt of typical length runs perhaps one thousand tokens. If included in training, that prompt represents one part in ten to fifteen billion of the input. The contribution to any individual model parameter is, in expectation, statistical noise. The model does not memorise the prompt; it adjusts its statistical distribution over the entire token vocabulary by an amount indistinguishable from the contribution of any other thousand-token chunk.
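
The arithmetic is worth making concrete. A minimal sketch of the dilution, in Python, using the order-of-magnitude figures above (ten to fifteen trillion training tokens, a thousand-token prompt) as assumptions rather than any vendor-published number:

```python
# Order-of-magnitude dilution of one prompt in a frontier-scale training
# corpus. Both figures are this essay's assumptions, not vendor data.
corpus_low, corpus_high = 10e12, 15e12   # 10-15 trillion training tokens
prompt_tokens = 1_000                    # a typical client-matter prompt

for corpus in (corpus_low, corpus_high):
    share = prompt_tokens / corpus
    print(f"prompt share of training input: {share:.2e}")
# -> 1.00e-10 and 6.67e-11: one part in ten billion at the low end,
#    one part in fifteen billion at the high end.
```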

Verbatim memorisation does occur in narrow circumstances: very short or highly distinctive sequences, content that appears repeatedly in the training set, or content with highly idiosyncratic markers. Membership inference attacks, the technical name for the family of techniques that attempt to determine whether a given piece of data was in the training set, exist and have been demonstrated, but they are difficult, low-yield, and have become harder over time as training methods have evolved. A practitioner trying to extract a competitor firm's specific Heppner-style prompt from a deployed model would face long odds and high cost.
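
The shape of the simplest such attack, a loss-threshold test, can be sketched in a few lines. Everything below is illustrative: `loss_fn` is a hypothetical stand-in for an actual model query returning the model's average per-token loss on a text, and the reference texts, numbers, and cutoff are invented for the demo; real attacks require far more careful calibration and still face the long odds just described.

```python
from statistics import mean, stdev
from typing import Callable

def infer_membership(loss_fn: Callable[[str], float],
                     candidate: str,
                     references: list[str],
                     z_cutoff: float = -2.0) -> bool:
    """Flag `candidate` as a possible training-set member when the model's
    loss on it is anomalously low relative to comparable non-member text."""
    ref_losses = [loss_fn(t) for t in references]
    mu, sigma = mean(ref_losses), stdev(ref_losses)
    z = (loss_fn(candidate) - mu) / sigma
    return z < z_cutoff  # unusually low loss suggests memorisation

# Toy stand-in for a real model query; every number is illustrative.
losses = {"ref A": 7.9, "ref B": 8.3, "ref C": 8.1, "candidate text": 2.2}
print(infer_membership(losses.get, "candidate text", ["ref A", "ref B", "ref C"]))
# -> True: the candidate's loss sits far below the reference band.
```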

None of this is to say that putting a prompt into a training set is harmless. The other concerns named in section 02 above remain: vendor-side human review, log retention, contractual drift. Those concerns are real and operate independently of model-training mechanics. The argument here is narrower: the specific concern that "the prompt becomes part of the AI" is, in mechanical terms, much closer to "the prompt becomes one part in ten billion of an aggregate statistical pattern" than to "the prompt is reproduced and accessible." The 2026 opinions speak as if the latter were the case. The actual mechanics do not match.

This matters for the doctrine because the analogy each opinion implicitly draws, between AI submission and verbatim third-party disclosure, breaks down on the mechanical end. If the doctrine wants to capture the real concerns (vendor pipeline, retained logs, contractual drift), it needs to name those concerns directly. Using "AI training" as a proxy for "third-party disclosure" misframes the underlying activity in a way that will not survive sustained technical scrutiny.

· 06 ·

Where the categorical line actually wobbles.

Even if the AI / non-AI line could be cleanly drawn in 2024, the line has eroded substantially in the two years since. Three forces, each documented in current vendor practice, cut against the line as a stable doctrinal anchor.

Hybrid uses already exist. A user's interaction with ChatGPT generates UX telemetry, which is itself simultaneously product-usage data and AI-context data. Microsoft Copilot's interaction logs are simultaneously Office telemetry and generative-AI training context. Google Gemini in Workspace is the same. The category labels do not separate cleanly when the product is itself an AI, or when the product wraps an AI. A doctrine that depends on distinguishing "AI use" from "ordinary software use" will struggle to draw the line in any product released after 2024.

Retroactive recategorisation is real. Practice management software, word processors, and operating systems that collected usage data for product improvement at sign-up time can be, and have been, redirected to train the vendor's LLM on the same contractual authority. The user signed up for telemetry. The vendor's downstream use of that telemetry is now generative-AI training. The pattern has shown up at multiple major vendors over the last two years: a service-improvement clause that predated the AI feature is later read as authorising customer-data use to train a new AI capability, frequently with opt-out rather than opt-in consent. The lawyer who signed up under the old regime is operating under the new one without ever actively agreeing to it.

Service-improvement clauses authorise the bridge by design. Most modern privacy policies grant rights to use customer data for "service improvement" or "developing new features," language broad enough to cover both legacy telemetry and emerging AI training, without a clear contractual moment when one category became the other. The contract is not asking the user to choose between regimes. It is reserving the option to use the data under either regime, at the vendor's discretion, indefinitely.

If the doctrinal line is "AI use is different from ordinary software use," each of these three forces erodes the line where it is most needed. Hybrid uses make the line illegible at the product level. Retroactive recategorisation makes the line untrackable across time. Service-improvement clauses make the line unenforceable contractually. A doctrine that depends on this line as a stable anchor is asking practitioners to manage a category that vendors have already begun to dissolve.

· 07 ·

Running Heppner's logic through the line.

Take Heppner's actual reasoning seriously and extend it. The court holds that a user submitting prompts to Claude has, on the privacy policy's face, no "reasonable expectation of confidentiality," because the policy permits collection, training use, and third-party disclosure including to "governmental regulatory authorities." Voluntary submission under those terms is the kind of disclosure that defeats privilege and work product.

Now apply that reasoning to the lawyer's other vendor relationships.

Microsoft 365's privacy policy permits collection of customer data, use for service development and improvement (which now explicitly includes AI feature development under Microsoft's published terms), and disclosure to "regulatory or governmental authorities." A defendant whose draft motion was composed in Word and stored in OneDrive has, on Heppner's reasoning, no "reasonable expectation of confidentiality" in that draft as against Microsoft.

Google Workspace's privacy policy permits collection, use for service improvement (now also including AI feature development under Google's Gemini terms), and disclosure to government on lawful request. The defendant who drafts in Docs is, on Heppner's reasoning, in the same position vis-à-vis Google as the Heppner defendant was vis-à-vis Anthropic.

Apple iCloud's privacy policy permits the same broad categories of collection, use, and disclosure. A defendant whose iMessage with counsel synced through iCloud has, on Heppner's reasoning, voluntarily submitted that message to a vendor under terms that permit broad use and disclosure.

[ EX. 1 ] Apple iCloud Terms of Service, § III, showing the broad license users grant Apple over content, operative phrase highlighted · apple.com/legal/internet-services/icloud/us-en/terms.html

Twenty years of Word, Workspace, and iCloud use has plainly not been treated as a Rule 1.6 violation. No reported opinion holds that defendants lose privilege over draft motions because they typed them in cloud-hosted software. The settled operating practice of the bar is irreconcilable with that result, even if no opinion has formally said so. Heppner's reasoning, taken at its word, would imply otherwise.

One obvious response is that Heppner is a criminal case, and the others would be civil. That difference is real and matters, but it is not in fact what Heppner anchors on. Heppner anchors on the privacy policy and the categorical claim that AI use defeats the confidentiality element of privilege. Both of those features apply identically to Microsoft 365, Google Workspace, and iCloud as they do to Anthropic. If the operative variable is the criminal-investigation posture, then a more careful opinion would have said so, narrowed the rule to that posture, and left civil discovery for a later case. Heppner does not do this. It announces a broader rule and leaves the criminal/civil line for the next court to pull.

There is a third problem with Heppner's reasoning that neither the cross-vendor problem nor the criminal/civil response addresses. The opinion treats the user's agreement to Anthropic's privacy policy at sign-up as a permanent, blanket waiver: every prompt submitted under that policy, forever after, is conducted under a regime in which a reasonable expectation of confidentiality has been waived in advance. No other consent doctrine in U.S. law works this way. Fourth Amendment consent to search is bounded in scope and time, withdrawable, and limited to the specific intrusion consented to. Miranda waiver is event-specific; a defendant who answers one question can re-invoke the right to counsel before the next, and the police must stop. Subject-matter waiver of privilege is topic-specific, not blanket. Implied consent in tort and property is bounded by what the specific intrusion fairly contemplated. None of these doctrines treats one act of consent as foreclosing all future re-invocations of the underlying right.

Heppner's reasoning, by contrast, reads the click-through privacy policy as having precisely that scope. One agreement, all submissions, no event-specific re-invocation, no scope limitation, no temporal boundary. The court does not name what about an AI vendor's privacy policy makes it different in this respect from any other agreement to terms in U.S. law, and the doctrinal cost of treating consent this way, if generalised, is dramatic. Going outside today does not waive a right to privacy when one comes back inside. Answering one question before asking for counsel does not foreclose the right to counsel in every later interrogation. Disclosing one privileged communication on a topic does not collapse privilege over every communication on every topic. Heppner reaches the analogous result in the AI context without explaining why the AI context warrants what no other context does.

The consent-scope problem deepens further when two related questions are asked: whose consent, and consent to what. Take the first. The lawyer using Anthropic's Claude under a firm-purchased enterprise license did not personally click through Anthropic's consumer terms; the firm's IT administrator did. The lawyer using a shared workstation did not configure it. The lawyer using an operating system that shipped with built-in AI features did not opt into AI specifically; the OS vendor's terms were accepted at device setup, possibly by a different person, possibly years before the AI features existed. Heppner's reasoning treats all of these as voluntary submissions by the lawyer to a vendor whose terms permit broad use. The actual chain of consent, in many cases, runs through other actors who are not the lawyer and do not carry the lawyer's confidentiality obligations.

Now the second. The lawyer's keyboard has terms. The mouse has terms. The OS has an EULA. Every installed application has terms. The network card has firmware terms. The router has terms. The ISP has terms. Each authorises some category of vendor data collection, use, and disclosure. If submitting client-matter content under one privacy policy that permits broad use is what defeats Rule 1.6 protection, then under Heppner's reasoning the lawyer who merely typed the submission on a keyboard had stacked consent across a dozen contracts before any of the content reached the AI vendor at all. The doctrine produces cascading waivers across every layer of the modern technology stack.

The aggregation problem extends across devices. Documents printed at home generate printer telemetry, which the printer manufacturer's privacy policy authorises collecting and using. That telemetry crosses to the manufacturer's cloud, then surfaces on the user's phone as an ink-cartridge refill alert, an aggregation enabled by three separate contracts: the printer's terms accepted at setup, the phone's terms accepted at activation, and the notification service's terms accepted at app install. Under Heppner's reasoning, has the user waived privacy in what was printed at home, because the contracts authorise the cross-device aggregation? The reasoning gives no principled answer. The user's only signal that any of this is happening is the notification itself, which arrives after the data has already moved across vendors. "Voluntary submission under terms permitting broad use" is doing very different work in this fact pattern than the same phrase did when applied to a Heppner-style direct AI prompt.

[ EX. 2 ] HP Printer Data Collection Notice, showing printer telemetry collection, operative phrase highlighted · hpsmart.com/plain/printer-data-collection-notice

The pattern is not specific to printers. A smart refrigerator that texts the user when the milk runs low has aggregated data about the contents of the fridge, transmitted that data across vendors, and surfaced it on a different device, all under a chain of contracts none of which specifically anticipated "monitor my refrigerator inventory." Smart watches transmit heart-rate readings to a phone app, smart speakers log ambient sound to refine voice recognition, smart locks report entry timestamps to a cloud, smart cars report cabin telemetry to the manufacturer's connected-services platform: every IoT category produces the same structural pattern of cross-device aggregation under stacked consent. Heppner's reasoning, applied consistently, would have to address whether the user has waived privacy in any of these data streams as a result of the contracts that enabled them. It does not. It cannot, on its own logic, articulate why the AI vendor's privacy policy is doctrinally distinguishable from a smart refrigerator's.

[ EX. 3 ] Samsung SmartThings U.S. Privacy Notice, showing collection of refrigerator-door images recognised as food, operative phrase highlighted · eula.samsungiotcloud.com/legal/us/en/pps.html
[ EX. 4 ] Alexa, Echo Devices, and Your Privacy, showing voice interactions used to train Amazon's machine-learning models, operative phrase highlighted · amazon.com Alexa Privacy
[ EX. 5 ] Alexa Privacy, supervised-learning disclosure, showing humans review a sample of voice interactions, operative phrase highlighted · amazon.com Alexa Privacy

The smart watch is itself an instructive case in how the consent label, rather than the substance of the data, drives the regulatory regime. The continuous data a modern smart watch transmits, heart rate, heart-rate variability, blood oxygen, skin temperature, accelerometer-derived activity, electrodermal response on some models, is nearly the same physiological data set polygraph examiners use under the Employee Polygraph Protection Act. The polygraph use is heavily regulated, restricted by federal statute, often inadmissible in court, and the subject of decades of constitutional and statutory development around compelled physiological testimony. The smart-watch use is governed by a clickwrap agreement to terms permitting broad use, and watch-derived health data has been subpoenaed in homicide and insurance-fraud investigations. Same physiological data, two regulatory regimes, divided by little more than the label on the device. Heppner's reasoning, generalised to the smart-watch case, would say the user voluntarily submitted continuous physiological data to a vendor whose terms permit broad use, with no meaningful protection for the contents. Polygraph doctrine, applied to the same data set, would treat compelled access as a major statutory and constitutional question. The two doctrines cannot be reconciled, and the asymmetry is sustained only by the labels, not by anything substantive about what is being collected.

[ EX. 6 ] Fitbit Privacy Policy, biometric collection scope, showing the biometric data Fitbit collects, operative phrase highlighted · support.google.com/product-documentation/answer/14815921

Settled practice carves out something it does not name explicitly: ordinary use of standard tools, by parties operating in good faith under standard terms, does not constitute the kind of disclosure that defeats consent-protected rights. Heppner does not preserve this carve-out, does not explain why the AI vendor's privacy policy falls outside it, and does not articulate where in the technology stack the operative waiver is located. The opinion's logic, generalised, does not stop at Anthropic. The opinion does stop there, but only because the court declined to push further. The doctrinal foundation for that stopping point is absent.

Three doctrinal options follow. The first: Heppner's reasoning was overstated, and the actual operative variable in the case was the criminal posture, or the post-warrant log seizure, or the absence of counsel direction, rather than the broad "submission to vendor under privacy policy permitting broad use" rule the opinion announces. The second: twenty years of cloud-software tolerance has been a Rule 1.6 violation no one has named, and Heppner is the first court willing to draw the implication, in which case the implication should reach civil discovery as well as criminal investigations, third-party-installed software as well as user-installed software, and every layer of the technology stack as well as the AI layer specifically. The third: Heppner has reinvented consent doctrine specifically for AI vendors, treating one click-through as a permanent, blanket, and stack-cascading waiver in a way no other consent doctrine recognises, in which case the doctrinal foundation for that reinvention is missing from the opinion.

None of the three is comfortable. All three deserve to be considered before the broader rule announced in Heppner is treated as good law.

· 08 ·

Running Warner's logic through the line.

Warner's "tools, not persons" framing has the opposite problem. The reasoning, if taken at its word, is too broad in the protective direction.

Warner holds that submitting a prompt to ChatGPT is not waiver-triggering disclosure because "ChatGPT (and other generative AI programs) are tools, not persons, even if they may have administrators somewhere in the background." The vendor exists, the privacy policy exists, the administrators exist, but submission to a tool is not submission to a third party in the work-product-waiver sense.

Apply this reasoning to other vendor tools the lawyer uses. Microsoft Word is a tool, not a person. Submission to Word, in the sense of typing a draft into it, is not waiver-triggering. Outlook is a tool, not a person. Sending a privileged email through Outlook is, on Warner's reasoning, not waiver-triggering. Cloud storage is a tool, not a person. Storing a draft in Dropbox is, on Warner's reasoning, not waiver-triggering.

This conclusion is consistent with settled practice. Use of cloud-based tools has not been treated as waiver in any meaningful body of opinions. Warner's reasoning, applied broadly, would extend protection to ordinary cloud-software use in a way that is consistent with what practising lawyers have long assumed about how cloud tools and privilege interact. That is a feature, not a bug, of the Warner approach.

But Warner does not actually engage with the question of why "tools" deserve a different treatment from human third parties when both sit between the lawyer and the substance of the work product. The court's footnote on administrators ("even if they may have administrators somewhere in the background") gestures at the question and moves on. The administrators are real. They have access. They are humans operating under contracts that permit broad use of the data they administer. The Warner court does not address why the existence of these humans does not collapse the "tool" framing it relies on.

If the answer is that the administrators' access is sufficiently mediated, automated, or de minimis to not count as substantive review, that answer applies equally to Heppner. If it does not apply to Heppner, the administrators in the AI case must be different in kind from the administrators in the cloud-software case, and Warner needs to articulate the difference. It does not.

The Warner outcome may well be right. The reasoning Warner uses to reach it does not carry across to the AI use cases the next round of litigation will produce.

· 09 ·

The bar opinions face the same gap.

The doctrinal gap is not unique to the federal courts. The bar opinions, including ABA Formal Opinion 512 and North Carolina FEO 2024-1, treat AI as a discrete category warranting AI-specific guidance. None of them articulates the materially different element that justifies AI-specific guidance over guidance applicable to cloud-software use generally.

Op. 512 reads in places as if AI is a new species of vendor relationship. The substantive obligations it identifies, competence under Rule 1.1, confidentiality under Rule 1.6, supervision under Rule 5.3, communication under Rule 1.4, are not AI-specific. They are the obligations that have governed third-party software use for as long as the bar has used third-party software. Op. 512's contribution is in detailed application, not in articulating a categorical distinction.

NC FEO 2024-1 is more precise. It frames AI vendors as falling under Rule 5.3's nonlawyer-assistance regime, which is the right doctrinal anchor. But it also treats AI as a discrete category, on the implicit assumption that AI vendor use raises supervision questions in a way that ordinary cloud-software use does not. That implicit assumption is the same gap the federal courts left open.

The bar opinions face a downstream version of the same gap. They reflect the courts' framing rather than independently articulating what makes the AI vendor relationship materially different from twenty years of tolerated cloud-software use. They do better than the federal courts in framing obligations as continuous with existing rules rather than announcing AI use as defeating protection categorically, but they still inherit the underlying instability of the categorical line.

The deeper question, what consent under a click-through privacy policy actually means in a doctrinal sense, sits well outside the bar's jurisdiction. It is a civil-rights question. The bar's role is to apply the answers other doctrines provide, not to provide them itself. The next round of cases that test these issues will need to articulate explicitly what consent doctrine in the AI context borrows from, departs from, or rewrites of the existing consent jurisprudence developed under the Fourth Amendment, the Fifth, the contractual-capacity tradition, and the body of common-law consent across tort and property. The bar opinions cannot do that work, and should not be asked to.

· 10 ·

What better doctrine would look like.

If the categorical AI line is doing more work than the reasoning supports, what would more rigorous doctrine look like? Three principles, each compatible with the existing body of U.S. consent doctrine and with twenty years of tolerated cloud-software use.

Anchor the analysis on the actual mechanism of disclosure, not on the AI label. The operative concern in Heppner is not that Claude is an AI. It is that Anthropic retained the prompts in logs that were subpoenable. A doctrine that names log retention, vendor pipeline access, or contractual drift as the operative mechanism produces a rule that applies consistently across vendors, AI or not. It also produces a rule individuals and practitioners can act on: ask the vendor about retention; ask about pipeline access; ask about the actual conditions under which logs are produced.

Distinguish vendor pipeline access from training contribution. The vendor's pipeline of human reviewers, debuggers, and trust-and-safety responders is a real exposure. The model's training contribution from any individual prompt is not, in mechanical terms, a meaningful exposure. The doctrine should distinguish between these. Treating "the model trained on the prompt" and "an engineer read the prompt" as the same kind of disclosure obscures the actual variable that matters.

Treat consent as bounded the way every other consent doctrine treats it. A click-through privacy policy should not produce categorically different consent results in confidentiality doctrine, Fourth Amendment doctrine, Miranda doctrine, or contractual unconscionability doctrine just because the labels differ. Consent that is permanent, blanket, stack-cascading, and attributable to anyone in a chain of users is foreign to U.S. consent jurisprudence. The doctrine should treat consent in a click-through privacy policy as bounded in scope, withdrawable, event-specific, and limited to the specific consenting actor, the way it treats consent in every other context.

None of these principles requires abandoning the legitimate concerns that motivate the 2026 cases. They restate the concerns in terms that match the underlying mechanics, apply consistently across vendor types and consent regimes, and produce rules courts and citizens can both implement.

· 11 ·

Implications.

The doctrine is unsettled. The 2026 cases are recent. The next round of litigation will likely test the categorical line in cases the existing opinions did not anticipate, and the consent-doctrine drift this piece names extends well beyond the immediate AI context. While that plays out, the implications fall on three audiences.

For citizens generally. Every interaction with a smart device is now governed by a click-through agreement to terms permitting broad use. The 2026 cases announce, without defending, a model of consent under which one click-through covers all future submissions, every chained device in the technology stack, and every category of data the contracts reach. That model is not how consent works in any other corner of U.S. law. The doctrinal floor for what a click-through actually consents to is being moved. The drift can be absorbed silently or contested explicitly. The first step of contesting it is noticing it.

For practitioners. The bar-rules version of these implications appears in the IXSOR companion pieces on AI confidentiality and the practice-management buyer's guide. The shorter version: do not treat Heppner as foreclosing AI use generally; its broad reasoning, taken seriously, would reach further than is plainly tolerated in any other vendor context, and the criminal posture is the most plausible candidate for what the rule should have been narrowed to. Do not treat Warner as a safe harbour; its "tools, not persons" framing produces a comfortable result in its specific facts but proves more than is tolerated in any other vendor context. Build the operational practice around the underlying concerns identified in section 02 (vendor pipeline access, log retention, contractual drift), not around the AI label.

For regulators, civil-liberties advocates, and policy. The 2026 cases are not just AI cases. They are early data points in a doctrinal drift toward treating click-through consent as plenary, blanket, and stack-cascading. If that drift goes unchallenged, the consequences extend well beyond AI: it will reshape Fourth Amendment search consent in the IoT context, third-party doctrine in the smart-device context, contractual capacity in the clickwrap context, and the doctrinal floor for meaningful consent generally. The current moment, in which the cases announce a category without defending it and the reasoning is absorbed without being tested, is the worst of both worlds: the doctrinal change is happening without the public deliberation a change of this magnitude warrants.

The categorical line at the centre of 2026 AI doctrine is not a stable anchor. The next opinions will either articulate what makes AI materially different in a way the current opinions do not, or they will collapse the line and treat AI vendor use as continuous with cloud-software use generally. Either resolution is preferable to the current moment. Citizens, practitioners, and regulators should hold both possibilities open: that Heppner's reasoning will be narrowed by the next courts, or that it will be generalised across consent doctrine in ways that fundamentally change what "I agree" means in U.S. law. The narrowing reading is the more likely one, only because the alternative is so disruptive that the courts will eventually back away from the broader rule. That narrowing should be deliberate, not accidental.

Frequently asked questions.

Does using ChatGPT defeat attorney-client privilege?

The 2026 federal cases reach different answers. United States v. Heppner (S.D.N.Y., February 2026) held that a defendant's months-long Claude use defeated both privilege and work-product protection. Warner v. Gilbarco (E.D. Mich., February 2026) reached the opposite conclusion in a civil-discovery posture. The cases are not consistent, and the reasoning each uses to anchor its result rests on a model of consent that, generalised, would unwind every other consent doctrine in U.S. law.

What did United States v. Heppner hold?

Judge Rakoff held that because Claude is not an attorney, that alone disposed of the privilege claim, and that the user's submission to a vendor whose privacy policy permits broad use, including disclosure to governmental regulatory authorities, defeated any reasonable expectation of confidentiality for work-product purposes. The opinion announces a broad rule and does not narrow it to the criminal-investigation posture it arose in.

What did Warner v. Gilbarco hold?

Magistrate Judge Patti held that submitting prompts to ChatGPT was not a waiver-triggering disclosure because ChatGPT and other generative AI programs are tools, not persons, even if administrators exist somewhere in the background. Warner's reasoning produces a comfortable result on its specific facts (civil discovery, pro se litigant) but does not engage with how administrators access the data, which is essentially the same question Heppner answers differently.

What did Tremblay v. OpenAI hold?

Judge Martinez-Olguin held that attorney-crafted prompts to ChatGPT were opinion work product, queries crafted by counsel carrying counsel's mental impressions and opinions. The protection followed from the prompts' authorship and content rather than from any vendor-disclosure analysis. Tremblay is the most cautious of the three opinions and the narrowest.

How does click-through consent compare to Fourth Amendment search consent?

Fourth Amendment consent to search is bounded in scope and time, withdrawable, and limited to the specific intrusion consented to. Heppner's reasoning treats a click-through privacy policy at sign-up as a permanent, blanket, non-resettable waiver covering every future submission to the vendor, an approach no other consent doctrine in U.S. law recognises. Miranda waiver is event-specific. Subject-matter waiver of privilege is topic-specific. Implied consent in tort and property is bounded by the specific intrusion. Click-through consent under Heppner's reasoning is none of these things.

Is AI training data different from ordinary product telemetry?

Mechanically, less than the 2026 cases assume. A typical client-matter prompt of 1,000 tokens contributes roughly one part in 10-15 billion to a frontier model's training input. The model does not memorise the prompt; it adjusts statistical distributions. The real concern is not the model's training contribution but the vendor's pipeline (human reviewers, debuggers), retained logs subject to legal process, and contractual drift in privacy-policy terms over time. Those concerns operate independently of the AI label and apply equally to twenty years of cloud-software vendor relationships the bar has tolerated.

What is the consent-scope problem the article identifies?

Heppner reads the click-through privacy policy as having permanent, blanket, stack-cascading scope: one agreement at sign-up authorises every future submission, every device in the chain, every category of data the contracts reach. The user's keyboard has terms, the OS has terms, the network card has terms, the router has terms, the ISP has terms. Each authorises some category of vendor data collection. Cross-device aggregation extends across devices and vendors, with the user often discovering the data flow only when it surfaces as a notification on a different device. No other consent doctrine in U.S. law treats consent this way.

What should practitioners do given the unsettled doctrine?

Do not treat Heppner as foreclosing AI use generally; its broad reasoning would reach further than is plainly tolerated in any other vendor context, and the criminal posture is the most plausible candidate for what the rule should have been narrowed to. Do not treat Warner as a safe harbour; its tools-not-persons framing proves more than is tolerated in any other vendor context. Build the operational practice around the underlying concerns: vendor pipeline access, log retention, contractual drift. Read the privacy policy. Document the diligence.

What would better doctrine look like?

Three principles: anchor the analysis on the actual mechanism of disclosure (log retention, pipeline access, contractual drift) rather than the AI label; distinguish vendor pipeline access from training contribution, since the latter is mechanically negligible while the former is a real exposure; and treat consent in a click-through privacy policy the way every other consent doctrine treats consent: bounded in scope, withdrawable, event-specific, and limited to the specific consenting actor.

· 12 ·

Citations and further reading.

Cases discussed:

  • United States v. Heppner, No. 1:25-cr-00503 (S.D.N.Y. Feb. 17, 2026) (Rakoff, J.), memorandum on attorney-client privilege and work-product protection over AI logs.
  • Warner v. Gilbarco, Inc., No. 2:24-cv-12333, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026) (Patti, U.S.M.J.), order on motion to compel discovery of AI prompts and outputs.
  • Tremblay v. OpenAI, Inc., No. 23-cv-03223-AMO, 2024 WL 3748003 (N.D. Cal. Aug. 8, 2024) (Martinez-Olguin, J.), order on opinion work product over attorney-crafted prompts.

This article is a civil-rights essay on the consent-doctrine implications of three federal AI opinions. It is not legal advice, does not establish an attorney-client relationship, and does not predict how any specific court will rule on facts not before it. It is a draft, currently noindexed, circulating for comment before formal publication.

· AUTH ·

About the author.