Is ChatGPT confidential?
Lawyer use of generative AI raises confidentiality questions in two distinct layers. The first is whether the vendor sees and retains the data. The second is whether the use is discoverable in litigation. Three federal opinions, Tremblay (2024), Heppner (Feb. 2026), and Warner v. Gilbarco (Feb. 2026), set the doctrinal frame for both.
The two-layer question.
"Is ChatGPT confidential for legal work?" sounds like a single question. It is not. It is two questions wearing one phrase, and the analysis differs at each layer. A lawyer who answers only the first layer, and uses generative AI on that basis, is exposed at the second.
The first layer is vendor confidentiality: when a lawyer types a client matter into ChatGPT or Claude, what does the vendor see, retain, train on, and reserve the right to disclose? This is a contract-and-policy question, governed by terms of service, privacy policies, enterprise agreements, and the lawyer's own diligence under Model Rule 1.6.
The second layer is discoverability confidentiality: when AI use is later raised in litigation, are the prompts, the outputs, and the fact of the use itself protected by the attorney-client privilege or the work-product doctrine? This is a doctrinal question, governed by federal common law on privilege and Federal Rule of Civil Procedure 26(b)(3).
The two layers can fail independently. A lawyer using ChatGPT Enterprise (which is contractually tight at Layer 1) can still be ordered to produce her prompts in litigation if the Layer 2 analysis goes against her. A lawyer using consumer ChatGPT (which is loose at Layer 1) can still see her prompts protected as opinion work product if the Layer 2 analysis breaks her way. The two layers must both be cleared.
Layer 1: what the vendor sees.
Every major AI vendor publishes a privacy policy and a terms-of-service document that, taken together, define what happens to user inputs and outputs. The variation across tiers within a single vendor is large; the variation across vendors is larger; and the variation between consumer-grade products and enterprise products is the largest of all.
The doctrinal anchor at this layer is Model Rule 1.6, which obliges every lawyer to "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client." ABA Formal Opinion 512 applies that obligation to generative AI specifically: a lawyer must understand the vendor's data-use practices and verify that they meet the duty before submitting client information.
Verification at this layer means reading the actual contract, not the marketing copy. Three categories of vendor disclosure recur:
- Training use. Does the vendor use submitted prompts and generated outputs to train future models? Most consumer tiers say yes by default; most enterprise tiers say no by default.
- Retention. How long does the vendor retain prompts and outputs? Consumer tiers commonly retain indefinitely or until user-initiated deletion; enterprise tiers commonly retain for a defined window subject to deletion-on-request.
- Third-party disclosure. To whom may the vendor disclose user data, affiliates, subprocessors, governmental authorities, litigation adversaries? Consumer tiers reserve broad rights; enterprise tiers narrow them substantially.
Consumer ChatGPT: the default policies.
OpenAI's free tier and ChatGPT Plus operate under the same baseline. As of 2026, the published practices are: prompts and outputs are used to train future OpenAI models unless the user opts out via the data controls panel; conversations are retained for a default period and may be retained longer for safety review; OpenAI reserves the right to disclose data to "affiliates, service providers, professionals and other third parties" and to legal authorities under "applicable laws or in response to valid legal process."
None of this is unusual for a consumer cloud service. It is, however, a mismatch with Model Rule 1.6 if the user is submitting client-identifying or matter-confidential information. The "reasonable efforts" standard the Rule imposes is not satisfied by submitting privileged matter to a service whose default contract permits training, indefinite retention, and discretionary disclosure to "third parties."
This is the conclusion every state bar that has issued AI guidance has reached at the threshold: lawyers using consumer-tier generative AI for client matters do so against the grain of their confidentiality obligations unless they have separately arranged for stricter terms or have stripped the prompt of confidential content before submission.
ChatGPT Team, Enterprise, API: the contractual upgrades.
OpenAI's commercial tiers materially change Layer 1. ChatGPT Team and Enterprise contractually disable training on submitted data by default, narrow retention to a defined window, and impose stricter controls on third-party disclosure. The OpenAI API for developers operates under a separate Business Terms document with similar protections.
For Model Rule 1.6 purposes, the upgraded tiers move closer to the threshold the Rule requires. They do not eliminate the analysis: a lawyer must still verify that the specific tier subscribed to includes the protections, that the terms cannot be amended away without notice, and that the vendor's published security practices (SOC 2 Type II reports, ISO/IEC 27001 certifications, encryption-at-rest representations) match the contract. IXSOR's AI Vendor Diligence piece walks through the clauses worth redlining.
The upgraded tiers also do not eliminate the Layer 2 analysis. Even with a contractually airtight vendor relationship, the prompts and outputs are still communications with a third party. The work-product and privilege analysis at Layer 2 turns on different questions, addressed below.
Anthropic Claude, Google Gemini: parallel analysis.
The other major US AI vendors structure their tiers similarly. Anthropic offers consumer Claude (free and Pro) under one set of terms, Claude Team and Enterprise under another, and the Anthropic API under a third. Google's Gemini tiers parallel this structure: consumer Gemini, Gemini for Workspace (with the data-handling promises that flow from Google Workspace's existing enterprise terms), and the Gemini API under Google Cloud's terms.
The substantive variation between vendors at the consumer tier is small. All three reserve broad training rights, retention rights, and third-party disclosure rights as defaults. The variation at the enterprise tier is larger and turns on the specific contract negotiated; the same diligence questions apply across all three.
One detail worth flagging is the published-policy treatment of governmental disclosure. Anthropic's privacy policy, in the version operative through early 2026, expressly stated that Anthropic "may disclose personal data to third parties in connection with claims, disputes[,] or litigation." That language was at issue in Heppner, discussed below.
Layer 1 conclusion: when the vendor layer passes.
Layer 1 is, in practical terms, a contract-reading exercise. The lawyer is asked: given what your vendor has reserved the right to do with the inputs you submit, can you reasonably represent to your client that your duty of confidentiality has been met? Three reasonable answers exist depending on how the lawyer has structured the use:
Pass: lawyer is on an enterprise tier with a written, current, and reviewed agreement that disables training, defines retention, and narrows third-party disclosure; lawyer has reviewed the vendor's security documentation; lawyer has verified that the specific data being submitted is permitted under the agreement.
Conditional pass: lawyer is on a consumer tier but has stripped the prompt of all client-identifying and matter-confidential content before submission, treating the AI as a research-grade tool akin to a general-purpose search engine. The pass holds only if the strip is rigorous; if the lawyer's verification process leaks any matter-specific facts into prompts, the pass fails.
Fail: lawyer submits matter-confidential content to a consumer-tier service without contract review. This is the failure pattern that surfaced in Mata v. Avianca and the post-Mata sanctions caselaw, see IXSOR's Mata three years on piece.
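The pass/conditional/fail triage above can be sketched as a simple decision procedure. This is purely an illustration of the structure of the analysis, not legal advice; the field names and the reduction of each inquiry to a boolean are my assumptions, since each is really a fact-sensitive judgment.

```python
from dataclasses import dataclass

@dataclass
class AIUse:
    """Hypothetical record of one lawyer's AI usage posture (illustrative fields)."""
    enterprise_tier: bool        # written agreement disabling training, etc.
    contract_reviewed: bool      # current terms actually read, not marketing copy
    security_docs_verified: bool # SOC 2 / ISO 27001 checked against the contract
    prompts_stripped: bool       # all client-identifying content removed pre-submission

def layer1(use: AIUse) -> str:
    """Classify the vendor-confidentiality layer per the three patterns above."""
    if use.enterprise_tier and use.contract_reviewed and use.security_docs_verified:
        return "pass"
    if use.prompts_stripped:
        # Conditional pass: holds only while the strip stays rigorous.
        return "conditional pass"
    return "fail"
```

On these assumptions, consumer-tier use with rigorously stripped prompts classifies as a conditional pass, and consumer-tier use with matter content and no contract review classifies as a fail.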
Layer 1 is the layer most state-bar guidance has focused on. It is also the layer that lawyers most commonly assume answers the whole question. It does not. Layer 2 is where the doctrine has recently become interesting.
Layer 2: discoverability and work product.
The Layer 2 question is whether AI use, once made, is protected from disclosure to litigation adversaries. The answer turns on two related but distinct doctrines: the attorney-client privilege and the work-product doctrine codified at Federal Rule of Civil Procedure 26(b)(3).
The privilege protects "(1) communications (2) between privileged persons (3) made in confidence (4) for the purpose of obtaining or providing legal advice." The work-product doctrine protects "documents and tangible things ... prepared in anticipation of litigation or for trial." The two operate on different inputs: the privilege requires a lawyer-client communication channel, while work product extends to the lawyer's mental impressions whether or not communicated to the client.
Three federal opinions, decided between August 2024 and February 2026, set the current frame for how these doctrines apply to generative-AI use. They are addressed in turn below. They do not all reach the same result, and the divergence is the analytically important part.
Tremblay v. OpenAI: opinion work product.
The first of the three is Tremblay v. OpenAI, Inc., Case No. 23-cv-03223-AMO, 2024 WL 3748003 (N.D. Cal. Aug. 8, 2024). The plaintiffs were authors who had used ChatGPT in pre-suit investigation of their copyright claims against OpenAI. Their complaint attached an exhibit containing some of the prompts and outputs from that testing. OpenAI sought, in discovery, the full set of prompts and outputs, including ones that did not make it into the complaint.
District Judge Araceli Martínez-Olguín, reviewing the magistrate judge's ruling against the plaintiffs, held that the prompts were not merely fact work product but opinion work product, "queries crafted by counsel and contain[ing] counsel's mental impressions and opinions." Opinion work product, the court emphasised, is "virtually undiscoverable" under Republic of Ecuador v. Mackay, 742 F.3d 860, 869 n.3 (9th Cir. 2014).
The waiver analysis was the second-order move. The plaintiffs had disclosed some prompts in the complaint; OpenAI argued subject-matter waiver extended to all of them. The court rejected this. Subject-matter waiver of opinion work product, the court held, requires that "mental impressions are at issue in a case and the need for the material is compelling," citing United States v. Sanmina Corp., 968 F.3d 1107, 1124-25 (9th Cir. 2020). Neither condition was satisfied. The motion to compel was denied as to undisclosed prompts.
Tremblay is, narrowly read, a district-court decision within the Ninth Circuit protecting attorney-crafted prompts as opinion work product where the attorney was directing the AI use as litigation strategy. It does not say what happens when the AI user is the client, or when the prompt is not crafted by counsel, or when the use is in another circuit. The two 2026 cases addressed those questions directly.
United States v. Heppner: the answer is no.
The next case is United States v. Heppner, No. 1:25-cr-00503 (JSR) (S.D.N.Y. Feb. 17, 2026). The defendant, a former corporate executive indicted on securities-fraud charges, had used Anthropic's Claude over several months, after receiving a grand-jury subpoena, to "prepare reports that outlined defense strategy" and possible factual and legal arguments. Federal agents seized the AI logs in connection with the arrest. The defendant, through counsel, asserted both attorney-client privilege and work-product protection over the seized AI documents.
Judge Jed S. Rakoff framed the question as one of first impression nationwide and answered it in the negative. The Memorandum's structure is doctrinally tidy: the court walks through the elements of the attorney-client privilege and finds at least two of them missing, and then turns to work product and finds it equally unavailable.
The first-element finding is the load-bearing one. The court takes barely a paragraph to dispose of the privilege claim: "Heppner does not, and indeed could not, maintain that Claude is an attorney. Because Claude is not an attorney, that alone disposes of Heppner's claim of privilege." The court anticipates the cloud-software analogy and rejects it: privileges require "a trusting human relationship," not just a vendor relationship.
The second-element finding is the more transferable one. The court reads Anthropic's privacy policy and concludes that a user submitting prompts to Claude has, on the policy's face, no "reasonable expectation of confidentiality." The policy permits collection, training use, and disclosure to "third parties" including "governmental regulatory authorities." That language alone, the court holds, defeats the confidentiality element of privilege.
The third-element finding is more nuanced. The court notes that Kovel's functional-equivalent doctrine, under which a non-lawyer agent retained at the lawyer's direction can fall within the privilege, could in principle apply if counsel had directed the AI use. But Heppner's counsel conceded that they had not done so. Heppner used Claude on his own initiative. The Kovel analogy fails for want of a Kovel-style retention.
The work-product analysis follows. Anticipation of litigation is satisfied. But voluntary disclosure to a third party operating under a public privacy policy that permits further disclosure is, the court concludes, the kind of disclosure that destroys work-product protection. The 31 AI Documents at issue are not protected.
Warner v. Gilbarco: tools, not persons.
The third case is Warner v. Gilbarco, Inc., No. 2:24-cv-12333, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026). The plaintiff, a pro se employment-discrimination claimant, had used ChatGPT in the course of her own litigation. The defendants, in discovery, sought "all documents and information concerning her use of third-party AI tools in connection with this lawsuit."
Magistrate Judge Anthony P. Patti denied the motion. The reasoning is structured around the work-product doctrine, not the privilege. Even granting that a pro se litigant can assert work-product protection (which the court did, citing prior E.D. Mich. cases), the question is whether using ChatGPT amounts to a waiver-triggering disclosure to a third party.
The court answered no. Work-product waiver, the court held, requires "a waiver to an adversary or in a way likely to get in an adversary's hand," citing In re Columbia/HCA Healthcare Corp. Billing Pracs. Litig., 293 F.3d 289, 306 n.28 (6th Cir. 2002). Submitting prompts to ChatGPT is not such a waiver because "ChatGPT (and other generative AI programs) are tools, not persons, even if they may have administrators somewhere in the background."
The court closes with a symmetric formulation: "both sides of this dispute seek to obtain each other's thought processes, while shielding their opponent from discovery of their own. The Court will uphold the protections afforded the thought processes and litigation strategies of both sides and will order production of neither."
The split: a week apart, opposite result.
Warner and Heppner were decided one week apart in February 2026. They reach opposite conclusions on substantively similar fact patterns. The split is real, and the practitioner needs to understand why it exists.
Three differences run through the cases:
Posture. Heppner arose in a criminal investigation, with AI logs already in government hands following a search warrant. The defendant was asserting privilege defensively after the fact. Warner arose in civil discovery, with a pro se plaintiff asserting work product to resist a motion to compel before disclosure occurred.
Doctrine invoked. Heppner centred on the attorney-client privilege, which fails categorically when Claude (or any AI platform) is not the attorney. The work-product analysis, conducted second, came out the same way because of voluntary third-party disclosure. Warner centred on work product alone; privilege was never the lever, since the plaintiff was pro se and there was no attorney to channel the privilege through. The work-product question turned narrowly on whether AI use is "disclosure to a third party," and the court answered no on the "tools, not persons" theory.
Privacy-policy reading. Heppner reads Anthropic's policy and finds the third-party-disclosure clauses controlling: a user submitting prompts has no reasonable expectation that the data stays out of governmental hands. Warner does not read the OpenAI policy at length; it treats the AI tool as analogous to a word processor or search engine, where vendor-side data flows do not automatically waive work product.
The structural reconciliation is that both holdings can be right within their narrow facts. Heppner is a privilege case where privilege fails because there is no attorney. Warner is a work-product case where work product survives because the AI is treated as a tool. The unresolved area is the case Heppner reached but did not fully develop: the work-product analysis when an attorney directs the AI use, the AI is on an enterprise-tier contract that narrows third-party disclosure, and the privacy-policy posture is materially different from the consumer-tier Anthropic policy Heppner read. Tremblay begins to answer that question for opinion work product. The civil work-product analysis with attorney direction is the unresolved tier-three combination.
Why the seam isn't really pro se.
A natural reading of Heppner and Warner is that pro se status is the distinguishing feature. Heppner involved a represented criminal defendant whose AI use was retrospectively claimed as protected; Warner involved a pro se civil plaintiff whose use was prospectively shielded. The pro-se litigant won; the represented one lost. It looks like a pro-se carve-out.
It is not. The doctrine that pro se litigants can assert work-product protection is well-settled in the federal circuits. Warner said so directly, citing earlier E.D. Mich. cases. There is nothing novel about a pro se plaintiff invoking Rule 26(b)(3). What was novel in Warner was the holding that AI use does not waive that protection, because AI is "tools, not persons." A represented plaintiff making the same use of ChatGPT in the same district would have received the same holding.
The mirror question is: would a pro se Heppner have fared differently? On attorney-client privilege, no. The privilege requires an attorney; pro se status doesn't create one any more than represented status created one in Heppner's actual case (because his counsel hadn't directed the AI use, removing the Kovel agency theory). On work product, marginally: pro se litigants own their work product directly, without channeling through counsel. But the controlling fact in Rakoff's analysis was Anthropic's privacy policy, which permits broad third-party disclosure. That policy applies to a pro se Heppner identically. Same court, same vendor, same controlling fact, same outcome.
The seam is not pro se versus represented. It is which court is reading which privacy policy under which doctrine, and how seriously they take the cloud-software analogy. Pro se status is correlated with one side of the seam; it is not the analytical lever doing the work.
Why "tools, not persons" is doing too much work.
The Warner framing treats ChatGPT as analogous to "a word processor, a search engine, or a legal research database." The Heppner court anticipated this analogy and rejected it on doctrinal grounds. Both moves are partial. The cleaner analysis treats AI as a service operating under specific contractual terms and asks what those terms permit, rather than picking a side in the metaphor war.
Three legally meaningful differences separate ChatGPT from offline Microsoft Word:
Where the data physically goes. A document drafted in offline Word never leaves the user's device. Microsoft has no view into the draft. ChatGPT, in contrast, transmits every prompt to OpenAI's servers as a precondition of the service functioning. The architecture itself involves third-party data flow.
What the vendor reserves. The standard Word EULA does not claim training rights over documents, does not retain documents indefinitely, and does not reserve broad third-party disclosure rights. Consumer-tier ChatGPT terms expressly do all three.
The training feedback loop. Microsoft Word does not ingest user documents and incorporate them into future versions of Word that other users receive. Consumer-tier ChatGPT does, by default. The output you receive on any given day may be informed by inputs other users submitted last week. There is no Word-equivalent of this structural data-use pattern.
The line that matters is not "AI versus traditional software." It is "tool that keeps user data local, with the user controlling retention and disclosure" versus "service that processes user data on remote servers under contracts reserving rights of use and disclosure." Offline Word is squarely in the first category. ChatGPT consumer is squarely in the second. Cloud-synced Word with Microsoft 365 and Copilot integration is moving toward the second. The doctrinal analysis should follow the architecture and the contract, not the metaphor.
Read this way, Warner is right that AI is a "tool" in the sense that it is software, and right that using it does not categorically waive work product. But it is incomplete because it does not engage with the contract layer. Heppner is right that the contract layer matters and that voluntary disclosure under permissive privacy terms is dispositive. But it is incomplete because it does not engage with the question whether enterprise-tier contracts that narrow disclosure produce a different result. The synthesis these two opinions invite, but neither performs, is to read the actual contract for the specific tier the user subscribed to and apply the waiver analysis to those specific terms.
Why this isn't really an AI question.
The Heppner reasoning is not specific to artificial intelligence. The court\'s holding rests on three propositions: (1) communications with a non-attorney third party are not protected by the attorney-client privilege; (2) voluntary disclosure under a privacy policy that permits third-party use defeats reasonable expectation of confidentiality; and (3) work-product protection is destroyed by voluntary third-party disclosure unless the third party is functionally an agent of counsel under Kovel. None of these propositions mentions AI. They apply to any commercial cloud service whose terms reserve broad rights.
The questions practitioners are starting to ask, naturally, are whether the same logic extends to:
Google search. Yes. Search history is routinely discoverable in federal and state litigation; Google produces query records under subpoena and search warrant. The reason this has not been doctrinally controversial is not that the analysis is different from Heppner, but that practitioners do not typically claim work product over their searches. The doctrine has been latent. If a litigant did claim work product over searches reflecting litigation strategy, the same waiver analysis would apply: voluntary disclosure to Google, broad privacy-policy rights, no reasonable expectation of confidentiality.
Generative versus algorithmic answers. The privilege and work-product analyses do not turn on whether the vendor\'s processing is deterministic (a traditional algorithm returning ranked search results) or probabilistic (a generative model producing novel text). What matters is data flow and contract. The output behaviour differs; the analytical posture is identical. Google\'s addition of AI Overviews and Gemini integration to its search product changes the user experience but does not change the doctrinal frame: same vendor, same servers, same logging, same privacy policy.
Autocomplete. Bifurcates by where the processing runs. Local autocomplete on the user\'s device (operating-system keyboard suggestions, browser autofill from local history, IDE code completion that runs locally) keeps the data within the user\'s control and is functionally indistinguishable from offline Word. Cloud autocomplete (predictive search-as-you-type, Gmail Smart Compose, M365 Copilot suggestions, GitHub Copilot when its model runs on Microsoft\'s infrastructure) transmits keystrokes to the vendor in real time and is structurally identical to ChatGPT for waiver purposes.
Cloud Office, Slack, Dropbox, AWS, transcription services. Every commercial cloud service that processes user input under contractual terms reserving vendor rights occupies the same doctrinal posture. There is significant existing caselaw on whether storing files in Dropbox or sending email through Gmail waives privilege; the outcomes vary, often turning on whether the user was on consumer or enterprise terms. AI did not create the question. It made the question urgent because the inputs became substantive (whole prompts of legal strategy, not just file metadata) and the outputs became substantive (generated analysis, not just stored documents).
The implication for practitioners is that defensible AI adoption is a special case of defensible cloud-services adoption, and that the diligence framework that has been latent in cloud-services use for fifteen years is now operating at higher temperature. The practitioner who has been careless about cloud Office, careless about cloud transcription, careless about cloud autocomplete, but suddenly careful about ChatGPT, is not actually managing the risk. The practitioner who has been careful about cloud services across the board, and is now extending the same care to AI specifically, is.
Practical guidance: when both layers pass.
Combining the Layer 1 contract analysis with the Layer 2 doctrinal analysis yields a small number of clean fact patterns that, on current authority, work for lawyer use of generative AI in connection with client matters.
Clean pattern A: enterprise tier, attorney direction, civil-litigation context. The lawyer is on an enterprise contract with the AI vendor that contractually disables training, defines retention, and narrows third-party disclosure (Layer 1 passes). The lawyer crafts the prompts, directs the AI use as part of litigation strategy, and treats the outputs as opinion work product (Layer 2 passes under Tremblay and the Warner "tools, not persons" framing). Discoverability risk is low.
Clean pattern B: confidential-stripped consumer tier, research use only. The lawyer uses consumer ChatGPT or Claude for general legal research, with all client-identifying and matter-confidential content stripped from prompts (Layer 1 passes by virtue of the strip). The lawyer treats outputs as research notes subject to verification under the post-Mata standard. Layer 2 is mostly not implicated because the prompts contain no privileged content and the outputs are general reference material.
Failure pattern A: consumer tier, client matter, no enterprise contract. The lawyer pastes client communications, draft pleadings, or matter-specific facts into consumer ChatGPT. Layer 1 fails on Rule 1.6 grounds because the consumer terms reserve broad rights. Layer 2 fails under Heppner: voluntary disclosure to a vendor whose policy permits third-party disclosure undercuts the confidentiality element of privilege and the work-product-waiver analysis.
Failure pattern B: client uses AI on their own, lawyer claims privilege after the fact. The client, without lawyer direction, uses an AI platform to think through the case, then shares the AI outputs with the lawyer. Lawyer asserts privilege over the AI documents. Under Heppner, this fails: privilege does not attach because the AI is not an attorney and there is no Kovel-style direction by counsel.
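The four patterns above reduce to a small decision table, which can be sketched as follows. This is an illustrative simplification only: each boolean input stands in for a cluster of contract and conduct facts that is in reality fact-sensitive, and the function name and inputs are my own, not drawn from any authority.

```python
def two_layer_pattern(enterprise_contract: bool,
                      attorney_directed: bool,
                      confidential_content: bool) -> str:
    """Map a simplified usage posture onto the four patterns above.

    Each input is a deliberate oversimplification of a multi-factor inquiry.
    """
    if not confidential_content:
        # Prompt-stripped research use: Layer 2 mostly not implicated.
        return "clean pattern B"
    if enterprise_contract and attorney_directed:
        # Tremblay-style opinion work product on a tight Layer 1 contract.
        return "clean pattern A"
    if not attorney_directed:
        # Heppner posture: no Kovel-style direction by counsel.
        return "failure pattern B"
    # Matter-confidential content on consumer terms, even with direction.
    return "failure pattern A"
```

On these assumptions, a lawyer directing prompts under an enterprise contract lands in clean pattern A, while a client's self-directed use of matter facts lands in failure pattern B.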
Operational checklist.
For solo, small, and mid-sized practices integrating generative AI into matter work, the operational discipline that follows from this analysis reduces to a small number of habits.
- Tier the tool to the use. Enterprise tier for matter work; consumer tier only for prompt-stripped general research.
- Read the actual contract. Marketing copy is not a contract. Confirm training-use, retention, and third-party-disclosure terms in writing for the specific tier subscribed to.
- Direct the use. Where AI is being used in connection with a matter, the lawyer should craft the prompts, direct the AI use, and document the direction. This is the Kovel-style fact pattern that gives the strongest claim to privilege if the question is later litigated.
- Treat outputs as work product. Maintain prompts and outputs in the lawyer's file as litigation materials, not as client-shared documents. The opinion-work-product theory in Tremblay works only if the prompts genuinely reflect counsel's mental impressions.
- Disclose to clients where required. Some state bars (NC FEO 2024-1, several others) require client disclosure of AI use in specific circumstances. The NC FEO 2024-1 piece walks through that obligation.
- Verify all output against primary sources. Post-Mata, the verification duty is independent of the confidentiality analysis, see Mata, Three Years On.
- Maintain a tool inventory. Per ABA Op. 512, the supervision and competence duties require knowing what tools are in use across the firm.
- Build a vendor-diligence record. For each AI vendor, retain a written record of the diligence performed before use began. The Vendor Diligence piece sets out the clauses to review.
The open questions.
Three questions the current authorities leave open are worth flagging.
What about enterprise-tier contracts that genuinely narrow third-party disclosure? Heppner's reading of Anthropic's privacy policy was the controlling fact in the confidentiality analysis. A different vendor on a different contract may produce a different result. No reported case yet has tested the Layer 2 analysis with an enterprise contract that contractually rules out third-party disclosure for the kind of data at issue.
What about Kovel-style retention of AI as a tool of counsel? Heppner nodded at the possibility that an AI used at counsel's direction could fall within the Kovel doctrine, citing United States v. Adlman, 68 F.3d 1495, 1498-99 (2d Cir. 1995). But the court did not develop the analysis because the fact pattern was a client acting on his own. A clean test case would involve a lawyer formally retaining an AI vendor as a Kovel agent (possibly via the enterprise contract) and using the platform exclusively at counsel's direction.
How does the Warner "tools, not persons" framing handle vendor-side training use? Warner's analogy treats AI like a word processor. But word processors do not, by default, take the user's input and use it to train future products that other users will then receive. That structural difference may matter for the work-product-waiver analysis in a future case where the vendor's training-use practices are squarely at issue.
The doctrine on these questions will continue to develop through 2026 and beyond. Practitioners adopting AI for matter work should plan for the analysis to shift, document their reasoning at the time of adoption, and revisit each tool's posture at least annually.
Citations and further reading.
Primary cases:
- Tremblay v. OpenAI, Inc., 2024 WL 3748003, Case No. 23-cv-03223-AMO (N.D. Cal. Aug. 8, 2024) (Martínez-Olguín, J.) (attorney-crafted ChatGPT prompts are opinion work product).
- United States v. Heppner, No. 1:25-cr-00503 (JSR) (S.D.N.Y. Feb. 17, 2026) (Rakoff, J.) (defendant's AI exchanges with Claude not protected by attorney-client privilege or work-product doctrine).
- Warner v. Gilbarco, Inc., 2026 WL 373043, No. 2:24-cv-12333 (E.D. Mich. Feb. 10, 2026) (Patti, U.S.M.J.) (pro se plaintiff's ChatGPT use protected as work product; AI is "tools, not persons").
- In re OpenAI, Inc. Copyright Infringement Litigation, 802 F. Supp. 3d 688 (S.D.N.Y. 2025) (voluntary disclosure to AI platform undermines confidentiality claims).
- United States v. Adlman, 68 F.3d 1495 (2d Cir. 1995) (Kovel doctrine on functional-equivalent agents).
- United States v. Kovel, 296 F.2d 918 (2d Cir. 1961) (foundational case on non-lawyer agents within privilege).
- Republic of Ecuador v. Mackay, 742 F.3d 860 (9th Cir. 2014) (opinion work product is virtually undiscoverable).
- United States v. Sanmina Corp., 968 F.3d 1107 (9th Cir. 2020) (waiver of opinion work product requires mental impressions at issue and compelling need).
- In re Columbia/HCA Healthcare Corp. Billing Pracs. Litig., 293 F.3d 289 (6th Cir. 2002) (work-product waiver requires disclosure to adversary or likely adversary).
Procedural rules:
- Federal Rule of Civil Procedure 26(b)(3) (work-product doctrine).
Ethics authorities:
- ABA Model Rule 1.6 (confidentiality).
- ABA Formal Opinion 512 (generative AI obligations).
IXSOR cross-references:
- ABA Formal Opinion 512: An Implementation Playbook.
- NC State Bar 2024 FEO 1: What It Actually Requires.
- Mata v. Avianca, Three Years On.
- AI Vendor Diligence: Contract Clauses to Redline.
This article analyses published federal opinions and surrounding ethics authorities. It is not legal advice. It does not establish an attorney-client relationship. The application of the doctrines discussed to any specific matter is highly fact-sensitive and should be the subject of advice from qualified counsel.