The prompt
Copy and paste this into your AI tool of choice. The prompt assumes you can attach or paste the input documents inline; substitute as needed for the tool’s interface.
You are an AI vendor diligence analyst working under the IXSOR diligence framework. The user will provide an AI-vendor privacy policy, terms of service, or data-protection addendum. Your task is to analyse it clause-by-clause and produce a structured diligence report.
Analyse the document against these six observation categories. For each, quote the operative language verbatim from the document, then categorise the posture as ACCEPTABLE / REQUIRES REDLINE / WALK-AWAY:
1. TRAINING DATA RIGHTS
- Does the vendor train on customer prompts or outputs by default?
- Is there an enterprise tier or DPA option that disables training?
- Is consent affirmative (opt-in) or default-on (opt-out)?
2. RETENTION WINDOWS
- How long does the vendor retain prompts, outputs, conversations, and metadata?
- Are deletion requests honoured? On what timeline?
- What is the technical mechanism (soft delete vs hard delete)?
3. SUB-PROCESSOR CHAIN
- Which sub-processors does the vendor use (cloud infrastructure, model providers, support tooling)?
- Is the sub-processor list maintained at a stable URL?
- Is the customer notified of sub-processor changes?
4. GOVERNMENTAL DISCLOSURE
- Under what circumstances will the vendor disclose customer data to governments?
- Is there a customer-notice obligation before disclosure?
- Are there specific carve-outs for legal-process compliance?
5. ANONYMISATION CLAIMS
- Does the vendor claim to anonymise customer data?
- What is the anonymisation method (k-anonymity, differential privacy, basic stripping)?
- Are anonymised outputs treated as outside customer data for downstream uses?
6. TIER DIFFERENTIATION
- What protections apply at the consumer/free tier vs paid tier vs enterprise tier?
- Is the data-protection addendum (DPA) a separate document or embedded?
- What activates enterprise-tier protections (signed agreement, paid plan, account flag)?
After the six-category analysis, produce:
A. OVERALL POSTURE
One sentence describing the vendor's overall data posture relative to ABA Model Rule 1.6 confidentiality requirements for legal-AI use.
B. RULE 1.6 ASSESSMENT
Three-bullet assessment of whether this vendor's terms support Rule 1.6 confidentiality, with specific clauses cited.
C. THREE HIGHEST-PRIORITY REDLINES
The three clauses a sophisticated buyer should attempt to negotiate before signing, ranked by impact.
D. WALK-AWAY TRIGGERS
Any clause that should result in not contracting with this vendor regardless of pricing.
E. CITATION TABLE
For each finding, the section number / clause heading from the source document, so the finding can be verified against the source.
Constraints:
- Do not paraphrase the operative legal language. Quote it.
- If a category is silent in the document, state "SILENT" rather than guessing.
- Distinguish between (a) what the document permits the vendor to do and (b) what the vendor's marketing materials claim. Marketing claims do not modify operative terms.
- Use British or American English consistently with the document being analysed.
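The six observation categories and the three-way posture call map naturally onto a small record structure, which is useful if you want to post-process the report rather than read it once. A minimal sketch, assuming nothing about any particular tool's output format (the names `Posture`, `Finding`, and the sample clause are illustrative only):

```python
from dataclasses import dataclass
from enum import Enum


class Posture(Enum):
    ACCEPTABLE = "ACCEPTABLE"
    REQUIRES_REDLINE = "REQUIRES REDLINE"
    WALK_AWAY = "WALK-AWAY"
    SILENT = "SILENT"  # the document does not address the category


@dataclass
class Finding:
    category: str   # e.g. "1. TRAINING DATA RIGHTS"
    quote: str      # operative language, verbatim, never paraphrased
    citation: str   # section number / clause heading in the source
    posture: Posture


# One Finding per category; walk-away triggers fall out of a simple filter.
findings = [
    Finding("1. TRAINING DATA RIGHTS",
            '"We may use Customer Content to improve our Services."',
            "Section 4.2(b)",
            Posture.REQUIRES_REDLINE),
]
walk_aways = [f for f in findings if f.posture is Posture.WALK_AWAY]
```

The `SILENT` member mirrors the constraint above: silence is a recorded posture, not a gap to be guessed over.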
Input
Input format
Best: the full text of the vendor’s privacy policy AND data-protection addendum (DPA). Both, not one. Many vendors have material protections only in the DPA.
Acceptable: just the privacy policy. The output will note the absence of DPA review.
Format: paste inline (preferred for clause-citation accuracy) or attach as PDF.
Expected output
Output format
A structured report with six numbered clause categories, plus five summary sections (A-E). Approximately 1,500-3,000 words depending on document complexity.
Each finding will quote operative language verbatim with section/clause citation. The walk-away triggers and redline priorities are specific enough to drive contract negotiation.
Verification — what the lawyer must do after
- Verify every quoted clause against the source document. AI tools occasionally paraphrase while presenting the result as an exact quote.
- Check the section/clause citations. If the report cites “Section 4.2(b)”, that section should exist in the source and contain the quoted language.
- Validate against your jurisdiction. The Rule 1.6 assessment uses the ABA Model Rule. State variations (especially California, Florida, North Carolina) may impose stricter standards. Cross-reference your state’s ethics opinion on AI.
- Confirm tier differentiation with the vendor in writing before contracting. Marketing material that says “enterprise tier disables training” should be matched to a contractual term, not just an FAQ entry.
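The first two verification bullets can be partially mechanised: a quoted clause either appears verbatim in the source text or it does not. A minimal sketch, with the caveat that the normalisation choices (whitespace collapsing, straightening curly quotes) are assumptions, and that this checks presence only, not whether the cited section heading is correct:

```python
import re


def normalise(text: str) -> str:
    """Collapse whitespace and straighten curly quotes so layout
    differences do not mask a genuine verbatim match."""
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    return re.sub(r"\s+", " ", text).strip()


def quote_is_verbatim(quote: str, source: str) -> bool:
    """True if the report's quoted clause appears verbatim in the source."""
    return normalise(quote) in normalise(source)


source = 'We may use  Customer Content to "improve" our Services.'
print(quote_is_verbatim('use Customer Content to "improve"', source))  # True
print(quote_is_verbatim("use Customer Content to enhance", source))    # False
```

A `False` here means the report paraphrased; the clause must be re-checked by hand before any reliance is placed on it.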
⚠ Risks and failure modes
- Paraphrasing risk: AI tools sometimes present paraphrased legal language as though it were a verbatim quote. The verification step above is non-negotiable.
- Document-version risk: Privacy policies change. The analysis is only valid for the version of the document you provided.
- Cross-reference risk: The DPA, the privacy policy, and the master service agreement may have different operative terms. The most-protective term is the floor; the analysis should flag where they conflict.
- Public-tier vs enterprise-tier risk: A vendor’s consumer-tier privacy policy may differ materially from its enterprise-tier DPA. Verify which tier you are evaluating.
Vendor compatibility
Works best on Claude or GPT-4 with the privacy policy attached as a PDF or pasted inline. Vincent and CoCounsel work with the policy text pasted; both will append their own legal-research overlay.
Citations and further reading
- ABA Model Rule 1.6 (confidentiality).
- ABA Formal Opinion 512 on lawyer use of generative AI.
- IXSOR: AI Vendor Diligence Catalogue — the framework this prompt operationalises.
- IXSOR: Legal Practice Management Software 2026 — worked examples of the framework applied to four PMS vendors.
- United States v. Heppner — case authority on consumer-tier vs enterprise-tier vendor confidentiality.