What Rule 707 says #
The proposed text, as released by the Advisory Committee on Evidence Rules in August 2025 and circulated for public comment until February 16, 2026:
Rule 707. Machine-Generated Evidence.
When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702(a)-(d).
The structure is borrowed wholesale from Rule 702, the existing rule on expert-witness testimony. Rule 702 requires that an expert’s testimony be (a) helpful to the trier of fact, (b) based on sufficient facts or data, (c) the product of reliable principles and methods, and (d) a reliable application of the principles and methods to the facts of the case. Rule 707 applies those same four requirements to AI-generated evidence offered without a sponsoring expert.
The two-step trigger is important. First: would the evidence, if offered through a human witness, be subject to Rule 702? If yes, second: does the AI output meet Rule 702’s reliability standard? If the answer to both is yes, the evidence is admissible; if either answer is no, it is not.
Evidence that would not be subject to Rule 702 in the first place — say, a photograph, an unprocessed video clip, a contemporaneous business record — is not affected by Rule 707 even if AI was somehow involved in its production.
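For readers who want the trigger logic stated mechanically, here is a toy sketch in Python. It is purely illustrative: the field and function names are invented, and it models the rule's structure, not any legal conclusion about a real piece of evidence.

```python
# A toy sketch of Rule 707's two-step trigger. Illustrative only:
# the names and structure are invented for exposition.

from dataclasses import dataclass

@dataclass
class Offer:
    machine_generated: bool        # output of an AI/ML system
    has_expert_sponsor: bool       # offered through an expert witness?
    would_be_subject_to_702: bool  # would Rule 702 apply if a human witness testified to it?
    satisfies_702_elements: bool   # meets Rule 702(a)-(d): helpfulness, sufficient
                                   # data, reliable methods, reliable application

def rule_707_status(offer: Offer) -> str:
    # Rule 707 only reaches machine-generated evidence offered
    # without an expert witness.
    if not offer.machine_generated or offer.has_expert_sponsor:
        return "rule 707 inapplicable"
    # Step 1: would the evidence be subject to Rule 702 if a human
    # witness testified to it? If not, Rule 707 never engages.
    if not offer.would_be_subject_to_702:
        return "rule 707 inapplicable"
    # Step 2: the output must satisfy all four Rule 702 elements.
    if offer.satisfies_702_elements:
        return "admissible under rule 707"
    return "inadmissible under rule 707"

# An unprocessed photograph is untouched by Rule 707 even if AI was
# incidentally involved, because step 1 fails.
photo = Offer(machine_generated=True, has_expert_sponsor=False,
              would_be_subject_to_702=False, satisfies_702_elements=False)
print(rule_707_status(photo))  # rule 707 inapplicable
```

Note that calling a sponsoring expert short-circuits the analysis entirely, which is why the expert-sponsor path recurs throughout the discussion below.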
The line between "machine-generated" and "simple scientific instruments" #
The Advisory Committee notes accompanying the proposed rule make a deliberate exclusion: simple scientific instruments — radar guns, breathalysers, GPS receivers, basic chromatographs — are not within the rule’s reach. The rule applies to AI and machine-learning systems whose outputs depend on training data, model parameters, or learned inference, where the reliability turns on the system’s design and training rather than the device’s calibration.
Neither the rule text nor the committee notes draw the line precisely. The litigated questions will be:
- Is a digital photograph that has been AI-enhanced (denoised, sharpened, low-light-corrected) within the rule’s reach? The Advisory Committee notes lean against coverage, on the theory that the underlying photograph is itself a record of fact and the AI processing is incidental. Litigants will test this.
- Are body-cam transcripts produced by automated speech recognition within the rule’s reach? The committee notes lean toward exclusion on the “simple scientific instrument” theory; defence counsel may argue that modern ASR systems are anything but simple.
- Are cell-phone-tower triangulations and ALPR (automated licence-plate recognition) reads within the rule? These are typically introduced through human officer testimony, in which case Rule 702 already governs and Rule 707 is irrelevant. Where they are introduced as standalone digital records, Rule 707 may apply.
- Is technology-assisted review (TAR) in e-discovery within the rule’s reach? TAR outputs are not usually offered as substantive evidence — they are used for production decisions. But where a TAR-derived “privilege call” or “relevance call” becomes evidence, Rule 707 is in play.
Practising litigators should expect five to ten years of definitional litigation before the line is well-settled. The first-instance rulings will matter disproportionately because they will be cited by every subsequent court.
What Rule 707 actually requires of a proponent #
The proponent of AI-generated evidence under Rule 707 has to make four showings, each tracking a Rule 702 element.
(a) Relevance and helpfulness. Standard Rule 702(a) analysis. The AI evidence has to assist the trier of fact in understanding the evidence or determining a fact in issue. The bar is low and is rarely litigated under Rule 702; it should rarely be the contested issue under Rule 707 either.
(b) Sufficient facts or data. The AI output must be based on adequate input data. For an AI medical chronology, this means the underlying records produced by the providers; the chronology cannot be based on summarised or partial records if the case turns on the missing detail. For an accident-reconstruction simulation, this means the physical-evidence inputs (skid measurements, vehicle damage, weather data) must be sufficient for the simulation to produce a meaningful output.
(c) Reliable principles and methods. This is the contested element. The proponent must establish that the AI system uses reliable principles and methods. In Rule 702 practice, this is the Daubert reliability inquiry. For AI, the relevant indicia will likely be: peer-reviewed publication of the underlying technique, error-rate measurement, training-data documentation, validation against held-out test sets, and the system’s reproducibility on the same inputs.
(d) Reliable application to the facts. Even where a system is reliable in general, its application to the specific case must be reliable. Was the system used for a task within its trained domain? Were the inputs in the form the system expects? Was the output verified by sample-check or by re-running with adjusted parameters? Each is the kind of cross-examinable predicate Rule 702 requires of human experts.
The practical consequence: a proponent of AI evidence under Rule 707 will need either an expert to lay the foundation (in which case Rule 702 governs directly and Rule 707 is mostly redundant) or written documentation from the AI vendor sufficient to satisfy each element. Vendors of legal-AI tools should expect — and many already do — requests for documentation packages designed to satisfy a Rule 707 foundation.
What Rule 707 changes for trial practice #
The rule’s effect on day-to-day trial practice falls into three categories.
Documentary evidence with AI processing. A proponent who wants to introduce a record that has been AI-enhanced or AI-organised must consider the Rule 707 path. In most cases, the cleaner path will be to introduce the original record and the AI processing through a sponsoring expert. The expert’s testimony covers Rule 702; Rule 707 becomes irrelevant.
Forensic evidence. Forensic disciplines are increasingly AI-augmented. AI-assisted DNA mixture deconvolution (TrueAllele, STRmix), AI-assisted facial recognition, AI-driven ballistics matching, AI-trained shot-spotter audio analysis. Each is, in principle, within Rule 707’s reach if introduced without an expert. The defence-side strategic move will be to challenge the prosecution’s reliance on a Rule 707 foundation rather than an expert; the prosecution will counter by either calling the expert (defeating Rule 707’s applicability) or producing the system documentation.
Civil cases with AI-organised exhibits. Personal-injury medical chronologies, AI-summarised business records, AI-generated damages models, AI-extracted contract analyses. These are usually introduced by a fact witness or by stipulation, and Rule 707 will not directly apply. Where stipulation is unavailable and the chronology is offered without a sponsoring witness, Rule 707 applies. The ordinary practice will be to call a paralegal or junior associate as a sponsoring witness, which keeps the analysis within Rule 1006 (summary of voluminous records) rather than Rule 707.
The category that will see the most Rule 707 litigation is criminal forensic evidence, because the prosecution often prefers to introduce forensic AI outputs without a defence-cross-examinable expert. This is the category where defence counsel will press the rule hardest.
Timeline and effective date #
The rule is moving through the standard Federal Rules of Evidence amendment process. The path:
- August 2025. Advisory Committee released the proposed rule for public comment.
- February 16, 2026. Public comment period closed. Comments were submitted by the American Association for Justice, defence-bar organisations, prosecutors’ offices, several law schools, and major legal-AI vendors.
- May 2026. The Advisory Committee on Evidence Rules voted on the rule after reviewing the public comments, with review by the Judicial Conference’s Committee on Rules of Practice and Procedure (the Standing Committee) to follow. (The Advisory Committee vote was scheduled for May 7, 2026; the result will appear in the committee’s minutes when published.)
- September 2026 (anticipated). Judicial Conference of the United States approves the proposed rule.
- By May 1, 2027 (anticipated). Supreme Court adopts the rule and transmits it to Congress under 28 U.S.C. § 2074, which requires transmission by May 1 of the year in which a rule is to take effect.
- December 1, 2027 (anticipated). Rule takes effect, absent congressional action to defer or reject. Congress has intervened before (notably deferring the original Rules of Evidence transmitted in 1973 and enacting them with modifications in 1975), but such intervention is rare.
Litigators should plan on Rule 707 being the operative federal evidence rule on AI-generated evidence by December 2027. State courts that adopt the Federal Rules of Evidence wholesale will follow on their own state-specific timelines, typically within twelve to thirty-six months of the federal effective date. State courts with their own evidence codes will study Rule 707 and most will adopt parallel provisions within five years.
What Rule 707 does NOT govern #
It is worth being explicit about what the rule does not reach, because the AI-and-law commentary tends to conflate distinct legal questions.
Rule 707 is purely about admissibility of evidence. It does not govern:
- AI use in legal research, briefing, and drafting. Those issues are governed by Mata v. Avianca and its progeny, Federal Rule of Civil Procedure 11, and the bar opinions on lawyer use of AI (ABA Op. 512, the state-bar tracker, etc.).
- Privilege and work-product. Whether AI exchanges with a client or with the court are protected by attorney-client privilege or work-product doctrine. Those are governed by United States v. Heppner, Warner v. Gilbarco, Tremblay v. OpenAI, and the rest of the 2024-26 federal AI-privilege caselaw.
- Discovery of AI tools, training data, and outputs. Those are governed by Rule 26 and the developing case law on AI-training-data discovery.
- Vendor confidentiality and Rule 1.6. Governed by the bar opinions and DPAs.
- Judicial use of AI. Some commentators predict a parallel rule for judicial AI use; none has been proposed and Rule 707 does not address it.
Rule 707 is, in effect, a single-purpose evidentiary rule. The full federal AI-and-law framework remains a patchwork: ethics rules + sanctions caselaw + privilege caselaw + procedural rules + this evidentiary rule. Practitioners should not expect Rule 707 to do more than its text actually says.
What "machine-generated evidence" actually means #
The biggest unresolved question in the rule is the scope of “machine-generated evidence”. The committee notes give limited guidance and litigation will define the line. Here is how the major categories are likely to play out.
AI-organised exhibits. A medical chronology built by AI from raw records, or an AI-extracted contract analysis, is “machine-generated” in the sense the rule reaches. In practice these will be introduced through a fact-witness sponsor (the lawyer, a paralegal, an investigator) under Rule 1006 or as a demonstrative aid, in which case Rule 707 does not apply. Where the proponent attempts to introduce them without any sponsor, Rule 707 will apply.
Accident-reconstruction simulations. AI-driven physics simulations of accident scenarios are clearly machine-generated. Under current practice these are introduced through an expert witness, in which case Rule 702 governs directly. Rule 707 only matters where a proponent attempts to introduce a simulation without the expert. This is unlikely to be the dominant practice.
Sentiment analysis and writer-attribution AI. AI tools that purport to identify the author of a document or determine the emotional state of a speaker. These are within Rule 707’s clear core: machine-learning systems whose outputs are inferential and whose reliability is contested. The defence side will press hard on Rule 702(c) reliability for these tools.
Deep-fake detection. AI tools that purport to identify whether an image, audio file, or video has been AI-generated or manipulated. The reliability of these tools varies enormously by tool and content type. Courts admitting deep-fake-detection evidence under Rule 707 will need careful Rule 702(c) analysis; courts excluding it will draw on the same analysis.
AI-generated forecasts and damages models. An AI tool that produces a future-medical-cost projection or a lost-earnings projection is producing what is essentially expert-style opinion testimony. These almost certainly require either an expert sponsor (Rule 702) or a Rule 707 foundation; they will be the most-litigated category in civil practice.
Speech-to-text outputs and OCR. Automated transcription and optical character recognition. Likely to fall in the “simple instrument” exclusion in most cases, but the line is fuzzy and the committee notes acknowledge it.
What lawyers should be doing now #
For litigators with cases that may go to trial after December 2027 and that involve AI-generated evidence on either side:
- Document the foundation now. If the case will rely on AI-organised exhibits, AI-generated forecasts, or AI-derived analyses, document the underlying inputs, methods, validation, and quality-control steps now. The Rule 707 foundation needs to be buildable from the case file.
- Identify expert sponsors for the AI components. The cleanest Rule 707 path is to call an expert who can lay a Rule 702 foundation. This makes Rule 707 inapplicable. Identify the expert before discovery closes.
- Demand discovery of opposing AI tools. Where the opposing side will rely on AI-generated evidence, demand discovery of the tool, its training data documentation, validation records, and prior litigation use. Rule 707 will not relax the discovery rules; if anything, it provides additional ground for compelling the documentation.
- Track state-court adoption. If the case is in state court, watch for the state-specific Rule 707 analogue. Several states will adopt the federal rule wholesale; others will modify it in ways that matter to the case.
For consulting and procurement work, IXSOR’s position is that Rule 707 makes AI-vendor documentation requirements operationally important. Vendors who cannot produce a Rule 707 foundation package by 2027 will lose litigation use cases. This will reshape the vendor landscape over the next thirty-six months in favour of vendors who treat litigation defensibility as a first-class product concern.
Frequently asked #
When does Rule 707 take effect?
The rule is on the standard Federal Rules of Evidence amendment timeline. Public comment closed February 16, 2026; the Advisory Committee on Evidence Rules voted in May 2026; Standing Committee and Judicial Conference approval are anticipated over mid-to-late 2026; Supreme Court transmission to Congress is due by May 1, 2027; the effective date is December 1, 2027 absent congressional action.
Does Rule 707 apply to AI used in legal research?
No. Rule 707 is an evidence-admissibility rule. AI used in legal research is governed by Mata v. Avianca, Federal Rule of Civil Procedure 11, and the bar opinions on AI use (ABA Op. 512 and the state-bar opinions). The two regimes are independent.
What about AI-organised medical chronologies in personal-injury cases?
If introduced through a fact-witness sponsor (paralegal, investigator, lawyer) under Rule 1006, Rule 707 does not apply. If introduced without any sponsor, Rule 707 applies and the proponent must satisfy Rule 702(a)-(d). Personal-injury practice should plan on the sponsor model.
Is a body-cam transcript Rule 707 evidence?
The Advisory Committee notes lean toward exclusion under the simple-instrument theory, but litigation will define the boundary. Modern automated speech recognition is not a simple instrument; it is a deep-learning system whose accuracy varies by speaker, accent, and audio conditions. Defence counsel should expect to challenge transcripts where the audio is contested.
How does Rule 707 interact with Daubert?
Rule 707 explicitly imports Rule 702’s reliability requirements, which is the federal codification of Daubert. The reliability inquiry will be the same in substance: peer-reviewed methodology, error-rate measurement, validation, reproducibility. The difference is procedural — Rule 707 gives proponents a path to admissibility without an expert witness, but the reliability bar is unchanged.
Does Rule 707 apply in state court?
Not directly. Rule 707 is a federal rule. States that adopt the Federal Rules of Evidence wholesale (most states do, with variations) will face their own adoption decisions on Rule 707. Watch for state-specific analogues over 2027-2030. Until a state adopts a parallel rule, AI-generated evidence in state court is governed by the existing state evidence code — which usually has a Rule 702 analogue but no Rule 707 analogue.
Citations and further reading #
- Advisory Committee on Evidence Rules, agenda books (publishes the rule text, comments, and committee notes).
- Federal Rule of Evidence 702 (existing rule on expert testimony; Rule 707 imports its (a)-(d) standards).
- Federal Rule of Evidence 1006 (summary of voluminous records).
- Federal Rule of Civil Procedure 11 (representations to the court; Mata v. Avianca duty-to-verify).
- 28 U.S.C. § 2074 (procedure for transmission of proposed rule amendments to Congress).
- Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993). The reliability standard Rule 702 codifies and Rule 707 imports.
- Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023). Sanctions for AI-fabricated authority; the verification standard for AI use in briefs.
- United States v. Heppner, No. 1:25-cr-00503 (S.D.N.Y. Feb. 17, 2026) (Rakoff, J.). AI-privilege framework, distinct from Rule 707.
- ABA Formal Opinion 512 (July 2024). The bar-rules framework for AI use, distinct from Rule 707.
- IXSOR: AI in Personal Injury. Practical implementation and Rule 707-foundation considerations for medical-records workflows.
