
NC FEO 2024-1.

An operational reading of North Carolina State Bar 2024 Formal Ethics Opinion 1, the State Bar's first formal ethics opinion on lawyer use of artificial intelligence. What the opinion says, which Rules it interprets, what it requires of every NC-licensed attorney, and what it leaves explicitly undecided.

AUTHOR Dan Hughes
FILED May 2026
OPINION 2024 FEO 1
ADOPTED Nov 1, 2024
JURIS. North Carolina
READING ~14 minutes
· 01 ·

The opinion, in one sentence.

An NC-licensed attorney may use artificial intelligence in a law practice, including by inputting client data into a third-party AI program, provided the attorney does so (a) competently under Rule 1.1, (b) with reasonable efforts to prevent inadvertent disclosure of confidential information under Rule 1.6(c), (c) with appropriate supervision of nonlawyer assistance under Rule 5.3, (d) with client communication about the use as required by Rule 1.4, and (e) without billing for the AI work in a manner inconsistent with Rule 1.5. The full opinion is published by the North Carolina State Bar at ncbar.gov.

The opinion is permissive in result and prescriptive in process. It does not prohibit any particular AI tool or use case. It establishes that NC's existing Rules of Professional Conduct already supply the framework, and applies them.

· 02 ·

The legal architecture.

2024 FEO 1 does not announce a new rule. It interprets four existing Rules of Professional Conduct against a new fact pattern. An attorney evaluating compliance should read the opinion alongside, not as a substitute for, the Rules themselves.

Rule 1.1 (Competence). The opinion grounds the duty of AI competence in N.C. R. Prof. Conduct 1.1 and Comment 8, which require attorneys to "keep abreast of changes in the law and its practice, including the benefits and risks associated with the technology relevant to the lawyer's practice." The opinion treats AI as squarely within that obligation.

Rule 1.6(c) (Confidentiality). Rule 1.6(c) requires "reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client." The opinion treats inputs to a third-party AI program as a potential disclosure event, requiring the attorney to evaluate the program's security posture before submission of client data.

Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance). The opinion's most consequential interpretive move. Rule 5.3 imposes a supervisory duty on the attorney with respect to "nonlawyer assistance," and the opinion expressly extends the rule's reach to third-party software companies whose AI tools the attorney engages. The reasoning is anchored to 2011 FEO 6, which earlier extended Rule 5.3 to outside-firm nonlawyers generally.

Rule 1.4 (Communications) and Rule 1.5 (Fees). Addressed in later inquiries within the opinion. AI use that materially affects representation may trigger client-communication duties; AI cost recovery is constrained by reasonable-fee principles.

Each Rule existed before AI. The opinion's contribution is to apply each to the fact pattern an attorney now faces when she opens ChatGPT, Westlaw Precision, Spellbook, or a Microsoft Copilot session with a client matter on the screen.

· 03 ·

Inquiry #1, in plain English.

The first inquiry asks whether an NC-licensed attorney may use AI in a law practice at all. The opinion's answer: yes, conditional on three duties.

Competently. The attorney must educate herself on what the specific tool does, what data it processes, and where it fails. The opinion describes the duty as continuing, not a one-time training but an ongoing obligation as the tool, the law, and the practice evolve.

Securely. Where the attorney inputs information protected by Rule 1.6, the attorney must take reasonable steps to ensure the data is not disclosed inadvertently or accessed without authorisation. The opinion does not specify particular technical controls; it imports the existing reasonable-efforts standard from Rule 1.6(c).

With supervision. AI output must be supervised in the same way the work product of a nonlawyer assistant must be supervised. The attorney is responsible for "reviewing, evaluating, and ultimately relying upon the work produced by someone, or something, other than the lawyer."

The opinion frames AI as occupying a position on a spectrum between routine practice tools (case management software, electronic legal research) and nonlawyer support staff (paralegals, IT contractors). The Rules that govern both ends of the spectrum are imported.

· 04 ·

Inquiry #2: Inputting client data.

The second inquiry asks whether the attorney may input a client's documents, data, or other information to a third-party AI program. The answer: yes, provided the attorney has satisfied herself that the program is sufficiently secure to comply with Rule 1.6(c).

The opinion treats this question as functionally identical to the question of whether an attorney may use any third-party software. Cloud-based practice management. Document review platforms. Encrypted email gateways. The Rule 1.6(c) reasonable-efforts analysis applies to all of them, and the opinion declines to single AI out for heightened obligations.

What the opinion implies but does not state: an attorney who inputs privileged client material into a consumer-grade general-purpose model whose terms of service expressly reserve training rights to the vendor is unlikely to satisfy the reasonable-efforts standard. The contract terms are part of the security analysis. Reading them is part of the competence analysis. The opinion does not name vendors; it does not need to.

Two practical operationalisations follow:

  • Read the data-use clause of any AI tool before introducing client data. If the contract permits the vendor to use inputs to train models, that is a Rule 1.6(c) problem absent client consent.
  • Configure the tool correctly. The same vendor frequently offers a consumer plan whose contract permits training and an enterprise plan whose contract does not. Choosing the right plan is part of the security analysis.
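
The two bullets above can be folded into a simple pre-submission gate. A minimal sketch, assuming hypothetical tool names and policy fields; the contract findings recorded here would come from the firm's own review of each vendor's data-use clause, not from any vendor API:

```python
# Hypothetical pre-submission gate. Tool names and policy fields are
# illustrative assumptions; the values record what the firm's own review
# of each vendor contract found.
TOOL_POLICIES = {
    "general-chatbot-consumer":   {"trains_on_inputs": True,  "client_consent": False},
    "general-chatbot-enterprise": {"trains_on_inputs": False, "client_consent": False},
}

def may_submit_client_data(tool: str) -> bool:
    """Permit client data only if the contract forbids training on inputs,
    or the client has consented (the Rule 1.6(c) concern in the article).
    An unreviewed tool gets no client data at all."""
    policy = TOOL_POLICIES.get(tool)
    if policy is None:
        return False
    return (not policy["trains_on_inputs"]) or policy["client_consent"]

assert may_submit_client_data("general-chatbot-enterprise")
assert not may_submit_client_data("general-chatbot-consumer")
assert not may_submit_client_data("unknown-tool")
```

The design choice worth noting is the default: a tool absent from the reviewed list is treated as disapproved, which mirrors the opinion's logic that the reasonable-efforts analysis must happen before submission, not after.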

· 05 ·

The Rule 5.3 extension.

The most operationally important sentence in the opinion is the application of Rule 5.3 to outside-firm AI vendors. The opinion expressly states that the supervisory duties imposed by Rule 5.3 apply to "nonlawyer assistants within a law firm as well as those outside of a law firm that are engaged to provide assistance in the lawyer's provision of legal services to clients, such as third-party software companies."

Read with the cited 2011 FEO 6, the consequence is that an NC attorney engaging an AI vendor inherits a supervisory duty over that vendor's conduct insofar as it bears on the attorney's professional obligations. In practical terms:

  • A partner or managerial attorney must make reasonable efforts to ensure the firm has measures in effect giving reasonable assurance that the vendor's conduct is compatible with the attorney's professional obligations (Rule 5.3(a));
  • An attorney with direct supervisory authority over the vendor's work must make reasonable efforts to ensure the vendor's conduct is compatible with her professional obligations (Rule 5.3(b));
  • The attorney is responsible for the vendor's conduct in the circumstances specified by Rule 5.3(c).

"Reasonable efforts" is not defined in the opinion. It imports the existing standard. For a third-party software vendor, the work consists in vendor-diligence: the contract, the security posture, the training-data practices, the deletion practices, the breach-notification commitments. None of which is the responsibility of the vendor's salesperson to volunteer.

· 06 ·

What the opinion deliberately does not decide.

2024 FEO 1 contains an unusually frank editor's note about its own scope. Two carve-outs deserve attention because they shape what an attorney must analyse outside the opinion's four corners.

Attorney-client privilege. The opinion expressly declines to opine on whether disclosure to a third-party AI tool waives privilege, treating the question as "a legal question and outside the scope of the Rules of Professional Conduct." A North Carolina attorney inputting privileged material into an AI tool must analyse the privilege question independently. The federal common-law approach to privilege waiver via third-party disclosure, e.g., United States v. Ackert, 169 F.3d 136 (2d Cir. 1999) (communications with a third party not privileged where the third party does not facilitate the giving of legal advice), and the state-by-state variations in the work-product immunity analysis, are well outside the opinion's holding.

The "when and how" question. The opinion expressly states that it "does not attempt to dictate when and how AI is appropriate for a law practice." Whether a particular AI tool is fit for a particular task is the attorney's professional-judgment call, governed by the existing standards.

Both carve-outs are doctrinally clean and operationally consequential. The attorney is responsible for parts of the analysis the State Bar declines to do for her.

· 07 ·

Comparison: ABA Op. 512 and sister states.

The Model Rules counterpart on lawyer use of generative AI is ABA Formal Opinion 512 (July 2024). NC's 2024 FEO 1, adopted four months later, addresses the same Rules and reaches substantively similar conclusions, with two differences worth noting.

NC is more economical. ABA Op. 512 surveys the issues with broader prose and longer worked examples; NC FEO 2024-1 disposes of the same questions in roughly half the words. An NC attorney who has read ABA Op. 512 will find few surprises in 2024 FEO 1; the obligations are functionally identical.

NC's framing of Rule 5.3 is sharper. Where ABA Op. 512 discusses supervisory obligations across several Rules, NC's opinion locates the third-party-software duty squarely in Rule 5.3 and ties it to the long-standing 2011 FEO 6 line of authority. The doctrinal payoff: an NC attorney's supervisory analysis when engaging an AI vendor is the same analysis already conducted when engaging any outside-firm contractor.

Sister-state opinions worth comparing:

  • California State Bar, Practical Guidance for the Use of Generative AI in the Practice of Law (Nov. 2023), the earliest material formal guidance, accessible from the State Bar of California.
  • Florida Bar, Ethics Opinion 24-1 (Jan. 2024), accessible from The Florida Bar.
  • New York State Bar Association, Task Force Report on Artificial Intelligence (Apr. 2024), accessible from NYSBA.

An attorney admitted in multiple states should read each applicable opinion. Functional convergence does not mean the texts are identical.

· 08 ·

Operational checklist.

The opinion does not require any particular operational document. The duties it imposes are nonetheless impracticable to satisfy without one. The following checklist captures what an NC firm should be able to demonstrate, on request, to satisfy the four-Rule analysis.

  • Rule 1.1. A record of the AI training the firm has provided to attorneys and staff. The training need not be elaborate; it must be specific to the tool actually in use and updated when the tool changes.
  • Rule 1.6(c). A written data-classification policy that maps data classes (privileged, confidential, public) to approved AI tools. A documented review of the data-use clause of each approved tool. Documentation that the chosen plan is the appropriate plan, e.g., the enterprise SKU rather than the consumer SKU, where the choice matters.
  • Rule 5.3. A vendor-diligence file for each AI tool the firm uses. The file should include the contract, a security-posture assessment, and the policy that ties the firm's use to the firm's professional obligations.
  • Rule 1.4 / Rule 1.5. Engagement-letter language that addresses AI use and a billing approach that reflects the actual work performed. For fixed-fee practices, this is largely a non-issue; for hourly, it is not.
  • Cross-cutting. A supervision standard that requires attorney review of AI-assisted work product, with a documented audit cadence at sixty days from rollout and quarterly thereafter.
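
The cross-cutting item's cadence (sixty days from rollout, then quarterly) is mechanical enough to schedule in code. A minimal sketch; the sixty-day and quarterly figures come from the checklist above, while the 91-day quarter and the one-year horizon are illustrative assumptions:

```python
from datetime import date, timedelta

def audit_dates(rollout: date, horizon_days: int = 365) -> list:
    """First audit sixty days after rollout, then quarterly thereafter,
    per the cross-cutting checklist item. The 91-day quarter and the
    default one-year horizon are illustrative assumptions."""
    dates = [rollout + timedelta(days=60)]
    while dates[-1] + timedelta(days=91) <= rollout + timedelta(days=horizon_days):
        dates.append(dates[-1] + timedelta(days=91))
    return dates

schedule = audit_dates(date(2026, 1, 5))
assert schedule[0] == date(2026, 3, 6)   # sixty days after rollout
assert len(schedule) == 4                # four audits in the first year
```

A calendar entry is a cheap way to turn a duty into a record: each completed audit date, with the reviewer's name, goes into the vendor-diligence file described under Rule 5.3.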

The checklist is descriptive, not prescriptive. The opinion supplies the duties; the firm chooses the implementation.

· 09 ·

The Mata hallucination question.

2024 FEO 1 does not cite Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023), the early-cycle case in which counsel was sanctioned for filing a brief containing AI-fabricated case citations. The opinion's silence is consistent with its general structure: it imports existing duties rather than enumerating consequences. The duty to verify cited authority, which would prevent a Mata-pattern outcome, is already imposed by Rules of Professional Conduct 3.1 and 3.3 and by Rule 11 of the Federal Rules of Civil Procedure and its North Carolina state-court counterpart.

The number of post-Mata sanctions cases continues to grow. The pattern is consistent: an attorney trusted AI output without verifying. The Rule that would, in retrospect, have prevented the sanction was already in force. AI does not introduce new duties; it stress-tests existing ones.

· 10 ·

Citations and further reading.

Primary:

  • North Carolina State Bar, 2024 Formal Ethics Opinion 1 (adopted Nov. 1, 2024). Available via ncbar.gov.
  • N.C. Rules of Professional Conduct 1.1, 1.4, 1.5, 1.6, and 5.3, with comments.
  • ABA Formal Opinion 512 (July 2024).

Cases:

  • Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023). Sanctions opinion on AI-fabricated citations.
  • United States v. Ackert, 169 F.3d 136 (2d Cir. 1999). Communications with a third party not privileged where the third party does not facilitate the giving of legal advice (relevant to the privilege question 2024 FEO 1 declines to address).

Sister-state authority:

  • California State Bar, Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law (Nov. 2023). Available via the State Bar of California.
  • The Florida Bar, Ethics Opinion 24-1 (Jan. 2024). Available via The Florida Bar.
  • New York State Bar Association, Report of the Task Force on Artificial Intelligence (Apr. 2024). Available via NYSBA.

Adjacent NC authority:

  • North Carolina State Bar, 2011 Formal Ethics Opinion 6. Extends Rule 5.3 supervisory duties to nonlawyers engaged outside the firm; the anchor for 2024 FEO 1's treatment of third-party software companies. Available via ncbar.gov.

This article is general analysis of a published ethics opinion and surrounding authority. It is not legal advice. It does not establish an attorney-client relationship. Engage qualified North Carolina counsel for advice on your firm's specific situation. The opinion itself is the controlling text; this article is a reading.

· AUTH ·

About the author.

Dan Hughes is the founder of IXSOR. Ex-BBC. Ex-Apple. Lifelong technologist. And most importantly: not an attorney. He writes about legal AI from the operational and infrastructure side, where the rules meet the machines. Reach: [email protected].