Uncharted Waters: Work Product Protections for Attorneys’ Use of Generative AI

Generative AI, or GenAI,[1] will revolutionize legal practice.  Indeed, it already is doing so.  As their use of GenAI increases, practitioners must consider how the work product doctrine may apply to their interactions with these tools.

Courts have already begun to address thorny questions regarding the contours of work product protections in this context: Under what circumstances are an attorney’s GenAI prompts protected?  Is protection waived if supporting GenAI outputs are used as evidence?  Is the attorney, as the supervisor of the GenAI tool, bound by ethical rules to supervise it, and subject to disclosing otherwise protected information if the adequacy of that supervision is questioned?

In Tremblay v. OpenAI, Inc., the District Court for the Northern District of California provided insight into how courts may approach questions like these.  The Tremblay court consolidated actions brought by a number of notable authors of highly regarded fiction and memoirs, including Paul Tremblay, Sarah Silverman, and Michael Chabon.[2]  These authors sued OpenAI, a developer of generative AI software, including ChatGPT, alleging that OpenAI infringed their copyrights by using the authors’ work to train its products.[3]

In support of their consolidated amended complaint, the Plaintiffs attached summaries of their written work that the Plaintiffs’ attorneys generated using ChatGPT.[4]  Plaintiffs alleged that the accuracy of these summaries proved that OpenAI illegally copied their original works.  In discovery, OpenAI requested that the Plaintiffs disclose “OpenAI account information, prompts, and outputs for Plaintiffs’ testing of ChatGPT in connection with their pre-suit investigation.”  Tremblay v. OpenAI, Inc., No. 23-cv-3223 (N.D. Cal. Aug. 8, 2024).  Plaintiffs agreed to produce prompts and outputs that led to the examples attached to the complaint but refused to produce prompts and outputs that were not used in the complaint (i.e., the negative test results).[5]  Plaintiffs claimed that protections for attorney work product shielded unused prompts and outputs from discovery.

The parties submitted their dispute to the presiding magistrate judge, Robert M. Illman.  Judge Illman ultimately ruled that Plaintiffs’ attorneys’ interactions with ChatGPT were discoverable.  Judge Illman first set forth the familiar standard for the application of work product protections, noting that work product includes “documents and tangible things that are prepared in anticipation of litigation or for trial by or for another party or its representative . . .”  Tremblay v. OpenAI, No. 23-cv-03223, at 5 (N.D. Cal. Jun. 24, 2024) (quoting Fed. R. Civ. P. 26(b)(3)(A) and citing United States v. Richey, 632 F.3d 559, 567-68 (9th Cir. 2011)).  Judge Illman specifically held that attorney work product is generally exempt from disclosure in discovery unless the requesting party shows a substantial need and demonstrates that it “cannot without undue hardship obtain their substantial equivalent by other means.”  Id. (quoting Fed. R. Civ. P. 26(b)(3)(A)).

Judge Illman also noted that work product can be classified as either fact work product or opinion work product.  Opinion work product, as opposed to fact work product, contains the “mental impressions, conclusions, opinions, or legal theories of a party’s attorney or other representative concerning the litigation.” Id. (quoting Fed. R. Civ. P. 26(b)(3)(B)).  Fact work product is discoverable upon a demonstration of substantial need.  In contrast, to obtain opinion work product, a requesting party must show that an attorney’s mental impressions are at issue in the case and that the need for the material is compelling.  Id. at 5 (citing Holmgren v. State Farm Mut. Auto. Ins. Co., 976 F.2d 573, 577 (9th Cir. 1992)).  A litigant waives work product protections when the litigant reveals or “places the work product at issue during the course of litigation.”  Id. (citing United States v. Sanmina Corp. & Subsidiaries, 968 F.3d 1107, 1119 (9th Cir. 2020)) (emphasis omitted).

Applying this standard, Judge Illman determined that Plaintiffs’ assertion of work product protections over unused prompts and outputs failed.  The Judge found that “account settings and negative test results are more in the nature of fact work product than opinion work product.”  Id.  But even if the material constituted opinion work product, protections would not apply because Plaintiffs’ negative test results were “reasonably related” to the positive results that Plaintiffs cited in their complaint.  Id. at 6.  Thus, Plaintiffs waived any work product protections that would otherwise apply to their use of ChatGPT during the investigation phase of the litigation.

The Judge was not persuaded by Plaintiffs’ arguments that Defendants could recreate their own tests of ChatGPT, noting that such replication is impossible without knowing the exact account settings and prompts originally utilized.  Id. at 6-7.

Plaintiffs objected to Judge Illman’s decision ordering them to disclose account settings, prompts, and outputs for unused or negative tests of ChatGPT, raising the issue with the District Court.  The District Court agreed with Plaintiffs and overruled the decision below.

Judge Araceli Martinez-Olguin, writing for the District Court, found that “ChatGPT account settings and negative test results” constitute opinion work product, rather than fact work product.  Tremblay v. OpenAI, Inc., No. 23-cv-3223, at 3 (N.D. Cal. Aug. 8, 2024).  The court reasoned that “the ChatGPT prompts were queries crafted by counsel and contain counsel’s mental impressions and opinions about how to interrogate ChatGPT, in an effort to vindicate Plaintiffs’ copyrights against the alleged infringements.”  Id. (citing Republic of Ecuador v. Mackay, 742 F.3d 860, 869 n.3 (9th Cir. 2014)).  Since the requests involved opinion work product, a higher standard for establishing waiver applied, and OpenAI, the requesting party, failed to satisfy that standard.  Id. at 4 & n.3 (citing United States v. Sanmina Corp., 968 F.3d 1107, 1119 (9th Cir. 2020)).

Under a similar set of facts and allegations, the District Court for the Southern District of New York denied a motion to compel without prejudice. The New York Times Co. v. Microsoft Corp., No. 23-cv-11195 (S.D.N.Y. Oct. 31, 2024), ECF No. 304.

Although the underlying circumstances are somewhat unusual, these decisions suggest that attorneys may continue to experiment with GenAI in furtherance of their legal work, with the expectation that the work product doctrine protects these activities from disclosure during discovery.  Nonetheless, the two opinions in Tremblay demonstrate that these are uncharted waters, subject to varying views regarding whether queries are fact or opinion work product and if and when a party might place its interactions with GenAI at issue.  Courts will likely continue to think through how these standards and tests apply to GenAI inputs and whether the existing rules require revision.

In Tremblay, the deciding factor for the District Court was whether the attorneys’ interactions with ChatGPT resembled fact work product or opinion work product.  One way to strengthen a claim to opinion work product protection is to avoid treating ChatGPT like a bare generator of facts.  Instead, attorneys should provide substantial human oversight and clearly inject mental impressions and opinions into the process, both to ensure that interactions with ChatGPT are useful and to maximize the extent of available work product protections.  Such oversight and attorney direction in formulating GenAI prompts and reviewing outputs could be a deciding factor in a court’s consideration of applicable privileges.  Nevertheless, given the risks of waiver and the potential for ultimate disclosure of underlying prompts, counsel should proceed cautiously as this area of the law continues to develop.  With uncharted waters comes increased risk, and the need to safeguard both confidential information and the advice of counsel calls for careful reflection before, and careful use of, any AI tool.

[1] “Generative AI is a specific subset of AI used to create new content based on training on existing data taken from massive data sources in response to a user’s prompt, or to replicate a style used as input.  The prompt and the new content may consist of text, images, audio, or video.”  Hon. Xavier Rodriguez, Artificial Intelligence (AI) and the Practice of Law, 24 Sedona Conf. J. 783, 789 (2023) (quoting Maura R. Grossman, Paul W. Grimm, Daniel G. Brown & Molly (Yiming) Xu, The GPTJudge: Justice in a Generative AI World, 23 Duke L. & Tech. Rev. 1 (Dec. 2023), https://ssrn.com/abstract=4460184).

[2] Tremblay v. OpenAI, Inc., No. 23-cv-03223 (N.D. Cal. Feb. 16, 2024), ECF No. 107 (consolidating Tremblay, No. 23-cv-03223; Silverman v. OpenAI, Inc., No. 23-cv-03416 (N.D. Cal. filed July 7, 2023); Chabon v. OpenAI, Inc., No. 23-cv-04625 (N.D. Cal. filed Sept. 8, 2023)).

[3] First Amended Consol. Compl., In re OpenAI ChatGPT Litig., No. 23-cv-3223 (N.D. Cal. Mar. 13, 2024), ECF No. 120.

[4] First Amended Consol. Compl., Ex. B, In re OpenAI ChatGPT Litig., No. 23-cv-3223 (N.D. Cal. Mar. 13, 2024), ECF No. 120-2.

[5] The Tremblay court referred to the unused prompts and outputs as “negative test results” because those unused tests presumably did not support the proposition that OpenAI used the authors’ work to train ChatGPT.