Steps To Take On AI Operations Amid FTC's ChatGPT Probe
When OpenAI launched ChatGPT late last year, it unleashed a storm that few had predicted. Articles this year have compared ChatGPT to the debut of the iPhone in 2007[1] and suggested that it foreshadows a complete paradigm shift for, well, everything.[2]
To perhaps no one's surprise, the Federal Trade Commission, the nation's principal consumer and privacy protection agency, decided to investigate and sent a civil investigative demand, or CID, to OpenAI last month.[3]
What the FTC Is Investigating
The CID is — as usual — light on specifics about the precise scope of the investigation and merely states that the FTC is investigating whether OpenAI
has (1) engaged in unfair or deceptive privacy or data security practices or (2) engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm, in violation of Section 5 of the FTC Act [through] products and services incorporating, using, or relying on Large Language Models.[4]
The CID's specifications, however, make it very clear that the FTC is primarily focused on consumer protection issues relating to privacy and the related representations that OpenAI has actually made, impliedly made or failed to make to users.
In particular, the investigation homes in on the related questions of whether OpenAI violated the FTC Act by scraping public data and either (1) publishing false information or (2) publishing accurate personal information through ChatGPT and its predecessors without adequate safeguards and disclosures.[5]
Information the FTC Is Seeking
The CID seeks extensive information — 49 interrogatories and 17 requests for production, many with multiple subparts — regarding corporate structure and operations, marketing practices with respect to the "product," and the nature of the product itself. This includes information about, among other things:
- All related companies;
- The owners and officers of the company;
- Financial earnings and licensing agreements;
- All products and how they are marketed;
- How the products were created;
- How the products were tested;
- Policies for assessing risk and safety;
- Policies for monitoring how the products generate statements about individuals;
- How the company mitigates risks of false statements;
- How the company addresses complaints; and
- How the company collects, uses, and retains personal information.
The CID also calls for the extensive preservation of data and corporate records ranging from organizational charts and contracts to "all Documents relating to the testing, assessment, or measurement" of the products' capacity to generate false information about individuals or real, accurate personal information.[6]
What This Means
The answer, of course, depends on who you are. Are you in-house counsel, an information technology professional, or a manager at a company that is either focused on artificial intelligence or selling products or services that use it? A businessperson or attorney with clients heavily involved in artificial intelligence? A concerned consumer of ChatGPT? An interested bystander?
Regardless of who you are, a few broad conclusions might be useful.
A Comprehensive Picture
First, at a minimum, the FTC inquiry foreshadows other FTC — and similar state — inquiries into artificial intelligence and its impact on consumers.
Other companies making, using, or selling products or services that feature artificial intelligence should not be surprised if they receive CIDs from the FTC — and from other state or federal agencies.
Second, the CID is a blueprint for the issues that most concern the FTC regarding artificial intelligence: preventing the proliferation of false and misleading information and protecting personal information.
Third, investigations like the FTC's will likely continue to be exceedingly broad and to seek extensive records, which means responses will need to be backed by sufficient record-keeping.
Finally, given the increased interest in artificial intelligence generally, companies doing business outside the United States need to be mindful of the rapidly evolving foreign regulatory environment.
Businesses Operating in the AI Space Should Prepare
If you are working for or with a company creating, using or selling products or services that rely on artificial intelligence, here are a few practical pointers:
First, to avoid potential violations of the FTC Act, you should:
- Make sure the people who are creating, developing, and marketing your product or service have the technical knowledge and skills they need to do their jobs;
- Monitor and check your database to eliminate personal information that is not necessary for the purpose the end product is intended to serve, limit the personal information in the database to what the product or service actually requires, and confirm that the information you retain is accurate (a minimal illustration of this kind of data minimization appears after this list);
- Thoroughly know and understand the capabilities and limitations of the product or service you are selling or providing; and
- Carefully monitor and limit your public statements about the product or service and your advertisements to ensure they are accurate, truthful and complete.
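By way of illustration only, the data minimization point above might translate into something as simple as the following Python sketch. It assumes a hypothetical set of user records and a hypothetical allowlist of fields the product actually needs; every field name is invented for the example.

```python
# Illustrative sketch only: trim stored records to the fields the product
# actually needs. The field names and records below are hypothetical.

NECESSARY_FIELDS = {"user_id", "account_tier", "country"}  # assumed allowlist

def minimize_record(record: dict) -> dict:
    """Drop any personal information not on the allowlist."""
    return {key: value for key, value in record.items() if key in NECESSARY_FIELDS}

raw_records = [
    {"user_id": 1, "account_tier": "pro", "country": "US",
     "home_address": "123 Main St", "date_of_birth": "1990-01-01"},
]

minimized = [minimize_record(record) for record in raw_records]
print(minimized)  # fields like home_address and date_of_birth are no longer retained
```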
Second, always be prepared for the possibility of an inquiry from the FTC. This means being able to explain, with documentation, the following factors (a brief documentation sketch appears after the list):
- Purpose, capabilities and limitations of the algorithm you are using and the process by which it was designed, tested, and implemented;
- Technical knowledge, skills and experience of the people designing the algorithm;
- Source, nature, accuracy and reliability of the data set being analyzed;
- Technical knowledge and training of the people who monitor, test and operate the algorithm;
- Results of the algorithm — i.e., the quality, accuracy and content of the end product; and
- Your support for any performance claims you have made about the product or service.
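Purely as an illustration of how these factors might be kept in one place, the following Python sketch defines a simple internal documentation record mirroring the list above. Every field name and sample value is an assumption for the example, not a legal or regulatory requirement.

```python
# Illustrative sketch of an internal documentation record covering the factors
# listed above. Field names and sample values are assumptions, not requirements.
from dataclasses import dataclass, field

@dataclass
class AlgorithmDossier:
    purpose: str                        # what the algorithm is for
    capabilities: list[str]             # what it can do
    limitations: list[str]              # known failure modes and limits
    design_and_testing_process: str     # how it was designed, tested and implemented
    team_qualifications: str            # knowledge and experience of the designers/operators
    data_sources: list[str]             # source and nature of the data set
    data_accuracy_notes: str            # accuracy and reliability assessments
    output_quality_results: str         # measured quality and accuracy of the end product
    performance_claim_support: list[str] = field(default_factory=list)

dossier = AlgorithmDossier(
    purpose="Summarize customer support tickets",
    capabilities=["English-language summarization"],
    limitations=["May produce inaccurate statements about individuals"],
    design_and_testing_process="Fine-tuned on internal tickets; evaluated before each release",
    team_qualifications="Machine learning engineers with prior NLP deployment experience",
    data_sources=["Internal support tickets, 2020-2023"],
    data_accuracy_notes="Five percent sample spot-checked for accuracy each quarter",
    output_quality_results="Reviewer audit of summary accuracy, logged per release",
    performance_claim_support=["Internal evaluation report cited in marketing copy"],
)
```

Keeping a record along these lines current for each release makes it far easier to respond to a broad CID without reconstructing history after the fact.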
Third, the company should be able to explain the mechanics of the testing and have a robust system in place for addressing business and consumer complaints that arise about the algorithm, its uses, and its output — a particularly challenging exercise given the ever-evolving nature of large language models. Once again, documentation of the policies and processes is key: what inputs were tested, how they were tested, and how often they were tested should all be rigorously recorded.
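As a minimal sketch of that kind of record-keeping, assuming a hypothetical CSV log file and invented column choices, a testing log might look like this:

```python
# Illustrative sketch only: an append-only log of what was tested, how, and when.
# The file name and column choices are hypothetical.
import csv
from datetime import datetime, timezone

LOG_PATH = "llm_test_log.csv"  # assumed location for the testing log

def log_test_run(prompt: str, method: str, outcome: str) -> None:
    """Record a single test: the input used, how it was tested, and the result."""
    with open(LOG_PATH, "a", newline="") as log_file:
        csv.writer(log_file).writerow([
            datetime.now(timezone.utc).isoformat(),  # when it was tested
            prompt,                                  # what input was tested
            method,                                  # how it was tested
            outcome,                                 # what the model produced or pass/fail
        ])

log_test_run(
    prompt="Who is Jane Doe?",
    method="Manual review against known facts",
    outcome="Model declined to answer; no personal information revealed",
)
```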
Fourth, the company should pay close attention to the sensitivity, accuracy, and potential confidentiality of any personal data that is included in its data sets.
Fifth, the company should be able to defend its decisions regarding the data sets it uses to train its models, particularly with respect to the personal or public nature of any underlying data. Ideally, any such decisions will — at risk of sounding like a broken record — be memorialized in writing, with supporting documentation.
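One way to memorialize such decisions, offered only as a sketch with invented field names and sample values, is a per-data-set decision record:

```python
# Illustrative sketch only: a per-data-set decision memo, memorialized as data.
# Every field name and sample value is an assumption for the example.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DatasetDecision:
    name: str                  # which data set
    source: str                # where the data came from
    publicly_available: bool   # whether the underlying data was public
    contains_personal_info: bool
    rationale: str             # why it was included and how risks were addressed
    approved_by: str           # who signed off
    approved_on: date          # when the decision was made

decision = DatasetDecision(
    name="public_forum_posts_2022",
    source="Publicly accessible web forums",
    publicly_available=True,
    contains_personal_info=True,
    rationale="Needed for conversational coverage; identifiers filtered before training",
    approved_by="AI governance committee",
    approved_on=date(2023, 3, 15),
)
```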
Finally, the company should be aware that any investigation will likely include requests for sensitive information about owners, contracts, and financial details, as well as information covered by trade secret protection.
Conclusion
While ChatGPT, large language models, and artificial intelligence more broadly are exceedingly popular right now — and have the potential to change the world — with that prominence comes increased scrutiny from government agencies like the FTC.
As such, it is foreseeable that many more CIDs — and similar investigations — are coming, and they will likely be at least as broad as the CID to OpenAI and request equally extensive information. But with a little foresight and planning, companies can be prepared to chat effectively when the prompts come knocking.
David C. Shonka is a partner and Benjamin M. Redgrave is an attorney at Redgrave LLP.
The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
[1] See, e.g., https://www.axios.com/2023/01/24/chatgpt-openai-iphone-boom, https://www.extremetech.com/computing/343069-nvidia-ceo-calls-chatgpt-th....
[2] See, e.g., https://hbr.org/2022/12/chatgpt-is-a-tipping-point-for-ai, https://www.zdnet.com/article/what-is-chatgpt-and-why-does-it-matter-her.... A cursory Google search reveals dozens of similar articles.
[3] A copy of the letter obtained and published by the Washington Post can be located at https://www.washingtonpost.com/documents/67a7081c-c770-4f05-a39e-9d02117....
[4] See id., supra Section I, page 2.
[5] See id., supra Request for Production No. 10 (page 19), which reflects these two concerns:
Produce all internal communications about or assessments of Your Products' potential to
a. Produce inaccurate statements about individuals; or
b. Reveal Personal Information.
[6] See id., supra Requests for Production Nos. 1–3 and 7–9, pages 18–19.