FTC’s “Operation AI Comply” Has Lessons for Companies Deploying AI

On September 25, 2024, the Federal Trade Commission (FTC) announced Operation AI Comply, an initiative cracking down on companies that use or make claims about artificial intelligence (AI) in ways that are misleading, deceptive, and harmful to consumers.  The FTC inaugurated Operation AI Comply with enforcement actions against five companies that it said made deceptive claims about their use of AI.  Companies promoting the capabilities of their AI applications to customers should be mindful of the claims that the FTC has identified as deceptive.

Three of the five companies facing FTC enforcement claimed that customers would earn outsized passive income by investing in ecommerce business opportunities purportedly powered by AI:

  • Ascend Ecom advertised that it used home-grown AI software to power its business model and maximize returns, and claimed that customers could make thousands of dollars per month in passive income by hiring Ascend to manage ecommerce storefronts on prominent online retail platforms.
  • Ecommerce Empire Builders, which offered both self-study programs and investment opportunities, claimed that customers could earn tens of thousands of dollars per month by investing in its ecommerce storefronts “powered by artificial intelligence.”
  • FBA Machine promised monthly returns of up to six figures to customers investing in its ecommerce storefronts, which it claimed utilized an “AI[-]Powered Repricing Tool” to automatically manage customers’ stores.

While the AI-related claims of these three companies were secondary to the deceptive nature of the investment schemes they offered, AI is fundamental to the business models of the other two targets of Operation AI Comply: DoNotPay and Rytr.

DoNotPay, an online subscription service, offered consumers an “AI lawyer” that it advertised as “the world’s first robot lawyer.”  DoNotPay claimed the AI could generate various legal documents, including demand letters, cease and desist letters, contracts, small claims court filings, and challenges to speeding and parking tickets.  After selecting the category of legal issue for which assistance was desired, the customer was prompted to enter information into a chatbot-like interface, and the AI would purportedly apply the applicable law to generate suitable documents.  The FTC found that DoNotPay did not test whether its service operated like a human lawyer, did not train its AI model on a comprehensive body of state and federal laws and regulations, and did not validate the quality and accuracy of the documents the AI generated.  The FTC also found that a tool DoNotPay promoted as being able to check a customer’s small business website for federal and state law violations did not function as advertised.  DoNotPay agreed to a proposed consent order that would require it to pay $193,000, require it to notify past customers about the limitations of its law-related services, and prohibit it from making claims about the abilities of its services without evidence.

Rytr offered customers access to an AI-enabled “writing assistant” that purportedly could be used to generate written content for over 40 “Use Cases,” one of which was “Testimonial & Review.”  After the customer chose this use case, selected the desired output language and tone, and input information such as keywords, phrases, and titles, Rytr’s service would generate “genuine-sounding, detailed reviews” that could then be copied and pasted to another website.  The FTC found that these “false” reviews “contain specific, often material details that have no relation to the user’s input” and that “[i]n many instances…would deceive potential customers” of the product or service reviewed.  Moreover, the FTC stated that Rytr’s “Testimonial & Review” service had “no or de minimis reasonable, legitimate use,” and that the service’s ability to quickly generate an unlimited number of seemingly genuine reviews meant that its “likely only use” was producing fake reviews intended to deceive consumers.  Rytr agreed to a proposed consent order prohibiting it from advertising, marketing, promoting, offering for sale, or selling a service intended to generate customer reviews or testimonials.

Significantly, FTC Commissioners Melissa Holyoak and Andrew Ferguson dissented from the enforcement action against Rytr, raising concerns that it would chill innovation in the nascent AI industry.  Commissioner Holyoak noted that the FTC did not allege that any user of Rytr’s service had actually posted deceptive reviews.  She also disagreed with the FTC’s assessment that Rytr’s “Testimonial & Review” service lacked any legitimate use and expressed concern that the FTC had failed to consider the service’s possible benefits to the public as a time-saving drafting aid.  Commissioner Holyoak wrote that by banning a product with useful features, the FTC’s actions were inconsistent “with a legal environment that promotes innovation.”  Commissioner Ferguson likewise believed that Rytr’s service had legitimate uses and argued that the FTC’s complaint was both a misapplication of the Federal Trade Commission Act and contrary to the public interest because it “threatens to turn honest innovators into lawbreakers and risks strangling a potentially revolutionary technology in its cradle.”

The Operation AI Comply enforcement actions should be viewed as an extension of previous FTC guidance warning businesses about the types of AI claims that may draw FTC scrutiny.  Companies that offer AI-based services should not say that a service uses AI when it does not, nor make unsupported or exaggerated claims about what an AI service can do.  Further, the Rytr case puts companies on notice that the FTC may give more weight to an AI service’s potentially deceptive uses than to its legitimate ones, so caution may be warranted when offering an AI service with multiple applications.  As the dissents from Commissioners Holyoak and Ferguson demonstrate, however, this remains a developing area with many significant unknowns.