Prepare For Federal Agency Scrutiny On AI Discrimination
As artificial intelligence and similar automated systems become ubiquitous across industries of all kinds, concerns continue to grow about how those systems can exacerbate bias and inequality.
A recent joint statement by several federal regulatory agencies sheds light on how those agencies view the federal government’s role in combatting bias and discrimination arising from AI. Organizations that use, or may soon use, such automated systems should be aware of the regulatory risks these systems can present.
Most significantly, the new joint statement, along with other recent actions by these agencies, highlights the agencies’ intent to scrutinize organizations’ use of such technology under their existing mandates and signals the potential for further AI regulation.
The Consumer Financial Protection Bureau (CFPB), the Civil Rights Division of the Department of Justice (DOJ), the Equal Employment Opportunity Commission (EEOC), and the Federal Trade Commission (FTC) have all separately taken action within the scope of their authority regarding how automated systems are used. The DOJ’s Civil Rights Division enforces laws against discrimination in many areas and, in January of this year, filed a Statement of Interest opposing a motion to dismiss in Louis et al. v. SafeRent Solutions, a Fair Housing Act class action involving allegations that an automated screening tool for rental applications had a disparate impact on minority housing applicants.[1]
The EEOC enforces federal laws against employment discrimination and has already issued several statements regarding algorithmic discrimination.[2] The FTC, which protects consumers against unfair business practices, including infringement of privacy rights, discriminatory impacts, and unsubstantiated marketing claims,[3] has submitted a report to Congress warning about the risks of relying on automated systems to combat online problems[4] and has issued warnings about the need for companies to avoid automated systems that have discriminatory impacts.[5]
The CFPB, which enforces consumer financial laws including those prohibiting unfair acts and discrimination, has issued bulletins warning that financial services organizations that make adverse credit determinations on the basis of complex algorithms risk violating notification obligations that require identification of the specific reasons for an adverse action.[6]
On April 25, 2023, all four of these agencies jointly issued a statement (“Joint Statement”) about their enforcement efforts “to protect the public from bias in automated systems and artificial intelligence” to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws. The Joint Statement defines “automated systems” as “software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions.” The agencies observe that these systems are used by private and public entities to make critical decisions that impact individuals’ rights and opportunities, including fair access to jobs, housing, credit opportunities, and other goods and services.
While acknowledging the benefits of automated systems, the Joint Statement emphasizes their potential to perpetuate unlawful bias, automate discrimination, and produce other harmful outcomes. It identifies sources of potential discrimination in automated systems, including: 1) unrepresentative or imbalanced datasets; 2) a lack of transparency regarding the systems’ internal workings; and 3) flawed assumptions that developers may make about the contexts in which the automated systems will be used. The Joint Statement concludes by reiterating the agencies’ resolve to monitor the development and use of automated systems and promote responsible innovation, as well as their pledge to use their collective authorities vigorously to protect individual rights.
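To make the first of these sources concrete, below is a minimal, hypothetical sketch (in Python, with made-up group labels, reference shares, and function names) of one way a development team might test whether a dataset is representative of the population an automated system will serve. It illustrates the concept only; it is not a compliance tool.

```python
# Hypothetical illustration: compare the demographic makeup of a
# dataset against a reference population to flag unrepresentative
# data, the first risk the Joint Statement names.
from collections import Counter

def representation_gaps(records, reference_shares, threshold=0.05):
    """Flag groups whose share of the dataset deviates from the
    reference population share by more than `threshold` (absolute)."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > threshold:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Illustrative usage with made-up numbers:
data = [{"group": "A"}] * 900 + [{"group": "B"}] * 100
print(representation_gaps(data, {"A": 0.6, "B": 0.4}))
# -> {'A': {'observed': 0.9, 'expected': 0.6}, 'B': {'observed': 0.1, 'expected': 0.4}}
```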
The CFPB also issued a separate press release, as well as prepared remarks by Director Chopra.[7] In the press release, the CFPB stated that it will release a white paper this spring regarding the chatbot market and its adoption by financial institutions. The press release also highlights a series of prior CFPB actions related to automated systems, including actions addressing transparency, algorithmic marketing, abusive conduct, digital redlining,[8] detection of repeat offenders, and whistleblowing. In his prepared remarks, Director Chopra highlighted the threats posed by automated systems and said the Joint Statement is “an important step forward to affirm existing law and rein in unlawful discriminatory practices perpetrated by those who deploy these technologies.” He also raised concerns about generative AI, which can produce voices, images, and videos that simulate real-life human interactions, raising the potential for new harms.
The Joint Statement and the CFPB’s statements underscore the United States government’s commitment to enforcing federal laws related to discrimination and bias in automated systems. As the regulatory landscape for AI continues to develop, it is crucial that businesses and developers take steps to ensure that their automated systems do not perpetuate discrimination or violate federal law, including by resulting in disparate impacts on protected groups. The agencies’ ongoing monitoring and enforcement efforts will help advance responsible innovation and protect consumers’ rights in the rapidly evolving landscape of automated systems and artificial intelligence.
The recent statements, publications, and actions by these agencies highlight that additional regulation of automated systems is on the horizon—and that even without new regulations, agencies are leveraging their existing regulatory mandates to scrutinize the use of automated systems in potentially biased or discriminatory ways. Federal and international[9] governing bodies are actively considering regulatory frameworks, and organizations are beginning to release nascent best practices for the assessment of artificial intelligence risk.
For companies that build or use any automated systems (and many will eventually use such tools in some form, if they are not already), internal policies and processes should be constructed carefully to ensure legal compliance now and in the future. This may include:
- Documenting the purpose underlying the creation of the automated system, including any assumptions made in its planning and considerations made to avoid potential bias;
- Considering the completion of a broader impact assessment of the automated system;
- Selecting, validating, and documenting underlying data sets and related metadata in an effort to avoid potential bias;
- Documenting the selection of algorithms, as well as the training and calibration of models; and
- Continuously assessing the output and impact of the automated system to correct any potential bias, as illustrated in the sketch following this list.[10]
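On the last point, one common way to assess a system’s output for potential bias is a disparate-impact check. The sketch below is a minimal, hypothetical Python illustration of the “four-fifths rule” that the EEOC’s Uniform Guidelines have long applied to employee selection rates; the data, group labels, and function names are assumptions for illustration, not a substitute for legal or statistical review.

```python
# Hypothetical sketch of a disparate-impact ("four-fifths rule") check
# on an automated system's outcomes. Under the EEOC's Uniform
# Guidelines, a selection rate for any group below 80% of the highest
# group's rate is generally treated as evidence of adverse impact.

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, selected a bool."""
    totals, favorable = {}, {}
    for group, selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        favorable[group] = favorable.get(group, 0) + int(selected)
    return {g: favorable[g] / totals[g] for g in totals}

def four_fifths_violations(outcomes, threshold=0.8):
    """Return groups whose selection rate falls below `threshold`
    (80% by default) of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    if best == 0:
        return {}
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Illustrative usage with made-up numbers: group B receives favorable
# outcomes at half the rate of group A, well below the 80% benchmark.
log = ([("A", True)] * 60 + [("A", False)] * 40
       + [("B", True)] * 30 + [("B", False)] * 70)
print(four_fifths_violations(log))  # -> {'B': 0.5}
```

Running such a check regularly over production decisions, and documenting the results, is one way to operationalize the continuous-assessment practice described above.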
As organizations increasingly implement automated systems, the list of best practices will continue to grow. But by documenting the decisions made regarding these systems, your organization will be better positioned to address emerging and future regulatory requirements. The recent Joint Statement by four federal agencies suggests that, as with any exciting new area of technology, the implementation of automated systems is likely to face increasing scrutiny in the not-too-distant future.
This article was co-authored by Tara Emory and Mike Kearney of Redgrave Data.
The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
[1] See Statement of Interest of the United States filed in Louis et al. v. SafeRent Solutions, No. 22-cv-10800 (D. Mass.), available at https://www.justice.gov/usdoj-media/usao-ma/media/1267476/. Separately, the DOJ also issued a joint statement with the EEOC in May 2022 regarding algorithmic discrimination against people with disabilities in employment contexts. DOJ, Justice Department and EEOC Warn Against Disability Discrimination (May 12, 2022), available at https://www.justice.gov/opa/pr/justice-department-and-eeoc-warn-against-disability-discrimination.
[2] The EEOC has already issued several statements regarding algorithmic bias. See id.; The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (May 12, 2022), available at https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence; Tips for Workers: The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence, available at https://www.eeoc.gov/tips-workers-americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence; Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier, available at https://www.youtube.com/watch?v=rfMRLestj6s.
[3] The FTC warned earlier this year about AI marketing. See M. Atleson, Keep Your AI Claims in Check, FTC Division of Advertising Practices (February 27, 2023), available at https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check. It also warned that it may take action against a tool that is “effectively designed to deceive – even if that’s not its intended or sole purpose.” See M. Atleson, Chatbots, Deepfakes, and Voice Clones: AI Deception for Sale, FTC Division of Advertising Practices (March 20, 2023), available at https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale.
[4] See Combatting Online Harm Through Innovation (June 16, 2022), available at https://www.ftc.gov/reports/combatting-online-harms-through-innovation.
[5] See, e.g., Aiming for truth, fairness, and equity in your company’s use of AI (April 19, 2021), available at https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai; Keep Your AI Claims in Check (February 27, 2023), available at https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check.
[6] See Consumer Financial Protection Circular 2022-03, Consumer Financial Protection Bureau (May 26, 2022), available at https://www.consumerfinance.gov/compliance/circulars/circular-2022-03-adverse-action-notification-requirements-in-connection-with-credit-decisions-based-on-complex-algorithms/.
[7] CFPB, CFPB and Federal Partners Confirm Automated Systems and Advanced Technology Not an Excuse for Lawbreaking Behavior (April 25, 2023), available at https://www.consumerfinance.gov/about-us/newsroom/cfpb-federal-partners-confirm-automated-systems-advanced-technology-not-an-excuse-for-lawbreaking-behavior/ (“CFPB Press Release”); CFPB, Director Chopra’s Prepared Remarks on the Interagency Enforcement Policy Statement on “Artificial Intelligence” (April 25, 2023), available at https://www.consumerfinance.gov/about-us/newsroom/director-chopra-prepared-remarks-on-interagency-enforcement-policy-statement-artificial-intelligence/ (“Chopra Remarks”).
[8] “Digital redlining” refers to perpetuating inequalities through digital technologies, such as internet service providers offering no or lower quality internet service in minority neighborhoods.
[9] See Sidley, Proposal for EU Artificial Intelligence Act Passes Next Level – Where Do We Stand and What’s Next? (December 12, 2022), available at https://www.sidley.com/en/insights/newsupdates/2022/12/proposal-for-eu-artificial-intelligence-act-passes-next-level; Covington, Marianna Drake, Jiayen Ong, Marty Hansen, and Lisa Peets, EU AI Policy and Regulation: What to look out for in 2023 (February 2, 2023), available at https://www.insideprivacy.com/artificial-intelligence/eu-ai-policy-and-r....
[10] See NIST AI Risk Management Framework (January 26, 2023), available at https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.