In a joint statement, the Federal Trade Commission (FTC), Consumer Financial Protection Bureau (CFPB), Department of Justice (DOJ) and Equal Employment Opportunity Commission (EEOC) expressed concern about the potential for unlawful discrimination by artificial intelligence (AI) systems and affirmed that their enforcement authority over civil rights, nondiscrimination, fair competition and consumer protection applies to AI.
The use of AI is increasingly common in daily life, the agencies said, with private and public entities relying on automated systems (software and algorithmic processes, including AI, that automate workflows and help people complete tasks or make decisions) to make critical decisions affecting individuals’ rights and opportunities.
While automated systems can improve efficiency and reduce costs, “their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination and produce other harmful outcomes,” the agencies wrote.
Existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices, according to the statement.
“We take seriously our responsibility to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws,” the agencies said.
To that end, the CFPB published a circular confirming that federal consumer financial laws and adverse action requirements apply regardless of the technology used; the FTC issued a report evaluating the use and impact of AI in combating online harms identified by Congress; and the EEOC and the DOJ released guidance last year to help employers using AI for employment-related decisions avoid running afoul of the Americans with Disabilities Act.
The agencies identified data and datasets as one potential source of discrimination, as automated system outcomes “can be skewed by unrepresentative or imbalanced datasets, datasets that incorporate historical bias or datasets that contain other types of errors. Automated systems also can correlate data with protected classes, which can lead to discriminatory outcomes.”
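To make the proxy mechanism the agencies describe concrete, the following is a minimal Python sketch; the data, the ZIP-cluster feature and every rate in it are hypothetical, invented purely for illustration. A decision rule trained only on a feature correlated with a protected class reproduces the historical disparity even though the protected class itself is never an input.

# Minimal, hypothetical sketch of "proxy" discrimination: a decision rule that
# never sees the protected class can still reproduce historical bias through a
# correlated feature. All data, features and rates below are invented.
import random

random.seed(0)

# Simulate applicants: protected class ("A"/"B") correlates with a proxy
# feature (a hypothetical ZIP-code cluster), and historical approvals were
# biased against cluster 1.
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    zip_cluster = int(random.random() < (0.8 if group == "B" else 0.2))
    approved = random.random() < (0.3 if zip_cluster == 1 else 0.7)
    applicants.append((group, zip_cluster, approved))

# "Train" the simplest possible rule: approve whichever proxy value had the
# higher historical approval rate. The protected class is never an input.
def approval_rate(rows):
    return sum(approved for _, _, approved in rows) / len(rows)

decision = {
    cluster: approval_rate([r for r in applicants if r[1] == cluster]) >= 0.5
    for cluster in (0, 1)
}

# Audit outcomes by protected class: the disparity survives via the proxy.
for group in ("A", "B"):
    rows = [r for r in applicants if r[0] == group]
    rate = sum(decision[zc] for _, zc, _ in rows) / len(rows)
    print(f"group {group}: model approval rate = {rate:.2f}")

Running the sketch produces sharply different approval rates for the two groups even though group membership never enters the rule, which is exactly the correlation-with-protected-classes concern the statement flags.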
In addition, many automated systems are “black boxes” whose internal workings are not clear to most people, and in some cases not even to the tool’s developer; this lack of transparency can leave businesses and individuals unable to tell whether a system is fair.
Developers also do not always understand or account for the contexts in which entities will use their automated systems, the agencies noted, which could lead to a design based on flawed assumptions.
“[O]ur agencies reiterate our resolve to monitor the development and use of automated systems and promote responsible innovation,” the FTC, CFPB, DOJ and EEOC wrote. “We also pledge to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”