Whether employers realize it or not, Artificial Intelligence (“AI”) is currently used in most workplaces. Although AI can be tremendously beneficial in the right circumstances, it can also create significant liability for employers who do not leverage it appropriately.
Artificial Intelligence in the Workplace
AI is the use of machines to perform tasks traditionally performed by the human brain. It can take many forms. For instance, generative AI, like ChatGPT, can create documents or presentations from scratch. Algorithmic or decision-making AI uses algorithms to screen candidates, and video and voice recognition software can rate a candidate’s cultural fit with your organization. Conversational AI, or chatbots, can manage initial complaint intake or employee requests for information. Digital assistants can manage calendars, edit and grammar-check documents, and create transcripts or outlines of recorded meetings. The list goes on and on.
With the variety of AI options, the use of AI is creeping into workplaces, often without employers realizing they have implemented an AI tool. Additionally, employees may unilaterally use AI to complete their work. (“I’ll just send this super-sensitive document full of confidential information through the online grammar-check software to double-check my work!”) Given the widespread availability of AI and its increasingly frequent use in the workplace, employers should be aware of their potential liability and take steps to safeguard against it.
The Changing Law Related to Artificial Intelligence
At both the state and federal levels, lawmakers are taking notice of, and action regarding, AI at work. In October 2022, the White House published a “Blueprint for an AI Bill of Rights” that outlines five principles for the responsible use of AI, including preventing discrimination through algorithms, providing proactive notice and consent before using AI systems, and protecting against abusive data practices. Although the blueprint is not a law, it gives employers some idea of where policymakers are focused in this area.
In May 2022, the Equal Employment Opportunity Commission (“EEOC”) issued guidance regarding how the use of AI may violate the Americans with Disabilities Act. And, in January 2023, the EEOC published its draft Strategic Enforcement Plan, which included as its first priority addressing discrimination in hiring and recruitment processes through the use of AI and machine learning systems. In August 2023, the EEOC made good on that promise when it filed a first-of-its-kind lawsuit against iTutorGroup, Inc. for allegedly discriminating against female job seekers aged 55 or older and male job seekers aged 60 or older through its use of AI decision-making tools. The lawsuit settled for $365,000.
Although other jurisdictions, like Illinois and New York City, have enacted laws specifically governing the use of AI tools, California has not yet done so. However, the California Civil Rights Council has proposed modifications to the state’s anti-discrimination regulations that would make it unlawful to use AI to discriminate against job applicants or employees on the basis of a protected characteristic.
Potential Liability Created by Artificial Intelligence
Even without AI-specific laws, employers can face liability under our current legislative framework for inappropriate use of AI. There are four major areas of legal risk: discrimination, privacy, data security, and accuracy.
In addition to discrimination through decision-making tools, facial recognition tools trained on samples of light-skinned individuals may discriminate against job applicants and employees on the basis of race, because the tools fail to recognize the facial features of darker-skinned individuals. Similarly, voice recognition software that relies on databases of English spoken by white Americans may discriminate on the basis of race, ethnicity, and national origin if it fails to identify slang, dialects, accents, or English spoken as a second language.
California’s Constitution confers an “inalienable right” to pursue and obtain “privacy,” so an employer that implements AI tools such as employee-monitoring, productivity-tracking, or keystroke-logging software may violate its employees’ constitutional rights.
Using AI tools like transcription services, automatic note-taking, grammar checking, and prompted generative AI could lead to the disclosure of private, confidential, and/or proprietary information. An employer can safeguard against this risk by controlling which tools it authorizes employees to use and how. But an employee’s decision to use such tools independently could jeopardize an employer’s data security.
Finally, generative AI may draw its input data from inaccurate sources. Employers that fail to ensure the accuracy of such data and to verify the final AI-generated product take on substantial risk.
Lessons for Employers
Because AI is becoming more prevalent in the workplace, employers should be mindful of how they and their employees are using AI and should safeguard against potential liability. Employers must be strategic about implementing AI and should consider who will vet and approve the internal use of AI tools.
Employers should create employee policies regarding the use of AI, including clear guidelines for the appropriate use of approved AI, and prohibitions against independently using AI for work purposes. Employers should work with legal counsel to draft such a policy to ensure that it does not prevent employees from engaging in lawful off-duty conduct and does not run afoul of recent National Labor Relations Board restrictions targeting broad employer policies.
Employers also should review the factors that drive automated decision-making tools, test the tools before implementation, and re-test them periodically to ensure there are no unintended outcomes. Employers should direct job seekers and employees to escalate any concerns about the outcomes of decision-making tools, and should investigate and address those concerns where necessary, just as they would any other complaint of discrimination.