Artificial Intelligence is beginning to permeate our lives, both personally and professionally. The technology is developing at a rapid clip, while the law slowly catches up. One area where AI is emerging is the workplace, as employers begin adopting AI to screen job applicants and perform other tasks. Proponents of AI in the workplace hail the technology as unbiased compared with the human decision makers who would otherwise perform the same work. But skeptics warn that AI may still produce a disparate impact on individuals with protected characteristics, like race or gender. To date, there is little to no regulation of this technology, though the EEOC is working on guidance and direction for the use of AI in the workplace.
On January 31, 2023, the EEOC conducted a hearing titled “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier.” Witnesses at the hearing included professors of law and technology, lawyers representing various interests, a Vice President of the U.S. Chamber of Commerce, and an occupational psychologist. The witnesses presented the pros and cons of using this technology in the workplace, and the consensus was a push for transparency in how employers use it. An overarching question remains: how do we know whether the technology will or will not lead to biased results?
Some witnesses cautioned that the data analyzed by AI technology may reflect systemic racism. Examples include criminal records and credit histories, which have historically overrepresented people of color. Moreover, candidates in protected categories may be underrepresented in the data. For example, transgender individuals may use pronouns online that do not match their government-issued documents, thereby limiting the data available to the technology.
Others cautioned that AI programs can develop biases of their own. Even if an employer never explicitly states a discriminatory preference in candidate selection, the software could “learn” a discriminatory preference from the employer’s past practices.
The counterargument to this approach is: why not just eliminate protected-class information from the technology altogether? The response articulated during the hearing was that this is not enough, because the AI can still discriminate by relying on what are called “proxy variables.” An example of a proxy variable that could lead to a biased result is a person’s zip code, which can be telling of an individual’s race, ethnicity, and even age. Testing for proxy variables involves statistical analysis; a simplified illustration follows.
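To make the proxy-variable concern concrete, here is a minimal, hypothetical Python sketch. The applicant records, zip codes, and numbers are all invented, and the screening rule never sees race; yet because zip code correlates with race in the made-up data, selection rates still diverge by race:

```python
# Illustrative sketch only: a screening rule that never "sees" race can still
# produce racially skewed results when it relies on a proxy variable such as
# zip code. All applicant records, zip codes, and numbers are invented.

from collections import Counter

# Hypothetical applicant pool. Race is recorded here solely so we can audit
# the outcome afterwards; the screening rule below never looks at it.
applicants = [
    {"zip_code": "60601", "race": "white"},
    {"zip_code": "60601", "race": "white"},
    {"zip_code": "60601", "race": "white"},
    {"zip_code": "60636", "race": "white"},
    {"zip_code": "60601", "race": "black"},
    {"zip_code": "60636", "race": "black"},
    {"zip_code": "60636", "race": "black"},
    {"zip_code": "60636", "race": "black"},
]

# A facially neutral rule: advance applicants from a "preferred" zip code.
PREFERRED_ZIPS = {"60601"}

def screen(applicant):
    return applicant["zip_code"] in PREFERRED_ZIPS

# Audit: selection rate by race, even though race was never an input.
selected, total = Counter(), Counter()
for a in applicants:
    total[a["race"]] += 1
    if screen(a):
        selected[a["race"]] += 1

for race in total:
    rate = selected[race] / total[race]
    print(f"{race}: {selected[race]}/{total[race]} advanced ({rate:.0%})")
# In this invented pool the rule advances 75% of white applicants but only
# 25% of Black applicants, a disparity driven entirely by the zip-code proxy.
```

Real-world proxy testing is far more involved, but the underlying idea is the same: audit outcomes by protected class even when the protected attribute is never an input to the tool.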
There is also the question of the quality of the data analyzed. Critics caution that it is imperative that the data measure job-related skills or attributes. This also raises the issue of correlation. For example, AI may reveal that positive job performance correlates with low employee turnover (i.e., employees who have been with their employer for extended periods of time). Yet critics point out the need to look beyond this correlation and test whether it is really a proxy for a biased outcome against a protected class, like people with disabilities or women of childbearing age, groups whose members sometimes have breaks in employment. A simple version of that check is sketched below.
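As a purely hypothetical illustration of that kind of check, the sketch below asks whether an employment-gap feature, which a tool rewarding long, unbroken tenure would implicitly penalize, is itself distributed unevenly across groups. All records and numbers are invented:

```python
# Illustrative sketch only: before letting a tool reward long, unbroken tenure,
# check whether employment gaps are unevenly distributed across groups.
# All candidate records and numbers below are invented.

candidates = [
    {"has_gap": False, "group": "men"},
    {"has_gap": False, "group": "men"},
    {"has_gap": False, "group": "men"},
    {"has_gap": True,  "group": "men"},
    {"has_gap": True,  "group": "women"},
    {"has_gap": True,  "group": "women"},
    {"has_gap": False, "group": "women"},
    {"has_gap": False, "group": "women"},
]

# Rate of employment gaps within each group.
for group in sorted({c["group"] for c in candidates}):
    members = [c for c in candidates if c["group"] == group]
    gap_rate = sum(c["has_gap"] for c in members) / len(members)
    print(f"{group}: {gap_rate:.0%} have an employment gap")
# Invented output: 25% of men vs. 50% of women have a gap, so a tool that
# penalizes gaps could disadvantage women even though "continuous tenure"
# looks like a neutral, job-related signal.
```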
Critics also warn employers to be cognizant of how job-matching platforms may lead to biased results, since the matching and ranking algorithms those platforms use can themselves skew which candidates are surfaced.
As is evident, there are multiple layers of consideration when determining whether AI is helpful in the workplace, particularly regarding recruitment efforts. Like many complex issues, the answer is often, “it depends.” We offer these key considerations, which were discussed during the EEOC hearing, as employers assess the use of AI technology for recruitment:
- Are we leveraging AI technology that is vendor-tested for disparate impact results? And is the technology continuously monitored and scrutinized for disparate impact results?
- Are we aware of the type and quality of data being analyzed by AI technology?
- Are there less discriminatory alternatives to AI?
- Are we paying attention to patterns in the results and adjusting for those patterns?
- Are we disclosing to candidates and/or employees when this technology is being used and for what purpose?
- Are we engaging in any auditing of the technology? (A simplified selection-rate audit is sketched after this list.)
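One widely used starting point for that kind of audit is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if one group’s selection rate is less than 80% of the highest group’s rate, the tool generally warrants closer scrutiny for adverse impact. The Python sketch below runs that check on invented numbers; it is an illustration, not legal advice, and real audits pair this ratio with more rigorous statistical testing:

```python
# Illustrative sketch of a "four-fifths rule" selection-rate audit. The counts
# below are invented; a real audit would use actual applicant-flow data and
# pair this ratio with more rigorous statistical tests.

# Hypothetical applicant and selection counts by group for one screening tool.
outcomes = {
    "group_a": {"applied": 200, "selected": 60},
    "group_b": {"applied": 150, "selected": 30},
}

# Selection rate for each group, compared against the highest-rate group.
rates = {g: c["selected"] / c["applied"] for g, c in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "REVIEW" if impact_ratio < 0.80 else "ok"
    print(f"{group}: rate={rate:.0%}, ratio vs. highest={impact_ratio:.2f} [{flag}]")
# Invented result: group_a is selected at 30%, group_b at 20%. The 0.67 impact
# ratio falls below the 4/5 (0.80) benchmark, so this hypothetical tool would
# warrant closer review for disparate impact.
```

In practice, vendors and employers would run this comparison separately for each protected characteristic (race, sex, age, and so on) and on real applicant-flow data.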
As employers consider adding AI to their repertoire of recruiting tools, it is worth considering the future impact: it is entirely possible that employers will have to show due diligence in their use of and reliance on AI and provide transparency to candidates and employees about when (and why) they use the technology.
While there is much to consider, and much to be gained from leveraging AI in the workplace, we must also be mindful and continue to educate ourselves on the technology as it evolves. We will continue to monitor and update you on developments in AI in the workplace.