EEOC SETTLES FIRST AI DISCRIMINATION LAWSUIT
At the beginning of August, the Equal Employment Opportunity Commission (EEOC) reached its first-ever settlement in a case involving artificial intelligence (AI) discrimination in the workplace. It may be the first case of its kind, but it likely will not be the last.
As revealed in an August 9th filing, the EEOC entered into a settlement under which iTutorGroup, a tutoring company, agreed to pay $365,000 to resolve charges related to the company’s use of an AI-powered hiring tool. The claimants alleged that the tool automatically rejected female applicants who were over the age of 55 and male applicants who were over the age of 60.
How did this case come about? An applicant who was initially rejected for a position with the tutoring company reapplied using the same resume but a birthdate modified to reflect a younger age, and was selected for an interview on the second application. The applicant filed a claim with the EEOC, which, after investigating and attempting to resolve the claim, sued the company for age and gender discrimination on behalf of more than 200 qualified applicants whom iTutorGroup’s AI-powered hiring selection tool had allegedly screened out based on age.
Under the terms of the settlement, in addition to paying $365,000 to the group of more than 200 rejected applicants, iTutorGroup agreed to adopt anti-discrimination policies and to conduct training for all employees on equal opportunity laws. The company is also required to consider all previous applicants who were allegedly rejected because of their age.
This case is important for employers for two reasons.
First, this is a first-of-its-kind settlement for the EEOC. The agency recently launched a broader initiative to ensure that AI tools used by employers comply with anti-discrimination laws. This settlement makes the EEOC’s enforcement intentions very clear, and employers are almost certain to see more cases like this one in the near future.
Second, the use of AI by employers has grown exponentially in recent years. According to a recent study, around 80% of employers surveyed use some form of AI during the hiring process. As the use of this still-emerging technology grows, so does the likelihood that claims and legal actions will be filed. Any employer that uses or is considering AI in the workplace, especially in the HR context, needs to make sure it remains compliant with anti-discrimination laws at all times.
There are some steps that employers can take to remain compliant while still using AI tools in their workplace.
Before contracting to use any kind of AI in the workplace, a company should complete diversity testing on the product to confirm that the software does not discriminate against any protected group. Any AI tool in regular use, especially for AI-driven decision-making, should be reviewed and monitored on an ongoing basis. Some jurisdictions have already implemented regulations that require companies to conduct AI bias audits. Regardless of any legal requirement, a company that already uses or intends to use AI should consider conducting AI bias audits on a regular basis; they are a great tool for discovering unintentional discrimination.
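For illustration only, the sketch below shows one common check that a bias audit might include: comparing selection rates across applicant groups and flagging any group whose rate falls below four-fifths (80%) of the highest group’s rate, a rule of thumb drawn from EEOC adverse-impact guidance. The group labels and sample data are hypothetical assumptions; a real audit would run on the company’s own applicant records and typically involve counsel and a qualified auditor.

```python
# Hypothetical illustration of one adverse-impact check used in AI bias audits.
# The applicant data and group labels below are made-up examples, not real records.

from collections import defaultdict

FOUR_FIFTHS = 0.8  # EEOC "four-fifths" rule of thumb for flagging adverse impact

# Each record: (group label, whether the AI tool advanced the applicant)
applicants = [
    ("under_40", True), ("under_40", True), ("under_40", False),
    ("40_to_54", True), ("40_to_54", False),
    ("55_and_over", False), ("55_and_over", False), ("55_and_over", True),
]

def selection_rates(records):
    """Return the share of applicants the tool advanced, per group."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, advanced in records:
        total[group] += 1
        selected[group] += int(advanced)
    return {group: selected[group] / total[group] for group in total}

def flag_adverse_impact(rates, threshold=FOUR_FIFTHS):
    """Flag groups whose selection rate is below `threshold` times the highest rate."""
    highest = max(rates.values())
    return {group: rate for group, rate in rates.items() if rate < threshold * highest}

rates = selection_rates(applicants)
print("Selection rates by group:", rates)
print("Groups flagged for review:", flag_adverse_impact(rates))
```

A flag from a check like this does not by itself establish discrimination, but it tells the company where to look more closely before the tool makes, or keeps making, hiring decisions.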
Companies also need to thoroughly train staff on any AI tool they adopt that touches employment or human resources. Your HR team should have the knowledge and skills to use AI tools without inadvertently perpetuating biases. Companies should provide regular training on anti-discrimination laws and make sure that HR professionals using AI tools receive support and guidance. Before implementing any AI tool, a company should also create and roll out a well-thought-out AI policy.
Along with a proper policy, companies should create an environment of open communication. Employees should feel comfortable voicing concerns about AI and should be able to challenge any AI-driven decision they believe may be biased or out of line with anti-discrimination laws. Encourage the people within the company who use the AI tools, and even applicants who interact with them, to provide feedback about their experiences. That feedback helps a company spot potential issues with its AI tools and helps ensure it remains compliant with all applicable laws.
Companies should seek out professional expertise when implementing and using AI tools and should stay up to date on the ever-changing field of AI in the workplace. Remember that workplace AI tools are new and imperfect and can never fully replace humans, particularly in human resources. HR staff should continue to play a vital role in workplace decisions; AI tools should supplement and support your HR team, not replace it.