Introduction
This case highlights the challenges and potential legal implications of AI-driven hiring technologies, particularly discrimination against marginalized groups such as Indigenous, Deaf, and disabled individuals [2]. It underscores the need for employers to ensure fairness and inclusivity in their hiring processes, especially when using AI tools.
Description
DK, an Indigenous and Deaf woman, applied for a promotion to Seasonal Manager at Intuit, where she has worked since 2019 and has received positive feedback and bonuses [1] [2]. She completed an automated video interview administered by HireVue, an HR technology company, but reported significant accessibility issues with the platform, particularly the lack of accurate captioning [1]. Her request for human-generated captioning was denied; she claims that the reliance on less accurate automated captions hindered her comprehension and performance, resulting in negative feedback on her communication skills [1]. DK attributes these challenges to the accessibility barriers she faced during the interview process and alleges that Intuit’s failure to accommodate her needs constitutes discrimination based on disability, as it disadvantaged her during the promotion process [1].
Furthermore, DK argues that HireVue’s AI tools discriminated against her because of her Indigenous ethnicity, asserting that automated speech recognition systems often misinterpret the speech patterns of non-white, accented speakers [1]. Her complaint cites violations of the Colorado Anti-Discrimination Act, the Americans with Disabilities Act, and Title VII of the Civil Rights Act [2] [3] [4], emphasizing the systemic discrimination perpetuated by biased AI technology in hiring processes. DK also references Colorado’s AI Consumer Protection Act, which offers protections against discrimination based on disability and race, bolstering her federal claims [1].
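Claims of uneven speech recognition accuracy are typically evaluated by comparing word error rate (WER) across speaker groups. The sketch below is a minimal illustration of that comparison using hypothetical transcript data; the group labels, sample records, and `word_error_rate` helper are assumptions for demonstration, not details from the complaint.

```python
from collections import defaultdict

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical audit records: (speaker group, human reference, ASR output).
samples = [
    ("group_a", "please describe your experience", "please describe your experience"),
    ("group_b", "please describe your experience", "police described her experience"),
]

wer_by_group = defaultdict(list)
for group, ref, hyp in samples:
    wer_by_group[group].append(word_error_rate(ref, hyp))

for group, rates in wer_by_group.items():
    print(f"{group}: mean WER = {sum(rates) / len(rates):.2f}")
```

A materially higher mean WER for one group is precisely the kind of disparity the complaint alleges automated captioning produced for Deaf and non-white speakers.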
Intuit denies the allegations, asserting that it provides reasonable accommodations, while HireVue claims that no AI-backed assessment was used in this hiring process [1] [4]. The Equal Employment Opportunity Commission (EEOC) and the Colorado Civil Rights Division are currently investigating the allegations to determine whether there is sufficient cause for a finding of discrimination. If cause is found, mediation may follow; if mediation is unsuccessful, DK may file a complaint in court within 90 days of receiving a right-to-sue letter [1]. She also has the option to withdraw her claims or to request a right-to-sue letter after a no-cause finding [1].
This case highlights the intersection of AI hiring technologies and anti-discrimination legal frameworks, underscoring potential legal risks for employers using AI-driven hiring systems that may disadvantage protected groups [1]. Similar allegations have emerged against other companies regarding discrimination against Black, older, and disabled applicants due to algorithmic screening practices, with motions filed for national class actions [1]. The ACLU has also lodged complaints against AI screening tools for their discriminatory impact on applicants with disabilities and certain racial backgrounds [1].
To mitigate these risks, organizations should conduct regular adverse impact assessments and accessibility audits, review vendor agreements for bias-free solutions, and train HR teams on AI biases and legal requirements [1]. Employers must allow for manual review requests during interviews and establish clear pathways for applicants needing accommodations [1]. Legal representatives emphasize that companies cannot evade accountability for discrimination by relying on AI technologies and are committed to ensuring that AI-based hiring tools adhere to standards of fairness and inclusivity, particularly for marginalized groups such as Deaf, disabled, and Indigenous individuals [1] [2].
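A common starting point for an adverse impact assessment is the EEOC’s four-fifths rule: if any group’s selection rate falls below 80% of the highest group’s rate, the outcome is generally treated as evidence of adverse impact. The following sketch shows the basic calculation; the applicant counts are hypothetical.

```python
# Hypothetical promotion data: group -> (applicants, selected).
outcomes = {
    "group_a": (200, 60),   # selection rate 0.30
    "group_b": (150, 30),   # selection rate 0.20
}

rates = {g: sel / apps for g, (apps, sel) in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "POTENTIAL ADVERSE IMPACT" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {flag}")
```

Here group_b’s impact ratio is 0.67, below the 0.8 threshold, which would prompt a closer review of the screening step that produced the gap.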
Employers must stay informed about evolving state and local regulations, particularly as federal oversight diminishes, and remain responsible for any discriminatory outcomes produced by AI tools, even those developed by third-party vendors [4]. Tech companies are urged to develop non-discriminatory AI tools by refining algorithms to accurately assess diverse communication styles and dialects [3]. Ongoing testing and validation are essential to ensure equitable treatment across demographic segments, and transparency in decision-making processes is crucial [3]. Collaboration with advocacy groups and regulatory bodies can enhance the fairness and reliability of AI hiring tools [3].
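One way to operationalize such ongoing validation is a recurring statistical check on selection rates, for example a two-proportion z-test run each hiring cycle. This is a minimal sketch under assumed monitoring counts; the group sizes and the 0.05 significance threshold are illustrative choices, not requirements drawn from the sources.

```python
import math

def selection_gap_z_test(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> tuple[float, float]:
    """Two-proportion z-test for a gap in selection rates between two groups."""
    p_a, p_b = selected_a / total_a, selected_b / total_b
    pooled = (selected_a + selected_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical quarterly monitoring run.
z, p = selection_gap_z_test(selected_a=60, total_a=200,
                            selected_b=30, total_b=150)
print(f"z = {z:.2f}, p = {p:.3f}")
if p < 0.05:
    print("Statistically significant selection-rate gap; investigate before next cycle.")
```

Logging these results each cycle also supports the transparency that regulators and advocacy groups increasingly expect from automated hiring systems.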
Conclusion
The case of DK against Intuit and HireVue serves as a critical reminder of the potential for AI-driven hiring technologies to perpetuate discrimination if not carefully managed. It emphasizes the importance of developing and implementing AI tools that are fair, inclusive, and free from bias [2] [3]. Employers, developers, and regulatory bodies must work collaboratively to ensure that AI contributes positively to diversity and inclusion in the workplace, safeguarding the rights of all individuals, particularly those from marginalized communities [1] [3] [4].
References
[1] https://www.jdsupra.com/legalnews/ai-screening-systems-face-fresh-8284545/
[2] https://www.aclu.org/press-releases/complaint-filed-against-intuit-and-hirevue-over-biased-ai-hiring-technology-that-works-worse-for-deaf-and-non-white-applicants
[3] https://hrcurated.com/hr-tech/is-ai-hiring-software-discriminating-against-deaf-and-non-white-workers/
[4] https://www.hrdive.com/news/ai-intuit-hirevue-deaf-indigenous-employee-discrimination-aclu/743273/