Your recruiting software is likely using AI even when it is not marketed that way: "smart matching," "automated screening," and "predictive analytics" are often hidden AI. Likewise, voice-based screening, video analysis, resume parsing, and chatbot interviews all create unique discrimination risks that traditional hiring methods do not.
Case Background
In May 2025, the Northern District of California certified Mobley v. Workday as a collective action, making it one of the first major lawsuits to challenge AI-driven applicant screening for potential bias against protected groups. The suit targets Workday's AI-powered tools used by employers to screen, rank, and recommend job candidates.
Key Allegations
- Plaintiff Derek Mobley, a Black man over 40 who suffers from anxiety and depression, applied to more than 100 positions between 2017 and 2024 on employer sites powered by Workday's platform. Each time he was rejected, often within hours, without human review.
- Mobley contends that Workday’s algorithms “embed artificial intelligence and machine learning” trained on biased data, resulting in a disparate impact on applicants based on race, age, and disability.
- He asserts claims under Title VII, the Age Discrimination in Employment Act (ADEA), the Americans with Disabilities Act (ADA), Section 1981, and California’s Fair Employment and Housing Act (FEHA) for both intentional discrimination and disparate-impact liability.
Procedural Posture
- In July 2024, the court denied Workday's motion to dismiss Mobley's First Amended Complaint, holding that Workday could be liable as an "agent" of its employer-licensees under anti-discrimination statutes.
- On June 11, 2025, the court granted conditional certification of the ADEA claim, finding that Workday's uniform use of its AI recommendation system across clients placed rejected applicants "on equal footing" and justified collective treatment.
- Workday disclosed that its tools rejected over 1.1 billion applications in the relevant period, suggesting the collective could number in the hundreds of millions.
Implications for AI in Hiring
This case has further galvanized HR and legal leaders to:
- Audit AI hiring tools for bias and demand transparency on training data and algorithmic design (a simple first-pass adverse-impact check is sketched after this list).
- Establish governance frameworks that include human oversight, bias-testing protocols, and vendor accountability.
- Recognize that outsourcing screening to AI does not shield employers from liability under federal and state anti-discrimination laws.
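For the audit item above, one widely used first-pass screen is the EEOC's "four-fifths" rule of thumb: compare each group's selection rate to the highest-scoring group's rate, and treat any ratio below 0.8 as a red flag warranting deeper statistical review. Here is a minimal sketch in Python, assuming you can export per-group applicant and selection counts from your applicant tracking system; the group labels and counts below are hypothetical.

```python
# Minimal adverse-impact check (EEOC "four-fifths" rule of thumb).
# Assumes per-group applicant and selection counts exported from an ATS;
# the group labels and numbers used here are illustrative only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who passed the screen."""
    return selected / applicants if applicants else 0.0

def adverse_impact_ratios(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate.

    A ratio below 0.8 is the conventional red flag for disparate impact
    and a signal to demand more rigorous statistical validation.
    """
    rates = {g: selection_rate(sel, apps) for g, (sel, apps) in counts.items()}
    top = max(rates.values())
    return {g: (r / top if top else 0.0) for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes: group -> (selected, applicants)
    counts = {
        "group_a": (120, 400),  # 30% selection rate
        "group_b": (45, 300),   # 15% selection rate
    }
    for group, ratio in adverse_impact_ratios(counts).items():
        flag = "FLAG" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

A failing ratio is not proof of discrimination, and a passing one is not a defense; it simply tells you where to press your vendor for documented validation testing.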
My Two Cents
If your vendor cannot explain why its AI rejected a candidate, you cannot defend the employment decision. Assume bias exists until the vendor produces documented testing that shows otherwise.
This is not a call to avoid AI. This is another reminder to choose only those vendors who treat compliance as non-negotiable. Organizations conducting proper due diligence now will avoid the legal disasters coming for those who do not.
David Seidman is the principal and founder of Seidman Law Group, LLC. He serves as outside general counsel for companies, handling a diverse range of corporate, dispute resolution and avoidance, contract drafting and negotiation, and other issues. He also has significant experience in hospitality law, representing third-party management companies, owners, and developers.
He can be reached at david@seidmanlawgroup.com or 312-399-7390.
This blog post is not legal advice. Please consult an experienced attorney to assist with your legal issues.
Photo Credit: Pixentia