The Workday lawsuit isn’t just a story about one company. It’s a signal to every HR leader deploying AI in hiring — and a reminder that opacity carries a price.
A federal court in California is currently hearing what may become one of the most consequential employment discrimination cases in recent history. Not because of what a manager said in a meeting. Not because of a biased job advert. But because of what an algorithm decided — automatically, at scale, across more than a billion applications.
The case is Mobley v. Workday, Inc. And if you work in HR or Talent Acquisition, it should be required reading.
What Happened
Derek Mobley applied for dozens of roles at companies using Workday’s AI-powered hiring platform. Every one of those applications was rejected — in some cases within minutes of submission. As an African-American man over the age of 40, Mobley alleged that Workday’s automated screening tools were not simply sorting candidates. They were systematically deprioritising people based on protected characteristics: age, race, and disability.
The lawsuit claims that Workday’s AI models were trained on data from incumbent employees — workforces that, in many organisations, skew younger, less diverse, and less representative of the broader labour market. By learning from historical hiring patterns, the algorithm may have encoded those patterns as preferences. Not by design, but by inheritance.
The result, the complaint alleges, was a system that rejected candidates before a human ever reviewed their application — and did so in ways that disproportionately harmed people in protected groups.
The algorithm didn’t intend to discriminate. But intent, as the court has made clear, is not the threshold for liability.
The Legal Shift That Changes Everything
What makes this case genuinely significant — beyond the specifics of Workday — is a ruling the court issued in July 2024.
The judge determined that Workday could be held liable as an ‘agent’ of the employers using its platform. In practice, this means that technology vendors can no longer position themselves as neutral infrastructure. If your AI makes a consequential hiring decision, you share accountability for that decision.
For HR leaders, the implication is direct: you cannot outsource legal exposure to your vendor. The tools you deploy, and how they work, remain your responsibility.
The court subsequently authorised the lawsuit to proceed as a nationwide collective action — covering anyone aged 40 or older who applied through Workday’s platform from September 2020 onwards. Given that Workday’s own filings reference more than 1.1 billion processed applications, the potential scale of this case is without precedent.
The Mechanism Behind the Bias
Understanding the ‘how’ is more useful than dwelling on the ‘what’. The core allegation is not that someone programmed the system to discriminate. It’s that the system learned from data that was already skewed — and then amplified that skew at scale.
When AI models are trained on the hiring decisions of existing organisations, they absorb the preferences those organisations have historically expressed. That might mean favouring certain educational backgrounds. Certain career trajectories. Certain resume formats. Certain gaps — or their absence.
None of these are protected characteristics. But their correlation with age, race, or disability can produce outcomes that the law treats as discrimination, regardless of intent. This is the legal concept of ‘disparate impact’ — and the court has confirmed it applies here.
The practical lesson: an AI system that cannot explain why it ranked or rejected a candidate is a system that cannot be audited, challenged, or defended.
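To make ‘disparate impact’ concrete, it helps to see how it is typically measured. The sketch below uses purely illustrative numbers (nothing from the case itself) to compute group selection rates and the adverse impact ratio behind the EEOC’s four-fifths rule:

```python
from collections import Counter

# Hypothetical screening outcomes: (age_band, advanced_past_screen).
# The numbers are illustrative only, not drawn from the case.
outcomes = (
    [("under_40", True)] * 620 + [("under_40", False)] * 380
    + [("40_plus", True)] * 410 + [("40_plus", False)] * 590
)

applied = Counter(group for group, _ in outcomes)
passed = Counter(group for group, advanced in outcomes if advanced)

# Selection rate per group: the share of applicants the screen advances.
rates = {group: passed[group] / applied[group] for group in applied}

# Adverse impact ratio: lowest selection rate over the highest.
# Under the EEOC's four-fifths rule, a ratio below 0.8 will generally
# be regarded as evidence of adverse impact; intent never enters it.
impact_ratio = min(rates.values()) / max(rates.values())

for group, rate in sorted(rates.items()):
    print(f"{group}: selection rate {rate:.1%}")
print(f"impact ratio: {impact_ratio:.2f} (four-fifths threshold: 0.80)")
```

Note what the calculation never asks: why any individual was rejected, or what anyone intended. It runs entirely on logged outcomes, which also means an organisation that does not retain those outcomes cannot run the check at all.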
What HR Leaders Need to Ask Right Now
This case should prompt a set of specific, operational questions about any AI hiring tool currently in use:
What data was this system trained on — and how representative is it of the candidate population we serve?
When this tool ranks or rejects a candidate, can it explain its reasoning in terms a recruiter — and a regulator — could follow?
Do our processes include a meaningful human review stage before automated decisions become final?
Are we documenting our hiring decisions in a way that would withstand audit or legal scrutiny? (A sketch of what such a record might contain follows this list.)
Does our contract with this vendor define their liability in the event of discriminatory outcomes?
These are not theoretical questions. They are the questions that determine whether an organisation is exposed or protected as the legal landscape continues to shift.
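On the documentation question, here is one hedged illustration of the minimum an auditable record might capture. Everything below is a sketch with hypothetical field names, not a description of Workday’s product or any regulator’s mandated format:

```python
from __future__ import annotations

import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class ScreeningDecisionRecord:
    """One append-only audit entry per automated screening decision.

    Field names are hypothetical; adapt them to your own systems
    with counsel's guidance.
    """
    application_id: str
    model_version: str           # which model or ruleset produced the decision
    decision: str                # e.g. "advance" or "reject"
    top_factors: list[str]       # human-readable reasons, in rank order
    human_reviewer: str | None   # stays None until a person signs off
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ScreeningDecisionRecord(
    application_id="app-00017",
    model_version="screen-model-2024.07",
    decision="reject",
    top_factors=["required certification missing", "location mismatch"],
    human_reviewer=None,
)

# JSON Lines in append-only storage are straightforward to retain,
# query, and hand to an auditor.
print(json.dumps(asdict(record)))
```

The field that matters most is top_factors: if a system cannot populate it with reasons a recruiter could read aloud, the explainability problem exists no matter how diligently everything else is logged.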
The Audit Trail Is No Longer Optional
Across the jurisdictions most relevant to enterprise HR — the US, the UK, and the EU — the regulatory direction is consistent. AI hiring tools must be auditable. Decision logic must be documented. Human oversight must be demonstrable, not assumed.
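What ‘demonstrable’ can mean in practice: the automated output is stored as a recommendation, and nothing becomes final until a named person acts on it. A minimal sketch, again with hypothetical names:

```python
from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ScreeningOutcome:
    application_id: str
    model_recommendation: str            # e.g. "reject" or "advance"
    reviewer: str | None = None          # a named person, not a team alias
    reviewer_decision: str | None = None
    finalised_at: str | None = None

def finalise(outcome: ScreeningOutcome, reviewer: str, decision: str) -> ScreeningOutcome:
    """No automated recommendation becomes final without a named reviewer.

    Recording the reviewer's identity and decision next to the model's
    recommendation lets an auditor see where humans agreed, where they
    overrode the system, and how often review was a rubber stamp.
    """
    outcome.reviewer = reviewer
    outcome.reviewer_decision = decision
    outcome.finalised_at = datetime.now(timezone.utc).isoformat()
    return outcome
```

A review stage whose decisions never diverge from the model’s is itself an audit finding: oversight that cannot disagree is oversight in name only.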
New York City’s Local Law 144 already mandates annual bias audits for automated employment decision tools. Colorado, Illinois, and California are at various stages of implementing or considering comparable requirements. The EU AI Act classifies AI systems used in employment and recruitment as high-risk — subject to conformity assessments, transparency obligations, and human oversight requirements.
The compliance floor is rising. Organisations that treat auditability as a feature of forward-thinking governance are not ahead of the curve — they are meeting the baseline.
An AI system that cannot explain its decisions cannot be defended. And a process that cannot be defended should not be running at scale.
The Structural Question
The Workday case is not an argument against AI in hiring. Algorithmic tools, applied well, can reduce the inconsistencies and cognitive shortcuts that make human-only processes unreliable. The case is an argument for a specific kind of AI — one that is explainable, auditable, and designed with fairness as a structural requirement rather than a downstream consideration.