
AI Law and Policy Navigator

AI Hiring Under Fire: What the Eightfold Lawsuit Means for Every Employer Using Algorithmic Screening

By Andrew R. Lee, Jason M. Loring
February 25, 2026

A January 2026 class action alleges that Eightfold AI scraped personal data on over one billion workers, scored job applicants on a zero-to-five scale, and discarded low-ranked candidates before a human being ever saw their applications. And the plaintiffs allege that all of this occurred without the disclosures required by the Fair Credit Reporting Act, 15 U.S.C. § 1681 et seq. (FCRA), which mandates specific procedures when companies compile “consumer reports” for employment decisions. The lawsuit, brought by former EEOC chair Jenny R. Yang and the nonprofit Towards Justice, doesn't claim the algorithm was biased. It claims the algorithm existed in secret. 

That distinction matters. The Eightfold case isn't another AI discrimination lawsuit. It's a consumer protection action that reframes how plaintiffs can attack automated hiring. And it arrives at the worst possible moment for employers caught in what we've described as the AI vendor "liability squeeze": courts expanding vendor accountability while contracts shift risk to customers, widening the gap between who controls the algorithm and who pays when it fails.

The Case: FCRA as the New Weapon

Kistler et al. v. Eightfold AI Inc., filed January 20 in California's Contra Costa County Superior Court, targets the AI hiring platform used by Microsoft, PayPal, Morgan Stanley, Starbucks, Chevron, Bayer, and many others. The named plaintiffs, Erin Kistler and Sruti Bhaumik, both California residents with STEM backgrounds, applied to jobs through URLs containing "Eightfold.AI," were never interviewed, and never advanced.

The allegations state that Eightfold scraped social media profiles, location data, internet activity, and tracking data far beyond what candidates submitted. Its AI generated "Match Scores" that ranked applicants from zero to five. Lower-ranked candidates were filtered out before any human reviewed their applications. Applicants were never told their data was being compiled, never given copies of the reports, and never offered the chance to dispute errors. These are protections the FCRA has required of consumer reporting agencies since 1970.

“I think I deserve to know what’s being collected about me and shared with employers. And they’re not giving me any feedback, so I can’t address the issues.”  

  Erin Kistler, Plaintiff, quoted in the New York Times

The plaintiffs' FCRA theory doesn't require proving the algorithm was biased. Plaintiffs need only show that Eightfold compiled "consumer reports" without following mandatory procedures. FCRA provides statutory damages of $100 to $1,000 per willful violation with robust class mechanisms. When your database covers a billion profiles, the arithmetic can rapidly get uncomfortable.
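To see how quickly that arithmetic compounds, consider a rough back-of-the-envelope sketch. The per-violation range ($100 to $1,000 for willful violations) comes from the FCRA's statutory damages provision; the class sizes below are hypothetical round numbers for illustration, not figures from the Eightfold complaint.

```python
# Illustrative only: back-of-the-envelope FCRA statutory damages exposure.
# The $100-$1,000 per-violation range for willful violations is set by
# 15 U.S.C. section 1681n; the class sizes here are hypothetical.

def fcra_exposure(class_size: int,
                  low: int = 100,
                  high: int = 1_000) -> tuple[int, int]:
    """Return (minimum, maximum) statutory damages for a willful-violation class."""
    return class_size * low, class_size * high

for members in (10_000, 1_000_000):
    lo, hi = fcra_exposure(members)
    print(f"{members:>9,} class members: ${lo:,} to ${hi:,}")
```

Even a modest 10,000-member class produces a statutory floor of $1 million before attorneys' fees, and the figures scale linearly with class size.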

Eightfold denies the allegations, stating the platform "operates on data intentionally shared by candidates or provided by our customers."

The Pincer Movement: Eightfold Meets Workday

Read Eightfold alongside Mobley v. Workday, and a coherent theory of AI vendor liability emerges. In Mobley, Judge Rita Lin (N.D. Cal.) held that Workday acted as an “agent” of the employers using its automated screening: the company was not merely providing a neutral tool but performing a function traditionally handled by human employees, a designation that triggers direct liability under the ADEA. Derek Mobley had applied to over 100 jobs through Workday over seven years and was rejected within minutes each time. The case achieved preliminary nationwide collective action certification in May 2025, potentially covering millions of applicants over 40. Human bias is retail; algorithmic bias is wholesale.

Together, these cases form a pincer. Workday says the vendor is an agent liable for discrimination. Eightfold says the vendor is a consumer reporting agency subject to transparency mandates. One attacks outcomes; the other attacks process. Both point the same direction: AI hiring vendors may no longer be able to hide behind the argument that they merely provide tools.

The Squeeze: Who Pays When It Fails?

As we previously discussed, 88% of AI vendors cap their own liability (often to monthly subscription fees) while only 17% warrant regulatory compliance. Map that onto Eightfold: an employer's platform scrapes data from sources the employer doesn't know about, scores candidates using logic the employer can't examine, and filters applicants before any human reviews their file. When the class action lands, the vendor agreement caps liability, disclaims compliance warranties, and restricts algorithmic audits.

The employer is legally responsible for outcomes it cannot control, generated by data it cannot audit, processed through logic it cannot understand.

A regulatory twist compounds the problem. The CFPB's 2024 guidance, which stated that algorithmic employment scores are FCRA-covered, was rescinded in 2025. But rescinding guidance doesn't change the statute. Private plaintiffs are now the primary enforcement mechanism for the same theory that the CFPB endorsed. The regulatory retreat makes private litigation more likely, not less.

And the state patchwork (Colorado's bias audits, Illinois's disparate-impact standard, NYC's annual audit mandate) creates additional exposure. Non-compliance with those frameworks becomes evidence of negligence in private litigation, even where the AI statute itself lacks a private right of action. (Read “Whose Rules Govern the Algorithmic Boss?”) 

What to Do Now

Re-paper your vendor contracts. Require transparency on data sources, negotiate independent bias and FCRA compliance audit rights, demand training data indemnities, and carve regulatory fines and class-action settlements out of standard liability caps. If the vendor's data practices create your FCRA exposure, the contract should reflect that.

Build governance infrastructure. Cross-functional AI hiring oversight (HR, legal, IT, compliance), vendor due diligence before procurement, periodic EEOC four-fifths rule analyses, and a formal protocol for responding to regulatory or plaintiff inquiries about training data and model behavior.

Document as you go. The companies best positioned in this environment are the ones that can explain how their AI hiring tools work, what data feeds them, and what steps they've taken to verify accuracy and fairness. Policies, impact assessments, vendor due diligence files, and human override logs are not just compliance artifacts. They are evidence that your organization takes these obligations seriously.

Audit your current AI hiring deployments. Many organizations don't have a comprehensive inventory of which roles use AI screening, which vendors power those tools, or what data sources feed the algorithms. Before you can fix the problem, you need to map it.

Key Takeaways

  1. The Eightfold lawsuit rewrites the playbook. By framing AI hiring scores as consumer reports, plaintiffs bypass the high bar of proving bias. Every employer using algorithmic screening should assume FCRA scrutiny is coming.
  2. The liability squeeze is real and getting worse. Courts treat AI vendors as agents; contracts treat customers as guarantors. Until your vendor agreements match your legal exposure, you are carrying risk you cannot control.
  3. Good governance is the best insurance. The organizations that can explain their AI hiring tools, document their data sources, and show a record of oversight are the ones best positioned, regardless of how the legal landscape settles.

As courts clarify that AI vendors are agents and consumer reporting agencies, and as state AI employment laws proliferate, the gap between contractual protection and legal exposure will widen. Employers who assume their vendor agreements insulate them from this risk are in for an unpleasant surprise. The time to close that gap is now — before the class action lands.

For questions about AI hiring compliance, vendor contracting, FCRA exposure, and litigation preparedness, contact the Jones Walker Privacy, Data Strategy, and Artificial Intelligence team. Stay tuned for continued insights from the AI Law and Policy Navigator.

Related Professionals
  • Andrew R. Lee, Partner | D: 504.582.8664 | alee@joneswalker.com
  • Jason M. Loring, Partner | D: 404.870.7531 | jloring@joneswalker.com

Related Practices

  • Privacy, Data Strategy, and Artificial Intelligence
© 2026 Jones Walker LLP. All Rights Reserved.