What this risk is, and why it matters
AI-driven HR decisions (resume screening, interview scoring, promotion analytics, performance management) fall into the high-risk category under the EU AI Act and equivalent emerging regimes. Automated decisions about people now carry default expectations of disclosure, human review and bias auditing. Programs deployed under older, more permissive regulatory postures will increasingly fail current audits unless remediated.
Legal and regulatory framework
EU AI Act Annex III lists employment AI as high-risk, imposing documentation, transparency, human-oversight, accuracy and bias-management duties. NYC Local Law 144 requires bias audits and candidate notice before an automated employment decision tool (AEDT) is used. Parallel regimes (the Colorado AI Act, the Illinois AI Video Interview Act, EU member-state implementations) extend the compliance surface. Enforcement has begun, and early cases have made the cost of non-compliant deployment concrete.
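The LL 144 bias audit centres on impact ratios: each demographic category's selection rate divided by the rate of the most-selected category. A minimal sketch of that calculation, with illustrative category names and counts (not real audit data):

```python
# Minimal sketch of an impact-ratio calculation in the style of an
# NYC Local Law 144 bias audit. Categories and counts are illustrative.

def impact_ratios(selected: dict, total: dict) -> dict:
    """Map each category to (its selection rate / highest selection rate)."""
    rates = {cat: selected[cat] / total[cat] for cat in total}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical screening outcomes: group_a 120/400 = 0.30, group_b 75/300 = 0.25
selected = {"group_a": 120, "group_b": 75}
total = {"group_a": 400, "group_b": 300}

for cat, ratio in sorted(impact_ratios(selected, total).items()):
    print(f"{cat}: {ratio:.3f}")  # group_a: 1.000, group_b: 0.833
```

Many practitioners also flag ratios below 0.8 (the EEOC "four-fifths rule" heuristic), although LL 144 itself mandates publishing the ratios rather than imposing a pass/fail threshold.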
Typical scenarios and impact
Documented enforcement has produced civil penalties for undisclosed AEDT use, regulator-mandated audits with remediation timelines, candidate-led litigation against firms whose AI tools produced disparate-impact outcomes, and liability under disclosure regimes where AI use was misrepresented in transparency reports. Settlements have run from the mid six figures to the low seven figures, with audit and remediation costs often a multiple of that.
Mitigation framework and when to engage an expert
Maintain an AI-use inventory across the HR estate. Run pre-deployment Data Protection Impact Assessments and bias audits. Document human-review override procedures. Provide candidate notice and opt-out where required. Engage AI-and-employment counsel before any new deployment; engage a specialist AI-audit firm for annual bias-audit cycles; engage employment counsel for any AI-driven termination or promotion decision that produces an adverse outcome.
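The inventory step above can be sketched as a structured record per system, with a simple check for overdue bias audits. Field names and the 365-day cycle are illustrative assumptions, not a prescribed regulatory schema:

```python
# Illustrative AI-use inventory record for HR systems. Field names and the
# annual audit cycle are assumptions, not a regulatory schema.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class HRAISystem:
    name: str
    vendor: str
    decision_type: str            # e.g. "resume screening", "promotion analytics"
    candidate_notice_given: bool
    human_review_owner: str       # named role accountable for overrides
    last_bias_audit: Optional[date] = None

    def audit_overdue(self, today: date, max_age_days: int = 365) -> bool:
        """Flag systems with no audit on record or an audit older than the cycle."""
        if self.last_bias_audit is None:
            return True
        return (today - self.last_bias_audit).days > max_age_days

inventory = [
    HRAISystem("screener-x", "VendorCo", "resume screening", True,
               "HR Compliance Lead", date(2024, 1, 15)),
    HRAISystem("promo-model", "VendorCo", "promotion analytics", False,
               "HR Compliance Lead"),  # never audited -> flagged
]
overdue = [s.name for s in inventory if s.audit_overdue(date(2024, 6, 1))]
print(overdue)  # ['promo-model']
```

Keeping the inventory as data rather than prose makes the annual bias-audit cycle and candidate-notice gaps mechanically checkable ahead of a regulator request.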