Your Data Rights: What the EU AI Act and 2026 Regulations Mean for Job Seekers

On February 2, 2025, the first enforcement provisions of the EU Artificial Intelligence Act took effect. Among the newly prohibited practices: using AI for emotion recognition in workplaces and social scoring of individuals.
On August 2, 2026 — six months from now — the core obligations for high-risk AI systems become enforceable. AI used for recruitment, candidate screening, interview analysis, and employment decisions is explicitly classified as high-risk.
The penalties? For high-risk AI violations: up to €15 million or 3% of global annual revenue, whichever is higher. (The higher €35M / 7% ceiling applies only to outright prohibited practices such as social scoring.)
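The "whichever is higher" rule means the ceiling scales with company size. A minimal sketch, using hypothetical revenue figures (not real fines or real companies):

```python
# Illustrative sketch of the EU AI Act's penalty ceiling for high-risk
# violations: the cap is the GREATER of EUR 15 million and 3% of global
# annual revenue. Revenue figures below are hypothetical examples.

def high_risk_fine_cap(global_annual_revenue_eur: float) -> float:
    """Return the maximum possible fine for a high-risk AI violation."""
    return max(15_000_000.0, 0.03 * global_annual_revenue_eur)

# For a company with EUR 2 billion in revenue, 3% (EUR 60M) exceeds
# the EUR 15M floor, so the percentage-based ceiling applies.
print(high_risk_fine_cap(2_000_000_000))  # 60000000.0

# For a smaller company with EUR 100 million in revenue, 3% is only
# EUR 3M, so the EUR 15M floor applies instead.
print(high_risk_fine_cap(100_000_000))  # 15000000.0
```

For large enterprises, the percentage term dominates; for small ones, the fixed floor does.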
The regulatory landscape for AI in hiring has shifted from voluntary guidelines to binding law. If you are a job seeker in 2026, you have rights you did not have two years ago. This article explains what they are, how to exercise them, and what they mean in practice.
The EU AI Act: What Changed and When
The EU AI Act is the world's first comprehensive legal framework specifically designed to regulate artificial intelligence. It entered into force on August 1, 2024, and is being implemented in phases:
Already in Effect (February 2, 2025)
The following practices are now banned in the EU — including in hiring contexts:
- Emotion recognition in the workplace. AI systems that analyze your facial expressions, vocal tone, or micro-expressions during interviews to infer your emotional state are prohibited. This effectively outlaws the most controversial features of video interview platforms.
- Social scoring. Systems that rate individuals based on their social behavior or predicted personality traits for the purpose of unfavorable treatment are banned.
- Subliminal manipulation. AI designed to alter a person's behavior in ways they cannot perceive — for example, subconsciously steering candidates toward accepting lower salary offers.
- Biometric categorization for sensitive traits. AI that attempts to infer race, political opinions, religion, or sexual orientation from biometric data (face, voice, gait) is prohibited.
If you have completed a video interview in the EU since February 2025 and received an automated assessment that references your "emotional engagement" or "facial confidence score," that assessment may already be in violation of the law.
Coming August 2, 2026: Full High-Risk Obligations
This is the major deadline. From August 2026, any AI system used in recruitment or employment decisions must comply with:
- Mandatory human oversight. Every AI-assisted hiring decision must include meaningful human review. A recruiter clicking "approve" on a list generated by an algorithm does not count — there must be genuine evaluation.
- Technical documentation and logging. Employers must maintain detailed records of how their AI systems work, what data they process, and how decisions are made.
- Bias testing and monitoring. Regular audits for discriminatory outcomes across protected characteristics (gender, race, age, disability).
- Transparency to candidates. You must be informed that AI is being used to evaluate you, what data is being processed, and how it influences the decision.
- Right to explanation. If an AI system contributes to your rejection, you have the right to understand how and why.
What About Non-EU Companies?
The Act has extraterritorial reach. If a U.S. company recruits EU-based candidates, uses global HR tools that process EU candidates' data, or deploys AI whose output is used in the EU, the Act applies. A U.S. company hiring a remote employee in Berlin is subject to the same obligations as one headquartered in Brussels.
Beyond the EU: The Global Regulatory Patchwork
The EU is not alone. AI hiring regulation is accelerating worldwide:
United States
- NYC Local Law 144 (effective since July 2023): Requires annual bias audits for automated employment decision tools (AEDTs) used in New York City. Employers must publish audit results and notify candidates when AEDTs are used.
- Illinois BIPA (Biometric Information Privacy Act): Requires written consent before collecting biometric data (including facial geometry from video interviews). Violations carry penalties of $1,000–$5,000 per incident.
- Colorado AI Act (signed 2024, effective 2026): Requires impact assessments for "high-risk" AI systems, including those used in employment. Developers and deployers share accountability.
- Several other states (California, Texas, Maryland, Washington) have enacted or are considering AI-specific hiring regulations.
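The bias audits required by NYC Local Law 144 center on a simple metric: the impact ratio, the selection rate for each demographic category divided by the selection rate of the most-selected category. A minimal sketch, using invented numbers (not data from any real audit):

```python
# Illustrative sketch of the "impact ratio" metric used in NYC Local
# Law 144 bias audits: each group's selection rate divided by the
# selection rate of the most-selected group.
# All figures below are hypothetical, not from any real audit.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical audit data: {group: (selected, total applicants)}
audit_data = {
    "group_a": (48, 120),   # rate 0.40
    "group_b": (30, 100),   # rate 0.30
    "group_c": (12, 60),    # rate 0.20
}

rates = {g: selection_rate(s, n) for g, (s, n) in audit_data.items()}
highest_rate = max(rates.values())
impact_ratios = {g: r / highest_rate for g, r in rates.items()}

for group in sorted(impact_ratios):
    print(f"{group}: rate {rates[group]:.2f}, "
          f"impact ratio {impact_ratios[group]:.2f}")
```

A low impact ratio for one group relative to others is what an audit would flag as a potential adverse impact; it is this ratio, per category, that employers must publish.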
United Kingdom
Post-Brexit, the UK is developing its own AI framework — currently sector-led rather than comprehensive legislation. The ICO (Information Commissioner's Office) enforces GDPR-derived data protection rules for AI in hiring.
China
China's AI regulations (effective since 2023) require algorithm registration, fairness assessments, and user notification for algorithmic decision-making systems — including those used in employment.
Your Rights as a Candidate in 2026: A Practical Guide
Here is what you can actually do — today — when you encounter AI in a hiring process:
1. The Right to Know
What it means: Before any AI-powered assessment, the employer must tell you that AI is being used, what data it will collect, and how it will influence the decision.
How to exercise it: If a company sends you a link to a video interview platform or gamified assessment without disclosing AI involvement, ask directly:
"Before I proceed, can you confirm whether AI or automated tools will be used to evaluate my responses? And if so, what data will be collected and how will it factor into the hiring decision?"
This is not confrontational — it is your legal right. Companies that are compliant will have straightforward answers. Companies that are evasive may be non-compliant.
2. The Right to Explanation
What it means: If you are rejected by a process involving AI evaluation, you can request an explanation of how the AI contributed to the decision.
How to exercise it: After a rejection:
"I understand I was not selected for this role. As the process included AI-powered assessment tools, I am exercising my right to understand how the automated evaluation influenced this decision. Specifically, I would like to know which factors the system flagged and how they were weighted."
Under GDPR Article 22 and the incoming EU AI Act provisions, companies are required to provide meaningful information — not just a generic "you did not meet our criteria."
3. The Right to Human Review
What it means: You can request that a human decision-maker review the AI's assessment of you, and that this review is substantive, not performative.
How to exercise it: Ask, in writing, for a qualified person to re-evaluate your application rather than simply confirm the algorithm's output. This is particularly valuable when you believe the AI assessment does not reflect your actual capabilities: for example, if you are neurodivergent and the system's "engagement" metrics penalized your communication style.
4. The Right to Erasure
What it means: After the hiring process concludes, you can request that the company delete your video, audio, biometric, and assessment data.
How to exercise it: Send a formal data deletion request to the company's DPO (Data Protection Officer) or privacy team. Under GDPR, they must respond within one month (extendable for complex requests).
"I am requesting the deletion of all personal data collected during my recent application process, including video recordings, assessment results, and any derived biometric or behavioral data. Please confirm deletion within one month, as required by GDPR Articles 12 and 17."
Practical Self-Protection
Beyond your legal rights, here are concrete steps to protect yourself:
Read the privacy policy before you record. Look for "Data Retention" and "Third-Party Sharing" clauses. If the company retains your video data for 2+ years or shares it with unnamed third parties, that is a red flag.
Use a dedicated email for job applications. This limits the data linkage between your professional job search and your personal digital identity.
Be aware of your background. In video interviews, ensure no personal information is visible — family photos, mail, prescriptions, religious items. AI systems can detect and log objects in your background. Even if they do not use that information for hiring decisions, you do not want it in their dataset.
Ask about data processing. Before any video interview, it is reasonable to ask: "How long is my video data stored? Is it shared with third parties? What AI analysis is performed on it?" Companies that have good practices will answer these questions readily.
How JobInterview.live Approaches Data
We believe that candidates should practice and improve without surveillance. Here is our approach:
- Data minimization. We analyze your practice sessions to help you improve. We do not build behavioral profiles for sale to employers.
- No biometric harvesting. We do not extract or store facial geometry, vocal biomarkers, or emotion inference data.
- You own your data. Practice sessions are yours. You can delete them at any time.
- Transparency. Our AI tells you exactly what it is measuring and why — because the purpose of our tool is to make you better, not to judge you.
The difference between a practice tool and a screening tool is who benefits from the data. In our case, the answer is you.
Read Our Full Privacy Policy →
The Bigger Picture
The era of "black box" AI hiring is ending. Not because companies voluntarily became ethical — but because regulators forced the issue. The EU AI Act, NYC Local Law 144, Illinois BIPA, and the Colorado AI Act are collectively creating a world where:
- Candidates know when AI is evaluating them
- Companies must prove their AI is not discriminatory
- Rejected candidates can ask why
- Biometric data has legal protections
These rights are new. They are imperfect. Enforcement will be uneven. But they exist — and exercising them is how they become meaningful.
You are not just a data point in someone's pipeline. You are a professional with legal rights, and the law is increasingly on your side.
Sources
- EU AI Act — Official implementation timeline (artificialintelligenceact.eu)
- European Commission — Digital Strategy: Regulatory Framework for AI
- Greenberg Traurig LLP — "Use of AI in Recruitment and Hiring: Considerations for EU and US Companies" (May 2025)
- Ogletree Deakins — "The EU AI Act Is Here: What It Means for U.S. Employers" (2024)
- NYC Local Law 144 — Automated Employment Decision Tools (2023)
- Illinois Biometric Information Privacy Act (BIPA)
- Colorado AI Act (2024, effective 2026)
Published: February 2026 | Reading Time: 16 minutes