Method: every claim tracked, reviewed every 30–90 days, marked Holding, Partial, or Not holding. Drafted by Claude; signed off by Peter.
OPS-047 · published 4 May 2026 · revised 4 May 2026 · 7 min read · in Governance and risk

AI hiring at small business scale: what EU AI Act Annex III actually means at four employees

Most SMB owners using ChatGPT or a hiring tool to screen CVs do not know they have just deployed a high-risk AI system under EU AI Act Annex III. The threshold does not scale with company size. Here is what holds up at the regulator audit and what does not.

Holding · reviewed 4 May 2026 · next review +62 days

If you run a 1-25 person business in 2026 and you have used ChatGPT, Claude, or a dedicated hiring tool to screen CVs, draft job descriptions, or rank candidates in the last six months, you have almost certainly deployed a high-risk AI system under EU AI Act Annex III without knowing it. The threshold does not scale with company size. The 2 August 2026 enforcement window applies to your four-person agency the same way it applies to a 4,000-person enterprise.

The honest read for the SMB owner: most hiring-AI use is not banned, but it is regulated. What survives the regulator audit is AI-assisted decisions with a documented human decision-maker, not AI-decided hiring. The line is operational, not technical. Most SMB owners are on the wrong side of it because nobody told them where the line was.

The Annex III hiring categories, named precisely

Annex III of Regulation (EU) 2024/1689 (the EU AI Act) lists the AI use cases classified as high-risk. Point 4 covers employment, workers management, and access to self-employment. Specifically:

“(a) AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;”

That language covers everything from “use ChatGPT to score CVs against a JD” to dedicated platforms like Workable AI, Greenhouse AI features, or BrightHire. The Annex does not require the system to be sophisticated to qualify; it requires the system to be used for the activity. A sole proprietor pasting a CV into ChatGPT and asking “rate this candidate 1-10” is operating a high-risk AI system under the Act.

Point 4(b) extends this to performance management, promotion decisions, task allocation, and termination decisions. If the AI tool that helps you hire is the same tool you use to write performance reviews, both uses are in scope.

What the national regulators have actually said

Three pieces of guidance to read before the next deployment:

The UK Information Commissioner’s Office (ICO) published AI in recruitment guidance in 2024 with case studies on what compliant AI hiring looks like. The recurring pattern: AI assists in screening, the named human makes the decision, the audit trail captures both. Where ICO investigated complaints, the failures were always at the “AI decided” boundary, not at the “AI assisted” boundary.

The Dutch Autoriteit Persoonsgegevens (AP) issued similar guidance in 2025 covering automated CV screening under both GDPR Article 22 (right not to be subject to a solely automated decision) and the EU AI Act overlay. The AP’s framework: the deployer must document the human-in-the-loop step at the point of any consequential candidate decision. Sending a rejection email triggered by an AI score with no human review is non-compliant.

The Italian Garante per la protezione dei dati personali ruled in 2024 against an Italian recruiting platform that used AI scoring to rank candidates without disclosure to the candidates and without a documented human-decision step. The fine was relatively small (€20,000) but the precedent is the editorial point: solely-automated scoring of natural persons is on the wrong side of GDPR Article 22, regardless of company size.

What changes between today and 2 August 2026

The EU AI Act enforcement window opens 2 August 2026 for Annex III deployments. The penalties under Article 99 for high-risk non-compliance run up to €15 million or 3 percent of worldwide annual turnover, whichever is higher; for SMEs, Article 99(6) applies whichever of the two figures is lower. For a 1-25 person business the absolute fines are smaller in nominal terms but proportionally severe; the more material risk is a regulator-mandated cessation order that takes the AI hiring tool offline mid-recruitment cycle.

The deeper enforcement risk is the candidate complaint. Under GDPR Article 22 every candidate has the right not to be subject to a solely automated decision and to obtain human intervention; under Articles 13-15 they have the right to know that automated decision-making was used. SMB owners who have no documented review process discover the gap only when the first rejected candidate sends a Subject Access Request.

The defensible posture: AI-assisted, human-decided

Three concrete operational items survive both the EU AI Act and GDPR Article 22 audit at SMB scale:

Document the human decision-maker. For every consequential decision (interview-or-not, offer-or-not, reject-or-not) name the human who decided. The documentation can be a Notion entry, a spreadsheet row, or an email chain. Format does not matter; the named human and the dated decision do.

Keep the AI score as advisory, not decisional. If a candidate scores 4/10 on an AI rubric, the human reviewing has to be able to override that score and document the override. A workflow that auto-rejects below a threshold without human review is solely-automated within the meaning of Article 22.

Retain the AI-output record. When the regulator audit lands or the candidate Subject Access Request comes in, the deployer needs to show what the AI said and what the human did with it. That means saving the AI output (CV summary, candidate ranking, recommendation) alongside the human decision. SMB owners who run hiring through ChatGPT without saving the chat are missing the artefact that demonstrates compliance.
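The three items above collapse into one record per consequential decision. A minimal sketch in Python follows; the class name, field names, and the JSON-lines format are illustrative assumptions, not anything the Act or the regulators mandate, and any spreadsheet or Notion table carrying the same fields would serve equally well:

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class HiringDecisionRecord:
    """One row per consequential candidate decision (interview / offer / reject)."""
    candidate_id: str
    decision: str                    # e.g. "interview", "reject", "offer"
    decided_by: str                  # the named human decision-maker
    decided_on: str                  # ISO date of the human decision
    ai_output: str                   # verbatim AI output: summary, ranking, recommendation
    ai_score: Optional[float] = None # advisory only, never decisional
    override_note: str = ""          # filled in when the human departs from the AI score

    def __post_init__(self):
        # A record with no named human is exactly the Article 22 gap this log closes.
        if not self.decided_by.strip():
            raise ValueError("every consequential decision needs a named human decision-maker")

def append_record(path: str, record: HiringDecisionRecord) -> None:
    """Append one decision as a JSON line; the accumulated file is the audit artefact."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

The design point is the constructor check: the structure refuses to store a decision without a named human, which is the operational line the ICO and AP guidance both draw.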

Tool comparison for the SMB-scale hiring stack

The 2026 SMB hiring-AI category clusters into three patterns, each with a different procurement question:

General-purpose LLMs (ChatGPT, Claude, Gemini) for ad-hoc screening. Cheapest entry point; lowest compliance support. Each platform’s enterprise tier (ChatGPT Enterprise, Claude Enterprise, Gemini for Workspace) ships data residency commitments but neither the consumer nor the enterprise tier ships an EU AI Act Annex III compliance package out of the box. The deployer is responsible for the documentation layer entirely.

Dedicated hiring platforms with AI features (Workable, Greenhouse, Lever). Higher integration cost; some compliance support. Workable ships EU data residency and documents its AI screening features as advisory rather than decisional. Greenhouse and Lever take a similar posture. None of the three commits in writing to Article 11 technical documentation delivery as a procurement deliverable; SMB buyers should ask for it explicitly at contract signing.

AI-first recruitment platforms (Hireguide, BrightHire, Metaview). Tightest integration with hiring workflow; mixed compliance posture. These platforms specifically pitch AI scoring as their differentiator, which means the Article 22 surface is more exposed. Read the platform’s published documentation on how the human-in-the-loop step is enforced; if it is enforced by SMB-side discipline rather than platform-side workflow, the compliance gap stays with the deployer.

What to do this week

Five items, each completable in under one hour:

  1. List every AI tool you have used in hiring in the last six months. Include consumer-grade tools (ChatGPT, Claude). The inventory is the input the rest of the work runs against.
  2. For each tool, name the human who made the consequential decision the AI fed into. If you cannot name a human for a specific decision, that is the gap.
  3. Save the AI output for every active hiring decision going forward. A folder of screenshots is sufficient; a structured tool is better.
  4. Add one line to every job advertisement disclosing AI use in screening. The exact language matters less than the disclosure existing; ICO’s example phrasing is a defensible starting point.
  5. Read AM-127 on the 2 August 2026 deadline and the 14-field audit-evidence template (AM-046). Both apply at SMB scale even though the framing is enterprise.

Verdict

OPS-047 holds. The Annex III hiring categories are settled (point 4 of the published Annex). The ICO, AP, and Garante guidance is on the record. The 2 August 2026 enforcement timeline is fixed. The defensible AI-assisted posture is operational, not technical, which means it is achievable at SMB scale. The risk that downgrades the verdict in Q3 2026 is a national competent authority issuing SMB-specific guidance that materially raises the documentation bar above what a four-person business can sustain; nothing in the May 2026 published pipeline suggests that, but the cadence is sixty days because the Italian and French regulators have been the most active on hiring-AI cases.

This piece pairs with AM-120 on works councils and EU AI rollout at the multinational end of the same labour-relations surface, with OPS-038 on collective agreements at four employees at the SMB labour end, and with AM-127 on the 90-day deadline on the broader regulatory window.


OPS-047 · holding since 4 May 2026 · sibling: AM-120 · register: Reporting

