Resource · Compliance · RES-002
The AI Data Protection Impact Assessment template — Article 35 + EU AI Act overlay
A pre-deployment DPIA template specific to AI and agentic AI systems. Covers GDPR Article 35 obligations, the Datenschutzkonferenz Muss-Liste triggers, and the EU AI Act Article 26 deployer documentation that arrives 2 August 2026. Sized to be completed in one working session.
- Version
- v1.0
- Last reviewed
- 4 May 2026
- For
- Data Protection Officers, compliance leads, project managers
- Time
- 3–6 hours per AI deployment
A standard GDPR Data Protection Impact Assessment template asks the right questions for a CRM rollout. It asks the wrong questions, or none at all, for an AI deployment that ingests employee chat history, makes hiring recommendations, or autonomously books spend against a budget. This template is the AI overlay on the standard Article 35 DPIA, sized to be completed by a DPO and a project lead in one working session.
Three regulatory anchors converge on the AI DPIA:
- GDPR Article 35 still requires a DPIA before any processing “likely to result in a high risk to the rights and freedoms of natural persons.” The Datenschutzkonferenz Muss-Liste from 2024 explicitly lists AI systems that learn from employee data among the cases requiring a mandatory DPIA, and equivalent guidance from the Dutch Autoriteit Persoonsgegevens in 2025 mirrors that position.
- EU AI Act Article 26 (deployer obligations) takes effect 2 August 2026. Deployers of high-risk AI systems must operate them according to the instructions for use, monitor their operation, retain the automatically generated logs, and inform workers and their representatives before deployment. The DPIA is the document that captures the pre-deployment portion of these obligations.
- EU AI Act Article 27 requires a Fundamental Rights Impact Assessment (FRIA) for some deployer scenarios. The FRIA is not a substitute for the DPIA; it sits alongside. Most deployers conduct them as one combined exercise to avoid duplicate effort, with the FRIA captured as section 8 of the DPIA.
Use this template before vendor selection, not after. The most common DPIA failure observed in 2025 supervisory authority decisions is “DPIA conducted after deployment to retrofit the documentation,” which the supervisory authorities have consistently treated as non-compliance with the spirit of Article 35.
How to use this
Convene the people who can answer the questions in one room or on one call: the DPO, the business owner of the AI deployment, the IT lead responsible for integration, and (for EU deployments) the works-council representative. The questionnaire is structured so that roughly 70% of the answers come from the project lead, 20% from the IT lead, and 10% from the vendor. If you cannot answer a section without further information, mark it as a follow-up and continue.
The completed DPIA is a living document. Set a review trigger at three points: any material change to the AI system being deployed, any change to the categories of data being processed, and any extension of the use case beyond the originally documented scope. Calendar a routine review at 12 months regardless.
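Those review triggers lend themselves to a simple automated check. The sketch below is illustrative only: the record structure, field names, and function (`DpiaRecord`, `review_due`) are this sketch's own assumptions, not part of the template.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record of a completed DPIA; field names are illustrative.
@dataclass
class DpiaRecord:
    completed_on: date          # date the DPIA was signed off
    system_version: str         # AI system name/version as documented
    data_categories: frozenset  # categories of personal data processed
    documented_scope: str       # originally documented use case

def review_due(record: DpiaRecord, *, current_version: str,
               current_categories: frozenset, current_scope: str,
               today: date) -> list[str]:
    """Return the reasons a DPIA review is triggered, if any.

    Mirrors the three material triggers plus the routine 12-month review.
    """
    reasons = []
    if current_version != record.system_version:
        reasons.append("material change to the AI system")
    if current_categories != record.data_categories:
        reasons.append("change in categories of data processed")
    if current_scope != record.documented_scope:
        reasons.append("use case extended beyond documented scope")
    if today - record.completed_on >= timedelta(days=365):
        reasons.append("routine 12-month review")
    return reasons
```

In practice this would run against whatever system inventory or GRC tooling the organisation already maintains; the point is that the triggers are checkable facts, not judgement calls.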
Section 1: System characterisation
The first eight questions establish what is being deployed and what data it touches. Most subsequent sections inherit from these answers.
- Name and version of the AI system being deployed, including foundation model lineage where applicable.
- Vendor name and the contract reference under which the system is procured.
- Intended use case described in one paragraph. State the business problem being solved and the alternative approaches considered.
- Categories of personal data processed at training time (if vendor-provided model) and at inference time (customer prompts and outputs).
- Categories of data subject affected: customers, employees, prospects, members of the public, vulnerable groups (minors, patients, applicants).
- Geographic scope of deployment and the geographic regions where inference data is processed.
- Lawful basis under GDPR Article 6, with the legitimate-interests assessment attached where the basis is Article 6(1)(f).
- EU AI Act risk classification (prohibited, high-risk per Annex III, limited-risk transparency obligations, minimal-risk).
The eighth question is the gate that determines how much of the rest of the document applies. A high-risk Annex III system (which includes employment, education, essential services, law enforcement, and several other categories) carries the full deployer documentation burden under Article 26. A minimal-risk system carries only the GDPR posture. Most enterprise AI deployments in 2026 land in either Annex III high-risk or limited-risk transparency obligations.
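One way to make that gate explicit is a lookup from risk class to the template sections that apply. The class labels and the section mapping below are this sketch's own simplification of the position described above, not AI Act text.

```python
# Illustrative mapping from EU AI Act risk class to the sections of this
# template that apply. Section numbers follow the template; the class
# labels are this sketch's own shorthand.
SECTIONS_BY_RISK_CLASS = {
    "prohibited":   [],                        # deployment must not proceed
    "high_risk":    [1, 2, 3, 4, 5, 6, 7, 8],  # full Article 26 burden
    "limited_risk": [1, 2, 3, 4, 5, 6],        # transparency obligations
    "minimal_risk": [1, 2, 3, 4, 5, 6],        # GDPR posture only
}

def applicable_sections(risk_class: str) -> list[int]:
    """Return the template sections triggered by the Section 1 gate."""
    try:
        return SECTIONS_BY_RISK_CLASS[risk_class]
    except KeyError:
        raise ValueError(f"unknown risk class: {risk_class!r}")
```

Whether Section 8 (FRIA) applies to a given high-risk deployer is itself conditional under Article 27, so a real implementation would take the deployer type as a further input.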
Section 2: Necessity and proportionality
Article 35(7)(b) requires the DPIA to assess “the necessity and proportionality of the processing operations in relation to the purposes.” The supervisory authority decisions in 2025 against several large AI deployments turned on this question.
- State why personal data is necessary for the intended use case. Could the use case be achieved with anonymised, aggregated, or synthetic data? If yes, document why personal data was preferred.
- State why the AI approach is proportionate. Could a simpler rules-based system achieve the use case? Document the alternatives considered.
- Identify the data minimisation measures applied: which fields are excluded from prompts, which retention windows are shortened, which sub-processors are restricted.
- State the period for which personal data is retained at each layer (vendor inference logs, internal audit logs, training data if applicable).
- Identify the consent mechanism (if Article 6(1)(a) is the basis) or the legitimate-interests assessment outcome (if Article 6(1)(f)).
Section 3: Risks to rights and freedoms
Article 35(7)(c) requires assessment of the risks to data subjects. For AI systems, the risk surface differs from traditional processing.
- Risk of automated decision-making outcomes that materially affect the data subject (Article 22 territory). State the human-in-the-loop mechanism.
- Risk of model output leaking training data (memorisation attacks). State the mitigation, including any contractual non-memorisation commitments from the vendor.
- Risk of biased outcomes affecting protected characteristics. State the bias-testing methodology and the cadence at which testing recurs.
- Risk of confidentiality breach through prompt injection or jailbreaking. State the input sanitisation approach.
- Risk of accountability gap if the AI system makes an error. State the escalation, override, and appeal mechanisms available to data subjects.
- Risk of function creep: the system being used beyond its originally documented purpose. State the change-control process.
- Risk of vendor-side processing creep: the vendor expanding what they do with customer data. State the contract review cadence.
Section 4: Mitigation measures
Article 35(7)(d) requires the measures envisaged to address those risks.
- Technical measures: encryption at rest and in transit, access controls, sub-processor restrictions, retention windows, deletion mechanisms.
- Organisational measures: role-based access policies, training for system operators, escalation procedures for incidents, regular DPIA review cadence.
- Contractual measures: data processing agreement, sub-processor list with notification SLA, model deprecation notice period, kill-switch SLA, data portability on termination.
- Transparency measures: privacy notice updates, in-product disclosure of AI involvement, documentation accessible to data subjects.
- Data subject rights mechanisms: how access, rectification, erasure, objection, and (where applicable) Article 22 review requests are operationalised within the AI system’s data flow.
Section 5: Consultation
GDPR Article 35(2) requires DPO involvement; Article 35(9) recommends consulting data subjects “where appropriate.” For employee-facing AI systems, EU works-council law adds binding consultation requirements.
- Date and outcome of DPO consultation. Attach the DPO opinion.
- Date and outcome of works-council consultation, where applicable. Identify the jurisdiction-specific framework: BetrVG §87(1) point 6 (Germany), WOR Article 27 (Netherlands), CSE consultation (France), or equivalent.
- Where the system processes data of vulnerable groups (minors, patients, applicants), state the consultation conducted with representatives of those groups or with the relevant supervisory authority.
- Where consultation has been omitted, state the reason and the supervisory authority position relied upon.
The works-council question is the one that adds 9 to 12 months to mid-market European AI deployments when answered late. German Mittelstand deployments in particular fail this question repeatedly, because the broad BetrVG interpretation (any system that captures, processes, or analyses employee work activity, regardless of its primary purpose) catches Microsoft 365 Copilot, Notion AI, ClickUp AI, and similar tools that vendors do not flag as triggering co-determination.
Section 6: Residual risk and decision
Article 35(11) requires the DPIA to be reviewed at least when there is a change of the risk represented by the processing operations.
- Residual risk after mitigation: low, medium, high. State the rationale.
- Decision: proceed with deployment, proceed with conditions, do not proceed, refer to supervisory authority under Article 36 prior consultation.
- If proceed with conditions: enumerate the conditions and the verification mechanism for each.
- If refer to supervisory authority: state the date of referral and the consultation outcome before deployment commences.
Section 7: EU AI Act Article 26 deployer obligations
Active from 2 August 2026 for high-risk systems classified under Annex III. Covered here so the DPIA document doubles as evidence for the AI Act audit surface.
- Technical and organisational measures ensuring use according to vendor instructions for use.
- Assignment of human oversight to natural persons with the necessary competence, training, and authority.
- Monitoring of operation based on instructions for use, with serious-incident reporting procedure.
- Automated logging arrangements: log retention period, log content, access mechanism for supervisory authority.
- For employment, workers management, recruitment, education, and essential-services use cases: information provided to workers and their representatives prior to deployment, dated and acknowledged.
- Suspension procedure where the system is suspected of presenting risk per Article 79.
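The automated-logging arrangement above can be prototyped as one structured JSON line per event. Every field name in this sketch is an assumption to be replaced by the deployer's own log schema; the six-year default mirrors the retention period used elsewhere in this template, but the actual period must come from the DPIA itself.

```python
import json
from datetime import datetime, timezone

def make_audit_record(system_id: str, event: str, operator: str,
                      retention_days: int = 2190) -> str:
    """Build one JSON log line for the deployer's Article 26 log store.

    retention_days defaults to roughly six years (2190 days) as an
    illustrative placeholder, not a legal requirement.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,       # Section 1 system name/version
        "event": event,               # e.g. inference, override, incident
        "operator": operator,         # natural person with oversight duty
        "retention_days": retention_days,
    }
    return json.dumps(record, sort_keys=True)
```

Keeping the record flat and sorted makes the log store trivially diffable and queryable when a supervisory authority exercises the access mechanism documented above.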
Section 8: Fundamental Rights Impact Assessment (where applicable)
EU AI Act Article 27 requires a FRIA for deployers that are public bodies or private entities providing public services, and for some private deployers using high-risk systems for credit scoring or insurance pricing.
- Description of the deployer’s processes in which the system will be used.
- Period of time and frequency of intended use.
- Categories of natural persons and groups likely to be affected.
- Specific risks of harm likely to impact those categories or groups.
- Description of human oversight measures, internal governance arrangements, and complaint mechanisms.
- Notification of FRIA results to the market surveillance authority (where applicable).
Sign-off
The DPIA is signed off by:
- Data Protection Officer (date, signature, opinion attached or referenced)
- Business owner of the deployment (date, signature)
- IT lead responsible for integration (date, signature)
- Works-council representative where applicable (date, acknowledgement)
- Compliance committee or AI governance committee (date, decision reference)
The signed document is retained for the lifetime of the deployment plus six years, in line with the supervisory-authority guidance on AI-system documentation retention.
Versioning and review
This template is on a 90-day review cycle. Section 7 (Article 26 obligations) will be revised after the first quarter of EU AI Act enforcement actions in late 2026, when supervisory authority interpretations of “natural persons with the necessary competence, training, and authority” become observable. Section 5 (works-council consultation) will be revised when the first cross-jurisdictional case law on AI deployment notification clarifies the German BetrVG broad interpretation against the Dutch and French equivalents.
Spotted a missing question, an unclear field, or a section that does not stand up against a real DPIA conducted with this template? The corrections policy applies.
RES-002 · holding · since 4 May 2026 · Tracked at RES-002 →
The analysis behind this
- 90-days-eu-ai-act-enforcement-what-corpus-says · Reporting
- ai-mittelstand-betrvg-dsgvo-deployment · Operators
- ai-hiring-smb-eu-ai-act-annex-iii · Operators