Glossary · Industry term

High-risk AI system

Also known as: Annex III system, EU AI Act high-risk, high-risk AI

Under the EU AI Act, an AI system that falls into one of the categories listed in Annex III (or that serves as a safety component of a regulated product) inherits the full set of high-risk obligations under Articles 8–17: a risk-management system, data governance and documentation, technical documentation, lifecycle logging (Article 12), human oversight, and accuracy and robustness measures, together with conformity assessment and post-market monitoring. These obligations bind providers and deployers of in-scope systems from the August 2026 enforcement deadline.

How this publication uses it

The classification mistake to avoid is treating an agent as 'general-purpose' because the vendor markets it as productivity tooling when its outputs feed into hiring, credit, infrastructure, education, or essential-services decisions. Annex III is an outcome test, not a marketing-category test: classification follows what the system's outputs decide, not how the vendor positions the product. A deployment classified as general-purpose that turns out to be high-risk on operational scope will need a topology change inside the enforcement window, and that is the most-cited 2026 procurement remediation cost.
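The outcome-test logic above can be sketched as a screening check. This is an illustrative sketch only, not legal tooling: the category labels and the `is_high_risk` helper are hypothetical names invented here, and a real screening would work from the actual Annex III text.

```python
# Hypothetical screening sketch: Annex III is an outcome test, so a deployment
# is screened by where its outputs land, not by the vendor's product category.
# The area labels below are illustrative shorthand, not Annex III wording.
ANNEX_III_OUTCOME_AREAS = {
    "hiring", "credit", "infrastructure", "education", "essential-services",
}

def is_high_risk(output_decision_areas: set[str],
                 vendor_category: str = "general-purpose") -> bool:
    """Flag a deployment as potentially high-risk if any of its outputs
    feed decisions in an Annex III area. The vendor's marketing category
    is deliberately ignored."""
    return bool(output_decision_areas & ANNEX_III_OUTCOME_AREAS)

# A 'productivity tool' whose outputs feed hiring decisions is in scope:
print(is_high_risk({"meeting-notes", "hiring"}))  # True
print(is_high_risk({"meeting-notes"}))            # False
```

The point the sketch encodes is that `vendor_category` never appears in the return expression: only the operational scope of the outputs determines the classification.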
