Skip to content
Glossary · Industry term

Prompt injection

Also known as: LLM01, direct prompt injection

An attack where adversary-controlled text is interpreted by an LLM as instructions, overriding the developer's intent. Catalogued as LLM01 in OWASP's Top 10 for LLM Applications. The leading exploit class against agentic systems.

How this publication uses it

Every enterprise agent deployment must threat-model prompt injection — it is not theoretical. MTTD-for-Agents includes a tripwire for refusal-rate anomalies precisely because successful injections often surface as a sudden change in tool-use frequency or refusal patterns before the operator notices the data leak.

Related frameworks

Articles that analyse this term

Primary sources

Vigil · 70 reviewed