← Back

prompt injection

technique

A class of security vulnerabilities in applications built on top of Large Language Models (LLMs) where malicious inputs can manipulate the model's behavior.

No approved mentions yet.