← Back
prompt injection
technique
A class of security vulnerabilities in applications built on top of Large Language Models (LLMs) where malicious inputs can manipulate the model's behavior.
Topics
No approved mentions yet.