Prompt injection is a security vulnerability where malicious input tricks an LLM into ignoring its original instructions and following attacker-controlled commands instead. It's similar to SQL injection but for language models.
This is a critical concern for any application that combines untrusted user input with system prompts.