Prompt injection attacks exploit the fundamental design of LLMs, which inherently mix "instructions" and "data" within prompts. This mixing violates a core principle of computer security: the separation of code and data. In traditional computing environments, great efforts are made to keep these elements distinct, as