Prompt Escaping
Proactively protect your LLM from prompt injection by escaping all prompts that contain untrusted user input.
Templatize your prompt
If you currently inject any untrusted user input into your prompt string, templatize your prompt. For each place that your prompt requires an untrusted user input, replace it with a variable. Variables must start with [%
and end with %]
, and are generally all uppercase.
Here is an example of how you can templatize your prompt.
Once you’ve templatized your prompt string, specify variable values in key-value pairs.
Call Layerup Prompt Escaping method
Now that you have segmented your trusted prompt from your untrusted input variables, Layerup is able to use prompt escaping technology to secure your prompt:
- Adding pre-input and post-input flags to quarantine the untrusted user input
- Searching for prompt injection attempts by removing spoofed pre-input and post-input flag strings in the untrusted user input.
Receive escaped prompt and invoke LLM
Layerup returns an escaped prompt with protections applied. This prompt is safe to use when invoking your LLM.
Here is an example of an escaped prompt that was generated from the templatized prompt above:
Now that your user input has been properly quarantined, your LLM is easily able to contrast the user input from the trusted input, and will abide by your original prompt accordingly.