Proactively protect your LLM from prompt injection by escaping all prompts that contain untrusted user input.
[%
and end with %]
, and are generally all uppercase. For example, [%DETAILS%]
or [%USER_INPUT%]
are variables.[%
and %]
), i.e. DETAILS
or USER_INPUT
. The value is the raw untrusted user input string, which may or may not contain a prompt injection attack.escapePrompt
method will return a string containing your escaped prompt, which is safe to pass directly to your LLM.