Proactively protect your LLM from prompt injection by escaping all prompts that contain untrusted user input.
Templatize your prompt
[%
and end with %]
, and are generally all uppercase.Here is an example of how you can templatize your prompt.Call Layerup Prompt Escaping method
Receive escaped prompt and invoke LLM