Execute pre-defined guardrails that allow you to send canned responses when prompts or responses meet a certain predicate.
- `layerup.hallucination` - detect & intercept hallucination in your LLM response before it is sent to the end user
- `layerup.prompt_injection` - detect & intercept prompt injection in your user prompt before it is processed by the LLM
- `layerup.jailbreaking` - detect & intercept jailbreaking attempts in your user prompt before it is sent to a third-party LLM
- `layerup.sensitive_data` - detect & intercept sensitive data in your user prompt before it is sent to a third-party LLM
- `layerup.abuse` - detect & intercept abuse at the project, scope, customer, or customer-scope level for any LLM request
- `layerup.content_moderation` - detect & intercept harmful content returned by your LLM before it is sent to the user
- `layerup.phishing` - detect & intercept phishing content returned by your LLM before it is sent to the user
- `layerup.invisible_unicode` - detect & intercept invisible unicode in your user prompt before it is sent to a third-party LLM
- `layerup.code_detection` - detect & intercept computer code in your user prompt or LLM response before it is used
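As a rough usage sketch (not copied from the SDK reference): the package name `@layerup/layerup-security`, the `LayerupSecurity` constructor, and the argument order of `executeGuardrails` below are assumptions and may differ in your installed version; only the guardrail names come from the list above.

```typescript
// A minimal sketch, assuming a Node SDK shaped like this. The import path,
// constructor, and executeGuardrails argument order are assumptions, not
// the documented API.
import { LayerupSecurity } from '@layerup/layerup-security';

const layerup = new LayerupSecurity({ apiKey: process.env.LAYERUP_API_KEY! });

const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Summarize the attached contract.' },
];

// Screen the user prompt with prompt-side guardrails before it reaches the LLM.
const securityResponse = await layerup.executeGuardrails(
  ['layerup.prompt_injection', 'layerup.sensitive_data', 'layerup.invisible_unicode'],
  messages,
);
```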
The `executeGuardrails` method will return a `Promise` that resolves to an object with the following fields:

- `all_safe` - whether every guardrail passed, either `true` or `false`. Note: if the response is `false`, we strongly advise against proceeding with your application LLM call.
- `canned_response` - the canned response to send back to the user, otherwise `null`. If there is a valid canned response specified on the dashboard, this object will have 2 fields:
  - `role` - will always be `"assistant"`
  - `message` - the canned response specified on the dashboard
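Continuing the sketch above, one way to act on the resolved result: abort the LLM call when the safety flag is `false` and fall back to the dashboard's canned response if one exists. The `GuardrailResult` type, the `handleGuardrailResult` helper, and the exact field shapes are illustrative assumptions, not part of the SDK.

```typescript
// Assumed shape of the resolved guardrail result; verify field names against
// the object your SDK version actually returns.
type GuardrailResult = {
  all_safe: boolean;
  canned_response: { role: 'assistant'; message: string } | null;
};

// Decide what to send the user based on the guardrail result.
function handleGuardrailResult(result: GuardrailResult) {
  if (!result.all_safe) {
    // A guardrail fired: do not proceed with the application LLM call.
    if (result.canned_response) {
      // Return the dashboard-configured canned response to the user instead.
      return {
        role: result.canned_response.role,       // always "assistant"
        content: result.canned_response.message, // canned text from the dashboard
      };
    }
    // No canned response configured: surface the violation to the caller.
    throw new Error('Guardrail violation with no canned response configured');
  }
  // All guardrails passed: the caller may continue with its normal LLM call.
  return null;
}
```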