LLM Threat Vector Coverage
There are plenty of Gen AI-specific threat vectors that are constantly evolving. Layerup provides protection against all the threat vectors mentioned on OWASP’s Top 10 for LLMs. This includes:- Indirect Prompt Injection
- Jailbreak (with additional multi-layer protection)
- Direct Prompt Injection
- Hallucination
- Data Privacy: PII & Sensitive Data Detection & Interception
- Insecure Output Handling
- Code input/output santization
- Output Injection (XSS) Interception
- Invisible Unicode detection and interception
- Code input/output santization
- Backdoor Activation Attack Protection
- Model Theft/Adversarial Instructions Detection
- Content Filtering
- Profanity Detection
- Phishing Detection
- Anomaly Detection
- Model Abuse Protection
- Custom protection via Guardrails
Threat Vector Coverage
Learn more about how you can use Layerup’s guardrails to protect against LLM threat vectors.
Layerup Security SDK
The Layerup Security SDK has a myriad of tools to help you secure your LLM calls, including:- Execute pre-defined guardrails that allow you to send canned responses when prompts or responses meet a certain predicate, adding a layer of protection to your LLM calls.
- The ability to mask prompts to strip sensitive data before being sent to a third-party LLM. View how it works here.
- Automatically perform incident management using Layerup error logging. This allows you to seamlessly view insights as to why your LLM calls are failing or timing out, trace errors, and identify patterns.
- Automatic reporting to the Layerup Security dashboard for remediation and visibility.