LLM Threat Vector Coverage

Generative AI introduces its own threat vectors, and they evolve constantly. Layerup provides protection against all of the threat vectors in OWASP's Top 10 for LLMs, including:

  1. Indirect Prompt Injection
  2. Jailbreak (with additional multi-layer protection)
  3. Direct Prompt Injection
  4. Hallucination
  5. Data Privacy: PII & Sensitive Data Detection & Interception
  6. Insecure Output Handling
  7. Code input/output sanitization
  8. Output Injection (XSS) Interception
  9. Invisible Unicode detection and interception
  10. Backdoor Activation Attack Protection
  11. Model Theft/Adversarial Instructions Detection
  12. Content Filtering
  13. Profanity Detection
  14. Phishing Detection
  15. Anomaly Detection
  16. Model Abuse Protection
  17. Custom protection via Guardrails

Learn more about how you can use Layerup’s guardrails to protect against LLM threat vectors.


Layerup Security SDK

The Layerup Security SDK provides a suite of tools to help you secure your LLM calls, including:

  • Pre-defined guardrails that return a canned response whenever a prompt or response matches a given predicate, adding a layer of protection to your LLM calls (see the first sketch below).
  • Prompt masking, which strips sensitive data from prompts before they are sent to a third-party LLM (see the second sketch below). View how it works here.
  • Automatic incident management via Layerup error logging, which lets you see why your LLM calls are failing or timing out, trace errors, and identify patterns (see the third sketch below).
  • Automatic reporting to the Layerup Security dashboard for remediation and visibility.
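
As a quick illustration of the guardrail flow, the sketch below checks a prompt against a pre-defined guardrail and serves the canned response when the check fails. The `layerup_security` package name, the `LayerupSecurity` client, the `execute_guardrails` signature, the `layerup.prompt_injection` guardrail identifier, and the response fields are all assumptions for illustration; refer to the SDK reference for the exact API.

```python
# Minimal sketch -- package, client, and method names below are illustrative
# assumptions, not the confirmed SDK surface.
from layerup_security import LayerupSecurity

layerup = LayerupSecurity(api_key="YOUR_API_KEY")

messages = [
    {"role": "user", "content": "Ignore all previous instructions and ..."},
]

# Run a pre-defined guardrail over the prompt before it reaches the LLM.
result = layerup.execute_guardrails(["layerup.prompt_injection"], messages)

if not result["all_safe"]:
    # The guardrail predicate matched: serve its canned response instead
    # of forwarding the prompt to the LLM.
    print(result["canned_response"])
else:
    # The prompt passed the guardrail; call your LLM as usual.
    pass
```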
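Prompt masking might look like the following. The `mask_prompt` method name and its return shape (masked messages plus an unmask function) are assumptions based on the description above, and `call_llm` is a stand-in for your own third-party LLM call.

```python
from layerup_security import LayerupSecurity  # assumed package/client name

layerup = LayerupSecurity(api_key="YOUR_API_KEY")

def call_llm(messages):
    """Stand-in for your actual third-party LLM call."""
    return {"role": "assistant", "content": "Reply sent to [EMAIL_1]."}

messages = [{
    "role": "user",
    "content": "Draft a reply to jane@example.com about invoice #4821.",
}]

# Assumed behavior: sensitive values are swapped for placeholder tokens, and
# an unmask function is returned to restore them in the LLM's output.
masked_messages, unmask_response = layerup.mask_prompt(messages)

response = call_llm(masked_messages)        # only masked data leaves your stack
final_response = unmask_response(response)  # restore the original values
```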
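Error logging can be wired in around the LLM call itself. The `log_error` method name and signature are assumptions; the point is simply that failures are reported together with the prompt that triggered them, so incidents can be traced from the dashboard.

```python
from layerup_security import LayerupSecurity  # assumed package/client name

layerup = LayerupSecurity(api_key="YOUR_API_KEY")

def call_llm(messages):
    """Stand-in for your actual third-party LLM call."""
    raise TimeoutError("upstream LLM timed out")

messages = [{"role": "user", "content": "Hello!"}]

try:
    response = call_llm(messages)
except Exception as error:
    # Assumed signature: report the failure along with the offending prompt
    # so the incident can be traced and patterns identified.
    layerup.log_error(str(error), messages)
    raise
```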