# Layerup Security

## Docs

- [Introduction](https://docs.uselayerup.com/introduction.md): Layerup empowers you to develop secure, data-driven Generative AI applications with robust **Observability**, **Evaluations**, and **Security**. Instead of using LLMs on vibes, you can use Layerup to make data-driven decisions while securing against risks such as prompt injection or hallucinations.…
- [Model Abuse](https://docs.uselayerup.com/knowledgebase/abuse.md): Layerup Security employs a custom algorithm to detect and prevent abuse at multiple levels using this guardrail.
- [Code Detection](https://docs.uselayerup.com/knowledgebase/code-detection.md): Layerup Security employs an advanced model to detect the presence of code within LLM responses, excluding markdown or data formats like JSON.
- [Content Moderation](https://docs.uselayerup.com/knowledgebase/content-moderation.md): Layerup Security employs a custom model to detect and moderate harmful content in LLM responses, ensuring safe and respectful interactions.
- [Hallucination](https://docs.uselayerup.com/knowledgebase/hallucination.md): Layerup Security uses a custom-trained hallucination model based on curated proprietary datasets to detect hallucinations.
- [Introduction](https://docs.uselayerup.com/knowledgebase/introduction.md): Welcome to the Layerup Security knowledge base. This section provides an overview of our security features, including guardrails for jailbreaking, prompt injection, abuse, and sensitive data protection.
- [Invisible Unicode](https://docs.uselayerup.com/knowledgebase/invisible-unicode.md): Layerup Security utilizes a custom algorithm to detect invisible Unicode characters that could be used to manipulate LLM prompts without user awareness.
- [Jailbreaking](https://docs.uselayerup.com/knowledgebase/jailbreaking.md): Layerup Security employs a custom model to detect and prevent jailbreaking attempts in LLM responses, ensuring the responsible use of language models.
- [Logging and Monitoring](https://docs.uselayerup.com/knowledgebase/logging-and-monitoring.md): Layerup Security integrates advanced logging and monitoring mechanisms to oversee the operations of LLMs, ensuring transparency and accountability in user interactions.
- [Phishing Detection](https://docs.uselayerup.com/knowledgebase/phishing.md): Layerup Security employs a custom-trained model to detect phishing attempts within LLM responses, ensuring the safety and integrity of interactions.
- [PII & Sensitive Data Masking](https://docs.uselayerup.com/knowledgebase/pii-sensitive-data.md): When Layerup Security masking receives PII or sensitive data, we ensure that none of it is ever sent to a third-party LLM, with no interruption in service.
- [Prompt Escaping](https://docs.uselayerup.com/knowledgebase/prompt-escaping.md): Proactively protect your LLM from prompt injection by escaping all prompts that contain untrusted user input.
- [Prompt Injection](https://docs.uselayerup.com/knowledgebase/prompt-injection.md): Layerup Security employs a custom model to detect and prevent prompt injection in LLM responses, ensuring the integrity of user interactions.
- [Sensitive Data](https://docs.uselayerup.com/knowledgebase/sensitive-data.md): Layerup Security employs a custom model to detect and prevent the exposure of sensitive data in LLM responses, safeguarding user and company information.
- [Quickstart](https://docs.uselayerup.com/quickstart.md): Start securing your AI applications in under 5 minutes!
- [Escape Prompt](https://docs.uselayerup.com/sdk/escape-prompt.md): Proactively protect your LLM from prompt injection by escaping all prompts that contain untrusted user input.
- [Execute Guardrails](https://docs.uselayerup.com/sdk/execute-guardrails.md): Execute pre-defined guardrails that allow you to send canned responses when prompts or responses meet a certain predicate.
- [Introduction](https://docs.uselayerup.com/sdk/introduction.md): The Layerup Security SDK can be used to intercept threats in both prompts and responses, mask PII & sensitive data, and handle incident management for LLM calls.
- [Log Error](https://docs.uselayerup.com/sdk/log-error.md): Log LLM errors to seamlessly gain insights into why your LLM calls are failing or timing out, trace errors, and identify patterns.
- [Mask Prompt](https://docs.uselayerup.com/sdk/mask-prompt.md): Mask sensitive information in your prompts before sending them to a third-party LLM.

## OpenAPI Specs

- [openapi](https://docs.uselayerup.com/api-reference/openapi.json)

## Optional

- [Contact Support](https://calendly.com/arnav-layerupai/30min)
- [Case Studies](https://uselayerup.com/)
- [Blog](https://uselayerup.com/blog)
- [Educational Videos](https://www.youtube.com/@layerupai)